pmb was talking recently about how the taboo on discussing salary hurts workers at the negotiating table, because employers have more data than they do. That reminded me of an idea I had a while ago.
An Advogato-style trust metric might mitigate problems of disinformation. Input to the trust metric could include both the explicit social graph of friend/contact relationships and the implicit graph formed by Gmail-style invitations. (Carefully-metered invitations would also limit users' ability to submit bad information en masse.) You could achieve some level of confidence in your view of the data as long as you trust your friends to some degree, their friends somewhat less, and so on.
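To make the attenuation idea concrete: Advogato's real metric is computed as a network flow over a capacity-constrained graph, which is more involved, but the "trust your friends, their friends somewhat less" intuition can be sketched as a simple breadth-first propagation where trust decays by a damping factor at each hop. The graph shape, damping factor, and threshold below are all illustrative assumptions, not anything from Advogato itself:

```python
from collections import deque

def trust_scores(graph, seed, damping=0.5, threshold=0.01):
    """Breadth-first trust propagation: each hop away from `seed`
    multiplies trust by `damping`. A deliberate simplification of
    Advogato's flow-based metric, just to illustrate attenuation
    with social distance."""
    scores = {seed: 1.0}
    queue = deque([seed])
    while queue:
        node = queue.popleft()
        next_trust = scores[node] * damping
        if next_trust < threshold:
            continue  # too far from the seed to count
        for friend in graph.get(node, ()):
            if friend not in scores:  # keep the first (shortest) path's score
                scores[friend] = next_trust
                queue.append(friend)
    return scores

# Hypothetical friend/invitation graph for illustration.
graph = {
    "me": ["alice", "bob"],
    "alice": ["carol"],
    "bob": ["carol", "dave"],
    "carol": ["eve"],
}
print(trust_scores(graph, "me"))
# → {'me': 1.0, 'alice': 0.5, 'bob': 0.5, 'carol': 0.25, 'dave': 0.25, 'eve': 0.125}
```

One consequence of even this crude version: a user trying to inject bad salary data en masse would need many accounts close to you in the graph, which is exactly what metered invitations make expensive.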
Ed Felten points out that privacy promises are difficult to rely on: "Even though a company might make a contractual promise to honor some privacy rules, customers won’t have the time or training to verify that the promise is enforceable and free of loopholes. [...] But even if the contract is legally bulletproof, the company might still violate it."
That's one reason I favor technical measures that minimize the opportunity for abuse.
Bruce Schneier wrote something relevant at Wired.com recently and reposted it to his blog. He includes a link to a response from David Brin (since The Transparent Society was cited as a source for the position Schneier discusses); the response is interesting, but I don't find it entirely compelling.