Ryan Calo writes:
UPDATE: As told to Jules Polonetsky over at The Future of Privacy Forum, Capital One was engaging in “totally random” rate changes that were not related to browser type. On the other hand, according to the Wall Street Journal, Capital One was at one point using [x+1] data to calibrate what credit card offers to show.
The other day, I suggested that the facts of the Clementi suicide may perfectly illustrate why no actual transfer of information is necessary for someone to suffer a severe subjective privacy harm. (Thanks to TechDirt and PogoWasRight for the write ups.)
Just now I learned about an allegation against Capital One that the company offered someone a different lending rate on the basis of what browser he used (Chrome vs. Firefox). A similar allegation was made against Amazon, which apparently used cookies for a time to calibrate the price of DVDs.
Here you have a clear objective privacy harm: your information (browser type) is being used adversely in a tangible and unexpected way. It matters not at all whether a human being sees the information or whether a company knows “who you are.” Neither personally identifying information, nor the revelation of information to a person, is necessary for there to be a privacy harm.
Okay, I haven’t had enough coffee yet today and I’m exhausted from a trip to Atlanta, but I’m having a tough time grokking how this situation has anything to do with privacy at all.
I have no doubt that there’s a negative impact from using information based on browser or cookie as described above, but are we now equating “information” with “privacy”? People who connect to a web site generally understand that the site they visit can detect what browser they’re using and a whole slew of other information. I don’t consider most of that information “private” information. Where you were before you visited the site (the referral URL) should be private, as should your IP address (in my opinion, anyway), but the other stuff? I don’t see it.
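To make concrete what a site sees by default, here is a minimal sketch of a server-side handler (assuming Python and the Flask library; the route and variable names are just for illustration). The browser volunteers these values on every request, whether or not any cookie is set:

from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # Values sent by the browser with every request, no cookies required.
    browser = request.headers.get("User-Agent", "unknown")  # e.g. a string ending in "Firefox/3.6.10"
    came_from = request.headers.get("Referer", "none")      # the referral URL
    client_ip = request.remote_addr                         # the visitor's IP address
    return f"Browser: {browser} | Referrer: {came_from} | IP: {client_ip}"

if __name__ == "__main__":
    app.run()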
If someone discriminates against you because they know your race, creed, gender, religion, etc., is that necessarily an “objective privacy harm” just because it is based on a personal factor or characteristic? Don’t we need to distinguish between “personal information,” “private information,” and “privacy?” And if we don’t, then we run the risk of having to conclude that any unequal treatment of people based on any information about them or their belongings is a “privacy harm,” which could make the whole notion of “privacy harm” so broad as to be totally useless.
Ryan, if I’m missing something in your argument, please clarify, but I don’t see how differential rates based on browser type are *any* type of “privacy harm,” even though they are economically disadvantageous or unfair in some sense.
Dissent,
I’ve gotten the same basic comment from a few people. Maybe I’m off.
My assumptions are that (1) most people don’t realize that they are sending their browser type or location (WSJ) to the websites they visit and (2) any time information about you is gathered without your knowledge or consent and then used against you, there has been a privacy harm.
It sounds like you’re pushing back on (1). It would surprise me if most people understood that they are sending their browser type to the websites they visit. But if you’re right and I’m wrong, then I agree with you. In that case you would have at least tacit consent and the problem would not be a privacy problem.
One more thing: I believe that information is “private” to the degree that the subject is averse to turning it over. Maybe browser type is only a little private, then. Maybe location is more private. The degree of harm turns in part on the degree of aversion. There would be more harm in the latter case than the former, but still some harm in each. (After all, would you not use a plug-in that hid your browser type if you thought your browser selection might limit your opportunities? Would that plug-in not be a PET?)
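(To be clear about what such a plug-in would do, here is a rough sketch in Python, using the requests library purely as a stand-in; a real PET would do the equivalent inside the browser, and the URL is hypothetical.)

import requests

# Send a deliberately generic User-Agent so the site cannot tell Chrome from Firefox,
# and suppress the referral URL as well.
GENERIC_HEADERS = {
    "User-Agent": "Mozilla/5.0 (compatible)",
    "Referer": "",
}

response = requests.get("https://example.com/offers", headers=GENERIC_HEADERS)
print(response.status_code)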
Hope this makes sense. Thanks for your thoughts,
Ryan
Thanks for clarifying, Ryan. See how this plays out:
Two potential car buyers go to a dealership. They both park down the block from the dealership, but the salesmen, looking out the window, note that one customer arrives in an old rundown car. The other customer arrives in a new BMW sports model. Assuming that both customers are interested in purchasing exactly the same car with the same options, they will probably not be quoted the same price when they sit down to negotiate a deal. Indeed, the customer with the poorer car may get a better price than the customer who looks like they can afford more.
Would you say that the more affluent-appearing customer has suffered a “privacy harm” because information about him was used to his detriment? And if you say “yes,” would your answer change to “no” if the customer had parked his car in the lot or had volunteered that he drove a new BMW?
As always, great thanks for your thought-provoking analyses and comments.
/Dissent
I was shocked when I read this. I can understand the exhaustion thing, but dismissing something because you personally don’t mind making public something that someone else may want to keep private? That is very dangerous.
I see how it does not fit certain definitions of privacy, but you are overlooking so much more. Obviously Amazon shouldn’t be using cookies as a means of price discrimination. It’s a small step to go from using cookies to identify users to using other HTTP headers or ultra-high-tech browser “features”. I can’t remember if you covered anything like https://panopticlick.eff.org/ or super cookies before.
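To make the fingerprinting point concrete, here is a minimal sketch of the Panopticlick idea (the header values below are hypothetical): headers that look “non-personal” on their own can, taken together, come close to identifying a single browser.

import hashlib

# Hypothetical headers one browser might send; none is "personal" by itself.
headers = {
    "User-Agent": "Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.10) Gecko/20100914 Firefox/3.6.10",
    "Accept-Language": "en-US,en;q=0.5",
    "Accept-Encoding": "gzip, deflate",
    "Accept-Charset": "ISO-8859-1,utf-8;q=0.7,*;q=0.7",
}

# The rarer the combination of values, the more identifying the resulting fingerprint.
fingerprint = hashlib.sha256("|".join(headers.values()).encode()).hexdigest()
print(fingerprint[:16])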
Secondly, even if the type of browser a person is using were the only data a site used, what good would it do a company like Amazon? At first glance it may seem relatively harmless, or mere snobbery, to some people, but if a website can effectively charge different prices using “non-personal information,” then that probably means it is able to correlate this type of data with data it initially gathered through data mining.
Next, what if other sites did this? What if it were the government? My browser might also reveal that I was using Linux as opposed to Windows, or Firefox as opposed to Safari. What if I were labeled a bad customer or a political threat? These websites have no business knowing this information. One of the elements of privacy is a guarantee that someone can keep information personal without fear of someone more powerful using their beliefs, choices, or personal information against them.
And of course, it doesn’t matter whether a person is doing the dirty work or whether a program they made is abusing someone’s right to privacy on their behalf. Ryan makes a good point. This is one of Google’s favorite loopholes, too.
Finally (or at least the final point I’m going to make in this already long post), there is one major consequence that definitely threatens privacy. By checking the user-agent information, someone could unfairly charge a person more by inferring that a Firefox user is more likely than an IE or Chrome user to have privacy-preserving add-ons or settings. They could punish people who opt not to use certain technologies that data miners often abuse.
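Purely as an illustration of the logic I’m objecting to (nothing here is any company’s actual code, and the ten percent markup is invented), it would take only a few lines:

BASE_PRICE = 20.00

def quoted_price(user_agent: str) -> float:
    # Crude proxy: treat Firefox as "likely to have privacy-preserving add-ons"
    # and quote that visitor a higher price.
    if "Firefox" in user_agent:
        return round(BASE_PRICE * 1.10, 2)
    return BASE_PRICE

print(quoted_price("Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.10) Gecko/20100914 Firefox/3.6.10"))  # 22.0
print(quoted_price("Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.3 (KHTML, like Gecko) Chrome/6.0.472.63 Safari/534.3"))  # 20.0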