There has been relatively little reaction from the privacy community since a consortium of major players in online behavioral advertising, led by the Interactive Advertising Bureau (IAB), issued a proposal for self-regulation last week. But Ryan Calo of the Stanford Law School Center for Internet and Society has turned his lawyerly eye to the proposal (pdf) and has dissected some of its language for us. Ryan writes, in part:
“Notice” here means disclosure about the advertiser’s data practices; a privacy policy is a form of notice. “Control” refers to what users can do about those practices.
The industry coalition’s proposed principle around notice (pages 12-14) gives participants a lot of options. The text has more “or’s” than the Argonauts (JPG).
[…]
When it comes to user control, however, suddenly we start seeing some “and’s.” Third party advertisers should provide choice around the “collection and use of data” (page 14, my emphasis). Toolbars and other add-on software “should not collect and use data for Online Behavioral Advertising purposes without Consent” (same). The principles could expressly provide control of the collection “or” use of data, but they don’t. Like the NAI opt out discussion, the proposed self-regulatory principles leave ambiguous whether users will be able to opt out of collection at all. This follows because if a company stops using data to target ads, then it is no longer technically “collecting and using” that data.
You can read more of Ryan’s commentary here. I think Ryan has clearly stated a serious concern about the proposal: by its wording, it does not necessarily give web surfers control over the collection of data.
Saul Hansell of the New York Times offers his own reaction to the self-regulation principles, commenting that:
As best as I can tell, the proposal largely codifies the practices engaged in today. The groups, led by the Interactive Advertising Bureau, decided not to endorse any of the ideas that have been actively discussed recently that might give users more meaningful information and control over how their behavior is being tracked.
[…]
Here’s what will happen under the minimum standards in the new plan: Everything will occur exactly the same as before, except that the link you click for the vaguely worded generalities will be called something like “advertising information” rather than “privacy policy.”
Hansell also discusses four privacy protections that were omitted from the industry proposal:
- Every ad should explain itself
- Users should be able to see data collected about them
- Browsers should help enforce user choices about tracking
- Some information is simply too sensitive to track
CDT was somewhat more positive in its reaction, noting that the “transparency principle”
… includes a robust framework for providing notice outside of privacy policies, and lays the groundwork for development of a uniform link or icon that would appear on any web site or advertisement where data is collected or used for behavioral advertising.
But CDT’s analysis concurs with Hansell’s opinion on some important points:
In some areas, though, the principles don’t go far enough. For example, we had suggested to both the FTC and the NAI that the notion of “sensitive information” needed to cover a broad array of data types, including health information and location data. The advertiser principles cover only a very limited subset of medical information and leave out location data altogether. The principles are also silent about consumer access to the behavioral data collected about them. Google has demonstrated that providing profile access is possible, and we would expect the rest of the industry to follow suit.
It seems the industry proposal leaves much (too much?) to be desired. And because the industry-generated proposal for self-regulation is just that, a proposal, the FTC remains free to conclude that it is not sufficiently protective of privacy or sufficiently transparent. If so, the government may yet step in and regulate the process itself.