Ryan Calo at the Stanford Law School Center for Internet and Society has a thought-provoking article, “People Can Be So Fake: A New Dimension to Privacy and Technology Scholarship.” Here’s the abstract:
This article updates the traditional discussion of privacy and technology, focused since the days of Warren and Brandeis on the capacity of technology to manipulate information. It adds a novel dimension: the impact on privacy of anthropomorphic or social design.
Technologies designed to emulate people—through voice, animation, and natural language—are increasingly commonplace, showing up in our cars, computers, phones, and homes. A rich literature in communications and psychology suggests that we are hardwired to react to such technology as though a person were actually present. Social interfaces accordingly capture our attention and improve interactivity, and can free up our hands for other tasks.
At the same time, technologies that emulate people have the potential to implicate long-standing privacy values. One of the well-documented effects of interfaces and devices that emulate people is the sensation of being observed and evaluated. Their presence can alter our attitude, behavior, and physiological state. Widespread adoption of such technology may accordingly lessen opportunities for solitude and chill curiosity and self-development. These effects are all the more dangerous in that they cannot be addressed through traditional privacy protections such as encryption or anonymization.
The unique properties of social technology also present an opportunity to improve privacy, particularly online. Careful use of anthropomorphic design might one day replace today’s ineffective privacy policies with a direct, visceral notice that lines up our experience with actual information practice.
The full text of the article is available as a free download from SSRN.