Jamie Williams writes on EFF:
The New York Times’ recent story on Clearview AI, maker of a secretive facial recognition app that markets its product to law enforcement, has raised critical questions about what can be done to protect our privacy online. Clearview claims to have amassed a dataset of over three billion face images by scraping websites like Facebook, YouTube, and Venmo.
The solution to the Clearview problem is clear: comprehensive federal privacy legislation that gives consumers real power over their data and real power to fight back.
The Answer is Opt-In Consent for Data Collection and a Private Right of Action
To ensure that companies like Clearview don’t collect consumers’ personal data without their knowledge or consent, and to provide effective recourse against companies that do, we need comprehensive federal consumer data privacy legislation. We need to require private companies that collect, use, retain, or share information about us—including our face prints or other biometric information—to get informed opt-in consent before doing so. And we need to give consumers the right to bring their own lawsuits against the companies that fail to do so.
Illinois’ Biometric Information Privacy Act of 2008 (BIPA), one of our nation’s most important privacy safeguards for protecting ordinary people from corporations that want to harvest and monetize their personal information, requires opt-in consent to collect biometrics, and provides a private right of action. We need protections like these in federal legislation.
EFF has laid out in detail what strong privacy legislation needs to include. It’s critical that Congress work to pass such legislation soon. As the Clearview example shows, we can’t rely on companies to refrain from building tools that radically erode our privacy.
The CFAA is Not the Answer
Since the New York Times Clearview story was published, there has been some discussion online about using the federal Computer Fraud and Abuse Act (CFAA)—a notoriously vague pre-Internet law intended to punish those who break into private computer systems—to go after scraping of publicly available websites.
Properly interpreted, the CFAA does not currently apply to scraping of public websites, and amending it to prohibit this behavior would be a mistake, for many reasons.
The CFAA imposes criminal penalties including incarceration. Criminalizing web scraping would mean criminalizing commonplace online activities. As a technical matter, web scraping is simply machine-automated web browsing. It is a widely used method of interacting with the content on the web: journalists, researchers, and the Internet Archive all use scrapers. The web is the largest, ever-growing data source on the planet, and a critical resource for journalists, academics, businesses, and everyday people alike. Meaningful access sometimes requires the assistance of technology to automate and expedite an otherwise tedious process of accessing, collecting, and analyzing public information.
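To make the point concrete, here is a minimal sketch of what "machine-automated web browsing" amounts to, using only the Python standard library. It sends the same HTTP GET request a browser would and collects the links a human reader would see on the page; the fetch at the bottom is commented out and its URL is purely illustrative.

```python
from html.parser import HTMLParser
from urllib.request import urlopen


class LinkCollector(HTMLParser):
    """Collects the href of every <a> tag -- the same links a person
    scanning the page in a browser would see."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def extract_links(html: str) -> list:
    """Parse an HTML document and return the URLs it links to."""
    parser = LinkCollector()
    parser.feed(html)
    return parser.links


# "Browsing" the page programmatically is a single request,
# identical in kind to what a browser sends (illustrative URL):
# html = urlopen("https://example.com").read().decode()
# print(extract_links(html))
```

Nothing here "breaks into" anything: the scraper only requests pages the server already serves to anyone, which is why treating it as computer intrusion fits the CFAA so poorly.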
What’s more, big companies have already tried to use the CFAA’s civil enforcement provision to go after web scraping. And as these lawsuits show, they have done so not to protect their users’ privacy, but to try to block competitors. As an analysis by Boston University Law Professor Andrew Sellars shows, the “vast majority” of the web scraping cases in the last twenty years “concern claims brought by direct commercial competitors or companies in closely adjacent markets to each other.” The fact that these web scraping cases are consistently about blocking competition—and not about punishing criminals for breaking into private computer systems, as the CFAA was intended—demonstrates how the law is being abused. Expanding the CFAA would lead to more abuse.
The CFAA is an old, blunt instrument, and trying to use it as a stand-in for real privacy reform would be a mistake. Rather, we must solve the privacy problem Clearview raises with a strong federal privacy law that gives consumers real power to stop invasive uses of technology.