I’m posting a link to this essay not because I agree with all of it (I don’t), but because it provides food for what I hope will be productive thought. Mason Marks, M.D., J.D., writes:
Following the Cambridge Analytica scandal, it was reported that Facebook planned to partner with medical organizations to obtain health records on thousands of users. The plans were put on hold when news of the scandal broke. But Facebook doesn’t need medical records to derive health data from its users. It can use artificial intelligence tools, such as machine learning, to infer sensitive medical information from its users’ behavior. I call this process mining for emergent medical data (EMD), and companies use it to sort consumers into health-related categories and serve them targeted advertisements. In this essay, I explain how mining for EMD is analogous to the process of medical diagnosis performed by physicians, and companies that engage in this activity may be practicing medicine without a license.
Read more of his essay at Harvard Law’s Bill of Health blog.
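For readers who want a concrete picture of what “mining for EMD” looks like mechanically, it is, at bottom, ordinary supervised classification over behavioral signals. Here is a minimal, purely illustrative sketch in Python; the features, labels, and synthetic data are my own assumptions for the sake of the example, not anything Facebook or Marks has described.

```python
# Purely illustrative sketch: inferring a health-related category from
# behavioral signals alone. The feature names, labels, and data below are
# invented for illustration and do not describe any real platform's system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical behavioral features per user:
#   [late-night sessions/week, posts mentioning fatigue,
#    sleep-group joins, clicks on sleep-aid ads]
n_users = 1_000
X = rng.poisson(lam=[3, 1, 0.5, 0.8], size=(n_users, 4)).astype(float)

# Hypothetical "ground truth" label (e.g., an insomnia-related ad category),
# synthesized so that it correlates with the behavioral features above.
logits = 0.6 * X[:, 0] + 1.1 * X[:, 1] + 0.4 * X[:, 3] - 4.0
y = (rng.random(n_users) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A simple classifier is enough to assign a health-related category
# without ever touching a medical record.
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# The output is a per-user probability of belonging to the category,
# which is exactly the kind of inference Marks calls emergent medical data.
new_user = np.array([[6, 3, 1, 2]], dtype=float)
print(f"P(insomnia-related category): {model.predict_proba(new_user)[0, 1]:.2f}")
```

The point isn’t the particular model; it’s that no medical record appears anywhere in the pipeline, which is precisely why Marks’s diagnosis analogy has teeth.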
Then think about the potential good that might result from what Facebook can do (and does do), and the potential harm that might accrue from what it can do (and, perhaps, already does). How might we maximize the former while minimizing the latter, if we can’t prevent or eliminate the harm entirely? Or can we?