Science Blog writes:
With the rise of artificial intelligence, clinicians are turning to chatbots like OpenAI’s ChatGPT to organize notes, produce medical records, or write letters to health insurers. But clinicians deploying this new technology may be violating health privacy laws, according to Genevieve Kanter, an associate professor of public policy at the USC Sol Price School of Public Policy.
Kanter, who is also a senior fellow at the Leonard D. Schaeffer Center for Health Policy & Economics, a partner organization of the USC Price School, recently co-authored an article on the emerging issue in the Journal of the American Medical Association. To learn more, we spoke to Kanter about how clinicians are using chatbots and why they could run afoul of the Health Insurance Portability and Accountability Act (HIPAA), a federal law that protects patient health information from being disclosed without the patient’s permission.
Read more at Science Blog.
And to learn even more, read the “viewpoint” article co-authored by Kanter and Eric Packel, “Health Care Privacy Risks of AI Chatbots.”
h/t Joe Cadillic