Joe Cadillic writes:
Imagine police knocking on your door because you posted a ‘troubling comment’ on a social media website.
Imagine a judge ordering you to be jailed, sorry, I meant hospitalized, because a computer program found your comment(s) ‘troubling’.
You can stop imagining, this is really happening.
A recent TechCrunch article warns that Facebook’s “Proactive Detection” artificial intelligence (A.I.) will use pattern recognition to flag posts and contact first responders if it deems a person’s comment(s) to contain troubling suicidal thoughts.
Read more on MassPrivateI.
Um, what if a person living in Canada has cancer (or another terminal illness) and obtains a legal assisted-suicide kit?
Is Facebook still going to report that?
What about other countries?
Or American states where suicide is still on the books as illegal?
Is Facebook going to report that too?
If they do, shouldn’t they also revise their privacy policy to state that they will intrude on people’s medical conditions via A.I. and report on people’s health?
Just say’n…