Olivia Solon reports:
Facebook put the safety of its content moderators at risk after inadvertently exposing their personal details to suspected terrorist users of the social network, the Guardian has learned.
The security lapse affected more than 1,000 workers across 22 departments at Facebook who used the company’s moderation software to review and remove inappropriate content from the platform, including sexual material, hate speech and terrorist propaganda.
A bug in the software, discovered late last year, caused the personal profiles of content moderators to appear automatically as notifications in the activity logs of Facebook groups whose administrators had been removed from the platform for breaching the terms of service. The personal details of Facebook moderators were then viewable by the remaining admins of those groups.
Read more on The Guardian.