The Guardian: Revealed: US police prevented from viewing many online child sexual abuse reports, lawyers say

Social media firms relying on AI for moderation generate unviable reports that prevent authorities from investigating cases

Social media companies relying on artificial intelligence software to moderate their platforms are generating unviable reports on cases of child sexual abuse, preventing US police from seeing potential leads and delaying investigations of alleged predators, the Guardian can reveal.

By law, US-based social media companies are required to report any child sexual abuse material detected on their platforms to the National Center for Missing & Exploited Children (NCMEC). NCMEC acts as a nationwide clearinghouse for leads about child abuse, which it forwards to the relevant law enforcement departments in the US and around the world. The organization said in its annual report that in 2022 it received more than 32m reports of suspected child sexual exploitation from companies and the public, containing roughly 88m images, videos and other files.

Meta is the largest reporter of these tips, with more than 27m, or 84%, generated by its Facebook, Instagram and WhatsApp platforms in 2022. NCMEC is partly funded by the Department of Justice, but it also receives private and corporate donations, including from Meta. Neither NCMEC nor Meta discloses the size of Meta's donations.

Social media companies, Meta included, use AI to detect and report suspicious material on their sites and employ human moderators to review some of the flagged content before sending it to law enforcement. However, US law enforcement agencies can only open AI-generated reports of child sexual abuse material (CSAM) by serving a search warrant on the company that sent them. Petitioning a judge for a warrant and waiting to receive one can add days or even weeks to the investigation process.

Read full article here: https://www.theguardian.com/technology/2024/jan/17/child-sexual-abuse-ai-moderator-police-meta-alphabet
