The nation’s system for tracking down and prosecuting people who sexually exploit children online is overwhelmed and buckling, a new report finds — and artificial intelligence is about to make the problem much worse.

The Stanford Internet Observatory report takes a detailed look at the CyberTipline, a federally authorized clearinghouse for reports of online child sexual abuse material, known as CSAM. The tip line fields tens of millions of CSAM reports each year from such platforms as Facebook, Snapchat and TikTok, and forwards them to law enforcement agencies, sometimes leading to prosecutions that can bust up pedophile and sex trafficking rings.

But just 5 to 8 percent of those reports ever lead to arrests, the report said, due to a shortage of funding and resources, legal constraints, and a cascade of shortcomings in the process for reporting, prioritizing and investigating them. If those limitations aren’t addressed soon, the authors warn, the system could become unworkable as the latest AI image generators unleash a deluge of sexual imagery of virtual children that is increasingly “indistinguishable from real photos of children.”

“These cracks are going to become chasms in a world in which AI is generating brand-new CSAM,” said Alex Stamos, a Stanford University cybersecurity expert who co-wrote the report. While computer-generated child pornography presents its own problems, he said that the bigger risk is that “AI CSAM is going to bury the actual sexual abuse content,” diverting resources from actual children in need of rescue.

The report adds to a growing outcry over the proliferation of CSAM, which can ruin children’s lives, and the likelihood that generative AI tools will exacerbate the problem.