OpenAI releases a new safety blueprint to address the rise in child sexual exploitation


In response to escalating concerns about child safety online, OpenAI has unveiled a blueprint to enhance U.S. child protection efforts amid the AI boom. The Child Safety Blueprint, which was released Tuesday, is designed to help with faster detection, better reporting, and more efficient investigation into cases of AI-enabled child exploitation.

The overall goal of the Child Safety Blueprint is to tackle the alarming rise in child sexual exploitation linked to advancements in AI. According to the Internet Watch Foundation (IWF), more than 8,000 reports of AI-generated child sexual abuse content were detected in the first half of 2025, a 14% increase from the year prior. This includes criminals using AI tools to generate fake explicit images of children for financial sextortion and to generate convincing messages for grooming. 

OpenAI’s blueprint also comes amid increased scrutiny from policymakers, educators, and child-safety advocates, especially in light of troubling incidents where young individuals died by suicide after allegedly engaging with AI chatbots.

Last November, the Social Media Victims Law Center and the Tech Justice Law Project filed seven lawsuits in California state courts, alleging that OpenAI released GPT-4o before it was ready. The suits claim the product’s psychologically manipulative nature contributed to wrongful deaths by suicide and assisted suicide. They cite four individuals who died by suicide and three others who experienced severe, life-threatening delusions after extended interactions with the chatbot.

This blueprint was developed in collaboration with the National Center for Missing and Exploited Children (NCMEC) and the Attorney General Alliance, as well as with feedback from North Carolina Attorney General Jeff Jackson and Utah Attorney General Derek Brown. 

The company says that the blueprint focuses on three aspects: updating legislation to include AI-generated abuse material, refining reporting mechanisms to law enforcement, and integrating preventative safeguards directly into AI systems. By doing so, OpenAI aims not only to detect potential threats earlier but also to ensure actionable information reaches investigators promptly.

OpenAI’s new child safety blueprint builds on previous initiatives, including updated guidelines for interactions with users under 18, which prohibit generating inappropriate content, encouraging self-harm, or offering advice that would help young people conceal unsafe behavior from caregivers. The company recently released a safety blueprint for teens in India.

