Key Takeaways:
- The rise of generative AI has increased the risk of child sexual exploitation, leaving millions of children potentially facing the nightmare of having their images turned into child sexual abuse material (CSAM).
- The training data used for AI models can contain CSAM, and the lack of safeguards can lead to the creation of explicit images of children.
- Companies like Google and OpenAI claim to have safeguards in place, but the risk of CSAM creation remains high.
- Legislation and technological solutions are needed to protect children from CSAM, and parents need to be aware of the risks of sharing photos of their children online.
- The public needs to demand accountability from companies that allow the creation of CSAM and push for legislation and safeguards to prevent child exploitation.
Introduction to Stranger Danger
The concept of "Stranger Danger" was a well-meaning lesson taught to children in the 1980s and 1990s, warning them of the risks of interacting with strangers. However, the risk was often overblown, and most child abuse and exploitation is perpetrated by people the children know. The author of the article, a former child actor, experienced the darker side of the entertainment industry, where her image was used for child sexual abuse material (CSAM) and she received creepy letters from grown men. As she notes, "Hollywood throws you into the pool, but it’s the public that holds your head underwater."
The Rise of Generative AI and CSAM
The rise of generative AI has reinvented the concept of Stranger Danger, making it easier for children to be sexually exploited. The author notes that "Generative AI has already been used many times to create sexualized images of adult women without their consent. It happened to friends of mine. But recently, it was reported that X’s AI tool Grok had been used, quite openly, to generate undressed images of an underage actor." This has contributed to a surge in CSAM, with the Internet Watch Foundation finding over 3,500 images of AI-generated CSAM on a dark web forum. Patrick LaVictoire, a mathematician and former AI safety researcher, explains why such models reproduce what they are trained on: "A connection that’s useful gets reinforced, one that’s less so, or actively unhelpful, gets pruned."
How AI is Trained
Generative AI "learns" through a repeated process of "look, make, compare, update, repeat", says LaVictoire. Because training data can contain CSAM and safeguards are often lacking, models can end up capable of creating explicit images of children. A 2023 Stanford University study found that one of the most popular training datasets already contained more than 1,000 instances of CSAM. As LaVictoire notes, "Generative AI itself has no way of distinguishing between innocuous and silly commands such as ‘make an image of a Jedi samurai’ and harmful commands, such as ‘undress this celebrity’."
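The "look, make, compare, update, repeat" loop LaVictoire describes can be sketched in a few lines. This is a deliberately toy illustration, not any real image model: a single number stands in for an image, and one learned parameter stands in for the billions of "connections" that get reinforced or pruned.

```python
# Toy sketch of the training loop: look, make, compare, update, repeat.
# A single float stands in for an image; one weight stands in for the
# model's connections. All names here are illustrative, not a real API.

def train(target: float, steps: int = 100, lr: float = 0.1) -> float:
    weight = 0.0                      # the model's single "connection"
    for _ in range(steps):            # repeat
        example = target              # look: read a training example
        output = weight               # make: generate an attempt
        error = output - example      # compare: how far off was it?
        weight -= lr * error          # update: reinforce or prune
    return weight

learned = train(target=3.0)
print(round(learned, 3))  # converges toward the training example, 3.0
```

The point of the sketch is LaVictoire's: the loop faithfully reproduces whatever is in the training data. Nothing in the update step knows or cares whether the examples were harmless or abusive, which is why contaminated datasets matter so much.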
The Need for Legislation and Technological Solutions
To stop the threat of a deepfake apocalypse, we need to look at how AI is trained and take action to prevent the creation of CSAM. Companies like Google and OpenAI claim to have safeguards in place, but the risk of CSAM creation remains high. As Akiva Cohen, a New York City litigator, notes, "A lot of this very consciously stays just on the ‘horrific, but legal’ side of the line." Legislation and technological solutions are needed to protect children from CSAM, and parents need to be aware of the risks of sharing photos of their children online.
The Role of the Public
The public needs to demand accountability from companies that allow the creation of CSAM and push for legislation and safeguards to prevent child exploitation. As Josh Saviano, a former practicing attorney and child actor, notes, "Lobbying efforts and our courts are eventually going to be the way that this is handled. But until that happens, there are two options: abstain entirely, which means take your entire digital footprint off the internet … or you need to find a technological solution." The public needs to take action to protect children from the risks of CSAM, and companies need to be held accountable for their role in enabling this exploitation.
Conclusion
The rise of generative AI has increased the risk of child sexual exploitation, and it’s time for the public to take action. As the author notes, "If our obsession with Stranger Danger showed anything, it’s that most of us want to prevent child endangerment and harassment. It’s time to prove it." We need to demand accountability from companies, push for legislation and safeguards, and take steps to protect children from the risks of CSAM. By working together, we can prevent the creation of CSAM and keep children safe from exploitation.
https://www.theguardian.com/commentisfree/2026/jan/17/child-abuse-images-ai-exploitation

