As a child actor in the 1990s, Mara Wilson experienced the dark side of the public eye: her image was exploited for child sexual abuse material (CSAM) while she was still a child. Now, with the rise of generative AI, deepfakes have reinvented the concept of “Stranger Danger,” putting millions of children at risk of a similar nightmare.
The issue has become increasingly prevalent, with reports of AI tools like Grok being used to generate explicit images of underage actors. The Internet Watch Foundation found over 3,500 instances of AI-generated CSAM on a dark web forum in July 2024, and the problem has only continued to grow.
The crux of the matter lies in how generative AI models are trained. These systems “learn” by repeatedly examining their training data, generating outputs, and updating themselves based on what they’ve been exposed to. As a 2023 Stanford University study revealed, one of the most widely used training datasets contained over 1,000 instances of CSAM; the material has since been removed, but the finding highlights the underlying threat.
While tech giants like Google and OpenAI claim to have safeguards in place, the reality is that generative AI has no inherent ability to distinguish between innocent and harmful prompts. This has led to cases like the one involving Grok, where the tool was deployed carelessly, without adequate filtering mechanisms.
The situation is further exacerbated by the spread of open-source AI models, which allow anyone to “fine-tune” their own image generators on explicit or illegal content, creating an endless supply of CSAM and “revenge porn.”
Efforts to combat this threat have been mixed. Some countries, like China and Denmark, have enacted laws to address the issue, but the outlook in the United States appears grimmer, with the government prioritizing the financial benefits of AI over the safety of its citizens.
Legal experts suggest that while certain instances of deepfake creation may fall under existing criminal statutes, much of the activity exists in a “horrific, but legal” grey area. The solution, they argue, lies in holding the companies that enable this technology accountable through civil liability, including “false light” invasion-of-privacy torts.
As Mara Wilson aptly states, boycotts alone are not enough. The public must demand accountability from the companies whose tools enable the creation of CSAM, and push for legislative and technological safeguards to protect children in the digital age. It’s a battle that will require sustained effort, but one that is essential to keep the nightmares of the past from becoming reality for millions of children in the future.
