The Internet Watch Foundation (IWF) has revealed a staggering increase in child sexual abuse material (CSAM) generated by artificial intelligence, with a 260-fold rise in the number of identified videos last year alone. In its latest report, the watchdog outlined that it verified 8,029 pieces of AI-created content in 2025, a concerning trend that highlights the darker capabilities of technology in the wrong hands. With 65% of the identified videos classified as the most severe category under UK law, the implications for child safety and the tech industry are profound.
Alarming Trends in AI-Generated Content
In 2025, the IWF noted a 14% overall rise in AI-generated CSAM, with the majority comprising the most extreme forms of abuse. Among the 3,443 videos reported, 65% were deemed Category A, the classification reserved for the most egregious content. This stands in stark contrast to non-AI videos, of which only 43% fell into the same severe category, and signals a worrying trend in which advanced technology is being manipulated to produce more graphic and violent material.
Kerry Smith, the chief executive of the IWF, expressed deep concern over the intersection of technological advancement and child safety. “Advances in technology should never come at the expense of a child’s safety and wellbeing,” she stated. “While AI can offer much in a positive sense, it is horrifying to consider that its power can be used to devastate a child’s life.”
The Dark Web and AI: A Disturbing Connection
Conversations among offenders on the dark web reveal a grim reality: users are increasingly excited about the potential of AI technologies. Analysts from the IWF have reported that discussions focus on the ability of these systems to create more realistic outputs, including the integration of audio with visual content. This advancement raises serious concerns about the manipulation of images featuring real children, exacerbating the threat posed by these technologies.
Moreover, the potential for “agentic” systems—those capable of performing tasks autonomously—has become a subject of interest among offenders, as they explore ways to exploit these innovations for nefarious purposes. As these technologies evolve, so too does the risk they pose to vulnerable children.
Regulatory Response and Future Safeguards
In response to the alarming rise in AI-generated CSAM, UK authorities are empowering tech companies and child protection agencies to investigate whether AI tools can inadvertently produce such material. This initiative aims to preemptively address abuse before it occurs, highlighting a proactive approach towards safeguarding children in the digital age.
The UK government is enabling designated AI companies and child safety organisations to scrutinise generative AI models—like those underpinning popular chatbots and image generators—to ensure they are equipped with robust safeguards against the creation of harmful content. “Children, victims and survivors cannot afford for us to be complacent,” Smith added, advocating for stringent standards in the development of new technology. The government’s commitment includes a ban on the possession, creation, or distribution of AI models specifically designed to generate CSAM.
Public Sentiment and Legislative Action
Polling data released by the IWF indicates that a significant majority—eight out of ten UK adults—support the introduction of legislation mandating that AI systems prioritise safety and be designed to be “future-proofed from causing harm.” This public sentiment underscores the urgent need for regulatory frameworks that can adapt to rapidly changing technologies.
As the sophistication of AI tools continues to grow, so too does the imperative for legislative measures that can effectively mitigate risks associated with their misuse. The conversation surrounding AI and child protection is no longer a distant concern; it is an immediate and pressing issue that demands action.
Why it Matters
The surge in AI-generated child sexual abuse material is not just a statistic; it represents a significant threat to the safety and wellbeing of children worldwide. As the technology landscape evolves, so must our strategies for protecting the most vulnerable. The findings from the IWF are a clarion call for the tech industry, lawmakers, and society at large to collaborate and implement robust measures that prevent the exploitation of advanced technologies. The stakes are high, and complacency is not an option.