The Internet Watch Foundation (IWF) has reported a sharp rise in AI-generated child sexual abuse material (CSAM). In 2025 alone, the IWF identified 8,029 instances of realistic AI-created content, a 14% increase on the previous year. Most alarming is a more than 260-fold rise in the number of videos classified as the most severe under UK law, underscoring the urgent need for intervention and regulation.
Alarming Statistics Highlight Growing Threat
The IWF’s findings paint a grim picture of child exploitation online. Of the 3,443 videos identified, 65% fell into category A, the classification reserved for the most egregious forms of CSAM. By contrast, only 43% of non-AI-generated videos were categorised in the same way, suggesting that advances in AI technology are being exploited to create more graphic and disturbing content.
Kerry Smith, the chief executive of the IWF, articulated the deep concerns surrounding these developments. “Advances in technology should never come at the expense of a child’s safety and wellbeing. While AI can offer much in a positive sense, it is horrifying to consider that its power can be used to devastate a child’s life. This material is dangerous,” Smith stated, emphasising the critical balance that must be struck between innovation and child protection.
Dark Web Conversations Signal Dangerous Trends
An IWF analyst noted that discussions among paedophiles on the dark web reveal a troubling enthusiasm for the latest technological innovations. Users have expressed delight at the increasing realism of AI outputs, which can now incorporate audio into videos or manipulate real images of known children. This capability raises questions about the adequacy of current safeguards and the potential for further harm.
The IWF has underscored the necessity for tech companies and child protection agencies to collaborate in testing AI tools designed to prevent the creation of CSAM. In a proactive move, the UK government has authorised designated AI firms and child safety organisations to assess generative AI models—such as those underlying popular chatbots and image generators—to ensure they include stringent safeguards against producing harmful content.
Government Action and Public Sentiment
In light of these alarming statistics, public sentiment has shifted towards advocating for stronger regulations. Recent polling indicated that eight out of ten UK adults support government legislation aimed at ensuring AI systems are developed with safety as a primary concern. Last year, the UK government responded to these concerns by announcing a ban on the possession, creation, or distribution of AI models specifically designed to generate child sexual abuse material.
Smith reiterated the urgency of this issue, stating: “Children, victims, and survivors cannot afford for us to be complacent. New technology must be held to the highest standard. In some cases, lives are on the line.” This highlights the imperative for regulatory frameworks to evolve alongside technology, ensuring that the tools intended to foster innovation do not inadvertently facilitate exploitation.
Why it Matters
The rise of AI-generated child sexual abuse material represents a profound challenge not just for law enforcement but for society as a whole. As the technology advances at a rapid pace, the ramifications for child safety and wellbeing grow increasingly severe. Robust, proactive legislation, alongside collaboration between tech companies and child protection organisations, is paramount. Left unchecked, these technological innovations could have devastating consequences for vulnerable children, making it essential for stakeholders to act decisively and comprehensively.