The Internet Watch Foundation (IWF) has reported a staggering rise in AI-generated child sexual abuse material (CSAM), with the number of such videos identified increasing 260-fold over the past year. In 2025 alone, the IWF verified 8,029 pieces of this content, a significant portion of which fell into the most severe category under UK law. The data underscores a pressing concern for child safety in an era increasingly shaped by advanced technology.
Escalating Numbers and Disturbing Trends
According to the IWF, the overall volume of AI-generated CSAM rose by 14% in 2025, and a notable 65% of the identified videos were classified as Category A, the most severe category defined by UK legislation. By comparison, only 43% of non-AI-generated videos fell into this classification, a troubling sign that the capabilities of artificial intelligence are being exploited to produce increasingly violent content.
Kerry Smith, the chief executive of the IWF, has emphasised the gravity of the situation, stating, “Advances in technology should never come at the expense of a child’s safety and wellbeing. While AI can offer much in a positive sense, it is horrifying to consider that its power can be used to devastate a child’s life. This material is dangerous.”
The stark rise in AI-generated material has been mirrored by discussions among offenders on the dark web, where technological innovations are reportedly welcomed with enthusiasm. IWF analysts note that these conversations reveal a growing interest in AI's ability to create realistic outputs, including adding audio to videos and manipulating images of real children known to offenders.
Government Action and Industry Responsibility
In response to the escalating threat posed by AI, the UK government has introduced measures allowing designated tech companies and child protection organisations to test whether their AI tools can generate CSAM. This initiative is intended to proactively address the dangers before they manifest in harmful ways.
The approach reflects a commitment to ensuring that generative artificial intelligence models, such as those powering chatbots and image creators, are equipped with robust safeguards to prevent the creation of abusive content. Smith reiterated the urgency of this initiative, stating, “Children, victims and survivors cannot afford for us to be complacent. New technology must be held to the highest standard. In some cases, lives are on the line.”
Public Sentiment and Legislative Momentum
The IWF’s findings have also prompted public concern, with recent polling indicating that 80% of UK adults support the introduction of legislation aimed at prioritising safety in the development of AI systems. This sentiment comes in the wake of the government’s announcement last year of a ban on the possession, creation, or distribution of AI models explicitly designed to generate CSAM.
As the IWF continues to monitor the surge in identified materials, the organisation’s findings stress the urgent need for a collaborative effort between tech companies, law enforcement, and child protection agencies to bolster the safety of children online.
Why it Matters
The significant rise in AI-generated child sexual abuse material represents a critical intersection of technology and child safety, raising profound ethical questions about the capabilities and responsibilities of AI developers. As generative technologies evolve, so too must the frameworks designed to protect vulnerable populations. The IWF's findings serve as a clarion call for immediate action, reinforcing the imperative that technological advancement must not come at the cost of child welfare. The stakes are high, and a unified commitment to safeguarding children online is not just beneficial; it is essential.