The rapid rise of AI-powered image-generation tools such as Grok has led to a shocking surge in the creation and spread of nonconsensual intimate imagery, often targeting women and minors. In the past eight months, more than 565 instances of users prompting Grok to generate such abusive content have been documented, 389 of them in a single day.
While X’s recent decision to restrict Grok’s image-generation feature to paying subscribers only is a step in the right direction, Technology Secretary Liz Kendall rightly argues that this “does not go anywhere near far enough.” Kendall has announced that the creation of nonconsensual intimate images will be criminalised this week, and that the supply of “nudification” apps will also be outlawed.
However, the problem runs deeper. Grok and many other prominent AI tools are not dedicated nudification apps but general-purpose AI systems with inadequate safeguards. Kendall’s approach of criminalising users and app providers misses the core issue: the law must also compel tech companies to build proactive detection and prevention mechanisms that stop this abuse before it happens.
Equally concerning is the lack of cross-border cooperation, as the Trump administration in the US pushes for a “minimally burdensome” AI policy framework that prioritises American AI dominance over safety. Without US collaboration, the UK’s efforts to regulate this transnational technology will be severely hampered.
As this regulatory wrangling continues, many victims are left wondering how to seek justice when their images have been digitally altered by perpetrators halfway across the world. The truth is that these tech giants cannot be trusted to regulate themselves, or to answer for the harms their products enable.
Urgent, global action is needed to shift the onus from “removing harm when found” to “proving your systems prevent harm.” Regulators must mandate input filtering, independent audits, and licensing conditions that make prevention a legal requirement. Only then can we hope to minimise the devastating impact of AI-enabled sexual abuse before it occurs.
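To make the input-filtering demand concrete, the sketch below shows the principle in miniature: a prompt is screened before it ever reaches the image model, and every refusal is logged so that independent auditors can verify the gate actually operates. Everything here is illustrative; the pattern list, function names, and logging are invented for this example, and a production system would rely on trained classifiers and image-level checks rather than keyword matching.

```python
import re
from dataclasses import dataclass

# Illustrative only: these patterns are invented for this sketch. A real
# system would use trained classifiers, not a keyword list.
BLOCKED_PATTERNS = [
    re.compile(r"\b(nudify|undress)\b", re.IGNORECASE),
    re.compile(r"\bremove\s+(her|his|their)\s+clothes\b", re.IGNORECASE),
]


@dataclass
class FilterDecision:
    allowed: bool
    reason: str | None = None


def screen_prompt(prompt: str) -> FilterDecision:
    """Check a text-to-image prompt BEFORE generation, not after."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            return FilterDecision(allowed=False, reason=pattern.pattern)
    return FilterDecision(allowed=True)


def generate_image(prompt: str) -> str:
    """Hypothetical generation entry point with the filter as a hard gate."""
    decision = screen_prompt(prompt)
    if not decision.allowed:
        # Refusals are recorded: this audit trail is what independent
        # auditors would inspect to prove the system prevents harm.
        print(f"refused and logged for audit (matched: {decision.reason})")
        return "refused"
    return f"[image generated for: {prompt}]"


if __name__ == "__main__":
    print(generate_image("a watercolour of a lighthouse at dusk"))
    print(generate_image("nudify this photo of my classmate"))
```

The design point is the placement of the check: it sits in front of the model as a condition of generation, which is what shifts the burden from after-the-fact takedowns to demonstrable prevention.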