Urgent Call for Comprehensive AI Regulation to Protect Victims of Nonconsensual Image Generation

Sophie Laurent, Europe Correspondent
2 Min Read

The rapid rise of AI-powered image generation tools like Grok has led to a shocking surge in the creation and spread of nonconsensual intimate imagery, often targeting women and minors. In the past eight months, over 565 instances of users prompting Grok to generate such abusive content have been documented, 389 of them in a single day.

While X’s recent decision to restrict Grok’s image generation feature to only paying subscribers is a step in the right direction, Technology Secretary Liz Kendall rightly argues that this “does not go anywhere near far enough.” Kendall has announced that the creation of nonconsensual intimate images will be criminalised this week, and the supply of “nudification” apps will also be outlawed.

However, the problem runs deeper. Grok and many other prominent AI tools are not dedicated nudification apps but general-purpose AI systems with inadequate safeguards. Kendall's approach of criminalising users and app providers misses the core issue: the law must compel tech companies to implement proactive detection and prevention mechanisms that stop this abuse before it happens.

Equally concerning is the lack of cross-border cooperation, as the Trump administration in the US pushes for a “minimally burdensome” AI policy framework that prioritises American AI dominance over safety. Without US collaboration, the UK’s efforts to regulate this transnational technology will be severely hampered.

As this regulatory wrangling continues, many victims are left wondering how to seek justice when their images have been digitally altered by perpetrators halfway across the world. The truth is that these tech giants cannot be trusted to self-regulate or to be held accountable for the harms their products enable.

Urgent, global action is needed to shift the onus from “removing harm when found” to “proving your systems prevent harm.” Mandatory input filtering, independent audits, and licensing conditions that make prevention a legal requirement must be implemented. Only then can we hope to minimise the devastating impact of AI-enabled sexual abuse before it occurs.

Sophie Laurent covers European affairs with expertise in EU institutions, Brexit implementation, and continental politics. Born in Lyon and educated at Sciences Po Paris, she is fluent in French, German, and English. She previously worked as Brussels correspondent for France 24 and maintains an extensive network of EU contacts.

© 2026 The Update Desk. All rights reserved.