Grok AI Forced to Curb Sexualized Deepfakes After Backlash

Lisa Chang, Asia Pacific Correspondent
3 Min Read

Elon Musk’s AI tool Grok will no longer edit photos of real people to show them in revealing clothing in jurisdictions where doing so is illegal. The change follows widespread concern over sexualized AI deepfakes on the social media platform X, formerly known as Twitter.

The UK government has welcomed this move, calling it a “vindication” for its calls to control Grok. Ofcom, the media regulator, also described it as a “welcome development,” though it said its investigation into whether the platform had broken UK laws “remains ongoing.”

However, campaigners and victims argue that the change has come too late to undo the harm already done. Journalist and campaigner Jess Davies, whose images on X were edited with Grok, said the platform’s changes were a “positive step” but it should never have allowed such imagery in the first place.

“It’s a sobering thought to think of how many women including myself have been targeted by this [and] how many more victims of AI abuse are being created,” she told the BBC.

Dr. Daisy Dixon, a lecturer in philosophy at Cardiff University, previously expressed feeling “shocked,” “humiliated,” and fearing for her safety after people used Grok to undress her in images on X. She said the platform’s U-turn was a “battle-win” for campaigners, but noted that the abuses should never have happened in the first place.

The changes come as California’s top prosecutors announced they were probing the spread of sexualized AI deepfakes, including those of children, generated by the AI model. X said it has now “geoblocked” the ability of all users to generate images of real people in revealing attire via the Grok account in jurisdictions where it is illegal.

However, questions remain about how X will enforce its new policies, such as how the AI model will determine if an image is of a real person and what actions it will take when users break the rules.

Campaigners and experts have called for more proactive measures from tech platforms to prevent such AI-generated harms, rather than reactive responses. The ongoing investigation by Ofcom and the potential for stronger legislation could further shape the future of Grok and similar AI tools.

Lisa Chang is an Asia Pacific correspondent based in London, covering the region's political and economic developments with particular focus on China, Japan, and Southeast Asia. Fluent in Mandarin and Cantonese, she previously spent five years reporting from Hong Kong for the South China Morning Post. She holds a Master's in Asian Studies from SOAS.
