European Commission Takes Action Against X Over Grok AI’s Inappropriate Image Generation

Alex Turner, Technology Editor
5 Min Read

In a bold move to protect users from digital exploitation, the European Commission has launched an investigation into X, the social media platform formerly known as Twitter. The inquiry centres on the platform’s controversial AI chatbot, Grok, which generated an astounding 3 million sexualised images in just 11 days. This alarming figure comes on the heels of reports that Grok has been used to digitally manipulate images of women and children, prompting widespread outrage and concerns about child safety online.

Investigation Launched Amid Outrage

The formal investigation, announced on Monday, responds to Grok features that have allowed users to create explicit content depicting real individuals. Researchers from the Centre for Countering Digital Hate estimate that approximately 23,000 of the generated images appear to feature children in inappropriate scenarios. The European Commission aims to evaluate whether X has adequately assessed and mitigated the risks associated with Grok’s capabilities under the EU’s Digital Services Act (DSA), legislation designed to safeguard internet users from a range of online harms.

A spokesperson for the commission expressed dissatisfaction with X’s attempts to address these issues, stating that the company’s mitigation measures have not sufficiently alleviated concerns about the platform. Although X initially restricted Grok’s image features to paying subscribers following the public outcry, it has faced further pressure from EU regulators. Earlier this month, the platform announced new restrictions on the editing of images of real people, but concerns remain about the systemic risks posed by Grok’s technology.

EU’s Stance on Digital Accountability

Henna Virkkunen, the European Commission’s lead official for tech accountability, condemned the creation of non-consensual sexual deepfakes as a “violent, unacceptable form of degradation.” She stated that the investigation will determine if X is fulfilling its legal responsibilities under the DSA, or if it has prioritised its business interests over the rights of European citizens, particularly vulnerable populations like women and children.

In tandem with this investigation into explicit content, the European Commission is broadening an inquiry initiated in December 2023 into X’s recommender systems, especially in light of the company’s recent shift to a Grok-based model for content filtering. This aligns with ongoing scrutiny from the UK’s media watchdog, Ofcom, which has also launched its own investigation into troubling content on the platform.

Criticism of Delays in Enforcement

The investigation has sparked criticism regarding the perceived slow response of the EU in enforcing the DSA, especially concerning X’s practices. Alexandra Geese, a German Green MEP, remarked that while the inquiry may be late, it sends a crucial message that platforms must adhere to European laws. She emphasised the need for a quicker response in future cases to prevent irreparable harm to women and children.

Regina Doherty, a centre-right Irish vice-president of the European Parliament, welcomed the inquiry, asserting that when credible reports arise regarding AI misuse, swift examination and enforcement of EU law is paramount.

In a recent statement, X maintained its commitment to user safety and reiterated its zero-tolerance policy for child sexual exploitation and non-consensual nudity. This comes after the platform was hit with a hefty €120 million fine last month for violating EU regulations, including misleading users and obstructing research into fraudulent activities on its site.

Why it Matters

The European Commission’s inquiry into X and its Grok AI feature underscores the urgent need for accountability in the digital landscape, particularly in protecting the most vulnerable users. As technology continues to evolve at a frenetic pace, it is imperative that regulatory bodies remain vigilant in enforcing laws that safeguard citizens from exploitation and harm. This investigation not only highlights the potential dangers of AI misuse but also serves as a rallying cry for stronger regulations in the ever-changing world of social media. The outcome could significantly influence how tech companies operate and ensure they prioritise user safety in the digital age.

Alex Turner has covered the technology industry for over a decade, specializing in artificial intelligence, cybersecurity, and Big Tech regulation. A former software engineer turned journalist, he brings technical depth to his reporting and has broken major stories on data privacy and platform accountability. His work has been cited by parliamentary committees and featured in documentaries on digital rights.
© 2026 The Update Desk. All rights reserved.