Global Outcry Intensifies Against Elon Musk’s X Amid French Raid

Michael Okonkwo, Middle East Correspondent
4 Min Read

In a stark escalation of international scrutiny, Australia’s eSafety Commissioner, Julie Inman Grant, has declared that the regulatory focus on Elon Musk’s X has hit a crucial “tipping point.” This assertion follows a significant police operation in France that targeted the tech giant over serious allegations, including complicity in the distribution of child abuse imagery and the generation of sexualised deepfakes featuring women and children.

French Authorities Crack Down

On Tuesday, French cybercrime units stormed X’s Paris headquarters as part of a broader investigation that has drawn attention from multiple countries, including the UK and Australia. The raid is emblematic of mounting global concern over the misuse of artificial intelligence in the creation of harmful content. Reports indicate that X’s AI chatbot, Grok, has been implicated in the mass production of sexualised images based on user prompts, provoking outrage and demands for accountability.

Inman Grant expressed relief at the newfound collective action, stating, “It’s nice to no longer be a soloist, and be part of a choir.” She highlighted the importance of collaboration among global regulators and researchers, underscoring that this represents a pivotal moment in combating the careless development of technology that can facilitate child sexual abuse and non-consensual imagery.

Actions Taken by X

In response to the growing backlash, X has restricted Grok’s image-generation capabilities, limiting it to paying users only. The company has also pledged to implement measures aimed at preventing the generation of explicit content involving real individuals. Yet, experts remain sceptical about the effectiveness and sincerity of these changes, questioning whether they are merely reactive measures rather than genuine attempts at reform.

Inman Grant pointed to the ongoing challenges faced by tech platforms in addressing child exploitation and abuse. Although she acknowledged some progress—such as improved detection of known child abuse material and enhanced reporting mechanisms—the overall performance of these platforms remains inadequate. She remarked, “It’s surprising to me that they’re not attending to the services where the most egregious and devastating harms are happening to kids. It’s like they’re not totally weatherproofing the entire house.”

Regulatory Oversight and Platform Accountability

Earlier this year, eSafety issued notices to several major tech companies, including Apple, Discord, Google, Meta, Microsoft, Skype, and WhatsApp, mandating twice-yearly updates on their child protection efforts. While some companies have made strides—like Microsoft reducing response times for reporting abuse—Inman Grant noted significant gaps, particularly in real-time detection systems.

Apple, which has historically prioritised user privacy, has begun making headway in its safety features, allowing children to report inappropriate images directly to the company. Nevertheless, concerns linger regarding the efficacy of detection on platforms like FaceTime and Messenger, particularly concerning live abuse.

In the wake of these developments, companies will be required to submit further reports to eSafety in March and August 2026, providing transparency about their ongoing efforts. This increased oversight has the potential to illuminate the often opaque operations of these tech giants, paving the way for more stringent regulatory actions.

Why it Matters

The recent events surrounding X serve as a critical reminder of the urgent need for robust regulatory frameworks governing technological advancements, particularly those involving AI. As nations unite in their condemnation of the misuse of technology, the implications extend far beyond corporate accountability; they touch the very core of child safety in an increasingly digital world. The outcome of these investigations could determine the future of how tech companies manage and mitigate the risks associated with their platforms, ultimately shaping the landscape of online safety for generations to come.

Michael Okonkwo is an experienced Middle East correspondent who has reported from across the region for 14 years, covering conflicts, peace processes, and political upheavals. Born in Lagos and educated at Columbia Journalism School, he has reported from Syria, Iraq, Egypt, and the Gulf states. His work has earned multiple foreign correspondent awards.

© 2026 The Update Desk. All rights reserved.