In a significant move for online safety, social media platform X, owned by Elon Musk, has vowed to enhance its response to reports of illegal hate and terrorist content in the UK. The company has committed to reviewing flagged content within an average of 24 hours, a promise made under the scrutiny of Ofcom, the UK’s communications regulator. This initiative comes in the wake of rising concerns regarding hate crimes, particularly those affecting Jewish communities across the nation.
Enhanced Reporting Measures
X’s new commitment to tackle illegal content will primarily focus on reports submitted through its illegal content reporting tool. Ofcom’s online safety director, Oliver Griffiths, welcomed the initiative as a “step forward,” particularly in light of recent incidents targeting religious groups. He highlighted the pressing need for social media companies to address the hate speech and terrorist content that has long plagued major platforms.
As part of this commitment, X will also submit quarterly performance reports to Ofcom for a year, allowing the regulator to closely monitor the platform’s adherence to these new targets. Alongside the 24-hour average, the company aims to review at least 85% of reports within a 48-hour window, providing a more robust framework for addressing these serious issues.
Ongoing Investigations and Commitments
In addition to the new reporting commitments, Ofcom is currently conducting a separate investigation into X’s AI tool, Grok, amid concerns that it may have been used to generate inappropriate content. This scrutiny highlights the growing concern over the role of technology in perpetuating harmful online behaviour.

Ofcom has also outlined additional commitments from X designed to bolster user protection. One of these involves engaging with experts to refine the reporting systems for hate and terrorist content. This follows frustration expressed by organisations over the lack of clarity regarding the status of their reports.
Furthermore, X has pledged to restrict access to accounts linked to terrorist organisations that are outlawed in the UK, demonstrating a more proactive stance in combating online extremism.
Community Reactions
Community leaders and organisations have expressed cautious optimism about these developments. Danny Stone, CEO of the Antisemitism Policy Trust, described the actions as a “good start,” but emphasised that more work is required to address the racism still rampant on the platform. He urged Ofcom to hold X accountable so that the promises made translate into tangible results.
Iman Atta, director of Tell Mama, a project dedicated to documenting anti-Muslim incidents, echoed these sentiments, welcoming the new targets as indicative of a more accountable approach. She stressed the importance of ensuring that no platform operates above scrutiny, and highlighted that the real test lies not in promises made but in their fulfilment.
Addressing a Growing Crisis
The UK has witnessed a troubling surge in attacks targeting Jewish communities, including the alarming incidents at the Heaton Park Synagogue in Manchester and arson attempts on Jewish sites in London. These events underline the urgent need for social media platforms to take responsibility for the content shared on their networks.

X’s commitment to swift action against hate speech and terrorist content is a crucial step in addressing the escalating crisis. By implementing these measures, the platform not only aligns itself with regulatory expectations but also plays a vital role in protecting the safety of individuals and communities across the UK.
Why it Matters
The implications of X’s commitments extend beyond mere regulatory compliance. In a digital age where misinformation and hate speech can escalate into real-world violence, the responsibility of social media platforms to act decisively is paramount. By enhancing its reporting and response systems, X has the opportunity to set a precedent for online safety and foster a more secure environment for all users. As communities continue to face threats, the effectiveness of these measures will be closely watched, and their success will ultimately shape the future of online discourse in the UK.