In a significant move aimed at enhancing digital safety, social media platform X, owned by Elon Musk, has pledged to expedite its review process for reports of illegal hate speech and terrorist content in the United Kingdom. Under new commitments accepted by Ofcom, the UK’s communications regulator, X aims to address flagged content within an average of 24 hours. The initiative comes at a critical moment, following a series of religiously motivated attacks against Jewish communities.
A Step Forward in Online Safety
Ofcom’s online safety director, Oliver Griffiths, heralded this announcement as a “step forward,” emphasising its relevance in light of recent incidents that have shaken the nation. The commitment will enhance X’s illegal content reporting tool, allowing users to flag concerning material with the expectation of a prompt response.
Griffiths noted that evidence shows both terrorist content and illegal hate speech persist on major social media platforms. As part of a broader compliance programme launched in December, Ofcom is scrutinising whether these platforms have the systems needed to manage such reports effectively. X is set to submit performance data to Ofcom quarterly for a year, ensuring accountability and transparency in its operations.
New Targets for Content Review
While X targets an average review time of under 24 hours, it has also pledged to evaluate at least 85% of flagged reports within a 48-hour window. These measures are designed to ensure that users feel safe and heard when reporting harmful content.

In addition to the expedited review process, X has agreed to two further commitments aimed at refining its approach to illegal content. Firstly, the platform will engage with experts to enhance its reporting systems, addressing concerns from organisations that have previously flagged multiple instances of illegal content without receiving confirmation of actions taken. Secondly, X will restrict UK access to accounts identified as operated by or associated with proscribed terrorist organisations if they are found to be sharing illegal content.
Community Reactions and Concerns
Reactions to X’s new commitments have been mixed. Danny Stone, chief executive of the Antisemitism Policy Trust, described the initiative as a “good start” but warned that the platform continues to struggle with rampant racism. He urged Ofcom to hold X accountable for its promises, stressing the need for effective action in combating online hate.
The urgency of these commitments has been underscored by a series of recent attacks on Jewish institutions, including the Heaton Park Synagogue incident in Manchester in October 2025 and multiple arson attempts targeting Jewish sites across London. These events have reignited discussions about the responsibility of social media platforms in preventing the spread of hate.
Iman Atta, director of Tell Mama, an organisation tracking anti-Muslim incidents in the UK, welcomed X’s updated targets, noting that they represent a shift towards a more accountable approach. Atta highlighted the importance of the commitment to act against accounts linked to terrorist organisations, stating, “This sends an important message that no platform or body operating in this country is above scrutiny.”
Ongoing Regulatory Oversight
As Ofcom continues its investigation into X’s AI tool, Grok, concerns about the platform’s handling of sensitive content remain high. Recent reports regarding Grok’s use in creating inappropriate images have raised alarms, leading to increased scrutiny of X’s operational practices.

Ofcom is poised to monitor X closely over the coming year. The platform’s ability to meet its newly established targets will be critical in demonstrating its commitment to online safety and user protection.
Why it Matters
This initiative by X is not just about compliance; it’s about fostering a safer online environment for all users. With rising incidents of hate crimes and targeted attacks, the repercussions of inaction are dire. By pledging to act swiftly against illegal content, X is taking a vital step towards restoring trust among its users and contributing to a broader societal effort to combat hate. As the digital landscape continues to evolve, the effectiveness of these measures will be closely watched, setting a precedent for how social media platforms engage with and manage harmful content in the future.