X Takes Steps to Enhance User Safety Amid Growing Concerns Over Online Extremism

Ryan Patel, Tech Industry Reporter
5 Min Read

Elon Musk’s social media platform, X, has pledged to bolster safety measures for users in the UK by blocking access to accounts associated with designated terrorist organisations. This initiative comes as part of an agreement with Ofcom, the UK’s communications regulator, aimed at addressing the critical issue of online hate and terrorist content. The commitment highlights a proactive approach to safeguarding users, particularly following a rise in hate crimes targeting the Jewish community in the UK.

Commitment to User Protections

In a recent announcement, Ofcom disclosed that X has agreed to implement enhanced safeguards for its UK users. Oliver Griffiths, director of Ofcom’s online safety group, emphasised the importance of these developments, stating, “Following intensive engagement carried out by Ofcom’s online safety team, X have committed to implementing stronger protections for UK users, which we will now monitor closely.” This commitment is part of a broader effort to ensure that social media platforms are equipped to effectively manage harmful content.

X will now be required to restrict access to accounts that disseminate illegal terrorist material linked to organisations banned by the UK government. Furthermore, the platform has pledged to assess at least 85% of flagged illegal terrorist and hate content within a 48-hour window. This rapid response mechanism is a vital component of the UK’s Online Safety Act, which seeks to shield users from various forms of harmful digital content.

Addressing Online Extremism

The necessity for stringent action against online extremism has gained urgency, particularly in light of recent incidents of hate crimes. The agreement with Ofcom signifies a step towards a more robust framework for monitoring and managing content that promotes hate and violence. Danny Stone, chief executive of the Antisemitism Policy Trust, noted that while this agreement is a “good start,” there remain significant challenges for X in effectively combating racism on its platform.

A Contentious Moderation Record

The issue of content moderation on X has been contentious ever since Musk acquired the platform for $44 billion in 2022, when it was still known as Twitter. The platform has faced scrutiny for its handling of hate speech, especially following violent events, such as the riots prompted by the Southport murders in 2024. Amnesty International previously condemned X for allegedly fostering a “staggering amplification of hate” during this period.

The Role of Regulation and Dialogue

The agreement between Ofcom and X illustrates the potential for productive dialogue between regulatory bodies and social media platforms. Adam Hadley, executive director of Tech Against Terrorism, remarked that this development serves as a “powerful example of what constructive dialogue between regulators and platforms can deliver.” Such interactions are crucial in developing regulations that not only protect users but also encourage responsible behaviour from social media companies.

Despite these promising steps, the effectiveness of X’s new measures will depend on rigorous monitoring and the platform’s willingness to adapt to changing threats in the digital landscape. Ofcom is also continuing its investigation into the use of manipulated images on X, which raises further questions about the platform’s commitment to user safety.

Why it Matters

The commitment from X to enhance user protections reflects a growing recognition of the responsibilities that social media platforms bear in today’s society. As threats from online extremism and hate speech continue to evolve, the effectiveness of these measures will be crucial in shaping a safer online environment. The collaboration between Ofcom and X could serve as a model for other platforms, signalling a potential shift towards greater accountability and proactive measures against harmful content. In an age where digital interactions can have real-world consequences, fostering a safer online space is not just necessary but imperative.

Ryan Patel reports on the technology industry with a focus on startups, venture capital, and tech business models. A former tech entrepreneur himself, he brings unique insights into the challenges facing digital companies. His coverage of tech layoffs, company culture, and industry trends has made him a trusted voice in the UK tech community.
© 2026 The Update Desk. All rights reserved.