In a significant move to address concerns over illegal content, the social media platform X has committed to reviewing reports of suspected hate and terrorist material within an average of 24 hours. This initiative follows recent scrutiny from Ofcom, the UK’s communications regulator, and aims to bolster user safety amidst rising incidents of hate crimes, particularly against Jewish communities.
Swift Action on Reporting
X, owned by Elon Musk, has pledged to expedite its response to flagged content through its illegal content reporting tool. Ofcom’s online safety director, Oliver Griffiths, hailed this commitment as a vital “step forward,” especially in light of a series of religiously motivated attacks in the UK. Griffiths emphasised the urgency of tackling persistent terrorist content and illegal hate speech that continues to proliferate on major social media platforms.
In a bid to ensure accountability, X will provide performance data to Ofcom every three months for a year. This oversight will help the regulator determine if the platform meets its targets, which include assessing at least 85% of flagged reports within 48 hours.
Additional Safeguards for Users
To further enhance user protection, X has agreed to engage with experts to refine its reporting systems concerning illegal content. This decision comes after various organisations expressed concerns about the lack of transparency regarding whether their reports had been acknowledged or acted upon.
Moreover, X has committed to restricting access to accounts that post UK-proscribed terrorist content where those accounts are determined to be linked to terrorist organisations. This commitment underscores the platform’s recognition that it must take greater responsibility for the content shared on its site.
Responses from Advocacy Groups
Danny Stone, CEO of the Antisemitism Policy Trust, noted that while the commitments made by X represent a “good start,” the platform still has significant shortcomings in addressing overt racism. Stone remarked, “We know where this online harm leads, and so for the sake and safety of all of us in Britain, I hope Ofcom will hold X to account for what it has promised the regulator it will do.”
Iman Atta, director of Tell Mama, a project dedicated to documenting anti-Muslim incidents, welcomed the new targets as indicative of a more accountable approach. Atta stressed the importance of tangible outcomes, stating, “The test is not what is promised, but what is delivered.”
Ongoing Investigations and Community Impact
This announcement comes amidst an ongoing Ofcom investigation into X’s AI tool Grok, which has faced scrutiny for allegedly being used to create sexualised images. The regulator is keen on ensuring platforms are not only reactive but proactive in their approach to safeguarding users from harmful content.
Recent months have witnessed a troubling rise in attacks targeting Jewish communities, including incidents in Manchester and Golders Green. This context adds urgency to X’s commitments, highlighting the platform’s role in mitigating online harm that can translate to real-world violence.
Why It Matters
X’s commitment to expedite the review of illegal content is a pivotal development in the ongoing effort to curb hate speech and terrorist material online. With social media now a primary medium of communication, the responsibility of platforms to keep users safe is under sharper scrutiny than ever. The action taken by X, alongside regulatory oversight from Ofcom, marks a potential turning point in how online platforms address illegal content. The true measure of success, however, will be consistent implementation and the tangible impact of these commitments on community safety.