Meta Platforms Inc. is under increasing scrutiny from its own Oversight Board over its handling of misleading AI-generated videos on its platforms. In a recent call to action, the board highlighted the urgent need for the company to strengthen its content moderation policies, particularly in the context of global military conflicts, where misinformation can have dire consequences.
Oversight Board’s Strong Warning
The 21-member Oversight Board, established to provide independent guidance on content moderation, expressed its concerns after a specific incident involving a fake video depicting damage in Haifa, Israel, purportedly caused by Iranian forces. The video, created using AI tools, was displayed on Meta’s platforms without any label indicating it had been artificially generated. The board warned that the rise of AI-generated misinformation poses a significant threat to public trust in information, stating that it “risks a general distrust of all information.”
In response, Meta has pledged to label the contentious video within a week. However, this incident has raised broader questions about the efficacy of the company’s existing policies regarding AI-generated content.
A Flawed Approach to Content Moderation
Meta’s current strategy relies heavily on users to self-report AI-generated content, with the company only intervening when complaints are lodged. This reactive approach has led to criticisms about its effectiveness, especially during crises when misinformation spreads rapidly. The Oversight Board emphasised that Meta’s existing methods are “neither robust nor comprehensive enough” to handle the sheer volume of AI-generated content, particularly during times of heightened public engagement.

The board’s concerns were triggered by a video shared last June by a Philippines-based Facebook account claiming to be a news source. This particular AI-generated video, which was part of a wave of misleading content surfacing in the wake of the ongoing conflict, amassed over 100 million views, as noted in a BBC analysis. Despite multiple user complaints regarding its misleading nature, Meta initially refrained from labelling or removing the video. It was only when a user escalated the issue to the Oversight Board that Meta took action.
The Need for Proactive Measures
The Oversight Board has called for Meta to take a more proactive stance in labelling AI-generated content. It contended that the company should implement more frequent and rigorous labelling of such videos, especially those related to armed conflicts. The board highlighted that the threshold for determining whether content requires a label is currently set too high, asserting that the video in question should have been marked with a “high-risk AI label” due to the sensitive context it portrayed.
Meta responded to the board’s recommendations by stating it would adhere to these guidelines for any similar content in the future. However, the question remains whether this will be sufficient to curtail the spread of misinformation on its platforms.
A Growing Challenge for Social Media Platforms
This incident is emblematic of a larger challenge facing social media companies as they grapple with the implications of advanced AI technologies. As AI generation tools grow more capable, the threat of misinformation grows with them. With users increasingly relying on social media for news, the stakes are higher than ever.

Meta’s situation serves as a wake-up call for the tech industry, underscoring the need for enhanced oversight and robust measures to combat the proliferation of fake content. As the lines between reality and fabrication blur, companies must adapt their strategies to protect users from misinformation while maintaining trust in their platforms.
Why It Matters
The implications of Meta’s struggle with AI-generated misinformation extend far beyond the company itself; they resonate throughout the entire social media landscape. As misinformation can shape public opinion and influence major geopolitical events, the need for effective moderation becomes imperative. If Meta and similar platforms fail to adequately address these challenges, they risk undermining the integrity of information shared online, potentially leading to a broader societal distrust that could have far-reaching consequences in an increasingly digital world.