Meta Platforms Inc. has come under scrutiny from its own Oversight Board, which has called for the social media titan to take stronger action against the surge of fake videos produced using artificial intelligence (AI). This urgent recommendation follows a controversial incident involving an AI-generated clip that falsely depicted extensive damage in Haifa, Israel, purportedly caused by Iranian forces. The board’s criticism reflects growing concerns about the public’s ability to discern truth from fiction, particularly during times of international conflict.
Oversight Board’s Call for Action
The Oversight Board, an independent body established by Meta in 2020 to review content moderation decisions, has raised alarms over the company's lax approach to handling AI-generated misinformation. In a recent statement, the 21-member board called for a comprehensive overhaul of Meta's AI guidelines, warning that the proliferation of misleading AI content is undermining trust in all information online.
In the case of the Haifa video, which remained unlabelled despite numerous user reports, the board said that Meta's current methods for identifying and labelling AI-generated content are insufficient. Meta typically relies on users to flag such content, an approach the board deemed reactive rather than proactive. Given how rapidly misleading material spreads during global crises, the board insisted that Meta label such content on its own initiative, more frequently and more consistently.
The Implications of Inaction
This incident is not an isolated case; it points to a troubling trend in which AI-generated content can easily mislead millions. The video in question, which amassed nearly one million views, was posted by a Facebook account claiming to provide news from the Philippines. Although users flagged the video, Meta initially defended its decision not to label it, arguing that it posed no immediate threat. The Oversight Board countered that the threshold for labelling AI-generated content should be significantly lower during armed conflicts.

The board's recommendations arose from a review prompted by the viral spread of similar AI videos, many of which garnered millions of views and carried a clear political slant, whether pro-Israel or pro-Iran. In light of these developments, the board stressed the importance of a robust framework for managing the unique challenges posed by AI-generated content.
Meta’s Response and Future Steps
In response to the board's findings, Meta has acknowledged the need to adapt its content moderation strategies and stated that it would implement the board's recommendations when encountering similar content in the future. Critics, however, remain sceptical about the effectiveness of these measures, given Meta's history of loosening its content moderation policies despite consistent pushback from the board.
This ongoing tension raises questions about the actual power of the Oversight Board and whether it can effect meaningful change within the company. As misinformation continues to proliferate on social media, the effectiveness of Meta’s strategies will be under constant scrutiny.
Why It Matters
The escalating prevalence of AI-generated misinformation poses significant risks to public discourse and democratic processes. As platforms like Meta grapple with the complexities of content moderation, the consequences of getting it wrong extend far beyond any single video. The Oversight Board's insistence on stronger oversight reflects a broader societal concern: the need for transparency and accountability in an era when the line between reality and fabrication is increasingly blurred. Inaction risks a dangerous erosion of trust in media and institutions, ultimately undermining informed citizenship.
