Meta Pressured to Increase Oversight of AI-Generated Videos

Updated: March 11, 2026

Reading Time: 3 minutes

The advisory body overseeing content decisions at Meta Platforms has urged the company to strengthen its oversight of AI-generated videos.

The ruling came after Meta left a misleading AI-created video online without a label. The Meta Oversight Board said the company must improve how it identifies and labels deceptive content produced with AI tools.

According to the group, the rapid growth of synthetic media, especially during armed conflicts, has made it harder for users to distinguish real events from fabricated ones.

Meta said it would now label the video in question within seven days. However, the board stressed that the company must adopt stronger and more proactive measures.

Policy Changes

The Oversight Board, which consists of 21 members, reviews content moderation decisions on Facebook, Instagram, and Threads.

In its latest decision, the board criticized the company’s handling of an AI-generated video that falsely showed major destruction. 

The video depicted the Israeli city of Haifa in ruins after an alleged attack by forces linked to Iran.

The video circulated widely online despite depicting events that never occurred. Yet Meta neither labeled the content as AI-generated nor removed it.

The board argued that such decisions weaken the platform’s ability to manage misinformation during crises. As a result, it urged the company to overhaul its AI content policies. 

AI Misinformation

The board warned that AI-generated videos are becoming more common during military conflicts. As a result, misinformation spreads faster and becomes harder to identify.

According to the group, this trend creates significant risk. If users repeatedly encounter convincing but false content, they may begin to distrust all information online.

Therefore, the board said Meta must take stronger action. In particular, it recommended that the company label suspicious AI-generated material more frequently and more quickly.

Mark Zuckerberg, CEO of Meta
Image Credits: Reuters

Current System 

At present, Meta mainly depends on users to disclose whether their content was produced with AI tools. 

If creators do not provide this information, the company usually waits for complaints before reviewing the material.

Only after users report the content can Meta’s moderation team decide whether to apply a label. However, the Oversight Board said this process is too slow and too limited. 

According to its report, the system is “neither robust nor comprehensive enough” to address the speed and scale of AI-generated media.

This weakness becomes especially serious during major events or conflicts, when online engagement increases sharply.


A Facebook Post

The review began after a Facebook account based in the Philippines posted the controversial video in June last year. The account described itself as a news source.

The clip claimed to show severe damage in Haifa following an attack. In reality, the footage had been generated using AI.

Despite receiving multiple user complaints, Meta initially decided that the video did not violate its policies. The company argued that the post did not create an “imminent risk of physical harm.”

As a result, it left the video online without any label, allowing it to gain nearly one million views.

Oversight Board 

The issue reached the Oversight Board only after a Facebook user filed a direct appeal. Once the board began reviewing the case, Meta defended its earlier decision.

But the board disagreed.

It concluded that the video should have received a “high-risk AI label.” The board also stated that Meta had set the threshold for action too high.

According to the ruling, misleading content about armed conflict should receive stronger scrutiny even if it does not cause immediate harm.

Lolade

Contributor & AI Expert