Meta is facing calls to take stronger action against AI-generated fake content after the company failed to remove a fabricated video related to an ongoing armed conflict.
The Oversight Board, Meta's independent advisory body, said the company's current process for detecting AI-generated content is inadequate and called for an overhaul of its AI policy.
The 21-member panel said the spread of fake AI content relating to military conflicts has "challenged the public's ability to distinguish fabrication from fact". Meta Oversight Board Chairperson Emily Hwang said the company currently relies on users to identify and report AI-generated content before moderators take any action.
The board called this approach "neither robust nor comprehensive enough", particularly in crisis situations where content spreads fast. The Haifa video, for instance, accumulated close to 1 million views but was never labeled, despite complaints from multiple users.
The board advised Meta to label AI-generated content proactively, especially on high-risk topics such as armed conflicts. "Meta must do more to address the proliferation of deceptive AI-generated content on its platforms… so that users can distinguish between what is real and fake," the board said.
The tech giant said it would follow the board's recommendations if it encountered such content in the future, but did not commit to any significant changes. Meta launched the Oversight Board in 2020 to oversee content moderation on Facebook, Instagram, and WhatsApp, though the board has frequently disagreed with the company's decisions.