The Meta Oversight Board has criticised the company's manipulated media policy as "incoherent" in a decision released on Monday. The board emphasised that the policy focuses too heavily on whether a video was altered using artificial intelligence, rather than on the harm the video may cause.
Although the Oversight Board upheld the company's decision to leave a modified video of President Joe Biden on the platform, since the video did not technically violate any rules, it nonetheless issued a policy recommendation.
Established in 2020 by Meta to independently review key content moderation decisions, the board called on the company to promptly revise its policies ahead of the 2024 U.S. general election.
What was the video about?
In October, the board took up the Biden video case following a user complaint about a modified seven-second video of the president posted on Meta's flagship social media platform, Facebook.
The manipulated video of the US President was edited to make it look like he was inappropriately touching his adult granddaughter’s chest. The video included a caption that referred to Biden as a “pedophile”.
The video features actual footage of Biden from October 2022 placing an "I Voted" sticker on his granddaughter's chest at her request. However, the modified clip, which circulated as early as January 2023, loops the instant his hand touches her chest to create the impression that he touched her improperly.
The board agreed with Meta that the video did not violate its manipulated media policy, because the rules prohibit only creating the illusion of false statements, not of false actions.
At present, these rules apply only to videos generated using AI and do not cover misleading loops or simpler edits. The board determined that the obvious loop edit made it unlikely that the average user would perceive the video as unaltered.
What did the Oversight Board say?
Although the board concluded that Meta appropriately enforced its rules in this instance, it recommended substantial modifications to the rules, emphasising the time-sensitive nature of the approaching 2024 elections.
It said, “Meta’s Manipulated Media policy is lacking in persuasive justification, is incoherent and confusing to users, and fails to clearly specify the harms it is seeking to prevent.” The board opined that the policy should be reconsidered.
The board recommended that the policy should encompass situations where video or audio is manipulated to depict false actions, even if not using the individual's actual words. Expressing skepticism about decisions solely based on editing methods, whether AI-driven or through basic techniques, the group emphasised that non-AI-altered content can also be misleading.
While this doesn't imply Meta should remove all modified posts, the board suggested adopting less stringent measures, such as adding labels to inform users of significant edits in videos.
Responding to the board's recommendation, a Meta spokesperson said, “We are reviewing the Oversight Board’s guidance and will respond publicly to their recommendations within 60 days in accordance with the bylaws.” However, the company is not required to follow the board's recommendations and has occasionally declined to do so.
In August 2023, Meta's Oversight Board provided four suggestions concerning a sponsored Instagram post from 2022 that promoted ketamine, a dissociative anesthetic, as a remedy for anxiety and depression. However, in October 2023, Meta opted not to fully adhere to the Board's recommendations, pledging to revisit the matter at a later time.
Can social media platforms tackle election misinformation?
In 2024, national elections are taking place in over 60 countries globally, with an anticipated turnout of approximately 2 billion voters, constituting roughly a quarter of the world's population. Simultaneously, one month into the year, social media has experienced a surge in deepfake and other manipulated content, specifically designed to disseminate misinformation related to the elections.
In January, social platform X, formerly known as Twitter, was flooded with explicit images of Taylor Swift, which the platform tackled by blocking the search keywords of the singer's name. Similarly, fake robocalls featuring Biden's voice were distributed to voters before the New Hampshire primary, advising them against casting their votes.
An investigation by Decode also found how AI voice cloning technology was used to spread disinformation in the run-up to the Madhya Pradesh assembly elections last year. The findings are concerning as India counts down to a general election this April-May, offering a teaser of how political parties can abuse generative AI voice technology to mislead voters.
Meta's existing policy exclusively prohibits fabricated videos where individuals utter statements they never made. However, it does not extend to depictions of individuals engaging in actions they did not perform, as exemplified by the Biden post. Notably, this policy explicitly pertains solely to videos generated using AI tools. Content altered through non-AI methods, such as video clip looping or reversing, might be potentially misleading for users but is not expressly forbidden.
As AI technology advances and the upcoming election approaches, Meta, alongside counterparts like X and TikTok, is expected to grapple with an increasing influx of manipulated media. Meta initially implemented its policy before the 2020 election as part of a broader initiative to combat election-related misinformation, prompted by the vulnerabilities exposed during the 2016 US campaign.
The Oversight Board was created to address precisely such situations. Comprising academics and individuals with public policy backgrounds, the independent body is financed by Meta but serves as a check on the company's content moderation.
The board has previously addressed issues on Facebook and Instagram related to election misinformation and drug-related posts. Notably, it supported Meta's suspension of Donald Trump after the Capitol riot on January 6, 2021, while concurrently critiquing the company's handling of the suspension.