In its report, the Oversight Board found that Meta’s automated moderation tools were at fault.
Meta’s Oversight Board has released the findings of its first-ever expedited review, which focused on content related to the Israel-Hamas war and took just 12 days rather than the weeks a standard review requires. The Board reversed the company’s initial decision to remove two pieces of content, one from each side of the conflict, and endorsed Meta’s subsequent move to restore the posts on Facebook and Instagram, so no further action from the company is anticipated. Nevertheless, the review highlighted how Meta’s reliance on automated tools can hinder people from sharing crucial information; in this case, the Board noted that “it increased the likelihood of removing valuable posts informing the world about human suffering on both sides of the conflict in the Middle East.”
For its first expedited review, the Oversight Board selected two appeals representative of what users in the affected region have been submitting since the October 7 attacks: a Facebook video showing a woman pleading with her captors not to kill her as she was taken hostage during the initial terrorist attacks on Israel, and an Instagram video showing the aftermath of a strike on the Al-Shifa Hospital in Gaza during Israel’s ground offensive, including dead and injured Palestinians, children among them.
The Board’s review confirmed that, after the October 7 attacks, Meta adjusted its automated tools to police content more aggressively. The takedown of the Al-Shifa Hospital video and the denial of a user’s appeal to have it reinstated, for example, were both made without any human intervention. Both videos were later restored with warning screens noting that such content is permitted for the purposes of news reporting and raising awareness.
The Board expressed concern that Meta’s rapidly shifting approach to moderation could create an appearance of arbitrariness and call its policies into question, saying that Meta “should have moved more quickly to adapt its policy given the fast-moving circumstances, and the high costs to freedom and access to information for removing this kind of content.”
The Board also found that Meta demoted the content it reinstated with warning screens, excluding it from recommendations to other Facebook and Instagram users, even after the company concluded that the posts were meant to raise awareness. Notably, several users reported being shadowbanned in October after posting about the situation in Gaza.
In addition, the Board noted that between October 20 and November 16, Meta permitted hostage-taking content from the October 7 attacks only from users on its cross-check lists, a group of high-profile users exempted from the company’s automated moderation system. The Board said this decision underscores its concerns about the program, particularly its “unequal treatment of users [and] lack of transparent criteria for inclusion,” adding that the company must “ensure greater representation of users whose content is likely to be important from a human-rights perspective on Meta’s cross-check lists.”
The Oversight Board overturned Meta’s initial decision to remove the content but approved of the company’s subsequent decision to restore it with a warning screen. Because Meta had already reinstated the content, no further action will be taken on it. “As explained in our Help Center, some categories of content are not eligible for recommendations and the board disagrees with Meta barring the content in this case from recommendation surfaces. There will be no further updates to this case, as the board did not make any recommendations as part of their decision,” the company said in a statement to Newtechmania.