Explicit, AI-generated images of two public figures are at issue in these cases.
Meta's Oversight Board is once again taking on the company's standards for AI-generated content. The board has accepted two cases involving AI-generated images of public figures.
Although Meta's rules already prohibit nudity on Facebook and Instagram, the board said it wants to determine whether "Meta's policies and its enforcement practices are effective at addressing explicit AI-generated imagery." AI-generated images of female celebrities, politicians, and other public figures, often referred to as "deepfake porn," have become an increasingly common form of online harassment and have prompted a wave of proposed regulations. The two cases could allow the Oversight Board to pressure Meta into adopting new rules to address this kind of harassment on its platforms.
The Oversight Board said it will not name the two public figures at the center of the cases, in an effort to avoid further harassment, but it did describe the circumstances of each post.
The first case involves an Instagram post showing an AI-generated image of a nude Indian woman, published by an account that "only shares AI-generated images of Indian women." The post was reported to Meta, but the report was automatically closed after 48 hours because it was never reviewed. The same user appealed that decision, but the appeal was also closed without review. Meta ultimately removed the post after the user appealed to the Oversight Board, which agreed to hear the case.
The second case involves a post in a Facebook group dedicated to AI art. The post showed "an AI-generated image of a nude woman with a man groping her breast." The woman was made to resemble "an American public figure," whose name appeared in the post's caption. Meta removed the post immediately because an identical image had been reported previously, and its internal systems matched the two. The user appealed the removal, but the appeal was "automatically closed." The user then appealed to the Oversight Board, which agreed to consider the case.
Oversight Board co-chair Helle Thorning-Schmidt said in a statement that the board chose two cases from different countries to assess potential disparities in how Meta's policies are enforced. "We are aware that Meta is more efficient and quicker at moderating content in certain markets and languages than in others," Thorning-Schmidt said. By examining one case from the United States and one from India, the board hopes to determine whether Meta is protecting women equitably around the world.
The Oversight Board will solicit public comment over the next two weeks. It is expected to publish its decision, along with policy recommendations for Meta, in the weeks that follow. A similar process involving a misleadingly edited video of Joe Biden recently resulted in Meta agreeing to label more AI-generated content on its platform.