This will include identifying photos produced by AI.
As the United States prepares for its 2024 presidential election, OpenAI is sharing its plans to combat election-related disinformation around the world, with the primary goal of increasing transparency about the origin of information. One highlight is the use of cryptography, as standardized by the Coalition for Content Provenance and Authenticity (C2PA), to encode the provenance of images generated by DALL-E 3. This will let the platform detect AI-generated images more reliably with a provenance classifier, helping voters judge the credibility of particular content.
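The core idea behind cryptographic provenance is that the generator attaches a signed statement about an image's origin, which anyone holding the right key material can later verify. A minimal sketch of that idea using only Python's standard library and a shared HMAC key (the real C2PA standard uses X.509 certificate chains and a far richer manifest format; the function names and manifest fields here are illustrative):

```python
import hashlib
import hmac
import json

def sign_manifest(image_bytes: bytes, generator: str, key: bytes) -> dict:
    # Hypothetical simplified manifest: records the generator's name
    # and a hash of the image content, then signs both.
    manifest = {
        "generator": generator,
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(image_bytes: bytes, manifest: dict, key: bytes) -> bool:
    # Recompute the signature over the claimed fields and check that
    # the image bytes still match the hash recorded at signing time.
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["sha256"] == hashlib.sha256(image_bytes).hexdigest())
```

Any edit to the image or the manifest breaks verification, which is what lets a downstream classifier or platform treat a valid manifest as evidence of origin.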
This approach is comparable to DeepMind’s SynthID, which digitally watermarks AI-generated images and audio and is part of Google’s own election content strategy, announced a month earlier. Meta’s AI image generator also adds an invisible watermark to its output, though the company has not yet said whether it is prepared to act against election-related misinformation.
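Invisible watermarking differs from signed metadata: the mark lives in the pixel data itself, so it survives metadata stripping. SynthID's actual method is proprietary and designed to survive edits; as a much simpler illustration of the general idea, here is a classic least-significant-bit watermark over raw pixel bytes (the names are illustrative and this is not SynthID's technique):

```python
def embed_watermark(pixels: bytearray, mark: bytes) -> bytearray:
    # Spread the watermark's bits, most significant first, across the
    # least significant bit of each leading pixel byte.
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return out

def extract_watermark(pixels: bytes, length: int) -> bytes:
    # Read the LSBs back and reassemble them into bytes.
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[i : i + 8]))
        for i in range(0, len(bits), 8)
    )
```

Flipping only the lowest bit changes each pixel value by at most one, which is why the mark is imperceptible; the trade-off is that naive LSB marks are destroyed by compression or resizing, a weakness production watermarks like SynthID are built to avoid.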
OpenAI has announced that it will soon work with researchers, journalists, and platforms to get feedback on its provenance classifier. Along the same lines, ChatGPT users will begin to see real-time news from around the world, complete with attribution and links. When they ask procedural questions, such as where or how to vote, they will be directed to CanIVote.org, the authoritative website on voting procedures in the United States.
Additionally, OpenAI reaffirms its existing policies against impersonation, whether through deepfakes or chatbots, and against content created to distort the voting process or discourage people from voting. The company also prohibits applications built for political campaigning, and its new GPTs allow users to report potential violations.
OpenAI says the lessons learned from these preliminary measures, assuming they succeed at all (and that is a very big “if”), will help it roll out similar techniques around the world. The company promises further announcements on this front in the coming months.