Verify, a new tool, examines images for embedded authenticity credentials in an effort to combat AI-driven misinformation.
It is becoming increasingly difficult to tell whether the images we see online are real or artificially generated. Many initiatives tackle the problem by watermarking the outputs of artificial intelligence; Verify takes the opposite route. The application confirms the authenticity of genuine photographs and works with leading camera manufacturers to embed credentials in photos at the moment they are taken.
Verify was reportedly developed by an “alliance of global news organizations, technology companies, and camera makers” in response to a rise in “sophisticated fakes,” according to Nikkei Asia. The free, web-based application lets users view any image’s digital signature, which may include the date, time, location, and photographer. If an image lacks a digital signature, or was generated by artificial intelligence, Verify labels it “no content credentials.” The hope is that Verify will become an essential part of the fight against AI-based deception, particularly in the media, which relies heavily on photographers to convey important information.
Verify dovetails with efforts by Nikon, Sony, and Canon to build authenticity credentials into their higher-end cameras. According to Nikkei Asia, Nikon will soon offer professional-grade mirrorless cameras with built-in authentication technology that embeds a digital signature in every photograph. Sony will kick off a similar project early this year with a software update for its mirrorless cameras, while Canon is expected to launch a camera with built-in authentication signatures sometime in 2024, with video authentication signatures to follow later in the year.
Watermarks are not a novel strategy for combating AI-driven misinformation. Last year, Google DeepMind began testing SynthID, a system that works with the company’s Imagen model to mark AI outputs as machine-generated. The watermarks survive most attempts to remove or distort them, including cropping, filters, and lossy compression. SynthID can also scan images for these watermarks and identify them as AI-generated.
Verify turns Google’s SynthID concept on its head: instead of marking AI output, it provides evidence that an image was not created by artificial intelligence. Digital signatures are primarily meant to prevent genuine images from being mistaken for AI outputs (and vice versa), but they could also curb more traditional plagiarism. Photographs are frequently credited to the wrong person on social networks, portfolio pages, and other online venues, whether deliberately or by honest mistake. Not every viewer will inspect a photo’s digital signature, but built-in credit may deter dishonest individuals from passing off someone else’s photographs as their own.