OpenAI, Microsoft, Google, Meta, and other companies have pledged to prevent their AI tools from being used to produce or spread content involving child sexual abuse.
Prominent artificial intelligence firms, including Microsoft, Google, OpenAI, and Meta, have collectively committed to preventing the exploitation of children and the creation of child sexual abuse material (CSAM) through the use of their AI tools. The effort was led by the child safety group Thorn and All Tech Is Human, a non-profit organisation focused on responsible technology.
According to Thorn, the promises made by the AI companies “represent a significant leap in efforts to defend children from sexual abuse as a feature with generative AI unfolds” and “set a groundbreaking precedent for the industry.” The effort aims to stop the creation of sexually explicit material involving children and to remove it from search engines and social media platforms. According to Thorn, more than 104 million files of suspected child sexual abuse material were reported in the US alone in 2023. Without coordinated action, generative AI stands to make this problem worse and to further burden law enforcement agencies that are already struggling to identify real victims.
On Tuesday, Thorn and All Tech Is Human released a new paper titled “Safety by Design for Generative AI: Preventing Child Sexual Abuse.” It lays out strategies and recommendations for companies that build AI tools, search engines, social media platforms, hosting providers, and developers to act against the use of generative AI to harm children.
One recommendation, for example, urges companies to carefully curate the data sets used to train AI models and to exclude not only examples of CSAM but adult sexual content altogether, because generative AI tends to combine the two concepts. Thorn also asks search engines and social media companies to remove links to apps and websites that let users “nudify” photos of children, which produce fresh AI-generated child sexual abuse material online. The report warns that a flood of AI-generated CSAM would worsen the “haystack problem” by adding to the volume of material law enforcement agencies already have to sift through, making it even harder to identify real victims of child sexual abuse.
Rebecca Portnoff, vice president of data science at Thorn, told The Wall Street Journal that the project “was intended to make abundantly clear that you don’t need to throw up your hands,” adding, “We want to be able to turn this technology around so that its current negative effects stop at the knees.”
According to Portnoff, some companies have already committed to keeping data sets that contain adult sexual content separate from photographs, videos, and audio featuring minors, in order to stop their models from combining the two. Others add watermarks to help identify AI-generated content, but that approach isn’t foolproof, since watermarks and metadata can easily be removed.