According to reports, the bot failed to correct inaccurate information about a real person, and the company claimed it was unable to intervene.
NOYB (None Of Your Business), an advocacy group, has filed a privacy complaint against OpenAI in Austria. According to Reuters, the complaint alleges that the company’s ChatGPT bot repeatedly gave false information about a real person (who is not named in the complaint for privacy reasons), which could violate EU privacy laws.
The chatbot reportedly gave an incorrect birthdate for the person rather than simply saying it did not know the answer. AI chatbots like to make things up with confidence and hope that humans will not notice, much like politicians do; this phenomenon is known as a hallucination. But it is one thing when these bots invent ingredients for a recipe; it is quite another when they invent facts about actual people.
The complaint further alleges that OpenAI declined to help remove the incorrect material, claiming that such a correction was technically impossible. The company did offer to filter or block the data on certain prompts. According to TechCrunch, OpenAI’s privacy policy allows users to submit a “correction request” if they discover that the AI chatbot has produced “factually wrong information” about them. However, the company notes that it “may not be able to remedy the inaccuracy in every instance.”
This is about more than a single complaint, because the chatbot’s propensity for fabrication may violate the EU’s General Data Protection Regulation (GDPR), which governs how personal data is used and processed. EU residents have rights over their personal information, including the right to have inaccurate data corrected. In certain cases, noncompliance can bring financial penalties of up to four percent of a company’s global annual turnover, and regulators have the authority to order changes to how data is handled.
Maartje de Graaf, a data protection lawyer at NOYB, said in a statement: “It is evident that firms are currently unable to make chatbots like ChatGPT comply with EU law when processing data about persons. A system cannot be utilized to generate data about individuals if it is unable to produce transparent and accurate results. The law must be followed by technology, not the other way around.”
The complaint also raises transparency concerns, alleging that OpenAI withholds information about where the personal data it generates comes from and how long it is retained. This matters especially when the data concerns private individuals.
Again, this is a complaint filed by an advocacy group, and EU regulators have not yet responded. OpenAI, however, has previously acknowledged that ChatGPT “occasionally creates plausible-sounding but wrong or nonsensical answers.” NOYB has asked the Austrian Data Protection Authority to investigate the matter.
OpenAI has faced similar complaints in Poland, where ChatGPT is under investigation by the local data protection authority after a researcher was unable to get the company’s help correcting inaccurate personal information. That complaint accuses OpenAI of multiple GDPR violations relating to privacy, data access rights, and transparency.
Italy is another example. After investigating ChatGPT and OpenAI, the Italian data protection authority concluded that the company had violated the GDPR in multiple ways, including through ChatGPT’s propensity to fabricate information about individuals. ChatGPT was temporarily banned in Italy until OpenAI made software changes, such as adding new user warnings and giving users the option to refuse having their conversations used to train its models. Although the ban has been lifted, the Italian investigation into ChatGPT is still ongoing.
OpenAI responded to the regulatory salvo from Italy’s DPA, but it has not responded to this latest complaint. “We want our AI to learn about the world, not about specific individuals,” the company stated. “We actively strive to restrict the amount of personal data used to train our algorithms, such as ChatGPT, which also rejects requests for individuals’ private or sensitive information.”