Thirteen current and former employees of OpenAI, Google DeepMind, and Anthropic signed the document.
A group of current and former employees at leading artificial intelligence companies including OpenAI, Google DeepMind, and Anthropic have signed an open letter calling for greater transparency and stronger protections from retaliation for those who speak out about the potential risks of AI. “So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public,” says the letter, which was published on Tuesday. “Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.”
The letter comes just a few weeks after a Vox investigation revealed that OpenAI had attempted to silence recently departing employees by forcing them to choose between signing an aggressive non-disparagement agreement and keeping their vested equity in the company. After the report was published, OpenAI CEO Sam Altman said he had been “genuinely embarrassed” by the provision and claimed it has been removed from recent exit documentation, though it is unclear whether it remains in force for some former employees.
Thirteen people have signed the document, including former OpenAI staffers Jacob Hilton, William Saunders, and Daniel Kokotajlo. Kokotajlo said he resigned from the company after losing confidence that it would responsibly build artificial general intelligence, a term for AI systems that are as smart as or smarter than humans. The letter, endorsed by prominent AI experts Geoffrey Hinton, Yoshua Bengio, and Stuart Russell, raises grave concerns about the lack of effective government oversight of AI and the financial incentives driving major technology companies to invest in the technology. The authors warn that the unchecked pursuit of powerful AI systems could lead to the spread of misinformation, the deepening of inequality, and even the loss of human control over autonomous systems, potentially resulting in human extinction.
Kokotajlo wrote on X: “There’s a lot we don’t understand about how these systems work and whether they will remain aligned to human interests as they get smarter and possibly surpass human-level intelligence in all areas. Meanwhile, there is little to no oversight over this technology. Instead, we rely on the companies building them to self-govern, even as profit motives and excitement about the technology push them to ‘move fast and break things.’ Silencing researchers and making them afraid of retaliation is dangerous when we are currently some of the only people in a position to warn the public.”
OpenAI, Google, and Anthropic did not immediately respond to Newtechmania’s requests for comment. In a statement provided to Bloomberg, an OpenAI spokesperson said the company is proud of its “track record providing the most capable and safest AI systems” and believes in its “scientific approach to addressing risk.” The spokesperson added: “We agree that rigorous debate is crucial given the significance of this technology, and we’ll continue to engage with governments, civil society, and other communities around the world.”
The signatories urge AI companies to commit to four key principles:
Not retaliating against employees who raise safety concerns
Supporting an anonymous system for whistleblowers to alert the public and regulators to risks
Fostering a culture of open criticism
Avoiding non-disparagement and non-disclosure agreements that restrict employees from speaking out
The letter arrives as OpenAI’s practices face mounting scrutiny, including the dissolution of its “superalignment” safety team and the departures of key figures such as co-founder Ilya Sutskever and Jan Leike, who criticized the company for prioritizing “shiny products” over safety.