OpenAI has updated its usage policies page with new language.
Until just a few days ago, OpenAI’s usage policies page stated plainly that the company does not allow its technology to be used for “military and warfare” purposes. That line has since been removed. According to the changelog, the company modified the page on January 10 “to be clearer and provide more service-specific guidance.” The Intercept was the first to report the change. The page still prohibits the use of its large language models (LLMs) for anything that could cause harm, and it still warns users against employing its services to “develop or use weapons.” What the company has removed is the language that specifically mentioned “military and warfare.”
The change in wording comes as military organisations around the world express interest in employing artificial intelligence, even as the technology’s battlefield consequences are only beginning to come into view. “It is a notable moment to make the decision to remove the words ‘military and warfare’ from OpenAI’s permissible use policy,” Sarah Myers West, a managing director of the AI Now Institute, told the publication, citing the use of artificial intelligence systems in the targeting of civilians in Gaza.
Because “military and warfare” was explicitly listed among the prohibited uses, it was clear that OpenAI could not work with government agencies such as the Department of Defense, which typically offers lucrative opportunities to private contractors. The company does not currently offer a product that could directly kill or physically harm anyone. As The Intercept notes, however, its technology could be used for tasks such as writing code and processing procurement orders for items that might ultimately be used to kill people.
Asked about the change in policy wording, OpenAI spokesperson Niko Felix told the publication that the company “aimed to create a set of universal principles that are both easy to remember and apply,” especially now that its tools are used worldwide by everyday users who can also build their own GPTs. “A principle like ‘Don’t harm others’ is broad yet easily grasped and relevant in numerous contexts,” Felix said, adding that OpenAI “specifically cited weapons and injury to others as clear examples.” The spokesperson reportedly declined to clarify whether the prohibition on using its technology to “harm” others covers all military uses unrelated to weapons development.