Even if the relationship is eye-opening in and of itself, its stated objectives seem harmless at this point.
OpenAI is reportedly working with the United States Department of Defense (DoD). The maker of ChatGPT and DALL-E announced during this week's World Economic Forum meeting in Davos-Klosters, Switzerland, that it is collaborating with the DoD to develop cybersecurity capabilities. The two organizations are also working together on strategies to prevent suicides among veterans.
Anna Makanju, OpenAI's vice president of global affairs, discussed the relationship in depth during an interview on Tuesday at Bloomberg House, a leadership hub at Davos. Front and center was the company's initial prohibition on working with military organizations. "Because we had what was essentially a blanket prohibition on military, many people thought that would prohibit many of these use cases, which people think are very much aligned with what we want to see in the world," Makanju said.
In fact, OpenAI has removed language from its usage policy that previously restricted the use of its artificial intelligence for "military and warfare" purposes. The updated policy, which took effect on January 10, 2024, prohibits the use of OpenAI products for "developing or using weapons" and "injuring others or destroying property." Using ChatGPT or DALL-E for military purposes that do not directly lead to death and destruction is therefore now considered acceptable.
OpenAI's policy change echoes Google's removal of its "Don't Be Evil" language from its code of conduct in 2018. On the surface, it is easy to imagine a technology startup that intends to do good and is unwilling to venture into ethically questionable waters. But defense organizations are deep-pocketed customers, and because technology companies are typically focused on continuous growth, it is often hard to say no to the military's vast coffers. It would not be surprising if OpenAI recognized this and revised its usage policy to reflect a shift in the values it had committed to.
Makanju did not offer many specifics about OpenAI's collaboration with the Pentagon. For now, we know the organizations are working together on "open-source cybersecurity software" and on approaches for preventing suicide among veterans. The latter problem has long been a focus for the DoD, and the Defense Suicide Prevention Office is exploring new approaches to reduce suicides following a recent rise among active-component (full-time) service members and veterans.