A Pentagon official, however, says that the system's recommendations were continuously verified by human personnel.
According to a new report by Bloomberg, the United States military has stepped up its use of artificial intelligence in the aftermath of the October 7 Hamas attacks on Israel. Speaking to the news organisation, Schuyler Moore, chief technology officer for United States Central Command, said machine learning algorithms had helped the Pentagon identify targets for more than 85 air strikes carried out in the Middle East this month alone.
On February 2, American bombers and fighter aircraft struck seven facilities in Iraq and Syria, including rocket, missile, and drone storage sites as well as militia operations centres, all of which were destroyed or at least damaged. The Pentagon has also used artificial intelligence to locate surface combatants in the Red Sea and rocket launchers in Yemen, both of which were subsequently eliminated in a series of air strikes during the same month.
The machine learning algorithms used to narrow down targets were developed under Project Maven, a partnership between Google and the Pentagon that has since ended. Under the project, the United States military used Google's artificial intelligence technology to analyse drone footage and flag images for further human review. The arrangement outraged Google employees: thousands signed a petition calling on the firm to end its cooperation with the Pentagon, and some resigned over the company's involvement. A few months after the protest, Google decided not to renew the contract, which expired in 2019.
Moore told Bloomberg that even after Google's involvement ended, the United States military has continued experimenting with algorithms that identify prospective targets using drone or satellite imagery. The military had been testing them in digital exercises over the previous year, she explained, but did not begin using targeting algorithms in live operations until after the October 7 Hamas attacks. She stressed, however, that human personnel were continually checking and verifying the targets recommended by the AI systems, and that it was humans who proposed how the strikes should be staged and which weapons should be used. “There is never an algorithm that’s just running, coming to a conclusion and then pushing onto the next step,” Moore said. “Every step that involves AI has a human checking in at the end.”