AI weapons are here and the US wants a code of conduct for using them

AI weapons. Image courtesy of ARL.

We may not be at the point where Terminator robots are going back in time to kill other Terminator robots, and autonomous weapons are not yet in common use on the world's battlefields. But the United States is taking a proactive approach to ensuring that the AI weapons of the future are used responsibly.

In The Hague, home of the International Criminal Court and a number of other international governing bodies, the U.S. launched an initiative to promote cooperation in another arena: artificial intelligence and autonomous weapons. The proposal comes amid the increased use of drones in Ukraine, some of which may have autonomous capabilities.

Many countries are believed to have deployed such weapons, widely known as lethal autonomous weapons systems (LAWS), in some form. A United Nations report claims that a Turkish-made Kargu-2 drone "hunted down" retreating members of the Libyan National Army during the Libyan Civil War in March 2020.

In the demilitarized zone along the border between North and South Korea, South Korea is believed to have posted the SGR-A1 sentry gun turret, which reportedly has a fully autonomous mode, though no one can confirm whether that mode has ever been used in the field. Other weapons of this kind include the American Phalanx close-in weapon system and Israel's Iron Dome missile defense shield.

“As a rapidly changing technology, we have an obligation to create strong norms of responsible behavior concerning military uses of AI and in a way that keeps in mind that applications of AI by militaries will undoubtedly change in the coming years,” Bonnie Jenkins, the U.S. State Department’s under secretary for arms control and international security, said in a statement.

Uncrewed aerial vehicles prep for deployment with artificial intelligence (AI) toolbox payload technology as part of the Defence Science and Technology Laboratory (Dstl) HYDRA project trials on Salisbury Plain, U.K., Nov. 4, 2022. (Courtesy photo)

The declaration is political and not legally binding, but it offers a proposed set of guidelines for the military use of artificial intelligence. The U.S. issued the declaration after a two-day conference in the Netherlands, laying out a 12-point outline for the responsible use of such weapons. Chief among these points is that the use of AI weapons should comply with existing international law.

It also includes provisions relating to nuclear weapons, specifically that human control should be maintained “for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment.”

At the conference, 60 nations, including China, called for action on AI and autonomous weapons, urging the development of "frameworks, strategies, and principles" governing the use of intelligent weapons. Ukraine's digital transformation minister told the Associated Press that his country has already begun research and development on autonomous weapons.

Russia did not attend The Hague conference but claims to have autonomous weapons. No country is known to have used AI weapons that killed in combat without some form of human control, though Russia has deployed AI-enabled Kalashnikov ZALA Aero KUB-BLA loitering munitions, and Ukraine has used Turkish-made Bayraktar TB2 drones with autonomous capabilities.

The United States' Defense Advanced Research Projects Agency (DARPA) pitted a human pilot against an AI in simulated dogfights in 2020; the AI defeated the human in all five rounds. China's testing reportedly produced similar results. The results of these AI wargames may have been the catalyst for developing an internationally recognized code of conduct.