US launches global initiative on responsible use of AI in military
The United States has launched a new initiative aimed at promoting international cooperation on the responsible use of artificial intelligence (AI) and autonomous weapons by militaries. The move seeks to create order around an emerging technology that has the potential to change the way war is waged.
Responsible behavior concerning military use of AI
The U.S. political declaration, which contains non-legally binding guidelines outlining best practices for responsible military use of AI, "can be a focal point for international cooperation," according to Bonnie Jenkins, the State Department's under-secretary for arms control and international security. Jenkins launched the declaration at the end of a two-day conference in The Hague, which took on additional urgency as advances in drone technology amid Russia's war in Ukraine have accelerated a trend that could soon bring the world's first fully autonomous fighting robots to the battlefield.
The U.S. declaration and its 12 points
The U.S. declaration has 12 points, including that military uses of AI be consistent with international law, and that states "maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment." Zachary Kallenborn, a George Mason University weapons innovation analyst who attended the Hague conference, said the U.S. move to take its approach to the international stage "recognizes that there are these concerns about autonomous weapons. That is significant in and of itself." Kallenborn said it was also important that Washington included a call for human control over nuclear weapons "because when it comes to autonomous weapons risk, I think that is easily the highest risk you possibly have."
Call to action by 60 nations
Underscoring the sense of international urgency around AI and autonomous weapons, 60 nations, including the U.S. and China, issued a call for action at the Hague conference urging broad cooperation in the development and responsible military use of artificial intelligence. The participating nations also invited countries "to develop national frameworks, strategies and principles on responsible AI in the military domain."
The risks of autonomous weapons
Military analysts and artificial intelligence researchers say the longer the nearly year-long war in Ukraine lasts, the more likely it becomes that drones will be used to identify, select, and attack targets without help from humans. Ukraine's digital transformation minister, Mykhailo Fedorov, told The Associated Press in a recent interview that fully autonomous killer drones are "a logical and inevitable next step" in weapons development, and said Ukraine has been doing "a lot of R&D in this direction." Russia also claims to possess AI weaponry, though those claims are unproven. But there are no confirmed instances of a nation deploying robots in combat that have killed entirely on their own.
The importance of human oversight of the use of AI systems
The call to action issued in the Netherlands underscored "the importance of ensuring appropriate safeguards and human oversight of the use of AI systems, bearing in mind human limitations due to constraints in time and capacities." China's ambassador to the Netherlands, Tan Jian, attended and said Beijing has sent two papers to the United Nations on regulating military AI applications, saying the issue "concerns the common security and the well-being of mankind, which requires the united response of all countries."