For more than a decade, as the technology has become increasingly capable, the substantial advantages and real dangers of artificial intelligence (AI)—“the ability of machines to perform tasks that would otherwise require human intelligence,” such as “recognizing patterns, learning from experience, drawing conclusions, making predictions, or taking action”—have preoccupied governments, industry, academics, and non-governmental organizations.Footnote 1 For militaries, the promise of AI is substantial.Footnote 2 As noted by Assistant Secretary of State for Arms Control, Verification, and Compliance Mallory Stewart, “AI-enhanced data analysis could optimize logistics processes, improve decision support, and provide commanders with enhanced situational awareness that enables them to avoid unintended engagements and minimize civilian casualties.”Footnote 3 AI “could increase accuracy and precision in the use of force which can also help strengthen implementation of international humanitarian law's protections for civilians and civilian objects.”Footnote 4 It could also “advance arms control by helping . . . solve complex verification challenges and increasing confidence in states’ adherence to their commitments.”Footnote 5 With these advancements, though, come serious risks. AI generates information, but that information can be “inaccurate, lack[] context, or [be] completely made up.”Footnote 6 Systems can be “poorly designed, inadequately tested, or [have] users [who] do not possess an adequate understanding of the contexts and limitations of those systems.”Footnote 7 Military use of AI could therefore result in errors such as improperly identifying civilians as lawful targets and deploying disproportionate force. Concerns have thus arisen about AI reliability, autonomy, human control, biases, monitoring, and accountability. Accordingly, with the rapid pace of the technology's advancement, the increasing commitment of significant resources to weapons development as militaries race to gain advantage, and the deployment in Ukraine and elsewhere of robotic combat vehicles, unmanned aerial vehicles, and unmanned surface vessels,Footnote 8 the regulation of the military use of AI has taken on new urgency.Footnote 9
The content and form of that regulation are contested, however. The United States has sought to establish agreed-upon non-binding international understandings of AI's proper military application through diplomacy and the articulation and publication of its own positions under the “responsible AI” (RAI) framework. In early 2023, building on prior policies, the U.S. Department of Defense issued an updated directive on Autonomy in Weapons SystemsFootnote 10 and the U.S. Department of State proposed a Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.Footnote 11 Many states and non-governmental organizations disagree with the RAI approach taken by the United States and other military powers and have instead called for the negotiation of a legally binding international agreement that would apply and develop the international law of autonomous weapons systems. These debates are part of broader efforts, national and international, to regulate AI.Footnote 12
To date, multilateral consideration of military AI has focused on lethal autonomous weapons systems (LAWS) and has taken place in the meetings of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (GGE) established in 2016Footnote 13 by the Fifth Review Conference of the High Contracting Parties to the Convention on Certain Conventional Weapons (CCW).Footnote 14 In its early years, the GGE drafted the Guiding Principles on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems,Footnote 15 which were endorsed by the 2019 CCW meeting of the parties.Footnote 16 Since then, the GGE's mandate has centered on “the consideration of proposals and [the] elaborat[ion], by consensus, [of] possible measures . . . and other options related to the normative and operational framework on emerging technologies in the area of lethal autonomous weapon systems.”Footnote 17 Many working papers have been tabled, but no consensus has emerged on fundamental substantive issues, including definitions (even of the term “lethal autonomous weapons systems”). Nor has there been agreement on the form of the outcome—whether to work toward a non-binding text or a binding instrument. Although a clear majority of states favor the latter, the GGE's consensus requirement has for years limited the group to modest outcomes.
The United States is among those that oppose a binding agreement,Footnote 18 preferring instead the issuance of statements that apply existing international law to LAWS.Footnote 19 It views the role of the GGE as “developing guidance for States—measures that would strengthen the implementation of international humanitarian law (IHL) and promote responsible behavior by States.”Footnote 20 To this end, at the most recent GGE meeting, in May 2023, the United States, together with six other states, proposed “draft articles on the development, deployment, and use of autonomous weapon systems” (covering prohibited weapons, distinction, proportionality, precaution, and accountability).Footnote 21 The proposal was based on a compilation of consensus conclusions and recommendations previously endorsed by the GGE that “seek to clarify the requirements imposed by existing IHL and specify measures to effectively satisfy these requirements.”Footnote 22 The draft articles were not adopted, nor were proposals made by other states and the chair.Footnote 23 Instead, the session endorsed four sets of non-binding conclusions regarding compliance with existing international law, in particular IHL.Footnote 24
Inside and outside the GGE, numerous statesFootnote 25 and non-governmental organizations (many through the Campaign to Stop Killer Robots)Footnote 26 have called for a legally binding instrument to prohibit fully autonomous weapons and to otherwise regulate LAWS. To this end, some are pushing to move negotiations out of the GGE, where that outcome is unlikely, to either a standalone diplomatic conference, as was done with the Anti-Personnel Mine Ban Convention and the Convention on Cluster Munitions, or the UN General Assembly, where the Treaty on the Prohibition of Nuclear Weapons was drafted.Footnote 27 UN human rights rapporteurs have also called for a global prohibition on LAWS.Footnote 28 And in July 2023, UN Secretary-General António Guterres, in his most forward-leaning statement to date, called on states to “conclude, by 2026, a legally binding instrument to prohibit lethal autonomous weapon systems that function without human control or oversight, and which cannot be used in compliance with international humanitarian law, and to regulate all other types of autonomous weapons systems.”Footnote 29
The United States and other countries promoted the contrasting “responsible AI” approach to military AI at the February 2023 summit on Responsible AI in the Military Domain (REAIM) that was co-organized by The Netherlands and South Korea.Footnote 30 Military RAI originated in the U.S. Department of Defense's (DoD) 2018 Artificial Intelligence Strategy.Footnote 31 In 2020, DoD adopted five Ethical Principles for Artificial Intelligence to “ensure the responsible use of AI by the department,” including responsibility, equity, traceability, reliability, and governability.Footnote 32 Subsequently, DoD implemented the Ethical Principles in a memorandum articulating “foundational tenets”Footnote 33 and in a pathway setting out RAI's operationalization within the department.Footnote 34 Updated DoD Directive 3000.09Footnote 35 established requirements “to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.”Footnote 36 Other countries have adopted similar documents.Footnote 37 The North Atlantic Treaty Organization has likewise promulgated its own Principles of Responsible Use of Artificial Intelligence in Defence, which are substantially similar to DoD's Ethical Principles.Footnote 38
Military RAI has evolved into a set of best practices and techniques for AI's cautious and safe use. Consistent with the U.S. position in the GGE, it attempts to facilitate, not prohibit, military AI's development and use through the establishment of guardrails that minimize risk in accordance with law. RAI's non-binding character provides flexibility for militaries and implies that existing international law regulates AI adequately. As Under Secretary of State for Arms Control and International Security Bonnie Denise Jenkins explained at the REAIM summit, “[t]he United States approaches using artificial intelligence for military purposes from the perspective of responsible speed, meaning our attempts [to] harness the benefits of AI have to be accompanied by a focus on safe and responsible behavior that is consistent with the law of war and international humanitarian law.”Footnote 39
Representatives of fifty-seven governments at the summit, including all permanent members of the Security Council except Russia (which was not invited to participate), agreed on a “joint call to action on the responsible development, deployment and use of artificial intelligence (AI) in the military domain.”Footnote 40 Among other commitments, the signatories promised “to continu[e] the global dialogue on responsible AI in the military domain in a multi-stakeholder and inclusive manner and call[ed] on all stakeholders to take their responsibility in contributing to international security and stability in accordance with international law.”Footnote 41 The call also “invite[d] states to develop national frameworks, strategies and principles on responsible AI in the military domain.”Footnote 42 There were some notable absences from the list of countries endorsing the call, including Brazil, India, Israel, and South Africa.
Also at the REAIM summit, the State Department unveiled a Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.Footnote 43 Internationalizing the Defense Department's and NATO's RAI texts, the Declaration comprises twelve sets of “best practices that the endorsing States believe should be implemented in the development, deployment, and use of military AI capabilities, including those enabling autonomous systems.”Footnote 44 These include: “subjecting systems to rigorous testing and assurance, taking steps to avoid unintended consequences, minimizing unintended biases, and . . . ensuring AI is used in accordance with States’ obligations under international law.”Footnote 45 “The Political Declaration,” a State Department official said in August 2023, “will serve as a launch pad for sustained dialogue among endorsing states.”Footnote 46 To date, however, no endorsing states have been announced. The Declaration did not satisfy those who advocate for an international agreement to govern AI in the military domain.Footnote 47 A Stop Killer Robots official described the Political Declaration as “the most backwards position seen from any state, in years” and “an attempt to radically undermine global effort towards establishing a new Treaty on Autonomous Weapons Systems.”Footnote 48