
An Operational Perspective on the Ethics of the Use of Autonomous Weapons

Published online by Cambridge University Press: 01 December 2023

David A. Deptula*
Mitchell Institute for Aerospace Studies, Virginia, United States ([email protected])

Abstract

Rapid technological change is producing ever more capable autonomous weapon systems. As they become more sophisticated, calls to restrict their use, up to and including complete prohibition, are growing. Not unlike the calls for restrictions on the sale and use of drones, most proposed restrictions are well intentioned but often ill informed, with a high likelihood of degrading national security and putting additional lives at risk. Employed by experienced operators well versed in the laws of armed conflict, autonomous weapons can advance the very objectives of those who would prohibit their use. This essay takes an operational perspective to examine the role that autonomous weapon systems can play while complying with the laws of armed conflict. With responsible design and the incorporation of applicable control measures, autonomous weapons will be able not just to comply with but also to enhance the ethical use of force. This essay contends that efforts by the international community to use international legal means and/or institutions to overregulate or even ban lethal autonomous weapons are counterproductive. It considers and describes the endgame results of the use of autonomous weapons in enhancing the application of both international law and human ethical values.

Roundtable: Global Governance and Lethal Autonomous Weapon Systems

Copyright © The Author(s), 2023. Published by Cambridge University Press on behalf of Carnegie Council for Ethics in International Affairs

Rapid technological change is producing ever more capable autonomous weapon systems. As they become more sophisticated, calls to restrict their use, up to and including complete prohibition, are growing. Not unlike calls for restrictions on the sale and use of drones, most proposed restrictions are well intentioned but often ill informed, with a high likelihood of degrading national security and putting additional lives at risk. Employed by experienced operators well versed in the laws of armed conflict, autonomous weapons can advance the very objectives of those who would prohibit their use.

This essay takes an operational perspective to examine the role that autonomous weapon systems can play while complying with the laws of armed conflict. With responsible design and incorporation of applicable control measures, autonomous weapons will be able not just to comply with but also to enhance the ethical use of force. This essay contends that efforts by the international community to use international legal means and/or institutions to overregulate or even ban lethal autonomous weapons are counterproductive. It considers and describes the endgame results of the use of autonomous weapons in enhancing the application of both international law and human ethical values.

Definitions

Before addressing the key issues regarding the use of autonomous weapon systems, it is appropriate to define them. In this regard, the most recent update to the official U.S. Department of Defense (DoD) Directive 3000.09, “Autonomy in Weapon Systems,” was issued on January 25, 2023. It defines an “autonomous weapon system” as “a weapon system that, once activated, can select and engage targets without further intervention by an operator. This includes, but is not limited to, operator-supervised autonomous weapon systems that are designed to allow operators to override operation of the weapon system, but can select and engage targets without further operator input after activation.”Footnote 1
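
Read as a control loop, the directive's definition turns on where the human sits after activation. The following minimal sketch in Python is illustrative only: the class, fields, and selection logic are hypothetical inventions, not drawn from the directive or from any fielded system. It models an operator-supervised AWS that selects and engages on its own after activation but honors an operator override at any time:

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    track_id: str
    is_valid_military_objective: bool  # output of some upstream classifier

@dataclass
class SupervisedAWS:
    """Hypothetical model of an 'operator-supervised' AWS in the sense of
    DoDD 3000.09: once activated, it selects and engages targets without
    further operator input, but an operator can override (halt) it."""
    activated: bool = False
    operator_override: bool = False
    engagements: list = field(default_factory=list)

    def step(self, tracks):
        if not self.activated or self.operator_override:
            return  # nothing happens before activation or after an override
        for t in tracks:
            if t.is_valid_military_objective:        # autonomous selection
                self.engagements.append(t.track_id)  # autonomous engagement

aws = SupervisedAWS(activated=True)
aws.step([Track("T1", True), Track("T2", False)])  # engages T1 only
aws.operator_override = True                       # human halts the system
aws.step([Track("T3", True)])                      # no further engagements
print(aws.engagements)                             # ['T1']
```

The point of the sketch is structural: autonomy over selection and engagement coexists with a standing human ability to halt the system, which is precisely the class of weapon the directive's definition carves out.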

The directive also states that “autonomous and semi-autonomous weapon systems will be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”Footnote 2 It additionally stipulates that “persons who authorize the use of, direct the use of, or operate autonomous and semiautonomous weapon systems must do so with appropriate care and in accordance with the law of war, applicable treaties, weapon system safety rules, and applicable rules of engagement (ROE).”Footnote 3

The directive provides policy, direction, and guidelines for the design, development, and use of autonomous or semiautonomous weapon systems. It also specifies responsibilities for the Office of the Secretary of Defense, the military departments, the Office of the Chairman of the Joint Chiefs of Staff and the Joint Staff, the combatant commands, the Department of Defense Office of Inspector General, the defense agencies, the DoD Field Activities, and all other organizational entities within the DoD. Accordingly, the "appropriate rules and measures, such as principles, good practices, limitations and constraints" on autonomous weapons called for at the 77th United Nations General Assembly already exist. To some degree, that should assuage the "serious concerns from humanitarian, legal, security, technological and ethical perspectives"Footnote 4 regarding the application of autonomy in weapon systems.

Those concerned with the use of autonomous technologies in combat should consider that the U.S. military was the first in the world to release ethical principles for artificial intelligence (AI), which state that it should be "responsible," "ethical," "traceable," "reliable," and "governable."Footnote 5 The DoD continued to build on that foundation with the release of a responsible AI strategy in June 2022 that lays out a path to incorporate AI into new weapon systems.Footnote 6 These are promising first steps for the development, control, and use of autonomous weapon systems in accordance with the already established body of international law and conventions regulating the conduct of war.

Uses of Autonomy in Weapons

The use of autonomous systems in war is already widespread. Cases date back to the earliest decades of the twentieth century, with ship-launched torpedoes emerging in World War I and guided munitions and unmanned aircraft coming into use during World War II. Today, thousands of air-to-air missiles use autonomous guidance in homing in on their targets. Examples include the U.S. AIM-120 advanced medium-range air-to-air missile (AMRAAM), the Russian AA-12 Adder, the Chinese PL-12 and PL-15, and many others. Javelin, the anti-tank missile used so successfully by Ukraine, has essentially negated the effectiveness of Russian tanks. The Javelin missile is a "fire-and-forget" weapon that uses automated infrared guidance. Tens of thousands of surface-to-air weapons operated by autonomous control systems are in use by multiple nations around the world. Uninhabited underwater vehicles are another class of weapon system that relies heavily on autonomy to accomplish its purpose.
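
The "fire-and-forget" homing common to these examples is typically a variant of proportional navigation, a textbook guidance law in which the weapon accelerates perpendicular to its velocity in proportion to how fast the line of sight to the target is rotating. The sketch below is a generic two-dimensional illustration of that law, not the guidance logic of any named missile; all values are invented:

```python
import math

# Generic 2-D proportional-navigation sketch (a textbook law, not any
# specific missile's guidance). Commanded lateral acceleration is
# N * closing_velocity * line_of_sight_rate. All values are invented.
N, DT = 3.0, 0.01                                    # nav constant, time step (s)
mx, my, mvx, mvy = 0.0, 0.0, 300.0, 0.0              # pursuer state (m, m/s)
tx, ty, tvx, tvy = 2000.0, 500.0, -50.0, 0.0         # target state (m, m/s)

los_prev = math.atan2(ty - my, tx - mx)
for _ in range(2000):
    rx, ry = tx - mx, ty - my
    r = math.hypot(rx, ry)
    if r < 5.0:                                      # within 5 m: intercept
        print("intercept")
        break
    los = math.atan2(ry, rx)
    los_rate = (los - los_prev) / DT                 # line-of-sight rotation
    vc = -(rx * (tvx - mvx) + ry * (tvy - mvy)) / r  # closing velocity
    a_cmd = N * vc * los_rate                        # PN acceleration command
    v = math.hypot(mvx, mvy)
    ax, ay = -a_cmd * mvy / v, a_cmd * mvx / v       # perpendicular to velocity
    mvx, mvy = mvx + ax * DT, mvy + ay * DT
    mx, my = mx + mvx * DT, my + mvy * DT
    tx, ty = tx + tvx * DT, ty + tvy * DT
    los_prev = los
```

Everything the law needs (line-of-sight rate and closing velocity) can be measured by an onboard seeker, which is what makes fire-and-forget operation possible.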

Other systems with increased degrees of autonomy include loitering munitions that come in many varieties—the Israeli Harpy and Harop, the Chinese CH-901, the Russian Kalashnikov ZALA Aero KUB-BLA, the Iranian Shahed-136, and the U.S. Switchblade variants, as well as many others. Torpedoes such as the British Spearfish Mod 1 have an automated mode. Automated protection systems, such as the radar-guided U.S. Phalanx close-in weapon system, have been used for decades to automatically defend ships. Israel's Iron Dome air defense system has a completely autonomous defensive mode. Some of these are autonomous weapon systems (AWS) by definition and have demonstrated the viability, versatility, effectiveness, and accuracy of such systems.

It is important to recognize that autonomy does not exist for its own sake. As with any sort of weaponry, autonomous weapons are tools designed to empower strategies, operational concepts, and tactics to defeat adversaries. The United States is increasingly turning to autonomy because the technology is useful in addressing threats posed by peer competitors. These are serious national security circumstances: losing could pose existential problems for the United States and its allies. Given that the growth in peer threats' military capability and capacity in some areas equals or exceeds that of the United States, the Department of Defense is counting on winning future conflicts by achieving an advantage in information assimilation and by making decisions at a rate exceeding that of any adversary. That is the whole point behind the Department of Defense's effort to develop joint all-domain command and control, and it is at the heart of the U.S. military's joint war-fighting concept. In other words, the United States, along with its allies and partners, is counting on its collective military personnel being better than their adversaries at finding, fixing, tracking, targeting, engaging, and assessing results, and at handling the fog and friction of war.

Fundamental to this joint war-fighting concept is the automation of tasks necessary to process the masses of data that defense and intelligence agencies accumulate today, and which are growing exponentially. To capitalize on all this raw data, the military must have the capacity to separate signal from noise, creating actionable intelligence that will allow it to apply force in the most effective fashion possible. Autonomous weapons will be an integral part of the solution to this problem of mass data, as they have the potential to dramatically improve the ability to sort through it and achieve a decisive advantage over potential adversaries. Adversaries are already well advanced in these applications. It is crucial to recognize that a race for decision superiority is underway and that autonomous technologies are a key part of gaining the competitive edge. That is why the United States and its allies and partners cannot unilaterally discount this technology. The alternative is suffering defeat at the hands of nations with radically different values and interests. While the U.S. military has thus far signaled that it is committed to designing, developing, and employing autonomous weapons in compliance with international humanitarian law, the real challenge is in convincing all nations and organizations to abide by these laws as well. Current evidence demonstrates that Russia is not inclined to follow international humanitarian law.Footnote 7 Nor is China.Footnote 8

The United States and its allies and partners should not agree to bans or excessive restrictions on AWS. Given the progress that major rival powers, including Russia and China, have made with this technology, "opting out" is not a realistic option. That would be tantamount to a nation declaring that it would not pursue modern combat aircraft, armed ships, or armored vehicles. That is not a tenable proposition. Nor is forgoing AWS in the information age, regardless of adversary progress. To be sure, there are real risks that the development of AWS could devolve into an arms race or lead to crisis instability. However, treaties and international institutions are unlikely to mitigate any destabilizing impacts of AWS.

Unilaterally imposing arms control restrictions on ourselves while our adversaries develop weapons on their own terms is self-defeating and disconnected from a reality in which defeat has very real, severely adverse consequences.

The development, deployment, and employment of AWS is already a reality and will proceed whether the United States and its allies and partners participate or not. The question is whether the United States is going to be left behind, further eroding its deterrent capabilities and even inviting aggression. Letting China or Russia advance beyond nonbelligerent nations in this technology could be deeply destabilizing. Adversaries could perceive that, with a significant AWS advantage, they are primed to win with a quick offensive.

There are steps that can be taken to encourage restraint in the use of AWS. More effective than international bans or limiting conventions is building up a credible deterrent. Nations can effectively manage escalation dynamics by building up the forces they need to dissuade an adversary from pursuing aggression. This is a known, proven approach to smart defense—consider the effectiveness of the nuclear triad. Nations can also manage escalation dynamics through confidence-building measures. These might include prudent test and evaluation practices designed to simultaneously broadcast the role of AWS in bolstering deterrence and war-fighting capability, and a commitment to developing an ethical framework for AWS. Indeed, if the AI supporting AWS is developed in a way that upholds ethical values, AWS have the potential to become the most precise means of employing force in a way that reduces collateral damage and minimizes casualties due to the automated nature of their decision-making cycle. Consider the benefits afforded through high-fidelity sensors, real-time tracking, and precision strike munitions. AI radically enhances the ability to secure precise battle space effects while working to minimize unintended collateral damage. Compare that to the damage wrought by old-fashioned unguided artillery barrages. AWS could portend even more progress in this area.

Operational Advantages of Autonomous Weapon Systems

Autonomous weapon technology can deliver key operational advantages that reduce civilian harm and support the principles of proportionality and distinction as established by the laws of armed conflict.Footnote 9 The advantages of autonomous technologies in general, and AWS in particular, are many and can be summarized as existing in three major areas: (1) capitalizing on the growing accuracy of precision-guided munitions, (2) reducing the potential for misidentification of targets, and (3) enhancing deterrence and war-fighting capabilities.

Greater Targeting Accuracy

The accuracy of precision-guided weapons will only improve with automated systems and increased processing power. This is a simple but important concept. By better understanding a target and the surrounding circumstances, kinetic firepower can be delivered even more precisely at a time and place that will minimize unintended harm. This increased accuracy, in turn, allows for the greater use of low-yield explosives—or the elimination of explosives altogether (such as through inert weapons)—presenting a significant opportunity to reduce collateral damage, civilian casualties, and/or unintended consequences. As the commander from 1998 to 1999 of Operation Northern Watch—the combined task force charged with enforcing the no-fly/air exclusion zone north of the 36th parallel in Iraq—I directed the use of inert bombs to enable attacks on enemy surface-to-air weapon systems that the Iraqi military had intentionally located near sites it knew were protected under the law of armed conflict, and which it knew the Northern Watch coalition forces would respect.Footnote 10 The use of inert ordnance enabled the elimination of enemy threats that could not have been legally accomplished with explosive weapons. AWS and AI will improve such capabilities.
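
The weaponeering logic described above, trading explosive yield against collateral risk, can be expressed as a simple selection rule. The sketch below is illustrative only; the munition names, effect radii, risk figures, and threshold are invented, not doctrinal values:

```python
# Hypothetical weaponeering sketch: choose the least destructive munition
# that still defeats the target, and refuse any option whose estimated
# collateral risk exceeds a ceiling. All numbers are invented.

MUNITIONS = [  # ordered least to most destructive
    {"name": "inert (kinetic only)", "effect_radius_m": 3.0,  "collateral_risk": 0.01},
    {"name": "low-yield guided",     "effect_radius_m": 20.0, "collateral_risk": 0.10},
    {"name": "standard guided",      "effect_radius_m": 60.0, "collateral_risk": 0.35},
]

def select_munition(required_radius_m, max_collateral_risk):
    """Return the least destructive effective option within the collateral
    ceiling, or None, meaning: do not strike."""
    for m in MUNITIONS:
        if m["effect_radius_m"] >= required_radius_m:
            # More destructive options only carry higher risk, so stop here.
            return m if m["collateral_risk"] <= max_collateral_risk else None
    return None

# A soft target beside a protected site: inert ordnance suffices.
print(select_munition(required_radius_m=2.0, max_collateral_risk=0.05))
# A hardened target near civilians: no lawful option, so no strike.
print(select_munition(required_radius_m=50.0, max_collateral_risk=0.05))
```

The important property is the None branch: when no option is both effective and within the collateral ceiling, the rule withholds the strike rather than escalating yield.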

Reduced Potential for Target Misidentification

Automatic target recognition involves the use of sensors, data-processing capability, and algorithms to assess and determine targets of interest according to how those targets are defined. As a result, it can significantly reduce the potential for misidentification of targets due to human error; improve situational awareness of civilians in the battle space; and increase the speed of target identification, selection, and imposition of effect. All of these improve friendly outcomes by achieving desired military effects more rapidly than the enemy can.

Humans write the algorithms that enable automatic target recognition and AWS, and it seems possible to write those algorithms in a way that deals with ethical issues at the speed of conflict and in accordance with international humanitarian law. Properly designed, artificial intelligence could consistently recommend targeting decisions that reflect values and ethics in accordance with the laws of armed conflict, experiencing no undue influence from factors that bias human decision-making such as time pressure, lack of sleep, psychological stress, and/or hunger.Footnote 11 Autonomy in weapon systems can also allow for all the elements of the kill chain—the find, fix, track, target, engage, and assess functions—to be accomplished in significantly shorter times than relying on the same steps being conducted entirely by humans.Footnote 12 The side that can more quickly close the kill chain has a significant advantage because it can accomplish mission objectives more quickly, thereby accelerating the conclusion of the conflict and reducing combat and civilian casualties on both sides.
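
One way to picture a compressed kill chain with ethics checks embedded is as a pipeline in which no track reaches the engage stage without passing an explicit rules-of-engagement gate. The sketch below is hypothetical: the stage names follow the doctrinal find-fix-track-target-engage-assess sequence, but the data fields, thresholds, and checks are invented stand-ins for the logic a real system would use:

```python
# Hypothetical kill-chain sketch: each stage narrows the track set, and
# no track reaches "engage" without passing an explicit ROE/IHL gate.
# Field names and thresholds are illustrative, not any fielded system's.

def find(sensor_picture):            # detect candidate objects
    return [t for t in sensor_picture if t["detected"]]

def fix_and_track(tracks):           # localize and maintain custody
    return [t for t in tracks if t["position_known"]]

def target(tracks, roe):             # select only lawful targets
    return [t for t in tracks
            if t["is_military_objective"]                           # distinction
            and t["collateral_estimate"] <= roe["max_collateral"]   # proportionality
            and t["id_confidence"] >= roe["min_id_confidence"]]

def engage(tracks):
    return [{"track": t, "engaged": True} for t in tracks]

def assess(engagements):             # feed results back into the loop
    return [{"track": e["track"], "effect_confirmed": True} for e in engagements]

roe = {"max_collateral": 0.05, "min_id_confidence": 0.95}
picture = [
    {"detected": True, "position_known": True, "is_military_objective": True,
     "collateral_estimate": 0.01, "id_confidence": 0.99},
    {"detected": True, "position_known": True, "is_military_objective": False,
     "collateral_estimate": 0.0, "id_confidence": 0.99},
]
print(assess(engage(target(fix_and_track(find(picture)), roe))))
```

Because the distinction and proportionality checks sit inside the pipeline, they execute at machine speed on every candidate rather than depending on a fatigued human's judgment at each step.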

Relatedly, if automatic target recognition is employed on uninhabited aircraft, which typically have longer loiter times than their inhabited counterparts, the likelihood of misidentification can be reduced even further. Long loiter times allow operators either to observe, evaluate, and act quickly, or to take all the time necessary to be sure of a particular action before lethal force is applied.

Enhanced Deterrence and War-Fighting Capability

International law, conventions, restrictions, and limitations on the types and use of weapon systems are proving ineffective in modern conflict. Major combatants often ignore these restrictions and associated agreements. In addition to the behavior of the Russian military in its invasion of Ukraine, including numerous violations of international law and of prior treaties to which Russia had agreed, Hamas completely ignored international law and the conventions of war by intentionally committing atrocities against innocent men, women, and children in its October 2023 invasion of Israel.Footnote 13

Autonomous weapons have the potential to deter such egregious behavior. The U.S. Air Force is building a new generation of uninhabited aircraft, known as collaborative combat aircraft, to operate in conjunction with inhabited fighters, bombers, and other weapon systems. These new uninhabited systems will employ a variety of autonomous means to control, assess, and employ weapons, compensating for the precipitous decline in the capacity of the U.S. military's combat aircraft inventory over the past thirty years. Because of their reduced cost relative to inhabited aircraft, they may be produced in quantities that can offset shortfalls in U.S. combat aircraft force structure, restoring the conventional deterrent necessary to avoid major regional conflict and thereby dramatically reducing the potential for mass loss of life. If deterrence fails, these autonomous systems increase the probability of winning over losing and of reducing casualties during conflict by not exposing humans to adversary engagement.Footnote 14

Collectively, the elements identified above give AWS the potential for a more ethical application of force than any other means.

Conclusion

The use of autonomy will evolve and will involve greater independence of machines from human control. AWS can make life-or-death decisions without human intervention, raising legitimate concerns about the morality of delegating such decisions to machines. That should not be confused with "Terminator"-like fully independent combat machines. Appropriate levels of human oversight and control can be deliberately designed into these systems. It is possible to build ethical frameworks into AWS that adhere to the principles of international humanitarian law. Additionally, the use of AI can minimize civilian casualties by improving target recognition, decision-making, and overall precision. AWS operate without the rage, vengefulness, hatred, and other corrosive emotions that drive "bad actor" human beings who intentionally kill innocent people and completely ignore international law and the conventions of war.

While some may argue that the algorithms controlling AWS can encode human biases that iteration then replicates and amplifies, means can be implemented to mitigate the potential negative consequences. Algorithms can be developed that are less susceptible to biases by incorporating fairness and accountability as key components. By analyzing and understanding the sources of biases, appropriate techniques and methodologies can be employed to minimize their impact. By acknowledging the potential biases in algorithm development and actively working to address them, it is possible to create AWS that are more ethical and less prone to perpetuating harmful biases.
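
One concrete version of "analyzing and understanding the sources of biases" is a routine audit of classifier error rates across operating contexts, with mandatory deferral to a human wherever the audited error rate is too high. The sketch below is hypothetical; the contexts, data, and tolerance are invented for illustration:

```python
# Hypothetical bias audit for a target-recognition model: measure the
# false-positive rate per operating context (e.g., environment type),
# and force the system to abstain (defer to a human) in any context
# where the audited error rate exceeds a tolerance. All data invented.

from collections import defaultdict

def audit_false_positives(labeled_results):
    """labeled_results: (context, predicted_hostile, truly_hostile) triples."""
    fp, n = defaultdict(int), defaultdict(int)
    for context, predicted, truth in labeled_results:
        if not truth:                    # only non-hostile ground truth
            n[context] += 1
            fp[context] += int(predicted)
    return {c: fp[c] / n[c] for c in n}  # per-context false-positive rate

def contexts_requiring_human(rates, tolerance=0.02):
    return {c for c, r in rates.items() if r > tolerance}

test_set = [("urban", True, False), ("urban", False, False),
            ("urban", False, False), ("desert", False, False),
            ("desert", False, False), ("desert", False, False)]
rates = audit_false_positives(test_set)
print(rates)                            # {'urban': 0.33..., 'desert': 0.0}
print(contexts_requiring_human(rates))  # {'urban'} -> defer to operator
```

Auditing per context matters because an aggregate error rate can look acceptable while hiding a sharply elevated rate in one environment, which is exactly the kind of bias the paragraph warns against.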

Establishing international guidelines and standards for AWS can promote the adoption of best practices and shared ethical principles across different countries. This collaborative approach can lead to more robust and less biased autonomous weapon systems. However, AWS should not be restricted by international law or conventions ostensibly created to ban or restrict weapons that may "be deemed to be excessively injurious or to have indiscriminate effects."Footnote 15 By definition, AWS "can select targets" and are therefore not indiscriminate in the effects they produce. Guided by the incorporation of the tenets of international humanitarian law as described above, the potential benefits of AWS can outweigh their potential detriments.

As the U.S. delegation summarized at the First Session of the Group of Governmental Experts on Lethal Autonomous Weapons Systems, held in Geneva, Switzerland, on March 25, 2019:

Existing State practice provides many examples of ways in which emerging, autonomous technologies could be used to reduce risks to civilians in armed conflict: (1) incorporating autonomous self-destruct, self-deactivation, or self-neutralization mechanisms; (2) increasing awareness of civilians and civilian objects on the battlefield; (3) improving assessments of the likely effects of military operations; (4) enhancing target identification, tracking, selection, and engagement; and (5) reducing the need for immediate fires in self-defense.Footnote 16

What these approaches, and the ones cited earlier, all suggest is that autonomous technologies can increase our understanding of the battle space and our ability to react to dynamic adversary behavior in a way that is consistent with the laws of armed conflict and international humanitarian law.
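
Mechanism (1) in the quoted list, autonomous self-deactivation, is simple to picture in software. The following sketch of a self-neutralization watchdog is hypothetical; the geofence and loiter window are invented values, not any system's actual parameters:

```python
import time

# Hypothetical self-neutralization watchdog for a loitering munition,
# illustrating mechanism (1) from the quoted list: the weapon renders
# itself inert if it leaves its authorized area or outlives its
# authorized loiter window. All values are invented for illustration.

AUTHORIZED_BOX = (-1000.0, 1000.0, -1000.0, 1000.0)  # x_min, x_max, y_min, y_max (m)
MAX_LOITER_S = 600.0                                  # authorized loiter window

def should_self_neutralize(x, y, launch_time, now=None):
    now = time.monotonic() if now is None else now
    x_min, x_max, y_min, y_max = AUTHORIZED_BOX
    outside_geofence = not (x_min <= x <= x_max and y_min <= y <= y_max)
    loiter_expired = (now - launch_time) > MAX_LOITER_S
    return outside_geofence or loiter_expired

# Inside the box, within the window: keep flying.
print(should_self_neutralize(0.0, 0.0, launch_time=0.0, now=100.0))   # False
# Past the loiter window: dud the warhead and descend.
print(should_self_neutralize(0.0, 0.0, launch_time=0.0, now=700.0))   # True
```

A weapon carrying such a watchdog fails safe by construction: leaving the authorized area or outliving the loiter window ends in an inert weapon rather than an uncontrolled engagement.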

Just like other weapon systems, AWS can and should be subject to, and controlled by, the extensive body of international law regarding armed conflict and associated conventions. However, contrary to some characterizations of the development and use of lethal autonomous weapon systems, they hold great potential to reduce the loss of life in conflict and may significantly reduce the incidence of human misconduct and malfeasance in the battle space during conflict.

We can and must minimize unintended casualties. Hardly anyone wants to kill civilians—except for "bad actors" (witness Russia's and Hamas's intentional targeting of civilians in Ukraine and Israel, respectively). That calls into question the morality of any policy that restricts the use of the military—and AWS in particular—to avoid the possibility of collateral damage while allowing the certainty of adversaries' crimes against humanity. While unintended casualties of war must be avoided by following the laws of armed conflict, those casualties associated with AWS are likely to pale in comparison to the savage acts of those soldiers and/or terrorists who do not comply with the laws of armed conflict and who elect to ignore international humanitarian law.

U.S. and allied military personnel should do all that they can to prevent civilian casualties and are extensively trained on how to do so.Footnote 17 Nations can save more civilian lives by responsibly developing and employing AWS and by finding ways either to deter war effectively or, once engaged, to win it as rapidly as possible and so terminate the horrendous consequences of prolonged conflict.

Notes

1 Department of Defense, United States of America, “Autonomy in Weapon Systems,” DoD Directive 3000.09, January 25, 2023, p. 21, www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf.

2 Ibid., p. 3.

3 Ibid., p. 4.

4 United Nations General Assembly, “Joint Statement on Lethal Autonomous Weapons Systems First Committee, 77th United Nations General Assembly Thematic Debate—Conventional Weapons,” October 21, 2022, p. 1.

5 Deputy Secretary of Defense, Department of Defense, “Memorandum for Senior Pentagon Leadership Commanders of the Combatant Commands Defense Agency and DoD Field Activity Directors,” May 26, 2021, p. 1.

6 DoD Responsible AI Working Council, U.S. Department of Defense Responsible Artificial Intelligence Strategy and Implementation Pathway (Washington, D.C.: Department of Defense, June 2022), media.defense.gov/2022/Jun/22/2003022604/-1/-1/0/Department-of-Defense-Responsible-Artificial-Intelligence-Strategy-and-Implementation-Pathway.PDF.

7 Lise Morjé Howard, “A Look at the Laws of War—and How Russia Is Violating Them: Upholding International Law Makes Peace Possible, Which Means Russia's Leader Must Be Held to Account,” United States Institute of Peace, September 29, 2022, www.usip.org/publications/2022/09/look-laws-war-and-how-russia-violating-them.

8 “China Responsible for ‘Serious Human Rights Violations’ in Xinjiang Province: UN Human Rights Report,” UN News, August 31, 2022, news.un.org/en/story/2022/08/1125932.

9 Department of the Army, United States of America, and Department of the Navy, United States Marine Corps, “General Background and Basic Principles of the Law of Armed Conflict,” ch. 1 in The Commander's Handbook on the Law of Land Warfare, FM 6-27/MCTP 11-10C, August 2019, armypubs.army.mil/epubs/DR_pubs/DR_a/pdf/web/ARN19354_FM%206-27%20_C1_FINAL_WEB_v2.pdf.

10 It is of interest to highlight that if such property is used for military purposes, it loses its protection against attacks.

11 Ray Reeves, “The Ethical Upside of Artificial Intelligence,” War on the Rocks, January 20, 2020, warontherocks.com/2020/01/the-ethical-upside-to-artificial-intelligence/.

12 U.S. Air Force, Targeting, Air Force Doctrine Publication 3-60, November 12, 2021, www.doctrine.af.mil/Portals/61/documents/AFDP_3-60/3-60-AFDP-TARGETING.pdf.

13 Emery Winter, “Yes, Russia Promised in 1994 to Never Attack Ukraine If It Gave Up Its Nuclear Weapons,” VERIFY, updated March 10, 2022, www.verifythis.com/article/news/verify/global-conflicts/ukraine-agreed-to-give-up-nukes-in-exchange-for-safety-from-russia-invasion-attack-budapest-memorandum-treaty/536-8748a51f-10ee-47f0-be30-b4088750ee44.

14 For more on the deterrence and war-fighting potential of the Air Force's new autonomous collaborative platforms, see Caitlin Lee and Mark Gunzinger, “Penetrating Strike,” part 1 in The Next Frontier: UAVs for Great Power Conflict (Arlington, Va.: Mitchell Institute for Aerospace Studies, December 2022), mitchellaerospacepower.org/event/research-paper-release-the-next-frontier-uavs-for-great-power-conflict-part-i-penetrating-strike/.

15 Office for Disarmament Affairs, United Nations, “Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects,” October 10, 1980, treaties.unoda.org/t/ccwc.

16 Charles Trumbull, “U.S. Statement on LAWS: Potential Military Applications of Advanced Technology” (statement, First Session of the Group of Governmental Experts on Lethal Autonomous Weapons Systems [LAWS], March 25, 2019).

17 Department of Defense, United States of America, “DoD Law of War Program,” DoD Directive 2311.01, July 2, 2020, www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodd/231101p.pdf?ver=2020-07-02-143157-007.