I. Introduction
On August 29, 2021, the U.S. military launched its last drone strike in Afghanistan before American troops withdrew from the country.Footnote 1 The strike targeted a white Toyota Corolla driven by Zemari Ahmadi near Kabul's international airport; the vehicle was believed to be carrying an ISIS bomb. The strike destroyed the targeted vehicle and killed ten people. The U.S. military called it a “righteous strike,” explaining that it was necessary to prevent an imminent threat to American troops at Kabul's airport.Footnote 2 However, following the findings of a New York Times investigation,Footnote 3 a high-level U.S. Air Force investigation found that the targeted vehicle did not pose any danger and that all ten casualties were civilians, seven of them children. Despite these findings, the investigation concluded that the strike did not violate any law, because it was a “tragic mistake” resulting from an “inaccurate” interpretation of the available intelligence.Footnote 4 The investigation suggested that the wrong—and lethal—interpretation of the intelligence, which included eight hours of drone visuals, resulted from “execution errors” combined with “confirmation bias.”
Using cognitive insights, such as confirmation bias, to explain—and excuse—military errors resulting in civilian casualties is a step forward, but not necessarily in the right direction. It is a step forward in the sense that it recognizes significant cognitive dynamics that limit crucial military risk assessment and fact-finding processes. But this step will not lead to better outcomes without a deeper understanding of how existing data practices—including real-time drone visuals—are susceptible to, and affected by, cognitive biases. Stronger, more effective protections for civilians in armed conflicts require acknowledging the core role drone visuals play in generating knowledge that is often perceived as objective despite being distorted by technical, socio-technical, and cognitive dynamics.
In this presentation I aim to add these technological and behavioral elements of military knowledge production to the important discussions on compliance with international humanitarian law (IHL). Before delving into the substantive issues, it is important to clarify what I mean by “compliance.” Compliance is often invoked in the context of armed conflict as a technical-legalistic term reflecting a particular interpretation of the applicable legal rules. In this presentation, however, I explore compliance not within its confined, context-dependent legalistic application, but rather as a humanistic term. Viewed through this lens, I invoke the term “compliance” here to signify the core declared aim of this legal regime: maintaining respect for the law of armed conflict (LOAC) in order to mitigate the harms of war and to protect humans, non-humans, and the environment during armed conflicts. This is especially important because when the concrete scope and interpretation of legal rules are deeply contested—as are most of the core IHL rules—compliance in its narrow technical-legalistic sense becomes almost meaningless. I will also use the terms IHL and LOAC interchangeably, signaling that this discussion transcends the existing interpretive “camps” and that the law's protective goals ultimately contribute to human security everywhere.
II. Visualization Technologies and Compliance with IHL/LOAC
My research into the effects of visualization technologies on military decision making identifies several compliance-related challenges that stem from reliance on these technologies and can be explained—and addressed—using behavioral insights. Visualization technologies may influence the relevant legal standards, shaping the meaning of the “reasonable commander” and constructing the scope of the legal burdens of care.Footnote 5
An awareness of the effects of cognitive biases on the interpretation of drone visuals may influence the scope of the duties to “do everything feasible” to verify target identification and to avoid or minimize collateral damage.Footnote 6 For example, meaningful precaution may require mitigating systemic errors deriving from biased interpretation of drone visuals through various debiasing techniques. Additionally, the visible outputs of visualization technologies and the invisible biases involved in their interpretation may amplify pre-existing vulnerabilities in the legal standards, in particular their murky standards of proof.Footnote 7
Addressing the debate surrounding the required level of certainty in targeting decisions, Tom Oakley points out that there is a knowledge gap concerning the legal requirement.Footnote 8 Indeed, in arguing against the “reasonable certainty” standard and supporting the “near certainty” one, Michael Adams and Ryan Goodman demonstrate that the level of certainty required by LOAC is anything but certain.Footnote 9 But even if the standard itself were clear, behavioral insights teach us that decisionmakers’ level of certainty may be unconsciously affected by a number of cognitive processes, leading to misinterpretation of the available evidence and to experts’ overconfidence in their biased analysis.Footnote 10
III. Limitations of Visualization Technologies
In the remainder of this presentation, I will focus on this last point, addressing the challenges relating to the effects of visualization technologies on military fact-finding processes and exposing their technical, socio-technical, and cognitive constraints. I will do so using examples from military investigations in the United States and Israel, drawing attention to the invisible burdens these technologies place on decisionmakers. As findings from these investigations show, visualization outputs create an imperfect, yet highly persuasive, virtual representation of the actual conditions on the ground: a representation that is difficult, if not impossible, to refute.
To clarify, my claim is not that military decision-making processes are better or more accurate without the aid of visualization technologies. These technologies indeed provide a large amount of essential information about the battlefield, target identification, and the presence of civilians in the vicinity of a planned attack. Nor do I engage here with arguments, such as those made by Samuel Moyn and others, that precision weapons and visualization technologies, combined with sophisticated war lawyering, humanize armed conflicts and legitimize violence.Footnote 11 The argument, instead, is that the undeniable benefits of visualization technologies for military decision-making processes mask their blind spots: visualization technologies are imperfect and limited in several ways that are not always visible to decisionmakers.
First, visualization technologies have technical and human-technical limitations, including insufficient or corrupted data inputs, blind spots, and time and space constraints. The missing details or corrupted information remain invisible, while the visible (yet limited or partial) outputs capture decisionmakers’ attention. Indeed, emerging empirical evidence suggests that real-time imaging outputs may reduce the situational awareness of decisionmakers, who tend to place an inappropriately high level of trust in visual data.Footnote 12 Additionally, technology systems may fail or malfunction.
When military practices rely profoundly on technology systems, decisionmakers’ own judgment, and their ability to evaluate evolving situations without the technology, erode. The misidentification of the Doctors Without Borders hospital in Kunduz, Afghanistan, in October 2015 as a legitimate target—a decision that led to the killing of forty-two patients and hospital staff members—was partly attributed to the AC-130 aircrew's reliance on infrared visualization technology.Footnote 13 Because this visualization technology cannot display colors, it could not depict the red of the hospital's red cross symbol, which could have alerted the aircrew that the intended target was a medical facility. Ashley Deeks points out that both a positive target identification and an implicit approval, in which the system fails to alert that the target is a protected one, may involve automation bias, whereby individuals accept the machine's explicit or implicit recommendation.Footnote 14
Second, these technical (and human-technical) limitations create gaps in the available data. The need to fill these gaps makes military decision making “rife with subjectivity and speculation,” as Tomer Broude puts it.Footnote 15 Anne van Aaken emphasizes the relevance of bounded rationality theories, including concrete biases such as availability, anchoring, and confirmation, to the application and interpretation of international law generally, and in the context of armed conflicts in particular.Footnote 16
Availability bias occurs when people overstate the likelihood that a certain event will occur because it is easily recalled, making decisionmakers less sensitive to information that runs contrary to their expectations. This means that, under some circumstances—for example, in areas where insurgents have previously been identified—individuals depicted in drone visuals may be more likely to be interpreted as insurgents rather than as civilians. Anchoring bias occurs when the estimation of a condition is based on an initial value (an anchor) that might result from intuition, a guess, or other easily recalled information; the problem is that decisionmakers do not adjust sufficiently from this initial anchoring point. Confirmation bias refers to people's tendency to seek out and act upon information that confirms their existing beliefs, or to interpret information in a way that validates their prior knowledge. As a result, the interpretation of drone visuals may be skewed by decisionmakers’ existing expectations, and this confirmation may then serve as an (inaccurate) anchor for casualty estimates or target identification.
To demonstrate the potential effects of these cognitive biases on military decisionmakers, let us return to the August 29 attack on the white Toyota Corolla that killed Zemari Ahmadi, three of his children, and six other family members and neighbors. The investigation concluded that U.S. forces received information about a planned terror attack involving a white Toyota Corolla at a specified location near Kabul's international airport. Once that information was received, visuals of Mr. Ahmadi, who was driving a white Toyota Corolla, were interpreted consistently with this intelligence, and all of Mr. Ahmadi's subsequent movements and actions were interpreted as confirming this suspicion.
Similarly, erroneous subjective judgments—likely affected by availability bias—were found to have caused an Israel Defense Forces attack on civilians during Operation Cast Lead in January 2009.Footnote 17 On January 5, 2009, Israeli forces fired several projectiles at the Al-Samouni family house south of Gaza City, killing twenty-one civilians. The house was targeted following a drone visual that was misinterpreted as depicting five men holding RPGs at that location. An Israeli military investigation later found that the attack resulted from an erroneous reading of the drone visual, which in fact depicted the five men holding firewood. The technical limitations of the image left room for human judgment, which inserted subjectivity—and cognitive biases—into a seemingly objective visual.Footnote 18 My ongoing work in this space provides qualitative evidence from several additional investigations.Footnote 19
IV. Strengthening Compliance
Based on this analysis, strengthening compliance with IHL/LOAC's protective goal (as opposed to its contested standards) must include a new program focused on the behavioral elements of its technology-based knowledge production practices. In particular, it is essential to identify how drone visuals affect human risk assessments and to add tailored protections against these unconscious challenges. These may include a reconceptualization of the “duty of care” (as suggested by Moshe Hirsch in another context);Footnote 20 heightened visibility of internal disagreements about the interpretation of drone visuals; a rigorous inter-agency review process with the goal of offering alternative interpretations (similar to the idea of “red teams” in investigative journalism); training sessions that identify the concrete limits and blind spots of the technology (including relevant biases, such as automation bias); and a shift from individual to organizational accountability for technology-related failures.
This last point can lead to better compliance because it encourages individuals to identify their own errors without fear of retaliation. Of course, ex post investigations are themselves influenced by a number of cognitive biases, including outcome bias, as Tomer Broude and Inbar Levy demonstrate.Footnote 21 In my contribution to Andrea Bianchi and Moshe Hirsch's book International Law's Invisible Frames, I propose legal, epistemological, and behavioral ways to strengthen ex post military investigations, with a particular emphasis on ex post fact-finding processes.Footnote 22
While drone visuals hold much promise for evidence-driven risk assessments, visualization technologies may also jeopardize safety and security by masking data gaps and triggering unconscious cognitive biases. As governments around the world intensify their investments in sophisticated combat drones, it is essential to develop effective ways to better integrate these technologies into human decision-making processes while acknowledging the limitations of human cognition.