Double elevation: Autonomous weapons and the search for an irreducible law of war
Published online by Cambridge University Press: 16 March 2020
Abstract
What should be the role of law in response to the spread of artificial intelligence in war? Fuelled by both public and private investment, military technology is accelerating towards increasingly autonomous weapons, as well as the merging of humans and machines. Contrary to much of the contemporary debate, this is not a paradigm change; it is the intensification of a central feature in the relationship between technology and war: double elevation, above one’s enemy and above oneself. Elevation above one’s enemy aspires to spatial, moral, and civilizational distance. Elevation above oneself reflects a belief in rational improvement that sees humanity as the cause of inhumanity and de-humanization as our best chance for humanization. The distance of double elevation is served by the mechanization of judgement. To the extent that judgement is seen as reducible to algorithm, law becomes the handmaiden of mechanization. In response, neither a focus on questions of compatibility nor a call for a ‘ban on killer robots’ helps in articulating a meaningful role for law. Instead, I argue that we should turn to a long-standing philosophical critique of artificial intelligence, which highlights not the threat of omniscience, but that of impoverished intelligence. Therefore, if there is to be a meaningful role for law in resisting double elevation, it should be law encompassing subjectivity, emotion, and imagination, law irreducible to algorithm, a law of war that appreciates situated judgement in the wielding of violence for the collective.
- Type: Original Article
- Copyright: © Foundation of the Leiden Journal of International Law 2020
Footnotes
I am indebted to Delphine Dogot, Geoff Gordon, Kate Grady, Itamar Mann and, especially, Naz K. Modirzadeh. The research benefited from a collaborative grant on Law and Technologies of War received from the Harvard Law School Institute of Global Law and Policy. At the final stage of writing I further benefited from the detailed and insightful feedback from students at the Harvard Law School’s International Law Workshop, where I presented this article, as well as from the comments of Professors Gabriella Blum and William Alford. I am also grateful to the Leiden Journal’s editors and the anonymous reviewer.
References
1 A. Campolo et al., ‘AI Now 2017 Report’, available at ainowinstitute.org/AI_Now_2017_Report.pdf; V. Boulanin, ‘Mapping the Innovation Ecosystem Driving the Advance of Autonomy in Weapon Systems’, (2016) SIPRI Working Paper.
2 Kurzweil, R., The Singularity is Near: When Humans Transcend Biology (2005). See also the illuminating and entertaining reporting in O’Connell, M., To Be a Machine: Adventures among Cyborgs, Utopians, Hackers, and the Futurists Solving the Modest Problem of Death (2017).
3 Srnicek, N. and Williams, A., Inventing the Future: Postcapitalism and a World Without Work (2016).
4 Bostrom, N., Superintelligence: Paths, Dangers, Strategies (2014).
5 Arkin, R. C., Governing Lethal Behavior in Autonomous Robots (2009).
6 K. Anderson and M. Waxman, ‘Law and Ethics for Autonomous Weapons Systems: Why a Ban Won’t Work and How the Laws of War Can’, Stanford University, 2013, available at scholarship.law.columbia.edu/faculty_scholarship/1803.
7 See, for example, Human Rights Watch, ‘Losing Humanity: The Case Against Killer Robots’, 19 November 2012, available at www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots; Asaro, P., ‘On Banning Autonomous Weapons Systems: Human Rights, Automation and the Dehumanization of Lethal Decision-Making’, (2012) 94 International Review of the Red Cross 687.
8 O’Connell, supra note 2.
9 Fears of the effects of automation on the job market have featured, for example, on the covers of Der Spiegel magazine in March 1964, April 1978, and September 2016.
10 See www.stopkillerrobots.org.
11 See, on the 2019 meeting of the Group of Governmental Experts, established by the 2016 Fifth Review Conference of the High Contracting Parties to the Convention on Certain Conventional Weapons (CCW), and related documents, www.unog.ch/80256EE600585943/(httpPages)/5535B644C2AE8F28C1258433002BBF14?OpenDocument.
12 See the profile and interview of N. Bostrom in R. Khatchadourian, ‘The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?’, The New Yorker, 23 November 2015.
13 Simpson, G., ‘The End of the End of History: Some Epitaphs for Liberalism’, (2016) 15(1) Baltic Journal of International Law 332–43.
14 Noll, G., ‘Weaponising Neurotechnology: International humanitarian law and the loss of language’, (2014) 2(2) London Review of International Law 201, at 204.
15 Ibid. See also Suchman, L., Human-Machine Reconfigurations: Plans and Situated Actions (2007), 1: ‘cultural conceptions have material effects’.
16 See Sartor, G. and Omicini, A., ‘The autonomy of technological systems and responsibilities for their use’, in Bhuta, N. et al. (eds.), Autonomous Weapons Systems: Law, Ethics, Policy (2016), 39, at 49.
17 ‘[T]he degree of autonomy is often measured by relating the degree at which the environment can be varied to the mean time between failures, and other factors indicative of robot performance.’ Thrun, S., ‘Toward a framework for human-robot interaction’, (2004) 19(1) Human-Computer Interaction 9–24, at 14.
18 Mindell, D., Our Robots, Ourselves: Robotics and the Myths of Autonomy (2015), 12.
19 Boulanin, supra note 1, at 12. See also N. Sharkey, ‘Staying in the loop: human supervisory control of weapons’, in Bhuta et al., supra note 16, at 27 for a useful five-part categorization.
20 Sartor and Omicini, supra note 16, at 44–8.
21 Ibid., at 52.
22 See, e.g., Ossowski, S., Co-ordination in Artificial Agent Societies: Social Structure and Its Implications for Autonomous Problem-Solving Agents (1999).
23 See Scharre, P., Robotics on the Battlefield Part II: The Coming Swarm (2014); Rubenstein, M. et al., ‘Programmable Self-Assembly in a Thousand-Robot Swarm’, (2014) 345 Science 795.
24 See also C. Heyns, ‘Autonomous weapons systems: living a dignified life and dying a dignified death’, in Bhuta et al., supra note 16, 3, at 4; Ohlin, J. D., ‘The Combatant Stance: Autonomous Weapons on the Battlefield’, (2016) 92 International Law Studies 1 on ‘functional autonomy’. The idea of functional autonomy of course hails from Turing’s ‘imitation game’. See Turing, A. M., ‘Computing Machinery and Intelligence’, (1950) Mind: A Quarterly Review of Psychology and Philosophy 433.
25 US Department of Defense Directive 3000.09, 21 November 2012, available at www.hsdl.org/?abstract&did=726163, Glossary.
26 Human Rights Watch, ‘Losing Humanity: The Case Against Killer Robots’, supra note 7.
27 Examination of various dimensions of emerging technologies in the area of lethal autonomous weapons systems, in the context of the objectives and the purposes of the Convention, submitted by the Netherlands, CCW/GGE.1/2017/WP.2, 9 October 2017.
28 Geneva Academy, ‘Autonomous Weapon Systems under International Law’ (Academy Briefing no. 8, November 2014), at 6, available at www.geneva-academy.ch/joomlatools-files/docman-files/Publications/Academy%20Briefings/Autonomous%20Weapon%20Systems%20under%20International%20Law_Academy%20Briefing%20No%208.pdf.
29 For a number of examples see Lewis, D. et al., War-Algorithm Accountability (2016), at 34.
30 See generally Boulanin, supra note 1, at 26.
32 See also Samsung’s SGR-A1 sentry robots at www.defensereview.com/samsung-sgr-a1-armedweaponized-robot-sentry-or-sentry-robot-remote-weapons-station-rws-finally-ready-for-prime-time/.
33 See G. Reim, ‘Lockheed Martin delivers first Long Range Anti-Ship Missiles’, Flight Global, 20 December 2018, available at www.flightglobal.com/news/articles/lockheed-martin-delivers-first-long-range-anti-ship-454597/.
35 Boulanin, supra note 1, at 50–55. See also D. Brennan, ‘Suicide Drones: Are Tiny Missiles That “Loiter” in the Air for Hours the Future of Assassination Wars?’, Newsweek, 22 February 2019, available at www.newsweek.com/drones-suicide-kamikaze-war-assassination-missile-uav-war-1340751.
36 See A. Rapaport, ‘Loitering Munitions alter the Battlefield’, IsraelDefense, 30 June 2016, available at www.israeldefense.co.il/en/content/loitering-munitions-alter-battlefield.
38 C. Pellerin, ‘Project Maven Industry Day Pursues Artificial Intelligence for DoD Challenges’, Department of Defense News, 27 October 2017, available at www.defense.gov/News/Article/Article/1356172/project-maven-industry-day-pursues-artificial-intelligence-for-dod-challenges/.
39 See Deputy Secretary of Defense’s Memorandum on the Establishment of an Algorithmic Warfare Cross-Functional Team (Project Maven), 26 April 2017, available at www.govexec.com/media/gbc/docs/pdfs_edit/establishment_of_the_awcft_project_maven.pdf; D. Lewis, N. Modirzadeh and G. Blum, ‘The Pentagon’s New Algorithmic-Warfare Team’, Lawfare, 26 June 2017, available at www.lawfareblog.com/pentagons-new-algorithmic-warfare-team.
40 The Department of Defense’s Algorithmic Warfare Cross-functional Team (AWCFT)’s ‘objective is to turn the enormous data available to DoD into actionable intelligence and insights at speed’. See Deputy Secretary of Defense’s Memorandum on the Establishment of an Algorithmic Warfare Cross-Functional Team, ibid.
41 Summary of the 2018 US Department of Defense Artificial Intelligence Strategy, released 12 February 2019, available at media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF.
42 Memorandum on the Establishment of the Joint Artificial Intelligence Center, 27 June 2018, available at admin.govexec.com/media/establishment_of_the_joint_artificial_intelligence_center_osd008412-18_r....pdf.
43 US Department of Defense AI Strategy, at 11. See also, on the priority of automation in geolocation, Ekelhof, M., ‘Lifting the Fog of Targeting: “Autonomous Weapons” and Human Control through the Lens of Military Targeting’, (2018) 71(3) Naval War College Review 61, at 80 and footnote 107 (reporting an interview with an NGA technical director and big-data specialist).
44 Boulanin, supra note 1, at 17. DARPA’s TRACE program is described thus: ‘The Target Recognition and Adaption in Contested Environments (TRACE) program seeks to develop an accurate, real-time, low-power target recognition system that can be co-located with the radar to provide responsive long-range targeting for tactical airborne surveillance and strike applications.’ See www.darpa.mil/program/trace. See also the modestly named ‘Imaging Through Almost Anything Anywhere (ITA3)’ program at www.darpa.mil/program/fast-lightweight-autonomy.
46 ‘[T]he program aims to develop and demonstrate the capability for small (i.e., able to fit through windows) autonomous UAVs to fly at speeds up to 20 m/s (45 mph) with no communication links to the operator and without GPS guidance.’ See www.darpa.mil/program/fast-lightweight-autonomy.
47 See DARPA’s project ‘Communicating with Computers’ (CwC), at www.darpa.mil/program/communicating-with-computers.
50 See the Micro Autonomous Systems Technology (MAST) research program at www.mast-cta.org/.
51 Boulanin, supra note 1, at 17.
52 For the UK Defence Secretary’s statement on developing swarms see www.gov.uk/government/speeches/defence-in-global-britain.
56 See also Sankai, Y. and Sakurai, T., ‘Exoskeletal cyborg-type robot’, (2018) 3(17) Science Robotics, available at robotics.sciencemag.org/content/3/17/eaat3912.
57 US Department of Defense AI Strategy, at 12.
58 The Department of Defense’s reference to its own culture reflects a broader culture of business and risk. See US DoD AI Summary, at 14: ‘We are building a culture that welcomes and rewards appropriate risk-taking to push the art of the possible: rapid learning by failing quickly, early, and on a small scale.’
59 D. Wakabayashi and S. Shane, ‘Google Will Not Renew Pentagon Contract That Upset Employees’, New York Times, 1 June 2018, available at www.nytimes.com/2018/06/01/technology/google-pentagon-project-maven.html.
60 L. Fang, ‘Google hedges on promise to end controversial involvement in military drone contract’, The Intercept, 1 March 2019, available at theintercept.com/2019/03/01/google-project-maven-contract/.
61 See Boulanin, supra note 1 and Appendix C for a list of corporations.
62 See E. B. Kania, ‘Battlefield Singularity: Artificial Intelligence, Military Revolution, and China’s Future Military Power’, Center for a New American Security, 28 November 2017, available at www.cnas.org/publications/reports/battlefield-singularity-artificial-intelligence-military-revolution-and-chinas-future-military-power.
64 US Department of Defense Directive 3000.09, supra note 25. For the ‘troubling lacunae’ of the Directive, see D. Saxon, ‘A human touch: autonomous weapons, DoD Directive 3000.09 and the interpretation of “appropriate levels of human judgment over the use of force”’, in Bhuta et al., supra note 16, at 185.
65 This is reiterated, but not significantly elaborated, in the Department of Defense’s AI strategy, at 15. See also the interview with Shanahan in Ekelhof, M., ‘Lifting the Fog of Targeting: “Autonomous Weapons” and Human Control through the Lens of Military Targeting’, (2018) 71(3) Naval War College Review 61, at 85 and fn. 131 (where he allows that ‘it could very well be that, if a major conflict arises, all bets will be off, with states feeling forced into more reliance on autonomous systems because their adversaries are willing to take more risk’).
66 The UK Ministry of Defence stated: ‘UK policy is that the operation of weapons will always be under control as an absolute guarantee of human oversight, authority and accountability. The UK does not possess fully autonomous weapon systems and has no intention of developing them.’ See M. Savage, ‘Humans will always control killer drones, says ministry of defence’, The Observer, 10 September 2017, available at www.theguardian.com/politics/2017/sep/09/drone-robot-military-human-control-uk-ministry-defence-policy. However, a previous commander of the UK Joint Forces Command has expressed his scepticism that such pledges will be maintained. See B. Farmer, ‘Prepare for rise of “killer robots” says former defence chief’, Daily Telegraph, 27 August 2017, available at www.telegraph.co.uk/news/2017/08/27/prepare-rise-killer-robots-says-former-defence-chief/.
67 See the Russian statement at www.unog.ch/80256EDD006B8954/(httpAssets)/B7C992A51A9FC8BFC12583BB00637BB9/$file/CCW.GGE.1.2019.WP.1_R+E.pdf, at 4–5.
68 Vermaas, P. et al., A Philosophy of Technology: From Technical Artefacts to Sociotechnical Systems (2011), at 16, refer to the well-known National Rifle Association slogan ‘Guns don’t kill people. People kill people’ as an example of a ‘succinct way of summarising what is known as the neutrality thesis of technical artefacts’. See M. Heidegger, ‘The Question Concerning Technology’, in Basic Writings (2008 [1954]), 217, at 217: ‘Everywhere we remain unfree and chained to technology, whether we passionately affirm or deny it. But we are delivered over to it in the worst possible way when we regard it as something neutral.’
69 Agre, P., Computation and Human Experience (1997), at 240.
70 See, more broadly, Jasanoff, S. (ed.), States of Knowledge: The Co-Production of Science and the Social Order (2004); Jasanoff, S., ‘Technology as a Site and Object of Politics’, in Goodin, R. E. and Tilly, C. (eds.), The Oxford Handbook of Contextual Political Analysis (2006), 745.
71 Edwards, P., The Closed World: Computers and the Politics of Discourse in Cold War America (1997), at ix.
72 Parker, G., The Military Revolution: Military Innovation and the Rise of the West, 1500–1800 (1996).
73 Churchill, W., ‘The River War: An Account of the Reconquest of the Soudan (1899)’, in Headrick, D. R., The Tools of Empire (1981), 118.
74 See also Asad, T., On Suicide Bombing (2007), 34: ‘The modern Western army is concerned with engaging efficiently with dangerous, because underdeveloped, peoples, in ways that are at once ruthless and humane, in which brutal attack may become a civilizing sign.’
75 Munro, C., ‘Mapping the Vertical Battlespace: Towards a legal cartography of aerial sovereignty’, (2014) 2(2) London Review of International Law 233–61; Gregory, D., ‘From a View to a Kill: Drones and Late Modern War’, (2011) 28(7–8) Theory, Culture & Society 188; Moyn, S., ‘Drones and Imagination: A Response to Paul Kahn’, (2013) 24(1) EJIL 227.
76 Jaubert, A., ‘Zapping the Viet Cong by computer’, New Scientist, 30 March 1972, at 685, 687, available at books.google.co.uk/books?id=juOOP4nRFrQC&lpg=PP1&hl=EN&pg=PP1#v=onepage&q&f=false.
77 Ibid., at 688.
78 See Chamayou, G., Drone Theory (2015), Ch. 6.
79 Address by General W. C. Westmoreland, Chief of Staff, US Army, Annual Luncheon Association of the United States Army, Sheraton Park Hotel, Washington, DC, 14 October 1969 (Congressional Record, US Senate, 16 October 1969).
80 See Edwards, supra note 71, at 7.
81 F. Barnaby, ‘Towards tactical infallibility’, New Scientist, 10 May 1973, 348–54, at 351.
82 Gunneflo, M., Targeted Killing: A Legal and Political History (2016).
83 Cockburn, A., Kill Chain: Drones and the Rise of High-Tech Assassins (2016).
84 For their ‘mythical’ role in the production of a ‘new paradigm’ of law and war see Kalpouzos, I., ‘The Armed Drone’, in Hohmann, J. and Joyce, D. (eds.), International Law’s Objects (2018).
85 Munro, supra note 75; Gregory, supra note 75.
86 DARPA’s TRACE program sets out the logic quite clearly, using language which connects Vietnam’s electronic battlefield, through drone use, to escalated automation, while highlighting the problematic effects of distance on the network’s reliability, thus requiring the strengthening and further integration of a human/machine system: ‘In a target-dense environment, the adversary has the advantage of using sophisticated decoys and background traffic to degrade the effectiveness of existing automatic target recognition (ATR) solutions. Airborne strike operations against relocatable targets require that pilots fly close enough to obtain confirmatory visual identification before weapon release, putting the manned platform at extreme risk. Radar provides a means for imaging ground targets at safer and far greater standoff distances; but the false-alarm rate of both human and machine-based radar image recognition is unacceptably high’, available at www.darpa.mil/program/trace.
87 The arguably overused phrase hails from Kuhn, T., The Structure of Scientific Revolutions (1962), where he argues that instead of viewing science as a rational cumulative process, it should be understood as entailing ‘intellectual revolutions’ where ‘one conceptual world is replaced by another’ (at 10).
88 Daston, L. and Galison, P., Objectivity (2007), 49.
89 Wiesner, J., chairman of the Science Advisory Committee to President John F. Kennedy, quoted in G. Allison and F. Morris, ‘Armaments and Arms Control: Exploring the Determinants of Military Weapons’, in Long, F. and Rathjens, G. (eds.), Arms, Defence Policy, and Arms Control (1976), at 119.
90 Rid, T., Rise of the Machines: The Lost History of Cybernetics (2017), Ch. 1.
91 Wiener, N., Cybernetics: or Control and Communication in the Animal and the Machine (2013 [1948]).
92 See also Arvidsson, M., ‘Targeting, Gender, and International Posthumanitarian Law and Practice: Framing the Question of the Human in International Humanitarian Law’, (2018) 44(1) Australian Feminist Law Journal 9. See also para. 22 of Annex III of the 2018 GGE Report, where the ‘human’ is the only stable, and unproblematized, parameter in the discussion of meaningful human control over autonomous weapons.
93 Galison, P., ‘The Ontology of the Enemy: Norbert Wiener and the Cybernetic Vision’, (1994) 21(1) Critical Inquiry 228. Galison points out, at 231, that, alongside cybernetics, this idea of a calculating enemy also motivated the development of game theory.
94 Edwards, supra note 71, at 75.
95 Valverde, M. and Lomas, M., ‘Insecurity and Targeted Governance’, in Larner, W. and Walters, W. (eds.), Global Governmentality (2004), 233, at 245.
96 Weber, M., ‘The Vocation of Science’, in Whimster, S. (ed.), The Essential Weber: A Reader (2004), 270.
97 See Weber, J., ‘Keep adding. On kill lists, drone warfare and the politics of databases’, (2016) 34(1) Society and Space 107–25.
98 See O’Connell, supra note 2, at 142: ‘If we want to be more than mere animals, we need to embrace technology’s potential to make us machines.’
99 See Husbands, P. et al., ‘Introduction: The Mechanical Mind’, in Husbands, P. et al. (eds.), The Mechanical Mind in History (2008).
100 On the function of this promise in relation to the object of the drone, see Kalpouzos, supra note 84.
101 See International Human Rights and Conflict Resolution Clinic (Stanford Law School) and Global Justice Clinic (NYU School of Law), Living Under Drones: Death, Injury, and Trauma to Civilians From US Drone Practices in Pakistan (September 2012).
102 On the practice of signature strikes see Heller, K. J., ‘One Hell of a Killing Machine: Signature Strikes and International Law’, (2013) Journal of International Criminal Justice 89.
103 Such statements have been made especially in the context of drones. See Brennan, J. (Assistant to the President for Homeland Security and Counterterrorism), ‘The efficacy and ethics of US counterterrorism strategy’, in Jaffer, J. (ed.), The Drone Memos (2016), 199, at 207: ‘it is hard to imagine a tool that can better minimize the risk to civilians’. This is also increasingly promised in the context of autonomous weapons. See the US submission at the 2018 GGE at www.unog.ch/80256EDD006B8954/(httpAssets)/7C177AE5BC10B588C125825F004B06BE/$file/CCW_GGE.1_2018_WP.4.pdf and the ‘Remarks by Defense Department General Counsel Paul C. Ney Jr. on the Law of War’, Just Security, 28 May 2019, available at www.justsecurity.org/64313/remarks-by-defense-dept-general-counsel-paul-c-ney-jr-on-the-law-of-war/.
104 Schmitt, M., ‘Autonomous Weapons Systems and International Humanitarian Law: A Reply to Critics’, (2013) 4 Harvard National Security Journal 1; Anderson, K. and Waxman, M., ‘Debating Autonomous Weapon Systems, Their Ethics, and Their Regulation Under International Law’, in Brownsword, R. et al. (eds.), The Oxford Handbook of Law, Regulation and Technology (2017), at 1097; see Arkin, supra note 5; Beard, J., ‘The Principle of Proportionality in an Era of High Technology’, in Ford, C. and Williams, W. (eds.), Complex Battlespaces: The Law of Armed Conflict and the Dynamics of Modern Warfare (2019).
105 See, for example, the argument by Margulies that the superior pattern recognition capabilities of autonomous weapons will help in ‘mapping affinities that can ripen into terrorist affiliations’ and can therefore ‘be immensely helpful in identifying previous unknown followers of ISIS or other groups and implementing a targeting plan’. See Margulies, P., ‘Making autonomous weapons accountable: command responsibility for computer-guided lethal force in armed conflicts’, in Ohlin, J. D. (ed.), Research Handbook on Remote Warfare (2017), 405, at 422–3.
106 I. S. Henderson et al., ‘Remote and Autonomous Warfare Systems - Precautions in Attack and Individual Accountability’, in Ohlin, ibid., at 335.
107 See Professor Mary Cummings in Annex III of the 2018 GGE Report, para. 26: ‘Due to the innate neuro-muscular lag of humans to perceive and act upon a situation … [LAWS] would be far more discriminatory provided existing computer perception issues were sorted out.’
108 See Sassoli, M., ‘Autonomous Weapons and International Humanitarian Law: Advantages, Open Technical Questions and Issues to be Clarified’, (2014) 91 International Law Studies 308, at 320 (arguing that, to the extent that autonomous weapons may be better at taking precautions, commanders may have an obligation to use them).
109 The US Department of Defense Artificial Intelligence Strategy, at 16, concurs: ‘We will seek opportunities to use AI to enhance our implementation of the Law of War. AI systems can provide commanders more tools to protect non-combatants via increased situational awareness and enhanced decision support.’
110 See Chappelle, W. et al., ‘An analysis of post-traumatic stress symptoms in United States Air Force drone operators’, (2014) 28(5) Journal of Anxiety Disorders 480; Chamayou, supra note 78, at 117–19.
111 See Arkin, supra note 5, at xvi. See also Chamayou, ibid., at 208–9 on the distinction between ontological and axiological humanity.
112 See this position in the otherwise critical report of Special Rapporteur Heyns, at para. 54: ‘[Lethal Autonomous Robots] will not be susceptible to some of the human shortcomings that may undermine the protection of life. Typically, they would not act out of revenge, panic, anger, spite, prejudice or fear. Moreover, unless specifically programmed to do so, robots would not cause intentional suffering on civilian populations, for example through torture. Robots also do not rape.’ See K. Bergtora Sandvik and K. Lohne, ‘Lethal Autonomous Weapons: Killing the “Robots-don’t-Rape” Argument’, IntLawGrrls Blog, 5 August 2015, available at ilg2.org/2015/08/05/lethal-autonomous-weapons-killing-the-robots-dont-rape-argument/; J. Turner, ‘We should regulate, not ban killer robots’, Spectator, 28 August 2017, available at blogs.spectator.co.uk/2017/08/we-should-regulate-not-ban-killer-robots/.
113 See Arkin, supra note 5, at 29.
114 Ibid., with reference to M. Walzer, Just and Unjust Wars.
115 See Franchi, S. and Güzeldere, G., ‘Machinations of the Mind: Cybernetics and Artificial Intelligence from Automata to Cyborgs’, in Franchi, S. and Güzeldere, G. (eds.), Mechanical Bodies, Computational Minds: Artificial Intelligence from Automata to Cyborgs (2005), 15, at 40–1.
116 See Beeley, P., ‘Leibniz and Hobbes’, in Look, B. (ed.), The Bloomsbury Companion to Leibniz (2014), 32–51. See also the influence of Leibniz on Charles Babbage and the creation of his Analytical Engine, the first generally programmable machine, in Bullock, S., ‘Charles Babbage and the Emergence of Automated Reason’, in Husbands, P. et al. (eds.), The Mechanical Mind in History (2008), 19–40.
117 While Descartes was not, as a whole, a materialist and believed that reason was beyond the reach of mere machines, his role in mechanistic thinking is explored in Wheeler, M., ‘God’s Machines: Descartes and the Mechanization of Mind’, in Husbands et al., ibid., at 307.
118 See Williams, R., ‘Introduction’, in Williams, R. and Robinson, D. (eds.), Scientism: The New Orthodoxy (2015); Boudry, M. and Pigliucci, M. (eds.), Science Unlimited? The Challenges of Scientism (2017).
119 This is also at the centre of Noll’s discussion of neurotechnology. See supra note 14, at 219–23 with some reference to the literature critiquing ‘degenerate Cartesianism’ in the philosophy of neuroscience.
120 Dreyfus, H., What Computers Still Can’t Do: A Critique of Artificial Reason (1992), xi.
121 Boden, M., Mind as Machine: A History of Cognitive Science, vols. I & II (2006).
122 See Arkin, supra note 5.
123 Conversely, idealized versions of the law often feed blanket opposition to autonomous weapons systems, as will be discussed further below.
124 Wagner, M., ‘The Dehumanization of International Humanitarian Law: Legal, Ethical and Political Implications of Autonomous Weapons Systems’, (2014) 47 Vanderbilt Journal of Transnational Law 1371, at 1393.
125 For a recent attempt at articulating the basic principles of IHL as a set of mathematical formulae see Schmitt, M. and Schauss, Major M., ‘Uncertainty in the Law of Targeting: Towards a Cognitive Framework’, (2019) Harvard National Security Journal 148. It should be noted that the authors are careful to point out, at 152, that ‘[t]he formulae should not be viewed as an attempt to reduce targeting decisions to mechanical deterministic calculations’ and that their formulations aim to inform the choices of operators, rather than to be imposed through an algorithm.
126 See Arkin, supra note 5, and, for example, his programming algorithm for the principle of proportionality at 186.
127 The expert contribution of W. Boothby at the 2015 CCW Meeting of Experts is typical: ‘We do not know whether future technology may produce weapon systems that can out-perform humans in protecting civilians and civilian objects. It would in my view be a mistake to try to ban a technology on the basis of its current shortcomings, when in future it may actually enable the law to be complied with more reliably than now.’ Available at www.unog.ch/80256EDD006B8954/(httpAssets)/616D2401231649FDC1257E290047354D/$file/2015_LAWS_MX_BoothbyS+Corr.pdf.
128 See Williams, R., ‘Introduction’, in Williams, R. and Robinson, D. (eds.), Scientism: The New Orthodoxy (2015), 1, at 2: ‘Technology is the ultimate pragmatism, and in a world dominated by technology, and intoxicated by technological solutions to practical problems, everything is viewed as “standing in reserve,” ready to be used by or subjected to some sort of technology or technological process.’
129 See generally the work of G. Sartor and the editors and contributors in Artificial Intelligence and Law.
130 See Sartor, G., ‘Artificial Intelligence in Law and Legal Theory’, (1992) 10 Current Legal Theory 1.
131 Ibid., at 2: ‘it is true that the analyses of legal theory have rarely attained the level of specificity and precision required by computability, especially because the use of formal methods in law has been limited to a few exceptions. Nevertheless, even informal analyses can be of enormous importance for AI, which is exposed to simplifications and reductionisms also because of its need for formalization’.
132 Ibid., at 23.
133 Ibid., at 36.
134 Condorcet, Sketch for a Historical Picture of the Progress of the Human Mind (1795).
135 See Alschner, W., ‘The Computational Analysis of International Law’, in Deplano, R. and Tsagourias, N. (eds.), Research Methods in International Law: A Handbook (2019), for the developing techniques in the ‘mining’ of ‘international law as data’.
136 See Wiener, N., The Human Use of Human Beings: Cybernetics and Society (1950); Wiener, N., ‘Some moral and technical consequences of automation’, (1960) 131(3410) Science 1355. This is developed in more philosophical depth in his God & Golem, Inc.: A Comment on Certain Points where Cybernetics Impinges on Religion (1964).
137 One of these was Alice Mary Hilton whose Logic, Computing Machines and Automation (1963) placed its hopes on automation for ‘human beings [to] become truly civilised’. See Wiener’s letter of 8 March 1963 in Rid, supra note 90, at 103.
138 See Kahn, P., ‘The Paradox of Riskless Warfare’, (2002) Philosophy & Public Policy Quarterly 2.
139 See Chamayou, supra note 78, at 205.
140 Berkowitz, R., ‘Drones and the Question of the “Human”’, (2014) 28(2) Ethics & International Affairs 159, at 169.
141 See Kurzweil, supra note 2.
142 See O’Connell, supra note 2 for interviews with current adherents of the faith.
143 For example, Bostrom, supra note 4. Importantly, Bostrom was a member of the transhumanist movement; his book has recently been recommended by techno-optimists such as Elon Musk and Bill Gates; and his Oxford Project is funded, partly, by the former.
144 See www.stopkillerrobots.org; see also Human Rights Watch, ‘Making the Case: The Dangers of Killer Robots and the Need for a Preemptive Ban’, 9 December 2016, available at www.hrw.org/report/2016/12/09/making-case/dangers-killer-robots-and-need-preemptive-ban.
145 J. Kellenberger, ‘International humanitarian law and new weapon technologies’, International Institute of Humanitarian Law, September 2011, available at www.unog.ch/80256EDD006B8954/(httpAssets)/F77AF42ED509F890C1257CD90025183F/$file/IHL+&+new+weapon+technologies_Sanremo.pdf.
146 Dreyfus, H., What Computers Can’t Do (1972), at 190.
147 Indeed, the recently released first US Department of Defense Artificial Intelligence Strategy Summary gestures, at 15, towards ‘a global set of military AI guidelines’.
148 Dourish, P., Where the Action Is: The Foundations of Embodied Interaction (2004), at vii.
149 See Noll, supra note 14, at 220–3.
150 Ibid., at 223.
151 These were no bloodless academic affairs. The personal nature of the debate arguably reflected fundamental differences in respective epistemologies and can be seen in the introductions to the different editions of Dreyfus’s What Computers (Still) Can’t Do, the reviews of the book’s re-issue in the special issue of vol. 80 (1996) of Artificial Intelligence, and Boden, supra note 121, Ch. 11.
152 For a discussion of different (‘strong’ and ‘weak’) representationist theories see Egan, F., ‘Representationism’, in Margolis, E. et al. (eds.), The Oxford Handbook of Philosophy of Cognitive Science (2012), 250.
153 The term denotes the first phase of artificial intelligence, from the 1950s to the 1990s, also referred to as ‘symbolic AI’. It was coined by Haugeland, J. in Artificial Intelligence: The Very Idea (1985).
154 H. Dreyfus, Alchemy and Artificial Intelligence (1965), available at www.rand.org/pubs/papers/P3244.html.
155 See Dreyfus, supra note 120.
156 Ibid., at xvi–xvii.
157 Ibid., at 4–27.
158 Borges, J. L., The Aleph and Other Stories (1945), at 181.
159 See, for example, A. Haque’s point that the disagreements on proportionality ‘are substantive, not semantic, and rooted in its contestability, not its vagueness’, in ‘Indeterminacy and the Law of Armed Conflict’, (2019) 95 International Law Studies 15.
160 One of the first cognitive scientists to appreciate Dreyfus’s critique recognizes that phenomenologists ‘point out that in such an approach we are no longer using the theoretical terms in their original authentic sense: We have abandoned the original problem and find ourselves talking about a counterfeit world in place of the one we initially intended to study – a formal world of logic, mathematics and “bits” of information in place of a human world of experience, knowledge and purpose’. See Pylyshyn, Z., ‘Minds, machines and phenomenology: Some reflections on Dreyfus’ “What computers can’t do”’, (1974) 3(1) Cognition 57, at 63.
161 Dreyfus, supra note 120, at xv.
162 See also Berkowitz, supra note 140, at 165, on the question of drone-generated art.
163 Dreyfus, H., ‘Why Heideggerian AI failed and why fixing it would require making it more Heideggerian (2007)’, in Dreyfus, H. and Wrathall, M., Skillful Coping: Essays on the phenomenology of everyday perception and action (2014), 250, at 272–3.
164 See especially Weizenbaum, J., Computer Power and Human Reason: From Judgment to Calculation (1976).
165 Ibid., at 203.
166 McCarthy’s answer was ‘nothing’, as he thought ‘all human knowledge could be formally represented by AI’. See Boden, supra note 121, at 851. For the work of McCarthy, ‘father of AI’ according to his own website, see jmc.stanford.edu/. McCarthy was also one of the least patient reviewers of Dreyfus’s work. See McCarthy, J., ‘Review of Dreyfus, What Computers Still Can’t Do’, (1996) 80 Artificial Intelligence 143.
167 See Weizenbaum, supra note 164, 226–7.
168 See Collins, H., ‘Embedded or embodied? A review of Dreyfus, What Computers Still Can’t Do’, (1996) 80 Artificial Intelligence 99, at 101–5, referring to the late Wittgenstein.
169 Collins, H., Artifictional Intelligence (2018), 5.
170 On the relative under-theorization of the laws of war because of the pro-active and pragmatic nature of humanitarianism see Mégret, F., ‘Theorizing the Laws of War’, in Orford, A. and Hoffmann, F. (eds.), The Oxford Handbook of the Theory of International Law (2016), 762.
171 See, in this direction and using cybernetic theory, L. Suchman and J. Weber, ‘Human-Machine Autonomies’, in Bhuta et al., supra note 16, at 72. See also Liu, H.-Y., ‘From the Autonomy Framework towards Networks and Systems Approaches for “Autonomous” Weapons Systems’, (2019) 10 Journal of International Humanitarian Legal Studies 89.
172 On discretion focusing on autonomous weapons see E. Lieblich and E. Benvenisti, ‘The obligation to exercise discretion in warfare: why autonomous weapons systems are unlawful’, in Bhuta et al., ibid., at 245.
173 See, for a recent example, Deland, M. et al. (eds.), International Humanitarian Law and Justice: Historical and Sociological Perspectives (2019), especially R. Sutton, ‘A Hidden Fault Line: How International Actors Engage with International Law’s Principle of Distinction’.
174 For an essay-length introduction focusing on judging see M. Del Mar, ‘The Legal Imagination’, 2017, Aeon, available at aeon.co/essays/why-judges-and-lawyers-need-imagination-as-much-as-rationality.
175 Modirzadeh, N., ‘Cut These Words: Passion and International Law of War Scholarship’, (2019) Harvard International Law Journal (forthcoming).