I Mapping the Field: Preliminary Remarks
Technological innovations are likely to increase the frequency of human–robot interactions in many areas of social and economic relations and humans’ private lives. Criminal law theory and legal policy should not ignore these innovations. Although the main challenge is to design civil, administrative, and soft law instruments to prevent harm in human–robot interactions and to compensate victims, the developments will also have some impact on substantive criminal law. Criminal lawsFootnote 1 should be scrutinized and, if necessary, amendments and adaptations recommended, taking the two dimensions of criminal law and criminal law theory, the preventive and the retrospective, into account.
The prevention of accidents is obviously one of the issues that needs to be addressed, and regulatory offenses in the criminal law could contribute to this end. Regulatory offenses are part of a larger legal toolbox that can be called upon to prevent risks and harms caused by malfunctioning technological innovations and unforeseen outcomes of their interactions with human users (see Section II.A). In addition to the risk of accidents, some forms of human–robot interaction, such as automated weapon systems and sex robots, are also criticized for other reasons, which invites the question of whether these types of robots should be banned (Section II.B). If we turn to the second, retrospective dimension of criminal law, the major question, again, is liability for accidents. Under what conditions can humans who constructed, programmed, supervised, or used a robot be held criminally liable for harmful outcomes caused by the robot (Section III.A)? Other questions are whether existing criminal laws can be applied to humans who commit crimes with robots as tools (Section III.B), how dilemmatic situations should be evaluated (Section III.C), and whether self-defense against robots is possible (Section III.D). From the perspective of criminal law theory, the scope of inquiry should be even wider and extend beyond questions of criminal liability of humans for harmful events involving robots. Might it someday be possible for robots to incur criminal liability (Section III.E)? Could robots be victims of crime (Section III.F)? And, as robots become increasingly involved in the day-to-day life of humans and become subject to legal responsibility, might this also have a long-term impact on how human–human interactions are understood (Section IV)?
The purpose of this introductory chapter is to map the field in order to structure current and future discussions about human–robot interactions as topics for substantive criminal law. Marta Bo, Janneke de Snaijer, and Thomas Weigend analyze some of these issues in more depth in their chapters. Before we turn to the mapping exercise, the term “robot” deserves some attention,Footnote 2 including delineation from the broader concept of artificial intelligence (AI). Per the Introduction to the volume, which references the EU AI Act, AI is “software that is developed with one or more of [certain] approaches and techniques … and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”Footnote 3 The consequences of the growing use of information technology (IT) and AI are discussed in many areas of law and legal policy.Footnote 4 In the field of criminal justice, AI systems can be utilized at the pre-trial and sentencing stages as well as for making decisions about parole, to provide information on the risk of reoffending.Footnote 5 Whether these systems analyze information more accurately and comprehensively than humans, and the degree to which programs based on machine learning inherit biases, are issues under discussion.Footnote 6 The purpose of this volume is not to examine the relevance of these new technologies to criminal law and criminal justice in general; the focus is somewhat narrower. Robots are the subject. Entities that are called robots can be based on machine learning techniques and AI, technologies already in use today, but they also have another crucial feature. They are designed to perform actions in the real worldFootnote 7 and thus must usually be embodied as physical objects. It is primarily this ability to interact physically with environments, objects, and the bodies of humans that calls for safeguards.
II The Preventive Perspective: Regulating Human–Robot Interactions
II.A Preventing Accidents
Regulation is necessary to prevent accidents caused by malfunctioning robots and unforeseen interactive effects. Some of these rules might need to be backed up by sanctions. It is almost impossible to say much more on a general level about potential accidents and what should be prohibited or regulated to minimize the risk of harm, as a more detailed analysis would require covering a vast area. The exact nature of important “dos and don’ts” that might warrant enforcement by criminal laws obviously depends on the kinds of activities that robots perform, e.g., in manufacturing, transportation, healthcare, households, and warfare, and the potential risks involved. The more complex a robot’s task, the more that can go wrong. The kind and size of potential harm depends, among other things, on the physical properties of robots, such as weight and speed, the frequency with which they encounter the general public, and the closeness of their operations to human bodies. Autonomous vehicles and surgical robots, e.g., require tighter regulation than robot vacuum cleaners.
The task of developing proper regulations for potentially dangerous human–robot interaction is challenging. It begins with the need to determine the entity to whom rules and prohibitions are addressed: manufacturers; programmers; those who rely on robots as tools, such as owners or users; third parties who happen to encounter robots, e.g., in the case of automated cars, other road users; or malevolent intruders who, e.g., hack computer systems or otherwise manipulate the robot’s functions. Another question is who can – and who should – develop legal standards. Not only legislatures, but also criminal and civil courts can and do contribute to rule-setting. Their rulings, however, generally target a specific case. Systematic and comprehensive regulation seems to call for legislative action. But before considering the enactment of new laws, attention should be paid to existing criminal laws, i.e., general prohibitions that protect human life, bodily integrity, property, etc. These prohibitions can be applied to some human failures that involve robots, but due to their unspecific wording and broad scope, they do not give sufficient guidance for our scenarios. More specific norms of conduct, norms tailored to the production, programming, and use of robots, would certainly be preferable. This leads again to the question of what institution is best situated to develop these norms of conduct. This task requires constant attention to and monitoring of rapid technological developments and emerging trends in robotics. Ultimately, traditional modes of regulation by means of laws might not be ideally suited to respond effectively to emerging technologies. Another major difficulty is that regulations in domestic laws do not make much sense for products circulating in global markets. This may prompt efforts to harmonize national laws.Footnote 8 As an alternative, soft law in the form of standards and guidelines proposed by the private sector or regulatory agencies might be a way to achieve faster and perhaps also more universal agreement among the producers and users of robots.Footnote 9
For legal scholars and legal policy, the upshot is that we should probably not expect too much from substantive criminal law as an instrument to control the use of new technologies. Effective and comprehensive regulation to prevent harm arising out of human–robot interactions, and the difficult task of balancing societal interest in the services provided by robots against the risks involved, do not belong to the core competencies of criminal law.
II.B Beyond Accidents
Beyond the prevention of accidents, other concerns might call for criminal prohibitions. If there are calls to suppress certain conduct rather than to regulate it, the criminal law is a logical choice. Strict prohibitions would make sense if one were to fundamentally object to the creation of AI and autonomous robots, in part because the long-term consequences for humankind might be serious,Footnote 10 although it may be too late for that in some instances. A more selective approach would be to demand not a categorical decision against all research in the field of AI and the production of advanced robots in general, but rather efforts to suspend researchFootnote 11 or to stop the production of some kinds of robots. An example of the latter approach would be prohibiting devices that apply deadly force against humans, such as remotely controlled or automated weapons systems, addressed in this volume by Marta Bo.Footnote 12 Not only is the possibility of accidents a particularly serious concern in this area, but also the reliability of target identification, the precision of application, and the control of access are of utmost importance. Even if autonomous weapon systems work as intended, they might in the long run increase the death toll in wars, and ethical doubts regarding war might grow if the humans responsible for aggressive military operations do not face personal risks.Footnote 13 Arguments that point to the risk of remote harm are often based on moral concerns. This is most evident in the discussions about sex robots. Should sex robots in general or, more particularly, sex robots that imitate stereotypical characteristics of female prostitutes, be banned?Footnote 14 The proposition of such prohibitions would need to be supported by strong empirical and normative arguments, including explanations as to why sex robots are more problematic than sex dolls, whether it is plausible to expect such robots to have negative effects on a sizable number of persons, why sexual activity involving humans and robots is morally objectionable, and even if convincing arguments of this kind could be made, why the state should engage in the enforcement of norms regarding sexual morality.
For legal theorists, it is also interesting to ask whether, at some point, policy debates will no longer focus solely on remote harms to other human beings, collective human concerns such as gender equality, or human values and morals, but will instead expand to include the interests or rights of individual robots as well. Take the example of sex robots. Could calls to prohibit sexual interactions between humans and robots refer to the dignity of the robot and its right to dignity? Might we experience a re-emergence of debates about slavery? At present, it would certainly be premature to claim that humans and robots should be treated as equivalent, but discussions about these issues have already begun.Footnote 15 As long as robots are distinguishable from humans in several dimensions, such as bodies, social competence, and emotional expressivity, it is unlikely that the rights humans grant one another will be extended to them. As long as there are no truly humanoid robots, i.e., robots that resemble humans in all or most physiological and psychological dimensions,Footnote 16 tremendous cognitive abilities alone are unlikely to trigger widespread demands for equal treatment such as the recognition of robots’ rights. For the purpose of this introductory chapter, it must suffice to point out that thinking in this direction would also be relevant to debates concerning the need to criminalize selected conduct in order to protect the interests of robots.
III The Retrospective Perspective: Applying Criminal Law to Human–Robot Interactions
The harmful outcomes of human–robot interactions not only provide an impetus to consider creating preventive regulation. Harmful outcomes can also give rise to criminal investigations and, ultimately, to proceedings against the humans involved. The criminal liability of robots is also discussed below.
III.A Human Liability for Unforeseen Accidents
III.A.1 Manufacturers and Programmers
If humans have been injured or killed through interaction with a robot, if property has been damaged, or if other legally protected rights have been disregarded, questions of criminal liability will arise. It could, of course, be argued that the more pressing issue is effective compensation, a goal achievable by means of tort law and mandatory insurance, perhaps in combination with the legal construct of robots as “electronic persons” with their own assets.Footnote 17 Serious accidents, however, are also likely to engage criminal justice officials who need to clarify whether a human suspect or, depending on the legal system, a corporation has committed a criminal offense.
The first group of potential defendants could be those who built and programmed the robot. If the applicable criminal law does not include a strict liability regulatory offense, criminal liability will depend on the applicability of general norms, such as those governing negligent or reckless conduct. The challenges for prosecutors and courts are manifold, and they include establishing causality, attributing outcomes to acts and omissions, and specifying the standard of care that applied to the defendant’s conduct.Footnote 18 Determining the appropriate standard of care requires knowledge of what could have been done better on the technical level. In addition, difficult, wide-ranging normative considerations are relevant. How much caution do societies require, and how much caution may they require when innovative products such as robots are introduced?Footnote 19 As a general rule, standards of care should not be so strict as to have a chilling effect on progress, since manufacturers and programmers can relieve humans of manual, tiresome, and tedious work, robots can compensate for the lack of qualified employees in many areas, and the overall effect of robot use can be beneficial to the public, e.g., by reducing traffic accidents once the stage of automated driving has been reached. Such fundamental issues of social utility should be one criterion when determining the standards of care upon which the criminal liability of manufacturers and programmers are predicated.Footnote 20
Marta Bo focuses on the criminal liability of programmers in Chapter 2, “Are Programmers in or out of Control? The Individual Criminal Responsibility of Programmers of Autonomous Weapons and Self-Driving Cars.” She asks whether programmers could be accused of crimes against persons if automated cars or automated weapons cause harm to humans or if the charge of indiscriminate attacks against civilians can be made. She describes the challenges facing programmers of automated vehicles and autonomous weapons and discusses factors that can undermine their control over outcomes. She then turns her attention to legal assessments, including criteria such as actus reus, the causal nexus between programming and harm caused by automated vehicles and autonomous weapons, and negligence standards. Bo concludes that it is possible to use criminal law criteria for imputation to test whether programmers had “meaningful human control.”
An obvious challenge for criminal law assessment is to determine the degree to which, in the case of machine learning, programmers can foresee developments in a robot’s behavior. If the path from the original algorithm to the robot’s actual conduct cannot be reconstructed, it might be worth considering whether the mere act of exposing humans to encounters with a somewhat unpredictable and thus potentially dangerous robot could, without more, be labeled criminally negligent. While this might be a reasonable approach when such robots first appear on the market, the question of whether it would be a good long-term solution merits careful consideration. It seems preferable to focus on strict criteria for licensing self-learning robots, and on civil law remedies such as compensation that do not require proof of individual negligence, and abandon the idea of criminal punishment of humans just for developing and marketing robots with self-learning features.
III.A.2 Supervisors and Users
Humans who are involved in a robot’s course of action in an active cooperative or supervisory way could, if an accident occurs, incur criminal liability for recklessness or negligence. Again, for prosecutors and courts, a frequent problem will be to identify the causes of an accident and the various roles of the numerous persons involved in the production and use of the robot. A “diffusion of responsibility”Footnote 21 is almost impossible to avoid. Also, the question will arise as to what can realistically be expected of humans when they supervise and use robots equipped with AI and machine learning technology. How can they keep up with self-learning robots if the decision-making processes of such robots are no longer understandable and their behavior hard to predict?Footnote 22
In Chapter 3, “Trusting Robots: Limiting Due Diligence Obligations in Robot-Assisted Surgery under Swiss Criminal Law,” Janneke de Snaijer describes one area where human individuals might be held criminally liable as a consequence of using robots. She focuses on the potential and the challenges of robot-assisted surgery. The chapter introduces readers to a technology already in use in operating rooms: that of automated robots helping surgeons achieve greater surgical precision. These robots can perform limited tasks independently, but are not fully autonomous. De Snaijer concentrates primarily on criminal liability for negligence, which depends on how the demands of due diligence are defined. She describes general rules of Swiss criminal law doctrine that provide some guidelines for requirements of due diligence. The major problem she identifies is how much trust surgeons should be allowed to place in the functioning of the robots with which they cooperate. Concluding that Swiss law holds surgeons accountable for robots’ actions to an unreasonable degree, she diagnoses contradictory standards in that surgeons are held responsible but required by law to use new technology to improve the quality of surgery.
In other contexts, robots are given the task of monitoring those who use them, e.g., by detecting fatigue or alcohol consumption, and, if need be, issuing warnings. Under such circumstances, a human who fails to heed a warning and causes an accident may face criminal liability. Presuming negligence in such cases might have the effect of establishing a higher standard for humans carrying out an activity while under the surveillance of a robot than for humans carrying out the same activity without the surveillance function. It might also mean that the threshold for assuming recklessness, or, under German law, conditional intent,Footnote 23 will be lowered. An interesting question is the degree to which courts will allow leeway for human psychology, including perhaps a human disinclination to be bossed around by a machine.
III.A.3 Corporate Liability
In many cases, it will not be possible or very difficult to trace harm caused by a device based on artificial intelligence to the wrongful conduct of an individual human being who acted in the role of programmer, manufacturer, supervisor, or user. Thomas Weigend starts Chapter 4, entitled “Forms of Robot Liability: Criminal Robots and Corporate Criminal Responsibility,” with the diagnosis of a “responsibility gap.” He then examines the option of holding robots criminally liable before going a step further and considering the introduction of corporate criminal responsibility for the harmful actions of robots. Weigend begins with the controversial discussion of whether corporations should be punished for crimes committed by employees. He then develops the idea that the rationales used to justify the far-reaching attribution of employee conduct to corporations could be applied to the conduct of robots as well. He contends that criminal liability should be limited to cases in which humans acting on behalf of the corporation were (at a minimum) negligent regarding the designing, programming, or controlling of robots.
III.B Human Liability for the Use of a Robot with the Intent to Commit a Crime
Robots can be purposefully used to commit crimes, e.g., to spy on other persons.Footnote 24 If the accused human intentionally designed, manipulated, used, or abused a robot to commit a crime, he or she can be held criminally liable for the outcome.Footnote 25 The crucial point in such cases is that the human who employs the robot uses it as a tool.Footnote 26 If perpetrators pursue their criminal goals with the use of a tool, it does not matter whether the tool is of the traditional, merely mechanical kind, such as a gun, or whether it has some features of intelligence, such as an automated weapon that is, e.g., reprogrammed for a criminal purpose.
While this is clearly the case for many criminal offenses, particularly those that focus on outcomes such as causing the death of another person, the situation with regard to other criminal offenses is not so clear. It will not always be obvious that a robot will be able to fulfil the definitional elements of all offenses. It could, e.g., be argued that sexual offenses that require bodily contact between offender and victim cannot be committed if the offender causes a robot to touch another person in a sexual way. In such cases, it is a matter of interpretation whether wrongdoing requires the physical involvement of the human offender’s body. I would answer this particular question in the negative, because the crucial point is the penetration of the victim’s body. However, answers must be developed for different crimes separately, based on the legal terminology used and the kind of interest protected.
III.C Human Liability for Foreseen but Unavoidable Harm
In the situation of an unsolvable, tragic dilemma, in which there is no alternative harmless action, a robot might injure humans as part of a planned course of action. The most frequently discussed examples of these dilemmas involve automated cars in traffic scenarios in which all available options, such as staying on track or altering course, will lead to a crash with human victims.Footnote 27 If such events have been anticipated by human programmers, the question arises of whether they could perhaps be held criminally liable, should the dilemmatic situation in fact occur. When human drivers in a comparable dilemma knowingly injure others to save their own lives or the lives of their loved ones, criminal law systems recognize defenses that acknowledge the psychological and normative forces of strong fear, the will to survive, and personal attachments.Footnote 28 The rationale of such defenses does not apply, however, if a programmer, who is not in acute distress, decides that the automated car should always safeguard passengers inside the vehicle, and thus chooses the course that will lead to the death of humans outside the car.
If a human driver has to choose between swerving to save the lives of two persons on the road directly in front of the car, thus hitting and killing a single person on the sidewalk, or staying the course, thus hitting and killing both persons on the road, criminal law doctrine does not provide clear-cut answers. Under German doctrine, which displays a built-in aversion to utilitarian reasoning, the human driver who kills one person to save two would risk criminal punishment.Footnote 29 Whether this would change once the assessment shifts from the human driver at the wheel of the car at the crucial moment to the vehicle’s programmer is an interesting question. German law is shaped by a strong preference for remaining passive, i.e., one may not become active in order to save the greater number of lives, but for the programmer, this phenomenological difference dissolves completely. At the time the automated car or other robot is manufactured, it is simply a decision between programming option A or programming option B for dilemmatic situations.Footnote 30
III.D Self-Defense against Robots
If a human faces imminent danger of being injured or otherwise harmed by a robot, and the human knowingly or purposefully damages or destroys that robot, the question arises as to whether this situation is covered by a justificatory defense. In some cases, a necessity/lesser evil defense could be raised successfully if the danger is substantial. In other cases, it could be questioned if a lesser evil defense would be applicable, e.g., if someone shoots down a very expensive drone to prevent it from taking pictures.Footnote 31 Under such circumstances, another justificatory defense might be that of self-defense. In German criminal law, self-defense does not require a proportionality test.Footnote 32 In the case of an unlawful attack, it is permissible to destroy valuable objects even if the protected interest might be of comparatively minor importance. The crucial question in the drone case is whether an “unlawful attack”Footnote 33 or “unlawful force by another person”Footnote 34 requires that the attacker is a human being.
III.E Criminal Liability of Robots
In the realm of civil liability, robots could be treated as legal persons, and this status could be combined with the duty of producers or owners to endow robots with sufficient funds to compensate potential accident victims.Footnote 35 A different question is whether a case could also be made for the capacity of robots to incur criminal liability.Footnote 36 This is a highly contested proposal and a fascinating topic for criminal law theorists. Holding robots criminally liable would not be compatible with traditional features of criminal law: its focus on human agency and the notion of personal guilt, i.e., Schuld, which is particularly prominent in German criminal law doctrine. Many criminal law theorists defend these features as essential to the very idea of criminal law and thus reject the idea of permitting criminal proceedings against robots. But this is at best a weak argument. Criminal law doctrine is not set in stone; it has adapted to changes in the real world in the past and can be expected to do so again in the future.
The crucial question is whether there are additional principled objections to subjecting robots to criminal liability. Scholars typically examine the degree to which the abilities of robots are similar to those of humansFootnote 37 and ask whether robots fulfil the requirements of personhood, which is defined by means of concepts such as autonomy and free will.Footnote 38 These positions could be described as status-centered, anthropocentric, and essentialist. Traditional concepts of personhood rely on ontological claims about what humans are and the characteristics of humans qua humans. As possible alternatives, notions such as autonomy and personhood could also be described in a more constructivist manner, as the products of social attribution,Footnote 39 and it is worth considering whether the criminal liability of robots could at least be constructed for a limited subsection of criminal law, i.e., strict liability regulatory offenses, for legal systems that recognize such offenses.Footnote 40
Instead of exploring the degree of a robot’s human-ness or personhood, the alternative is to focus on the functions of criminal proceedings and punishments. In this context, the crucial question is whether some goals of criminal punishment practices could be achieved if norms of conduct were explicitly addressed to robots and if defendants were not humans but robots. As we will see, it makes sense to distinguish between the preventive functions of criminal law, such as deterrence, and the expressive meaning of criminal punishment.
The purpose of deterring agents is probably not easily transferrable from humans to robots. Deterring someone presupposes that the receiver of the message is actually aware of a norm of conduct but is inclined not to comply with it, because other incentives seem more attractive or other personal motives and emotions guide his or her decision-making. AI will probably not be prone to the kind of multi-layered, sometimes blatantly irrational type of decision-making practiced by humans. For robots, the point is to identify the right course of conduct, not to avoid being side-tracked by greed and emotions. But preventive reasoning could, perhaps, be brought to bear on the humans involved in the creation of robots who might be indirectly influenced. They might be effectively driven toward higher standards of care in order to avoid public condemnation of their products’ behavior.Footnote 41
In addition to their potentially preventive effects, criminal law responses have expressive features. They communicate that certain kinds of wrongful conduct deserve blame, and more specifically they reassure crime victims that they were indeed wronged by the other party to the interaction, and not that they themselves made a mistake or simply suffered a stroke of bad luck.Footnote 42 Some of the communicative and expressive features of criminal punishment might retain their functions, and address the needs of victims, if robots were the addressees of penal censure.Footnote 43 Even if robots will not for a long time, if ever, be capable of feeling remorse as an emotional state, the practice of assigning blame could persist with some modifications.Footnote 44 It might suffice if robots had the cognitive capacity to understand what their environment labels as right and wrong and the reasons behind these judgments, and if they adapted their behavior to norms of conduct. Communication would be possible with smart robots that are capable of explaining the choices they have made.Footnote 45 In their ability to respond and to modify parameters for future decision-making, advanced robots are distinguishable from others not held criminally liable, e.g., animals, young children, and persons with severe mental illness.
Admittedly, criminal justice responses to the wrongful behavior of robots cannot be the same as the responses to delinquent humans. It is difficult, e.g., to conceive of a “hard treatment” component of criminal punishmentFootnote 46 that would apply to robots, and such a component, if conceived, might well be difficult to enforce.Footnote 47 It could, however, be argued that punishment in the traditional sense is not necessary. For an entirely rational being, the message that conduct X is wrongful and thus prohibited, and the integration of this message into its future decision-making, would be sufficient. The next question would be if blaming robots and eliciting responses could provide some comfort to human victims and thus fulfil their emotional needs. It is conceivable that a formal, solemn procedure might serve some of the functions that traditional criminal trials fulfil, at least in the theoretical model, but study would be required to determine whether empathy or at least the potential for empathy are prerequisites for calling a perpetrator to account. Criminal law theorists have argued that robots could only be held criminally liable if they were able to understand emotional states such as suffering.Footnote 48 In my view, a deeply shared understanding of what it means, emotionally, to be hurt is not necessarily essential for the communicative message delivered to victims who have been harmed by a robot.
Another question, however, is whether a merely communicative “criminal trial,” without the hard treatment component of sanctions, would be so unlike criminal punishment practices as we know them that the general human public would consider it pointless and not worth the effort, or even a travesty. This question moves the inquiry beyond criminal law theory. Answers would require empirical insight into the feasibility and acceptance of formal, censuring communication with robots. If designing procedures with imperfect similarities to traditional criminal trials would make sense, the question of criminal codes for robots should perhaps also be addressed.Footnote 49
III.F Robots as Victims of Crime
Another area that might require more attention in the future is the interpretation of criminal laws if the victim of the crime is not a human, as assumed by the legislators when they passed the law, but a robot. Crimes against personality rights, e.g., might lead to interesting questions. Might the recording of spoken words, a criminal offense under §201 of the Strafgesetzbuch (German Criminal Code), be punishable if the speaker is a robot rather than a human being? Thinking in this direction would require considering whether advanced robots should be afforded constitutional and other rightsFootnote 50 and, should such a discussion gain traction, which rights these would be.
IV The Long-Term Perspective: General Effects on Substantive Criminal Law
The discussion in Section III above referred to criminal investigations undertaken after a specific human–robot interaction has caused or threatened to cause harm. From the perspective of criminal law theory, another possible development could be worth further observation. Over time, the assessment of human conduct, in general, might change, and perhaps we will begin to assess human–human interactions in a somewhat different light, once humanoid robots based on AI become part of our daily lives. At present, criminal laws and criminal justice systems are to different degrees quite tolerant with regard to the irrational features of human decision-making and human behavior. This is particularly true of German criminal law where, e.g., the fact that an offender has consumed drugs or alcohol can be a basis for considerable mitigation of punishment,Footnote 51 and offenders who are inclined to not consider possible negative outcomes of their highly risky behavior receive only a very lenient punishment or no punishment at all.Footnote 52 This tolerance of human imperfections might shrink if the more rational, de-emotionalized version of decision-making by AI has an effect on our expectations regarding careful behavior. At present, this is merely a hypothesis; it remains to be seen whether the willingness of criminal courts to accommodate human deficiencies really will decrease in the long term.

I Introduction
In March 2018, a Volvo XC90 vehicle that was being used to test Uber’s emerging automated vehicle technology killed a pedestrian crossing a road in Tempe, Arizona.Footnote 1 At the time of the incident, the vehicle was in “autonomous mode” and the vehicle’s safety driver, Rafaela Vasquez, was allegedly streaming television onto their mobile device.Footnote 2 In November 2019, the National Transportation Safety Board found that many factors contributed to the fatal incident, including failings from both the vehicle’s safety driver and the programmer of the autonomous system, Uber.Footnote 3 Despite Vasquez later being charged with negligent manslaughter in relation to the incident,Footnote 4 criminal investigations into Uber were discontinued in March 2019.Footnote 5 This instance is particularly emblematic of the current tendency to consider responsibility for actions and decisions of autonomous vehicles (AVs) as lying primarily with users of these systems, and not programmers or developers.Footnote 6
In the military realm, similar issues have arisen. For example, it is alleged that in 2020 an autonomous drone system, the STM Kargu-2, may have been used during active hostilities in Libya,Footnote 7 and that such autonomous weapons (AWs) were programmed to attack targets without requiring data connectivity between the operator and the use of force.Footnote 8 Although AW technologies have not yet been widely used by militaries, for several years, governments, civil society, and academics have debated their legal position, highlighting the importance of retaining “meaningful human control” (MHC) in decision-making processes to prevent potential “responsibility gaps.”Footnote 9 When debating MHC over AWs as well as responsibility issues, users or deployers are more often scrutinized than programmers,Footnote 10 the latter being considered too far removed from the effects of AWs. However, programmers’ responsibility increasingly features in policy and legal discussions, leaving many interpretative questions open.Footnote 11
To fill this gap in the current debates, this chapter seeks to clarify the role of programmers, understood simply here as a person who writes programmes that give instructions to computers, in crimes committed with and not by AVs and AWs (“AV- and AW-related crimes”). As artificial intelligence (AI) systems cannot provide the elements required by criminal law, i.e., the mens rea (the mental element) and the actus reus (the conduct element, including its causally connected consequence),Footnote 12 the criminal responsibility of programmers will be considered in terms of direct responsibility for commission of crimes, i.e., as perpetrators or co-perpetrators,Footnote 13 rather than vicarious or joint responsibility for crimes committed by AI. Programmers could, e.g., be held responsible on the basis of participatory modes of responsibility, such as aiding or assisting users in perpetrating a crime. Despite their potential relevance, participatory modes of responsibility under national and international criminal law (ICL) are not analyzed in this chapter, as that would require a separate analysis of their actus reus and mens rea standards. Finally, it must be acknowledged that as used in this chapter, the term “programmer” is a simplification. The development of AVs and AWs entails the involvement of numerous actors, internal and external to tech companies, such as developers, programmers, data labelers, component manufacturers, software developers, and manufacturers. These distinctions might entail difficulties in individualizing responsibility and/or a distribution of criminal responsibility, which could be captured by participatory modes of responsibility.Footnote 14
This chapter will examine the criminal responsibility of programmers through two examples, AVs and AWs. While there are some fundamental differences between AVs and AWs, there are also striking similarities. Regarding differences, AVs are a means of transport, implying the presence of people onboard, which will not necessarily be a feature of AWs. As for similarities, both AVs and AWs depend on object recognition technology.Footnote 15 Central to this chapter is the point that both AVs and AWs can be the source of incidents resulting in harm to individuals; AWs are intended to kill, are inherently dangerous, and can miss their intended target, and while AVs are not designed to kill, they can cause death by accident. Both may unintentionally result in unlawful harmful incidents.
The legal focus regarding the use of AVs is on crimes against persons under national criminal law, e.g., manslaughter and negligent homicide, and regarding the use of AWs, on crimes against persons under ICL, i.e., war crimes against civilians, such as those found in the Rome Statute of the International Criminal Court (“Rome Statute”)Footnote 16 and in the First Additional Protocol to the Geneva Conventions (AP I).Footnote 17 A core issue is whether programmers could fulfil the actus reus, including the requirement of causation, of these crimes. Given the temporal and spatial gap between programmer conduct and the injury, as well as other possibly intervening causes, a core challenge in ascribing criminal responsibility lies in determining a causal link between programmers’ conduct and AV- and AW-related crimes. To determine causation, it is necessary to delve into the technical aspects of AVs and AWs, and consider when and which of their associated risks can or cannot be, in principle, imputable to a programmer.Footnote 18 Adopting a preliminary categorization of AV- and AW-related risks based on programmers’ alleged control or lack of it over the behavior and/or effects of AVs and AWs, Sections II and III consider the different risks and incidents entailed by the use of AVs and AWs. Section IV turns to the elements of AV- and AW-related crimes, focusing on causation tests and touching on mens rea. Drawing from this analysis, Section V turns to a notion of MHC over AVs and AWs that incorporates requirements for the ascription of criminal responsibility and, in particular, causation criteria to determine under which conditions programmers exercise causal control over the unlawful behavior and/or effects of AVs and AWs.
II Risks Posed by AVs and Programmer Control
Without seeking to catalogue all possible causes of AV-related incidents, Section II begins by identifying several risks associated with AVs: algorithms, data, users, vehicular communication technology, hacking, and the behavior of bystanders. Some of these risks are also applicable to AWs.Footnote 19
In order to demarcate a programmer’s criminal responsibility, it is crucial to determine whether they ultimately had control over relevant behavior and effects, e.g., navigation and possible consequences of AVs. Thus, the following sections make a preliminary classification of risks on the basis of the programmers’ alleged control over them. While a notion of MHC encompassing the requirement of causality in criminal law will be developed in Section V, it is important to anticipate that a fundamental threshold for establishing the required causal nexus between conduct and harm is whether a programmer could understand and foresee a certain risk, and whether the risk that materialized was within the scope of the programmer’s “functional obligations.”Footnote 20
II.A Are Programmers in Control of Algorithm and Data-Related Risks in AVs?
Before turning to the risks and failures that might lie in algorithm design and thus potentially fall under programmer control, this section describes the tasks required when producing an AV, and then reviews some of the rules that need to be coded to achieve this end.
The main task of AVs is navigation, which can be understood as the AV’s behavior as well as the algorithm’s effect. Navigation on roads is mostly premised on rules-based behavior requiring knowledge of traffic rules and the ability to interpret and react to uncertainty. In AVs, automated tasks include the identification and classification of objects usually encountered while driving, such as vehicles, traffic signs, traffic lights, and road lining.Footnote 21 Furthermore, “situational awareness and interpretation”Footnote 22 is also being automated. AVs should be able “to distinguish between ordinary pedestrians (merely to be avoided) and police officers giving direction,” and conform to social habits and rules by, e.g., “interpret[ing] gestures by or eye contact with human traffic participants.”Footnote 23 Finally, there is an element of prediction: AVs should have the capability to anticipate the behavior of human traffic participants.Footnote 24
In AV design, the question of whether traffic rules can be accurately embedded in algorithms, and if so who is responsible for translating these rules into algorithms, becomes relevant in determining the accuracy of the algorithm design as well as attributing potential criminal responsibility. For example, are only programmers involved, or are lawyers and/or manufacturers also involved? While some traffic rules are relatively precise and consist of specific obligations, e.g., a speed limit represents an obligation not to exceed that speed,Footnote 25 there are also several open-textured and context-dependent traffic norms, e.g., regulations requiring drivers to drive carefully.Footnote 26
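To make the contrast concrete, the following minimal Python sketch, with entirely hypothetical function names and thresholds, illustrates the difference: a precise rule such as a speed limit maps directly onto a machine-checkable condition, whereas an open-textured norm such as the duty to drive carefully must first be operationalized through design choices that the law itself does not dictate.

```python
# Minimal illustrative sketch (hypothetical names and thresholds): a precise traffic rule
# translates directly into code, an open-textured one only via contestable design choices.

def violates_speed_limit(current_speed_kmh: float, limit_kmh: float) -> bool:
    # Precise rule: the obligation not to exceed the posted limit is fully specified.
    return current_speed_kmh > limit_kmh


def drives_carefully(speed_kmh: float, following_distance_m: float, visibility_m: float) -> bool:
    # Open-textured rule: "careful driving" has no fixed legal threshold; the values below
    # are choices made by whoever translates the norm into code.
    safe_distance_m = max(2.0 * speed_kmh / 3.6, 10.0)  # assumed "two-second rule" proxy
    return following_distance_m >= safe_distance_m and visibility_m >= 50.0


print(violates_speed_limit(57.0, limit_kmh=50.0))                            # True: unambiguous
print(drives_carefully(50.0, following_distance_m=20.0, visibility_m=80.0))  # False under these proxies
```

Whoever selects the proxy values in the second function is, in effect, translating a legal standard into code, which is precisely the allocation question raised above.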
AV incidents might stem from a failure of the AI to identify objects or correctly classify them. For example, the first widely reported incident involving an AV in May 2016 was allegedly caused by the vehicle sensor system’s failure to distinguish a large white truck crossing the road from the bright spring sky.Footnote 27 Incidents may also arise due to failures to correctly interpret or predict the behavior of others or traffic conditions, which may sometimes be interlinked with or compounded by problems of detection and sensing.Footnote 28 In turn, mistakes in both object identification and prediction might occur as a result of faulty algorithm design and/or derived from flawed data. In the former case, prima vista, if mistakes in object identification and/or prediction occur due to an inadequate algorithm design, the criminal responsibility of programmers could be engaged.
In relation to the latter, the increasing and almost dominant use of machine learning (ML) algorithms in AVsFootnote 29 means that issues of algorithms and data are interrelated. The performance of algorithms has become heavily dependent on the quality of data. A multitude of different algorithms are used in AVs for different purposes, with supervised and unsupervised learning-based algorithms often complementing one another. Supervised learning, in which an algorithm is fed instructions on how to interpret the input data, relies on a fully labeled dataset. Within AVs, the supervised learning models are usually: (1) “classification” or “pattern recognition algorithms,” which process a given set of data into classes and help to recognize categories of objects in real time, such as street signs; and (2) “regression,” which is usually employed for predicting events.Footnote 30 In cases of supervised learning, mistakes can arise from incorrect data annotation instead of a faulty algorithm design per se. If such incidents do occur,Footnote 31 the programmer arguably could neither have foreseen those risks nor be considered in control of the subsequent navigation decisions.
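The point about annotation errors can be illustrated with a toy sketch, using the scikit-learn library and invented data: a supervised classifier simply reproduces whatever the labels assert, so a single mislabeled training example, rather than any flaw in the learning algorithm itself, can produce the downstream misclassification.

```python
# Toy sketch with invented data (requires scikit-learn): a mislabeled training example,
# not the learning algorithm's design, produces the downstream misclassification.
from sklearn.tree import DecisionTreeClassifier

# Features: [object_height_m, object_width_m]; labels: 0 = traffic sign, 1 = pedestrian.
X_train = [[2.1, 0.6], [2.0, 0.5], [1.7, 0.4], [1.8, 0.5]]
y_correct = [0, 0, 1, 1]
y_mislabeled = [0, 0, 1, 0]  # annotation error: one pedestrian example labeled as a sign

clean_model = DecisionTreeClassifier(random_state=0).fit(X_train, y_correct)
noisy_model = DecisionTreeClassifier(random_state=0).fit(X_train, y_mislabeled)

sample = [[1.8, 0.5]]  # an object resembling the mislabeled example
print(clean_model.predict(sample))  # [1]: recognized as a pedestrian
print(noisy_model.predict(sample))  # [0]: the annotation error propagates into the output
```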
Other issues may arise with unsupervised learningFootnote 32 where an ML algorithm receives unlabeled data and programmers “describe the desired behaviour and teach the system to perform well and generalise to new environments through learning.”Footnote 33 Data can be provided in the phase of simulating and testing, but also during the use itself by the end-user. Within such methods, “deep learning” is increasingly used to improve navigation in AVs. Deep learning is a form of unsupervised learning that “automatically extracts features and patterns from raw data [such as real-time data] and predicts or acts based on some reward function.”Footnote 34 When an incident occurs due to deep learning techniques using real data, it must be assessed whether the programmer could have foreseen that specific risk and the resulting harm, or whether it derived, e.g., from an unforeseeable interaction with the environment.
II.B Programmer or User: Who Is in Control of AVs?
As shown in the March 2018 Uber incident,Footnote 35 incidents can also derive from failures of the user to regain control of the AV, with some AV manufacturers attempting to shift the responsibility for ultimately failing to avoid collisions onto the AVs’ occupants.Footnote 36 However, there are serious concerns as to whether an AV’s user, who depending on the level of automation is essentially in an oversight role, is cognitively in the position to regain control of the vehicle. This problem is also known as automation bias,Footnote 37 a cognitive phenomenon in human–machine interaction, in which complacency, decrease of attention, and overreliance on the technology might impair the human ability to oversee, intervene, and override the system if needed.
Faulty design of the human–machine interface (HMI), i.e., the technology, such as a dashboard, that connects an autonomous system to the human, could cause the inaction of the driver in the first place. In these instances, the driver could be relieved of criminal responsibility. Arguably, HMIs do not belong to programmers’ functional obligations and therefore fall outside of a programmer’s control.
There are phases other than actual driving where a user could gain control of an AV’s decisions. Introducing ethics settings into the design of AVs may ensure control over a range of morally significant outcomes, including trolley-problem-like decisions.Footnote 38 Such settings may be mandatorily introduced by manufacturers with no possibility for users to intervene and/or customize them, or they may be customizable by users.Footnote 39 Customizable ethics settings allow users “to manage different forms of failure by making autonomous vehicles follow [their] decisions” and their intention.Footnote 40
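What a customizable ethics setting might look like in practice can be sketched, in purely hypothetical terms, as a configuration object that constrains the vehicle’s choice among maneuvers; none of the parameter names, options, or thresholds below reflects any actual manufacturer’s design.

```python
# Hypothetical sketch of a user-customizable ethics setting; no parameter or option here
# reflects an actual manufacturer's design or API.
from dataclasses import dataclass


@dataclass(frozen=True)
class EthicsSettings:
    prioritize_occupants: bool = False  # if True, always favor the people inside the AV
    allow_user_override: bool = False   # not used below; marks the locked vs. customizable design choice


def choose_maneuver(options: dict, settings: EthicsSettings) -> str:
    """Pick the maneuver whose expected harm profile best matches the configured policy."""
    if settings.prioritize_occupants:
        return min(options, key=lambda o: options[o]["occupant_risk"])
    return min(options, key=lambda o: options[o]["total_expected_harm"])


options = {
    "brake_in_lane": {"occupant_risk": 0.6, "total_expected_harm": 0.5},
    "swerve_to_shoulder": {"occupant_risk": 0.2, "total_expected_harm": 0.7},
}
print(choose_maneuver(options, EthicsSettings(prioritize_occupants=True)))   # swerve_to_shoulder
print(choose_maneuver(options, EthicsSettings(prioritize_occupants=False)))  # brake_in_lane
```

Whether such settings are locked by the manufacturer or left to the user determines, on the account discussed above, where control over morally significant outcomes resides.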
II.C Are Some AV-Related Risks Out of Programmer Control?
There is a group of risks and failures that could be considered outside of programmer control. These include communications failures, hacking of the AV by outside parties, and unforeseeable bystander behavior. One of the next steps predicted in the field of vehicle automation is the development of software enabling AVs to communicate with one another and to share real-time data gathered from their sensors and computer systems.Footnote 41 This means that a single AV “will no longer make decisions based on information from just its own sensors and cameras, but it will also have information from other cars.”Footnote 42 Failures in vehicular communication technologiesFootnote 43 or inaccurate data collected by other AVs cannot be attributed to a single programmer, as such failures might fall beyond that programmer’s responsibilities and functions, and also beyond their control.
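A minimal, hypothetical sketch of such vehicle-to-vehicle data sharing illustrates why externally produced data lies beyond an individual programmer’s reach: the receiving vehicle’s software can at best treat incoming reports defensively, but it cannot vouch for their accuracy.

```python
# Hypothetical sketch of vehicle-to-vehicle (V2V) data sharing: the receiving AV fuses its
# own sensor estimate with reports whose accuracy its programmer cannot control.
from dataclasses import dataclass
from statistics import median


@dataclass
class V2VMessage:
    sender_id: str
    obstacle_distance_m: float  # as estimated by the sending vehicle


def fused_obstacle_distance(own_estimate_m: float, messages: list[V2VMessage]) -> float:
    # A simple defensive fusion rule: take the median of own and received estimates.
    return median([own_estimate_m] + [m.obstacle_distance_m for m in messages])


msgs = [V2VMessage("car_42", 35.0), V2VMessage("car_77", 5.0)]  # car_77 reports faulty data
print(fused_obstacle_distance(own_estimate_m=34.0, messages=msgs))  # 34.0: the outlier is dampened
```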
Hacking could also cause AV incidents. For example, “placing stickers on traffic signs and street surfaces can cause self-driving cars to ignore speed restrictions and swerve headlong into oncoming traffic.”Footnote 44 Here, the criminal responsibility of a programmer could depend on whether the attack could have been foreseen and whether the programmer should have created safeguards against it. However, the complexity of AI systems could make them more difficult to defend from attacks and more vulnerable to interference.Footnote 45
Finally, imagine an AV that correctly follows traffic rules, but hits a pedestrian who unforeseeably slipped and fell onto the road. Such unforeseeable behavior of a bystander is relevant in criminal law cases on vehicular homicide, as it will break the causal nexus between the programmer and the harmful outcome.Footnote 46 In the present case, it must be determined which unusual behavior should be foreseen at the stage of programming, and whether standards of foreseeability in AVs should be higher for human victims.
III Risks Posed by AWs and Programmer Control
While not providing a comprehensive overview of the risks inherent in AWs, Section III follows the structure of Section II by addressing some risks, including algorithms, data, users, communication technology, hacking and interference, and the unforeseeable behavior of individuals in war, and by distinguishing risks based on their causes and programmers’ level of control over them. While some risks cannot be predicted, the “development of the weapon, the testing and legal review of that weapon, and th[e] system’s previous track record”Footnote 47 could provide information about the risks involved in the deployment of AWs. Some risks could be understood and foreseen by the programmer and therefore be considered under their control.
III.A Are Programmers in Control of Algorithm and Data-Related Risks in AWs?
Autonomous drones provide an example of one of the most likely applications of autonomy within the military domain,Footnote 48 and this example will be used to highlight the increasingly autonomous tasks in AWs. This section will address the rules to be programmed, and identify where some risks might lie in the phase of algorithm design.
The two main tasks being automated in autonomous drones are: (1) navigation, which is less problematic than on roads and a relatively straightforward rule-based behavior, i.e., they must simply avoid obstacles while in flight; and (2) weapon release, which is much more complex as “ambiguity and uncertainty are high when it comes to the use of force and weapon release, bringing this task in the realm of expertise-based behaviours.”Footnote 49 Within the latter, target identification is the most important function because it is crucial to ensure compliance with the international humanitarian law (IHL) principle of distinction, the violation of which could also cause individual criminal responsibility for war crimes. The principle of distinction establishes that belligerents and those executing attacks must distinguish at all times between civilians and combatants, and not target civilians.Footnote 50 In target identification, the two main automated tasks are: (1) object identification and classification on the basis of pattern recognition;Footnote 51 and (2) prediction, e.g., predicting that someone is surrendering, or based on the analysis of patterns of behavior, predicting that someone is a lawful target.Footnote 52
Some of the problems in the algorithm design phase may derive from translating the open-textured and context-dependentFootnote 53 rules of IHL,Footnote 54 such as the principle of distinction, into algorithms, and from incorporating programmer knowledge and expert-based rules,Footnote 55 such as those needed to analyze patterns of behavior in targeted strikes and translate them into code.
There are some differences compared with the algorithm design phase in AVs. Due to the relatively niche and context-specific nature of IHL, compared to traffic law, which is more widely understood by programmers, programming IHL might require a stronger collaboration with outside expertise, i.e., military lawyers and operators.
However, observations similar to those made for AVs apply to supervised and unsupervised learning algorithms. Prima vista, if harm results from mistakes in object identification and prediction based on an inadequate algorithm design, the criminal responsibility of the programmer(s) could be engaged. Also similar to AVs, the increasing use of deep learning methods in AWs makes the performance of algorithms dependent on both the availability and accuracy of data. Low quality and incorrect data, missing data, and/or discrepancies between real and training data may be conducive to the misidentification of targets.Footnote 56 When unsupervised learning is used in algorithm design, environmental conditions and armed conflict-related conditions, e.g., smoke, camouflage, and concealment, may inhibit the collection of accurate data.Footnote 57 As with AVs, programmers of AWs may at some point gain sufficient knowledge and experience regarding the robustness of data and unsupervised machine learning that would subject them to due diligence obligations, but this chapter assumes that programmers have not reached that stage yet. In the case of supervised learning, errors in data may lie in a human-generated data feed,Footnote 58 and incorrect data labeling could lead to mistakes and incidents that might be attributable to someone, but not to programmers. Depending on the foreseeability of such data failures to the programmer and the involvement of third parties in data labeling, and assuming the mistakes could not be foreseen, criminal responsibility might not be attributable to programmers.
III.B Programmer or User: Who Is in Control of AWs?
The relationship between programmers and users of AWs presents different challenges than in the case of AVs. In light of current trends in AW development, arguably toward human–machine interaction rather than full autonomy of the weapons system, the debate has focused on the degree of control that militaries must retain over the weapon release functions of AWs.
However, control can be shared and distributed among programmers and users in different phases, from the design phase to deployment. As noted above, AI engineering in the military domain might require a strong collaboration between programmers and military lawyers in order to accurately code IHL rules in algorithms.Footnote 59 Those arguing for the albeit debated introduction of ethics settings in AWs maintain that ethics settings would “enable humans to exert more control over the outcomes of weapon use [and] make the distribution of responsibilities [between manufacturers and users] more transparent.”Footnote 60
Finally, given their complexity, programmers of AWs might be more involved than programmers of AVs in the use of AWs and in the targeting process, e.g., being required to update the system or implement some modifications to the weapon target parameters before or during the operation.Footnote 61 In these situations, it must be evaluated to what extent a programmer could foresee a certain risk entailed in the deployment and use of an AW in relation to a specific attack rather than just its use in the abstract.
III.C Are Some AW-Related Risks Out of Programmer Control?
In the context of armed conflict, it is highly likely that AWs will be subject to interference and attacks by enemy forces. A UN Institute for Disarmament Research (UNIDIR) report lists several pertinent examples: (1) signal jamming could “block systems from receiving certain data inputs (especially navigation data)”; (2) hacking, such as “spoofing” attacks, might “replace an autonomous system’s real incoming data feed with a fake feed containing incorrect or false data”; (3) “input” attacks could “change a sensed object or data source in such a way as to generate a failure,” e.g., enemy forces “may seek to confound an autonomous system by disguising a target”; and (4) “adversarial examples” or “evasion,” which are attacks that “involve adding subtle artefacts to an input datum that result in catastrophic interpretation error by the machine.”Footnote 62 In such situations, the issue of criminal responsibility for programmers will depend on the modalities of the adversarial interference, whether it could have been foreseen, and whether the AW could have been protected from foreseeable types of attacks.
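The “adversarial examples” mentioned in point (4) can be illustrated with a minimal, self-contained sketch. The example below uses a toy linear classifier and synthetic two-dimensional inputs; it is not a model of any real weapon system. It shows only the core mechanism, following the widely documented gradient-sign technique: a perturbation that is small relative to the input can be crafted, using knowledge of the model, to push the input across the decision boundary and cause a misclassification.

# Toy demonstration of an evasion / adversarial-example attack on a linear
# classifier. Purely illustrative; synthetic data and model only.
import numpy as np

# A fixed, hypothetical linear model: score = w . x + b, class 1 if score > 0.
w = np.array([1.0, 1.0])
b = -2.5

def predict(x):
    return int(w @ x + b > 0)

x = np.array([2.0, 1.0])            # original input, classified as class 1
print("original:", x, "-> class", predict(x))

# Gradient-sign perturbation: move each feature a small step epsilon in the
# direction that decreases the class-1 score; for a linear model the gradient
# of the score with respect to the input is simply w.
epsilon = 0.3
x_adv = x - epsilon * np.sign(w)
print("perturbed:", x_adv, "-> class", predict(x_adv))
print("perturbation size per feature:", epsilon)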
Similar to the AV context, failures of communication technology, whether caused by signal jamming or by breakdowns in the communication links between a human operator and the AI system or among connected AI systems, may lead to incidents that cannot be imputed to a programmer.
Finally, conflict environments are likely to drift constantly as “[g]roups engage in unpredictable behaviour to deceive or surprise the adversary and continually adjust (and sometimes radically overhaul) their tactics and strategies to gain an edge.”Footnote 63 The continuously changing and unforeseeable behavior and tactics of opposing belligerents can produce “data drift,” whereby changes that are difficult to foresee cause a weapon system to fail without the failure being imputable to a programmer.Footnote 64
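Data drift of this kind can, at least in principle, be monitored by comparing the statistical profile of inputs seen at deployment with the profile of the training data. The following sketch is a simplified, hypothetical illustration using synthetic one-dimensional features and an arbitrary alert threshold; real drift-detection methods are considerably more elaborate.

# Simplified illustration of detecting data drift by comparing the mean of a
# deployment-time feature with its training-time baseline.
# Synthetic data; the threshold is arbitrary and for illustration only.
import numpy as np

rng = np.random.default_rng(1)

train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)   # training baseline

def drift_score(deployed, baseline):
    """Standardized shift of the deployed mean relative to the baseline."""
    return abs(deployed.mean() - baseline.mean()) / baseline.std()

# Scenario A: environment unchanged.
deployed_same = rng.normal(loc=0.0, scale=1.0, size=500)
# Scenario B: the adversary changes tactics, shifting the feature distribution.
deployed_shifted = rng.normal(loc=1.5, scale=1.0, size=500)

THRESHOLD = 0.5   # arbitrary alert threshold for this illustration
for name, deployed in [("unchanged", deployed_same), ("shifted", deployed_shifted)]:
    s = drift_score(deployed, train_feature)
    status = "DRIFT ALERT" if s > THRESHOLD else "ok"
    print(f"{name}: drift score {s:.2f} -> {status}")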
IV AV-Related Crimes on the Road and AW-Related War Crimes on the Battlefield
The following section will distil the legal ingredients of crimes against persons resulting from failures in the use of AVs and AWs. The key question is whether the actus reus, i.e., the prohibited conduct, including its resulting harm, could ever be performed by programmers of AVs and AWs. The analysis suggests that, save for war crimes under the Rome Statute, which prohibit conduct as such, the crimes under examination on the road and the battlefield are currently formulated as result crimes, in that they require the causation of harm such as death or injury. In relation to crimes of conduct, the central question is whether programmers controlled the behavior of an AV or an AW, e.g., the AW’s launching of an indiscriminate attack against civilians. In relation to crimes of result, the central question is whether programmers exercised causal control over a chain of events leading to a prohibited result, e.g., death, that must occur in addition to the prohibited conduct. Do programmers exercise causal control over the behavior and the effects of AVs and AWs? Establishing causation for crimes of conduct differs from doing so for crimes of result in light of the causal gap that characterizes the latter.Footnote 65 However, this difference is irrelevant in the context of crimes committed with the intermediation of AI since, be they crimes of conduct or of result, they always present a causal gap between a programmer’s conduct and the unlawful behavior or effect of an AV or AW. Thus, the issue is whether a causal nexus exists between a programmer’s conduct and either the behavior (in the case of crimes of conduct) or the effects (in the case of crimes of result) of AVs and AWs. Sections IV.A and IV.B will describe the actus reus of AV- and AW-related crimes, while Section IV.C will turn to the question of causation. While the central question of this chapter concerns the actus reus, at the end of this section, I will also make some remarks on mens rea and the relevance of risk-taking and negligence in this debate.
IV.A Actus Reus in AV-Related Crimes
This section focuses on the domestic criminal offenses of negligent homicide and manslaughter in order to assess whether the actus reus of AV-related crimes could be performed by a programmer. It does not address traffic and road violations generally,Footnote 66 nor the specific offense of vehicular homicide.Footnote 67
Given the increasing use of AVs and pending AV-related criminal cases in the United States,Footnote 68 it seems appropriate to take the Model Penal Code (MPC) as an example of common law legislation.Footnote 69 According to the MPC, the actus reus of manslaughter consists of “killing for which the person is reckless about causing death.”Footnote 70 Negligent homicide concerns instances where a “person is not aware of a substantial risk that a death will result from his or her conduct, but should have been aware of such a risk.”Footnote 71
While national criminal law frameworks differ considerably, there are similarities regarding causation that are relevant here. Germany may serve as a representative example of the civil law tradition: the Strafgesetzbuch (German Criminal Code) (StGB) distinguishes two forms of intentional homicide, murderFootnote 72 and manslaughter.Footnote 73 Willingly taking the risk of causing death is sufficient for manslaughter.Footnote 74 Negligent homicide is proscribed separately,Footnote 75 and its actus reus consists of causing the death of a person through negligence.Footnote 76
These are crimes of result, where the harm consists of the death of a person. While programmer conduct may be remote from an AV incident, some decisions taken by AV programmers at an early stage of development could decisively shape the navigation behavior of an AV that results in a death. In other words, it is conceivable that a faulty algorithm designed by a programmer could cause a fatal road accident. The question then becomes what threshold of causal control programmers exercise over an AV’s unlawful navigation behavior and its unlawful effects, such as a human death.
IV.B Actus Reus in AW-Related War Crimes
This section addresses AW-related war crimes and whether programmers could perform the required actus reus. Since the actus reus would most likely stem from an AW’s failure to distinguish between civilian and military targets, the war crime of indiscriminate attacks, which criminalizes violations of the aforementioned IHL rule of distinction,Footnote 77 takes on central importance.Footnote 78 The war crime of indiscriminate attacks refers inter alia to an attack that strikes military objectives and civilians or civilian objects without distinction. This can occur as a result of the use of weapons that are incapable of being directed at a specific military objective or of accurately distinguishing between civilians and civilian objects, on the one hand, and military objectives, on the other; such weapons are known as inherently indiscriminate weapons.Footnote 79
While this war crime is not specifically codified in either the Rome Statute or AP I, it has been subsumedFootnote 80 under the war crime of directing attacks against civilians. Under AP I, the actus reus of the crime is defined in terms of causing death or injury.Footnote 81 For such crimes of result involving AWs, a causal nexus must be established between the effects resulting from the deployment of an AW and the programmer’s conduct. Under the Rome Statute, the war crime is formulated as a conduct crime, proscribing as the actus reus the “directing of an attack” against civilians.Footnote 82 Here, a causal nexus must be established between the AW’s unlawful behavior and/or the attack and the programmer’s conduct.Footnote 83 Under both frameworks, the question is whether programmers exercised causal control over the behavior and/or effects, e.g., the attack or the resulting death, of an AW.
A final issue relates to the required nexus with an armed conflict. The Rome Statute requires that the conduct must take place “in the context of and was associated with” an armed conflict.Footnote 84 While there is undoubtedly a temporal and physical distance between programmer conduct and the armed conflict, it is conceivable that programmers may write AW software, or upgrade it, during an armed conflict. In certain instances, it could be argued that programmer control continues even after the completion of the act of programming, when the effects of their decisions materialize in the behavior and/or effects of AWs in armed conflict. Programmers can thus be said to exercise a form of control over the behavior and/or effects of AWs that begins with the act of programming and continues thereafter.
IV.C The Causal Nexus between Programming and AV- and AW-Related Crimes
A crucial aspect of programmer criminal responsibility is the causal control programmers exercise over the behavior and/or effects of AVs and AWs. The assessment of causation concerns the conditions under which an AV’s or AW’s unlawful behavior and/or effects should be deemed the result of programmer conduct for the purposes of holding programmers criminally responsible.
Causality is a complex topic. In common law and civil law countries, several tests for establishing causation have been put forward. Due to the difficulty of establishing a uniform test for causation, it has been argued that determining the conditions for causation is “ultimately a matter of legal policy.”Footnote 85 But this does not render the formulation of causality tests in the relevant criminal provisions completely beyond reach. While a comprehensive analysis of these theories is beyond the scope of this chapter, for the purposes of establishing when programmers exercise causal control, some theories are better aligned than others with the policy objectives pursued by the suppression of AV- and AW-related crimes.
First, in common law and civil law countries, the “but-for”/conditio sine qua non test is the dominant test for establishing physical causation, and it is understood as a relationship of physical cause and effect.Footnote 86 In the language of MPC §2.03(1)(a), the conduct must be “an antecedent but for which the result in question would not have occurred.” The “but-for” test works satisfactorily in cases of straightforward cause and effect, e.g., pointing a loaded gun at the chest of another person and pulling the trigger. However, AV- and AW-related crimes are characterized by a temporal and physical gap between programmer conduct and the behavior and effects of AVs and AWs. They involve complex interactions between AVs and AWs and humans, including programmers, data providers and labelers, users, etc. AI itself is also a factor that could intervene in the causal chain. The problem of causation in these cases must thus be framed in a way that reflects the relevance of intervening and superseding causal forces, which may break the causal nexus between a programmer’s conduct and an AV- or AW-related crime.
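The “but-for” test can itself be written down as a simple counterfactual check, and doing so makes its over-inclusiveness in AI-related cases visible. The sketch below is a toy formalization, not a claim about how any court applies the test: it models the outcome as a function of several hypothetical contributions (a defective algorithm, mislabeled data, a deployment decision) and asks, for each actor, whether the harm would still have occurred had that contribution been absent. Every necessary contribution passes the test, which is precisely the over-inclusiveness the theories discussed next are meant to correct.

# Toy formalization of the "but-for" (conditio sine qua non) test.
# The outcome model is hypothetical and grossly simplified: harm occurs only
# if the algorithm was defective, the training labels were wrong, AND the
# system was deployed, so each contribution is a necessary condition.

def harm_occurs(defective_algorithm: bool, mislabeled_data: bool, deployed: bool) -> bool:
    return defective_algorithm and mislabeled_data and deployed

actual = {"defective_algorithm": True, "mislabeled_data": True, "deployed": True}

def but_for(factor: str) -> bool:
    """True if the factor is a but-for cause: harm occurs in the actual world
    but would not have occurred had this factor been absent."""
    counterfactual = dict(actual, **{factor: False})
    return harm_occurs(**actual) and not harm_occurs(**counterfactual)

for factor in actual:
    print(f"{factor}: but-for cause? {but_for(factor)}")
# All three print True: the test does not by itself single out the programmer,
# the data labeler, or the user as the legally relevant cause.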
Both civil law and common law systems have adopted several theories to overcome the shortcomingsFootnote 87 and correct the potential over-inclusivenessFootnote 88 of the “but-for” test in complex cases involving numerous necessary conditions. Some of these theories include elements of foreseeability in the causality test.
The MPC adopts the “proximate cause test,” which “differentiates among the many possible ‘but for’ causal forces, identifying some as ‘necessary conditions’ – necessary for the result to occur but not its direct ‘cause’ – and recognising others as the ‘direct’ or ‘proximate’ cause of the result.”Footnote 89 The relationship is “direct” when the result is foreseeable and as such “this theory introduces an element of culpability into the law of causation.”Footnote 90
German theories of adequacy assert that whether a certain factor can be considered a cause of a certain effect depends on “whether conditions of that type do, generally, in the light of experience, produce effects of that nature.”Footnote 91 These theories, which are not applied in their pure form in criminal law, include assessments resembling a culpability assessment. They bring elements of foreseeability and culpability into the causality test, in particular a probability and possibility judgment regarding the actions of the accused.Footnote 92 However, these theories leave unresolved the question of which knowledge perspective, i.e., objective, subjective, or mixed, the foreseeability assessment is to be based on.Footnote 93
Other causation theories include an element of understandability, awareness, or foreseeability of risks. In the MPC, the “harm-within-the risk” theory considers that causation in reckless and negligent crimes is in principle established when the result was within the “risk of which the actor is aware or … of which he should be aware.”Footnote 94 In German criminal law, some theories describe causation in terms of the creation or aggravation of risk and limit causation to the unlawful risks that the violated criminal law provision intended to prevent.Footnote 95
In response to the drawbacks of these theories, the teleological theory of causation holds that in all cases involving a so-called intervening independent causal force, the criterion should be whether the intervening causal force was “produced by ‘chance’ or was rather imputable to the criminal act in issue.”Footnote 96 Someone would be responsible for the result if their actions contributed in any manner to the intervening factor. What matters is the accused’s control over the criminal conduct and whether the intervening factor was connected in a but-for sense to their criminal act,Footnote 97 thus falling within their control.
In ICL, a conceptualization of causation that goes beyond the physical relation between acts and effects remains embryonic. However, it has been suggested that theories drawn from national criminal law systems, such as those based on risk-taking and those linking causation to culpability, and thus to foreseeability, should inform a theory of causation in ICL.Footnote 98 It has also been suggested that causality should entail an evaluation of an actor’s functional obligations and their area of operation in the economic sphere. According to this theory, causation is “connected to an individual’s control and scope of influence” and is limited to “dangers that he creates through his activity and has the power to avoid.”Footnote 99 Developed for international crimes, which have a collective dimension, these theories could usefully be employed in the context of AV and AW development, which is likewise collective by nature and characterized by a distribution of responsibilities.
Programmers in some instances will cause harm through omission, notably by failing to avert a particular harmful risk when they are under a legal duty to prevent harmful events of that type (“commission by omission”).Footnote 100 In these cases, the establishment of causation will be hypothetical, as there is no physical cause-effect relationship between an omission and the proscribed result.Footnote 101 Other instances concern whether negligence on the part of the programmers, e.g., a lack of instructions and warnings, has contributed to and caused an omission on the part of the user, namely a failure to intervene. Such omissions amount to negligence, i.e., violations of positive duties of care,Footnote 102 and since negligence belongs to mens rea, they will be addressed in the following section.
IV.D Criminal Negligence: Programming AVs and AWs
In light of the integration of culpability assessments in causation tests, an assessment of programmers’ criminal responsibility would be incomplete without addressing mens rea issues. While intentionally and knowingly programming an AV or AW to commit crimes falls squarely under the relevant prohibitions, in both contexts the most likely and most problematic issue is the unintended commission of these crimes, i.e., cases in which the programmer did not design the AI system to commit an offense, but harm nevertheless arises during its use.Footnote 103 In such situations, programmers had no intention to commit an offense, but might still incur criminal liability for risks that they should have recognized and foreseen. To define the scope of criminal responsibility for unintended harm, it is crucial to determine which risks can be known and foreseen by an AV or AW programmer.
There are important differences in the mens rea requirements of AV- and AW-related crimes. Under domestic criminal law, the standards of recklessness and negligence apply to the AV-related crimes of manslaughter and negligent homicide: “[a] person acts ‘recklessly’ with regard to a result if he or she consciously disregards a substantial risk that his or her conduct will cause the result; he or she acts only ‘negligently’ if he or she is unaware of the substantial risk but should have perceived it.”Footnote 104 The MPC provides that “criminal homicide constitutes manslaughter when it is committed recklessly.”Footnote 105 In the StGB, dolus eventualis, i.e., willingly taking the risk of causing death, would encompass situations covered by recklessness and is sufficient for manslaughter.Footnote 106 For negligent homicide,Footnote 107 one of the prerequisites is that the perpetrator can foresee the risk to a protected interest.Footnote 108
Risk-based mentes reae are subject to more dispute in ICL. The International Criminal Tribunal for the former Yugoslavia accepted that recklessness could be a sufficient mens rea for the war crime of indiscriminate attacks under Article 85(3)(a) of AP I.Footnote 109 However, whether recklessness and dolus eventualis could be sufficient to ascribe criminal responsibility for war crimes within the framework of the Rome Statute remains debated.Footnote 110
Unlike incidents involving AVs, incidents in war resulting from a programmer’s negligence cannot give rise to the programmer’s criminal responsibility. Where applicable, recklessness and dolus eventualis, which entail understandability and foreseeability of the risks of developing inherently indiscriminate AWs, become crucial for attributing responsibility to programmers in scenarios where programmers foresaw and took some risks. Excluding these mental elements would amount to ruling out the criminal responsibility of programmers in the instances of war crimes that are most likely to arise.
V Developing an International Criminal Law-Infused Notion of Meaningful Human Control over AVs and AWs that Incorporates Mens Rea and Causation Requirements
This section considers a notion of MHC applicable to AVs and AWs that is based on criminal law and that could function as a criminal responsibility “anchor” or “attractor.”Footnote 111 This is not the first attempt to develop a conception of control applicable to both AVs and AWs. Studies on MHC over AWs and on the moral responsibility of AWsFootnote 112 have been extended to AVs.Footnote 113 According to these studies, MHC should include an element of traceability, requiring that “one human agent in the design history or use context involved in designing, programming, operating and deploying the autonomous system … understands or is in the position to understand the possible effects in the world of the use of this system.”Footnote 114 Traceability thus requires that someone involved in the design or use of the AI system understands its capabilities and effects.
In line with these studies, it is argued here that programmers may decide and control how both traffic law and IHL are embedded in the respective algorithms, how AI systems see and move, and how they react to changes in the environment. McFarland and McCormack affirm that programmers may exercise control not only over an abstract range of behavior, but also in relation to specific behavior and effects of AWs.Footnote 115 Against this background, this chapter contends that programmer control begins at the initial stage of the AI development process and continues into the use phase, extending to the behavior and effects of AVs and AWs.
Assuming programmer control over certain AV- and AW-related unlawful behavior and effects, how can MHC be conceptualized so as to ensure that criminal responsibility is traced back to programmers when warranted? The foregoing discussion of causality in the context of AV- and AW-related crimes suggests that theories of causation that go beyond deterministic cause-and-effect assessments are particularly well suited to developing a theory of MHC that could ensure responsibility. These theories either link causation to mens rea standards or describe it in terms of the aggravation of risk. In either case, they require the ability to understand the capabilities of AI systems and their effects, and the foreseeability of risks. Considering these theories of causation in view of recent studies on MHC over AVs and AWs, MHC’s requirement of traceability arguably translates into a requirement of foreseeability of risks.Footnote 116 Because of the distribution of responsibilities in the context of AV and AW programming, causation theories introducing the notion of function-related risks are needed to limit programmers’ criminal responsibility to those risks within their respective obligations and thus within their sphere of influence and control. According to these theories, the risks that a programmer is obliged to prevent and that relate to their functional obligations, i.e., their function-related risks, could in principle be considered causally imputable to them.Footnote 117
VI Conclusion
AVs and AWs are complex systems. Their programming implies a distribution of responsibilities and obligations within tech companies, and between them and manufacturers, third parties, and users, which makes it difficult to identify who may be responsible for harm stemming from their use. Despite the temporal and spatial gap between the programming phase and the commission of crimes, the responsibility of programmers should not be dismissed. Indeed, crucial decisions on the behavior and effects of AVs and AWs are taken in the programming phase. While a more detailed case-by-case analysis is needed, this chapter has mapped out how programmers of AVs and AWs might be in control of certain AV- and AW-related risks and therefore criminally responsible for AV- and AW-related crimes.
This chapter has shown that the assessment of causation as a threshold for establishing whether an actus reus was committed may converge on the criteria of understandability and foreseeability of risks of unlawful behavior and/or effects of AVs and AWs. Those risks which fall within programmers’ functional obligations and sphere of influence can be considered under their control and imputable to them.
Following this analysis, a notion of MHC applicable to programmers of AVs and AWs, based on the requirements for the imputation of criminal responsibility, can be developed. It may function as a responsibility anchor in so far as it helps trace responsibility back to the individuals who could understand and foresee the risk of a crime being committed with an AV or AW.

I Introduction
Surgeons have been using automated tools in the operating room for several decades. Even more robots will support surgeons in the future, and at some point, surgery may be completely delegated to robots. This level of delegation is currently fictional and robots remain mostly under the command of the human surgeon. But some robots are already making discrete decisions on their own, based on the combined functioning of programming and sensors, and in some situations, surgeons rely on a robot’s recommendation as the basis for their directions to the robot.
This chapter discusses the legal responsibility of human surgeons working with surgical robots under Swiss law, including robots that notify surgeons about a patient’s condition so the surgeon can take a particular action. Unlike in other jurisdictions, negligence and the related duties of care are defined in Switzerland not only by civil law,Footnote 1 but by criminal law as well.Footnote 2 This chapter focuses on the surgeon’s individual criminal responsibility for negligence,Footnote 3 which is assessed under the general concept of Article 12, paragraph 3 of the Criminal Code of Switzerland (“SCC”).Footnote 4 Under the SCC, the surgeon is required to carry out surgery in accordance with state-of-the-art due diligence.
In the general context of task sharing among humans, which includes surgeons working in a team, a principle of trust (Vertrauensgrundsatz) applies. The principle of trust allows team members to have a legitimate expectation that each participant will act with due diligence. It also means that participants are for the most part responsible only for their own actions, which limits their obligations of due diligence. However, when the participant is a robot, even though the surgeon delegates tasks to the robot and relies on it in a manner similar to human participants, the principle of trust does not apply and the surgeon is responsible for what the robot does. Neither statutes nor case law clearly states whether the traditional principle of trust applies to robots. However, at this point, the principle has only been applied to humans, and it is safe to assume that it does not apply to robots, mainly because a robot is currently not capable of criminal responsibility under Swiss law.Footnote 5 Applying the principle of trust to robots, together with a corresponding limitation of the surgeon’s liability, would therefore create a responsibility gap.Footnote 6
In view of the important role robots play in a surgical team, one would expect the governing regulation to apply traditional principles to the division of work between human surgeons and robots, but the use of surgical robots has not led to any relevant changes or to the introduction of special care regulations that either limit the surgeon’s responsibility or allocate it among other actors. This chapter explores an approach to limiting the surgeon’s criminal liability when tasks are delegated to robots. As the SCC does not provide guidance regarding the duties of care when a robot is used, other law must be consulted. The chapter argues that the principle of trust (Vertrauensgrundsatz) should be applied to limit the due diligence expected from a surgeon interacting with a robot. Robots are becoming integral to effective surgery, owing to the specialization that arises from the division of labor between humans and robots and to the availability of more precise and faster medical-technical solutions for patients. Surgeons must rely to some degree on the expertise of the robots they use, and surgeons who make use of promising robots in their operating rooms should therefore be subject to a valid and practical approach to due diligence that does not unreasonably expand their liability. While the chapter addresses the need to limit the surgeon’s liability when working with robots, chapter length does not allow for analysis of related issues such as the connection to permissible risk, i.e., the idea that once the surgical robot is established in society, its possible risks are accepted because its benefits outweigh them. The chapter also does not address other related issues, such as situations in which a hospital instructs surgeons to use robots, issues arising from the patient’s perspective, or the liability of the manufacturer, which is touched on only where the robot does not perform as it should or simply fails to function.Footnote 7
The chapter proceeds by articulating the relevant concept of a robot (Section II). A discussion of due diligence (Section III) explains the duties of care and the principle of trust when a surgeon works without a robot (Section III.B), followed by a discussion of the duties of care when a surgeon works with a robot (Section III.C). The chapter addresses in detail the due diligence expected when a surgical robot asks the human to take a certain action (Section III.C.3). Moving to a potential approach that confines a surgeon’s criminal liability within appropriate limits, the chapter explores the principle of trust as it could apply to robots (Section III.D), and suggests an approach that applies and calibrates the principle of trust based on whether the robot has been certified (Section III.E). The chapter applies these legal principles to the first generation of surgical robots, which still depend on commands from humans to take action and do not contain fully self-learning components. The conclusion (Section IV) looks to the future and offers some brief suggestions about how to deal with likely developments in autonomous surgical robots.
II Terminology: Robots in Surgery
A standardized definition of a robot does not exist.Footnote 8 There is some agreement that a robot is a mechanical object.Footnote 9 In 1920, Karel Capek introduced the term, derived from “robota” (servitude, slave labor),Footnote 10 in his work about artificial slaves who take over humankind.Footnote 11 Thereafter, the term was used in countless other works.Footnote 12 The modern use of “robot” includes the requirement that a robot has sensors to “sense,” processors to “think,” and actuating elements to “act.”Footnote 13 Under this definition, pure software, which does not interact physically with the world, does not count as a robot.Footnote 14 In general, robots are partly intelligent, adaptive machines that extend the human ability to act in the world.Footnote 15
Traditionally, robots are divided into industrial and service robots. A distinction is also made between professional service robots such as restaurant robots, and service robots for private use such as robot vacuums.Footnote 16 The robots considered in this chapter come under the category of service robots, which primarily provide services for humans as opposed to industrial processes. Among other things, professional service robots can interact with both unskilled and skilled personnel, as in the case of a service robot at a restaurant, or with exclusively skilled personnel, as with a surgeon in an operating room.
In discussions of robots and legal responsibility, the terms “agents” or “autonomous systems”Footnote 17 are increasingly used almost interchangeably with the term robot. To avoid definitional problems, only the term “robot” will be used in the chapter. However, the chapter does distinguish between autonomous and automated robots, and only addresses automated robots over which the surgeon exercises some control, not fully autonomous robots. Fully autonomous robots would have significantly increased autonomy and their own decision-making ability, whereas automated robots primarily execute predetermined movement patterns.Footnote 18 Fully autonomous robots that do not require human direction are not covered in this chapter because innovations in the field of surgery have not yet reached this stage,Footnote 19 although the conclusion will share some initial observations regarding how to approach the liability issues raised by autonomous robots.
III Legal Principles Regarding Due Diligence and Cooperation
Generally applicable principles of law regarding due diligence and cooperation are found in Swiss criminal law. Humans must act with due diligence, and if they do not, they can be liable for negligence. According to Swiss criminal law, any person is liable for a lack of care if he or she fails to exercise the duty of care required by the circumstances and commensurate with personal capabilities.Footnote 20 But while it is a ubiquitous principle that humans bear responsibility for their own behavior, we normally do not bear responsibility for someone else’s conduct. We must consider the consequences of our own behavior and prevent harm to others, but we are not our brother’s or sister’s keeper. The scope of liability can change if we share responsibilities, such as risk-prone work, with others.Footnote 21 And whether we are acting alone or in cooperation with others, we must exercise care commensurate with the circumstances and our personal capabilities.
III.A Basic Rules with Examples Regarding the Due Diligence of Surgeons
Unlike other jurisdictions, Swiss law explicitly defines the basic rule determining criminal negligence. In Article 12, paragraph 3 of the SCC, a “person commits a felony or misdemeanour through negligence if he fails to consider or disregards the consequences of his conduct due to a culpable lack of care. A lack of care is culpable if the person fails to exercise the care that is incumbent on him in the circumstances and commensurate with his personal capabilities.”Footnote 22
Determining a person’s precise due diligence obligations can be a complex endeavor. In Swiss criminal law, a myriad of due diligence rules underpin negligence and are used to specify the relevant obligations, including legal norms, private regulations, and a catch-all clause dubbed the risk principle (Gefahrensatz).Footnote 23 The risk principle establishes that everyone has to behave in a reasonable way that minimizes threats to the relevant legal interest as far as possible.Footnote 24 For example, a surgeon must take all reasonably possible precautions to avoid increasing a pre-existing danger to the patient.Footnote 25
To apply the risk principle, the maximum permissible risk must be determined.Footnote 26 For this purpose, the general risk range must first be established, and this range is limited by human skill;Footnote 27 no one can be reproached for failing to prevent a risk in spite of doing everything humanly possible (ultra posse nemo tenetur).Footnote 28 The risk range is therefore limited by society’s understanding of the permissible risk and by the abilities of a capable, psychologically and physically normal person; no superhuman performance is expected.Footnote 29 However, if a person’s ability falls short of what a situation requires, he or she should refrain from performing the activity.Footnote 30 In the context of medical personnel, a surgeon who is not familiar with the use of robots may not perform such an operation.
As the law does not list the exact duties of care of a surgeon, it is left to the courts to specify in more detail the content and scope of the medical duties of care based on the relevant statutes and regulations. In that respect, it is not of significance whether the treatment is governed by public or private law.Footnote 31
III.B Due Diligence Standards Specific to Surgeons
Swiss criminal law applies in the medical field, and every healthcare professional who harms a patient intentionally or with criminal negligence can be liable.Footnote 32 Surgery is an activity that is, in principle, hazardous, and a surgeon may be prosecuted if he or she, consciously or unconsciously,Footnote 33 neglects a duty of care.Footnote 34 According to the Swiss Federal Supreme Court, the duty of care when applying conventional methods of treatment is based on “the circumstances of the individual case, i.e., the type of intervention or treatment, the associated risks, the discretionary scope and time available to the physician in the individual case, as well as his objectively expected education and ability to perform.”Footnote 35
This reference by the Swiss Federal Supreme Court to the physician’s education and ability to perform does not indicate that the standard is entirely subjective. Rather, the physician should be assessed according to the knowledge and skills assumed to be available to representatives of his or her specialty at the time the measures are taken.Footnote 36 This objective approach creates an ongoing obligation of further education for surgeons.
Part of a surgeon’s obligation is that they owe the patient a regime of treatment that complies with the generally recognized state of medical art (lex artis),Footnote 37 determined at the time of treatment. Lex artis is the guiding principle for establishing due diligence in an individual case in Swiss criminal law.Footnote 38 It encompasses the entire medical procedure, from examination and diagnosis through the therapeutic decision and the implementation of treatment, and, in the case of surgeons, from preparation of the operation to aftercare.Footnote 39 The standard is therefore not what is individually possible and reasonable, but the care required according to medical indications and best practice.Footnote 40 A failure to meet this medical standard constitutes a breach of the duty of care. Legal regulation, such as the standards of the Medical Professions Act (“MedBG”),Footnote 41 especially Article 40 lit. a, may be used to determine the respective state of medical art. Together, the regulatory provisions provide for the careful and conscientious practice of the medical profession.Footnote 42
Doctors must also observe and not exceed the limits of their own competence. A surgeon must recognize when they are not able to perform a surgery and need to consult a specialist. This obligation includes the duty to cooperate with other medical personnel, because performing an operation without the required expertise is a breach of duty of care in itself.Footnote 43 As with other areas of medical care, the surgeon’s obligations do not exceed the human ability to foresee events and to influence them in a constructive way.Footnote 44
If there are no legal standards for an area of medical practice, courts may refer to guidelines from medical organizations.Footnote 45 In practice, courts usually refer to the private guidelines of the Swiss Academy of Medical SciencesFootnote 46 and the Code of Conduct of the Swiss Medical Association (“FMH”).Footnote 47 Additionally, general duties derived from court decisions, such as “practising the art of medicine according to recognized principles of medical science and humanity,” can be used in a secondary way to articulate a doctor’s specific due diligence obligation.Footnote 48
III.C Due Diligence of a Surgeon in Robot-Assisted Surgery
New technologies have long been making appearances in operating rooms. Arthrobot assisted in surgery for the first time in 1983; responding to voice commands, the robot was able to immobilize patients by holding them steady during orthopedic surgery.Footnote 49 Arthrobots are still in use today.Footnote 50
The introduction of robots to surgery accomplishes two main aims: (1) they perform more accurate medical procedures; and (2) they enable minimally invasive surgeries, which in turn increase surgeon efficacy and patient comfort by allowing a faster recovery. A doctor is, generally, not responsible for the dangers and risks that are inherent in every medical action and in the illness itself.Footnote 51 However, the surgeon’s obligation of due diligence applies when using a robot. The chapter argues that the precise standards of care should differ depending on whether the surgeon has control of the robot’s actions or the robot reacts independently to its environment, and depending on the extent of the surgeon’s control, including the ability to intervene in a procedure.Footnote 52
The next section introduces and explains the functioning of several examples of surgical robots. These robots qualify as medical devices under Swiss law,Footnote 53 and as such are subject to statutes governing medical devices. Medical devices are defined as instruments, equipment, software, and other objects intended for medical use.Footnote 54 Users of medical devices must take all measures required by the state of the art in science and technology to ensure that the devices pose no additional risk. The lex artis for treatment incorporating robots under Swiss criminal law requires users to apply technical aids lege artis and operate them correctly. For example, when the robot is used again at a later time, its functionality and correct reprocessing must be checked.Footnote 55 A surgeon does not have to be a trained technician, but he or she must have knowledge of the technology used, similar to the way that a driver must “know” a car but need not be a mechanic.
On its own, the concept of lex artis does not imply specific obligations; their specific parameters must be determined based on the individual circumstances. According to Article 45, paragraph 1 of the Therapeutic Products Act (TPA), a medical device must not endanger the health of patients when used as intended. If a technical application becomes standard in the field, falling below or not complying with that standard (lex artis) is classified as a careless action.Footnote 56 A lack of knowledge of the technology, as well as a lack of control over a device during an operation, leads to liability for taking on a task one cannot safely perform (“Übernahmeverschulden”).Footnote 57
A final aspect of the surgeon’s obligations regarding surgical robots is that a patient must always be informedFootnote 58 about the robot before an operation, and the duty of documentationFootnote 59 must be complied with. Although the precise due diligence obligations of surgeons always depend on the circumstances of individual cases, the typical duties of care regarding two different kinds of robots that incorporate elements of remote control, and the situation in which a robot provides a warning to the surgeon, are outlined below.
III.C.1 Remote-Controlled Robots
The kind of medical robot prevalent today is the remote-controlled robot, also referred to in the medical literature as a telemanipulation system. Such robots are controlled completely and remotely by the individual surgeon,Footnote 60 usually from a short distance away via joysticks. An example of a remote-controlled robot, DaVinci, was developed by the company Intuitive and is primarily used in the fields of urology and gynecology. DaVinci does not decide what maneuver to carry out; it is completely controlled by the surgeon, who works from an ergonomic 3D console using joysticks and foot pedals.Footnote 61 The surgeon’s commands are thus translated directly into actions by the robot. In this way, the robot makes it possible for the surgeon to make smaller incisions and achieve greater precision.
What is the due diligence obligation of a surgeon making use of remote-controlled robots? Remote-controlled robots such as the DaVinci, which have no independence and are not capable of learning, do not present any ambiguities in the law. If injury occurs, the general Swiss criminal law of liability for negligence holds the surgeon responsible. The robot’s arms are considered an extension of the hands of the surgeon, who remains in complete control of the operation.Footnote 62 In fact, the surgeon has always needed tools such as scalpels to operate. Today, thanks to technological progress, the tool has simply become more sophisticated. The surgeon’s duties of care remain the same with a remote-controlled robot as without, and can be stated as follows:Footnote 63 the surgeon must know how the robot works and be able to operate it. Imposing full liability on the surgeon is appropriate here, as the surgeon is in complete control of the robot.
According to Dr. med. Stephan Bauer, a surgeon needs training with DaVinci to work the robot, including at least 15 operations with the console control to become familiar with the robot, and 50 more to be able to operate it correctly.Footnote 64 The surgeon must also attend follow-up training and regular continuing education in order to fulfil his or her duty of care. This degree of training is not currently specified in any medical organization’s guidelines, but it is usually recommended by the manufacturer. The surgeon must also be able to instruct and supervise his or her surgical team sufficiently, and should not use a remote-controlled robot if he or she has insufficient knowledge of the type of operation it will be used in. Lastly, the surgeon must be able to complete the operation without the robot. These principles are basic aspects of any kind of medical due diligence in Switzerland, and they must apply to any kind of modern medicine, such as the use of surgical robots.Footnote 65
Medical doctors who do not fulfil the duty of care and supervision for a remote-controlled robot can be held criminally responsible to the same degree as if they had used a scalpel directly on the patient’s body. If, however, injury occurs due to a malfunction of the robot, such as movements that do not comply with the surgeon’s instructions or a complete failure during the operation, the manufacturer,Footnote 66 or the person responsible for ensuring the regular maintenance of the device,Footnote 67 could be held criminally responsible.
III.C.2 Independent Surgical Robots
Some surgical robots in use today have dual capabilities. They are programmed by the responsible surgeon in advance and carry out that programming without further instruction, but they can also perform certain tasks independently, based on the combined functioning of their sensors and their general programming. The surgeon initially plans and programs the robot’s motion sequences, and the robot carries out those steps, but it may also have the ability to act without instruction from the surgeon. These robots are referred to here as “independent robots,” to indicate that their abilities are not limited to remote-controlled actions, and to distinguish them from fully autonomous robots capable of learning.
An example of an independent robot with dual capabilities is Smart Tissue Autonomous Robot (STAR),Footnote 68 which carries out pre-programmed instructions from the surgeon, but which can also automatically stitch soft tissue. Using force and motion sensors and cameras, it is able to react to unexpected tissue movements while functioning.Footnote 69 In 60 percent of cases, it does not require human assistance to do this stitching, while in the other cases, it only needs minimal amounts of input from the surgeon.Footnote 70 Although the stitching currently requires more time than the traditional technique by a human, it delivers better results.Footnote 71 Another example, Cold Ablation Robot-guided Laser Osteotome (CARLO),Footnote 72 is able to cut bones independently after receiving the surgeon’s instructions, but it can also use sensors to check whether the operation is going smoothly.Footnote 73 According to the manufacturer Advanced Osteotomy Tools (AOT),Footnote 74 CARLO is thus the “world’s first medical, tactile robot that can cut bone … with cold laser technology. The device allows the surgeon to perform bone operations with unprecedented precision, and in freely defined, curved and functional sectional configurations, which are not achievable with conventional instruments.”Footnote 75 In summary, CARLO’s lasers open up new possibilities in bone surgery.
Independent robots have the advantage of extreme precision, and they have no human deficits such as fatigue, stress, or distraction. Among other benefits, the use of these robots decreases the duration of hospitalization, as well as the risks of infection and pain for the patient, because the incision and the injury to the tissue are minimal. When independent robots function as intended, surgery time is usually shortened, accidents due to trembling of the surgeon’s hands are reduced, and improved 3D visualization can be guaranteed.
As noted above, a surgeon is fully responsible for injury caused by a remote-controlled robot, in part because the surgeon has full control over the robot, which can be viewed as an extension of the surgeon’s own hands. What are a surgeon’s due diligence obligations when using an independent surgical robot? When independent surgical robots use their ability to make decisions on their own, should criminal responsibility be transferred to, or at least shared with, say, the manufacturer, particularly in cases where it was not possible for the surgeon to foresee the possible injury?
To the extent that independent robots are remote-controlled, i.e., simply carrying out the surgeon’s instructions, surgeons must continuously comply with the duties of care that apply when using a remote-controlled robot, including the accurate operation, control, and maintenance of the robot. A surgeon’s obligations regarding a careful operation while using an independent robot include, prior to the operation, the correct definition of the surgical plan and the programming of the robot. The surgeon must also write an operation protocol, disinfect the area, and make the first incision.Footnote 76 In addition, further duties arise under Swiss law because of the independence of the robot in carrying out the instructions the surgeon provided earlier, i.e., non-contemporaneous instructions.Footnote 77 During the operation, the surgeon must observe and monitor the movements of the robot so that he or she can intervene at any time upon realizing that harm may occur. According to the manufacturer AOT,Footnote 78 CARLO “allows the surgeon full control over this … osteotomy device at any time.” This standard of supervision is appropriate, because the surgeon’s supervision is needed to prevent injury, but as reviewed below, there are limits to what can be expected of a surgeon supervising a robot.
Even if a surgeon complies with the obligations to take precautions and to monitor the surgery while it is ongoing, a surgical robot may still make a mistake, e.g., cutting away healthy tissue. If it is established that a cautious and careful surgeon in the same position would not have been able to regain control of the robot and avoid the injury, the surgeon is deemed not to have violated his or her duty of care or to have acted in a criminally negligent manner.Footnote 79 If this occurs, no criminal charges will be brought against the surgeon. This standard is also appropriate, because proper supervision could not have prevented the injury.
III.C.3 Due Diligence after a Robot Warning
Per the principle of lex artis, a surgeon using any kind of surgical robot is required to be knowledgeable about the functionality of the robot, including its emergency and safety functions and its messages and warning functions.Footnote 80 A human surgeon using a robot for surgery cannot blindly trust the technology, and current law requires the surgeon to supervise and check whether his or her intervention is required and whether a change of plan is necessary. In the event that the robot fails, or issues a warning signal, the human must complete the surgery without the assistance of the robot. If the robot issues an alert, the human surgeon must always be capable of checking whether such notification is correct and of reacting adequately.Footnote 81 If the human surgeon is not capable of taking over, Swiss law imposes liability according to a sort of organizational negligence, the “Übernahmeverschulden”: the principle that a person who takes on a task that he or she cannot handle properly acts negligently if harm is caused.Footnote 82 If an alert is ignored because the surgeon does not understand its significance or is not monitoring adequately, the surgeon likewise acts in a criminally negligent manner.
If the surgeon perceives the robot’s alert but assesses that the robot’s advice is wrong, the surgeon may override it. There is a saying in Switzerland that also applies, although not completely, to a surgeon who relies on a surgical robot: “Trust is good, verification is better.” In a clearly established cooperation between a surgeon and a robot, if the surgeon decides not to follow an alert from the robot, the surgeon needs a valid justification. For example, if CARLO notifies the surgeon that the bone cannot be cut in a certain way and the surgeon decides to proceed anyway, there would need to be a documented justification for his or her decision to overrule the robot.
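The workflow just described, namely that the surgeon must perceive an alert, verify it, and either follow it or override it with a documented justification, can be summarized in a short, purely illustrative sketch. It is not based on any real surgical-robot interface; the class and method names are invented, and the point is only to show the decision points and the documentation duty discussed above.

# Illustrative sketch of the alert-handling duty discussed above.
# Hypothetical interface; no real surgical-robot API is described.
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AlertRecord:
    alert: str
    followed: bool
    justification: str | None
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

class SurgeonConsole:
    def __init__(self):
        self.log: list[AlertRecord] = []   # duty of documentation

    def handle_alert(self, alert: str, surgeon_agrees: bool, justification: str | None = None):
        if surgeon_agrees:
            # Follow the robot's recommendation.
            self.log.append(AlertRecord(alert, followed=True, justification=None))
            return "follow robot recommendation"
        # Overriding the robot requires a documented, valid justification.
        if not justification:
            raise ValueError("Override without documented justification: potential breach of duty of care")
        self.log.append(AlertRecord(alert, followed=False, justification=justification))
        return "override robot recommendation"

console = SurgeonConsole()
print(console.handle_alert("bone cannot be cut along planned path", surgeon_agrees=True))
print(console.handle_alert("bone cannot be cut along planned path", surgeon_agrees=False,
                           justification="intraoperative imaging shows the planned cut remains safe"))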
While the current requirement of surgeon supervision of robots is justified generally, the law needs some adjustment. There must be a limit to a surgeon’s obligation to constantly monitor and question robot alerts, because otherwise a surgeon–robot cooperation would be unworkably inefficient. It would also result in unjustifiable legal obligations, based on a superhuman expectation that the surgeon monitors every second of the robot’s action. Surgeons are considered to be the “guarantors of supervision,”Footnote 83 which means that they are expected to control everything that the robot does. But when it is suitably established that robots perform more accurately than the average human medical professional in the field, the human must be allowed to step out of the process to some degree. For example, a surgeon would always need to go through the whole operating plan to be sure that robots such as STAR or CARLO are functioning properly. However, this obligation to double-check the robot should not apply to every minute movement the robot makes, as an obligation like this would be contrary to the purpose of innovative technology such as surgical robots, which were invented precisely for the purposes of greater accuracy and time-saving.
Additionally, when it is established that a surgical robot performs consistently without making unacceptable mistakes, there will be a point where it is wiser for the surgeon not to second-guess the robot and, in the case of a warning or alert, to follow its directions. In fact, ignoring the directions of a surgical robot that is part of the medical state of the art and acts correctly to an acceptable degree is likely to lead to liability for negligence, if not intent.
III.D Limiting the Surgeon’s Due Diligence Obligations regarding Surgical Robots through the Principle of Trust (Vertrauensgrundsatz)?
The surgeon’s obligation of supervision currently imposes excessive liability for the use of surgical robots because, as discussed above, while surgeons rightfully have obligations to monitor the robot, they should not be required to check every movement the robot makes before it proceeds. The chapter argues that in the context of robot supervision, variations of the principle of trust (Vertrauensgrundsatz) should apply to limit the surgeon’s criminal liability.
When a surgeon works with human team members, the legitimate expectation is that individuals are responsible only for their own conduct and not that of others. The principle of trust is a foundational legal concept, one that enables effective cooperation by identifying spheres of responsibility and limiting the duties of due diligence to those spheres. It relieves individuals from having to evaluate the risk-taking of every individual in the team in every situation, and allows for the effective division of expertise and labor. The principle of trust was developed in the context of road traffic regulation, but it has widespread relevance and is applied today in medical law as well as other areas.Footnote 84
The principle of trust has limits and does not provide a carte blanche justifying all actions. If there are concrete indications that trust is unjustified, one must analyze and address that situation.Footnote 85 An example regarding surgical robots might be the DaVinciFootnote 86 robot. It has been in use for a long time, but if a skilled surgeon notices that the robot is defective, the surgeon must intervene and correct the defect.
The limitations of due diligence arising out of the principle of trust are well established in medical law, an environment where many participants work together based on a division of expertise and labor. In an operating room, several different kinds of specialists are normally at work, such as anesthesiologists, surgeons, and surgical nurses. The principle of trust in this environment limits responsibility to an individual’s own area of expertise and work.Footnote 87
One way of understanding the division of labor in surgery is that the primary area is the actual task, i.e., the operation, and the secondary area is supervisory, i.e., being alert to and addressing the misconduct of others.Footnote 88 Supervisory responsibility can be imposed horizontally (surgeon–surgeon) or vertically (surgeon–nurse), depending on the position a person occupies in the operating room. An example of the horizontal division of labor in the medical context would be if several doctors are assigned equal and joint control, with all having an obligation to coordinate the operation and monitor one another. If an error is detected, an intervention must take place, and if no error is detected, the competence of the other person can be trusted.Footnote 89 With vertical division of labor, a delegation to surgical staff such as assistants or nursing professionals requires supervisory activities such as selection, instruction, and monitoring. The important point here is that whether supervision is horizontal or vertical, the applicability of the principle of trust is not predicated upon constant control.Footnote 90
So far, the principle of trust has been applied only to the behavior of human beings. This chapter argues that it should also be applied to surgical robots, where lex artis requires it. First, as a general principle, delegation of certain activities must be permitted. Surgeons cannot perform an operation entirely on their own, as this would, in itself, be a mistake in treatment.Footnote 91 Second, regarding robots in particular, given the degree to which surgical robots offer better surgical treatment, their use should form part of the expected standard of medical treatment.
But can robots, even certified robots, be equated with another human in terms of trustworthiness? Should a surgeon trust the functioning of a robot, and in what situations is such trust warranted? The chapter argues that a variation of the principle of trust should be applied to a surgeon’s use of surgical robots. Specifically, although the principle of trust does not currently extend to robots, an exception should be created for robots that have been certified as safe by a competent authority, referred to here as certification-based trust. Before and until certification is awarded, the principle of mistrust (Misstrauensgrundsatz) should apply. This approach would also impose greater responsibility on the surgeon if, e.g., the robot used by the surgeon were still in a trial phase or had a lower level of approval from the relevant authorities.Footnote 92
The concept of certification-based trust is supported by the principle of permissible risk. People die in the operating room because medical and surgical procedures carry a certain degree of risk to health and life; in Switzerland, this risk falls within the permissible risk.Footnote 93 There is no reason why this level of acceptable risk should not also apply to surgical robots. According to Olaf Dössel:Footnote 94
[t]rust in technology is well founded if (a) the manufacturer has professionally designed, constructed and operated the machinery, (b) safety and reliability play an important role, (c) the inevitable long-term fatigue has been taken into account, and (d) the boundary conditions of the manufacturer remain within the framework established when the machinery was designed.
A certification-based trust approach is also consistent with other current practices; e.g., cooperating with newcomers in a field always requires a higher duty of care. Once the reliability and safety of surgical robots become sufficiently established in practice, the principle of trust should be applied, defining the surgeon’s due diligence obligations within appropriate parameters.
III.E Certified for Trust
This chapter argues that surgeons working with surgical robots can develop a legitimate expectation of trust, consistent with principles of due diligence, if the robot they use is certified. This approach to surgeon liability places increased importance on the process of medical device certification, which is discussed further here.
Certification of medical devices is a well-developed area. In addition to the TPAFootnote 95 and the Medical Devices Ordinance,Footnote 96 other standards apply, including Swiss laws and ordinances, international treaties, European directives, and other international requirements.Footnote 97 Together, these instruments define the safety requirements for the production and distribution of medical devices.Footnote 98
Swiss law requires that manufacturers keep up with the current state of scientific and technical knowledge, and comply with applicable standards when distributing the robot.Footnote 99 Manufacturers of surgical robots must successfully complete a conformity assessment procedure in Switzerland.
A robot with a CE-certification can be placed on the market in Switzerland and throughout the European Union.Footnote 100 A CE-certification mark means that a product has been “assessed by the manufacturer and deemed to meet EU safety, health and environmental protection requirements.”Footnote 101 For the robot to be used in an operating room in Switzerland, a CE-certificationFootnote 102 must be issued by an independent certification body.Footnote 103 After introducing the robot to the market, the manufacturer remains obliged to check its product.Footnote 104
This chapter argues that a surgeon’s due diligence obligations when using a surgical robot should be limited by a principle of trust, and that the principle should apply when the robot is certified. A certification-based trust approach is consistent with Dössel’s suggestion that trust in technology is well-founded if, inter alia, the manufacturer has professionally designed, constructed, and operated the machinery.Footnote 105 It is currently not an accepted point of law that the CE-certification is a sufficient basis for the user to trust the robot and not be held criminally responsible, but the chapter suggests that as a detailed, well-established standard, the CE-certification is an example of a certification that could form the basis of application of the principle of trust.
If the principle of certification-based trust is adopted, the surgeon would still retain other due diligence obligations, including the duty to inform patients about the risks involved in a robot’s use.Footnote 106 This particular duty will likely become increasingly important over time, as the performance range of surgical robots increases.
IV Conclusion
Today, lex artis requires surgeons to ensure the performance of the robot assistant and comply with its safety functions. The human surgeon must maintain the robot’s functionality and monitor it during a medical operation and be ready to take over if needed. Requiring surgeons to supervise the robots they use is a sound position, but surgeons should not be expected to monitor the robot’s every micro-movement, as that would interfere with the functioning of surgical robots and the benefits to patients. However, under current Swiss law, the surgeon is liable for all possible injury, unless the robot’s movements do not comply with the surgeon’s instructions or there is a complete failure of the robot during the operation.
Surgeons working with surgical robots are therefore accountable for robotic action to an unreasonable degree, even though the robot is used to enhance the quality of medical services. A strange picture thus emerges in Swiss criminal law. In a field where robotics drives inventions that promise to make surgery safer, surgeons who use robots run a high risk of criminal liability if the robot inflicts injury. Conversely, if the surgeon does not rely on new technology and performs an operation alone that could generally be performed better and more safely by a robot, the surgeon could also be liable. This contradictory state of affairs requires regulatory reform, with a likely candidate being the application of certification-based trust, which would confine the surgeon’s liability within appropriate limits.
This chapter has addressed issues raised by the robots being used in operating rooms today, including remote-controlled and independent surgical robots. It has not addressed more advanced, self-learning robots. Given that the law already requires reform regarding today’s robots, even larger legal issues will arise when it becomes necessary to determine who is responsible in the event of injury by autonomous robots,Footnote 107 those capable of learning and making decisions. In that context, it will be more difficult to determine whether a malfunction was due to the original programming, subsequent robot “training,”Footnote 108 or other environmental factors.Footnote 109 Surgeons may also find that robots capable of learning act in unpredictable ways, making harm unavoidable even with surgeon supervision. In the case of unpredictable robot action, a surgeon should arguably be able to rely on the technology and avoid criminal negligence, provided it has a CE-certification. Ever-increasing demands of due diligence, such as constant monitoring, are not desirable for today’s or tomorrow’s robots, because the robot is supposed to relieve the surgeon’s workload and should be considered competent to do so if it is certified.
I The Responsibility Gap
The use of artificial intelligence (AI) makes our lives easier in many ways. Search engines, driver-assistance systems in cars, and robots that clean the house on their own are just three examples of devices that we have become reliant on, and there will undoubtedly be many more variants of AI accompanying us in our daily lives in the near future. Yet these normally benevolent AI-driven devices can suddenly turn into dangerous instruments: self-driving cars may cause fatal accidents, navigation software may mislead human drivers and land them in dangerous situations, and a household robot may leave the home on its own and create risks for pedestrians and drivers on the street. One cannot help but agree with the pessimistic prediction that “[a]s robotics and artificial intelligence (AI) systems increasingly integrate into our society, they will do bad things.”Footnote 1 If a robot’sFootnote 2 malfunctioning can be proved to be the result of inadequate programmingFootnote 3 or testing, civil and even criminal liability of the human being responsible for manufacturing or controlling the device can provide an adequate solution – if it is possible to identify an individual who can be blamed for being reckless or negligent in producing, coding, or training the robot.
But two factors make it unlikely that an AI device’s harmful action can always be traced back to the fault of an individual human actor. First, many persons, often belonging to different entities, contribute to getting the final product ready for action; if something goes wrong, it is difficult to even identify the source of malfunctioning, let alone an individual who culpably caused the defect. Second, many AI devices are designed to learn from experience and to optimize their ability to reach the goals set for them by collecting data and drawing “their own conclusions.”Footnote 4 This self-teaching function of AI devices greatly enhances their functionality, but also turns them, at least to some extent, into black boxes whose decision-making and actions can be neither predicted nor completely explained after the fact. Robots can react in unforeseeable ways, even if their human manufacturers and handlers did everything they could to avoid harm.Footnote 5 It can be argued that putting a device into the hands of the public without being able to predict exactly how it will perform constitutes a basis for liability, but among other issues it is not clear whether this liability ought to be criminal liability.
This chapter considers two novel ways of imposing liability for harm caused by robots: holding robots themselves responsible for their actions, and corporate criminal responsibility (CCR). It will be argued that it is at present neither conceptually coherent nor practically feasible to subject robots to criminal punishment, but that it is in principle possible to extend the scope of corporate responsibility, including criminal responsibility if recognized in the relevant jurisdiction, to harm caused by robots controlled by corporations and operating for their benefit.
II Robots as Criminals?
To resolve the perceived responsibility gap in the operation of robots, one suggestion has been to grant legal personhood to AI devices, which could make them liable for the harm they bring about. The issue of recognizing E-persons was discussed within the European Union when the European Parliament presented this option.Footnote 6 The idea has not been taken up, however, in the EU Commission’s 2021 Proposal for an Artificial Intelligence Act,Footnote 7 which mainly relies on strictly regulating the marketing of certain AI devices and holding manufacturers and users responsible for harm caused by them. Although the notion of imprisoning, fining, or otherwise punishing AI devices must appear futuristic,Footnote 8 some scholars favor the idea of extending criminal liability to robots, and the debate about this idea has reached a high intellectual level.Footnote 9 According to recent empirical research, the notion of punishing robots is supported by a fairly large percentage of the general population, even though many people are aware that the normal purposes of punishment cannot be achieved with regard to AI devices.Footnote 10
II.A Approximating the Responsibilities of Machines and Legal Persons
As robots can be made to look and act more and more like humans, the idea of approximating their movements to human acts becomes more plausible – which might pave the way to attributing the notion of actus reus to robots’ activities. By the same token, robots’ ways of processing information and turning it into a motive for action may approach the notion of mens rea. The law might, as Ryan Abbott and Alex Sarch have argued, “deem some AIs to possess the functional equivalent of sufficient reasoning and decision-making abilities to manifest insufficient regard” of others’ protected interests.Footnote 11
Probably the most sophisticated argument to date in favor of robots’ criminal responsibility has been advanced by Monika Simmler and Nora Markwalder.Footnote 12 These authors reject as ideologically based any link between the recognition of human free will and the ascription of culpability;Footnote 13 they instead subscribe to a strictly functionalist theory of criminal law that bases criminal responsibility on an “attribution of freedom as a social fact.”Footnote 14 In such a system, the law is free to “adopt a concept of personhood that depends on the respective agent’s capacity to disappoint normative expectations.”Footnote 15 The essential question then becomes “whether robots can destabilize norms due to the capacities attributed to them and due to their personhood and if they produce a conflict that requires a reaction of criminal law.”Footnote 16 The authors think that this is a probable scenario in the foreseeable future: robots could be “experienced as ‘equals’ in the sense that they are constituted as addressees of normative expectations in social interaction like humans or corporate entities are today.”Footnote 17 It would then be a secondary question in what symbolic way society’s disapproval of robots’ acts should be expressed. It might well make sense to convict an AI device of a crime – even if it lacks the sensory, intellectual, and moral capacity to feel the impact of any traditional punishment.Footnote 18 Since the future is notoriously difficult to foresee, this concept of robots’ criminal responsibility can hardly be disproved, however unlikely it may appear today that humans could have normative expectations of robots and that disappointment of these expectations would call for the imposition of sanctions. However, in the brave new functional world envisioned by these authors, the term “criminal sanctions” appears rather old-fashioned, because it relies on concepts more relevant to human beings, such as censure, moral blame, and retribution (see Section II.B).
One recurring argument in favor of imposing criminal responsibility on AI devices is the asserted parallel to the criminal responsibility of corporations (CCR).Footnote 19 CCR will be discussed in more detail in the following section of this chapter, but it is addressed briefly here because calls for the criminal responsibility of corporations and of robots are reactions to a similar dilemma. In each case, it is difficult to trace responsibility for causing harm to an individual person. If, e.g., cars produced by a large manufacturing firm are defective and cause fatal accidents, it is safe to say that something must have gone wrong in the processes of designing, testing, or manufacturing the relevant type of car. But it may be impossible to identify the person(s) responsible for causing the defect, especially since the companies involved are unlikely to actively assist in the police investigation of the case. As we have seen, harm caused by robots leads to similar problems concerning the identification of responsible humans in the background. Regarding commercial firms, the introduction of CCR, which has spread from the United States to many other jurisdictions,Footnote 20 has helped to resolve the problem of the diffusion of responsibility by making corporations criminally liable for any fault of their officers or even – under the respondeat superior doctrine – of their employees. The main goals of CCR are to obtain redress for victims and give corporations a strong incentive to improve their compliance with relevant legal rules. If criminal liability is imposed on the corporation whenever it can be proved that one of its employees must have caused the harm, it can be expected that corporations will do everything in their power to properly select, train, and supervise their personnel. The legal trick that leads to this desired result is to treat corporations as or like responsible subjects under criminal law, even though everyone knows that a corporation is a mere product of legal rules and therefore cannot physically act, cannot form an intent, and cannot understand what it means to be punished. If applying this fiction to corporations has beneficial effects,Footnote 21 why should this approach not be used for robots as well?
II.B Critical Differences
However attractive that idea sounds, one cannot help but note that there exist significant differences between corporations and AI devices. Regarding the basic requirements of criminal responsibility, robots at their present stage of development cannot make free decisions, whereas corporations can do so through their statutory organs.Footnote 22 At the level of sanctioning, corporations can – through their management – be deterred from committing further offenses, they can compensate victims, and they can improve their operation and become better corporate citizens. Robots have none of these abilities,Footnote 23 although it is conceivable that their performance can be improved through reprogramming, retraining, and special supervision. The imposition of retributive criminal sanctions on robots would presuppose, however, that they can in some way feel punished and can link the consequences visited upon them to some prior malfeasance on their part. Today’s robots lack this key feature of punishability, although their grandchildren may well be imbued with the required sensitivity to moral blame.
The differences between legal persons and robots do not necessarily preclude the future possibility of treating robots as criminal offenders. But the fact that corporations, although they are not human beings, can be recognized as subjects of the criminal law does not per se lend sufficient plausibility to the idea of granting the same status to today’s robots.
There may, however, be another way of establishing criminal responsibility for robots’ harmful actions: corporations that use AI devices and/or benefit from their services could be held responsible for the harm they cause. To make this argument, one would have to show that: (1) corporate responsibility as such is a legitimate feature of the law; and (2) corporations can be held responsible for robots as well as for their human agents.
III Corporate Criminal Responsibility for Robots
III.A Should There Be Corporate Criminal Responsibility?
Before we investigate this option, we should reflect on the legitimacy of the general concept of CCR. If that concept is ethically or legally doubtful or even indefensible, we should certainly refrain from extending its reach from holding corporations responsible for the acts of their human employees to holding them responsible for their robots.
Two sets of theories have been developed to justify imposing criminal responsibility on legal persons for the harmful acts of their managers and employees. One approach regards certain decision-makers within the corporation as its alter ego and therefore proposes that the acts of these persons be attributed to the corporation; the other targets the corporation itself and bases its responsibility on its criminogenic or improper self-organization.Footnote 24 These two theories are not mutually exclusive. Austrian law, for example, combines both approaches: its statute on the responsibility of corporations imposes criminal liability on a corporation if a member of its management or its control board committed a criminal offense on the corporation’s behalf or in violation of its obligations, or if an employee unlawfully committed a criminal offense and the management, by applying due diligence, could have prevented the offense or made its commission significantly more difficult.Footnote 25
Whereas in the United States CCR has been recognized for more than a century,Footnote 26 its acceptance in Europe has been more hesitant.Footnote 27 In Germany, a draft law on corporate responsibility with semi-criminal features failed in 2021 due to internal dissent within the coalition government of the time.Footnote 28 Critics claim that CCR violates fundamental principles of criminal law.Footnote 29 They maintain that a corporation cannot be a subject of criminal law because it can neither act nor make moral judgments.Footnote 30 Moreover, a fine imposed on a corporation is said to be unfair because it does not punish the corporation itself, but its shareholders, creditors, and employees, who cannot be blamed for the faults of managers.Footnote 31
It can hardly be denied that CCR is a product of crime-preventive pragmatism rather than of theoretically consistent legal thinking. The attribution of managers’ and/or employees’ harmful acts to the corporation, cloaked with sham historical dignity by the Latin phrase respondeat superior, is difficult to justify because it leads to a duplication of responsibility for the same crime.Footnote 32 It is doubtful, moreover, whether the moral blame inherent in criminal punishment can adequately be addressed to a legal person, an entity that has no conscience and cannot feel guilt.Footnote 33 An alternative basis for CCR could be a strictly functional approach to criminal law which links the responsibility of corporations to the empirical and/or normative expectation that they abide by the legal norms applying to their scope of activities.Footnote 34
There exists an insoluble conflict between the pragmatic and political interest in nudging corporations toward legal compliance and the theoretical problems of extending the criminal law beyond natural persons. It is thus ultimately a policy question whether a state chooses to limit the liability of corporations for faults of their employees to tort law, extends it to criminal law, or places it somewhere in between,Footnote 35 as has been done in Germany.Footnote 36 In what follows, I assume that the criminal law version of CCR has been chosen. In that case, the further policy question arises as to whether CCR should include criminal responsibility for harm caused by AI devices used by the corporation.
III.B Legitimacy of CCR for Robots
As we have seen, retroactively identifying the fault of an individual human actor can be as difficult when an AI device was used as when some unknown employee of a corporation may have made a mistake.Footnote 37 The problem of allocating responsibility for robot action is further exacerbated by the black box element in self-teaching robots used on behalf of a corporation.Footnote 38
It could be argued that the responsibility gap can be closed by treating the robot as a mere device employed by a human handler, which would turn the issue of a robot’s harmful action into a regular instance of corporate liability. But even assuming that the doctrine of respondeat superior provides a sufficient basis for holding a corporation liable for the faults of its employees, extending that doctrine to AI devices employed by humans would raise additional doubts about a corporation’s responsibility. It may not be known how the robot’s harmful action came about, whether any human was at fault,Footnote 39 or whether the company could have prevented the employee’s potential malfeasance.Footnote 40 It is therefore unlikely that many cases of harm caused by an AI device could be traced back to recklessness or criminal negligence on the part of a human employee for whom the corporation can be made responsible.
Effectively bridging the responsibility gap would therefore require the more radical step of treating a company’s robots like its employees, with the consequence of linking CCR directly to the robot’s malfeasance. This step could set into motion CCR’s beneficial compliance mechanism: if the robot’s fault is transferred by law to the company that employs it, that company will have a strong incentive to design, program, and constantly monitor its robots to make sure that they function properly.
How would a corporation’s direct responsibility for actions of its robots square with the general theories on CCR?Footnote 41 The alter ego-type liability model based on a transfer of the responsibility of employees to the corporation is not well suited to accommodating activities of robots because their actions lack the quality of blameworthy human decision-making.Footnote 42 Transfer of liability would work only if the mere existence of harmful activity on the part of an employee or robot would be sufficient to trigger CCR, i.e., in an absolute liability model. Such a model would address the difficulties raised by corporations using robots in situations where the robot’s behavior is unpredictable; however, it is difficult to reconcile absolute liability with European concepts of criminal justice. A more promising approach to justifying CCR for robots relates to the corporation’s overall spirit of lawlessness and/or its inherently defective organization as grounds for holding it responsible.Footnote 43 It is this theory that might provide an explanation for the corporation’s liability for the harmful acts of its robots; if a corporation uses AI devices, but fails to make sure that they operate properly, or uses a robot when it cannot predict that the robot will act safely, there is good reason to impose sanctions on the corporation for this deficiency in its internal organization. This is true even where such AI devices contain elements of self-teaching. Who but the corporation that employs them should be able to properly limit and supervise this self-teaching function?
In this context, an analogy has been discussed between a corporation’s liability for robots and a parent’s or animal owner’s liability for harm caused by children or domestic animals.Footnote 44 Even though the reactions of a small child or a dog cannot be completely predicted, it is only fair to hold the parent or dog owner responsible for harm that could have been avoided by training and supervising the child or the animal so as to minimize the risks emanating from them.Footnote 45 Similar considerations suggest a corporation’s liability for its robots, at least where it can be shown that the robot had a recognizable propensity to cause harm. By imposing penalties on corporations in such cases, the state can effectively induce companies to program, train, and supervise AI devices so as to avoid harm.Footnote 46 Moreover, if there is insufficient liability for harm by robots, business firms might be tempted to escape traditional CCR by replacing human employees by robots.Footnote 47
III.C Regulating and Limiting Robot CCR
Before embracing an extension of CCR from employees to robots, however, a counterargument needs to be considered. The increased deployment of AI devices is by and large a beneficial development, saving not only cost but also human labor in areas where such labor is not necessarily satisfying for the worker, as in conveyor-belt manufacturing. Robots do carry inherent risks, but commercial interests will give the companies that deploy them strong incentives to control those risks. Adding criminal responsibility might produce an over-reaction, inhibiting the use and further development of AI devices and thus stifling progress. An alternative to CCR for robot malfunction may be for society to accept certain risks associated with the widespread use of AI devices and to restrict liability to providing compensation for harm through insurance.Footnote 48 These considerations do not necessarily preclude the introduction of a special regime of corporate liability for robots, but they counsel restraint. Strict criminal liability for robotic faults would have a chilling effect on the development of robotic solutions and therefore does not recommend itself as an adequate solution.
Legislatures should therefore limit CCR for robots to instances where human agents of the corporation were at least negligent with regard to designing, programming, and controlling robots.Footnote 49 Only if that condition is fulfilled can it be said that the corporation deserves to be punished because it failed to organize its operation so as to minimize the risk of harm to others. Potential control over the robot by a human agent of the corporation is thus a necessary condition for the corporation’s criminal liability. Mihailis E. Diamantis plausibly explains that “control” in the context of algorithms means “the power to design the algorithm in the first place, the power to pull the plug on the algorithm, the power to modify it, and the power to override the algorithm’s decisions.”Footnote 50 But holding every company that has any of these types of control liable for any harm that the robot causes, Diamantis continues, would draw the net wider than “sound policy or fairness would dictate.”Footnote 51 He therefore suggests limiting liability for algorithms to companies which not only control a robot, but also benefit from its activities.Footnote 52 The combination of these factors is in fact perfectly in line with the requirements of traditional CCR, where liability presupposes that the corporation had a duty to supervise the employee who committed the relevant fault and that the employee’s activity or culpable passivity was meant to benefit the corporation.
This approach appropriately limits CCR to corporations that benefit from the employment of AI devices. Even so, liability should not be strict in the sense that a corporation is subject to punishment whenever any of its robots causes harm and no human actor responsible for its malfunction can be identified.Footnote 53 In line with the model of CCR that is based on a dysfunctional organization of the corporation, criminal liability should require a fault on the part of the corporation that has a bearing on the robot’s harmful activity.Footnote 54 This corporate fault can consist, e.g., in a lack of proper training or oversight of the robot, or in an unmonitored self-teaching process of the AI device.Footnote 55 There should in any event be proof that the corporation was at least negligent concerning its obligation to do everything in its power to prevent robots that work for its benefit from causing harm to others. In other words, CCR for robots is proper only where it can be shown that the corporation could, with proper diligence, have avoided the harm. This model of liability could be adopted even in jurisdictions that require some fault on the part of managers for CCR, because the task of properly training and supervising robots is so important that it should be organized on the management level.
Corporate responsibility for harm caused by robots differs from CCR for activities of humans and therefore should be regulated separately by statute. The law needs to determine under what conditions a corporation is to be held responsible for robot malfeasance. The primary issue that needs to be addressed is the necessary link between a corporation and an AI device. Taking an automated car as an example, there are several candidates for potential liability for its harmful operation: the firm that designed the car, the manufacturing company, the programmer of the software, the seller, and the owner of the car, if that is a corporation. If it can be proved that the malfunctioning of the car was caused by an agent of one of these companies, e.g., the programmer was reckless in installing defective software, that company will be liable under the normal CCR rules of the relevant jurisdiction. Special “Robot CCR” will come into play only if the car’s aberration cannot be traced to a particular human source, for example, if the reason for the malfunction remains inexplicable even to experts, if there was a concurrence of several causes, or if the harmful event resulted from the car’s unforeseeable defective self-teaching. In any of these instances, it must be determined which of the corporate entities identified above should be held responsible.
IV Conclusion
We have found that robots cannot at present be subject to criminal punishment and cannot trigger the criminal liability of corporations under traditional rules of CCR for human agents. Even if the reach of the criminal law is extended beyond natural persons to corporations, the differences between corporations and robots are so great that a legal analogy between them cannot be drawn. But it is in principle possible to extend the scope of corporate responsibility, including criminal responsibility where it is recognized in the relevant jurisdiction, to harm caused by AI devices controlled by corporations and operating for their benefit. Given the general social utility of robots, however, corporate liability for harm caused by them should not be unlimited, but should at least require an element of negligence in programming, testing, or supervising the robot.