
Part I - Human–Robot Interactions and Substantive Law

Published online by Cambridge University Press:  03 October 2024

Sabine Gless
Affiliation:
Universität Basel, Switzerland
Helena Whalen-Bridge
Affiliation:
National University of Singapore
Type: Chapter
Human–Robot Interaction in Law and Its Narratives: Legal Blame, Procedure, and Criminal Law, pp. 3–86
Publisher: Cambridge University Press
Print publication year: 2024
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC 4.0 https://creativecommons.org/cclicenses/

1 The Challenges of Human–Robot Interaction for Substantive Criminal Law: Mapping the Field

Tatjana Hörnle Footnote *
I Mapping the Field: Preliminary Remarks

Technological innovations are likely to increase the frequency of human–robot interactions in many areas of social and economic relations and humans’ private lives. Criminal law theory and legal policy should not ignore these innovations. Although the main challenge is to design civil, administrative, and soft law instruments to prevent harm in human–robot interactions and to compensate victims, the developments will also have some impact on substantive criminal law. Criminal lawsFootnote 1 should be scrutinized and, if necessary, amendments and adaptations recommended, taking the two dimensions of criminal law and criminal law theory, the preventive and the retrospective, into account.

The prevention of accidents is obviously one of the issues that needs to be addressed, and regulatory offenses in the criminal law could contribute to this end. Regulatory offenses are part of a larger legal toolbox that can be called upon to prevent risks and harms caused by malfunctioning technological innovations and unforeseen outcomes of their interactions with human users (see Section II.A). In addition to the risk of accidents, some forms of human–robot interaction, such as automated weapon systems and sex robots, are also criticized for other reasons, which invites the question of whether these types of robots should be banned (Section II.B). If we turn to the second, retrospective dimension of criminal law, the major question, again, is liability for accidents. Under what conditions can humans who constructed, programmed, supervised, or used a robot be held criminally liable for harmful outcomes caused by the robot (Section III.A)? Other questions are whether existing criminal laws can be applied to humans who commit crimes with robots as tools (Section III.B), how dilemmatic situations should be evaluated (Section III.C), and whether self-defense against robots is possible (Section III.D). From the perspective of criminal law theory, the scope of inquiry should be even wider and extend beyond questions of criminal liability of humans for harmful events involving robots. Might it someday be possible for robots to incur criminal liability (Section III.E)? Could robots be victims of crime (Section III.F)? And, as robots become increasingly involved in the day-to-day life of humans and become subject to legal responsibility, might this also have a long-term impact on how human–human interactions are understood (Section IV)?

The purpose of this introductory chapter is to map the field in order to structure current and future discussions about human–robot interactions as topics for substantive criminal law. Marta Bo, Janneke de Snaijer, and Thomas Weigend analyze some of these issues in more depth in their chapters. Before we turn to the mapping exercise, the term “robot” deserves some attention,Footnote 2 including delineation from the broader concept of artificial intelligence (AI). Per the Introduction to the volume, which references the EU AI Act, AI is “software that is developed with one or more of [certain] approaches and techniques … and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”Footnote 3 The consequences of the growing use of information technology (IT) and AI are discussed in many areas of law and legal policy.Footnote 4 In the field of criminal justice, AI systems can be utilized at the pre-trial and sentencing stages as well as for making decisions about parole, to provide information on the risk of reoffending.Footnote 5 Whether these systems analyze information more accurately and comprehensively than humans, and the degree to which programs based on machine learning inherit biases, are issues under discussion.Footnote 6 The purpose of this volume is not to examine the relevance of these new technologies to criminal law and criminal justice in general; the focus is somewhat narrower. Robots are the subject. Entities that are called robots can be based on machine learning techniques and AI, technologies already in use today, but they also have another crucial feature. They are designed to perform actions in the real worldFootnote 7 and thus must usually be embodied as physical objects. It is primarily this ability to interact physically with environments, objects, and the bodies of humans that calls for safeguards.

II The Preventive Perspective: Regulating Human–Robot Interactions
II.A Preventing Accidents

Regulation is necessary to prevent accidents caused by malfunctioning robots and unforeseen interactive effects. Some of these rules might need to be backed up by sanctions. It is almost impossible to say much more on a general level about potential accidents and what should be prohibited or regulated to minimize the risk of harm, as a more detailed analysis would require covering a vast area. The exact nature of important “dos and don’ts” that might warrant enforcement by criminal laws obviously depends on the kinds of activities that robots perform, e.g., in manufacturing, transportation, healthcare, households, and warfare, and the potential risks involved. The more complex a robot’s task, the more that can go wrong. The kind and size of potential harm depends, among other things, on the physical properties of robots, such as weight and speed, the frequency with which they encounter the general public, and the closeness of their operations to human bodies. Autonomous vehicles and surgical robots, e.g., require tighter regulation than robot vacuum cleaners.

The task of developing proper regulations for potentially dangerous human–robot interaction is challenging. It begins with the need to determine the entity to whom rules and prohibitions are addressed: manufacturers; programmers; those who rely on robots as tools, such as owners or users; third parties who happen to encounter robots, e.g., in the case of automated cars, other road users; or malevolent intruders who, e.g., hack computer systems or otherwise manipulate the robot’s functions. Another question is who can – and who should – develop legal standards. Not only legislatures, but also criminal and civil courts can and do contribute to rule-setting. Their rulings, however, generally target a specific case. Systematic and comprehensive regulation seems to call for legislative action. But before considering the enactment of new laws, attention should be paid to existing criminal laws, i.e., general prohibitions that protect human life, bodily integrity, property, etc. These prohibitions can be applied to some human failures that involve robots, but due to their unspecific wording and broad scope, they do not give sufficient guidance for our scenarios. More specific norms of conduct, norms tailored to the production, programming, and use of robots, would certainly be preferable. This leads again to the question of what institution is best situated to develop these norms of conduct. This task requires constant attention to and monitoring of rapid technological developments and emerging trends in robotics. Ultimately, traditional modes of regulation by means of laws might not be ideally suited to respond effectively to emerging technologies. Another major difficulty is that regulations in domestic laws do not make much sense for products circulating in global markets. This may prompt efforts to harmonize national laws.Footnote 8 As an alternative, soft law in the form of standards and guidelines proposed by the private sector or regulatory agencies might be a way to achieve faster and perhaps also more universal agreement among the producers and users of robots.Footnote 9

For legal scholars and legal policy, the upshot is that we should probably not expect too much from substantive criminal law as an instrument to control the use of new technologies. Effective and comprehensive regulation to prevent harm arising out of human–robot interactions, and the difficult task of balancing societal interest in the services provided by robots against the risks involved, do not belong to the core competencies of criminal law.

II.B Beyond Accidents

Beyond the prevention of accidents, other concerns might call for criminal prohibitions. If there are calls to suppress certain conduct rather than to regulate it, the criminal law is a logical choice. Strict prohibitions would make sense if one were to fundamentally object to the creation of AI and autonomous robots, in part because the long-term consequences for humankind might be serious,Footnote 10 although it may be too late for that in some instances. A more selective approach would be to demand not a categorical decision against all research in the field of AI and the production of advanced robots in general, but rather efforts to suspend researchFootnote 11 or to stop the production of some kinds of robots. An example of the latter approach would be prohibiting devices that apply deadly force against humans, such as remotely controlled or automated weapons systems, addressed in this volume by Marta Bo.Footnote 12 Not only is the possibility of accidents a particularly serious concern in this area, but also the reliability of target identification, the precision of application, and the control of access are of utmost importance. Even if autonomous weapon systems work as intended, they might in the long run increase the death toll in wars, and ethical doubts regarding war might grow if the humans responsible for aggressive military operations do not face personal risks.Footnote 13 Arguments that point to the risk of remote harm are often based on moral concerns. This is most evident in the discussions about sex robots. Should sex robots in general or, more particularly, sex robots that imitate stereotypical characteristics of female prostitutes, be banned?Footnote 14 The proposition of such prohibitions would need to be supported by strong empirical and normative arguments, including explanations as to why sex robots are more problematic than sex dolls, whether it is plausible to expect such robots to have negative effects on a sizable number of persons, why sexual activity involving humans and robots is morally objectionable, and even if convincing arguments of this kind could be made, why the state should engage in the enforcement of norms regarding sexual morality.

For legal theorists, it is also interesting to ask whether, at some point, policy debates will no longer focus solely on remote harms to other human beings, collective human concerns such as gender equality, or human values and morals, but will instead expand to include the interests or rights of individual robots as well. Take the example of sex robots. Could calls to prohibit sexual interactions between humans and robots refer to the dignity of the robot and its right to dignity? Might we experience a re-emergence of debates about slavery? At present, it would certainly be premature to claim that humans and robots should be treated as equivalent, but discussions about these issues have already begun.Footnote 15 As long as robots are distinguishable from humans in several dimensions, such as bodies, social competence, and emotional expressivity, it is unlikely that the rights humans grant one another will be extended to them. As long as there are no truly humanoid robots, i.e., robots that resemble humans in all or most physiological and psychological dimensions,Footnote 16 tremendous cognitive abilities alone are unlikely to trigger widespread demands for equal treatment such as the recognition of robots’ rights. For the purpose of this introductory chapter, it must suffice to point out that thinking in this direction would also be relevant to debates concerning the need to criminalize selected conduct in order to protect the interests of robots.

III The Retrospective Perspective: Applying Criminal Law to Human–Robot Interactions

The harmful outcomes of human–robot interactions do not only provide an impetus to consider creating preventive regulation; they can also give rise to criminal investigations and, ultimately, to proceedings against the humans involved. The criminal liability of robots is also discussed below.

III.A Human Liability for Unforeseen Accidents
III.A.1 Manufacturers and Programmers

If humans have been injured or killed through interaction with a robot, if property has been damaged, or if other legally protected rights have been disregarded, questions of criminal liability will arise. It could, of course, be argued that the more pressing issue is effective compensation, a goal achievable by means of tort law and mandatory insurance, perhaps in combination with the legal construct of robots as “electronic persons” with their own assets.Footnote 17 Serious accidents, however, are also likely to engage criminal justice officials who need to clarify whether a human suspect or, depending on the legal system, a corporation has committed a criminal offense.

The first group of potential defendants could be those who built and programmed the robot. If the applicable criminal law does not include a strict liability regulatory offense, criminal liability will depend on the applicability of general norms, such as those governing negligent or reckless conduct. The challenges for prosecutors and courts are manifold, and they include establishing causality, attributing outcomes to acts and omissions, and specifying the standard of care that applied to the defendant’s conduct.Footnote 18 Determining the appropriate standard of care requires knowledge of what could have been done better on the technical level. In addition, difficult, wide-ranging normative considerations are relevant. How much caution do societies require, and how much caution may they require when innovative products such as robots are introduced?Footnote 19 As a general rule, standards of care should not be so strict as to have a chilling effect on progress, since the robots created by manufacturers and programmers can relieve humans of manual, tiresome, and tedious work and compensate for the lack of qualified employees in many areas, and the overall effect of robot use can be beneficial to the public, e.g., by reducing traffic accidents once the stage of automated driving has been reached. Such fundamental issues of social utility should be one criterion when determining the standards of care upon which the criminal liability of manufacturers and programmers is predicated.Footnote 20

Marta Bo focuses on the criminal liability of programmers in Chapter 2, “Are Programmers in or out of Control? The Individual Criminal Responsibility of Programmers of Autonomous Weapons and Self-Driving Cars.” She asks whether programmers could be accused of crimes against persons if automated cars or automated weapons cause harm to humans or if the charge of indiscriminate attacks against civilians can be made. She describes the challenges facing programmers of automated vehicles and autonomous weapons and discusses factors that can undermine their control over outcomes. She then turns her attention to legal assessments, including criteria such as actus reus, the causal nexus between programming and harm caused by automated vehicles and autonomous weapons, and negligence standards. Bo concludes that it is possible to use criminal law criteria for imputation to test whether programmers had “meaningful human control.”

An obvious challenge for criminal law assessment is to determine the degree to which, in the case of machine learning, programmers can foresee developments in a robot’s behavior. If the path from the original algorithm to the robot’s actual conduct cannot be reconstructed, it might be worth considering whether the mere act of exposing humans to encounters with a somewhat unpredictable and thus potentially dangerous robot could, without more, be labeled criminally negligent. While this might be a reasonable approach when such robots first appear on the market, the question of whether it would be a good long-term solution merits careful consideration. It seems preferable to focus on strict criteria for licensing self-learning robots, and on civil law remedies such as compensation that do not require proof of individual negligence, and abandon the idea of criminal punishment of humans just for developing and marketing robots with self-learning features.

III.A.2 Supervisors and Users

Humans who are involved in a robot’s course of action in an active cooperative or supervisory way could, if an accident occurs, incur criminal liability for recklessness or negligence. Again, for prosecutors and courts, a frequent problem will be to identify the causes of an accident and the various roles of the numerous persons involved in the production and use of the robot. A “diffusion of responsibility”Footnote 21 is almost impossible to avoid. Also, the question will arise as to what can realistically be expected of humans when they supervise and use robots equipped with AI and machine learning technology. How can they keep up with self-learning robots if the decision-making processes of such robots are no longer understandable and their behavior hard to predict?Footnote 22

In Chapter 3, “Trusting Robots: Limiting Due Diligence Obligations in Robot-Assisted Surgery under Swiss Criminal Law,” Janneke de Snaijer describes one area where human individuals might be held criminally liable as a consequence of using robots. She focuses on the potential and the challenges of robot-assisted surgery. The chapter introduces readers to a technology already in use in operating rooms: that of automated robots helping surgeons achieve greater surgical precision. These robots can perform limited tasks independently, but are not fully autonomous. De Snaijer concentrates primarily on criminal liability for negligence, which depends on how the demands of due diligence are defined. She describes general rules of Swiss criminal law doctrine that provide some guidelines for requirements of due diligence. The major problem she identifies is how much trust surgeons should be allowed to place in the functioning of the robots with which they cooperate. Concluding that Swiss law holds surgeons accountable for robots’ actions to an unreasonable degree, she diagnoses contradictory standards in that surgeons are held responsible but required by law to use new technology to improve the quality of surgery.

In other contexts, robots are given the task of monitoring those who use them, e.g., by detecting fatigue or alcohol consumption, and, if need be, issuing warnings. Under such circumstances, a human who fails to heed a warning and causes an accident may face criminal liability. Presuming negligence in such cases might have the effect of establishing a higher standard for humans carrying out an activity while under the surveillance of a robot than for humans carrying out the same activity without the surveillance function. It might also mean that the threshold for assuming recklessness, or, under German law, conditional intent,Footnote 23 will be lowered. An interesting question is the degree to which courts will allow leeway for human psychology, including perhaps a human disinclination to be bossed around by a machine.

III.A.3 Corporate Liability

In many cases, it will be impossible, or at least very difficult, to trace harm caused by a device based on artificial intelligence to the wrongful conduct of an individual human being who acted in the role of programmer, manufacturer, supervisor, or user. Thomas Weigend starts Chapter 4, entitled “Forms of Robot Liability: Criminal Robots and Corporate Criminal Responsibility,” with the diagnosis of a “responsibility gap.” He then examines the option of holding robots criminally liable before going a step further and considering the introduction of corporate criminal responsibility for the harmful actions of robots. Weigend begins with the controversial discussion of whether corporations should be punished for crimes committed by employees. He then develops the idea that the rationales used to justify the far-reaching attribution of employee conduct to corporations could be applied to the conduct of robots as well. He contends that criminal liability should be limited to cases in which humans acting on behalf of the corporation were (at a minimum) negligent regarding the designing, programming, or controlling of robots.

III.B Human Liability for the Use of a Robot with the Intent to Commit a Crime

Robots can be purposefully used to commit crimes, e.g., to spy on other persons.Footnote 24 If the accused human intentionally designed, manipulated, used, or abused a robot to commit a crime, he or she can be held criminally liable for the outcome.Footnote 25 The crucial point in such cases is that the human who employs the robot uses it as a tool.Footnote 26 If perpetrators pursue their criminal goals with the use of a tool, it does not matter whether the tool is of the traditional, merely mechanical kind, such as a gun, or whether it has some features of intelligence, such as an automated weapon that is, e.g., reprogrammed for a criminal purpose.

While this is clearly the case for many criminal offenses, particularly those that focus on outcomes such as causing the death of another person, the situation with regard to other criminal offenses is not so clear. It will not always be obvious that a robot will be able to fulfil the definitional elements of all offenses. It could, e.g., be argued that sexual offenses that require bodily contact between offender and victim cannot be committed if the offender causes a robot to touch another person in a sexual way. In such cases, it is a matter of interpretation if wrongdoing requires the physical involvement of the human offender’s body. I would answer this particular question in the negative, because the crucial point is the penetration of the victim’s body. However, answers must be developed for different crimes separately, based on the legal terminology used and the kind of interest protected.

III.C Human Liability for Foreseen but Unavoidable Harm

In the situation of an unsolvable, tragic dilemma, in which there is no alternative harmless action, a robot might injure humans as part of a planned course of action. The most frequently discussed examples of these dilemmas involve automated cars in traffic scenarios in which all available options, such as staying on track or altering course, will lead to a crash with human victims.Footnote 27 If such events have been anticipated by human programmers, the question arises of whether they could perhaps be held criminally liable, should the dilemmatic situation in fact occur. When human drivers in a comparable dilemma knowingly injure others to save their own lives or the lives of their loved ones, criminal law systems recognize defenses that acknowledge the psychological and normative forces of strong fear, the will to survive, and personal attachments.Footnote 28 The rationale of such defenses does not apply, however, if a programmer, who is not in acute distress, decides that the automated car should always safeguard passengers inside the vehicle, and thus chooses the course that will lead to the death of humans outside the car.

If a human driver has to choose between swerving to save the lives of two persons on the road directly in front of the car, thus hitting and killing a single person on the sidewalk, or staying the course, thus hitting and killing both persons on the road, criminal law doctrine does not provide clear-cut answers. Under German doctrine, which displays a built-in aversion to utilitarian reasoning, the human driver who kills one person to save two would risk criminal punishment.Footnote 29 Whether this would change once the assessment shifts from the human driver at the wheel of the car at the crucial moment to the vehicle’s programmer is an interesting question. German law is shaped by a strong preference for remaining passive, i.e., one may not become active in order to save the greater number of lives, but for the programmer, this phenomenological difference dissolves completely. At the time the automated car or other robot is manufactured, it is simply a decision between programming option A or programming option B for dilemmatic situations.Footnote 30

III.D Self-Defense against Robots

If a human faces imminent danger of being injured or otherwise harmed by a robot, and the human knowingly or purposefully damages or destroys that robot, the question arises as to whether this situation is covered by a justificatory defense. In some cases, a necessity/lesser evil defense could be raised successfully if the danger is substantial. In other cases, it could be questioned if a lesser evil defense would be applicable, e.g., if someone shoots down a very expensive drone to prevent it from taking pictures.Footnote 31 Under such circumstances, another justificatory defense might be that of self-defense. In German criminal law, self-defense does not require a proportionality test.Footnote 32 In the case of an unlawful attack, it is permissible to destroy valuable objects even if the protected interest might be of comparatively minor importance. The crucial question in the drone case is whether an “unlawful attack”Footnote 33 or “unlawful force by another person”Footnote 34 requires that the attacker is a human being.

III.E Criminal Liability of Robots

In the realm of civil liability, robots could be treated as legal persons, and this status could be combined with the duty of producers or owners to endow robots with sufficient funds to compensate potential accident victims.Footnote 35 A different question is whether a case could also be made for the capacity of robots to incur criminal liability.Footnote 36 This is a highly contested proposal and a fascinating topic for criminal law theorists. Holding robots criminally liable would not be compatible with traditional features of criminal law: its focus on human agency and the notion of personal guilt, i.e., Schuld, which is particularly prominent in German criminal law doctrine. Many criminal law theorists defend these features as essential to the very idea of criminal law and thus reject the idea of permitting criminal proceedings against robots. But this is at best a weak argument. Criminal law doctrine is not set in stone; it has adapted to changes in the real world in the past and can be expected to do so again in the future.

The crucial question is whether there are additional principled objections to subjecting robots to criminal liability. Scholars typically examine the degree to which the abilities of robots are similar to those of humansFootnote 37 and ask whether robots fulfil the requirements of personhood, which is defined by means of concepts such as autonomy and free will.Footnote 38 These positions could be described as status-centered, anthropocentric, and essentialist. Traditional concepts of personhood rely on ontological claims about what humans are and the characteristics of humans qua humans. As possible alternatives, notions such as autonomy and personhood could also be described in a more constructivist manner, as the products of social attribution,Footnote 39 and it is worth considering whether the criminal liability of robots could at least be constructed for a limited subsection of criminal law, i.e., strict liability regulatory offenses, for legal systems that recognize such offenses.Footnote 40

Instead of exploring the degree of a robot’s human-ness or personhood, the alternative is to focus on the functions of criminal proceedings and punishments. In this context, the crucial question is whether some goals of criminal punishment practices could be achieved if norms of conduct were explicitly addressed to robots and if defendants were not humans but robots. As we will see, it makes sense to distinguish between the preventive functions of criminal law, such as deterrence, and the expressive meaning of criminal punishment.

The purpose of deterring agents is probably not easily transferrable from humans to robots. Deterring someone presupposes that the receiver of the message is actually aware of a norm of conduct but is inclined not to comply with it, because other incentives seem more attractive or other personal motives and emotions guide his or her decision-making. AI will probably not be prone to the kind of multi-layered, sometimes blatantly irrational type of decision-making practiced by humans. For robots, the point is to identify the right course of conduct, not to avoid being side-tracked by greed and emotions. But preventive reasoning could, perhaps, be brought to bear on the humans involved in the creation of robots who might be indirectly influenced. They might be effectively driven toward higher standards of care in order to avoid public condemnation of their products’ behavior.Footnote 41

In addition to their potentially preventive effects, criminal law responses have expressive features. They communicate that certain kinds of wrongful conduct deserve blame, and more specifically they reassure crime victims that they were indeed wronged by the other party to the interaction, and not that they themselves made a mistake or simply suffered a stroke of bad luck.Footnote 42 Some of the communicative and expressive features of criminal punishment might retain their functions, and address the needs of victims, if robots were the addressees of penal censure.Footnote 43 Even if robots will not for a long time, if ever, be capable of feeling remorse as an emotional state, the practice of assigning blame could persist with some modifications.Footnote 44 It might suffice if robots had the cognitive capacity to understand what their environment labels as right and wrong and the reasons behind these judgments, and if they adapted their behavior to norms of conduct. Communication would be possible with smart robots that are capable of explaining the choices they have made.Footnote 45 In their ability to respond and to modify parameters for future decision-making, advanced robots are distinguishable from others not held criminally liable, e.g., animals, young children, and persons with severe mental illness.

Admittedly, criminal justice responses to the wrongful behavior of robots cannot be the same as the responses to delinquent humans. It is difficult, e.g., to conceive of a “hard treatment” component of criminal punishmentFootnote 46 that would apply to robots, and such a component, if conceived, might well be difficult to enforce.Footnote 47 It could, however, be argued that punishment in the traditional sense is not necessary. For an entirely rational being, the message that conduct X is wrongful and thus prohibited, and the integration of this message into its future decision-making, would be sufficient. The next question would be if blaming robots and eliciting responses could provide some comfort to human victims and thus fulfil their emotional needs. It is conceivable that a formal, solemn procedure might serve some of the functions that traditional criminal trials fulfil, at least in the theoretical model, but study would be required to determine whether empathy or at least the potential for empathy are prerequisites for calling a perpetrator to account. Criminal law theorists have argued that robots could only be held criminally liable if they were able to understand emotional states such as suffering.Footnote 48 In my view, a deeply shared understanding of what it means, emotionally, to be hurt is not necessarily essential for the communicative message delivered to victims who have been harmed by a robot.

Another question, however, is whether a merely communicative “criminal trial,” without the hard treatment component of sanctions, would be so unlike criminal punishment practices as we know them that the general human public would consider it pointless and not worth the effort, or even a travesty. This question moves the inquiry beyond criminal law theory. Answers would require empirical insight into the feasibility and acceptance of formal, censuring communication with robots. If designing procedures with imperfect similarities to traditional criminal trials would make sense, the question of criminal codes for robots should perhaps also be addressed.Footnote 49

III.F Robots as Victims of Crime

Another area that might require more attention in the future is the interpretation of criminal laws if the victim of the crime is not a human, as assumed by the legislators when they passed the law, but a robot. Crimes against personality rights, e.g., might lead to interesting questions. Might recording spoken words, a criminal offense under §201 of the Strafgesetzbuch (German Criminal Code), also be an offense if the speaker is a robot rather than a human being? Thinking in this direction would require considering whether advanced robots should be afforded constitutional and other rightsFootnote 50 and, should such a discussion gain traction, which rights these would be.

IV The Long-Term Perspective: General Effects on Substantive Criminal Law

The discussion in Section III above referred to criminal investigations undertaken after a specific human–robot interaction has caused or threatened to cause harm. From the perspective of criminal law theory, another possible development could be worth further observation. Over time, the assessment of human conduct, in general, might change, and perhaps we will begin to assess human–human interactions in a somewhat different light, once humanoid robots based on AI become part of our daily lives. At present, criminal laws and criminal justice systems are to different degrees quite tolerant with regard to the irrational features of human decision-making and human behavior. This is particularly true of German criminal law where, e.g., the fact that an offender has consumed drugs or alcohol can be a basis for considerable mitigation of punishment,Footnote 51 and offenders who are inclined to not consider possible negative outcomes of their highly risky behavior receive only a very lenient punishment or no punishment at all.Footnote 52 This tolerance of human imperfections might shrink if the more rational, de-emotionalized version of decision-making by AI has an effect on our expectations regarding careful behavior. At present, this is merely a hypothesis; it remains to be seen whether the willingness of criminal courts to accommodate human deficiencies really will decrease in the long term.

2 Are Programmers in or out of Control? The Individual Criminal Responsibility of Programmers of Autonomous Weapons and Self-Driving Cars

Marta Bo Footnote *
I Introduction

In March 2018, a Volvo XC90 vehicle that was being used to test Uber’s emerging automated vehicle technology killed a pedestrian crossing a road in Tempe, Arizona.Footnote 1 At the time of the incident, the vehicle was in “autonomous mode” and the vehicle’s safety driver, Rafaela Vasquez, was allegedly streaming television onto their mobile device.Footnote 2 In November 2019, the National Transportation Safety Board found that many factors contributed to the fatal incident, including failings from both the vehicle’s safety driver and the programmer of the autonomous system, Uber.Footnote 3 Despite Vasquez later being charged with negligent manslaughter in relation to the incident,Footnote 4 criminal investigations into Uber were discontinued in March 2019.Footnote 5 This instance is particularly emblematic of the current tendency to consider responsibility for actions and decisions of autonomous vehicles (AVs) as lying primarily with users of these systems, and not programmers or developers.Footnote 6

In the military realm, similar issues have arisen. For example, it is alleged that in 2020 an autonomous drone system, the STM Kargu-2, may have been used during active hostilities in Libya,Footnote 7 and that such autonomous weapons (AWs) were programmed to attack targets without requiring data connectivity between the operator and the use of force.Footnote 8 Although AW technologies have not yet been widely used by militaries, for several years, governments, civil society, and academics have debated their legal position, highlighting the importance of retaining “meaningful human control” (MHC) in decision-making processes to prevent potential “responsibility gaps.”Footnote 9 When debating MHC over AWs as well as responsibility issues, users or deployers are more often scrutinized than programmers,Footnote 10 the latter being considered too far removed from the effects of AWs. However, programmers’ responsibility increasingly features in policy and legal discussions, leaving many interpretative questions open.Footnote 11

To fill this gap in the current debates, this chapter seeks to clarify the role of programmers, understood simply here as a person who writes programmes that give instructions to computers, in crimes committed with and not by AVs and AWs (“AV- and AW-related crimes”). As artificial intelligence (AI) systems cannot provide the elements required by criminal law, i.e. the mens rea, the mental element, and the actus reus, the conduct element, including its causally connected consequence,Footnote 12 the criminal responsibility of programmers will be considered in terms of direct responsibility for commission of crimes, i.e., as perpetrators or co-perpetrators,Footnote 13 rather than vicarious or joint responsibility for crimes committed by AI. Programmers could, e.g., be held responsible on the basis of participatory modes of responsibility, such as aiding or assisting users in perpetrating a crime. Despite their potential relevance, participatory modes of responsibility under national and international criminal law (ICL) are not analyzed in this chapter, as that would require a separate analysis of their actus reus and mens rea standards. Finally, it must be acknowledged that as used in this chapter, the term “programmer” is a simplification. The development of AVs and AWs entails the involvement of numerous actors, internal and external to tech companies, such as developers, programmers, data labelers, component manufacturers, software developers, and manufacturers. These distinctions might entail difficulties in individualizing responsibility and/or a distribution of criminal responsibility, which could be captured by participatory modes of responsibility.Footnote 14

This chapter will examine the criminal responsibility of programmers through two examples, AVs and AWs. While there are some fundamental differences between AVs and AWs, there are also striking similarities. Regarding differences, AVs are a means of transport, implying the presence of people onboard, which will not necessarily be a feature of AWs. As for similarities, both AVs and AWs depend on object recognition technology.Footnote 15 Central to this chapter is the point that both AVs and AWs can be the source of incidents resulting in harm to individuals; AWs are intended to kill, are inherently dangerous, and can miss their intended target, and while AVs are not designed to kill, they can cause death by accident. Both may unintentionally result in unlawful harmful incidents.

The legal focus regarding the use of AVs is on crimes against persons under national criminal law, e.g., manslaughter and negligent homicide, and regarding the use of AWs, on crimes against persons under ICL, i.e., war crimes against civilians, such as those found in the Rome Statute of the International Criminal Court (“Rome Statute”)Footnote 16 and in the First Additional Protocol to the Geneva Conventions (AP I).Footnote 17 A core issue is whether programmers could fulfil the actus reus, including the requirement of causation, of these crimes. Given the temporal and spatial gap between programmer conduct and the injury, as well as other possibly intervening causes, a core challenge in ascribing criminal responsibility lies in determining a causal link between programmers’ conduct and AV- and AW-related crimes. To determine causation, it is necessary to delve into the technical aspects of AVs and AWs, and consider when and which of their associated risks can or cannot be, in principle, imputable to a programmer.Footnote 18 Adopting a preliminary categorization of AV- and AW-related risks based on programmers’ alleged control or lack of it over the behavior and/or effects of AVs and AWs, Sections II and III consider the different risks and incidents entailed by the use of AVs and AWs. Section IV turns to the elements of AV- and AW-related crimes, focusing on causation tests and touching on mens rea. Drawing from this analysis, Section V turns to a notion of MHC over AVs and AWs that incorporates requirements for the ascription of criminal responsibility and, in particular, causation criteria to determine under which conditions programmers exercise causal control over the unlawful behavior and/or effects of AVs and AWs.

II Risks Posed by AVs and Programmer Control

Without seeking to identify all possible causes of AV-related incidents, Section II begins by identifying several risks associated with AVs: algorithms, data, users, vehicular communication technology, hacking, and the behavior of bystanders. Some of these risks are also applicable to AWs.Footnote 19

In order to demarcate a programmer’s criminal responsibility, it is crucial to determine whether they ultimately had control over relevant behavior and effects, e.g., navigation and possible consequences of AVs. Thus, the following sections make a preliminary classification of risks on the basis of the programmers’ alleged control over them. While a notion of MHC encompassing the requirement of causality in criminal law will be developed in Section V, it is important to anticipate that a fundamental threshold for establishing the required causal nexus between conduct and harm is whether a programmer could understand and foresee a certain risk, and whether the risk that materialized was within the scope of the programmer’s “functional obligations.”Footnote 20

II.A Are Programmers in Control of Algorithm and Data-Related Risks in AVs?

Before turning to the risks and failures that might lie in algorithm design and thus potentially under programmer control, this section describes the tasks required when producing an AV, and then reviews some of the rules that need to be coded to achieve this end.

The main task of AVs is navigation, which can be understood as the AV’s behavior as well as the algorithm’s effect. Navigation on roads is mostly premised on rules-based behavior requiring knowledge of traffic rules and the ability to interpret and react to uncertainty. In AVs, automated tasks include the identification and classification of objects usually encountered while driving, such as vehicles, traffic signs, traffic lights, and road lining.Footnote 21 Furthermore, “situational awareness and interpretation”Footnote 22 is also being automated. AVs should be able “to distinguish between ordinary pedestrians (merely to be avoided) and police officers giving direction,” and conform to social habits and rules by, e.g., “interpret[ing] gestures by or eye contact with human traffic participants.”Footnote 23 Finally, there is an element of prediction: AVs should have the capability to anticipate the behavior of human traffic participants.Footnote 24

In AV design, the question of whether traffic rules can be accurately embedded in algorithms, and if so who is responsible for translating these rules into algorithms, becomes relevant in determining the accuracy of the algorithm design as well as attributing potential criminal responsibility. For example, are only programmers involved, or are lawyers and/or manufacturers also involved? While some traffic rules are relatively precise and consist of specific obligations, e.g., a speed limit represents an obligation not to exceed that speed,Footnote 25 there are also several open-textured and context-dependent traffic norms, e.g., regulations requiring drivers to drive carefully.Footnote 26
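To make the contrast concrete, the following minimal Python sketch is offered purely as an illustration: the function names, features, and thresholds are invented for exposition and are drawn from no actual AV software. It shows how a precise rule such as a speed limit maps directly onto code, whereas an open-textured norm such as “driving carefully” forces the programmer to choose proxy thresholds that the law itself does not supply.

```python
# Hypothetical sketch (invented names and thresholds, drawn from no actual AV
# codebase): a precise rule such as a speed limit maps directly onto a check,
# while an open-textured norm such as "drive carefully" has no obvious
# algorithmic equivalent and must be approximated by proxies the programmer chooses.

SPEED_LIMIT_KMH = 50.0  # a precise, legislated value

def violates_speed_limit(current_speed_kmh: float) -> bool:
    """A closed rule: the legal obligation translates into a single comparison."""
    return current_speed_kmh > SPEED_LIMIT_KMH

def is_driving_carefully(speed_kmh: float,
                         distance_to_pedestrian_m: float,
                         visibility_m: float) -> bool:
    """An open-textured norm: the thresholds below are the programmer's own
    interpretation of 'careful' driving, not values the statute specifies."""
    safe_gap_m = max(5.0, speed_kmh * 0.3)  # assumed proxy for prudence
    return distance_to_pedestrian_m > safe_gap_m and visibility_m > 30.0
```

On this stylized picture, it is the second function, not the first, in which the programmer exercises the kind of interpretive discretion that the attribution of responsibility discussed above would need to capture.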

AV incidents might stem from a failure of the AI to identify objects or correctly classify them. For example, the first widely reported incident involving an AV in May 2016 was allegedly caused by the vehicle sensor system’s failure to distinguish a large white truck crossing the road from the bright spring sky.Footnote 27 Incidents may also arise due to failures to correctly interpret or predict the behavior of others or traffic conditions, which may sometimes be interlinked with or compounded by problems of detection and sensing.Footnote 28 In turn, mistakes in both object identification and prediction might occur as a result of faulty algorithm design and/or derived from flawed data. In the former case, prima vista, if mistakes in object identification and/or prediction occur due to an inadequate algorithm design, the criminal responsibility of programmers could be engaged.

In relation to the latter, the increasing and almost dominant use of machine learning (ML) algorithms in AVsFootnote 29 means that issues of algorithms and data are interrelated. The performance of algorithms has become heavily dependent on the quality of data. A multitude of different algorithms are used in AVs for different purposes, with supervised and unsupervised learning-based algorithms often complementing one another. Supervised learning, in which an algorithm is fed instructions on how to interpret the input data, relies on a fully labeled dataset. Within AVs, the supervised learning models are usually: (1) “classification” or “pattern recognition algorithms,” which process a given set of data into classes and help to recognize categories of objects in real time, such as street signs; and (2) “regression,” which is usually employed for predicting events.Footnote 30 In cases of supervised learning, mistakes can arise from incorrect data annotation instead of a faulty algorithm design per se. If such incidents do occur,Footnote 31 the programmer arguably would not have been able to foresee those risks and should not be considered in control of the subsequent navigation decisions.
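The point about supervised learning and data annotation can be illustrated with the toy Python sketch below. The feature values and labels are made up for exposition and are not real AV training data; the sketch only shows that the trained classifier’s behavior is fixed by the human-supplied annotations, so a labeling error, rather than the learning algorithm itself, can be the source of a later misclassification.

```python
# Toy sketch of supervised "pattern recognition" (made-up feature values and
# labels, not real AV training data): the classifier's behavior is fixed by the
# human-supplied annotations, so a labeling error, rather than the learning
# algorithm itself, can be the source of a later misclassification.
from sklearn.tree import DecisionTreeClassifier

# Each row: [height_ratio, red_fraction, octagon_score] -- illustrative features only.
X_train = [
    [0.9, 0.8, 0.95],   # annotated by a human as a stop sign
    [0.9, 0.1, 0.05],   # annotated as a speed-limit sign
    [0.2, 0.0, 0.00],   # annotated as a road marking
]
y_train = ["stop", "speed_limit", "marking"]

model = DecisionTreeClassifier().fit(X_train, y_train)

# Had the first example been mislabeled "speed_limit", the trained model would
# reproduce that annotation error on similar inputs at run time.
print(model.predict([[0.88, 0.79, 0.90]]))   # expected: ['stop']
```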

Other issues may arise with unsupervised learningFootnote 32 where an ML algorithm receives unlabeled data and programmers “describe the desired behaviour and teach the system to perform well and generalise to new environments through learning.”Footnote 33 Data can be provided in the phase of simulating and testing, but also during the use itself by the end-user. Within such methods, “deep learning” is increasingly used to improve navigation in AVs. Deep learning is a form of unsupervised learning that “automatically extracts features and patterns from raw data [such as real-time data] and predicts or acts based on some reward function.”Footnote 34 When an incident occurs due to deep learning techniques using real data, it must be assessed whether the programmer could have foreseen that specific risk and the resulting harm, or whether it derived, e.g., from an unforeseeable interaction with the environment.

II.B Programmer or User: Who Is in Control of AVs?

As shown in the March 2018 Uber incident,Footnote 35 incidents can also derive from failures of the user to regain control of the AV, with some AV manufacturers attempting to shift the responsibility for ultimately failing to avoid collisions onto the AVs’ occupants.Footnote 36 However, there are serious concerns as to whether an AV’s user, who depending on the level of automation is essentially in an oversight role, is cognitively in the position to regain control of the vehicle. This problem is also known as automation bias,Footnote 37 a cognitive phenomenon in human–machine interaction, in which complacency, decrease of attention, and overreliance on the technology might impair the human ability to oversee, intervene, and override the system if needed.

Faulty human–machine interface (HMI) design, i.e., the technology which connects an autonomous system to the human, such as a dashboard or interface, could cause the inaction of the driver in the first place. In these instances, the driver could be relieved from criminal responsibility. Arguably, HMIs do not belong to programmers’ functional obligations and therefore fall outside of a programmer’s control.

There are phases other than actual driving where a user could gain control of an AV’s decisions. Introducing ethics settings into the design of AVs may ensure control over a range of morally significant outcomes, including trolley-problem-like decisions.Footnote 38 Such settings may be mandatorily introduced by manufacturers with no possibility for users to intervene and/or customize them, or they may be customizable by users.Footnote 39 Customizable ethics settings allow users “to manage different forms of failure by making autonomous vehicles follow [their] decisions” and their intention.Footnote 40
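One way to picture such a customizable ethics setting is the schematic Python sketch below. The parameter names and the two stylized options are invented for exposition and are not drawn from any real AV product; the sketch merely shows how a dilemmatic design choice can either be fixed by the manufacturer as a default or exposed to the user.

```python
# Purely hypothetical illustration of a "customizable ethics setting": the
# parameter names and options are invented for exposition, not drawn from any
# actual AV product, but they show how a design choice about dilemmatic
# situations can be shifted from the programmer or manufacturer to the user.
from dataclasses import dataclass

@dataclass
class EthicsSettings:
    # Default set by the manufacturer; a "customizable" design would let the
    # user change this value before the journey begins.
    prioritize_occupants: bool = True

def choose_maneuver(settings: EthicsSettings,
                    harm_to_occupants_if_stay: int,
                    harm_to_bystanders_if_swerve: int) -> str:
    """Stylized choice between two options in an unavoidable-collision scenario."""
    if settings.prioritize_occupants:
        # Protect the people inside the vehicle, whatever the cost outside it.
        return "swerve" if harm_to_occupants_if_stay > 0 else "stay"
    # Otherwise, minimize total expected harm.
    return "stay" if harm_to_occupants_if_stay <= harm_to_bystanders_if_swerve else "swerve"

# With the default setting, the vehicle swerves whenever staying would harm its
# occupants; switching prioritize_occupants to False changes the same scenario.
print(choose_maneuver(EthicsSettings(), harm_to_occupants_if_stay=1,
                      harm_to_bystanders_if_swerve=2))
```

Whether the decisive value is hard-coded by the manufacturer or set by the user bears directly on who, in the terminology of this chapter, exercises control over the outcome.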

II.C Are Some AV-Related Risks Out of Programmer Control?

There is a group of risks and failures that could be considered outside of programmer control. These include communications failures, hacking of the AV by outside parties, and unforeseeable bystander behavior. One of the next steps predicted in the field of vehicle automation is the development of software enabling AVs to communicate with one another and to share real-time data gathered from their sensors and computer systems.Footnote 41 This means that a single AV “will no longer make decisions based on information from just its own sensors and cameras, but it will also have information from other cars.”Footnote 42 Failures in vehicular communication technologiesFootnote 43 or inaccurate data collected by other AVs cannot be attributed to a single programmer, as they might fall beyond their responsibilities and functions, and also beyond their control.

Hacking could also cause AV incidents. For example, “placing stickers on traffic signs and street surfaces can cause self-driving cars to ignore speed restrictions and swerve headlong into oncoming traffic.”Footnote 44 Here, the criminal responsibility of a programmer could depend on whether the attack could have been foreseen and whether the programmer should have created safeguards against it. However, the complexity of AI systems could make them more difficult to defend from attacks and more vulnerable to interference.Footnote 45

Finally, imagine an AV that correctly follows traffic rules, but hits a pedestrian who unforeseeably slipped and fell onto the road. Such unforeseeable behavior of a bystander is relevant in criminal law cases on vehicular homicide, as it will break the causal nexus between the programmer and the harmful outcome.Footnote 46 In the present case, it must be determined which unusual behavior should be foreseen at the stage of programming, and whether standards of foreseeability in AVs should be higher for human victims.

III Risks Posed by AWs and Programmer Control

While not providing a comprehensive overview of the risks inherent in AWs, Section III follows the structure of Section II by addressing some risks, including algorithms, data, users, communication technology, hacking and interference, and the unforeseeable behavior of individuals in war, and by distinguishing risks based on their causes and programmers’ level of control over them. While some risks cannot be predicted, the “development of the weapon, the testing and legal review of that weapon, and th[e] system’s previous track record”Footnote 47 could provide information about the risks involved in the deployment of AWs. Some risks could be understood and foreseen by the programmer and therefore be considered under their control.

III.A Are Programmers in Control of Algorithm and Data-Related Risks in AWs?

Autonomous drones provide an example of one of the most likely applications of autonomy within the military domain,Footnote 48 and this example will be used to highlight the increasingly autonomous tasks in AWs. This section will address the rules to be programmed, and identify where some risks might lie in the phase of algorithm design.

The two main tasks being automated in autonomous drones are: (1) navigation, which is less problematic than on roads and a relatively straightforward rule-based behavior, i.e., they must simply avoid obstacles while in flight; and (2) weapon release, which is much more complex as “ambiguity and uncertainty are high when it comes to the use of force and weapon release, bringing this task in the realm of expertise-based behaviours.”Footnote 49 Within the latter, target identification is the most important function because it is crucial to ensure compliance with the international humanitarian law (IHL) principle of distinction, the violation of which could also cause individual criminal responsibility for war crimes. The principle of distinction establishes that belligerents and those executing attacks must distinguish at all times between civilians and combatants, and not target civilians.Footnote 50 In target identification, the two main automated tasks are: (1) object identification and classification on the basis of pattern recognition;Footnote 51 and (2) prediction, e.g., predicting that someone is surrendering, or based on the analysis of patterns of behavior, predicting that someone is a lawful target.Footnote 52

Some of the problems in the algorithm design phase may derive from translating the open-textured and context-dependentFootnote 53 rules of IHL,Footnote 54 such as the principle of distinction, into algorithms, and from incorporating programmer knowledge and expert-based rules,Footnote 55 such as those needed to analyze patterns of behavior in targeted strikes and translate them into code.

There are some differences compared with the algorithm design phase in AVs. Due to the relatively niche and context-specific nature of IHL, compared to traffic law which is more widely understood by programmers, programming IHL might require a stronger collaboration with outside expertise, i.e., military lawyers and operators.

However, similar observations to AVs can be made in relation to supervised and unsupervised learning algorithms. Prima vista, if harm results from mistakes in object identification and prediction based on an inadequate algorithm design, the criminal responsibility of the programmer(s) could be engaged. Depending on the foreseeability of such data failures to the programmer and the involvement of third parties in data labeling, and assuming mistakes could not be foreseen, criminal responsibility might not be attributable to programmers. Also similar to AVs, the increasing use of deep learning methods in AWs makes the performance of algorithms dependent on both the availability and accuracy of data. Low quality and incorrect data, missing data, and/or discrepancies between real and training data may be conducive to the misidentification of targets.Footnote 56 When unsupervised learning is used in algorithm design, environmental conditions and armed conflict-related conditions, e.g., smoke, camouflage, and concealment, may inhibit the collection of accurate data.Footnote 57 As with AVs, programmers of AWs may at some point gain sufficient knowledge and experience regarding the robustness of data and unsupervised machine learning that would subject them to due diligence obligations, but the chapter assumes that programmers have not reached that stage yet. In the case of supervised learning, errors in data may lie in a human-generated data feed,Footnote 58 and incorrect data labeling could lead to mistakes and incidents that might be attributable to someone, but not to programmers.

III.B Programmer or User: Who Is in Control of AWs?

The relationship between programmers and users of AWs presents different challenges from that between programmers and users of AVs. In light of current trends in AW development, which arguably point toward human–machine interaction rather than full autonomy of the weapon system, the debate has focused on the degree of control that militaries must retain over the weapon release functions of AWs.

However, control can be shared and distributed among programmers and users in different phases, from the design phase to deployment. As noted above, AI engineering in the military domain might require strong collaboration between programmers and military lawyers in order to accurately code IHL rules in algorithms.Footnote 59 Those arguing for the introduction of ethics settings in AWs, a debated proposal, maintain that ethics settings would “enable humans to exert more control over the outcomes of weapon use [and] make the distribution of responsibilities [between manufacturers and users] more transparent.”Footnote 60

Finally, given their complexity, programmers of AWs might be more involved than programmers of AVs in the use of AWs and in the targeting process, e.g., being required to update the system or implement some modifications to the weapon target parameters before or during the operation.Footnote 61 In these situations, it must be evaluated to what extent a programmer could foresee a certain risk entailed in the deployment and use of an AW in relation to a specific attack rather than just its use in the abstract.

III.C Are Some AW-Related Risks Out of Programmer Control?

In the context of armed conflict, it is highly likely that AWs will be subject to interference and attacks by enemy forces. A UN Institute for Disarmament Research (UNIDIR) report lists several pertinent examples: (1) signal jamming could “block systems from receiving certain data inputs (especially navigation data)”; (2) hacking, such as “spoofing” attacks, might “replace an autonomous system’s real incoming data feed with a fake feed containing incorrect or false data”; (3) “input” attacks could “change a sensed object or data source in such a way as to generate a failure,” e.g., enemy forces “may seek to confound an autonomous system by disguising a target”; and (4) “adversarial examples” or “evasion,” which are attacks that “involve adding subtle artefacts to an input datum that result in catastrophic interpretation error by the machine.”Footnote 62 In such situations, the issue of criminal responsibility for programmers will depend on the modalities of the adversarial interference, whether it could have been foreseen, and whether the AW could have been protected from foreseeable types of attacks.

Similar to the AV context, failures of communication technology, caused by signal jamming or by failures of communication systems between a human operator and the AI system or among connected AI systems, may lead to incidents that could not be imputed to a programmer.

Finally, conflict environments are likely to drift constantly as “[g]roups engage in unpredictable behaviour to deceive or surprise the adversary and continually adjust (and sometimes radically overhaul) their tactics and strategies to gain an edge.”Footnote 63 The continuously changing and unforeseeable behavior and tactics of opposing belligerents can produce “data drift,” whereby changes that are difficult to foresee cause a weapon system to fail without that failure being imputable to a programmer.Footnote 64
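The following sketch illustrates, under purely hypothetical assumptions, one simple way a development team might monitor for such drift by comparing the distribution of a feature at training time with the distribution observed in the field. The feature, the threshold, and the use of a two-sample Kolmogorov–Smirnov test are illustrative choices only; nothing suggests that AW or AV developers actually use this procedure.

```python
# Illustrative sketch only: a toy "data drift" monitor comparing a feature's training
# distribution with its distribution in deployment. All values are synthetic and the
# test and threshold are assumptions chosen for demonstration.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)   # distribution seen in training
deployed_feature = rng.normal(loc=0.7, scale=1.3, size=5000)   # adversary tactics have shifted

statistic, p_value = ks_2samp(training_feature, deployed_feature)
if p_value < 0.01:
    print(f"drift detected (KS statistic={statistic:.3f}); training assumptions may no longer hold")
else:
    print("no significant drift detected")
```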

IV AV-Related Crimes on the Road and AW-Related War Crimes on the Battlefield

The following section will distil the legal ingredients of crimes against persons resulting from failures in the use of AVs and AWs. The key question is whether the actus reus, i.e., the prohibited conduct, including its resulting harm, could ever be performed by programmers of AVs and AWs. The analysis suggests that, save for war crimes under the Rome Statute, which prohibit conduct, the crimes under examination on the road and the battlefield are currently formulated as result crimes, in that they require the causation of harm such as death or injuries. In relation to crimes of conduct, the central question is whether programmers controlled the behavior of an AV or an AW, e.g., the AW’s launching of an indiscriminate attack against civilians. In relation to crimes of result, the central question is whether programmers exercise causal control over a chain of events leading to a prohibited result, e.g., death, that must occur in addition to the prohibited conduct. Do programmers exercise causal control over the behavior and the effects of AVs and AWs? Establishing causation for crimes of conduct differs from doing so for crimes of result, in light of the causal gap that characterizes the latter.Footnote 65 However, this difference is irrelevant in the context of crimes committed with the intermediation of AI since, be they crimes of conduct or of result, they always present a causal gap between a programmer’s conduct and the unlawful behavior or effect of an AV or AW. Thus, the issue is whether a causal nexus exists between a programmer’s conduct and either the behavior (in the case of crimes of conduct) or the effects (in the case of crimes of result) of AVs and AWs. Sections IV.A and IV.B will describe the actus reus of AV- and AW-related crimes, while Section IV.C will turn to the question of causation. While the central question of this chapter concerns the actus reus, at the end of this section I will also make some remarks on mens rea and the relevance of risk-taking and negligence in this debate.

IV.A Actus Reus in AV-Related Crimes

This section focuses on the domestic criminal offenses of negligent homicide and manslaughter in order to assess whether the actus reus of AV-related crimes could be performed by a programmer. It does not address traffic and road violations generally,Footnote 66 nor the specific offense of vehicular homicide.Footnote 67

Given the increasing use of AVs and pending AV-related criminal cases in the United States,Footnote 68 it seems appropriate to take the Model Penal Code (MPC) as an example of common law legislation.Footnote 69 According to the MPC, the actus reus of manslaughter consists of “killing for which the person is reckless about causing death.”Footnote 70 Negligent homicide concerns instances where a “person is not aware of a substantial risk that a death will result from his or her conduct, but should have been aware of such a risk.”Footnote 71

While national criminal law frameworks differ considerably, there are similarities regarding causation which are relevant here. Taking Germany as a representative example of civil law traditions, the Strafgesetzbuch (German Criminal Code) (StGB) distinguishes two forms of intentional homicide: murderFootnote 72 and manslaughter.Footnote 73 Willingly taking the risk of causing death is sufficient for manslaughter.Footnote 74 Negligent homicide is proscribed separately,Footnote 75 and the actus reus consists of causing the death of a person through negligence.Footnote 76

These are crimes of result, where the harm consists of the death of a person. While programmer conduct may be remote with regard to AV incidents, some decisions taken by AV programmers at an early stage of development could decisively impact the navigation behavior of an AV that results in a death. In other words, it is conceivable that a faulty algorithm designed by a programmer could cause a fatal road accident. The question then becomes what threshold of causal control programmers exercise over an AV’s unlawful navigation behavior and its unlawful effects, such as a human death.

IV.B Actus Reus in AW-Related War Crimes

This section addresses AW-related war crimes and whether programmers could perform the required actus reus. Since the actus reus would most likely stem from an AW’s failure to distinguish between civilian and military targets, the war crime of indiscriminate attacks, which criminalizes violations of the aforementioned IHL rule of distinction,Footnote 77 takes on central importance.Footnote 78 The war crime of indiscriminate attacks refers inter alia to an attack that strikes military objectives and civilians or civilian objects without distinction. This can occur as a result of the use of weapons that are incapable of being directed at a specific military objective or accurately distinguishing between civilians and civilian objects and military objectives; these weapons are known as inherently indiscriminate weapons.Footnote 79

While this war crime is not specifically codified in either the Rome Statute or AP I, it has been subsumedFootnote 80 under the war crime of directing attacks against civilians. Under AP I, the actus reus of the crime is defined in terms of causing death or injury.Footnote 81 In crimes of result involving AWs, a causal nexus between the effects resulting from the deployment of an AW and a programmer’s conduct must be established. Under the Rome Statute, the war crime is formulated as a conduct crime, proscribing the actus reus as the “directing of an attack” against civilians.Footnote 82 Here, a causal nexus must be established between the AW’s unlawful behavior and/or the attack and the programmer’s conduct.Footnote 83 Under both frameworks, the question is whether programmers exercised causal control over the behavior and/or effects of an AW, e.g., the attack or the resulting death.

A final issue relates to the required nexus with an armed conflict. The Rome Statute requires that the conduct must take place “in the context of and was associated with” an armed conflict.Footnote 84 However, while undoubtedly there is a temporal and physical distance between programmer conduct and the armed conflict, it is conceivable that programmers may program AW software or upgrade it during an armed conflict. In certain instances, it could be argued that programmer control continues even after the completion of the act of programming, when the effects of their decisions materialize in the behavior and/or effects of AWs in armed conflict. Programmers can be said to exercise a form of control over the behavior and/or effects of AWs that begins with the act of programming and continues thereafter.

IV.C The Causal Nexus between Programming and AV- and AW-Related Crimes

A crucial aspect of programmer criminal responsibility is the causal control they exercise over the behavior and/or effects of AVs and AWs. The assessment of causation refers to the conditions under which an AV’s or AW’s unlawful behavior and/or effects should be deemed the result of programmer conduct for purposes of holding the programmer criminally responsible.

Causality is a complex topic. In common law and civil law countries, several tests to establish causation have been put forward. Due to difficulties in establishing a uniform test for causation, it has been argued that determining the conditions for causation is “ultimately a matter of legal policy.”Footnote 85 But this does not render the formulation of causality tests in the relevant criminal provisions completely beyond reach. While a comprehensive analysis of these theories is beyond the scope of this chapter, for the purposes of establishing when programmers exercise causal control, some theories are better aligned than others with the policy objectives pursued by the suppression of AV- and AW-related crimes.

First, in common law and civil law countries, the “but-for”/conditio sine qua non test is the dominant test for establishing physical causation, and it captures a relationship of physical cause and effect.Footnote 86 In the language of MPC §2.03(1)(a), the conduct must be “an antecedent but for which the result in question would not have occurred.” The “but for” test works satisfactorily in cases of straightforward cause and effect, e.g., pointing a loaded gun toward the chest of another person and pulling the trigger. However, AV- and AW-related crimes are characterized by a temporal and physical gap between programmer conduct and the behavior and effects of AVs and AWs. They involve complex interactions between AVs or AWs and humans, including programmers, data providers and labelers, users, etc. AI itself is also a factor that could intervene in the causal chain. The problem of causation in these cases must thus be framed in a way that reflects the relevance of intervening and superseding causal forces, which may break the causal nexus between a programmer’s conduct and an AV- or AW-related crime.
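The counterfactual structure of the test, and its potential over-inclusiveness, can be shown in a deliberately artificial toy model. The sketch below is not a claim that legal causation can be computed; it merely represents the harm as depending on several antecedent conditions (defective perception code, a deployment order, a spoofed data feed, all invented for illustration) and asks, for each, whether the harm would have occurred without it.

```python
# Illustrative sketch only: a toy abstraction of the "but-for" (conditio sine qua non)
# test. The causal model and condition names are invented; the point is solely to show
# the test's counterfactual structure and why it is over-inclusive when many necessary
# conditions precede the harm.

def harm_occurs(conditions: set) -> bool:
    """Toy causal model: harm results only if all of these antecedents are present."""
    return {"defective_perception_code", "deployment_order", "spoofed_data_feed"} <= conditions

def is_but_for_cause(candidate: str, actual_conditions: set) -> bool:
    """The candidate is a but-for cause if removing it would have prevented the harm."""
    return harm_occurs(actual_conditions) and not harm_occurs(actual_conditions - {candidate})

actual = {"defective_perception_code", "deployment_order", "spoofed_data_feed"}
for condition in sorted(actual):
    print(condition, "-> but-for cause:", is_but_for_cause(condition, actual))
```

Because every antecedent passes the test, further filters, such as proximate cause, foreseeability, or risk-based theories, are needed to single out the legally relevant causes, which is the problem the theories discussed next address.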

Both civil law and common law systems have adopted several theories to overcome the shortcomingsFootnote 87 and correct the potential over-inclusivenessFootnote 88 of the “but-for” test, in complex cases involving numerous necessary conditions. Some of these theories include elements of foreseeability in the causality test.

The MPC adopts the “proximate cause test,” which “differentiates among the many possible ‘but for’ causal forces, identifying some as ‘necessary conditions’ – necessary for the result to occur but not its direct ‘cause’ – and recognising others as the ‘direct’ or ‘proximate’ cause of the result.”Footnote 89 The relationship is “direct” when the result is foreseeable and as such “this theory introduces an element of culpability into the law of causation.”Footnote 90

German theories about adequacy assert that whether a certain factor can be considered a cause of a certain effect depends on “whether conditions of that type do, generally, in the light of experience, produce effects of that nature.”Footnote 91 These theories, which are not applied in their pure form in criminal law, include assessments that resemble a culpability assessment. They bring elements of foreseeability and culpability into the causality test, and in particular, a probability and possibility judgment regarding the actions of the accused.Footnote 92 However, these theories leave unresolved the different knowledge perspectives, i.e., objective, subjective, or mixed, on which the foreseeability assessment is to be based.Footnote 93

Other causation theories include an element of understandability, awareness, or foreseeability of risks. In the MPC, the “harm-within-the risk” theory considers that causation in reckless and negligent crimes is in principle established when the result was within the “risk of which the actor is aware or … of which he should be aware.”Footnote 94 In German criminal law, some theories describe causation in terms of the creation or aggravation of risk and limit causation to the unlawful risks that the violated criminal law provision intended to prevent.Footnote 95

In response to the drawbacks of these theories, the teleological theory of causation holds that in all cases involving a so-called intervening independent causal force, the criterion should be whether the intervening causal force was “produced by ‘chance’ or was rather imputable to the criminal act in issue.”Footnote 96 Someone would be responsible for the result if their actions contributed in any manner to the intervening factor. What matters is the accused’s control over the criminal conduct and whether the intervening factor was connected in a but-for sense to their criminal act,Footnote 97 thus falling within their control.

In ICL, a conceptualization of causation that goes beyond the physical relation between acts and effects is more embryonic. However, it has been suggested that theories drawn from national criminal law systems, such as risk-taking and linking causation to culpability, and thus to foreseeability, should inform a theory of causation in ICL.Footnote 98 It has also been suggested that causality should entail an evaluation of the functional obligations of an actor and their area of operation in the economic sphere. According to this theory, causation is “connected to an individual’s control and scope of influence” and is limited to “dangers that he creates through his activity and has the power to avoid.”Footnote 99 As applied to international crimes, which have a collective dimension, these theories could usefully be employed in the context of AV and AW development, which is collective by nature and characterized by a distribution of responsibilities.

Programmers in some instances will cause harm through omission, notably by failing to avert a particular harmful risk when they are under a legal duty to prevent harmful events of that type (“commission by omission”).Footnote 100 In these cases, the establishment of causation will be hypothetical as there is no physical cause-effect relationship between an omission and the proscribed result.Footnote 101 Other instances concern whether negligence on the part of the programmers, e.g., a lack of instructions and warnings, has contributed to and caused the omission, constituting a failure to intervene on the part of the user. Such omissions amount to negligence, i.e., violations of positive duties of care,Footnote 102 and since negligence belongs to mens rea, they will be addressed in the following section.

IV.D Criminal Negligence: Programming AVs and AWs

In light of the integration of culpability assessments in causation tests, an assessment of programmers’ criminal responsibility would be incomplete without addressing mens rea issues. In relation to mens rea, while intentionally and knowingly programming an AV or AW to commit crimes falls squarely under these prohibitions, in both contexts the most likely and most problematic issue is the unintended commission of these crimes, i.e., cases in which the programmer did not design the AI system to commit an offense, but harm nevertheless arises during its use.Footnote 103 In such situations, programmers had no intention to commit an offense, but still might incur criminal liability for risks that they should have known and foreseen. To define the scope of criminal responsibility for unintended harm, it is crucial to determine which risks can be known and foreseen by an AV or AW programmer.

There are important differences in the mens rea requirements of AV- and AW-related crimes. Under domestic criminal law, the standards of recklessness and negligence apply to the AV-related crimes of manslaughter and negligent homicide. “[A] person acts ‘recklessly’ with regard to a result if he or she consciously disregards a substantial risk that his or her conduct will cause the result; he or she acts only ‘negligently’ if he or she is unaware of the substantial risk but should have perceived it.”Footnote 104 The MPC provides that “criminal homicide constitutes manslaughter when it is committed recklessly.”Footnote 105 In the StGB, dolus eventualis, i.e., willingly taking the risk of causing death, would encompass situations covered by recklessness and is sufficient for manslaughter.Footnote 106 For negligent homicide,Footnote 107 one of the prerequisites is that the perpetrator can foresee the risk to a protected interest.Footnote 108

Risk-based mentes reae are subject to more dispute in ICL. The International Tribunal for the former Yugoslavia accepted that recklessness could be a sufficient mens rea for the war crime of indiscriminate attacks under Article 85(3)(a) of AP I.Footnote 109 However, whether recklessness and dolus eventualis could be sufficient to ascribe criminal responsibility for war crimes within the framework of the Rome Statute remains debated.Footnote 110

Unlike incidents with AVs, incidents in war resulting from a programmer’s negligence cannot give rise to the programmer’s criminal responsibility. Where applicable, recklessness and dolus eventualis, which entail understanding and foreseeing the risks of developing inherently indiscriminate AWs, become crucial for attributing responsibility to programmers in scenarios where programmers foresaw and took some risks. Excluding these mental elements would amount to ruling out the criminal responsibility of programmers in the most likely instances of war crimes.

V Developing an International Criminal Law-Infused Notion of Meaningful Human Control over AVs and AWs that Incorporates Mens Rea and Causation Requirements

This section considers a notion of MHC applicable to AVs and AWs that is based on criminal law and that could function as a criminal responsibility “anchor” or “attractor.”Footnote 111 This is not the first attempt to develop a conception of control applicable to both AVs and AWs. Studies on MHC over AWs and the moral responsibility of AWsFootnote 112 have been extended to AVs.Footnote 113 On this view, MHC should include an element of traceability, requiring that “one human agent in the design history or use context involved in designing, programming, operating and deploying the autonomous system … understands or is in the position to understand the possible effects in the world of the use of this system.”Footnote 114 Traceability thus requires that someone involved in the design or use of the system understands the capabilities of the AI system and its effects.

In line with these studies, it is argued here that programmers may decide and control how both traffic law and IHL are embedded in the respective algorithms, how AI systems see and move, and how they react to changes in the environment. McFarland and McCormack affirm that programmers may exercise control not only over an abstract range of behavior, but also in relation to specific behavior and effects of AWs.Footnote 115 Against this background, this chapter contends that programmer control begins at the initial stage of the AI development process and continues into the use phase, extending to the behavior and effects of AVs and AWs.

Assuming programmer control over certain AV- and AW-related unlawful behavior and effects, how can MHC be conceptualized so as to ensure that criminal responsibility is traced back to programmers when warranted? The foregoing discussion of causality in the context of AV- and AW-related crimes suggests that theories of causation that go beyond deterministic cause-and-effect assessments are particularly amenable to developing a theory of MHC that could ensure responsibility. These theories either link causation to mens rea standards or describe it in terms of the aggravation of risk. In either case, the ability to understand the capabilities of AI systems and their effects, and foreseeability of risks, are required. Considering these theories of causation in view of recent studies on MHC over AVs and AWs, the MHC’s requirement of traceability arguably translates into the requirement of foreseeability of risks.Footnote 116 Because of the distribution of responsibilities in the context of AV and AW programming, causation theories introducing the notion of function-related risks are needed to limit programmers’ criminal responsibility to those risks within their respective obligations and thus their sphere of influence and control. According to these theories, the risks that a programmer is obliged to prevent and that relate to their functional obligations, i.e., their function-related risks, could be considered causally imputable in principle.Footnote 117

VI Conclusion

AVs and AWs are complex systems. Their programming implies a distribution of responsibilities and obligations within tech companies, and between them and manufacturers, third parties, and users, which makes it difficult to identify who may be responsible for harm stemming from their use. Despite the temporal and spatial gap between the programming phase and crimes, the responsibility of programmers in the commission of crimes should not be dismissed. Indeed, crucial decisions on the behavior and effects of AVs and AWs are taken in the programming phase. While a more detailed case-by-case analysis is needed, this chapter has mapped out how programmers of AVs and AWs might be in control of certain AV- and AW-related risks and therefore criminally responsible for AV- and AW-related crimes.

This chapter has shown that the assessment of causation as a threshold for establishing whether an actus reus was committed may converge on the criteria of understandability and foreseeability of risks of unlawful behavior and/or effects of AVs and AWs. Those risks which fall within programmers’ functional obligations and sphere of influence can be considered under their control and imputable to them.

Following this analysis, a notion of MHC applicable to programmers of AVs and AWs based on requirements for the imputation of criminal responsibility can be developed. It may function as a responsibility anchor in so far as it helps trace responsibility back to the individuals who could understand and foresee the risk of a crime being committed with an AV or AW.

3 Trusting Robots Limiting Due Diligence Obligations in Robot-Assisted Surgery under Swiss Criminal Law

Janneke De Snaijer Footnote *
I Introduction

Surgeons have been using automated tools in the operating room for several decades. Even more robots will support surgeons in the future, and at some point, surgery may be completely delegated to robots. This level of delegation is currently fictional and robots remain mostly under the command of the human surgeon. But some robots are already making discrete decisions on their own, based on the combined functioning of programming and sensors, and in some situations, surgeons rely on a robot’s recommendation as the basis for their directions to the robot.

This chapter discusses the legal responsibility of human surgeons working with surgical robots under Swiss law, including robots that notify surgeons about a patient’s condition so the surgeon can take a particular action. Unlike in other jurisdictions, in Switzerland negligence and related duties of care are defined not only by civil law,Footnote 1 but by criminal law as well.Footnote 2 This chapter focuses on the surgeon’s individual criminal responsibility for negligence,Footnote 3 which is assessed under the general concept of Article 12, paragraph 3 of the Criminal Code of Switzerland (“SCC”).Footnote 4 Under the SCC, the surgeon is required to carry out surgery in accordance with state-of-the-art due diligence.

In the general context of task sharing among humans, which includes surgeons working in a team, a principle of trust (Vertrauensgrundsatz) applies. The principle of trust allows team members to have a legitimate expectation that each participant will act with due diligence. The principle of trust also means that participants are for the most part only responsible for their own actions, which limits their obligations of due diligence. However, when the participant is a robot, even though the surgeon delegates tasks to the robot and relies on it in a manner similar to human participants, the principle of trust does not apply and the surgeon is responsible for what the robot does. Neither statutes nor case law clearly states whether the traditional principle of trust applies to robots. However, at this point, the principle has only been applied to humans, and it is safe to assume that it does not apply to robots, mainly because a robot is currently not capable of criminal responsibility under Swiss law.Footnote 5 Application of the principle of trust to robots together with a corresponding limitation on the surgeon’s liability would therefore create a responsibility gap.Footnote 6

In view of the important role robots play in a surgical team, one would expect governing regulation to apply traditional principles to the division of work between human surgeons and robots, but the use of surgical robots has not led to any relevant changes, or to the introduction of special care regulations that either limit the surgeon’s responsibility or allocate it among other actors. This chapter explores an approach to limiting the surgeon’s criminal liability when tasks are delegated to robots. As the SCC does not provide guidance regarding the duties of care when a robot is used, other law must be consulted. The chapter argues that the principle of trust (Vertrauensgrundsatz) should be applied to limit the due diligence expected from a surgeon interacting with a robot. Incorporating and handling robots is becoming integral to effective surgery, owing to the specialization that arises from the division of labor between humans and robots and to the growing availability of more precise and faster medical-technical solutions for patients. Surgeons must rely to some degree on the expertise of the robots they use, and therefore surgeons who make use of promising robots in their operating room should be subject to a valid and practical approach to due diligence which does not unreasonably expand their liability. While the chapter addresses the need to limit the surgeon’s liability when working with robots, chapter length does not allow for analysis of related issues such as the connection to permissible risk, i.e., the idea that once the surgical robot is established in society, the possible risks are accepted because the robot’s benefits outweigh them. The chapter does not address other related issues, such as situations in which a hospital instructs surgeons to use robots, issues arising from the patient’s perspective, or the liability of the manufacturer, except for situations where the robot does not perform as it should or simply fails to function.Footnote 7

The chapter proceeds by articulating the relevant concept of a robot (Section II). A discussion of due diligence (Section III) explains the duties of care and the principle of trust when a surgeon works without a robot (Section III.B), which is followed by a discussion of duties of care when a surgeon works with a robot (Section III.C). The chapter addresses in detail the due diligence expected when a surgical robot asks the human to take a certain action (Section III.C.3). Moving to a potential approach that restricts a surgeon’s criminal liability to appropriate limits, the chapter explores the principle of trust as it could apply to robots (Section III.D), and suggests an approach that applies and calibrates the principle of trust based on whether the robot has been certified (Section III.E). The chapter applies these legal principles to the first stage of surgical robots, which are still dependent on commands from humans to take action and do not contain complete self-learning components. The conclusion (Section IV) looks to the future and shares some brief suggestions about how to deal with likely developments in autonomous surgical robots.

II Terminology: Robots in Surgery

A standardized definition of a robot does not exist.Footnote 8 There is some agreement that a robot is a mechanical object.Footnote 9 In 1920, Karel Čapek derived the term from “robota” (servitude, slave labor)Footnote 10 in his story about artificial slaves who take over humankind.Footnote 11 Thereafter, the term was used in countless other works.Footnote 12 The modern use of robot includes the requirement that a robot has sensors to “sense,” processors to “think,” and actuating elements to “act.”Footnote 13 Under this definition, pure software, which does not interact physically with the world, does not count as a robot.Footnote 14 In general, robots are partly intelligent, adaptive machines that extend the human ability to act in the world.Footnote 15

Traditionally, robots are divided into industrial and service robots. A distinction is also made between professional service robots such as restaurant robots, and service robots for private use such as robot vacuums.Footnote 16 The robots considered in this chapter come under the category of service robots, which primarily provide services for humans as opposed to industrial processes. Among other things, professional service robots can interact with both unskilled and skilled personnel, as in the case of a service robot at a restaurant, or with exclusively skilled personnel, as with a surgeon in an operating room.

In discussions of robots and legal responsibility, the terms “agents” or “autonomous systems”Footnote 17 are increasingly used almost interchangeably with the term robot. To avoid definitional problems, only the term “robot” will be used in the chapter. However, the chapter does distinguish between autonomous and automated robots, and only addresses automated robots over which the surgeon exercises some control, not fully autonomous robots. Fully autonomous robots would have significantly increased autonomy and their own decision-making ability, whereas automated robots primarily execute predetermined movement patterns.Footnote 18 Fully autonomous robots that do not require human direction are not covered in this chapter because innovations in the field of surgery have not yet reached this stage,Footnote 19 although the conclusion will share some initial observations regarding how to approach the liability issues raised by autonomous robots.

III Legal Principles Regarding Due Diligence and Cooperation

Generally applicable principles of law regarding due diligence and cooperation are found in Swiss criminal law. Humans must act with due diligence, and if they do not, they can be liable for negligence. According to Swiss criminal law, any person is liable for lack of care if he or she fails to exercise the duty of care required by the circumstances and commensurate with personal capabilities.Footnote 20 But while it is a ubiquitous principle that humans bear responsibility for their own behavior, we normally do not bear responsibility for someone else’s conduct. We must consider the consequences of our own behavior and prevent harm to others, but we are not our brother’s or sister’s keeper. The scope of liability can change if we share responsibilities, such as risk-prone work, with others.Footnote 21 And whether we are acting alone or in cooperation with others, we must be careful, depending on the circumstances and our personal capabilities.

III.A Basic Rules with Examples Regarding the Due Diligence of Surgeons

Unlike other jurisdictions, Swiss law explicitly defines the basic rule determining criminal negligence. In Article 12, paragraph 3 of the SCC, a “person commits a felony or misdemeanour through negligence if he fails to consider or disregards the consequences of his conduct due to a culpable lack of care. A lack of care is culpable if the person fails to exercise the care that is incumbent on him in the circumstances and commensurate with his personal capabilities.”Footnote 22

Determining a person’s precise due diligence obligations can be a complex endeavor. In Swiss criminal law, a myriad of due diligence rules underpin negligence and are used to specify the relevant obligations, including legal norms, private regulations, and a catch-all clause dubbed the risk principle (Gefahrensatz).Footnote 23 The risk principle establishes that everyone has to behave in a reasonable way that minimizes threats to the relevant legal interest as far as possible.Footnote 24 For example, a surgeon must take all reasonably possible precautions to avoid increasing a pre-existing danger to the patient.Footnote 25

To apply the risk principle, the maximum permissible risk must be determined.Footnote 26 For this purpose, the general risk range must first be determined, and this range is limited by human skill;Footnote 27 no one can be reproached for not being able to prevent the risk in spite of doing everything humanly possible (ultra posse nemo tenetur).Footnote 28 The risk range is therefore limited by society’s understanding of the permissible risk, and by the abilities possessed by a capable, psychologically and physically normal person; no superhuman performance is expected.Footnote 29 However, if a person’s ability falls short of what a situation requires, that person should refrain from the activity.Footnote 30 In the context of medical personnel, a surgeon who is not familiar with the use of robots may not perform such an operation.

As the law does not list the exact duties of care of a surgeon, it is left to the courts to specify in more detail the content and scope of the medical duties of care based on the relevant statutes and regulations. In that respect, it is not of significance whether the treatment is governed by public or private law.Footnote 31

III.B Due Diligence Standards Specific to Surgeons

Swiss criminal law is applied in the medical field, and every healthcare professional who hurts a patient intentionally or with criminal negligence can be liable.Footnote 32 Surgery is an activity that is, in principle, hazardous, and a surgeon may be prosecuted if he or she, consciously or unconsciously,Footnote 33 neglects a duty of care.Footnote 34 According to the Swiss Federal Supreme Court, the duty of care when applying conventional methods of treatment is based on “the circumstances of the individual case, i.e., the type of intervention or treatment, the associated risks, the discretionary scope and time available to the physician in the individual case, as well as his objectively expected education and ability to perform.”Footnote 35

This reference by the Swiss Federal Supreme Court to the physician’s educational background and ability to perform does not indicate that the standard is entirely subjective. Rather, the physician should be assessed according to the knowledge and skills assumed to be available to representatives of his specialty at the time the measures are taken.Footnote 36 This objective approach creates an ongoing obligation for the further education of surgeons.

Part of a surgeon’s obligation is that they owe the patient a regime of treatment that complies with the generally recognized state of medical art (lex artis),Footnote 37 determined at the time of treatment. Lex artis is the guiding principle for establishing due diligence in an individual case in Swiss criminal law.Footnote 38 It encompasses the entire medical procedure, including the examination, diagnosis, therapeutic decision, and implementation of the treatment, and in the case of surgeons it extends from preparing the operation to aftercare.Footnote 39 The standard is therefore not what is individually possible and reasonable, but the care required according to medical indications and best practice.Footnote 40 A failure to meet this medical standard leads to a breach of duty of care. Legal regulation, such as the standards of the Medical Professions Act (“MedBG”),Footnote 41 especially Article 40 lit. a, may be used to determine the respective state of medical art. Together, the regulatory provisions provide for the careful and conscientious practice of the medical profession.Footnote 42

Doctors must also observe and not exceed the limits of their own competence. A surgeon must recognize when they are not able to perform a surgery and need to consult a specialist. This obligation includes the duty to cooperate with other medical personnel, because performing an operation without the required expertise is a breach of duty of care in itself.Footnote 43 As with other areas of medical care, the surgeon’s obligations do not exceed the human ability to foresee events and to influence them in a constructive way.Footnote 44

If there are no legal standards for an area of medical practice, courts may refer to guidelines from medical organizations.Footnote 45 In practice, courts usually refer to the private guidelines of the Swiss Academy of Medical SciencesFootnote 46 and the Code of Conduct of the Swiss Medical Association (“FMH”).Footnote 47 Additionally, general duties derived from court decisions, such as “practising the art of medicine according to recognized principles of medical science and humanity,” can be used in a secondary way to articulate a doctor’s specific due diligence obligation.Footnote 48

III.C Due Diligence of a Surgeon in Robot-Assisted Surgery

New technologies have long been making appearances in operating rooms. Arthrobot assisted for the first time in 1983; responding to voice command, the robot was able to immobilize patients by holding them steady during orthopedic surgery.Footnote 49 Arthrobots are still in use today.Footnote 50

The introduction of robots to surgery accomplishes two main aims: (1) they perform more accurate medical procedures; and (2) they enable minimally invasive surgeries, which in turn increases surgeon efficacy and patient comfort by providing a faster recovery. A doctor is, generally, not responsible for the dangers and risks that are inherent in every medical action and in the illness itself.Footnote 51 However, the surgeon’s obligation of due diligence applies when using a robot. The chapter argues that the precise standards of care should differ, depending on whether the surgeon has control of the robot’s actions or whether the robot reacts independently in the environment, and depending on the extent of the surgeon’s control, including the ability to intervene in a procedure.Footnote 52

The next section introduces and explains the functioning of several examples of surgical robots. These robots qualify as medical devices under Swiss law,Footnote 53 and as such are subject to statutes governing medical devices. Medical devices are defined as instruments, equipment, software, and other objects intended for medical use.Footnote 54 Users of medical devices must take all measures required by the state of the art in science and technology to ensure that the devices pose no additional risk. The lex artis for treatment incorporating robots under Swiss criminal law requires users to apply technical aids lege artis and operate them correctly. For example, when the robot is used again at a later time, its functionality and correct reprocessing must be checked.Footnote 55 A surgeon does not have to be a trained technician, but he or she must have knowledge of the technology used, similar to the way that a driver must “know” a car, but need not be a mechanic.

On its own, the concept of lex artis does not imply specific obligations, and the specific parameters of the obligations must be determined based on individual circumstances. According to Article 45, paragraph 1 of the Therapeutic Products Act (TPA), a medical device must not endanger the health of patients when used as intended. If a technical application becomes standard in the field, falling below or not complying with the standard (lex artis) is classified as a careless action.Footnote 56 Lack of knowledge of the technology, as well as a lack of control over a device during an operation, leads to liability for taking on a task one cannot properly handle (“Übernahmeverschulden”).Footnote 57

A final aspect of the surgeon’s obligations regarding surgical robots is that a patient must always be informedFootnote 58 about the robot before an operation, and the duty of documentationFootnote 59 must be complied with. Although the precise due diligence obligations of surgeons always depend on the circumstances of individual cases, the typical duties of care regarding two different kinds of robots that incorporate elements of remote control, and the situation in which a robot provides a warning to the surgeon, are outlined below.

III.C.1 Remote-Controlled Robots

The kind of medical robot prevalent today is the remote-controlled robot, also referred to in the medical literature as a telemanipulation system. These robots are controlled completely and remotely by the individual surgeon,Footnote 60 usually from a short distance away via the use of joysticks. An example of a remote-controlled robot, DaVinci, was developed by the company Intuitive, and it is primarily used in the fields of urology and gynecology. DaVinci does not decide what maneuver to carry out; it is completely controlled by the surgeon, who works from an ergonomic 3D console using joysticks and foot pedals.Footnote 61 The surgeon’s commands are thus translated directly into actions by the robot. In this case, the robot makes it possible for the surgeon to make smaller incisions and achieve greater precision.
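The translation of the surgeon's hand movements into instrument movements can be pictured with a minimal sketch. The scaling factor and smoothing weight below are invented for illustration and do not describe the actual DaVinci control software; the sketch only conveys the general idea that the robot scales down and steadies the surgeon's input rather than deciding anything itself.

```python
# Illustrative sketch only: the general idea behind a telemanipulation system, in which
# the surgeon's hand motion is scaled down and smoothed before being applied to the
# instrument. The scaling factor and smoothing weight are assumptions for illustration.

def instrument_motion(hand_delta_mm: float,
                      previous_output_mm: float,
                      scale: float = 0.2,    # assumed 5:1 motion scaling
                      alpha: float = 0.3) -> float:
    """Translate a hand movement into a smaller, steadier instrument movement."""
    scaled = hand_delta_mm * scale
    # Exponential smoothing: the output moves only partway toward the new scaled input,
    # damping high-frequency tremor in the surgeon's hand.
    return alpha * scaled + (1 - alpha) * previous_output_mm

output = 0.0
for hand_move in [5.0, 5.2, 4.8, 5.1]:   # millimetres of hand travel, with slight tremor
    output = instrument_motion(hand_move, output)
    print(f"hand moved {hand_move} mm -> instrument moves {output:.2f} mm")
```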

What is the due diligence obligation of a surgeon making use of remote-controlled robots? Remote-controlled robots such as the DaVinci, which have no independence and are not capable of learning, do not present any ambiguities in the law. If injury occurs, the general Swiss criminal law of negligence holds the surgeon responsible. The robot’s arms are considered to be an extension of the hands of the surgeon, who remains in complete control of the operation.Footnote 62 In fact, the surgeon has always needed tools such as scalpels to operate. Today, thanks to technological progress, the tool has simply become more sophisticated. The surgeon’s duties of care remain the same with a remote-controlled robot as without, and can be stated as follows:Footnote 63 the surgeon must know how the robot works and be able to operate it. Imposing full liability on the surgeon is appropriate here, as the surgeon is in complete control of the robot.

According to Dr. med. Stephan Bauer, a surgeon needs training with DaVinci to work the robot, including at least 15 operations with the console control to become familiar with the robot, and 50 more to be able to operate it correctly.Footnote 64 The surgeon must also attend follow-up training and regular education in order to fulfil his or her duty of care. This degree of training is not currently specified in any medical organization’s guideline, but it is usually recommended by the manufacturer. The surgeon must also be able to instruct and supervise his or her surgical team sufficiently, and should not use a remote-controlled robot if there is insufficient knowledge of the type of operation it will be used in. Lastly, the surgeon must be able to complete the operation without the robot. These principles are basic aspects of any kind of medical due diligence in Switzerland, and they must apply in any kind of modern medicine such as the use of surgical robots.Footnote 65

A medical doctor who does not fulfil the duty of care and supervision for a remote-controlled robot can be held criminally responsible to the same degree as if he or she had used a scalpel directly on the patient’s body. If, however, injury occurs due to a malfunction of the robot, such as movements that do not comply with the surgeon’s instructions or a complete failure during the operation, the manufacturer,Footnote 66 or the person responsible for ensuring the regular maintenance of the device,Footnote 67 could be held criminally responsible.

III.C.2 Independent Surgical Robots

Some surgical robots in use today have dual capabilities. The responsible surgeon plans and programs the robot’s motion sequences in advance, and the robot carries out those steps without further instruction, but the robot can also perform certain tasks independently, based on the combined functioning of its sensors and its general programming. These robots are referred to here as “independent robots,” to indicate that their abilities are not limited to remote-controlled actions, and to distinguish them from fully autonomous robots capable of learning.

An example of an independent robot with dual capabilities is Smart Tissue Autonomous Robot (STAR),Footnote 68 which carries out pre-programmed instructions from the surgeon, but which can also automatically stitch soft tissue. Using force and motion sensors and cameras, it is able to react to unexpected tissue movements while functioning.Footnote 69 In 60 percent of cases, it does not require human assistance to do this stitching, while in the other cases, it only needs minimal amounts of input from the surgeon.Footnote 70 Although the stitching currently requires more time than the traditional technique by a human, it delivers better results.Footnote 71 Another example, Cold Ablation Robot-guided Laser Osteotome (CARLO),Footnote 72 is able to cut bones independently after receiving the surgeon’s instructions, but it can also use sensors to check whether the operation is going smoothly.Footnote 73 According to the manufacturer Advanced Osteotomy Tools (AOT),Footnote 74 CARLO is thus the “world’s first medical, tactile robot that can cut bone … with cold laser technology. The device allows the surgeon to perform bone operations with unprecedented precision, and in freely defined, curved and functional sectional configurations, which are not achievable with conventional instruments.”Footnote 75 In summary, CARLO’s lasers open up new possibilities in bone surgery.
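The division of control described here can be pictured with a minimal sketch of an execution loop in which the robot works through the surgeon's pre-programmed plan, checks its sensor readings against the planned tolerances, and hands control back to the surgeon when a reading falls outside them. The step names, tolerances, and simulated sensor values are assumptions introduced purely for illustration and do not describe STAR's or CARLO's actual software.

```python
# Illustrative sketch only: a toy execution loop for an "independent" surgical robot
# that follows a pre-programmed plan but pauses for the surgeon when sensor readings
# deviate from the plan. All names and values are invented for illustration.

from dataclasses import dataclass

@dataclass
class PlanStep:
    description: str
    expected_force_newton: float
    tolerance: float

def sensed_force(step: PlanStep) -> float:
    # Placeholder for a real sensor reading; here we simulate a deviation on one step.
    return step.expected_force_newton + (1.5 if "cut" in step.description else 0.0)

def execute_plan(plan):
    for step in plan:
        reading = sensed_force(step)
        if abs(reading - step.expected_force_newton) > step.tolerance:
            print(f"'{step.description}': reading {reading} N outside tolerance "
                  f"-> pausing, surgeon must confirm or take over")
            return
        print(f"'{step.description}': executed within tolerance")

execute_plan([
    PlanStep("position instrument", 0.5, 0.3),
    PlanStep("cut bone segment", 2.0, 1.0),
])
```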

Independent robots have the advantage of extreme precision, and they have no human deficits such as fatigue, stress, or distraction. Among other benefits, use of these robots decreases the duration of hospitalization, as well as the risks of infection and pain for the patient, because the incision and the injury to the tissue are minimal. When independent robots function as intended, surgery time is usually shortened, accidents due to hand trembling of the surgeon are reduced, and improved 3D visualization can be guaranteed.

As noted above, a surgeon is fully responsible for injury caused by a remote-controlled robot, in part because the surgeon has full control over the robot, which can be viewed as an extension of the surgeon’s own hands. What are a surgeon’s due diligence obligations when using an independent surgical robot? When independent surgical robots use their ability to make decisions on their own, should criminal responsibility be transferred to, or at least shared with, say, the manufacturer, particularly in cases where it was not possible for the surgeon to foresee the possible injury?

To the extent that independent robots are remote-controlled, i.e., simply carrying out the surgeon’s instructions, surgeons must continuously comply with the duties of care that apply when using a remote-controlled robot, including the accurate operation, control, and maintenance of the robot. A surgeon’s obligations regarding a careful operation while using an independent robot include, prior to the operation, the correct definition of the surgical plan and the programming of the robot. The surgeon must also write an operation protocol, disinfect the area, and make the first incision.Footnote 76 In addition, further duties arise under Swiss law because of the independence of the robot in carrying out the instructions the surgeon provided earlier, i.e., non-contemporaneous instructions.Footnote 77 During the operation, the surgeon must observe and monitor the movements of the robot so that he or she can intervene at any time upon realizing that harm may occur. According to the manufacturer AOT,Footnote 78 CARLO “allows the surgeon full control over this … osteotomy device at any time.” This standard of supervision is appropriate, because the surgeon’s supervision is needed to prevent injury, but as reviewed below, there are limits to what can be expected of a surgeon supervising a robot.

Even if a surgeon complies with the obligations to take precautions and carry out surveillance of the surgery while it is ongoing, a surgical robot may still make a mistake, e.g., cutting away healthy tissue. If it is established that a cautious and careful surgeon in the same position would not have been able to regain control of the robot and avoid the injury, the surgeon is deemed to have not violated his or her duty of care or acted in a criminally negligent manner.Footnote 79 If this occurs, no criminal charges will be brought against the surgeon. This standard is also appropriate, because proper supervision could not have prevented the injury.

III.C.3 Due Diligence after a Robot Warning

Per the principle lex artis, a surgeon using any kind of surgical robot is required to be knowledgeable regarding the functionality of the robot, including the emergency and safety functions, and the messages and warning functions.Footnote 80 A human surgeon using a robot for surgery cannot blindly trust the technology, and current law requires the surgeon to supervise and check whether or not their intervention is required and whether a change of plan is necessary. In the event that the robot fails, or issues a warning signal, the human must complete the surgery without the assistance of the robot. If the robot issues an alert, the human surgeon must always be capable of checking whether such notification is correct and of reacting adequately.Footnote 81 If the human surgeon is not capable of taking over, Swiss law imposes liability according to a sort of organizational negligence, the “Übernahmeverschulden,” which is the principle that a person who takes on a task he or she cannot handle properly acts negligently if harm results.Footnote 82 If an alert is ignored because the surgeon does not understand its significance or is not monitoring adequately, the surgeon also acts in a criminally negligent manner.

If the surgeon perceives the robot’s alert, but assesses that the robot’s advice is wrong, the surgeon may override it. A Swiss saying also applies to a surgeon who relies, though never completely, on a surgical robot: “Trust is good, verification is better.” In a clearly established cooperation between a surgeon and a robot, if the surgeon decides not to follow an alert from the robot, the surgeon needs a valid justification. For example, if CARLO notifies the surgeon that the bone cannot be cut in a certain way and the surgeon decides to proceed anyway, there would need to be a documented justification for his or her decision to overrule the robot.
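The workflow just described, following an alert or overriding it only with a recorded justification, can be pictured in a minimal sketch. The alert text, the log format, and the function names are assumptions made for illustration; no existing surgical system is claimed to implement such a routine.

```python
# Illustrative sketch only: a toy routine in which a robot alert must either be followed
# or overridden with a recorded justification, mirroring the documentation duty
# discussed above. All names and messages are invented for illustration.

from datetime import datetime, timezone

override_log = []

def handle_alert(alert: str, surgeon_follows: bool, justification: str = "") -> str:
    """Follow the robot's alert, or record a documented justification for overriding it."""
    if surgeon_follows:
        return f"Following robot alert: {alert}"
    if not justification:
        raise ValueError("Overriding a robot alert requires a documented justification.")
    override_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "alert": alert,
        "justification": justification,
    })
    return f"Overriding alert; justification recorded ({len(override_log)} entries in log)."

print(handle_alert("Bone density too low for planned cut", surgeon_follows=False,
                   justification="Pre-operative CT confirms sufficient density along the planned path."))
```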

While the current requirement of surgeon supervision of robots is justified generally, the law needs some adjustment. There must be a limit to a surgeon’s obligation to constantly monitor and question robot alerts, because otherwise a surgeon–robot cooperation would be unworkably inefficient. It would also result in unjustifiable legal obligations, based on a superhuman expectation that the surgeon monitors every second of the robot’s action. Surgeons are considered to be the “guarantors of supervision,”Footnote 83 which means that they are expected to control everything that the robot does. But when it is suitably established that robots perform more accurately than the average human medical professional in the field, the human must be allowed to step out of the process to some degree. For example, a surgeon would always need to go through the whole operating plan to be sure that robots such as STAR or CARLO are functioning properly. However, this obligation to double-check the robot should not apply to every minute movement the robot makes, as an obligation like this would be contrary to the purpose of innovative technology such as surgical robots, which were invented precisely for the purposes of greater accuracy and time-saving.

Additionally, when it is established that a surgical robot performs consistently without engaging in unacceptable mistakes, there will be a point at which it is wiser for the surgeon not to second-guess the robot and, in the case of a warning or alert, to follow its directions. In fact, ignoring the directions of a surgical robot that is part of the medical state of the art and acts correctly to an acceptable degree is likely to lead to liability for negligence, if not intent.

III.D Limiting the Surgeon’s Due Diligence Obligations regarding Surgical Robots through the Principle of Trust (Vertrauensgrundsatz)?

The surgeon’s obligation of supervision currently imposes excessive amounts of liability for the use of surgical robots, because, as discussed above, while surgeons rightfully have obligations to monitor the robot, they should not be required to check every movement the robot makes before it proceeds. The chapter argues that in the context of robot supervision, variations of the principle of trust (Vertrauensgrundsatz) should apply to limit the surgeon’s criminal liability.

When a surgeon works with human team members, the legitimate expectation is that individuals are responsible only for their own conduct and not that of others. The principle of trust is a foundational legal concept, one that enables effective cooperation by identifying spheres of responsibility and limiting the duties of due diligence to those spheres. It relieves individuals from having to evaluate the risk-taking of every individual in the team in every situation, and allows for the effective division of expertise and labor. The principle of trust was developed in the context of road traffic regulation, but it has widespread relevance and is applied today in medical law as well as other areas.Footnote 84

The principle of trust has limits and does not provide a carte blanche justifying all actions. If there are concrete indications that trust is unjustified, one must analyze and address that situation.Footnote 85 An example regarding surgical robots might be the DaVinciFootnote 86 robot. It has been in use for a long time, but if a skilled surgeon notices that the robot is defective, the surgeon must intervene and correct the defect.

The limitations of due diligence arising out of the principle of trust are well established in medical law, an environment where many participants work together based on a division of expertise and labor. In an operating room, several different kinds of specialists are normally at work, such as anesthesiologists, surgeons, and surgical nurses. The principle of trust in this environment limits responsibility to an individual’s own area of expertise and work.Footnote 87

One way of understanding the division of labor in surgery is that the primary area is the actual task, i.e., the operation, and the secondary area is supervisory, i.e., being alert to and addressing the misconduct of others.Footnote 88 Supervisory responsibility can be imposed horizontally (surgeon–surgeon) or vertically (surgeon–nurse), depending on the position a person occupies in the operating room. An example of the horizontal division of labor in the medical context would be if several doctors are assigned equal and joint control, with all having an obligation to coordinate the operation and monitor one another. If an error is detected, an intervention must take place, and if no error is detected, the competence of the other person can be trusted.Footnote 89 With vertical division of labor, a delegation to surgical staff such as assistants or nursing professionals requires supervisory activities such as selection, instruction, and monitoring. The important point here is that whether supervision is horizontal or vertical, the applicability of the principle of trust is not predicated upon constant control.Footnote 90

So far, the principle of trust has only been applied to the behavior of human beings. This chapter argues that the principle of trust should be applied to surgical robots, when lex artis requires it. First, as a general principle, delegation of certain activities must be permitted. Surgeons cannot perform an operation on their own, as this would, in itself, be a mistake in treatment.Footnote 91 Second, regarding robots in particular, given the degree to which surgical robots offer better surgical treatment, surgeons should use them as part of the expected standard of medical treatment.

But can robots, even certified robots, be equated with humans in terms of trustworthiness? Should a surgeon trust the functioning of a robot, and in what situations is trust warranted? The chapter argues that a variation of the principle of trust should be applied to a surgeon's use of surgical robots. Specifically, although the principle of trust does not currently apply to robots, an exception should be created for robots that have been certified as safe by a competent authority, referred to here as certification-based trust. Before and until the certification is awarded, the principle of mistrust (Misstrauensgrundsatz) should apply. This approach would also impose greater responsibility on the surgeon if, e.g., the robot used by the surgeon was still in a trial phase, or had a lower level of approval from the relevant authorities.Footnote 92

The concept of certification-based trust is supported by the principle of permissible risk. People die in the operating room because medical and surgical procedures are associated with a certain degree of risk to health or life, but in Switzerland this risk is included within the permissible risk.Footnote 93 There is no reason why this level of acceptable risk should not apply to surgical robots. According to Olaf Dössel:Footnote 94

[t]rust in technology is well founded if (a) the manufacturer has professionally designed, constructed and operated the machinery, (b) safety and reliability play an important role, (c) the inevitable long-term fatigue has been taken into account, and (d) the boundary conditions of the manufacturer remain within the framework established when the machinery was designed.

A certification-based trust approach is also consistent with other current practices, e.g., cooperating with newcomers in a field always requires a higher duty of care. When the reliability and safety of surgical robots become sufficiently established in practice, the principle of trust should then be applied to establish the surgeon's due diligence obligations within the correct parameters.

III.E Certified for Trust

This chapter argues that surgeons working with surgical robots can develop a legitimate expectation of trust consistent with principles of due diligence if the robot they use is certified. This approach to surgeon liability places increased importance on the process of medical device certification, which is discussed further here.

Certification of medical devices is a well-developed area. In addition to the TPAFootnote 95 and the Medical Devices Ordinance,Footnote 96 other standards apply, including Swiss laws and ordinances, international treaties, European directives, and other international requirements.Footnote 97 Together, these instruments define the safety standards for the production and distribution of medical devices.Footnote 98

Swiss law requires that manufacturers keep up with the current state of scientific and technical knowledge, and comply with applicable standards when distributing the robot.Footnote 99 Manufacturers of surgical robots must successfully complete a conformity assessment procedure in Switzerland.

A robot with a CE-certification can be placed on the market in Switzerland and throughout the European Union.Footnote 100 A CE-certification mark means that a product has been “assessed by the manufacturer and deemed to meet EU safety, health and environmental protection requirements.”Footnote 101 For the robot to be used in an operating room in Switzerland, a CE-certificationFootnote 102 must be issued by an independent certification body.Footnote 103 After introducing the robot to the market, the manufacturer remains obliged to check its product.Footnote 104

This chapter argues that a surgeon's due diligence obligations when using a surgical robot should be limited by a principle of trust, and that the principle should apply when the robot is certified. A certification-based trust approach is consistent with Dössel's suggestion that trust in technology is well founded if, inter alia, the manufacturer has professionally designed, constructed, and operated the machinery.Footnote 105 It is not currently an accepted point of law that CE-certification is a sufficient basis for the user to trust the robot and avoid criminal responsibility, but the chapter suggests that, as a detailed and well-established standard, the CE-certification is an example of a certification that could form the basis for applying the principle of trust.

If the principle of certification-based trust is adopted, the surgeon would still retain other due diligence obligations, including the duty to inform patients about the risks involved in a robot’s use.Footnote 106 This particular duty will likely become increasingly important over time, as the performance range of surgical robots increases.

IV Conclusion

Today, lex artis requires surgeons to ensure the performance of the robot assistant and comply with its safety functions. The human surgeon must maintain the robot's functionality, monitor it during an operation, and be ready to take over if needed. Requiring surgeons to supervise the robots they use is a sound position, but surgeons should not be expected to monitor the robot's every micro-movement, as that would interfere with the functioning of surgical robots and the benefits to patients. However, under current Swiss law, the surgeon is liable for all possible injury, unless the robot's movements do not comply with the surgeon's instructions or there is a complete failure of the robot during the operation.

Surgeons working with surgical robots are therefore accountable for robotic action to an unreasonable degree, even though the robot is used to enhance the quality of medical services. Thus, a strange picture emerges in Swiss criminal law. In a field where robotics drive inventions that promise to make surgery safer, surgeons who use robots run a high risk of criminal liability if the robot inflicts injury. Conversely, if the surgeon does not rely on new technology and performs alone an operation that could generally be performed better and more safely by a robot, the surgeon could also be liable. This contradictory state of affairs requires regulatory reform, with a likely candidate being the application of certification-based trust, which would confine the surgeon's liability within appropriate limits.

This chapter has addressed issues raised by the robots being used today in operating rooms, including remote-controlled and independent surgical robots. The chapter has not addressed more advanced, self-learning robots. Given that the law already requires reform regarding today's robots, even larger legal issues will be raised when it becomes necessary to determine who is responsible in the event of injury by autonomous robots,Footnote 107 those capable of learning and making decisions. In this context, it will be more difficult to determine whether a malfunction was due to the original programming, subsequent robot "training,"Footnote 108 or other environmental factors.Footnote 109 Surgeons may also find that robots capable of learning can act in unpredictable ways, making harm unavoidable even with surgeon supervision. In the case of unpredictable robot action, a surgeon should arguably be able to rely on the technology and avoid criminal negligence, provided the robot has a CE-certification. Ever-increasing amounts of due diligence, such as constant monitoring, are not desirable with today's or tomorrow's robots, because the robot is supposed to relieve the surgeon's workload and should be considered competent to do so if it is certified.

4 Forms of Robot Liability Criminal Robots and Corporate Criminal Responsibility

Thomas Weigend
I The Responsibility Gap

The use of artificial intelligence (AI) makes our lives easier in many ways. Search engines, driver’s assistance systems in cars, and robots that clean the house on their own are just three examples of devices that we have become reliant on, and there will undoubtedly be many more variants of AI accompanying us in our daily lives in the near future. Yet, these normally benevolent AI-driven devices can suddenly turn into dangerous instruments: self-driving cars may cause fatal accidents, navigation software may mislead human drivers and land them in dangerous situations, and a household robot may leave the home on its own and create risks for pedestrians and drivers on the street. One cannot help but agree with the pessimistic prediction that “[a]s robotics and artificial intelligence (AI) systems increasingly integrate into our society, they will do bad things.”Footnote 1 If a robot’sFootnote 2 malfunctioning can be proved to be the result of inadequate programmingFootnote 3 or testing, civil and even criminal liability of the human being responsible for manufacturing or controlling the device can provide an adequate solution – if it is possible to identify an individual who can be blamed for being reckless or negligent in producing, coding, or training the robot.

But two factors make it unlikely that an AI device’s harmful action can always be traced back to the fault of an individual human actor. First, many persons, often belonging to different entities, contribute to getting the final product ready for action; if something goes wrong, it is difficult to even identify the source of malfunctioning, let alone an individual who culpably caused the defect. Second, many AI devices are designed to learn from experience and to optimize their ability to reach the goals set for them by collecting data and drawing “their own conclusions.”Footnote 4 This self-teaching function of AI devices greatly enhances their functionality, but also turns them, at least to some extent, into black boxes whose decision-making and actions can be neither predicted nor completely explained after the fact. Robots can react in unforeseeable ways, even if their human manufacturers and handlers did everything they could to avoid harm.Footnote 5 It can be argued that putting a device into the hands of the public without being able to predict exactly how it will perform constitutes a basis for liability, but among other issues it is not clear whether this liability ought to be criminal liability.

This chapter considers two novel ways of imposing liability for harm caused by robots: holding robots themselves responsible for their actions, and corporate criminal responsibility (CCR). It will be argued that it is at present neither conceptually coherent nor practically feasible to subject robots to criminal punishment, but that it is in principle possible to extend the scope of corporate responsibility, including criminal responsibility if recognized in the relevant jurisdiction, to harm caused by robots controlled by corporations and operating for their benefit.

II Robots as Criminals?

To resolve the perceived responsibility gap in the operation of robots, one suggestion has been to grant legal personhood to AI devices, which could make them liable for the harm they bring about. The issue of recognizing E-persons was discussed within the European Union when the European Parliament presented this option.Footnote 6 The idea has not been taken up, however, in the EU Commission’s 2021 Proposal for an Artificial Intelligence Act,Footnote 7 which mainly relies on strictly regulating the marketing of certain AI devices and holding manufacturers and users responsible for harm caused by them. Although the notion of imprisoning, fining, or otherwise punishing AI devices must appear futuristic,Footnote 8 some scholars favor the idea of extending criminal liability to robots, and the debate about this idea has reached a high intellectual level.Footnote 9 According to recent empirical research, the notion of punishing robots is supported by a fairly large percentage of the general population, even though many people are aware that the normal purposes of punishment cannot be achieved with regard to AI devices.Footnote 10

II.A Approximating the Responsibilities of Machines and Legal Persons

As robots can be made to look and act more and more like humans, the idea of approximating their movements to human acts becomes more plausible – which might pave the way to attributing the notion of actus reus to robots' activities. By the same token, robots' ways of processing information and turning it into a motive for action may approach the notion of mens rea. The law might, as Ryan Abbott and Alex Sarch have argued, "deem some AIs to possess the functional equivalent of sufficient reasoning and decision-making abilities to manifest insufficient regard" of others' protected interests.Footnote 11

Probably the most sophisticated argument to date in favor of robots’ criminal responsibility has been advanced by Monika Simmler and Nora Markwalder.Footnote 12 These authors reject as ideologically based any link between the recognition of human free will and the ascription of culpability;Footnote 13 they instead subscribe to a strictly functionalist theory of criminal law that bases criminal responsibility on an “attribution of freedom as a social fact.”Footnote 14 In such a system, the law is free to “adopt a concept of personhood that depends on the respective agent’s capacity to disappoint normative expectations.”Footnote 15 The essential question then becomes “whether robots can destabilize norms due to the capacities attributed to them and due to their personhood and if they produce a conflict that requires a reaction of criminal law.”Footnote 16 The authors think that this is a probable scenario in a foreseeable future: robots could be “experienced as ‘equals’ in the sense that they are constituted as addressees of normative expectations in social interaction like humans or corporate entities are today.”Footnote 17 It would then be a secondary question in what symbolic way society’s disapproval of robots’ acts were to be expressed. It might well make sense to convict an AI device of a crime – even if it lacks the sensory, intellectual, and moral sensibility of feeling the impact of any traditional punishment.Footnote 18 Since the future is notoriously difficult to foresee, this concept of robots’ criminal responsibility can hardly be disproved, however unlikely it may appear today that humans could have normative expectations of robots and that disappointment of these expectations would call for the imposition of sanctions. However, in the brave new functional world envisioned by these authors, the term “criminal sanctions” appears rather old-fashioned, because it relies on concepts more relevant to human beings, such as censure, moral blame, and retribution (see Section II.B).

One recurring argument in favor of imposing criminal responsibility on AI devices is the asserted parallel to the criminal responsibility of corporations (CCR).Footnote 19 CCR will be discussed in more detail in the following section of this chapter, but it is addressed briefly here because calls for the criminal responsibility of corporations and of robots are reactions to a similar dilemma. In each case, it is difficult to trace responsibility for causing harm to an individual person. If, e.g., cars produced by a large manufacturing firm are defective and cause fatal accidents, it is safe to say that something must have gone wrong in the processes of designing, testing, or manufacturing the relevant type of car. But it may be impossible to identify the person(s) responsible for causing the defect, especially since the companies involved are unlikely to actively assist in the police investigation of the case. As we have seen, harm caused by robots leads to similar problems concerning the identification of responsible humans in the background. Regarding commercial firms, the introduction of CCR, which has spread from the United States to many other jurisdictions,Footnote 20 has helped to resolve the problem of the diffusion of responsibility by making corporations criminally liable for any fault of their officers or even – under the respondeat superior doctrine – of their employees. The main goals of CCR are to obtain redress for victims and give corporations a strong incentive to improve their compliance with relevant legal rules. If criminal liability is imposed on the corporation whenever it can be proved that one of its employees must have caused the harm, it can be expected that corporations will do everything in their power to properly select, train, and supervise their personnel. The legal trick that leads to this desired result is to treat corporations as or like responsible subjects under criminal law, even though everyone knows that a corporation is a mere product of legal rules and therefore cannot physically act, cannot form an intent, and cannot understand what it means to be punished. If applying this fiction to corporations has beneficial effects,Footnote 21 why should this approach not be used for robots as well?

II.B Critical Differences

However attractive that idea sounds, one cannot help but note that there exist significant differences between corporations and AI devices. Regarding the basic requirements of criminal responsibility, robots at their present stage of development cannot make free decisions, whereas corporations can do so through their statutory organs.Footnote 22 At the level of sanctioning, corporations can – through their management – be deterred from committing further offenses, they can compensate victims, and they can improve their operation and become better corporate citizens. Robots have none of these abilities,Footnote 23 although it is conceivable that their performance can be improved through reprogramming, retraining, and special supervision. The imposition of retributive criminal sanctions on robots would presuppose, however, that they can in some way feel punished and can link the consequences visited upon them to some prior malfeasance on their part. Today’s robots lack this key feature of punishability, although their grandchildren may well be imbued with the required sensitivity to moral blame.

The differences between legal persons and robots do not necessarily preclude the future possibility of treating robots as criminal offenders. But the fact that corporations, although they are not human beings, can be recognized as subjects of the criminal law does not per se lend sufficient plausibility to the idea of granting the same status to today’s robots.

There may, however, be another way of establishing criminal responsibility for robots’ harmful actions: corporations that use AI devices and/or benefit from their services could be held responsible for the harm they cause. To make this argument, one would have to show that: (1) corporate responsibility as such is a legitimate feature of the law; and (2) corporations can be held responsible for robots as well as for their human agents.

III Corporate Criminal Responsibility for Robots
III.A Should There Be Corporate Criminal Responsibility?

Before we investigate this option, we should reflect on the legitimacy of the general concept of CCR. If that concept is ethically or legally doubtful or even indefensible, we should certainly refrain from extending its reach from holding corporations responsible for the acts of their human employees to holding them responsible for their robots.

Two sets of theories have been developed for justifying the imposition of criminal responsibility on legal persons for the harmful acts of their managers and employees. One approach regards certain decision-makers within the corporation as its alter ego and therefore proposes that acts of these persons are attributed to the corporation; the other approach targets the corporation itself and bases its responsibility on its criminogenic or improper self-organization.Footnote 24 These two theories are not mutually exclusive. For example, Austrian law combines both approaches: its statute on the responsibility of corporations imposes criminal liability on a corporation if a member of its management or its control board committed a criminal offense on the corporation's behalf or in violation of its obligations, or if an employee unlawfully committed a criminal offense and the management, by applying due diligence, could have prevented the offense or rendered its perpetration significantly more difficult.Footnote 25

Whereas in the United States CCR has been recognized for more than a century,Footnote 26 its acceptance in Europe has been more hesitant.Footnote 27 In Germany, a draft law on corporate responsibility with semi-criminal features failed in 2021 due to internal dissent within the coalition government of the time.Footnote 28 Critics claim that CCR violates fundamental principles of criminal law.Footnote 29 They maintain that a corporation cannot be a subject of criminal law because it can neither act nor make moral judgments.Footnote 30 Moreover, a fine imposed on a corporation is said to be unfair because it does not punish the corporation itself, but its shareholders, creditors, and employees, who cannot be blamed for the faults of managers.Footnote 31

It can hardly be denied that CCR is a product of crime-preventive pragmatism rather than of theoretically consistent legal thinking. The attribution of managers’ and/or employees’ harmful acts to the corporation, cloaked with sham historical dignity by the Latin phrase respondeat superior, is difficult to justify because it leads to a duplication of responsibility for the same crime.Footnote 32 It is doubtful, moreover, whether the moral blame inherent in criminal punishment can adequately be addressed to a legal person, an entity that has no conscience and cannot feel guilt.Footnote 33 An alternative basis for CCR could be a strictly functional approach to criminal law which links the responsibility of corporations to the empirical and/or normative expectation that they abide by the legal norms applying to their scope of activities.Footnote 34

There exists an insoluble conflict between the pragmatic and political interest in nudging corporations toward legal compliance and the theoretical problems of extending the criminal law beyond natural persons. It is thus ultimately a policy question whether a state chooses to limit the liability of corporations for faults of their employees to tort law, extends it to criminal law, or places it somewhere in between,Footnote 35 as has been done in Germany.Footnote 36 In what follows, I assume that the criminal law version of CCR has been chosen. In that case, the further policy question arises as to whether CCR should include criminal responsibility for harm caused by AI devices used by the corporation.

III.B Legitimacy of CCR for Robots

As we have seen, retroactively identifying the fault of an individual human actor can be as difficult when an AI device was used as when some unknown employee of a corporation may have made a mistake.Footnote 37 The problem of allocating responsibility for robot action is further exacerbated by the black box element in self-teaching robots used on behalf of a corporation.Footnote 38

It could be argued that the responsibility gap can be closed by treating the robot as a mere device employed by a human handler, which would turn the issue of a robot's harmful action into a regular instance of corporate liability. But even assuming that the doctrine of respondeat superior provides a sufficient basis for holding a corporation liable for faults of its employees, extending that doctrine to AI devices employed by humans would raise additional doubts about a corporation's responsibility. It may not be known how the robot's harmful action came about, whether there was a human at fault,Footnote 39 or whether the company could have avoided the employee's potential malfeasance.Footnote 40 It is therefore unlikely that many cases of harm caused by an AI device could be traced back to recklessness or criminal negligence on the part of a human employee for whom the corporation can be made responsible.

Effectively bridging the responsibility gap would therefore require the more radical step of treating a company’s robots like its employees, with the consequence of linking CCR directly to the robot’s malfeasance. This step could set into motion CCR’s beneficial compliance mechanism: if the robot’s fault is transferred by law to the company that employs it, that company will have a strong incentive to design, program, and constantly monitor its robots to make sure that they function properly.

How would a corporation's direct responsibility for actions of its robots square with the general theories on CCR?Footnote 41 The alter ego-type liability model based on a transfer of the responsibility of employees to the corporation is not well suited to accommodating activities of robots because their actions lack the quality of blameworthy human decision-making.Footnote 42 Transfer of liability would work only if the mere existence of harmful activity on the part of an employee or robot were sufficient to trigger CCR, i.e., in an absolute liability model. Such a model would address the difficulties raised by corporations using robots in situations where the robot's behavior is unpredictable; however, it is difficult to reconcile absolute liability with European concepts of criminal justice. A more promising approach to justifying CCR for robots relates to the corporation's overall spirit of lawlessness and/or its inherently defective organization as grounds for holding it responsible.Footnote 43 It is this theory that might provide an explanation for the corporation's liability for the harmful acts of its robots; if a corporation uses AI devices, but fails to make sure that they operate properly, or uses a robot when it cannot predict that the robot will act safely, there is good reason to impose sanctions on the corporation for this deficiency in its internal organization. This is true even where such AI devices contain elements of self-teaching. Who but the corporation that employs them should be able to properly limit and supervise this self-teaching function?

In this context, an analogy has been discussed between a corporation’s liability for robots and a parent’s or animal owner’s liability for harm caused by children or domestic animals.Footnote 44 Even though the reactions of a small child or a dog cannot be completely predicted, it is only fair to hold the parent or dog owner responsible for harm that could have been avoided by training and supervising the child or the animal so as to minimize the risks emanating from them.Footnote 45 Similar considerations suggest a corporation’s liability for its robots, at least where it can be shown that the robot had a recognizable propensity to cause harm. By imposing penalties on corporations in such cases, the state can effectively induce companies to program, train, and supervise AI devices so as to avoid harm.Footnote 46 Moreover, if there is insufficient liability for harm by robots, business firms might be tempted to escape traditional CCR by replacing human employees by robots.Footnote 47

III.C Regulating and Limiting Robot CCR

Before embracing an extension of CCR from employees to robots, however, a counterargument needs to be considered. The increased deployment of AI devices is by and large a beneficial development, saving not only cost but also human labor in areas where such labor is not necessarily satisfying for the worker, as in conveyor-belt mechanical manufacturing. Robots do have inherent risks, but commercial interests will provide strong incentives for the companies that deploy them to control these risks. Adding criminal responsibility might produce an over-reaction, inhibiting the use and further development of AI devices and thus stifling progress. An alternative to CCR for robot malfunction may be for society to accept certain risks associated with the widespread use of AI devices and to restrict liability to providing compensation for harm through insurance.Footnote 48 These considerations do not necessarily preclude the introduction of a special regime of corporate liability for robots, but they counsel restraint. Strict criminal liability for robotic faults would have a chilling effect on the development of robotic technology and therefore does not recommend itself as an adequate solution.

Legislatures should therefore limit CCR for robots to instances where human agents of the corporation were at least negligent with regard to designing, programming, and controlling robots.Footnote 49 Only if that condition is fulfilled can it be said that the corporation deserves to be punished because it failed to organize its operation so as to minimize the risk of harm to others. Potential control over the robot by a human agent of the corporation is thus a necessary condition for the corporation’s criminal liability. Mihailis E. Diamantis plausibly explains that “control” in the context of algorithms means “the power to design the algorithm in the first place, the power to pull the plug on the algorithm, the power to modify it, and the power to override the algorithm’s decisions.”Footnote 50 But holding every company that has any of these types of control liable for any harm that the robot causes, Diamantis continues, would draw the net wider than “sound policy or fairness would dictate.”Footnote 51 He therefore suggests limiting liability for algorithms to companies which not only control a robot, but also benefit from its activities.Footnote 52 The combination of these factors is in fact perfectly in line with the requirements of traditional CCR, where liability presupposes that the corporation had a duty to supervise the employee who committed the relevant fault and that the employee’s activity or culpable passivity was meant to benefit the corporation.

This approach appropriately limits CCR to corporations that benefit from the employment of AI devices. Even so, liability should not be strict in the sense that a corporation is subject to punishment whenever any of its robots causes harm and no human actor responsible for its malfunction can be identified.Footnote 53 In line with the model of CCR that is based on a dysfunctional organization of the corporation, criminal liability should require a fault on the part of the corporation that has a bearing on the robot’s harmful activity.Footnote 54 This corporate fault can consist, e.g., in a lack of proper training or oversight of the robot, or in an unmonitored self-teaching process of the AI device.Footnote 55 There should in any event be proof that the corporation was at least negligent concerning its obligation to do everything in its power to prevent robots that work for its benefit from causing harm to others. In other words, CCR for robots is proper only where it can be shown that the corporation could, with proper diligence, have avoided the harm. This model of liability could be adopted even in jurisdictions that require some fault on the part of managers for CCR, because the task of properly training and supervising robots is so important that it should be organized on the management level.

Corporate responsibility for harm caused by robots differs from CCR for activities of humans and therefore should be regulated separately by statute. The law needs to determine under what conditions a corporation is to be held responsible for robot malfeasance. The primary issue that needs to be addressed is the necessary link between a corporation and an AI device. Taking an automated car as an example, there are several candidates for potential liability for its harmful operation: the firm that designed the car, the manufacturing company, the programmer of the software, the seller, and the owner of the car, if that is a corporation. If it can be proved that the malfunctioning of the car was caused by an agent of one of these companies, e.g., the programmer was reckless in installing defective software, that company will be liable under the normal CCR rules of the relevant jurisdiction. Special “Robot CCR” will come into play only if the car’s aberration cannot be traced to a particular human source, for example, if the reason for the malfunction remains inexplicable even to experts, if there was a concurrence of several causes, or if the harmful event resulted from the car’s unforeseeable defective self-teaching. In any of these instances, it must be determined which of the corporate entities identified above should be held responsible.

IV Conclusion

We have found that robots cannot at present be subject to criminal punishment and cannot trigger criminal liability of corporations under traditional rules of CCR for human agents. Even if the reach of the criminal law is extended beyond natural persons to corporations, the differences between corporations and robots are so great that a legal analogy between them cannot be drawn. But it is in principle possible to extend the scope of corporate responsibility, including criminal responsibility if recognized in the relevant jurisdiction, to harm caused by AI devices controlled by corporations and operating for their benefit. Given the general social utility of using robots, however, corporate liability for harm caused by them should not be unlimited, but should at least require an element of negligence in programming, testing, or supervising the robot.

Footnotes

1 The Challenges of Human–Robot Interaction for Substantive Criminal Law Mapping the Field

* I would like to thank Emily Silverman for improving the language of this chapter.

1 The category “criminal law” is used here in a wide sense, encompassing all norms that prohibit conduct and prescribe sanctions for noncompliance. Details and distinctions, e.g., between criminal offenses in a narrow sense and administrative offenses (Ordnungswidrigkeiten) in German law, are not discussed here. They will, however, play a role once prohibitions are seriously considered, and then, notions such as proportionality or ultima ratio become relevant and the kind and seriousness of potential sanctions need more thought.

2 See also Monika Simmler & Nora Markwalder, “Guilty Robots? Rethinking the Nature of Culpability and Legal Personhood in an Age of Artificial Intelligence” (2019) 30:1 Criminal Law Forum 1 [“Guilty Robots”] at 5–6.

3 European Union, European Commission, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence and Amending Certain Union Legislative Acts, COM/2021/206 final (Brussels, Belgium: European Commission, April 21, 2021).

4 See e.g., Horst Eidenmüller & Gerhard Wagner, Law by Algorithm (Heidelberg, Germany: Mohr Siebeck, 2021) [Law by Algorithm].

5 For such instruments, see Sheldon Zhang, Robert Roberts, & David Farabee, “An Analysis of Prisoner Reentry and Parole Risk Using COMPAS and Traditional Criminal History Measures” (2014) 60:2 Crime and Delinquency 167; Carolyn McKay, “Predicting Risk in Criminal Procedure: Actuarial Tools, Algorithms, AI and Judicial Decision-Making” (2020) 32:1 Current Issues in Criminal Justice 22; Lucia Sommerer, Personenbezogenes Predictive Policing (Baden-Baden, Germany: Nomos, 2020).

6 See e.g., Solon Barocas & Andrew Selbst, “Big Data’s Disparate Impact” (2016) 104:3 California Law Review 671; Richard Berk, Hoda Heidari, Shahin Jabbari et al., “Fairness in Criminal Justice Task Assessments: The State of the Art” (2017) 50:1 Sociological Methods & Research 3; John Kleinberg, Himabindu Lakkaraju, Jens Ludwig et al., “Human Decisions and Machine Predictions” (2018) 133:1 Quarterly Journal of Economics 237.

7 See Erico Guizzo, “What Is a Robot?” IEEE (August 1, 2018), https://robots.ieee.org/learn/what-is-a-robot/.

8 See Feasibility Study of a Future Council of Europe Instrument on Artificial Intelligence and Criminal Law (European Committee on Crime Problems, September 4, 2020).

9 Gary Marchant & Brad Allenby, “Soft Law: New Tools for Governing Emerging Technologies” (2017) 73:2 Bulletin of the Atomic Scientists 108; Ryan Hagemann, Jennifer Huddleston, & Adam Thierer, “Soft Law for Hard Problems: The Governance of Emerging Technologies in an Uncertain Future” (2018) 17:1 Colorado Technology Law Journal 37; Anna Thaler, Values and Ethical Principles for AI and Robotics: A Qualitative Content Analysis of EU Soft Law Initiatives (Hamburg, Germany: Verlag Dr. Kovač, 2021).

10 See, for possible future risks, Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (New York, NY: Oxford University Press, 2014).

11 For a proposal signed by prominent AI researchers and entrepreneurs, see “Pause Giant AI Experiments: An Open Letter,” Future of Life, https://futureoflife.org/open-letter/pause-giant-ai-experiments/.

12 See Chapter 2 in this volume; see also: Jai Galliott, Military Robots: Mapping the Moral Landscape (Abingdon, UK: Routledge, 2017); Paul Springer, Outsourcing War to Machines: The Military Robotics Revolution (Santa Barbara, CA: Praeger, 2018); Paul Scharre, Army of None: Autonomous Weapons and the Future of War (New York, NY: W.W. Norton & Company, 2018) [Army of None].

13 For an overview of the ethical issues, see Nehal Bhuta, Susanne Beck, Robin Geiß et al. (eds.), Autonomous Weapons Systems: Law, Ethics, Policy (Cambridge, UK: Cambridge University Press, 2016); Army of None, note 12 above, at 271–296.

14 Campaign against Sex Robots website, https://campaignagainstsexrobots.org/; Oliver Bendel, “Love Dolls and Sex Robots in Unproven and Unexplored Fields of Application” (2020) 12:1 Paladyn, Journal of Behavioral Robotics 1.

15 See e.g., Phil McNally & Sohail Inayatullah, “The Rights of Robots: Technology, Culture and Law in the 21st Century” (1988) 20:2 Futures 119; Mark Coeckelbergh, “Robot Rights? Towards a Social-Relational Justification of Moral Consideration” (2010) 12:3 Ethics and Information Technology 209; David Gunkel, Robot Rights (Cambridge, MA: MIT Press, 2018); Henry Shevlin, “How Could We Know When a Robot Was a Moral Patient?” (2021) 30:3 Cambridge Quarterly of Healthcare Ethics 459; John Danaher, “What Matters for Moral Status: Behavioural or Cognitive Equivalence?” (2021) 30:3 Cambridge Quarterly of Healthcare Ethics 472.

16 See, for an example from fiction, Ian McEwan, Machines Like Me (London, UK: Penguin Books, 2019).

17 See, for the idea of an electronic person, European Union, The European Parliament, Resolution of February 16, 2017 with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), OJ 2015 C 252 (EU: Official Journal of the European Union, 2017) at No. 59(f); Susanne Beck, “Intelligent Agents and Criminal Law – Negligence, Diffusion of Liability and Electronic Personhood” (2016) 86:4 Robotics and Autonomous Systems 138 [“Intelligent Agents”] at 141–142; Jacob Turner, “Legal Personality for AI” in Jacob Turner, Robot Rules (London, UK: Palgrave, 2018) [“Legal Personality for AI”] 173; Law by Algorithm, note 4 above, at 103–126.

18 See “Intelligent Agents”, note 17 above, at 139.

19 See, for the notion of “admissible risk,” “Intelligent Agents”, note 17 above, at 141.

20 Sabine Gless, Emily Silverman, & Thomas Weigend, “If Robots Cause Harm, Who is to Blame? Self-Driving Cars and Criminal Liability” (2016) 19:3 New Criminal Law Review 412 [“If Robots Cause Harm”] at 433–434.

21 Susanne Beck, “Google Cars, Software Agents, Autonomous Weapons Systems – New Challenges for Criminal Law?” in Eric Hilgendorf & Uwe Seidel (eds.), Robotics, Autonomics, and the Law (Baden-Baden, Germany: Nomos, 2017) 227 [“Google Cars”] at 245.

22 Footnote Ibid. at 243.

23 See, for the notion of conditional intent in German criminal law: Michael Bohlander, Principles of German Criminal Law (Oxford, UK: Hart, 2009) [German Criminal Law] at 63–67; Tatjana Hörnle & Rita Vavra, “Criminal Law” in Joachim Zekoll & Gerhard Wagner (eds.), Introduction to German Law, 3rd ed. (Philadelphia, PA: Wolters Kluwer, 2019) [“Criminal Law”] 503 at 509.

24 See, for the potential of service robots to be used this way, “Google Cars”, note 21 above, at 231.

25 “Legal Personality for AI”, note 17 above, at 118; “If Robots Cause Harm”, note 20 above, at 425.

26 For a discussion of characterization of robots as a tool, see Chapter 13 in this volume.

27 For this dilemma, see Dietmar Hübner & Lucie White, “Crash Algorithms for Autonomous Cars: How the Trolley Problem Can Move Us beyond Harm Minimisation” (2018) 21:3 Ethical Theory and Moral Practice 685; Rob Lawlor, “The Ethics of Automated Vehicles: Why Self-Driving Cars Should Not Swerve in Dilemma Cases” (2021) 28:1 Res Publica 193; and Chapter 15 in this volume.

28 See Strafgesetzbuch (German Criminal Code) (StGB), Germany (November 13, 1998 (Federal Law Gazette I, p. 3322), as amended by Art. 2 of the Act of June 19, 2019 (Federal Law Gazette I, p. 844)) [StGB], §35 (excusing necessity); and David Ormerod & Karl Laird, Smith, Hogan, and Ormerod’s Criminal Law, 15th ed. (New York, NY: Oxford University Press, 2018) at 364–367 for the “duress of circumstances” doctrine in English law.

29 See StGB, note 28 above, §34; from the viewpoint of legal philosophy, Ivó Coca Vila, “Self-Driving Cars in Dilemmatic Situations: An Approach Based on the Theory of Justification in Criminal Law” (2018) 12:1 Criminal Law and Philosophy 59 at 64–66; see for a more critical perspective on the anti-utilitarian German stance, Eric Hilgendorf, “Automated Driving and the Law” in Eric Hilgendorf & Uwe Seidel (eds.), Robotics, Autonomics, and the Law (Baden-Baden, Germany: Nomos, 2017) 171 at 190; and for an empirical analysis that shows the human preference for saving the greater number of humans, Anja Faulhaber, Anke Dittmer, Felix Blind et al., “Human Decisions in Moral Dilemmas Are Largely Described by Utilitarianism: Virtual Car Driving Study Provides Guidelines for Autonomous Driving Vehicles” (2019) 25:2 Science and Engineering Ethics 399.

30 Tatjana Hörnle & Wolfgang Wohlers, “The Trolley Problem Reloaded. Wie sind autonome Fahrzeuge für Leben-gegen-Leben-Dilemmata zu programmieren?” (The Trolley Problem Reloaded. How Should Autonomous Vehicles Be Programmed for the Case of a Life-against-Life Dilemma?) (2018) 165:1 Goltdammer’s Archiv für Strafrecht 12 at 23–24; Thomas Weigend, “Notstandsrecht für Selbstfahrende Autos?” (Emergency Law for Self-Driving Cars?) (2017) 10 Zeitschrift für Internationale Strafrechtsdogmatik 599.

31 Regarding questions of self-defense, see Michael Froomkin & Zak Colangelo, “Self-Defense against Robots and Drones” (2015) 48:1 Connecticut Law Review 1; Severin Löffler, “Rechtswidrigkeit der Abwehr von Drohnen über privaten Wohngrundstücken” (Unlawfulness of the Defense against Drones above Private Residential Property) in Susanne Beck, Carsten Kusche, & Brian Valerius (eds.), Digitalisierung, Automatisierung, KI und Recht (Baden-Baden, Germany: Nomos, 2020) 329.

32 German Criminal Law, note 23 above, at 104.

33 StGB, note 28 above, §32; “Google Cars”, note 21 above, at 236 and 242; Wolfgang Mitsch, “Roboter und Notwehr” (Robots and Self-Defense) in Susanne Beck, Carsten Kusche, & Brian Valerius (eds.), Digitalisierung, Automatisierung, KI und Recht (Baden-Baden, Germany: Nomos, 2020) 365.

34 American Law Institute, Model Penal Code: Official Draft and Explanatory Notes: Complete Text of Model Penal Code as Adopted at the 1962 Annual Meeting of the American Law Institute at Washington, DC, 24 May 1962 (Philadelphia, PA: American Law Institute, 1985), §3.04(1).

35 See the citations stated in note 17 above.

36 See, for the argument that the categories of actus reus and mens rea could also be applied to robots, Gabriel Hallevy, When Robots Kill (Boston, MA: Northeastern University Press, 2013).

37 Lawrence Solum, “Legal Personhood for Artificial Intelligences” (1992) 70:4 North Carolina Law Review 1231 [“Legal Personhood”] at 1255–1280.

38 “Legal Personality for AI”, note 17 above, at 416–417; see Chapter 15 in this volume.

39 See Gunther Teubner, “Rights of Non-Humans? Electronic Agents and Animals as New Actors in Politics and Law” (2006) 33:4 Journal of Law & Society 497; “Guilty Robots”, note 2 above, at 13–21.

40 See Mireille Hildebrandt, “Criminal Liability and ‘Smart’ Environments” in Antony Duff & Stuart Green (eds.), Philosophical Foundations of Criminal Law (New York, NY: Oxford University Press, 2011) 507 [“Criminal Liability”] at 525–526.

41 Ying Hu, “Robot Criminals” (2019) 52:2 University of Michigan Journal of Law Reform 487 [“Robot Criminals”] at 508–510.

42 Tatjana Hörnle, “The Role of Victims’ Rights in Punishment Theories” in Antje du Bois-Pedain & Anthony Bottoms (eds.), Penal Censure: Engagements Within and Beyond Desert Theory (London, UK: Hart, 2019) 207.

43 “Guilty Robots”, note 2 above, at 21–28.

44 See “Robot Criminals”, note 41 above, at 504–507; Karsten Gaede, Künstliche Intelligenz – Rechte und Strafen für Roboter? (Artificial Intelligence – Rights and Criminal Punishment for Robots?) (Baden-Baden, Germany: Nomos, 2019) [Künstliche Intelligenz] at 64.

45 “Robot Criminals”, note 41 above, at 499.

46 For the distinction between blame and hard treatment, see Andrew von Hirsch, Censure and Sanctions (Oxford, UK: Clarendon, 1993) at 9–14.

47 Künstliche Intelligenz, note 44 above, at 66–69.

48 “Criminal Liability”, note 40 above, at 530–531.

49 “Robot Criminals”, note 41 above, at 500–503.

50 For a discussion about the legal rights of robots, see “Legal Personhood”, note 37 above.

51 StGB, note 28 above, §21; German Criminal Law, note 23 above, at 135.

52 The definition of conditional intent requires the defendant to be aware of the risk and to accept it: see German Criminal Law, note 23 above, at 63–67; “Criminal Law”, note 23 above, at 509.

2 Are Programmers in or out of Control? The Individual Criminal Responsibility of Programmers of Autonomous Weapons and Self-Driving Cars

* I am grateful for the helpful discussions within the DILEMA team at the Asser Institute and the feedback received on an earlier draft during the author workshop within the SNSF project “Human–Robot Interaction in Law and its Narratives: Legal Blame, Criminal Law, and Procedure.” I thank James Patrick Sexton for his invaluable research assistance and for helping me improve earlier drafts of this chapter. All errors remain mine.

1 Sam Levin & Julia Carrie Wong, “Self-Driving Uber Kills Arizona Woman in First Fatal Crash Involving Pedestrian,” The Guardian (March 19, 2018), www.theguardian.com/technology/2018/mar/19/uber-self-driving-car-kills-woman-arizona-tempe [“Self-Driving Uber”]; see also Chapters 6 and 15 in this volume.

2 Lucia Binding, “Arizona Uber Driver Was ‘Streaming The Voice’ Moments Before Fatal Crash,” Sky News (June 22, 2018), https://news.sky.com/story/arizona-uber-driver-was-streaming-the-voice-moments-before-fatal-crash-11413233. In this chapter, I will use interchangeably the terms “driver,” “occupant,” “operator,” and “user.”

3 Highway Accident Report: Collision between Vehicle Controlled by Developmental Automated Driving System and Pedestrian Tempe, Arizona March 18, 2018 (National Transportation Safety Board, 2019), www.ntsb.gov/investigations/AccidentReports/Reports/HAR1903.pdf.

4 State of Arizona v. Rafael Stuart Vasquez, Indictment 785 GJ 251, Superior Court of the State of Arizona in and for the County of Maricopa (August 27, 2020), www.maricopacountyattorney.org/DocumentCenter/View/1724/Rafael-Vasquez-GJ-Indictment [State of Arizona].

5 “Uber ‘Not Criminally Liable’ for Self-Driving Death,” BBC News (March 6, 2019), www.bbc.com/news/technology-47468391.

6 Manufacturers of AVs often include responsibility clauses in their contracts with end-users; however, practice may vary: see Keri Grieman, “Hard Drive Crash: An Examination of Liability for Self-Driving Vehicles” (2018) 9:3 Journal of Intellectual Property, Information Technology and E-Commerce Law 294 [“Hard Drive Crash”] at para. 29.

7 Letter dated March 8, 2021 from the Panel of Experts on Libya established pursuant to resolution 1973 (2011) addressed to the President of the Security Council (United Nations Security Council, 8 March 2021) S/2021/229, at paras 63–64.

8 Footnote Ibid. at para. 63.

9 See Filippo Santoni de Sio & Jeroen van den Hoven, “Meaningful Human Control over Autonomous Systems: A Philosophical Account” (2018) 5 Frontiers in Robotics and AI 1 [“MHC over Autonomous Systems”] at 10; “Killer Robots and the Concept of Meaningful Human Control: Memorandum to Convention on Conventional Weapons (CCW) Delegates” (Human Rights Watch, 2016), www.hrw.org/news/2016/04/11/killer-robots-and-concept-meaningful-human-control; “Artificial Intelligence and Machine Learning in Armed Conflict: A Human-Centred Approach” (International Committee of the Red Cross, 2019), www.icrc.org/en/document/artificial-intelligence-and-machine-learning-armed-conflict-human-centred-approach.

10 Berenice Boutin & Taylor Woodcock, “Aspects of Realizing (Meaningful) Human Control: Legal Perspective” in Robin Geiß & Henning Lahmann (eds.), Research Handbook on Warfare and Artificial Intelligence (Cheltenham, UK: Edward Elgar, 2024) 9 [“Realizing MHC”] at 2–10.

11 Marta Bo, Laura Bruun, & Vincent Boulanin, Retaining Human Responsibility in the Development and Use of Autonomous Weapon Systems: On Accountability for Violations of International Humanitarian Law Involving AWS (Stockholm, Sweden: Stockholm International Peace Research Institute, 2022) at 38 and 39.

12 See Thomas C. King, Nikkita Aggarwal, Mariarosaria Taddeo et al., “Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions” (2020) 26:2 Science and Engineering Ethics 89 at 95; see contra the work of Gabriel Hallevy, “The Criminal Liability of Artificial Intelligence Entities: From Science Fiction to Legal Social Control” (2010) 4:2 Akron Intellectual Property Journal 171; see Chapter 4 in this volume.

13 Direct commission or principal responsibility under international criminal law also includes joint commission and co-perpetration: Gerhard Werle & Florian Jessberger, Principles of International Criminal Law (New York, NY: Oxford University Press, 2020) at paras. 623–659. Co-perpetration as a form of principal responsibility in German criminal law is founded on the concept of “control over whether and how the offense is carried out”: Thomas Weigend, “Germany” in Kevin Jon Heller & Markus D. Dubber (eds.), The Handbook of Comparative Criminal Law (Redwood City, CA: Stanford University Press, 2011) 252 [“Germany”] at 265 and 266. There is no similar “co-perpetration” mode of liability in the United States.

14 See Chapter 4 in this volume.

15 See Sections II and III.

16 United Nations, Rome Statute of the International Criminal Court, 2187 UNTS 3 (adopted July 17, 1998, entered into force July 1, 2002) (Rome, Italy: United Nations, 1998) [Rome Statute].

17 United Nations, Protocol Additional to the Geneva Conventions of 12 August 1949 and Relating to the Protection of Victims of International Armed Conflicts, 1125 UNTS 3 (signed June 8, 1977, entered into force December 7, 1978) (Geneva, Switzerland: United Nations, 1977) [AP I].

18 Some theories of causation recognize that causation in law is a matter of imputation, i.e., a matter of imputing a result to criminal conduct: Paul K. Ryu, “Causation in Criminal Law” (1958) 106:6 University of Pennsylvania Law Review 773 [“Causation in Criminal Law”] at 785, 795, and 796.

19 In the context of AVs, the responsibility of manufacturers and programmers might overlap; see “Hard Drive Crash”, note 6 above, at para. 29.

20 See Sections IV and V.

21 Henry Prakken, “On the Problem of Making Autonomous Vehicles Conform to Traffic Law” (2017) 25:3 Artificial Intelligence and Law 341 [“Making Autonomous Vehicles”] at 353.

23 Footnote Ibid. at 354.

25 See Prakken’s analysis of Dutch traffic laws which could be extended to other similar European systems by analogy: “Making Autonomous Vehicles”, note 21 above, at 345, 346, and 360. However, Prakken also provides an overview of open-textured and vague norms in Dutch traffic law: Footnote ibid. at 347 and 348.

26 “Making Autonomous Vehicles”, note 21 above, at 347 and 348. See the open-textured traffic rules in the Straßenverkehrsgesetz (Swiss Traffic Code) (StVG), SR 741.01 (as of January 1, 2020), Arts. 4, 26, and 31, www.admin.ch/opc/de/classified-compilation/19580266/index.html.

27 Danny Yadron & Dan Tynan, “Tesla Driver Dies in First Fatal Crash While Using Autopilot Mode,” The Guardian (July 1, 2016), www.theguardian.com/technology/2016/jun/30/tesla-autopilot-death-self-driving-car-elon-musk.

28 See e.g., the accident involving a Tesla Model 3 which hit a Ford Explorer pickup truck, killing one passenger: Neal E. Boudette, “Tesla Says Autopilot Makes Its Cars Safer. Crash Victims Say It Kills,” The New York Times (July 5, 2021), www.nytimes.com/2021/07/05/business/tesla-autopilot-lawsuits-safety.html.

29 “How Machine Learning Algorithms Made Self Driving Cars Possible?” upGrad Blog (November 18, 2019), www.upgrad.com/blog/how-machine-learning-algorithms-made-self-driving-cars-possible/.

30 See Mindy Support, “How Machine Learning in Automotive Makes Self-Driving Cars a Reality,” Mindy News Blog (February 12, 2020), https://mindy-support.com/news-post/how-machine-learning-in-automotive-makes-self-driving-cars-a-reality/.

32 See “What Does Unsupervised Learning Have in Store for Self-Driving Cars?” intellias (August 22, 2019), intellias.com/what-does-unsupervised-learning-have-in-store-for-self-driving-cars/.

33 Sampo Kuutti, Richard Bowden, Yaochu Jin et al., “A Survey of Deep Learning Applications to Autonomous Vehicle Control” (2021) 22:2 Institute of Electrical and Electronics Engineers Transactions on Intelligent Transportation Systems 712 at 713.

34 Abhishek Gupta, Alagan Anpalagan, Ling Guan et al., “Deep Learning for Object Detection and Scene Perception in Self-Driving Cars: Survey, Challenges, and Open Issues” (2021) 10:10 Array 1 at 8.

35 See “Self-Driving Uber”, note 1 above.

36 See “Hard Drive Crash”, note 6 above.

37 Kathleen L. Mosier & Linda J. Skitka, “Human Decision Makers and Automated Decision Aids: Made for Each Other?” in Raja Parasuraman & Mustapha Mouloua (eds.), Automation and Human Performance: Theory and Applications (Boca Raton, FL: CRC Press, 1996) 201 at 203–210.

38 See Sadjad Soltanzadeh, Jai Galliott, & Natalia Jevglevskaja, “Customizable Ethics Settings for Building Resilience and Narrowing the Responsibility Gap: Case Studies in the Socio-Ethical Engineering of Autonomous Systems” (2020) 26:5 Science and Engineering Ethics 2693 [“Customizable Ethics”] at 2696.

39 Ibid. at 2705.

40 Ibid. at 2697.

41 Kim Harel, “Self-Driving Cars Must Be Able to Communicate with Each Other,” Aarhus University Department of Electrical and Computer Engineering: News (June 2, 2021), https://ece.au.dk/en/currently/news/show/artikel/self-driving-cars-must-be-able-to-communicate-with-each-other/.

43 See, on this topic, M. Nadeem Ahangar, Qasim Z. Ahmed, Fahd A. Kahn et al., “A Survey of Autonomous Vehicles: Enabling Communication Technologies and Challenges” (2021) 21:3 Sensors 706.

44 Keith J. Hayward & Matthijs M. Maas, “Artificial Intelligence and Crime: A Primer for Criminologists” (2021) 17:2 Crime Media Culture 209 at 216.

45 Matthew Caldwell, Jerone T. A. Andrews, Thomas Tanay et al., “AI-Enabled Future Crime” (2020) 9:1 Crime Science 14 at 22.

46 See Section IV.

47 Arthur Holland Michel, Known Unknowns: Data Issues and Military Autonomous Systems (Geneva, Switzerland: UN Institute for Disarmament Research, 2021) [Known Unknowns] at 10.

48 Merel Ekelhof & Giacomo Persi Paoli, Swarm Robotics: Technical and Operational Overview of the Next Generation of Autonomous Systems (Geneva, Switzerland: United Nations Institute for Disarmament Research, 2020) at 51.

49 Andree-Anne Melancon, “What’s Wrong with Drones? Automatization and Target Selection” (2020) 31:4 Small Wars and Insurgencies 801 [“What’s Wrong”] at 806.

50 The principle of distinction is enshrined in AP I, note 17 above, at Art. 48, with accompanying rules at Arts. 51 and 52.

51 Ashley Deeks, “Coding the Law of Armed Conflict: First Steps” in Matthew C. Waxman & Thomas W. Oakley (eds.), The Future Law of Armed Conflict (New York, NY: Oxford University Press, 2022) 41 [“First Steps”]; “What’s Wrong”, note 49 above, at 12 and 13.

52 E.g., autonomous drones equipped with autonomous or automatic target recognition (ATR) software to be employed for targeted killings of alleged terrorists.

53 “First Steps”, note 51 above, at 53.

54 On the challenges, see Alan L. Schuller, “Artificial Intelligence Effecting Human Decisions to Kill: The Challenge of Linking Numerically Quantifiable Goals to IHL Compliance” (2019) 15:1–2 Journal of Law and Policy for the Information Society 105.

55 “What’s Wrong”, note 49 above, at 14–16.

56 See Known Unknowns, note 47 above, at 4; Joshua Hughes, “The Law of Armed Conflict Issues Created by Programming Automatic Target Recognition Systems Using Deep Learning Methods” (2018) 21 Yearbook of International Humanitarian Law 99 at 106 and 107.

57 Known Unknowns, note 47 above, at 6.

58 Known Unknowns, note 47 above, at 4.

59 “First Steps”, note 51 above, at 53 and 54.

60 “Customizable Ethics”, note 38 above, at 2704 and 2705.

61 Military targeting should be understood as encompassing more than the critical functions of weapon release.

62 Known Unknowns, note 47 above, at 7.

63 Known Unknowns, note 47 above, at 9.

65 Crimes of conduct “rest on an immediate connection between the harmful action and the relevant harm”; crimes of result “are characterized by a [spatial and temporal] causal gap between action and consequence”: George P. Fletcher, Basic Concepts of Criminal Law (New York, NY: Oxford University Press, 1998) [Basic Concepts] at 61.

66 See, on this topic, “Making Autonomous Vehicles”, note 21 above.

67 While the United States’ Model Penal Code does not contain a provision dealing with vehicular homicide, legislation in certain domestic systems provides for it.

68 See State of Arizona, note 4 above.

69 American Law Institute, Model Penal Code: Official Draft and Explanatory Notes: Complete Text of Model Penal Code as Adopted at the 1962 Annual Meeting of the American Law Institute at Washington, DC, May 24, 1962 (Philadelphia, PA: American Law Institute, 1985) [Model Penal Code].

70 Ibid., §2.13(1)(b); see Paul H. Robinson, “United States” in Kevin Jon Heller & Markus Dubber (eds.), The Handbook of Comparative Criminal Law (Redwood City, CA: Stanford University Press, 2011) [“United States”] 563 at 585 (emphasis added).

71 Ibid. (emphasis added).

72 Strafgesetzbuch (German Criminal Code), Germany (November 13, 1998 (Federal Law Gazette I, p. 3322), as amended by Art. 2 of the Act of June 19, 2019 (Federal Law Gazette I, p. 844)) [StGB], §211(1) (emphasis added).

73 Under German criminal law, manslaughter is the intentional killing of another person without aggravating circumstances: StGB, note 72 above, §212.

74 “Germany”, note 13 above, at 262.

75 StGB, note 72 above, §222.

76 “Germany”, note 13 above, at 263.

77 For the underlying IHL, see AP I, note 17 above, Art. 51(4)(a); see also Jean-Marie Henckaerts & Louise Doswald-Beck, Customary International Humanitarian Law, vol. 1: Rules (New York, NY: Cambridge University Press, 2005), Rule 12, at 40.

78 See Marta Bo, “Autonomous Weapons and the Responsibility Gap in Light of the Mens Rea of the War Crime of Attacking Civilians in the ICC Statute” (2021) 19:2 Journal of International Criminal Justice 275 [“Autonomous Weapons”] at 282–285.

79 Knut Dörmann, Elements of War Crimes under the Rome Statute of the International Criminal Court: Sources and Commentary (Cambridge, UK: Cambridge University Press, 2003) [Elements of War Crimes] at 131 and 132; it is worth noting that programmers may have a greater role and responsibility, particularly when it comes to inherently indiscriminate weapons.

80 Both by the ICC and the International Criminal Tribunal for the former Yugoslavia. The latter interpreted violations of Art. 3 of its Statute, relevant to unlawful attack charges, by resorting to AP I, note 17 above, Art. 85(3); see “Autonomous Weapons”, note 78 above, at 283 and 284.

81 Under AP I, note 17 above, Art. 85(3), the actus reus of the war crime of willfully launching attacks against civilians contains the requirement that an attack against civilians causes “death or serious injury to body or health.”

82 Rome Statute, note 16 above, Arts. 8(2)(b)(i) and 8(2)(e)(i).

83 Moreover, under the Rome Statute, an attack could be considered as a result; Albin Eser, “Mental Elements – Mistake of Fact and Mistake of Law” in Antonio Cassese, Paola Gaeta, & John R.W.D. Jones (eds.), The Rome Statute of the International Criminal Court: A Commentary (New York, NY: Oxford University Press, 2002) 889 at 911.

84 Element 4 of the elements of the crime at Rome Statute, note 16 above, Art. 8(2)(b)(i). As elaborated by the International Tribunal for the former Yugoslavia, the law of war crimes applies “from the initiation of … an armed conflict and extend[s] beyond the cessation of hostilities until a general conclusion of peace is reached”; Elements of War Crimes, note 79 above, at 19–20.

85 “Causation in Criminal Law”, note 18 above, at 785; see contra Basic Concepts, note 65 above, at 63 and 66.

86 See “Causation in Criminal Law”, note 18 above, at 787; also described as “empirical causality,” which refers to the “metaphysical [and deterministic] question of cause and effect”; Marjolein Cupido, “Causation in International Crimes Cases: (Re)Conceptualizing the Causal Linkage” (2021) 32:1 Criminal Law Forum 1, [“International Crimes”] at 24.

87 “Causation in Criminal Law”, note 18 above, at 787.

89 Arthur Leavens, “A Causation Approach to Criminal Omissions” (1988) 76 California Law Review 547 [“Causation Approach”] at 564.

90 “Causation in Criminal Law”, note 18 above, at 789.

91 Ibid. at 791.

92 Ibid. at 792.

93 Ibid. at 795.

94 Model Penal Code, note 69 above, §2.03(3); §2.03(2) and (3) formulate several exceptions to the general proximity standard in cases of intervening and superseding causal forces.

95 Among the “but-for” conditions that are not considered attributable are: “[a] consequence that the perpetrator has caused … if that act did not unjustifiably increase a risk”; “[a] consequence was not one to be averted by the rule the perpetrator violated”; and “if a voluntary act of risk taking on the part of the victim or a third person intervened.” For details, see “Germany”, note 13 above, at 268. See also “International Crimes”, note 86 above, at 26 and 27.

96 “Causation in Criminal Law”, note 18 above, at 797.

97 Ibid. at 798.

98 “International Crimes”, note 86 above, at 43–47.

99 “International Crimes”, note 86 above, at 41.

100 StGB, note 72 above, §13.

101 On causation in criminal omissions, see Graham Hughes, “Criminal Omissions” (1958) 67:4 Yale Law Journal 590 at 627–631. Causation in “commission by omission” is strictly connected with duties to act and duty to prevent a certain harm: see George Fletcher, Rethinking Criminal Law (New York, NY: Oxford University Press, 2000) at 606; “Causation Approach”, note 89 above, at 562.

102 See Marta Bo, “Criminal Responsibility by Omission for Failures to Stop Autonomous Weapon Systems” (2023) 21:5 Journal of International Criminal Justice 1057.

103 See also Sabine Gless, Emily Silverman, & Thomas Weigend, “If Robots Cause Harm, Who Is to Blame? Self-Driving Cars and Criminal Liability” (2016) 19:3 New Criminal Law Review 412 at 425.

104 “United States”, note 70 above, at 575 (emphasis added); see also Guyora Binder, “Homicide” in Markus Dubber & Tatjana Hörnle (eds.), The Oxford Handbook of Criminal Law (New York, NY: Oxford University Press, 2014) 702 at 719: “Negligent manslaughter now usually requires objective foreseeability of death, rather than the simple violation of a duty of care.”

105 Model Penal Code, note 69 above, §2.13(1)(b).

106 “Germany”, note 13 above, at 262.

107 StGB, note 72 above, §222.

108 “Germany”, note 13 above, at 263.

109 See the case law quoted in “Autonomous Weapons”, note 78 above, at 293.

110 “Autonomous Weapons”, note 78 above, at 286–294.

111 Daniele Amoroso & Guglielmo Tamburrini, “Autonomous Weapons Systems and Meaningful Human Control: Ethical and Legal Issues” (2020) 1 Current Robotics Reports 187 at 189.

112 “MHC over Autonomous Systems”, note 9 above, at 6–9.

113 Simeon C. Calvert, Daniel Heikoop, Giulio Mecacci et al., “A Human Centric Framework for the Analysis of Automated Driving Systems Based on Meaningful Human Control” (2020) 21:3 Theoretical Issues in Ergonomics Science 478 [“Human Centric Framework”] at 490–492.

114 “MHC over Autonomous Systems”, note 9 above, at 9; “Human Centric Framework”, note 113 above, at 490 and 491 (emphasis added).

115 Tim McFarland & Tim McCormack, “Mind the Gap: Can Developers of Autonomous Weapons Systems Be Liable for War Crimes?” (2014) 90 International Law Studies 361 at 366.

116 The anticipation of data issues is central to the above-mentioned UNIDIR report relating to data failures in AWs; see Known Unknowns, note 47 above, at 13 and 14.

117 See Boutin and Woodcock arguing for the need to ensure MHC in the pre-deployment phase: “Realizing MHC”, note 10 above.

3 Trusting Robots Limiting Due Diligence Obligations in Robot-Assisted Surgery under Swiss Criminal Law

* The author owes great thanks to Prof. Dr. Sabine Gless and Assoc. Prof. Helena Whalen-Bridge for their outstanding support with this chapter.

1 Entscheid des Bundesgerichts (Decision of the Swiss Federal Court) BGE 133 III 121 E. 3.1; BGE 115 Ib 175 E. 2b; BGE 139 III 252 E. 1.5 (BGE is the abbreviation for the official collection of decisions of the Swiss Federal Court; cases are cited by volume and starting page; all decisions are available online at www.bger.ch).

2 See e.g., Christopher Geth, Strafrecht Allgemeiner Teil (Criminal Law General Part) (Basel, Switzerland: Helbing Lichtenhahn Verlag, 2021) [Strafrecht Allgemeiner Teil] at 170. Regarding the civil responsibility of a doctor, see Lisa Blechschmitt, Die straf- und zivilrechtliche Haftung des Arztes beim Einsatz roboterassistierter Chirurgie (The Criminal and Civil Liability of Physicians When Using Robot-Assisted Surgery) (Baden-Baden, Germany: Nomos, 2017).

3 Strafgesetzbuch (Swiss Criminal Code), SR 311.0 (as amended January 23, 2023) [SCC], Art. 12, para. 3, www.fedlex.admin.ch/eli/cc/54/757_781_799/en. Negligence differs from intentional action under Art. 12, para. 2, according to which someone intentionally commits a crime or misdemeanor if they carry out the act with knowledge and will.

4 SCC, note 3 above, Art. 12, para. 3.

5 Regarding the ongoing discussion of an e-personhood for robots, see e.g., Martin Zobl & Michael Lysakowski, “E-Persönlichkeit für Algorithmen?” (E-Personhood for Algorithms?) (2019) 1 Digma 42.

6 See Chapter 15 in this volume.

7 See Section III in this chapter, and Chapter 4 in this volume.

8 Neil Richards & William Smart, “How Should the Law Think about Robots?” in Ryan Calo, A. Michael Froomkin, & Ian Kerr (eds.), Robot Law (Cheltenham, UK: Edward Elgar, 2016) 3 [“Think about Robots”].

9 Melinda Florina Müller, “Roboter und Recht” (Robots and Law) (2014) 5 Aktuelle Juristische Praxis 595; Isabelle Wildhaber & Melinda Florina Lohmann, “Roboterrecht – eine Einleitung” (Robotlaw – An Introduction) (2017) 2 Aktuelle Juristische Praxis 135.

10 Susanne Beck, “Grundlegende Fragen zum Umgang mit der Robotik” (Basic Questions about the Use of Robotics) (2009) 6 Juristische Rundschau 225.

11 Thomas Christaller, Michael Decker, M. Joachim Gilsbach et al., Robotik (Robotics) (Berlin, Germany: Springer, 2001) [Robotik] at 18; Karel Capek, “R.U.R.” (play written in 1920 and premiered in Prague in 1921).

12 See e.g., Isaac Asimov, The Complete Robot (London, UK: Harper Collins, 1983).

13 George Bekey, Autonomous Robots: From Biological Inspiration to Implementation and Control (Cambridge, MA: MIT Press, 2005) 2.

14 See also George A. Bekey, “Current Trends in Robotics” in Patrick Lin, Keith Abney, & George Bekey (eds.), Robot Ethics (Cambridge, MA: MIT Press, 2012) 17; “Think about Robots”, note 8 above, at 6: “… our definition excludes wholly software-based artificial intelligences that exert no agency in the physical world.”

15 Robotik, note 11 above, at 5.

16 IFR-Website (International Federation of Robotics), https://ifr.org/.

17 The term is used more often for programs and artificial intelligence, not necessarily only for robots.

18 Using the example of driving, see Daimler, “Information on Daimler AG,” www.daimler.com/innovation/case/autonomous/rechtlicher-rahmen.html; Aleks Attanasio, Bruno Scaglioni, Elena De Momi et al., “Autonomy in Surgical Robotics” (2021) 4 Annual Review of Control, Robotics, and Autonomous Systems 651, www.annualreviews.org/doi/abs/10.1146/annurev-control-062420-090543.

19 Examples from different areas: Rolf H. Weber, “Automatisierte Entscheidungen: Perspektive Grundrechte” (Automated Decisions: Fundamental Rights Perspective) (2020) 1 SZW 18, section III; Atlas der Automatisierung, Automatisierte Entscheidungen und Teilhabe in Deutschland (Atlas of Automation, Automated Decisions and Participation in Germany) (AlgorithmWatch, 2019) 26, https://atlas.algorithmwatch.org/wpcontent/uploads/2019/04/Atlas_of_Automation_by_AlgorithmWatch.pdf. For definitions of autonomy in robotic-assisted surgery, see Guang-Zhong Yang, James Cambias, Kevin Cleary et al., “Medical Robotics – Regulatory, Ethical and Legal Considerations for Increasing Levels of Autonomy” (2017) 2:4 Science Robotics 2.

20 SCC, note 3 above, Art. 12, para. 3.

21 See, for a detailed analysis, Nathalia Bautista Pizzaro, Das erlaubte Vertrauen im Strafrecht (The Permissible Trust in Criminal Law), Strafrecht Studien vol. 77 (Zurich, Switzerland and Baden-Baden, Germany: Nomos, 2017).

22 SCC, note 3 above, Art. 12, para. 3.

23 Andreas Donatsch, Stefan Heimgartner, Berhard Isenring et al. (eds.), Kommentar zum Schweizerischen Strafgesetzbuch (Commentary on the Swiss Criminal Code), 20th ed. (Zürich: Orell Fussli, 2018) [Schweizerischen Strafgesetzbuch], at Art. 12 Note 15.

24 Andreas Donatsch, Sorgfaltsbemessung und Erfolg beim Fahrlässigkeitsdelikt (Assessment of Due Diligence and the Result in Negligence Offenses) (Zürich, Switzerland: Schulthess Verlag, 1987) [Sorgfaltsbemessung] at 117.

25 See Günther Stratenwerth, Schweizerisches Strafrecht (Swiss Criminal Law), Allgemeiner Teil I: Die Straftat, 4th ed. (Bern, Switzerland: Stämpfli, 2011) [Schweizerisches Strafrecht] at s. 16 N 9.

26 Sorgfaltsbemessung, note 24 above, at 128; Andreas Donatsch & Brigitte Tag, Strafrecht I (Criminal Law I), 9th ed. (Zürich, Switzerland: Schulthess Verlag, 2013) [Strafrecht I] at 343; BGE 90 IV 11, BGE 116 IV 308, BGE 117 IV 61, BGE 118 IV 133, BGE 121 IV 14, BGE 129 IV 121; for the permitted risk in the context of autonomous vehicles, see also Nadine Zurkinden, “Strafrecht und selbstfahrende Autos – ein Beitrag zum erlaubten Risiko” (Criminal Law and Self-driving Cars – A Contribution to the Permitted Risk) (2016) 3 Recht 144 [“Selbstfahrende Autos”].

27 Sorgfaltsbemessung, note 24 above, at 156.

28 Ibid. at 144; Schweizerisches Strafrecht, note 25 above, at s. 16 N 10; BGE 127 IV 44, BGE 130 IV 14.

29 Sorgfaltsbemessung, note 24 above, at 130, 146, and 154; Strafrecht I, note 26 above, at 345.

30 Sorgfaltsbemessung, note 24 above, at 154; Marcel Alexander Niggli & St. Maeder, “Article 12” in Marcel Alexander Niggli & Hans Wiprächtiger (eds.), Basler Kommentar, Strafrecht I (Basel Commentary Criminal Law), 3rd ed. (Basel, Switzerland: Helbing Lichtenhahn Verlag, 2013) at N 102; BGE 73 IV 180, BGE 80 IV 49, BGE 106 IV 264, BGE 106 IV 312, BGE 135 IV 70 et seq.

31 BGE 139 III 252 E. 1.5; BGE 133 III 121 E. 3.1; BGE 115 Ib 175 E. 2b. The general duties of physicians and hospitals are not considered here; for details of the contractual relationships between patient and physician or patient and hospital, see Walter Fellmann, “Arzt und das Rechtsverhältnis zum Patienten” (Doctor and the Legal Relationship with the Patient) in Moritz Kuhn & Thomas Poledna (eds.), Arztrecht in der Praxis, 2nd ed. (Zürich, Switzerland: Schulthess Verlag, 2007) 103 [“Rechtsverhältnis zum Patienten”] at 106.

32 Anna Petrig & Nadine Zurkinden, Swiss Criminal Law (Zürich, Switzerland: Dike Verlag, 2015) [Swiss Criminal Law] at 108.

33 Ibid. “Consciously” means that the person disregards the consequences of his or her behavior through a violation of the duty of care: the person has considered it possible that the result might occur, but trusts that it will not. A person acts unconsciously if he or she has not considered the possibility of the result occurring at all, although he or she should have recognized it. Both forms are treated equally under Swiss law.

34 Swiss Criminal Law, note 32 above, at 108.

35 BGE 133 III 121 E. 3.1; BGE 120 II 248 E.2c.

36 However, successful treatment is not owed (BGE 133 III 121 E.3.1). The generally accepted and valid principles of medical science require professional treatment and reasonable care. Thomas Gächter & Dania Tremp, “Arzt und seine Grundrecht” (Doctor and His Fundamental Right) in Moritz Kuhn & Thomas Poledna (eds.), Arztrecht in der Praxis, 2nd ed. (Zürich, Switzerland: Schulthess Verlag, 2007) 7; “Rechtsverhältnis zum Patienten”, note 31 above, at 120.

37 Gunther Arzt, “Die Aufklärungspflicht des Arztes aus strafrechtlicher Sicht” (The Physician’s Duty to Inform from a Criminal Law Perspective) in Wolfgang Wiegand (ed.), Arzt und Recht, Berner Tage für die juristische Praxis (Bern, Switzerland: Stampli, 1985) 52 at Diskussion 73. Wiegand stated as late as 1985 that, according to the Swiss Federal Supreme Court, the exercise of the medical profession requires a certain boldness, which lawyers must never restrict. In 1987, however, the Swiss Federal Supreme Court corrected these earlier cited decisions and stated in BGE 113 II 429, 432 E.3a that limiting “… the liability of doctors to severe violations of the duty of care … is not supported by the law.” See also BGE 116 II 519, 521 E. 3: “According to the most recent case law of the Swiss Federal Supreme Court, the liability of physicians is not limited to severe violations of the medical art.”

38 See BGE 134 IV 175, E. 3.2, 177 et seq.; 130 IV 7, E. 3.3, 11 et seq.; 120 Ib 411, E. 4a, 412 et seq.; 113 II 429, E. 3a, 431 et seq.; 66 II 34, 35 et seq.; 64 II 200, E. 4a, 205 f; Antoine Roggo & Daniel Staffelbach, “Offenbarung von Behandlungsfehlern/Verletzung der ärztlichen Sorgfaltspflicht, Plädoyer für konstruktive Kommunikation” (Disclosure of Treatment Errors/Violation of the Medical Duty of Care, Plea for Constructive Communication) (2006) 4 Aktuelle Juristische Praxis/PJA 407; Moritz Kuhn, “Arzt und Haftung aus Kunst- bzw. Behandlungsfehlern” (Physician and Liability Arising from Malpractice or Treatment Errors) in Moritz Kuhn & Thomas Poledna (eds.), Arztrecht in der Praxis, 2nd ed. (Zürich, Switzerland: Schulthess Verlag, 2007) 601 [“Arzt und Haftung”] at 601 and 669. Depending on the result of the offense, (negligent) bodily injury offenses are mainly considered under SCC, note 3 above, Arts. 122, 123, 125, or 126; BGE 134 IV 175 et seq.; BGE 130 IV 7 et seq.

39 Ulrich Schroth, “Die strafrechtliche Verantwortlichkeit des Arztes bei Behandlungsfehlern” (The Criminal Liability of the Physician in Cases of Medical Malpractice) in Claus Roxin & Ulrich Schroth (eds.), Handbuch des Medizinstrafrechts, 4th ed. (Stuttgart, Germany: Richard Boorberg Verlag, 2010) 125 [“Strafrechtliche Verantwortlichkeit”]; Brigitte Tag, “Strafrecht im Arztalltag” (Criminal Law in the Everyday Life of a Doctor) in Moritz Kuhn & Thomas Poledna (eds.), Arztrecht in der Praxis, 2nd ed. (Zürich, Switzerland: Schulthess Verlag, 2007) 669 [“Strafrecht im Arztalltag”] at 685.

40 “Rechtsverhältnis zum Patienten”, note 31 above, at 121.

41 Bundesgesetz über die universitären Medizinalberufe (Medical Professions Act), Switzerland, SR 811.11 (with effect from June 23, 2006), www.fedlex.admin.ch/eli/cc/2007/537/de.

42 “Rechtsverhältnis zum Patienten”, note 31 above, at 124.

43 “Strafrecht im Arztalltag”, note 39 above, at 669.

44 Schweizerischen Strafgesetzbuch, note 23 above, at s. 12 N 20.

45 BGE 130 IV 7, E. 3.3, 11 et seq. It is stated in the “Botschaft zum MedBG (Medizinalberufegesetz)” that the code of conduct of the FMH can be used to interpret open-textured legal provisions.

46 Swiss Academy of Medical Sciences (SAMW/ASSM), www.samw.ch/en.html; for the Project on Artificial Intelligence, see www.samw.ch/de/Projekte/Uebersicht-der-Projekte/Kuenstliche-Intelligenz.html.

47 FMH Homepage, https://fmh.ch/.

48 BGE 130 IV 7, E. 3.3, 11 et seq.; Strafrecht Allgemeiner Teil, note 2 above, at 160.

49 Olga Lechky, “World’s First Surgical Robot in B.C.,” The Medical Post (November 12, 1985), www.brianday.ca/imagez/1051_28738.pdf.

50 See e.g., Alex Nemiroski, Yanina Y. Shevchenko, Adam A. Stokes et al., “Arthrobots” (2017) 4:3 Soft Robotics 183.

51 “Arzt und Haftung”, note 38 above, at 601.

52 See also Jan-Philipp Günther, Roboter und rechtliche Verantwortung (Robots and Legal Responsibility) (Munich, Germany: Herbert Utz Verlag, 2016) [Rechtliche Verantwortung].

53 Federal Act on Medicinal Products and Medical Devices, Therapeutic Products Act, TPA, Switzerland, SR 812.21 (as amended January 1, 2022), www.fedlex.admin.ch/eli/cc/2001/422/en [TPA]; and the Medical Devices Ordinance, Switzerland, SR 812.213 (as amended August 1, 2020), www.fedlex.admin.ch/eli/cc/2001/520/en [MedDO] specify the classification as a medical device. According to Swiss law, the classification as a medical device does not depend on whether or not it acts directly on the human body: only the purpose is relevant (judgment of the Swiss Federal Administrative Court C-669/2016 of September 17, 2018, E.5.1.2; judgment of the Swiss Federal Court 2A.504/2000 of February 28, 2001, E.3).

54 MedDO, note 53 above, Art. 1.

55 TPA, note 53 above, Art. 49; MedDO, note 53 above, Art. 19, para. 1 and Art. 20, para. 1.

56 Monika Gattiker, “Arzt und Medizinprodukte” (Physician and Medical Devices) in Moritz Kuhn & Thomas Poledna (eds.), Arztrecht in der Praxis, 2nd ed. (Zürich, Switzerland: Schulthess Verlag, 2007) 495.

58 Iris Herzog-Zwitter, “Die Aufklärungspflichtverletzung und ihre Folgen” (The Breach of the Duty of Disclosure and its Consequences) (2010) HAVE 316 at 318. On the duty of information, see in general, Walter Fellmann, “Aufklärung von Patienten und Haftung des Arztes” (Information of Patients and Liability of the Physician) in Bernhard Rütsche (ed.), Medizinprodukte: Regulierung und Haftung (Bern, Switzerland: Stampfli, 2013) 171; BGE 119 II 456 = Pra 1995 Nr. 72 E.2c.

59 BGE 141 III 363 E.5.1.

60 Azad Shademan, Ryan S. Decker, Justin D. Opfermann et al., “Supervised Autonomous Robotic Soft Tissue Surgery” (2016) 8:337 Science Translational Medicine 1 [“Soft Tissue Surgery”].

62 Rechtliche Verantwortung, note 52 above, at 255f.

63 See Jonela Hoxhaj, Quo vadis Medizintechnikhaftung?: Arzt-, Krankenhaus- und Herstellerhaftung für den Einsatz von Medizinprodukten (Quo vadis Medical Technology Liability?) (Frankfurt, Germany: Peter Lang Verlag, 2000) at 85.

64 Hirslanden, Profile of Dr. med. Stephan Bauer, www.hirslanden.ch/de/corporate/aerzte/1/dr-med-stephan-bauer.html; Martina Bortolani, “Dr. Robotnik, übernehmen Sie!” (Dr. Robotnik, Take Over!) Blick (July 3, 2016), www.blick.ch/life/gesundheit/medizin/wenn-die-maschine-operiert-dr-robotnik-uebernehmen-sie-id5213024.html.

65 Reasoning of the Swiss Federal Court on telemedicine: BGE 116 II 519, E.3. This is a civil law decision, but there is no apparent reason why these principles should not also apply to the assessment under criminal law.

66 Sabine Gless, “Strafrechtliche Produkthaftung” (Criminal Product Liability) (2013) 2 Recht 54 [“Strafrechtliche Produkthaftung”] at 56: A manufacturer may only place on the market a product that is free from defects according to the state of the art in science and technology. See also Chapter 2 in this volume.

67 “Strafrechtliche Produkthaftung”, note 66 above, at 54: Infringement of the duty to inspect and monitor.

68 Star Automation, “Cartesian Robots – Es-II Series” (Smart Tissue Autonomous Robot), www.star-europe.com/en/prodotti/robot-cartesiani-serie-es-ii-4.

69 “Soft Tissue Surgery”, note 60 above.

70 Star Automation, “Robot cartesiani serie Es-II,” www.star-europe.com/es-ii/; Nicola von Lutterotti, “Der Roboter übernimmt” (The Robot Takes Over), Neue Zürcher Zeitung (May 16, 2016), www.nzz.ch/wissenschaft/medizin/intelligente-medizinaltechnik-der-roboter-uebernimmt-ld.82237?reduced=true.

71 Werner Pluta, “Operationsroboter übertrifft menschliche Kollegen” (Surgical Robot Outperforms Human Colleagues), Golem.de (May 9, 2016), www.golem.de/news/robotik-operationsroboter-uebertrifft-menschliche-kollegen-1605-120779.html.

72 See AOT, “CARLO,” https://aot.swiss/carlo/ [“CARLO”].

73 Santina Russo & Noemi Lea Landolt, “Der überflüssige Chirurg: Schon bald sägen Roboter unsere Schädel auf” (The Superfluous Surgeon: Robots Will Soon Be Sawing Open Our Skulls), Aargauer Zeitung (April 23, 2016), www.aargauerzeitung.ch/leben/der-ueberfluessige-chirurg-schon-bald-saegen-roboter-unsere-schaedel-auf-ld.1550792.

74 “CARLO”, note 72 above.

76 “Rechtsverhältnis zum Patienten”, note 31 above, at 103.

77 See also Rechtliche Verantwortung, note 52 above, at 255f.

78 “CARLO”, note 72 above.

79 Sabine Gless & Thomas Weigend, “Intelligente Agenten und das Strafrecht” (Intelligent Agents and Criminal Law) (2014) 126:3 ZStW 561; Nora Markwalder & Monika Simmler, “Roboterstrafrecht, zur strafrechtlichen Verantwortlichkeit von Robotern und künstlicher Intelligenz” (Robot Criminal Law) (2017) 2 Aktuelle Juristische Praxis 177. In the context of autonomous cars, see “Selbstfahrende Autos”, note 26 above; Alexander Schorro, “Autonomes Fahren – erweiterte strafrechtliche Verantwortlichkeit des Fahrzeughalters?” (Autonomous Driving – Extended Criminal Liability of the Vehicle Owner?) (2017) 1 ZStrR 81, and regarding self-driving cars, see Chapters 2 and 4 in this volume.

80 See also Rechtliche Verantwortung, note 52 above, at 255f.

81 Regarding robot testimony, see Chapters 6 and 8 in this volume.

82 A more detailed description can be found under Section III.A.

83 “Strafrecht im Arztalltag”, note 39 above, at 692.

84 For an overview, see Matthias Richard Heierli & Jörg Rehberg, Die Bedeutung des Vertrauensprinzips im Strassenverkehr und für das Fahrlässigkeitsdelikt (The Significance of the Principle of Trust in Road Traffic and for the Crime of Negligence) (Zürich, Switzerland: Schulthess Juristische Medien, 1996); from road traffic law: BGE 129 IV 282, 286; BGE 115 IV 239, 240; René Schaffhauser, Grundriss des schweizerischen Strassenverkehrsrechts (Outline of the Swiss Road Traffic Law), Band I: Grundlagen, Verkehrszulassung und Verkehrsregeln, 2nd ed. (Bern, Switzerland: Stampfli, 2002) at N 441.

85 See “Strafrechtliche Verantwortlichkeit”, note 39 above, at 135; “Strafrecht im Arztalltag”, note 39 above, at 692; on the principle of trust in general, BGE 125 IV 83, E. 2, 87 et seq.; BGE 120 IV 300, E.3; BGE 118 IV 277, E.4.

86 A more detailed description can be found under Section III.C.1.

87 See “Strafrechtliche Verantwortlichkeit”, note 39 above, at 135; “Strafrecht im Arztalltag”, note 39 above, at 692; Hans Wiprächtiger, “‘Kriminalisierung’ der ärztlichen Tätigkeit? Die Strafbarkeit des Arztfehlers in der bundesgerichtlichen Rechtsprechung” (“Criminalization” of Medical Practice? The Criminal Liability of Medical Malpractice in Federal Court Jurisprudence) in Andreas Donatsch, Felix Blocher, & Annemarie Hubschmid Volz (eds.), Strafrecht und Medizin: Tagungsband des Instruktionskurses der Schweizerischen Kriminalistischen Gesellschaft vom 26./27. Oktober 2006 in Flims (Bern, Switzerland: Stampfli, 2007) 61 at 82; on the principle of trust in general, see BGE 125 IV 83, E. 2, 87 et seq.; BGE 120 IV 300, E.3; BGE 118 IV 277, E.4.

88 See Hanspeter Kuhn, Gian Andrea Rusca, & Simon Stettler, “Rechtsfragen der Arztpraxis” (Legal Issues of the Medical Practice) in Moritz Kuhn & Thomas Poledna (eds.), Arztrecht in der Praxis, 2nd ed. (Zürich, Switzerland: Schulthess Verlag, 2007) 265 at 287.

89 See “Strafrecht im Arztalltag”, note 39 above, at 693.

90 See also “Strafrechtliche Verantwortlichkeit”, note 39 above, at 139; “Strafrecht im Arztalltag”, note 39 above, at 694.

91 “Strafrecht im Arztalltag”, note 39 above, at 669.

92 For more on the topic, see e.g., Michael Isler, “Off Label Use von Medizinprodukten” (Off Label Use of Medical Devices) (2018) 2 LSR 79.

93 The theory of “de facto control” is used primarily to identify indirect perpetrators and accomplices; see e.g., Schweizerisches Strafrecht, note 25 above, at s. 13 N 11.

94 Olaf Dössel, “Vertrauen in die Technikwissenschaften, Vertrauen in die Medizintechnik?!” (Trust in Engineering Sciences, Trust in Medical Technology?!) (2013) Berlin-Brandenburgische Akademie der Wissenschaften 75, https://edoc.bbaw.de/files/2207/13_Debatte13_Doessel.pdf [“Vertrauen in die Technikwissenschaften”].

95 TPA, note 53 above.

96 MedDO, note 53 above.

97 See European Union, The European Parliament, & The Council of the European Union Regulation, Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on Medical Devices, Amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and Repealing Council Directives 90/385/EEC and 93/42/EEC, OJ 2017 L 117 (EU: Official Journal of the European Union, 2017).

98 The relevant standards are ISO 13485:2016 and ISO IEC 80601-2-78:2019-07.

99 “Strafrechtliche Produkthaftung”, note 66 above, at 56.

100 See Abkommen zwischen der Schweizerischen Eidgenossenschaft und der Europäischen Gemeinschaft über gegenseitige Anerkennung von Konformitätsbewertungen (Agreement between Switzerland and the European Union on mutual recognition in relation to conformity assessment, June 21, 1999), SR 0.946.526.81, www.fedlex.admin.ch/eli/cc/2002/276/de.

102 See MedDO, note 53 above, Arts. 8, 9, and 10; SwissMedic, “Aktuell,” www.swissmedic.ch/md.

103 Unlike medicinal products, medical devices do not need to be subject to official approval. Swissmedic’s focus in the area of medical devices is, therefore, on efficient market surveillance: Swissmedic, “Medizinprodukte,” www.swissmedic.ch/swissmedic/de/home/medizinprodukte.html. For the CE-certification in Switzerland, the various conformity assessment bodies are monitored by Swissmedic.

104 “Strafrechtliche Produkthaftung”, note 66 above, at 59; see Chapter 4 in this volume.

105 “Vertrauen in die Technikwissenschaften”, note 94 above.

106 On consent to the procedure, see Philippe Weissenberger, Die Einwilligung des Verletzten bei den Delikten gegen Leib und Leben (The Consent of the Injured Person in the Case of Offenses against Life and Limb) (Bern, Switzerland: Stampfli, 1996) at 145. Concerning the obligation to monitor the product after market entry, see “Strafrechtliche Produkthaftung”, note 66 above, at 60. Concerning the responsibility of the manufacturer and the operator in the field of autonomous cars, see Sabine Gless & Ruth Janal, “Hochautomatisiertes und autonomes Autofahren – Risiko und rechtliche Verantwortung” (Highly Automated and Autonomous Driving – Risk and Legal Responsibility) (2016) 10 Juristische Rundschau 561.

107 See e.g., Cade Metz, “The Robot Surgeon Will See You Now,” The New York Times (April 30, 2021), www.nytimes.com/2021/04/30/technology/robot-surgery-surgeon.html; James Martin, Bruno Scaglioni, Joseph C. Norton et al., “Enabling the Future of Colonoscopy with Intelligent and Autonomous Magnetic Manipulation” (2020) 2:10 Nature Machine Intelligence 595.

108 See Andreas Matthias, Automaten als Träger von Rechten (Automatic Machines as Bearers of Rights), Dissertation, 2nd ed. (Berlin, Germany: Logos Verlag Berlin, 2010) at 25.

109 Susanne Beck, “Roboter und Cyborgs” (Robots and Cyborgs) in Susanne Beck (ed.), Jenseits von Mensch und Maschine (Baden-Baden, Germany: Nomos, 2012) 9.

4 Forms of Robot Liability Criminal Robots and Corporate Criminal Responsibility

1 Mark A. Lemley & Bryan Casey, “Remedies for Robots” (2019) 86:5 University of Chicago Law Review 1311 [“Remedies for Robots”] at 1313. For a brief overview of applications of AI and the legal issues related to them, see Eric Hilgendorf, “Modern Technology and Legal Compliance” in Eric Hilgendorf & Maria Kaiafa-Gbandi (eds.), Compliance Measures and Their Role in Greek and German Law (Athens: Π.Ν. ΣΑΚΚΟΥΛΑΣ, 2017) 21 at 27–33. For problems associated with controlling self-driving cars, see Chapter 15 in this volume.

2 Although I am aware that the terms “AI device” and “robot” have slightly different connotations, I use them interchangeably in this chapter.

3 On the liability of programmers, see Chapter 2 in this volume.

4 For an interesting example of the logical but dysfunctional learning process of a drone, see “Remedies for Robots”, note 1 above, at 1313: A drone was trained to stay within a certain circle and to head toward the center. If the drone left the circle, it was shut off and someone picked it up on the ground and carried it back into the circle. The drone thus “learned” to leave the circle whenever it got close to the margin, because it could then rely on being carried back into the circle.

5 See Mihailis E. Diamantis, “Algorithms Acting Badly: A Solution from Corporate Law” (2021) 89:4 George Washington Law Review 801 [“Algorithms Acting Badly”] at 821–822; Sabine Gless, Emily Silverman, & Thomas Weigend, “If Robots Cause Harm, Who Is to Blame?” (2016) 19:3 New Criminal Law Review 415 [“If Robots Cause Harm”] at 426–428.

6 European Union, European Parliament, Committee on Legal Affairs, Report with Recommendations to the Commission on Civil Law Rules on Robotics, 2015/2103(INL) (Strasbourg, France: European Parliament, January 27, 2017) at 8, www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.pdf. For a brief account of the ensuing discussion, see Anat Lior, “AI Entities as AI Agents: Artificial Intelligence Liability and the AI Respondeat Superior Analogy” (2020) 46:5 Mitchell Hamline Law Review 1043 [“AI Entities”] at 1067–1069. See also Roman I. Dremliuga, Alexey Yu Mamychev, O. A. Dremliuga et al., “Artificial Intelligence as a Subject of Law: Pros and Cons” (2019) VII:1 Revista Dilemas Contemporáneos: Educación, Política y Valores 1 at 9–12.

7 European Union, European Commission, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence and Amending Certain Union Legislative Acts, COM/2021/206 final (Brussels, Belgium: European Commission, April 21, 2021).

8 See e.g., “Algorithms Acting Badly”, note 5 above, at 807; “AI Entities”, note 6 above, at 1070–1071.

9 See Ying Hu, “Robot Criminals” (2019) 52:2 Michigan Journal of Law Reform 487 at 491; Gabriel Hallevy, Liability for Crimes Involving Artificial Intelligence Systems (Cham, Switzerland: Springer, 2015); Gabriel Hallevy, “The Criminal Liability of Artificial Intelligence Entities – from Science Fiction to Legal Social Control” (2010) 4:2 Akron Intellectual Property Journal 171. For a discussion, see “If Robots Cause Harm”, note 5 above, at 415–422.

10 Gabriel Lima, Meeyoung Cha, Chihyung Jeon et al., “The Conflict between People’s Urge to Punish AI and Legal Systems” (2021) 8 Frontiers in Robotics and AI Article 756242.

11 Ryan Abbott & Alex Sarch, “Punishing Artificial Intelligence: Legal Fiction or Science Fiction” (2019) 53:1 UC Davis Law Review 323 [“Punishing Artificial Intelligence”] at 357.

12 Monika Simmler & Nora Markwalder, “Guilty Robots? – Rethinking the Nature of Culpability and Legal Personhood in an Age of Artificial Intelligence” (2019) 30:1 Criminal Law Forum 1 [“Guilty Robots”].

13 Ibid. at 16: “Idealistic philosophy cannot obscure the fact that the attribution of capacity to reflect, of consciousness, and of other capacities is just that – an attribution – and not cognizable and legally meaningful due to ontological circumstances.”

14 Ibid. at 15.

15 Ibid. at 17.

16 Ibid. at 25.

17 Ibid. at 30.

18 Cf. “Punishing Artificial Intelligence”, note 11 above, at 365–367.

19 See e.g., Federico Mazzacuva, “The Impact of AI on Corporate Criminal Liability: Algorithmic Misconduct in the Prism of Derivative and Holistic Theories” (2021) 92:1 Revue Internationale de Droit Pénal 143 [“Impact of AI”] at 146–147; “Punishing Artificial Intelligence”, note 11 above, at 357; “Guilty Robots”, note 12 above, at 18–19 and 27–28.

20 For a comparative overview, see Francisco Javier Bedecarratz Scholz, Rechtsvergleichende Studien zur Strafbarkeit juristischer Personen (Comparative Studies on the Punishability of Legal Persons) (Zurich, Switzerland: Dike Verlag (in cooperation with Nomos), 2016).

21 For counterarguments, see text at notes 28–32 below.

22 Nora Osmani, “The Complexity of Criminal Liability of AI Systems” (2020) 14:1 Masaryk University Journal of Law and Technology 53 [“Criminal Liability of AI”] at 61; Dafni Lima, “Could AI Agents Be Held Criminally Liable: Artificial Intelligence and the Challenges for Criminal Law” (2018) 69:3 South Carolina Law Review 677 [“AI Agents”] at 682–683.

23 Vikram R. Bhargava & Manuel Velasquez, “Is Corporate Responsibility Relevant to Artificial Intelligence Responsibility?” (2019) 17:3 Georgetown Journal of Law and Public Policy 829 at 836.

24 For an overview, see Celia Wells, “Corporate Criminal Responsibility” in Stephen Tully (ed.), Research Handbook on Corporate Legal Responsibility (Cheltenham, UK: Edward Elgar, 2005) 147.

25 Verbandsverantwortlichkeitsgesetz (Corporate Responsibility Act), Austria (as amended on May 20, 2016), § 3.

26 The seminal Supreme Court decision in favor of CCR was New York Central & Hudson River Railroad Co. v. United States, 212 U.S. 481 (1909). “Algorithms Acting Badly”, note 5 above, at 817, correctly observes that today there is great public support in the United States for a broad version of CCR, so that an effort at legislative reform would be a “non-starter.” For a report on the present practice of CCR in the United States, see Elisa Hoven & Thomas Weigend, “Praxis und Probleme des Verbandsstrafrechts in den USA” (Practice and Problems of Corporate Criminal Liability in the US) (2018) 130:1 Zeitschrift für die gesamte Strafrechtswissenschaft 213.

27 For a brief overview, see Bernd Schünemann & Luis Greco, “Vorbemerkungen zu §§ 25 para 21” in Gabriele Cirener, Henning Radtke, Ruth Rissing-van Saan et al. (eds.), Strafgesetzbuch. Leipziger Kommentar (Penal Code, Leipzig Commentary), vol. 2, 13th ed. (Berlin, Germany: De Gruyter, 2021).

28 See Germany, Bundesrat, Entwurf eines Gesetzes zur Stärkung der Integrität in der Wirtschaft (Draft Law on the Strengthening of Integrity in the Economy), Bundesratsdrucksache 440/20 (Germany: Bundesrat, August 7, 2020). The draft was not voted on before the parliamentary period ended in the fall of 2021.

29 For critical assessments, see Ulfrid Neumann, “Zur (Un)Vereinbarkeit des Verbandsstrafrechts mit Grundprinzipien des tradierten Individualstrafrechts” (On the (In-)Compatibility of Corporate Criminal Law with Basic Principles of Traditional Criminal Law for Individuals) in Marianne Johanna Lehmkuhl & Wolfgang Wohlers (eds.), Unternehmensstrafrecht (Basel, Switzerland: Helbing Lichtenhahn Verlag, 2020) 49; Frauke Rostalski, “Neben der Spur: Verbandssanktionengesetzgebung auf Abwegen” (Off the Track: Legislation on Corporate Criminal Liability Going Off the Road) (2020) 73:29 Neue Juristische Wochenschrift 2087; Uwe Murmann, “Unternehmensstrafrecht” (Corporate Criminal Law) in Kai Ambos & Stefanie Bock (eds.), Aktuelle und grundsätzliche Fragen des Wirtschaftsstrafrechts (Berlin, Germany: Duncker & Humblot, 2019) 57; Franziska Mulch, Strafe und andere staatliche Maßnahmen gegenüber juristischen Personen (Punishment and Other State Measures against Legal Persons) (Berlin, Germany: Duncker & Humblot, 2017); Friedrich von Freier, “Zurück hinter die Aufklärung: Zur Wiedereinführung von Verbandsstrafen” (Back Behind Enlightenment: On the Re-Introduction of Criminal Punishment for Corporations) (2009) 156 Goltdammer’s Archiv für Strafrecht 98; Arbeitsgruppe Strafbarkeit juristischer Personen, “Bericht” (Working Group Punishability of Legal Persons, “Report“) in Michael Hettinger (ed.), Reform des Sanktionenrechts, vol. 3 (Baden-Baden, Germany: Nomos, 2002) 7. For an overview of the recent German discussion, see Thomas Weigend, “Corporate Responsibility in Germany” in Khalid Ghanayem & Yuval Shany (eds.), The Quest for Core Values in the Application of Legal Norms: Essays in Honor of Mordechai Kremnitzer (Cham, Switzerland: Springer, 2021) 103.

30 “AI Agents”, note 22 above, at 688.

31 Mihailis E. Diamantis, “The Law’s Missing Account of Corporate Character” (2019) 17:3 Georgetown Journal of Law and Public Policy 865 at 880.

32 See Charlotte Schmitt-Leonardy, “Originäre Verbandsschuld oder Zurechnungsmodell?” (Culpability of the Corporation or Imputation Model?) in Martin Henssler, Elisa Hoven, Michael Kubiciel et al. (eds.), Grundfragen eines modernen Verbandsstrafrechts (Baden-Baden, Germany: Nomos, 2017) 71.

33 On these and other problematic aspects of CCR, see Thomas Weigend, “Societas delinquere non potest? A German Perspective” (2008) 6:5 Journal of International Criminal Justice 927. For ways of dealing with corporate misconduct outside the criminal law, see Charlotte Schmitt-Leonardy, Unternehmenskriminalität ohne Strafrecht? (Corporate Crime without Criminal Law?) (Heidelberg, Germany: C. F. Müller Verlag, 2013).

34 As to that approach, see notes 12–18 above.

35 See the strong argument in favor of “a softer version of the State’s powers to prohibit and punish” in “AI Agents”, note 22 above, at 696. The author plausibly warns that an over-extension of criminal sanctions might “weaken our perception of what criminal law is and what it has the power to do.”

36 German law presently permits the imposition of administrative fines on corporations if their leading managers committed criminal offenses or culpably failed to prevent such offenses committed by employees; see Gesetz über Ordnungswidrigkeiten (Law on Administrative Infractions), of February 19, 1987, Germany, Bundesgesetzblatt 1987 I, 602, §§ 30, 130.

37 See text at note 19 above.

38 If the law treats robots like humans, CCR could be applied directly to robots’ malfeasance. See e.g., the Michigan statute discussed by Clint W. Westbrook, “The Google Made Me Do It. The Complexity of Criminal Liability in the Age of Autonomous Vehicles” (2017) 2017:1 Michigan State Law Review 97 [“Google Made Me Do It”]. Michigan Compiled Laws s. 257.665(5), introduced in 2016, declares that an automated driving system is the driver or operator of a vehicle “for purposes of determining conformance to any applicable traffic or motor vehicle laws.” From that legal provision, the author concludes that “manufacturers should be held liable for AV-caused crimes where their products are shown to be culpable for certain criminal acts and harm caused thereby” (“Google Made Me Do It,” at 126), i.e., if a failure in hardware or software caused the infraction (ibid. at 133).

39 “Criminal Liability of AI”, note 22 above, at 62–63 correctly notes that strict liability for any malfeasance of a robot would place too heavy a burden on its individual programmers, designers, and distributors, eventually hampering the development of new technology.

40 The cause of the harm could also lie in the robot’s self-programming. As pointed out in “Algorithms Acting Badly”, note 5 above, at 819–820, humans are increasingly absent from the process of writing code, with algorithms themselves writing most of the code for sophisticated programs.

41 See text at notes 24–25 above.

42 See “Impact of AI”, note 19 above, at 148–149 and 153.

43 See Kurt Schmoller, “‘Verbandsschuld’ als funktionsanaloges Gegenstück zur Schuld des Individualstrafrechts” (‘Corporate Culpability’ as a Functional Analogue to Culpability in Criminal Law for Individual Persons) in Marianne Johanna Lehmkuhl & Wolfgang Wohlers (eds.), Unternehmensstrafrecht (Basel, Switzerland: Helbing Lichtenhahn Verlag, 2020) 67.

44 “AI Entities”, note 6 above, at 1064–1066. Liability would normally be in tort law, but could also extend to criminal law, e.g., where an unsupervised dog bites a person.

45 Accord, “Algorithms Acting Badly”, note 5 above, at 809, 816, and 829 (claiming that “algorithmic action is corporate action”); “Criminal Liability of AI”, note 22 above, at 71–72; “AI Entities”, note 6 above, at 1067 and 1071 (arguing for treating robots as “agents”).

46 “Algorithms Acting Badly”, note 5 above, at 831.

47 Ibid. at 811.

48 Cf. “AI Agents”, note 22 above, at 694: “Not everything can be foreseen, prevented, or contained, and in everyday life there are several instances where no one is to blame – much more be held criminally liable – for an undesirable outcome … Not everything can or should be regulated under criminal law.”

49 Cf. “Algorithms Acting Badly”, note 5 above, at 836; Dominik Schmidt & Christian Schäfer, “Es ist schuld?! – Strafrechtliche Verantwortlichkeit beim Einsatz autonomer Systeme im Rahmen unternehmerischer Tätigkeiten” (It’s Its Fault?! – Criminal Responsibility in Connection with Employing Autonomous Systems in the Context of Entrepreneurial Activities) (2021) 10:11 Neue Zeitschrift für Wirtschaftsstrafrecht 413 at 420; “AI Agents”, note 22 above, at 693.

50 “Algorithms Acting Badly”, note 5 above, at 835.

51 Ibid. at 836.

52 Ibid. at 844; “Criminal Liability of AI”, note 22 above, at 69 also emphasizes the importance of the “benefit” element.

53 Accord, “Criminal Liability of AI”, note 22 above, at 693.

54 For a similar concept in CCR, see Strafgesetzbuch (Swiss Criminal Code), SR 311.0 (as amended January 23, 2023), Art. 102, para. 2.

55 For an overview of potential fault of human beings in connection with robots, see Chapter 1 in this volume.
