
Tort Law, Corrective Justice and the Problem of Autonomous-Machine-Caused Harm

Published online by Cambridge University Press: 19 June 2020

Pinchas Huberman*
Affiliation:
Pinchas Huberman, incoming LLM candidate fall 2020, Yale Law School. [email protected]

Extract

Developments in artificial intelligence and robotics promise increased interaction between humans and autonomous machines, presenting novel risks of accidental harm to individuals and property.1 This essay situates the problem of autonomous-machine-caused harm within the doctrinal and theoretical framework of tort law, conceived of as a practice of corrective justice. The possibility of autonomous-machine-caused harm generates fresh doctrinal and theoretical issues for assigning tort liability. Due to machine-learning capabilities, harmful effects of autonomous machines may be untraceable to tortious actions of designers, manufacturers or users.2 As a result, traditional tort doctrine—framed by conditions of foreseeability and proximate causation—would not ground liability.3 Without recourse to compensation, faultless victims bear the accident costs of autonomous machines. This doctrinal outcome reflects possible incompatibility between tort’s theoretical structure of corrective justice and accidents involving autonomous machines. As a practice of corrective justice, tort liability draws a normative link between particular defendants and plaintiffs, as doers and sufferers of the same tortious harm, grounding defendants’ agent-specific obligations to repair the harm. Where accidents are caused by autonomous machines, the argument goes, the essential link between defendants and plaintiffs is severed; since resulting harm is not legally attributable to the human agency of designers, manufacturers or users, victims have no remedy in tort.

Type
Research Article
Copyright
© The Author(s) 2020

Footnotes

I am grateful to Professors Peter Benson and Bruce Chapman for valuable feedback and comments on earlier versions of this essay, which constituted part of my LLM thesis at UofT. I am also grateful to Ben Ohavi and Yona Gal for helpful and lively discussions about ideas in this essay.

References

1. Ryan Calo, “Robotics and the Lessons of Cyberlaw” (2015) 103 Cal L Rev 513 at 532-34.

2. David C Vladeck, “Machines Without Principals: Liability Rules and Artificial Intelligence” (2014) 89 Wash L Rev 117 at 121-23.

3. Calo, supra note 1 at 542; Peter M Asaro, “The Liability Problem for Autonomous Artificial Agents” in Ethical and Moral Considerations in Non-Human Agents, 2016 AAAI Spring Symposium Series (AAAI Press, 2016) 190 at 191 [Asaro, “Liability Problem”]; Curtis EA Karnow, “The Application of Traditional Tort Theory to Embodied Machine Intelligence” in Ryan Calo, A Michael Froomkin & Ian Kerr, eds, Robot Law (Edward Elgar, 2016) 51 at 63-74.

4. Ryan Calo, “Robots as Legal Metaphors” (2016) 30:1 Harv JL & Tech 209 at 221-25. Calo notes several examples. In one illuminating instance, a Maryland court considered whether an animatronic robot’s dancing and singing constituted a “performance” within the meaning of a statute imposing tax on food served in restaurants “where there is furnished a performance.” It was held that robots do not “perform” (within the meaning of the statute) since they lack skill and potential for spontaneous imperfection. See Comptroller of the Treasury v Family Entertainment Centers, 519 A (2d) 1337 (Md Ct Spec App 1987) at 1338. Moreover, some courts have used the term ‘robot’ metaphorically to justify decisions relieving defendants of responsibility for tortious harm. For instance, individuals are compared to robots where they are seen to act without agency—as mere instruments of others—and cannot be held legally responsible in tort. See Frye v Baskin, 231 SW (2d) 630 (Mo Ct App 1950) at 635. Likewise, pursuant to the “alter ego” theory, a corporate veil can be pierced by showing that controlled corporations acted “robot-like” in “mechanical response” to a controller’s “pressure on its buttons”. See Culbreth v Amosa (Pty) Ltd, 898 F (2d) 13 (3d Cir 1990) at 15.

5. Calo, “Robots as Legal Metaphors”, supra note 4 at 226.

6. Jerry Kaplan, Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence (Yale University Press, 2015) at 4.

7. Ibid.

8. Ibid at 5.

9. Meera Senthilingam, “Would You Let a Robot Perform Your Surgery By Itself?”, CNN (12 May 2016), online: https://www.cnn.com/2016/05/12/health/robot-surgeon-bowel-operation/index.html.

10. Laura Stevens & Georgia Wells, “UPS Uses Drone to Deliver Packages to Boston-Area Island”, Wall Street Journal (23 September 2016), online: https://www.wsj.com/articles/ups-uses-drone-to-deliver-package-to-boston-area-island-1474662123.

11. Heather Somerville, “Uber, Transitioning to Fleet Operator, Orders 24,000 Driverless Cars from Volvo”, Insurance Journal (21 November 2017), online: https://www.insurancejournal.com/news/national/2017/11/21/471938.htm.

12. Jana Kasperkevic, “Swiss Police Release Robot that Bought Ecstasy Online”, The Guardian (22 April 2015), online: https://www.theguardian.com/world/2015/apr/22/swiss-police-release-robot-random-darknet-shopper-ecstasy-deep-web.

13. Jacqueline Howard, “Robot Pets Offer Real Comfort”, CNN (1 November 2017), online: https://www.cnn.com/2016/10/03/health/robot-pets-loneliness/index.html.

14. Adriana Barton, “Cylons, They Are Not. These Intelligent and Friendly Robots Are Designed To Help the Elderly Live A Better Life”, The Globe and Mail (26 August 2018), online: https://www.theglobeandmail.com/life/health-and-fitness/article-cylons-they-are-not-these-intelligent-and-friendly-robots-are/.

15. James Vincent, “Mall Security Bot Knocks Down Toddler, Breaks Asimov’s First Law of Robotics”, The Verge (13 July 2016), online: https://www.theverge.com/2016/7/13/12170640/mall-security-robot-k5-knocks-down-toddler.

16. Alistair Barr, “Amazon.com to Buy Kiva Systems for $775 Million”, Reuters (19 March 2012), online: https://www.reuters.com/article/us-amazoncom/amazon-com-to-buy-kiva-systems-for-775-million-idUSBRE82I11720120319.

18. Ryan Abbott, “The Reasonable Computer: Disrupting the Paradigm of Tort Liability” (2018) 86:1 Geo Wash L Rev 1 at 42.

19. Daisuke Wakabayashi, “Self-Driving Uber Car Kills Pedestrian in Arizona Where Robots Roam”, The New York Times (19 March 2018), online: https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html; Peter Holley, “After Crash Injured Motorcyclist Accuses Robot-Driven Vehicle of ‘Negligent Driving’”, Washington Post (25 January 2018), online: https://www.washingtonpost.com/news/innovations/wp/2018/01/25/after-crash-injured-motorcyclist-accuses-robot-driven-vehicle-of-negligent-driving/?utm_term=.ca85154c515e; The Fernandez Firm, “What Happens When a Robot Causes Wrongful Death”, Medium (30 March 2017), online: https://medium.com/@thefernandezfirm/what-happens-when-a-robot-causes-wrongful-death-3d1f4f7e9711; Seth Baum & Trevor White, “When Robots Kill”, The Guardian (23 July 2015), online: https://www.theguardian.com/science/political-science/2015/jul/23/when-robots-kill.

20. See Jack B Balkin, “The Path of Robotics Law” (2015) 6 Cal L Rev Circuit 45 at 47-48.

21. Shannon Vallor & George Bekey, “Artificial Intelligence and the Ethics of Self-Learning Robots” in Patrick Lin, Keith Abney & Ryan Jenkins, eds, Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence (Oxford University Press, 2017) 338.

22. Calo, “Robotics and the Lessons of Cyberlaw”, supra note 1 at 529. This idea is neatly captured by the sense-think-act paradigm.

23. Ibid at 532.

24. Ibid at 534.

25. Ibid at 538-39.

26. Vallor & Bekey, supra note 21 at 340.

27. Ibid.

28. Ibid.

29. Ibid.

30. Harry Surden, “Machine Learning and the Law” (2014) 89 Wash L Rev 87 at 89.

31. Ibid at 91-95.

32. Ibid at 93.

33. Ibid at 94.

34. See Deven R Desai & Joshua A Kroll, “Trust but Verify: A Guide to Algorithms and the Law” (2017) 31:1 Harv JL & Tech 1 at 26.

35. Ibid at 27.

36. Ibid.

37. Ibid at 28.

38. Andreas Matthias, “The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata” (2004) 6 Ethics and Information Technology 175 at 179.

39. Ibid.

40. See Karnow, supra note 3 at 56-60.

41. Ibid.

42. Ibid.

43. Ibid.

44. Neil M Richards & William D Smart, “How Should the Law Think About Robots” in Ryan Calo, A Michael Froomkin & Ian Kerr, eds, Robot Law (Edward Elgar, 2016) 3 at 18.

45. Matthias, supra note 38 at 178.

46. Calo, “Robotics and the Lessons of Cyberlaw”, supra note 1 at 542.

47. Wendell Wallach & Colin Allen, Moral Machines: Teaching Robots Right from Wrong (Oxford University Press, 2009) at 39.

48. Matthias, supra note 38 at 182.

49. Ibid.

50. Vallor & Bekey, supra note 21 at 343.

51. Ibid.

52. Asaro, “Liability Problem”, supra note 3 at 191: “some artificial agents may be unpredictable in principle, and many will be unpredictable in practice.”

53. Ugo Pagallo, The Laws of Robots: Crimes, Contracts and Torts (Springer Netherlands, 2013) at 38.

54. Ibid.

55. Wolf Loh & Janina Loh, “Autonomy and Responsibility in Hybrid Systems” in Patrick Lin, Keith Abney & Ryan Jenkins, eds, Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence (Oxford University Press, 2017) 36.

56. Ibid at 39.

57. Ibid.

58. Abbott, supra note 18 at 5.

59. Samir Chopra & Laurence F White, A Legal Theory for Autonomous Artificial Agents (The University of Michigan Press, 2011) at 9-11.

60. Ibid at 11-18.

61. Jules Coleman, The Practice of Principle: In Defence of a Pragmatist Approach to Legal Theory (Oxford University Press, 2003) at 10 [Coleman, Practice of Principle].

62. As a general matter, the account of tort adopted in this essay—as a practice of corrective justice—follows work by Ernest J Weinrib, The Idea of Private Law (Oxford University Press, 2012) [Weinrib, Private Law]; Arthur Ripstein, Private Wrongs (Harvard University Press, 2016); Peter Benson, “Misfeasance as an Organizing Normative Idea in Private Law” (2010) 60 UTLJ 731; Coleman, Practice of Principle, supra note 61; and Stephen Perry, “Responsibility for Outcomes, Risk and the Law of Torts” in Gerald J Postema, ed, Philosophy and the Law of Torts (Cambridge University Press, 2001) 72.

63. Weinrib, Private Law, supra note 62 at 169-70.

64. Perry, supra note 62 at 111.

65. Ripstein, supra note 62 at 91.

66. Donoghue v Stevenson, [1932] AC 562 (HL) at 580.

67. Palsgraf v Long Island Railroad Co, 162 NE 99 (NYCA 1928) at 100.

68. Ibid.

69. Ripstein, supra note 62 at 102; Perry, supra note 62 at 111.

70. Overseas Tankship (UK) v Morts Dock & Engineering (The Wagon Mound, No 1), [1961] AC 388 (PC); South Australia Asset Management Corp v York Montague Ltd, [1997] AC 191 (HL).

71. Ripstein, supra note 62 at 91.

72. Robert Keeton, Legal Cause in the Law of Torts (Ohio State University Press, 1963) at 49.

73. Ibid. The difference between negligence and strict liability relates to the breadth of the tortious risk description. Under strict liability, the tortious aspect is described so generally as to render the activity itself tortious, because the activity inherently imposes a degree of risk exceeding that which is typically assumed in ordinary patterns of interaction. Strict liability applies to resulting harm that falls within the scope of the activity’s extraordinary risk, following the same proximate causation structure as negligence: strict liability applies if, and only if, the resulting harm is of the kind that renders the activity exceedingly dangerous (i.e. tortious).

74. Hughes v Lord Advocate, [1963] AC 837 (HL) [Hughes]; School Division of Assiniboine South No. 3 v Hoffer, [1971] 21 DLR (3d) 608 (MB CA) [Assiniboine]; Jolley v Sutton London Borough Council, [2000] 3 All ER 409 (HL).

75. Clarence Morris, “Duty, Negligence and Causation” (1952) 101 U Pa L Rev 189 at 196-98. As an illustration, consider contrasting proximate cause analyses in two influential cases, Hughes, supra note 74 and Doughty v Turner Manufacturing Co, Ltd, [1964] 1 QB 518 (CA) [Doughty]. In Hughes, a defendant worker negligently left an open manhole unattended with warning lamps next to it, attracting children who took and dropped the lamps into the manhole, triggering an explosion that caused the children to fall in and sustain burning injuries. According to the House of Lords, the plaintiff’s burning injury was within the scope of the defendant’s negligent act, as the risk of a burning injury was substantial and foreseeable, i.e., it was the kind of injury that made the defendant’s act negligent. The Court acknowledged that the injury occurred through an unforeseeable sequence of events, an explosion, rather than a foreseeable occurrence, such as the children dropping and breaking the lamp while exploring the manhole, causing burning injuries through direct contact with the flame. Nevertheless, the chosen description of risk was quite general—simply the risk of burning—and not a more specific description, such as risk of burning caused by coming into direct contact with a broken lamp. The unforeseeable means of its occurrence did not undermine the foreseeability of the harm. By contrast, consider Doughty, where an employee suffered a burning injury from an explosion of a cauldron of molten liquid due to a cement cover falling in and disintegrating. The Court found that the defendant employer negligently failed to provide safeguards against the cement cover falling in the cauldron and causing burning injuries from the splashing of molten liquid. However, since the injury in this case was due, not to splashing, but to an explosion below the surface—which was unforeseeable—the injury was found to fall outside the scope of the defendant’s negligent act. In other words, the Court’s description of the tortious risk was quite specific and narrow—risk of burning from the splashing of molten liquid caused by the cement cover falling in—rather than a more general description, such as risk of burning by molten liquid caused by the cement cover falling in. In Doughty, the unforeseeable sequence leading to the burning’s occurrence rendered the harm unforeseeable, falling outside the scope of the tortious risk. This contrasts starkly with Hughes, where the unforeseeable sequence leading to the burning’s occurrence did not alter the evaluation of foreseeability of its kind of harm. These cases illustrate both the importance and difficulty of describing tortious risk at the appropriate level of generality. How relevant are the particular details of the means of the occurrence of harm, and do they comprise part of the description of tortious risk? Arguably, Doughty and Hughes exemplify contrasting—though perhaps both plausible—approaches, highlighting an indeterminacy in the proximate cause inquiry. This question is critical in the context of AA-caused harm: to what extent do designers or deployers of AAs need to be able to foresee particular algorithmic developments that cause harm? Is the act of designing, manufacturing or deploying an AA with emergent capabilities itself a tortious risk, regardless of specific algorithmic developments causing harm in the circumstances? Or do unforeseeable—and unlikely—algorithmic developments causing harm in the circumstances render the harm unforeseeable, and therefore, unattributable to an initial tortious act?

76. Karnow, supra note 3 at 63-64, 72-74; Pagallo, supra note 53 at 117.

77. Asaro, “Liability Problem”, supra note 3 at 191.

78. This is an important limitation on the foreseeability requirement: only the kind of resulting harm needs to be foreseeable, not the exact manner in which it actually occurs. However, as discussed above, see note 75, details of the harm’s occurrence may be relevant where the description of tortious risk is specified narrowly. To circumvent this problem, one could construe the risk of deploying AAs broadly, as noted here in the text. I also discuss this strategy below in section IV.

79. To be clear, the problem here is not remoteness of damages—the unforeseeable extent of tortious causation of harm—which would implicate the thin-skull principle. Rather, AA-caused harm challenges the discovery of tortious causation of harm itself. The concern is that since AAs self-develop due to environmental inputs, resulting dangerous behaviour and harmful effects are not caused by tortious action at all. The thin-skull principle operates to make defendants liable for the full extent of damage where their tortious acts cause a kind of harm, the potential for which makes their acts wrongful, even if the extent of damages was not foreseeable. It still assumes that there is a tortious act—constituted by a foreseeable and unreasonable risk to the plaintiff (i.e. breach of duty of care)—causing harm, the potential for which made the act tortious (i.e. proximate causation). The argument in the text is that there are serious obstacles to viewing AA-caused harm as tortious causation of harm: first, if the deployment of AAs does not entail foreseeable and unreasonable risk of harm, it is not itself tortious, and second, even if a tortious act (in operating or programming the AA) is discovered, there may be further difficulty demonstrating that the tortious act caused the harm, rather than some other environmental input. Even a retreat to the (less stringent) directness standard of proximate causation, formulated in In Re Polemis and Furness, Withy & Co, [1921] 3 KB 560 (CA) [Polemis], does not simply resolve this problem. Although the Polemis approach finds liability for any harm that is a direct consequence of a tortious act—supplanting the need to demonstrate that the resulting harm is the kind of harm that made the act tortious—it still assumes that a tortious act directly causes harm, which inevitably depends on the existence of tortious risk, conditioned by the notion of reasonable foreseeability. Consider this statement in Polemis, per Scrutton LJ: “To determine whether an act is negligent, it is relevant to determine whether any reasonable person would foresee that the act would cause damage …. But if the act would or might probably cause damage, the fact that the damage it in fact causes is not the exact kind of damage one would expect is immaterial, so long as the damage is in fact directly traceable to the negligent act, and not due to the operation of independent causes having no connection with the negligent act, except that they could not avoid its results” [emphasis added]. Accordingly, Polemis does not supplant the reasonable foreseeability determination; foreseeability remains necessary to render conduct tortious. Moreover, to ground tort liability, the harm must be a direct consequence of the tortious act, and not due to independent causes that have no connection to the tortious act. A plausible ramification of the Polemis formulation is that where harm stems from unpredictable self-learning algorithms impacted by environmental inputs, such harm may be viewed as having an independent cause without connection to the tortious aspect of the act (even if an initial tortious act is identified), and so will not ground liability.

80. For instance, in Assiniboine, supra note 74, the defendant invented an alternative, and exceedingly dangerous, means of starting an auto-toboggan which was not in the instruction manual. As a result, his son lost control of the machine and struck a gas-riser pipe in a nearby school. Because the pipe was fractured, gas escaped into the boiler room and was ignited by an open pilot light, causing explosion, fire and damage to the school building. Although the way the damage occurred—by fire and explosion—was not foreseeable, Dickson JA found that the kind of physical damage that occurred was still foreseeable (due to the impact of the auto-toboggan) and held the defendant liable. Assiniboine illustrates the limits of the foreseeability requirement: only the kind of resulting harm needs to be foreseeable, not the exact manner in which it actually occurs. (Note that the choice to adopt a broad description of risk is similar to Hughes, supra note 74.) At the same time, however, in Assiniboine, liability was traced to a specific tortious act, the dangerous manipulation and use of the auto-toboggan entailing foreseeable and substantial risk of harm.

81. Jules L Coleman, “The Practice of Corrective Justice” in David G Owen, ed, Philosophical Foundations of Tort Law (Clarendon Press, 1995) 53 at 66-69.

82. Arthur Ripstein & Benjamin C Zipursky, “Corrective Justice in an Age of Mass Torts” in Postema, ed, Philosophy and the Law of Torts (Cambridge University Press, 2001) 214 at 218-19.

83. Ernest J Weinrib, “Causation and Wrongdoing” (1987) 63:3 Chicago-Kent L Rev 407 at 425-29.

84. Benson, supra note 62 at 766-70.

85. Ripstein, supra note 62 at 116.

86. Weinrib, “Causation and Wrongdoing”, supra note 83 at 440-41.

87. Ibid at 426.

88. Ibid at 427-28.

89. OW Holmes Jr, The Common Law (Little, Brown, 1881) at 108.

90. This analysis assumes AAs are not legal persons, i.e., rights-and-duties-bearing subjects with legal capacity to comprise tort relations. For the purposes of this essay—situating AA-caused harm within negligence and strict liability frameworks—this assumption simply reflects AAs’ default legal position absent explicit recognition of their personhood in a legal system.

91. William L Prosser, The Law of Torts, 4th ed (West, 1971) at 211.

92. Ibid.

93. Fontaine v British Columbia (Official Administrator), [1998] 1 SCR 424 at paras 17, 23-27.

94. Weirum v RKO General, Inc, 15 Cal (3d) 40 (1975).

95. Stewart v Pettie, [1995] 1 SCR 131 at para 28.

96. Ripstein, supra note 62 at 97.

97. Childs v Desormeaux, [2006] 1 SCR 643 at paras 44-45.

98. Ibid.

99. Ripstein, supra note 62 at 98.

100. Ibid at 97.

101. Home Office v Dorset Yacht Co Ltd, [1970] AC 1004 (HL) [Dorset Yacht].

102. This form of inquiry is also observable in a recent Supreme Court of Canada case, Rankin (Rankin’s Garage & Sales) v JJ, [2018] 1 SCR 587 [Rankin]. In Rankin, two teenagers, both minors, stole an unlocked car from an unsecured commercial garage property. Though he had never driven before (and did not have a driver’s license), one of the minors drove onto a highway and crashed the car, causing the other to suffer a catastrophic brain injury. At issue was whether the commercial garage owner owed a duty of care to the injured teenager when securing cars stored in the garage. The Supreme Court stated that the proper question is whether the kind of harm suffered—personal injury—was reasonably foreseeable to the car garage owner. The Court emphasized that physical injury would be reasonably foreseeable only if there was risk of theft and that, upon theft, there was risk the stolen car would be driven dangerously. While the Court noted that risk of theft by minors entails risk of reckless driving and would, in theory, link the garage owner’s failure to secure the cars to the kind of harm suffered, it found that there was insufficient evidence of specific risk of theft by minors: minors did not frequent the garage area at night, nor were they especially engaged in stealing cars. Accordingly, the Court held that the resulting injury was not a reasonably foreseeable outcome of the failure to secure the cars, negating the tort duty in the circumstances. This case offers an illustration of what is at stake in determining liability where there is an intervening cause: the resulting harm must be the kind of harm that made the initial act tortious. To this end, the reasonable foreseeability of the intervening act—in this case, theft involving risk of dangerous driving—is crucial. Since failing to secure the car did not constitute tortious risk (of theft involving risk of dangerous driving), the resulting harm was not linked to an initial tortious act.

103. It is not entirely clear whether there is a legal distinction between human intervening causes and other natural intervening causes. For example, the dissent in Bradford v Kanellos, [1974] SCR 409, 40 DLR (3d) 578 [Bradford] at 582 adopts a formulation stating there is no categorical distinction between forces of nature and human action, even consciously controlled. However, as noted below in the text, leading UK cases indicate that establishing proximate causation is subject to a higher standard in cases of human intervening causes: the chain of causation is broken unless the intervening actions are likely, even inevitable. See Dorset Yacht, supra note 101 per Lord Reid; also see Lamb v London Borough of Camden, [1981] QB 625 (CA), per Oliver LJ. Nevertheless, while the standards for establishing proximate cause may differ, the form of argumentation is the same for natural and human intervening causes: the intervening cause needs to be construable as forming a link between an initial wrongdoing and resulting harm.

104. Related to the previous note, an interesting question arises as to whether AAs’ actions need to be likely (or inevitable)—not just foreseeable or “not abnormal”—to establish proximate causation between initial wrongdoing and the resulting AA-caused harm, akin to human intervening causes.

105. Dorset Yacht, supra note 101.

106. The idea is that liability is rooted in the defendant’s own tortious act. It is not properly construed as responsibility for the intervening acts of another. This is nicely illustrated in Lord Reid’s framing of the issue: “Even so it is said that the respondents must fail because there is a general principle that no person can be responsible for the acts of another who is not his servant or acting on his behalf. But here the ground for liability is not responsibility for the acts of the escaping trainees; it is liability for damage caused by the carelessness of these officers in the knowledge that their carelessness would probably result in the trainees causing damage of this kind. So the question is really one of remoteness of damage. And I must consider to what extent the law regards the acts of another person as breaking the chain of causation between the defendant’s carelessness and the damage to the plaintiff” [emphasis added]. Ibid at 1027.

107. Ibid at 1030: “[W]here human action forms one of the links between the original wrongdoing of the defendant and the loss suffered by the plaintiff, the action must at least have been something very likely to happen if it is not to be regarded as a novus actus interveniens breaking the chain of causation. I do not think that a mere foreseeable possibility is or should be sufficient, for then the intervening human action can more properly be regarded as a new cause than a consequence of the original wrongdoing.” See also Lamb v London Borough of Camden, [1981] QB 625 (CA) at 644, per Oliver LJ, commenting on Lord Reid’s formulation: “I would respectfully regard Lord Reid’s test as a workable and sensible one, subject only to this; that I think that he may perhaps have understated the degree of likelihood required before the law can or should attribute the free act of a responsible third person to the tortfeasor …. It may be that some more stringent standard is required. There may, for instance, be circumstances in which the court would require a degree of likelihood amounting almost to inevitability before it fixes a defendant with responsibility for the act of a third party over whom he has and can have no control.” Arguably, this approach is also followed by the Supreme Court of Canada in Bradford, supra note 103. In Bradford, the defendant negligently failed to clean a restaurant grill, leading to a fire, which was subsequently extinguished. However, the fire-extinguisher made a hissing noise, prompting a customer to yell that gas was escaping and that there was danger of explosion, causing customers to panic, rush to the doors and injure the plaintiff in the process. The Court held that the plaintiff’s harm caused by third-party customers was not within the scope of the risk constituted by the negligent failure to clean the grill. The dissent argued, however, that the customer’s panic-inducing exclamation of explosion was an “utterly foreseeable” response to the negligent fire, part of “the natural consequence of events leading inevitably to the plaintiff’s injury.” Ibid at 582. In this respect, the dissent seems to ground liability in the inevitability of the intervening act.

108. See, e.g., Rylands v Fletcher (1868), LR 3 HL 330 (HL).

109. See, e.g., Spano v Perini Corp, 25 NY (2d) 11 (NY 1969).

110. See, e.g., Siegler v Kuhlman, 81 Wash (2d) 448, 502 P (2d) 1181 (1972) at 454 [Siegler].

111. See, e.g., Behrens v Bertram Mills Circus, [1957] 2 QB 1.

112. Ripstein, supra note 62 at 141.

113. Ibid at 142.

114. Ibid.

115. Ibid at 143.

116. Ibid at 142.

117. See Part II above.

118. Siegler, supra note 110 at 454-55: “Dangerous in itself, gasoline develops even greater potential for harm when carried as freight …. It is quite probable that the most important ingredients of proof will be lost in a gasoline explosion and fire …. As a consequence of its escape from impoundment and subsequent explosion and ignition, the evidence in a very high percentage of instances will be destroyed, and the reasons for and causes contributing to its escape will quite likely be lost in the searing flames and explosions.”

119. This example is in Calo, “Robots as Legal Metaphors”, supra note 4 at 230.

120. Bryant Walker Smith, “Automated Driving and Product Liability” (2017) Michigan State L Rev 1 at 15.

121. Abbott, supra note 18 at 22.

122. For example, in Quebec, there is statutory compensation for injuries due to vaccinations “regardless of responsibility”. Public Health Act, CQLR, c S-2.2 at s 73.

123. John CP Goldberg & Benjamin Zipursky, “The Easy Case for Products Liability Law: A Response to Professors Polinsky and Shavell” (2010) 123:8 Harv L Rev 1919 at 1919.

124. See Phillips v Ford Motor Co of Canada Ltd, [1971] 2 OR 637 (CA), per Schroeder JA.

125. Goldberg & Zipursky, supra note 123 at 1919.

126. Escola v Coca Cola Bottling Co of Fresno, 24 Cal (2d) 453 (Cal 1944), per Traynor J at 461-62 [Escola].

127. David G Owen, “The Evolution of Products Liability Law” (2007) 26:4 Rev Litig 955 at 977.

128. Escola, supra note 126 at 461.

129. Ibid at 461-62.

130. Ibid at 462.

131. Greenman v Yuba Power Products, Inc, 377 P (2d) 897 (Cal 1963) at 901 [Greenman].

132. Restatement (Second) of Torts § 402A(2)(a) (1965).

133. Owen, supra note 127 at 976-77.

134. Ibid at 977.

135. Ibid.

136. Ibid at 979.

137. Ibid at 979-82.

138. Goldberg & Zipursky, supra note 123 at 1923-24.

139. Owen, supra note 127 at 980-81.

140. Ibid.

141. Restatement (Third) of Torts: Products Liability § 1 (1998).

142. Ibid at § 2.

143. Ibid at § 2(a).

144. Ibid.

145. Ibid at § 2(b).

146. Ibid at § 2(c).

147. Ibid at § 3.

148. Owen, supra note 127 at 984.

149. Ibid at 987.

150. Notably, there is an additional threshold issue about whether software operating automated systems—including the data it uses or produces—constitutes a product for the purposes of products liability law. See Smith, supra note 120 at 45.

151. Smith, ibid at 2.

152. Ibid at 46.

153. F Patrick Hubbard, “Sophisticated Robots: Balancing Liability, Regulation and Innovation” (2014) 66:5 Fla L Rev 1803 at 1852.

154. Ibid at 1854-58.

155. Ibid at 1851-52.

156. Karnow, supra note 3 at 72-74.

157. Jeffrey K Gurney, “Imputing Driverhood: Applying a Reasonable Driver Standard to Accidents Caused by Autonomous Vehicles” in Patrick Lin, Keith Abney & Ryan Jenkins, eds, Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence (Oxford University Press, 2017) 51 at 59-60.

158. Kenneth S Abraham & Robert L Rabin, “Automated Vehicles and Manufacturer Responsibility for Accidents: A New Legal Regime for a New Era” (2019) 105:1 Va L Rev 127.

159. Ibid at 145-56.

160. Vladeck, supra note 2 at 129.

161. Ibid at 146.

162. Hubbard, supra note 153 at 1867-69.

163. Peter M Asaro, “A Body to Kick, but Still No Soul to Damn: Legal Perspectives on Robotics” in Patrick Lin, Keith Abney & George A Bekey, eds, Robot Ethics: The Ethical and Social Implications of Robots (MIT Press, 2012) 169 at 176-78.

164. Leon E Wein, “The Responsibility of Intelligent Artifacts: Toward an Automation Jurisprudence” (1992) 6 Harv JL & Tech 103 at 110-11.

165. Chopra & White, supra note 59 at 134.

166. Ibid at 133.

167. Abbott, supra note 18 at 22.

168. Gurney, supra note 157 at 59-60.

169. Chopra & White, supra note 59 at 127-30; Mark A Chinen, “The Co-Evolution of Autonomous Machines and Legal Responsibility” (2016) 20:2 Va JL & Tech 338 at 348, where he states: “[a] self-driving car is not dissimilar to a human chauffeur,” stressing autonomous vehicles’ agent-like role akin to employees.

170. Pagallo, supra note 53 at 153-54; Chopra & White, supra note 59 at 186-89; Vladeck, supra note 2 at 150.

171. Chinen, supra note 169 at 366, 375-77.

172. This account of enterprise liability, as a form of distributive justice, is based on Gregory C Keating, “Tort, Rawlsian Fairness and Regime Choice in the Law of Accidents” (2004) 72:5 Fordham L Rev 1857 at 1873 [Keating, “Rawlsian Fairness”].

173. Gregory C Keating, “Strict Liability Wrongs” in John Oberdiek, ed, Philosophical Foundations of the Law of Torts (Oxford University Press, 2014) 292 at 306-08.

174. Keating, “Rawlsian Fairness”, supra note 172 at 1906-07.

175. Ibid at 1899.

176. See Balkin, supra note 20.

177. Meir Dan-Cohen, Rights, Persons and Organizations: A Legal Theory for Bureaucratic Society, 2nd ed (Quid Pro Books, 2016) at 38.

178. Ibid.

179. Calo, “Robots as Legal Metaphors”, supra note 4 at 211.

180. Dan-Cohen, supra note 177 at 38.

181. Ibid. For example, where the law bestows legal (metaphorical) personhood upon some entities (e.g. corporations), but not others (e.g. nonhuman animals), it may express distinctions of status and moral value, though perhaps without any such intention. See Note, “What We Talk About When We Talk About Persons: The Language of a Legal Fiction” (2001) 114:6 Harv L Rev 1745 at 1760.

182. Deborah G Johnson & Mario Verdicchio, “Why Robots Should Not Be Treated Like Animals” (2018) 20:4 Ethics and Information Technology 291 at 292: “it is not surprising that scholars and lay thinkers make use of analogies to think about robots. Analogical reasoning is a common strategy for understanding new phenomena. We use something familiar to understand something unfamiliar or less familiar.”

183. Calo, “Robots as Legal Metaphors”, supra note 4 at 212-13.

184. Ibid.

185. LL Fuller, “Legal Fictions” (1931) 25 Ill L Rev 513 at 527-28: “Developing fields of law, fields where new social and business practices are necessitating a reconstruction of legal doctrine, nearly always present ‘artificial construction,’ and in many cases, outright fictions.”

186. Calo, “Robotics and the Lessons of Cyberlaw”, supra note 1 at 549.

187. Balkin, supra note 20 at 57.

188. Ibid at 57-59.

189. The idea that AAs can be legal agents without legal personhood is maintained by Chopra & White, supra note 59 at 25.

190. Robert L Rabin, “Tort Law in Transition: Tracing the Patterns of Sociolegal Change” (1988) 23:1 Val U L Rev 1 at 13.

191. Ibid at 13-24.

192. Ibid.

193. Jacob & Youngs, Inc v Kent, 230 NY 239 (1921) at 242-43.

194. Benjamin N Cardozo, The Nature of the Judicial Process (Yale University Press, 1921) at 41.

195. Ibid at 39.

196. Ibid at 41, 46.

197. Ibid at 46.