Introduction
It is conventionally argued that an artificially-intelligent (AI) system's actions and their potentially harmful consequences cannot easily be attributed to the system's developers or operators because the system acts autonomously.Footnote 1 Nor can the system, which has no legal personality, be liable on its own account. Victims are left exposed to accountability gaps,Footnote 2 and thus AI disrupts law,Footnote 3 necessitating new models of legal analysis.Footnote 4 In specific doctrinal areas, this question is typically framed as a ‘missing person’ problem: when, instead of humans, AI systems drive,Footnote 5 contract,Footnote 6 defame,Footnote 7 make art,Footnote 8 commit crimesFootnote 9 and, more broadly speaking, cause harm,Footnote 10 how should law respond?Footnote 11
Questions of this form are beginning to reach the courts.Footnote 12 Aiming to plug this perceived gap, in 2017 the European Parliament (EP) proposed a ‘specific legal status’ for AI ‘so that at least the most sophisticated autonomous robots could be … electronic persons responsible for making good any damage they may cause’.Footnote 13 But this resolution was strongly criticised by legal and technological experts as premised on ‘an overvaluation of the actual capabilities of even the most advanced robots, a superficial understanding of unpredictability and self-learning capacities, and a robot perception distorted by Science-Fiction’.Footnote 14 The proposal was promptly shelved, and a 2020 resolution would instead emphasise that electronic personality was unnecessary because ‘all physical or virtual activities… driven by AI systems … are nearly always the result of someone building, deploying, or interfering with the systems’.Footnote 15 This position is reflected in recent EU legal instruments including the draft AI Act which imposes regulatory obligations on providers, distributors, and users of certain AI systems.Footnote 16
Of course, AI technology did not become any less sophisticated between 2017 and today.Footnote 17 The primary difference between the 2017 and 2020 resolutions lies in how each conceptualised AI systems. In 2017, they were intelligent, autonomous beings analogised to Prague's Golem and Frankenstein's Monster.Footnote 18 In 2020, they were software units programmed by humans to act within pre-defined boundaries. This paper examines how these opposing AI conceptions animate legal debates surrounding fault and liability attributions for AI systems. Drawing upon psychological ‘attribution theory’, or the study of how ‘ordinary people [attribute] causes and implications [to] the events they witness’,Footnote 19 the paper contextualises the ‘AI autonomy’ frame above as one built on folk ‘dispositionism’, a well-documented concept in attribution theory, and demonstrates how easily dispositional AI narratives can be manipulated to promote a desired legal conclusion. It then characterises recent proposals to focus on identifying human actors responsible for AI system behaviours as ‘situationist’ responses which view AI systems as what Hanson and Yosifon call ‘situational characters’ – entities whose behaviours are driven more by external than internal forces.Footnote 20 Reviewing the technical capabilities of contemporary AI systems, the paper argues that they are better understood through a situationist lens. Unlike human DNA, which forms part of our natural dispositions, today's AI systems’ decisional processes are written, controlled, and continually re-written by human actors.Footnote 21
Contextualising the legal AI discourse within attribution theory illuminates how future discourse and policy-making surrounding AI systems should proceed. Specifically, it reinforces proposals focusing less on AI systems themselves than on the eco-system of providers, distributors, and users around them. Conversely, arguments premised on framing AI systems as sentient, intelligent beings are put in doubt. More broadly, attribution theory provides a framework for identifying pivotal misconceptions underlying conventional arguments on the legal conceptualisation of AI systems. Dispositional versus situational narratives subtly shape the questions we ask, and the answers we give, on AI liability attributions. As with philosophy and computer science,Footnote 22 AI provides a backdrop against which ‘normative structure[s] underlying our understanding of law’ may be challenged and re-examined.Footnote 23 Thus, the paper's broader significance, especially for scholars interested in more than law and (AI) technology per se, lies in revisiting the implications of attribution theory for law.Footnote 24
The paper first introduces attribution theory and its legal implications. Next, it identifies how far dispositionism animates conventional legal AI discourse by reference to jurisprudence surrounding AI liability, personality, publications, and inventions. Third, it examines how contemporary AI systems operate and argues that they are better understood situationally. The paper concludes with an attribution-theory informed framework for analysing AI-related attributions.
Before proceeding, it should be clarified that this paper focuses on the AI systems in use and development today and says nothing about the attainability of, and potential legal issues around, ‘strong AI’Footnote 25 systems.Footnote 26 Nonetheless, as the technology continues to develop, this work would form an important plank for understanding how the law should conceive of and respond to increasingly sophisticated AI systems. Further, this paper is primarily concerned with fault and liability attributions; issues and materials on AI ethics and governance will only be referenced briefly where relevant.
1. Attribution theory, artificial intelligence, and law
First observe that the conventional ‘missing person’ frame oversimplifies. Law does not necessarily require specific action(s) to be taken by specific person(s). Instead, established rules of attribution are deployed to deem one's actions (or liability) as another's.Footnote 27 These rules are usually premised on familiar doctrines such as control.Footnote 28 Thus a company can be liable for employee wrongs,Footnote 29 a platform can publish user-created content,Footnote 30 a landlord can be responsible for a tenant's nuisance,Footnote 31 and an animal's keeper can be liable if it bites.Footnote 32 The difficulty with AI is better thought of as a problem with applying these attribution rules in light of AI's apparent autonomy. Lloyd-Bostock distinguishes between attribution as ‘a relatively unreflective … process of making sense of and getting about in the world’ on one hand and a deliberate ‘social act’ where norm-violating events are to be explained on the other.Footnote 33 AI systems challenge both kinds of attributions. For the former, the technology's complexity makes intuitive assessments of factual cause-and-effect in relation to AI systems difficult. For the latter, AI's ostensible independence from human control obfuscates assessments of whom their actions should be attributed to. As Pasquale notes, drawing clear lines of AI responsibility is difficult because ‘both journalists and technologists can present AI as a technological development that exceeds the control or understanding of those developing it’.Footnote 34
(a) The dispositionist default
Insofar as the problem is one of attribution, it follows that law can draw important lessons from attribution theory. Attribution theorists would understand the missing person frame and our resulting search for new personalities to fault as a classic dispositionist response. Dispositionism models an agent's behaviour as primarily driven by the agent's internal calculus – its personality, character traits, and preferences.Footnote 35 ‘Good’ and ‘evil’ are basic adjectives for moral dispositions, though law prefers more nuanced terms such as ‘dishonest’, ‘careless’, and ‘reckless’.
Dispositionism offers an elegant mechanism for attributing moral blame and legal liability because assuming that internal nature drives external behaviour lets us infer the former from observing the latter. One who returns a dropped wallet does so because they are good and honest; one who keeps it is evil or dishonest. The wallet-keeper, having demonstrated a morally-suspect disposition, is then blameworthy. One can fairly be held liable for one's actions, and their consequences on the world, because these actions are by and large expressions, and reflections, of one's true nature.Footnote 36
The roots of dispositionism have been traced to Western philosophy,Footnote 37 finding expression in Aristotelian conceptions of virtue,Footnote 38 Cartesian notions of individual will,Footnote 39 and Lockean social contract theory.Footnote 40 It also features in Western legal theory,Footnote 41 for instance in the ‘will theory’ of contractsFootnote 42 and the ‘autonomy doctrine’ for attribution.Footnote 43 Thus legal fault is often premised on dispositionist notions of intention, control, and consent.Footnote 44 The more dispositional the injurer's actions, and the less dispositional the victim's, the more we are likely to fault the former, and seek remedy for the latter.Footnote 45 Dispositions are not reserved for natural persons; companies and organisations are commonly ascribed with personalities as well.Footnote 46
(b) The situationist critique
To situationists, however, dispositionism commits a ‘fundamental attribution error’,Footnote 47 being ‘the error of ignoring situational factors and overconfidently assuming that distinctive behaviour or patterns of behaviour are due to an agent's distinctive character traits’.Footnote 48 The situationist case, which finds support across psychology, moral philosophy, and law,Footnote 49 is premised on empirical evidence of human behaviour. One canonical exampleFootnote 50 is Milgram's obedience experiment,Footnote 51 where a surprising majority (65 per cent) of volunteers were willing to administer a full course of intense electric shocks (up to 450 volts) to unseen human ‘learners’ in another room, despite the latter's vigorous, albeit staged protests.Footnote 52 Situationists attributed Milgram's results to the power of the volunteers’ situation: the gradual shift from the innocuous to the potentially fatal, the experimenter's authority, and the confusing circumstances participants were thrust into.Footnote 53 Because we are geared to ‘see the actors and miss the stage’,Footnote 54 these situational forces, though obvious in hindsight, were largely overlooked.
Situationists therefore argue that we assign more moral and legal weight to disposition than empirical truths about human behaviour suggest is warranted. Since ‘our attributions of causation, responsibility, and blame — and our assessments of knowledge, control, intentions, and motives — are not what we suppose they are’,Footnote 55 insofar as law relies on dispositionist conceptions of these doctrines, it risks itself committing the fundamental attribution error. If anti-social behaviour is produced more by situation, and less by disposition, than commonly thought, then law's focus on correcting faulty dispositions cannot effectively deter bad behaviour; situational causes of such behaviour must be rectified.
To be sure, Milgram's experiments have been subjected to two waves of criticism arguing that they had been misrepresented and misinterpreted.Footnote 56 Nonetheless, modern situationist work, while still referencing Milgram, rests on a broader evidential base.Footnote 57 More importantly, social psychologists have shifted from ‘strong situationism’ towards ‘interactionism’ – explaining behaviour as interactions between disposition and situation (though their explanatory shares unsurprisingly remain disputed).Footnote 58 Thus, the claim is not that situation alone drives behaviour, nor that situation is always completely missed.Footnote 59 In extreme cases, such as the classic gun to the head, situation is prominent enough to be detected.Footnote 60 This is consistent with how exculpatory situations such as duress, inevitable accident, and circumstantial reasonableness are not foreign to law. The argument, more precisely, is that law under-appreciates situation while over-prioritising disposition. Therefore, while situationism has its own critics,Footnote 61 this paper's thesis does not require one to unconditionally accept situationism nor categorically reject all of dispositionism. Rather, the former is advanced as a completing rather than competing account of AI systems.
(c) Disposition versus situation in law
Given attribution theory's implications for legal fault attributions, legal scholarship on attribution theory is surprisingly scarce, particularly in the context of AI systems.Footnote 62 Situationism has primarily been applied in the context of criminal responsibilityFootnote 63 and American tort law.Footnote 64 Thus, before examining how attributional frames shape the AI discourse, an illustration with a classic English case is useful.
In Miller v Jackson,Footnote 65 the Millers claimed in nuisance against a cricket club for cricket balls repeatedly landing in their property. Denning MR, dissenting and holding against the Millers, predictably framed them dispositionally. They were ‘newcomer[s] who [were] no lover[s] of cricket’ and who specifically ‘asked’ the court to stop the sport.Footnote 66 In this narrative, the Millers had moved themselves into their present position. Conversely, the cricket club had ‘done their very best to be polite’Footnote 67 and did ‘everything possible short of stopping playing cricket on the ground’.Footnote 68 But the Millers ‘remained unmoved’.Footnote 69
The majority's Millers were cast differently. For Lane LJ, cricket balls had been landing dangerously in their property: one had ‘just missed breaking the window of a room in which their (11 or 12 year old) son was seated’.Footnote 70 To Cumming-Bruce LJ, cricket balls were ‘falling like thunderbolts from the heavens’.Footnote 71 The neighbouring Milners, and their nine-month-old infant, were also subject to this danger.Footnote 72 In this narrative, the residents had merely sought to go about their daily lives, ‘picking raspberries in the garden’,Footnote 73 but simply could not because of the situation they had been thrust into. All three judges heard the same evidence, but the narrative each side told differed in the precise manner attribution theory predicts.Footnote 74
In this way, attribution theory yields descriptive, predictive, and prescriptive insights for law. Descriptively, injurers may be cast as actors who chose certain intended actions giving rise to harmful events; victims are vulnerable persons being moved by, rather than moving, those events, and often rely on the injurer's dispositional control.Footnote 75 Predictively, the extent dispositional/situational narratives can be sustained for claimants/defendants provides an indication of how parties may argue, how judges may decide, and how those decisions may come to be justified. Prescriptively, situationism suggests that law should be cognisant of narrative manipulation. If our conclusions regarding concepts like volition and control turn on narrative framing, it is worth asking how reliable they are as tools for attributing fault. Notice that, to portray the Millers as situational characters, the majority dispositionise the cricket balls, describing them as ‘thunderbolts’ bearing down on the plaintiffs. Yet ‘if ever there was an item that is moved more obviously by something other than its own volition, it is a ball’.Footnote 76 What then about those who struck the cricket balls to begin with?
2. Artificial intelligence as dispositional actors
If balls can be dispositionised to influence law, it is not surprising that AI systems, which appear to behave as humans do, could also be. Since lawyers are not typically trained in the technicalities of AI systems,Footnote 77 we naturally ascribe what Dennett calls ‘intentionality’ to AI systems so as to explain and manage what we cannot otherwise comprehend.Footnote 78 This section demonstrates how far AI dispositionism shapes legal discourse, in the process examining popular conceptions of AI alongside debates on AI liability, personality, publications, and inventions.
(a) Popular culture
In science fiction, AI systems typically present as sentient, embodied robots who reason, act, and want.Footnote 79 Influenced by such imagery, popular culture tends to describe non-fictional AI systems as ‘evil’Footnote 80 and ‘biased’,Footnote 81 imputing to them thoughts and emotions. In 2016, the chatbot ‘Sophia’ made headlines by answering, ‘[o]k. I will destroy humans’ in response to a question from its creator David Hanson. One contemporary headline reported that a ‘[c]razy-eyed robot wants a family – and to destroy all humans’.Footnote 82
Did Sophia ‘want’ to do so, or was it merely programmed to reproduce these words? That is, did the answer stem from ‘her’ internal disposition, or was it simply coded as a set piece in the chatbot's software? AI experts preferred the latter, arguing that Sophia was a mere ‘puppet’ with neither free will nor autonomy.Footnote 83 Its creators had deliberately cast the robot in a dispositional light as a ‘publicity stunt’Footnote 84 and ‘political choreography’ to market the technology.Footnote 85 This notwithstanding, Sophia remains an icon for modern AI technologies frequently covered by news outletsFootnote 86 and was in 2017 granted legal citizenship in Saudi Arabia.Footnote 87
The dispositional AI narrative is not limited to sensationalist tabloids. By selectively prioritising quotes sourced from AI companies and deliberately drawing parallels between AI systems and humans, the general media constructs expectations of a ‘pseudo-artificial general intelligence’ that does not exist.Footnote 88 In turn, this narrative shapes popular thinking around AI liability. In 2018, history's first pedestrian fatality linked to automated vehicles (AVs) occurred in the United States. One contemporaneous headline reported that a ‘[s]elf-driving Uber kill[ed] Arizona woman in first fatal crash involving pedestrian’,Footnote 89 implying the primary culprit was the vehicle itself, not Uber the company, nor anyone else involved in the vehicle's development or use. A similar framing emerges from another headline, ‘[s]elf-driving Uber car that hit and killed woman did not recognise that pedestrians jaywalk’.Footnote 90
(b) AI liability
The law is not wholly determined by lay conceptions of liability. But it may not escape their influence either. The question AVs pose to law is conventionally framed in terms of a missing person problem: when AI replaces human drivers, who – if anyone – is liable for accidents?Footnote 91 Notice how the idea of AI ‘driving’ begins to dispositionise the system: the main actor seems to be ‘the AI’ itself, but since AI systems are not legal persons, they cannot be liable despite being the perpetrator which dispositionism points towards. Thus, the European Commission has questioned the ‘appropriateness’ of traffic liability regimes which either ‘rely on fault-based liability’ or are ‘conditional on the involvement of a driver’.Footnote 92
More broadly, Chesterman calls this the ‘problem of autonomy’ which AI systems pose to law.Footnote 93 Since the vehicle acted ‘autonomously’, it appears that no person, human or legal, can be faulted for the accident. The crux lies in how far AI driving systems (ADS) can properly be said to be autonomous. Chesterman notes that ‘autonomy’ requires the ADS to be ‘capable of making decisions without input from the driver’; such a system would differ from mere ‘automations’ like cruise control.Footnote 94
The line between automation and autonomy, however, is seldom clear. Most legal commentators adopt the Society of Automotive Engineers’ (SAE) six levels of driving automation, found in a standards document indexed ‘J3016’.Footnote 95 First published in 2014, J3016 was substantially revised in 2016, 2018, and 2021. Since 2016, the standard has only used ‘automation’, even to refer to vehicles at the highest levels. The SAE deliberately avoided ‘autonomy’, arguing that the term could ‘lead to confusion, misunderstanding, and diminished credibility’ because:Footnote 96
in jurisprudence, autonomy refers to the capacity for self-governance. In this sense, also, ‘autonomous’ is a misnomer as applied to automated driving technology, because even the most advanced ADSs are not ‘self-governing’. Rather, ADSs operate based on algorithms and otherwise obey the commands of users.
Because the engineers’ definition of ‘autonomy’ only requires that a system ‘ha[s] the ability and authority to make decisions independently and self-sufficiently’,Footnote 97 it encapsulates a range of technologies, such as thermostats,Footnote 98 to which attributing legal autonomy would be strange. Legal commentaries have nonetheless continued to use the term.Footnote 99 Beyond AVs, AI autonomy remains cited as a key challenge to existing liability regimes.Footnote 100
Smith thus identifies the ‘inconsistent use of several key terms [relating to autonomous systems] within and across the legal, technical, and popular domains’ as a source of ‘potential and ultimately unnecessary confusion’.Footnote 101 Indeed, the engineering literature itself displays ‘a profusion of concepts and terms related to autonomy’Footnote 102 and oscillates between conceptions of autonomy as self-governance (i.e. the primacy of internal control) and self-directedness (i.e. freedom from external control).Footnote 103
Therefore, the issue here is less a problem of autonomy than one with autonomy.Footnote 104 Both the definition of autonomy and its application to identifying truly ‘autonomous’ systems are ambiguousFootnote 105 and subjective.Footnote 106 Since ‘automation’ frames the system situationally, while ‘autonomy’ presupposes and implies disposition, the term one chooses, and the resultant analysis, could be driven by motivated reasoning.Footnote 107
(c) AI personality
The longstanding debate on whether AI systems should have legal personalityFootnote 108 was brought into focus by the 2017 EP resolution which proposed limited electronic personality for ‘at least the most sophisticated autonomous robots’.Footnote 109 The ensuing controversy plays out as attribution theory expects. The 2017 resolution demonstrated a classic, pop-culture informed tendency to dispositionise AI. It emphasised AI autonomy, referring to science fiction to make the point.Footnote 110 The expert critique offered a situationist response: first noting that claims of AI autonomy are overblown, and second calling out stakeholders ‘in the whole value chain who maintain or control’ the AI system's risks.Footnote 111 Asserting that AI personality was unnecessary, the 2020 resolution highlighted the situational forces underlying AI systems – their behaviours ‘are nearly always the result of someone building, deploying, or interfering with the systems’.Footnote 112
AI personality scholarship demonstrates similar tendencies. Proponents generally offer two types of arguments.Footnote 113 First are arguments based on the inherent qualities of AI, including but not limited to autonomy, intelligence, and consciousness. For instance, Hubbard argues that, given the Lockean imperative that all humans should be treated equally because we all possess ‘the same faculties’, any AI system which possesses these faculties should likewise have a prima facie right to personhood.Footnote 114 Second are instrumental arguments based on the extrinsic usefulness of AI personhood. For instance, Čerka and colleagues argue that establishing liability against AI developers is difficult under present laws because of the AI system's ‘ability to make autonomous decisions, independently of the will of their developers, operators or producers’.Footnote 115 Likewise, Koops and colleagues identify challenges with determining the applicable law and enforcing it with AI becoming ‘increasingly autonomous’.Footnote 116 Personality is proposed to bridge this ‘accountability gap’.Footnote 117
While dispositionism directly underpins the inherent arguments, instrumental arguments implicitly build on it also: the legal gaps they assert critically assume that AI autonomy precludes the operation of existing laws. Unsurprisingly, the case against personality essentially contests how far AI systems are autonomous or intelligent.Footnote 118 The issue, once again, is whether AI systems are better understood dispositionally or situationally.
(d) AI publications
Courts considering when an algorithm's developers ‘publish’ defamatory material the algorithm produces have likewise reached opposite conclusions on similar facts in a manner which attribution theory predicts. Those holding that developers are not publishers typically highlight how there is ‘no human input’ in the results’ production.Footnote 119 ‘It has all been done by the web-crawling “robots”’;Footnote 120 the developer merely plays a ‘passive’Footnote 121 role in facilitating the same. Conversely, courts holding that developers can be publishers stress that they intentionally designed, developed, and deployed the algorithm. Thus, Beach J in Trkulja v Google (No 5) held that ‘Google Inc intended to publish the material that its automated systems produced, because that was what they were designed to do’.Footnote 122 McDonald J, in a related case, highlighted ‘the human input involved in the creation of the algorithm’ and how the defamation was ‘a direct consequence’ of the search engine operating ‘in the way in which it was intended to operate’.Footnote 123
More recently, in Defteros v Google LLC the Victorian Court of Appeal reiterated that Google's search engine was ‘not a passive tool’ but something ‘designed by humans who work for Google to operate in the way it does’.Footnote 124 This was reversed by a High Court of Australia majority who did not consider Google's role in communicating the defamatory material sufficiently active.Footnote 125 The dissenting justices argued that, given how search engines operated, Google was more than a ‘passive instrument’ conveying informationFootnote 126 and had ‘intentionally’ participated in communicating the material.Footnote 127
Every case in the Trkulja-Defteros litigation involved the same search engine and operator, but each court's reasoning on publication was shaped by whether it understood the algorithm and its creators dispositionally or situationally. Echoing the EP resolutions, if search companies are not to be liable for defamation, we might describe the content as generated by ‘sophisticated’, ‘autonomous’, and ‘intelligent’ robots. But if they are to be liable, we might emphasise how search outputs are always ‘the result of someone building, deploying or interfering with the [algorithm]’.Footnote 128
To be sure, outcome differences in these cases must also be explained by reference to key factual differences that in turn shaped how the complex law and policy considerations surrounding online defamation applied.Footnote 129 For instance, in the Trkulja cases the search company had notice of the defamatory material; in Metropolitan and Bleyer they did not. The narrow point here is that the dispositional/situational framing of Google's search algorithms influences, although it may not wholly determine, judicial analysis on algorithmic publications. It is also remarkable that every court above was, regardless of how they reasoned, happy to base their framing of Google's algorithms on broad narrations, instead of specific technical details, of how those algorithms operate.Footnote 130
(e) AI inventions
The AI and intellectual propertyFootnote 131 literature was recently brought under judicial scrutiny by the ‘Artificial Inventor Project’ (AIP), which aims to secure ‘intellectual property rights for inventions generated by an AI without a traditional human inventor’.Footnote 132 The AIP applied for patents worldwide nominating an AI system ‘DABUS’Footnote 133 as sole inventor. Predictably, the AIP describes DABUS in vividly dispositional terms. The system was described as being ‘sentient’ and as having ‘an emotional appreciation for what it conceives’.Footnote 134 To DABUS’ creator Thaler, ‘DABUS perceives like a person, thinks like a person, and subjectively feels like a person, abductively implicating it as a person’.Footnote 135
Attributing sentience to an AI system – however sophisticated – remains controversial amongst AI experts.Footnote 136 Nonetheless, such assertions were submitted to patent offices and courts worldwide to justify granting DABUS the patent. The English filing describes the system as an ‘autonomous machine’ which ‘independently conceived’ of the invention.Footnote 137 The Australian filing claimed that the invention was ‘autonomously generated by an artificial intelligence’.Footnote 138
These filings elicited different conclusions from different judges. In the Court of Appeal's latest decision on DABUS, it was uncontested that the Patents Act 1977 (c 37) requires ‘inventors’ to be ‘persons’, which DABUS was not.Footnote 139 For the majority, the issue was whether Thaler could apply for the patents as a person ‘entitled to the whole of the property in’ DABUS’ inventions under section 7(2)(b) of the Act.Footnote 140 They held otherwise because there was no rule of English property law applying accession to intangible property produced by tangible property.Footnote 141 Neither did section 7 establish that a machine's owner owns the machine's inventions.Footnote 142 Notably, such reasoning frames DABUS as a mere machine (i.e. tangible property) rather than a kind of (artificial) person. Otherwise, the AIP could arguably have relied on standard rules for attributing one person's intellectual product to another.Footnote 143
Dissenting, Birss LJ thought the case could be resolved on section 13(2), which required applicants to identify the person(s) who devised the invention. For Birss LJ, this could be satisfied by stating an honest belief that the invention has no human inventor, and this Thaler had fulfilled.Footnote 144 Such reasoning implicitly frames DABUS as something beyond a mere machine. To illustrate, suppose DABUS was a fax machine which, one day, ‘autonomously’ printed a document detailing the invention. Thaler files the same application stating that the fax machine invented something. It would be difficult to accept this ‘belief’ as honestly held, whether subjectively or objectively, unless we are prepared to see something in DABUS (a capacity to invent) which we would not see in a fax machine.
The issue under the Australian Patents Act 1990 (Cth) was whether DABUS could be an ‘inventor’ under section 15(1) of the Australian Act.Footnote 145 Beach J's decision, which goes furthest in the AIP's favour, is also most evidently shaped by dispositionism. The judge was expressly against ‘anthropomorphising algorithms’,Footnote 146 and had also rejected Thaler's ‘more ambitious label’ of DABUS as a fully ‘autonomous’ system.Footnote 147 Nonetheless, Beach J accepted that DABUS was a ‘semi-autonomous’Footnote 148 system ‘capable of adapting to new scenarios without additional human input’,Footnote 149 and ‘not just a human generated software program’.Footnote 150 Since ‘machines have been autonomously or semi-autonomously generating patentable results for some time now’, recognising AI systems as inventors would be ‘simply recognising the reality’.Footnote 151
Such reasoning labours under the precise problem with autonomy explained above. The judgment does not substantiate why DABUS (or any contemporary AI system) is properly regarded as (semi-)autonomous.Footnote 152 While Beach J delves into remarkable detail on neural networks in general and DABUS in particular, the judgment mostly echoes the AIP's dispositional narrative.Footnote 153 Autonomy is assumed, not argued. This is surprisingly clear from the judgment, which expressly ‘assumes’ that the system ‘set[s] and define[s] its own goal’, has ‘free choice’ of how to achieve that goal, and ‘can trawl for and select its own data’.Footnote 154
As the Full Court's decision on appeal points out, these assumptions were not substantiated by the evidence.Footnote 155 Beach J may have been giving Thaler the benefit of the doubt on matters which the patent office left unchallenged but, if so, the judgment should arguably not have purported to make any ‘general point’ about the autonomy of AI systems meant to ‘reflect the reality’.Footnote 156
Unsurprisingly, the Full Court overturned Beach J's decision.Footnote 157 The dispositionism which occupied much of Beach J's decision was conspicuously absent from the appellate judgment. Instead, the court observed that while the AI inventor debate was ‘important and worthwhile’, it had ‘clouded consideration of the prosaic question before the primary judge, which concerned the proper construction of’ the relevant Australian statutes.Footnote 158 The dispositional narrative DABUS was clothed in misled the lower court into conflating assumed fact with non-fiction.
Once a court sees through the ruse, however, the legal analysis and outcome take on a different complexion. Of course, how far a court dispositionises an AI system does not solely determine whether it will rule in ‘its’ favour. Outcome differences in the English and Australian courts (before the Full Court's recent holding aligned the jurisdictions) should be explained by differences between the English and Australian Patent Acts and related jurisprudence.Footnote 159 That said, none of the patent offices or courts involved questioned DABUS's dispositionist clothes. Even the Court of Appeal majority accepted without questioning the premise that ‘DABUS made the inventions’.Footnote 160
3. Artificial intelligence as situational character
The intuition beneath legal AI dispositionism might be reduced to a variation on Descartes’ cogito: AI appears to think, therefore it is.Footnote 161 The more we think they are, the more it ostensibly follows that fault, liability, and personality can and should be attributed to them. But appearing to think does not mean machines actually do so.Footnote 162 This section explains how today's AI systems operate, before discussing how their actions are determined by their training and deployment situation.
(a) Contemporary AI systems are weak AI systems
A leading textbook defines AI as a branch of computer science focused on creating machines that think or act humanly or rationally.Footnote 163 However, Turing's seminal paper argued that when a machine can be said to ‘think’ was ‘too meaningless to deserve discussion’.Footnote 164 Instead, Turing proposed an ‘imitation game’: if a machine mimicked human conversation so well that a human could not tell it was a machine, for practical purposes we may say it is artificially-intelligent.Footnote 165
The Turing test's focus was not on the machine's internal disposition but its external behaviour. This exemplifies ‘behaviourist’ definitions of intelligence.Footnote 166 Insofar as we are then invited to infer internal disposition from external behaviour, Turing's test demands the very inference that situationists contest. But merely appearing to speak like a human does not imply the machine is thinking like one.Footnote 167 Indeed, likening the Turing test to Justice Stewart's famous ‘test’ for obscenity,Footnote 168 Casey and Lemley argue that defining AI legally may be impossible.Footnote 169
We may leave aside the philosophical question of whether machines in general can ‘think’ and focus on whether AI systems in practical use today ‘think’ in the sense AI dispositionism assumes. Today's AI systems can be broadly classified as either machine learning (ML) or rules-based systems.Footnote 170 ML is a branch of AI which programs computers by using statistical optimisation to infer patterns from data.Footnote 171 Such systems are illustratively juxtaposed against rules-based or ‘symbolic’ AI where decision formulae are manually specified.Footnote 172 Since explicitly coded rules pose fewer complications, ML systems are typically highlighted as the source of legal uncertainty.Footnote 173 This paper thus focuses on ML systems, though the following arguments apply to both kinds of AI.
Consider an AI system meant to predict recidivism.Footnote 174 A rules-based approach may involve the programmer manually specifying the formula below:
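(A stylised reconstruction; apart from the weight of three on violent antecedents discussed next, the numbers are illustrative assumptions.)

\[
\text{risk score} = 3 \times (\text{number of violent antecedents}) \; - \; 1 \times (\text{age at first conviction})
\]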
The only factors this system considers are the offender's (violent) antecedents and the age of first conviction of any crime.Footnote 175 With weight three, the number of violent antecedents impacts the overall score the most. Of course, this stylised formula will be wholly unsuited for the task. In practice, these factors, their weights, and the formula for mathematically aggregating them, will be more sophisticated. Deep Blue, the AI chess master, was a rules-based system.
However, specifying the right factors, weights, and formulae can be difficult, particularly if a model is meant to approximate legal principles.Footnote 176 ML, by contrast, attempts to uncover the same using statistical computations. Data on offenders – whether they re-offended, antecedent counts, and other relevant factors – would be fed through a ‘learning algorithm’ which computes correlations between said factors and recidivism. Often, though not always, the algorithm essentially identifies a best-fit curve for the data, which is then used for out-of-dataset predictions. To illustrate, the ML process may yield the following prediction formula:
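(Again a stylised reconstruction; the non-round weights are illustrative assumptions meant only to signal that they were statistically computed rather than manually chosen.)

\[
\text{risk score} = 2.73 \times (\text{number of violent antecedents}) \; - \; 0.41 \times (\text{age at first conviction})
\]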
The key difference between the rules- and ML-based models is that the latter's decision formulae and weights are statistically computed, not manually specified. One widely used learning algorithm, ‘gradient descent’, begins by initialising all weights at an arbitrary value, often zero. The putative prediction formula is then used to predict outcomes for the dataset. Since setting all weights to zero results simply in predicting zeros (i.e. no re-offending) for all subjects, the initial formula will predict most outcomes wrongly. The weights are then adjusted in a manner informed by the aggregate prediction error.Footnote 177
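A minimal sketch of this process in code may make it concrete (the dataset, factor names, and learning rate below are assumptions invented purely for illustration, not drawn from any real recidivism tool):

```python
# Toy dataset: (violent antecedents, age at first conviction) -> re-offended (1) or not (0).
# These numbers are invented purely for illustration.
data = [
    ((4, 17), 1),
    ((0, 35), 0),
    ((2, 21), 1),
    ((1, 40), 0),
    ((5, 19), 1),
    ((0, 28), 0),
]

weights = [0.0, 0.0]      # initialise all weights at an arbitrary value (here zero)
bias = 0.0
learning_rate = 0.0005

def predict(features, weights, bias):
    """The prediction algorithm: a fixed weighted sum of the input factors."""
    return sum(w * x for w, x in zip(weights, features)) + bias

# The learning algorithm (gradient descent): repeatedly nudge each weight in the
# direction that reduces the aggregate prediction error over the dataset.
for step in range(20000):
    gradients = [0.0, 0.0]
    bias_gradient = 0.0
    for features, label in data:
        error = predict(features, weights, bias) - label
        for i, x in enumerate(features):
            gradients[i] += error * x / len(data)
        bias_gradient += error / len(data)
    weights = [w - learning_rate * g for w, g in zip(weights, gradients)]
    bias -= learning_rate * bias_gradient

print(weights, bias)  # the 'learned' weights: statistics computed from data, nothing more
```

Different data or a different learning algorithm would yield different weights, underscoring that whatever ‘disposition’ results is a product of choices made upstream.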
Inferring weights from data gives machine ‘learning’ its name. Thus, ML always involves two algorithms.Footnote 178 First is the prediction algorithm, such as the formula above, which generates the predictions. Second is the learning algorithm which produces the prediction algorithm to begin with.
To be sure, this brief treatment does not exhaust the depth and sophistication of rules-based or ML-based AI. There exists a vast library of learning algorithms that approach the optimisation problem differently.Footnote 179 Different learning algorithms run on the same dataset may yield different prediction algorithms.Footnote 180 Nonetheless, the principles above generalise to even the most sophisticated AI in use today. This includes the ‘neural networks’ (NNs) which have driven much of the strong AI narrative. NNs are one class of ML algorithms typically trained using a generalised version of gradient descent known as backpropagation. Despite the name, NNs have no physical form. They too are statistical algorithms for computing weights from data. NNs are ‘deep learning’ algorithms because they comprise multiple layers of standalone algorithms (‘neurons’) whose outputs become inputs to yet more algorithms. This allows NNs to approximate a large class of arbitrary formulae.Footnote 181 There is no theoretical limit to an NN's architecture; myriad neuron types may be linked together in myriad ways. Nonetheless, the computer scientists who invented backpropagation noted that this ‘learning procedure … is not a plausible model of learning in brains’.Footnote 182
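To see why stacking layers changes the scale but not the nature of the computation, consider a minimal sketch of a three-neuron ‘network’ (the weights are invented stand-ins for values that backpropagation would otherwise have computed from data):

```python
import math

def neuron(inputs, weights, bias):
    """One 'neuron': a weighted sum of its inputs passed through a squashing function."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-total))  # logistic 'activation'

def tiny_network(x1, x2):
    # Hidden layer: two standalone formulae computed over the raw inputs.
    h1 = neuron([x1, x2], [0.4, -1.2], 0.1)
    h2 = neuron([x1, x2], [0.9, 0.3], -0.5)
    # Output layer: another formula whose inputs are the previous formulae's outputs.
    return neuron([h1, h2], [1.5, -0.7], 0.2)

print(tiny_network(2.0, 35.0))  # more layers means more arithmetic, not a mind
```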
Our technical detour clarifies two critical attributes of contemporary AI systems.
(b) The problem with dispositionising mathematics
First, both rules-based and ML-based AI systems are mathematical systems (of equations). ML's focus is not on any physical ‘machine’ or hardware, but the numerical weights which algorithms figuratively ‘learn’ from data. While AI systems are often embodied within hardware systems such as cars or humanoid robots, putting form to mathematics does not change its inherent nature any more than painting a face on a volleyball should.Footnote 183
Dispositionising maths is, to be clear, not a problem per se. We routinely dispositionise everything from cricket balls to companies and the legal system itself.Footnote 184 Ascribing intentionality to systems whose inner workings are opaque to us may be the most practical way to manage them.Footnote 185 With these systems, however, dispositionism has limits. Corporate personality only arises when formal requirements are met and is never argued solely on the basis of a company's ‘autonomy’. Moreover, corporate decisions are ultimately made by people whose minds and wills are, following standard corporate attribution rules,Footnote 186 taken to represent the company's.Footnote 187 In speaking of corporate ‘wants’, we are ultimately personifying human dispositions, not mathematical formulae. Thus the corporate form is often acknowledged as a fiction.Footnote 188
By contrast, lawyers framing AI dispositionally seldom seem to realise they may be personifying maths. Reinforced by science fiction, our dispositionist tendencies lead us to conceive of AI systems as autonomous beings, seeing disposition when we should be seeing situation. This tendency to personify AI has been identified by AI researchers as an ‘anthropomorphic bias’Footnote 189 and by legal scholars as an ‘android fallacy’.Footnote 190
That dispositionism misleads lawyers is unsurprising, for even computer scientists do not escape its grasp. ML parlance routinely describes algorithms anthropomorphically: they have ‘neurons’ that are ‘trained’ to pay ‘attention’ and hold ‘memory’.Footnote 191 McDermott famously called these ‘wishful mnemonics’: terms used to reflect what programmers hope the algorithm does, not what it actually does.Footnote 192 More recently, Bender and Koller argue that ‘claims in both academic and popular publications, that [AI] models “understand” or “comprehend” natural language … are overclaims’ and that ‘imprudent use of terminology in our academic discourse … feeds AI hype in the popular press’.Footnote 193
Legal narratives which dispositionise AI must therefore be scrutinised. Notwithstanding the imagery that wishful AI mnemonics conjure, they are inexact metaphors for inevitably statistical computations.Footnote 194 To recall, ‘neurons’ are standalone statistical algorithms which compute numerical weights from data. ‘Training’ is the process of passing data through algebra to compute these weights. ‘Attention’ means increasing the numerical weights accorded to outputs from certain parts of the network.Footnote 195 ‘Memory’ is a particular type of neuron (i.e. computation) which feeds into itself such that previous computations influence subsequent ones more directly.Footnote 196 These metaphors make the maths appear as if it has its own mind but neither entail nor imply that it does. As Cardozo CJ famously held, ‘[m]etaphors in law are to be narrowly watched, for starting as devices to liberate thought, they end often by enslaving it’.Footnote 197 Likewise, Calo notes that judges’ ‘selection of a metaphor or analogy for a new technology can determine legal outcomes’ surrounding AI.Footnote 198
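A short sketch (with invented numbers, not drawn from any particular model) shows what two of these mnemonics reduce to arithmetically:

```python
import math

def attention(scores, values):
    """'Attention': a weighted average in which softmax-normalised scores increase
    the numerical weight given to some values over others."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return sum((e / total) * v for e, v in zip(exps, values))

def memory_cell(sequence, state=0.0, input_weight=0.5, state_weight=0.8):
    """'Memory': a computation that feeds its own previous output back into itself,
    so earlier inputs influence later outputs more directly."""
    for x in sequence:
        state = math.tanh(input_weight * x + state_weight * state)
    return state

print(attention([2.0, 0.1, -1.0], [10.0, 5.0, 1.0]))  # dominated by the heavily weighted first value
print(memory_cell([1.0, 0.0, 0.5]))                   # the output depends on the whole input history
```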
(c) Mathematical dispositions are not human dispositions
Secondly, even if we wanted to dispositionise maths, maths does not think or act as we do. Whether an AI system's internal formulae are manually specified or statistically learned, its ‘disposition’ is entirely encapsulated in those formulae. Since these dispositions are mathematically expressed, they can also be mathematically explained. To illustrate, we might say that our recidivism predictor above ‘prefers’ those with no violent antecedents the most, since its formula weights that factor most. Moreover, these formulae are fixed after training, and only updated if the learning algorithm is run on new data. Thus the predictor's ‘disposition’ is stable and deterministic: the same inputs always produce the same outputs. By contrast, we cannot ascribe numbers to how the human mind weighs factors; these weights can and do change over time.
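A short sketch makes the point concrete (reusing the assumed, illustrative weights from the stylised formula above): the trained formula is fixed, fully inspectable, and deterministic.

```python
# Once training is finished the weights are fixed numbers; the 'disposition' they
# encode can be read off directly and behaves identically on identical inputs.
weights = {"violent_antecedents": 3.0, "age_first_conviction": -1.0}

def risk_score(violent_antecedents, age_first_conviction):
    return (weights["violent_antecedents"] * violent_antecedents
            + weights["age_first_conviction"] * age_first_conviction)

assert risk_score(2, 25) == risk_score(2, 25)  # identical inputs always yield identical outputs
# The factor the model 'weights most' is simply the one carrying the largest weight.
print(max(weights, key=lambda k: abs(weights[k])))
```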
To be sure, much depends on the specific algorithm(s) used. For large NNs that compute billions of weights across millions of factors, unravelling how the system weights each factor can be prohibitively difficult. Even assuming an AI system's prediction algorithm is stable, inputs received in real-time deployment may be ephemeral, prompting split-second changes in the system's outputs. Such opacity indeed challenges fault and liability attributions where victims often need to prove specific software defects and identify person(s) at fault for those failures.Footnote 199 While AI researchers have dedicated an entire sub-field towards AI explainability,Footnote 200 explanations created from those techniques are often not the kind law requires.Footnote 201
Opacity must, however, be distinguished from autonomy. An NN may perform ten billion computations and tweak its output ten times per microsecond, but maths writ large is still maths. If one linear regression is neither sentient nor (truly) autonomous, what changes, if anything, when one links together a (hundred) thousand regressions? Opacity does not imply autonomy, even assuming the converse holds. Our legal system is opaque to most laypersons, and the best lawyers often cannot predict how it will behave, but we do not say that it therefore acts autonomously and in a way which justifies legal personality, rights, and obligations. Crucially, unlike humans, today's AI systems cannot act beyond what they are programmed to do, even to fulfil their ‘wants’.Footnote 202 Our recidivism predictor may ‘prefer’ offenders with fewer violent antecedents, but it cannot, say, propose laws for reducing violent crime. Likewise, Sophia can only produce textual responses to textual prompts. ‘She’ cannot take steps towards starting a family or destroying humans. This is not to say that AI systems have no ‘autonomy’ at all, only that the label attaches primarily in an engineering sense.Footnote 203
Thus, today's AI systems remain instances of Searle's ‘weak’ AI.Footnote 204 The dispositions of weak AI systems, insofar as they exist, remain dictated by situation: the weights produced from the training data and learning algorithm used, how the tasks they are trained to do are defined, and the inputs received in their deployed environments. These invariably involve choices made and actions taken by the AI's developers, operators, and users – the batters of the AI cricket ball.
4. Situating AI in law
Diagnosing myths afflicting dispositional AI discourse lends itself to two natural prescriptions. First, legal scholars, regulators, and judges must consciously question and take issue with anthropomorphic AI narratives presented before them. Assertions that an AI system is ‘autonomous’ cannot simply be taken as given, as they shape the legal conclusion. Particular care is required because anthropomorphisms can be embedded within seemingly descriptive words.Footnote 205 For instance, stating that an AI system ‘drove itself’ and ‘caused’ an accident may be grammatically correct, but implies a factual disposition which attracts legal responsibility.Footnote 206 Moreover, AI developers and operators have incentives to dispositionise their technology to drum up attention and funding while diverting legal consequences away from themselves.
Secondly, situational AI risks must be deliberately highlighted. Conventional dispositionism centralises the legal inquiry around individual ‘bad’ actors like drivers, resulting in what Elish calls ‘moral crumple zones’: human actors who, despite having limited control over a complex system, bear ‘the brunt of the moral and legal responsibilities when the overall system malfunctions’.Footnote 207 Once the AI autonomy myth is avoided, however, it is obvious how eco-systemic stakeholders collectively determine the risks that AI systems pose to society.Footnote 208 Crawford notes that the very idea of AI is inextricably intertwined with the socio-economic forces which build and sustain the technology, calling for ‘a topological approach’ which ‘understand[s] AI in a wider context by walking through the many different landscapes of computation’.Footnote 209 Likewise, Edwards argues that AI is ‘a system delivered dynamically through multiple hands’, involving a ‘complex web of actors, data, models, and services’ who could be held accountable.Footnote 210 Edwards thus critiques the original draft Act for centralising ‘primary responsibility…on an initial provider’ and ‘fail[ing] to take on the work … of determining what the distribution of sole and joint responsibility should be contextually throughout the AI lifecycle’.Footnote 211 In other words, the Act rightly foregrounds situational actors but is not yet situational enough. Notably, Edwards’ critique also applies to the AI Liability Directive because ‘fault’ as defined is closely tied to breaches of AI Act obligations.Footnote 212
Arguments for refocusing attention onto organisational stakeholders in the AI risk creation process are thus reinforced by attribution theory. Once we see how extensively an AI system's behaviour is determined by programming, it is eminently foreseeable that errors in building and/or deploying AI systems could cause harm.Footnote 213 Therefore, a situationist framing of the legal AI discourse would shift our focus from individual, human dispositions to collective, sociotechnical systems.Footnote 214
Notably, this does not necessarily mean abandoning existing (tort) law entirely. Negligence standards, given their focus on circumstantial reasonableness, are compatible with situationist models of responsibility.Footnote 215 Moreover, once any misconception of contemporary AI systems as science-fictitious, autonomous thinking machines is avoided, existing (dispositional) doctrines generally have fewer problems encompassing AI systems. Recalling the Trkulja litigation, once we acknowledge that search algorithms merely produce results their programmers designed and built them for, search companies can be said to have intentionally published those results. It has also been argued that the doctrine of control, clarified for AVs, could be meaningfully applied towards determining AV liability.Footnote 216 What needs to change is not existing laws per se, as conventionally asserted, but how the law conceptualises AI systems.
To illustrate, suppose a developer D creates an AI system S that Qs with legal consequence L. Assuming L is a harmful consequence, D would like to avoid being fixed with L and argues that S Q-ed autonomously, independently of D's control, intention, and design. The first step must be to ascertain S's technical nature, stripped of any dispositionist baggage in which Q is presented. While courts may not have the expertise to delve into technical complexities, those who claim their AI to be autonomous may fairly be expected to prove it.
Next, regardless of step one's outcome, deliberate attention should be paid to situational player(s) who shaped S's behaviour. This points first to D, but might also identify other stakeholders, for instance, if D sold S to operator O. Consistent with standard product liability principles, if O deployed S in an environment against which D expressly warned, O's risk contribution cannot be ignored. This step might therefore identify multiple attribution targets.
Selecting the ‘right’ target(s) from this list turns on specific laws and facts at play, but the relative contribution each target makes towards determining S's behaviour is a key consideration. If L is a legally divisible consequence like liability, L might be apportioned proportionately to harm/risk contribution. Indivisible obligations like contracts may be best attributed to the party who contributed the most.
To be clear, situationism's insights would be wasted if it were merely used to identify targets for conventional dispositionist analysis. Each stakeholder's contributions should ideally be assessed situationally as well. We should consider, for instance, actions taken by other stakeholders, the scientific state-of-the-art, and inputs received by AI systems from their deployed environments. This explains why commentaries adopting more technically accurate views of AI systems favour apportioning safety and compensatory obligations across multiple stakeholders.Footnote 217 Such inquiries may, of course, be more complex and expensive than we are used to. Thus, situationism may support moving more radically towards no-fault systems financed by eco-systemic actors,Footnote 218 as well as policies targeting systemic changeFootnote 219 (eg building AI literacyFootnote 220). However, these proposals fall beyond this paper's scope and are best explored in future work.
Conclusion
This paper situates legal debates on AI within the context of attribution theory and uses situationism in particular as a foil to highlight law's traditionally dispositionist tendencies and critique unquestioned AI dispositionism. Folk conceptions of AI permeate the conventional legal, regulatory, and judicial AI discourse, leading to the exact attributional errors that situationists have long criticised. This does not merely threaten the credibility of legal AI analyses; because dispositional AI narratives are easily manipulable, allowing them to shape legal outcomes is itself problematic. Overcoming AI dispositionism does not necessarily require total reform; recognising AI systems as situational characters, as recent legal instruments are beginning to do, is sufficient. Implementing this paradigm shift may be challenging, but the more we are interested in an account of AI based on fact rather than fiction, the more we should be willing to abandon fallacious AI anthropomorphisms and re-direct attention to the situational forces driving how today's AI systems ‘think’, ‘act’, and harm.