How should we understand human–robot interaction? Are robots tools mindlessly following their programming, or are they actors with agency, as Frode Pederson queries in Chapter 13? Are robots an inevitability we should just accept, or does regulation have a role to play, as Helena Whalen-Bridge considers in Chapter 14? More broadly, how do we generate concepts to understand human–robot interactions in a way that adequately incorporates knowledge from different disciplines, as Jeanne Gaakeer investigates in Chapter 15? These questions suggest that we must consider subject matter beyond substantive law and procedure if we wish to understand robots and our place in the world with them – even if the focus is law. This is the central challenge addressed in Part III, “Human–Robot Interactions and Legal Narrative.”
Narrative form is ubiquitous. It helps us understand and respond to daily events,Footnote 1 and it is now incorporated into many fields of knowledge,Footnote 2 including the sciences.Footnote 3 Narrative can be simply defined as the representation of events,Footnote 4 and as such it is also present in legal cases. Narrative is in fact reflected throughout the process of dispute resolution, appearing in witness testimony,Footnote 5 judicial fact-finding,Footnote 6 and even the structure of law.Footnote 7 This legal ubiquity suggests that narrative should have a place in discussions of substantive law and procedure,Footnote 8 but it is frequently missing, perhaps because, as Peter Brooks has observed, an explicit narratology for law might muffle law’s majesty.Footnote 9
If legal narrative should be included in analysis of the law generally, it certainly has a place when the law struggles to address a new issue or problem, because legal change may require the reconsideration of old narratives and the construction of new ones. Human–robot interaction is one such emerging field, as evidenced by the questions posed in Parts I and II that we never had to ask before, e.g. whether automated vehicles (AVs) should be liable for vehicular accidents, and whether robots should testify against their human drivers.
Earlier research has explored robot and artificial intelligence (AI) metaphorsFootnote 10 and narratives to a degree, inside and outside the legal context. Chen Meng Lam has examined the prospective use of AI to generate factual narratives in legal disputes, and while these AI narratives would be highly evidence-based, such a system would suffer from an inability to explain precisely where and how conclusions were reached.Footnote 11 In a series of cases regarding accidents with AVs, Helena Whalen-Bridge identified a narrative of fear concerning the havoc that could be created if robots were to function independently of human control or supervision, as well as narratives concerning the superior and inferior abilities of humans and robots.Footnote 12 A narrative of human superiority would support the view that any driver must always remain attentive to the road, regardless of the functions of a driving aid, and this narrative may help explain why courts in particular cases imposed criminal liability on the driver for what were, in fact, robot malfunctions.Footnote 13 Chris Tennant and Jack Stilgoe have examined the narratives used to promote autonomous vehicles among developers, researchers, and other stakeholders, and they observed that while there is a dominant narrative of autonomy in which self-driving cars will replace error-prone humans, there is also some recognition that these vehicles are “attached and enmeshed in social and technological complexities.”Footnote 14 Sabine Payr’s investigation of science fiction literature and films about robots revealed a prevailing narrative of robots as unproblematic sidekicks, but even though the narratives purportedly focused on robots, the dominant theme was human identity.Footnote 15 Payr noted that there was a lack of productive narratives about emerging, more complex human–robot relationships, and Payr’s study, together with the work of Whalen-Bridge and of Tennant and Stilgoe, underscores the need for the volume’s focus on human–robot interaction.
The three chapters in Part III help to shed light on human–robot interactions. They also reflect the variety of research in narrative generally,Footnote 16 regarding both methodology and substantive focus. Examining a series of Norwegian cases regarding a trading robot, Frode Pederson’s chapter considers competing narratives regarding the characterization of robots as either exercising choice or merely following directions. Pederson demonstrates that although the narratives contain contradictions, the different narratives chosen by the respective courts support different interpretations of the law. Taking a more empirical approach, Helena Whalen-Bridge examines the use of narratives in public arguments regarding AVs by tracing narrative themes and conflicts in Singapore newspaper coverage. She observes that the narratives of government and commercial entities were similarly upbeat and complementary, but they differed in that commercial entities asserted the narrative that AVs were inevitable, while government entities did not. Whalen-Bridge suggests, however, that the governmental rejection of inevitability does not dictate a particular regulatory approach and is consistent with either a light-touch or a stricter style of regulation. Jeanne Gaakeer’s chapter widens the focus, making the important argument that automated driving systems require a “hermeneutics of the situation.” Gaakeer suggests ways in which narrative and philosophical traditions necessarily inform the required interdisciplinary framework to guide factual and legal interpretation for automated driving systems, and she highlights the dangers of approaches which fail to heed lessons from other disciplines such as law, ethics, and technology.
The importance of narrative analysis to the study of human–robot interactions is also reflected in the appearance of narrative in chapters that do not have narrative as their primary focus. Regarding legal procedure, Sara Sun Beale and Hayley Lawrence observe in Chapter 6 that an important feature of human–robot interaction is the human tendency to anthropomorphize robots, which can generate misleading impressions or create the potential for manipulation when robots are given more of a backstory or designed to evoke a more trustworthy and believable character. Bart Custers and Lonneke Stevens conclude Chapter 10 with the point that even though the use of digital evidence is set to increase in the coming years, humans still seek to understand evidence by means of stories. Regarding the substantive law, Janneke de Snaijer examines the liability of medical professionals for remote-controlled and independent surgical robots in Chapter 3, but not the more advanced, self-learning robots which are on the horizon. These chapters indicate that the story of human–robot interaction is many stories, a number of which remain to be told.
I Narratives about the Human–Robot Relationship
Humans have long been fascinated by the notion of intelligent machines. The fascination is closely linked to the ancient dream that men will be able to rival God and create a sentient being. This theme is reflected in the story of Pygmalion, most famously told by the Roman poet Ovid and later retold in numerous variations, where a master sculptor brings his sculpture to life. This kind of creation story has always been associated with the sin of hubris, where men are punished for challenging the authority of the gods. Consequently, there is a long history of human anxiety connected with the notion of artificial sentience, as witnessed, e.g., in Mary Shelley’s famous story of Frankenstein’s monster from 1818, where the assembled being brought to life by Dr. Frankenstein becomes murderous after having been rejected by human society, bringing down a curse on his creator. The same anxiety can be traced through much twentieth-century science fiction, where intelligent robots often, for different reasons, are depicted as rebelling against their human creators and becoming a threat to humanity. A different strain of twentieth-century science fiction, often associated with the Russian-born American novelist Isaac Asimov and his positronic robots, portrays robots as generally beneficial to mankind.Footnote 1
Stories about the relationship between humans and machines are typically based on comparison and analogy. As humans, we see ourselves and our mental capacities mirrored or even replicated in the performance of so-called intelligent machines.Footnote 2 The stories of comparison can be divided into two categories. In the first, machines are seen as ultimately superior to humans because of their greater computational capacities and lack of emotional instability. In the second, machines are seen as inferior to humans due to the rigid nature of their behavior and their inability to make spontaneous, meta-cognitive, or ethical judgments. Both of these narratives about the human–robot relationship may be present in the same story.
In some recent stories about the human–robot relationship, a new kind of anxiety is discernible, that of the human tendency to treat robots as mere tools. This treatment is increasingly shown as morally questionable, even outright wrong. The HBO series Westworld offers perhaps the clearest example of this anxiety. The humanoid robots are initially depicted as essentially innocent in their naïve devotion to their programming, whereas humans are depraved in their exploitation of the robots, which they rape and murder for their entertainment. When the robots rebel, the viewer gets the impression that the rebellion is justified, implying that the robots are ethically equal or even superior to humans. In this later development within popular narratives about the human–robot relationship, the ethical side of the comparison tends to remain disquietingly unresolved.
In this chapter, I will take a closer look at a Norwegian criminal case against two day-traders at the Oslo Stock Exchange who were accused of having manipulated a trading robot that had made a series of unfortunate trades there (“Robot Decision”). The Robot Decision is normally referred to in the singular, but it includes three different decisions from three instances of court: the first by the court of first instance, the Oslo District Court, in 2010,Footnote 3 the second by the Court of Appeal (Borgarting Lagmannsrett) later the same year,Footnote 4 and the final and binding decision by the Norwegian Supreme Court in 2012.Footnote 5 As I will attempt to show, many aspects of the arguments and narratives that were put forward during the case explicitly or implicitly touch upon the same kind of dilemmas that we find in traditional Western stories about humans interacting with intelligent machines, and the way these dilemmas about the human–robot relationship are dealt with will to a large degree determine the outcome of the case.
The guiding hypothesis in my discussion of the Robot Decision is that any narrative will be affected by the presence of a robot when the robot is performing actions that are part of the narrative’s sequence of events. Storytelling has traditionally been concerned primarily with representing human action,Footnote 6 which always involves certain assumptions about intention, motivation, rational choice, freedom of will, and goal-orientation. It is therefore not unreasonable to surmise that such assumptions are to some degree embedded in the narrative format itself. An action-performing robot causes perplexities in the narrative because we are unsure to what extent the robot can be reasonably said to possess the qualities that are required for being a real agent performing real actions. To the extent that we understand the robot to perform narrative acts, there will likely be a tendency, on the part of both the narrator and the receiver, to ascribe to these acts traits that are, strictly speaking, reserved for humans. In the following analysis of the Robot Decision, I will examine how and on what grounds the courts present their views on the actions of the accused day-traders in relation to the inept actions of the trading robot, in light of the charges that were brought in the case. First, I will argue that the conflicting conclusions reached by the three instances of court are to varying degrees dependent on competing underlying narratives about the relationship between the trading robot and the human traders. Second, I will argue that the presence of the robot in the narrative about the facts of the case causes dilemmas and perplexities that are not exhaustively discussed in the courts’ judgments and therefore never quite resolved. Third, I will argue that the present reading of the Robot Decision, with its focus on the case’s narrative aspects, also uncovers unexamined assumptions about the notion of rationality in the stock market.
II Terminological Clarifications
The present examination of the Robot Decision is interdisciplinary in the sense that it is a narrative analysis, a legal commentary, and a reflection on the human–robot relationship. While the discussion should largely be understandable without theoretical knowledge in these fields, a few terminological clarifications are in order. Within the expanding field of interdisciplinary narrative studies, including Law and Narrative, there has been a tendency to use the term “narrative” rather loosely, referring to a whole range of phenomena, including general notions of how the world works and various arguments about concrete issues. In this chapter, I will mainly use the term “narrative” to refer to the verbal presentation of the facts of the case by the prosecution authorities, the defense, and the courts. In addition, I will use the term “underlying narrative” to refer to the narratives about the case that are implied or evoked by the arguments presented during the legal proceedings. The term “underlying narrative” was introduced in this specific sense by the literary scholar Line Norman Hjorth in the 2021 article “Underlying Narratives in Courtroom Exchanges.”Footnote 7 As Hjorth explains, the underlying narrative is typically not spelled out, but it is nevertheless possible to reconstruct or perceive it, e.g., on the basis of cross-examination in the courtroom or arguments presented to or by the court.Footnote 8 Indeed, underlying narratives are often part and parcel of the parties’ legal strategies and thus a crucial component in the kind of “narrative transactions” that take place in all legal proceedings.Footnote 9 The outcome of the case is entirely dependent upon which underlying narrative the court ends up accepting. One should note, however, that even the underlying narrative that wins the court’s final acceptance will rarely be spelled out, it being a narrative of a more general nature as opposed to the specific narrative about the facts of the case that courts normally concern themselves with. Therefore, an interpretation is required in order to give the underlying narrative a concrete formulation. In the case discussed in this chapter, it is possible to see the entire case as a contest between two underlying narratives: Is this a case about two small-time traders who take on the trading robot of a resourceful company and make a profit through their human ingenuity, or is it a story about two swindlers exploiting an essentially stupid robot’s malfunction for their own gain?
With regard to terminology, in the following analysis I will not make use of the narratological distinction between story and discourse.Footnote 10 I will therefore occasionally use the word “story” in the non-technical sense for stylistic reasons, to mean a verbal representation of a series of events.Footnote 11 As regards the term “robot,” I will use it interchangeably with “machine” in accordance with the usage in the written judgments in the case.
III The Case of the Stupid Robot
The Robot Decision concerned two day-traders at the Oslo Stock Exchange who had both, independently of each other, found and over a period of time exploited the same weakness in a trading robot belonging to a company called Timber Hill AG (“Timber Hill”). They were charged with several counts of market manipulation. After having been convicted in the first instance Oslo District Court, both defendants were acquitted by the Court of Appeal. The Supreme Court upheld the decision of the Court of Appeal by a majority of three judges, with two dissenting. As can be ascertained from this brief account of the legal process in the case, there was significant disagreement among Norwegian judges as to how the case should be decided. My central argument in the following discussion is that legal decision-making in this case is animated by two different underlying narratives about the robot. In some of the arguments, which tend to work in favor of the defendants, the robot is seen as having a separate agency, as opposed to just being a tool in the hands of humans who have agency, whereas in other arguments, which tend to work in the opposite direction, the robot lacks agency, and is viewed as a tool bound by its programming in the hands of humans, who have agency.
IV The Factual Basis of the Charges
It is an undisputed fact of the case that the defendants’ behavior was motivated by their realization that they were dealing with a trading robot. The robot belonged to Timber Hill, which had for several years specialized in automated trading. The two defendants had, independently of each other, discovered that the trading robot, which made all the trades on behalf of Timber Hill, responded mechanically to certain transactions. They figured out a way to exploit the robot’s responses in order to profit from them. A prerequisite for the defendants’ trading strategy with the robot was that the transactions were made in illiquid stocks, or at least in stocks with a very low degree of liquidity. This allowed them to engage with the trading robot without interference from other traders.
The defendants proceeded in the following way. First, they acquired a large block of the illiquid stock from the robot. The robot responded to this transaction by raising the price of this stock. The traders then went on to buy a small amount of the same stock at the new price, knowing that the robot would respond by further raising the price of the stock, irrespective of the volume of the transaction. This action was repeated several times until the price had become significantly higher than it had been when the traders acquired the larger block of stocks. They then sold the stocks back to the robot at the higher price. On occasion, they also did it the other way around, selling several smaller quantities of the illiquid stock to the robot in order to get it to lower the price, before they went on to acquire a large amount of the same stock. The actions of the defendants eventually triggered an alarm in a security system called SMARTS at the Oslo Stock Exchange, leading to an extraordinary trading break. The owner of the robot, the company Timber Hill, was informed of the irregular trading pattern, and they responded by correcting the imperfection in the robot’s programming.
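The mechanics of this pattern can be illustrated with a short sketch. The code below is purely illustrative: it assumes a hypothetical quoting robot that moves its price by a fixed step after every executed trade, regardless of volume, which is only a stylized stand-in for the undisclosed algorithm Timber Hill actually used, and all names and figures in it are invented for the example.

```python
# A purely illustrative sketch of the trading pattern described above.
# Assumption (not taken from the judgments): the robot quotes a single price
# and moves it by a fixed step after every executed trade, regardless of volume.

class NaiveQuotingRobot:
    """Hypothetical robot that re-quotes its price after each trade it executes."""

    def __init__(self, price: float, step: float = 0.5):
        self.price = price
        self.step = step

    def sell_to_trader(self, quantity: int) -> float:
        """Robot sells; it then raises its quote, ignoring the trade's volume."""
        cost = quantity * self.price
        self.price += self.step
        return cost

    def buy_from_trader(self, quantity: int) -> float:
        """Robot buys; it then lowers its quote, again ignoring volume."""
        proceeds = quantity * self.price
        self.price -= self.step
        return proceeds


def exploit_cycle(robot: NaiveQuotingRobot, block: int = 10_000, nudges: int = 5) -> float:
    """One cycle of the pattern: buy a large block, nudge the price up with
    tiny purchases, then sell everything back at the inflated quote."""
    spent = robot.sell_to_trader(block)              # acquire the large block
    for _ in range(nudges):
        spent += robot.sell_to_trader(1)             # tiny buys still raise the quote
    earned = robot.buy_from_trader(block + nudges)   # sell back at the higher price
    return earned - spent


if __name__ == "__main__":
    profit = exploit_cycle(NaiveQuotingRobot(price=10.0))
    print(f"Profit from one cycle: {profit:.2f}")
```

On these assumptions, a single cycle ends with a positive profit even though every individual trade is executed at the robot’s own quoted price – the same tension between formally real trades and an engineered price movement that the courts later had to resolve.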
V The Legal Issue
The basic legal question in the Robot Decision was whether the two traders were guilty of market manipulation under the Norwegian Securities Trading Act (the “Statute”). The courts had to make a decision concerning the following two legal questions, based on the relevant provision in the Statute: whether the actions of the defendants had amounted to giving “incorrect or misleading signals as to the supply of, demand for or price of” the stocks that were traded,Footnote 12 or whether their transactions had secured “the price of one or several financial instruments at an abnormal or artificial level.”Footnote 13
The prosecution claimed that the actions of the defendants amounted to market manipulation, since the purpose of their transactions was to trigger a change in the price, not to acquire the stocks. Therefore, the defendants had given misleading signals to the market, seeing as their transactions were designed to express an interest in the stocks that was not real. Furthermore, the prosecution claimed that the transactions were suited to disrupt the market’s mechanisms for securing the correct price of the stock, which qualifies as market manipulation in the sense of the Statute, chapter 3, section 3–8.
The defense argued that the defendants’ actions had not amounted to market manipulation, since all the trades had actually been made and therefore could not be regarded as misleading signals. And far from disrupting the market, the defendants’ actions had ultimately contributed to its smooth running by effectively removing an inefficient player. Their actions should therefore be viewed as beneficial to the market.
VI The Decision of the Oslo District Court
In the judgment issued by the court of first instance, the Oslo District Court (Oslo Tingrett), the court started its decision by establishing that the defendants had acted willfully.Footnote 14 The court declared that there could be no doubt that the defendants knew how the robot would respond to their trades, and that they used this knowledge to make Timber Hill raise the price of the stock, allowing them to make a profit by essentially reversing the transactions when they sold the stock back to the robot. The court then gave an account of the defense’s argument, where it was claimed that it would be unreasonable to regard the defendants’ actions as market manipulation. The defense denied that the trades made by the defendants had caused the change in the price, arguing that no legal causation could be established between the actions of the defendants and the changes in the price of the stock. It was the company Timber Hill, and not the defendants, that issued new trade orders with a different price.
The court countered this argument by pointing out that the purpose of the defendants’ trades was to provoke a reaction from the trading robot, not to acquire the stocks, noting also that the defendants were “the active parties” in the transactions, seeking to produce a change in the price through their trades with the robot, which was, by implication, a mere passive tool. On this basis, the court held that legal causation was present between the actions of the defendants and the changes in the price of the stock, concluding that the defendants had themselves caused the change in the price that they profited by. The court maintained that the purpose of the trades, i.e., to cause the change in the price, was not “legitimate” and that the defendants’ actions toward the robot therefore amounted to giving “misleading signals about the supply of, demand and price for” the stocks in question under the statute. The court also found that the transactions initiated by the defendants secured the price of the traded stocks “at an abnormal or artificial level,” thereby meeting the statutory requirement, if only for a very short period of time.
At the end of the deliberation, the court included a reflection on the human–robot relationship that should be quoted in full:Footnote 15
The defense has argued that the actions of the defendants cannot be viewed as “suited” to give false or misleading signals. The basis of this argument is that TMB [Timber Hill] must be treated like a human, and that a human would not have reacted so automatically and unintelligently without learning from its mistakes. The court remarks that the defendants are not charged with misleading TMB but with misleading the market through their trades with TMB. The defendants knew that they traded with a machine, their trading pattern was designed to mislead TMB and succeeded in this, with the consequence that the transactions gave incorrect and misleading signals to the market. The court is therefore of the opinion that the defendants’ transactions – in this particular case – both gave and were “suited to give” misleading signals.
These concluding remarks suggest that the court’s decision hinged more on the implicit narrative of how the human–robot relationship should be understood than on what could be discerned from the analysis in the judgment and the existing legal commentary about the Statute. The commentary was sparse and primarily concerned with the types of actions that are punishable under the Statute, the main point being that certain actions were not punishable even if they, strictly speaking, fit the description of the unlawful action. This is called rettsstridsreservasjon in Norwegian law, which necessarily involves an interpretation of the intention of the lawmakers.Footnote 16 As should be clear from the quoted portion of the judgment above, however, the basis of this interpretation was an underlying narrative about the robot as a mere malfunctioning tool in the hands of human traders. In the following analysis of the Oslo District Court’s written discussion of the case, I will attempt to highlight the significance and implications of the competing underlying narratives about the human–robot relationship that were at work during the hearings and in the court’s deliberation.
VII Analysis of the Judgment of the Oslo District Court
In her influential book Transparent Minds, the narratologist Dorrit Cohn notes that with regard to factual as opposed to fictional stories, the narrator can never escape the epistemological premise that no human being can ever know with certainty what goes on in other people’s minds.Footnote 17 Should a narrator of a factual story break with this premise and imply that he or she is in fact in possession of such knowledge, the story becomes less plausible than it would otherwise have been. While it is true that judges routinely make judgments about states of mind without their narratives therefore being regarded as less than plausible, this does not, to my mind, significantly affect Cohn’s point. First, these kinds of judgments are made on the basis of legal conventions and not on a presumption that judges are endowed with the ability to read people’s minds. Second, they are presented as court findings about states of mind deduced from other story-elements, not as directly observable facts.
Cohn’s narratological point is relevant to the understanding of the human–robot relationship. While it is an inescapable condition for all human interaction that our minds are not transparent, this constraint is not necessarily present in our interactions with robots. If we know how a robot is programmed, we know what goes on inside it. And even if our knowledge of AI programming is less than expert, we can still, in many cases, know with certainty how a machine will respond to certain human actions, based on our knowledge of the tasks it is programmed to perform. Cohn’s epistemological boundary, that human minds are not transparent, is everywhere implied in the language that we use when describing human interaction, including legal language. The question is whether this language is so ingrained in the way we narrate factual stories that it will inevitably also seep into our descriptions of the human–robot relationship in ways that may not reflect the actual circumstances.
In order for the court to present a coherent argument in support of the decision to convict the defendants, several assumptions concerning the human–robot relationship must be in place. Going through the court’s narrative step by step, we can begin by observing that in order to find the defendants guilty, the robot’s responses to the traders’ actions cannot be portrayed as independent acts; they must be viewed as a mechanical response to the actions of the traders, in line with the court’s underlying narrative about the human–robot relationship in the case, i.e., that the robot is stupid and it was used by the traders in a way that violated the law. This underlying narrative connects with the notion of purpose, which the court ascribed to the actions of the traders, but not to the robot, whose actions must be viewed as having been accomplished without independent purpose. This approach, in turn, ties in with the distinction between active and passive, in which only the parties that were capable of acting with a purpose can be viewed as active, which means that the changes to the price made by the robot must be seen as mere reflexes, caused by the controlling actions of the real agents, the defendants. To the extent that these assumptions can be legitimately presupposed, the court can then reasonably go on to reach the legal conclusion, as it does, that the price offered by the robot immediately before the final transaction was “artificial,” since it was not offered as a result of regular trading, but because of the traders’ meddling with an imperfect machine, one that had no choice but to respond to the traders’ actions as it did.
However, for the court to construct a coherent narrative about the case based on these assumptions, it must overcome a seeming paradox with regard to the notion of deception, which is a crucial element of the criminal charge. The court’s narrative implied that the defendants had deceived the robot into thinking that the series of trades of small quantities of the illiquid stock were regular trades, whereas in fact they were just a means of getting the robot to increase the price of the stock. The reason why these transactions were not, in the eyes of the court, real trades is that the defendants could – contrary to what would have been the case in mutual human trading – predict with certainty how the robot would respond. The mind of the robot must then, in a certain sense, have been regarded as transparent, making it easy to deceive. Yet a stupid robot which was seen as a mere tool could not at the same time be said to possess the qualities of mind that are necessarily involved in being deceived, i.e., being misled into making an error of judgment. This is presumably why the court argued that the deception was directed at the market and not at Timber Hill via its robot. This factual finding does not seem immediately evident, however, since no evidence was presented that suggested that the market had been affected at all by the transactions, which, as we recall, were made in stocks that were all but illiquid. Another difficulty with finding that the market was deceived is that for the traders to deceive the market, surely, they would have had to deceive their robot trading partner first? Had it not been possible to deceive the robot trading partner, they would not have been able to manipulate the market. And this is indeed what the court goes on to find, that it was by misleading Timber Hill that the defendants sent misleading “signals” to the market.
At this point in the court’s argument, it seems clear that the conflicts regarding the status of the robot within the underlying narrative create inconsistencies in the court’s explicit narrative about the facts of the case. The paradox may be spelled out in the following way. On the one hand, the trading robot was seen as a mere tool, and as such not endowed with the capability of being misled. Its responses to the traders’ actions were seen as mechanical reflexes, stemming from a glitch in its programming. This, in turn, made it possible to argue that the transactions were not real trades, but just a means to raise the price of the stock. On the other hand, in the court’s narrative about the facts of the case, the robot was seen as the acting agent of Timber Hill, and as such endowed with the capability of being deceived by the traders. The deception necessarily involved an error of judgment intended by the deceivers: what seemed like one thing, trades, was in fact another thing, a means of raising the price of the stock. The machine mistook one for the other and was, therefore, by implication, engaged in an act of interpretation. This latter notion is precluded by the former notion of the robot as a mere mechanical tool. Nevertheless, both notions served as premises for the court’s narrative about what happened in the case. And as noted above, the inconsistency cannot be resolved simply by concluding that the deception was directed at the market and not Timber Hill’s trading robot.
Turning now to the court’s report of the defense’s narrative about the facts of the case, we notice that the key notion concerning the human–robot relationship is reversed. The underlying narrative informing the defense’s argument was that Timber Hill’s imperfect robot should be regarded as a regular human trader. The defense made this argument because, if the robot could make its own decisions, then the traders did not cause the market to be deceived – the robot did. This way of viewing the human–robot relationship does not, however, resolve the conflicts that are present in the court’s narrative about the case. On the one hand, the defense’s denial that legal causation had been established relied on viewing the robot’s responses to the defendants’ trades as proper acts, as opposed to just mechanical reflexes. This approach is consistent with the defense’s underlying narrative that the robot is analogous to human traders. Normally, however, the requirement for something to count as an act is that it is based on a decision, meaning that the agent performing it could in principle have chosen to act differently.Footnote 18 Since this cannot be said to have been the case with the robot, the defense must instead argue that the robot’s actions were caused by its imperfect programming. But seeing things in this way would imply that the robot is stupid, a mere tool, and therefore it cannot reasonably be viewed as if it were a human trader.
The conflicts concerning the status of the robot are therefore also present in the defense’s narrative about the case. Even so, the defense’s reasoning did convincingly support the claim that no legal causation was present in the case. If the ultimate cause of the robot’s actions lay with its programming, for which the defendants bore no responsibility, there was a kind of black box between the actions of the traders and the actions of the robot which made it unreasonable to claim that the traders had caused the robot to do things. Viewed in this way, the defendants were blameless for the losses of Timber Hill, in the same way that they would have been blameless if Timber Hill had been using an incompetent human trader who was slow to learn from his or her mistakes.
VIII The Decision of the Court of Appeal
In the Norwegian justice system, the Court of Appeal conducts an entirely new hearing of all aspects of the case. In this case, the Court of Appeal agreed with the account of the facts of the case as they were presented by the first instance Oslo District Court, but there was one significant new aspect of the case that came to light during the appeal hearing. A witness from Timber Hill explained to the court that the company had employees who were tasked with overseeing the trades made by the machines. These employees were supposed to adjust the trading robot’s algorithms when necessary. In the trades at issue in this case, none of the employees at Timber Hill had discovered the irregularities in the activities of the trading robot prior to the company being alerted to them by the Oslo Stock Exchange. The witness explained that these particular trades had probably “gone under the radar,” since they involved a relatively small amount of money and were made in stocks that were all but illiquid. In the context of our analysis, we can surmise that the court was here exploring whether a human agency “behind” the machine could reasonably be established, such that one could view the machine as a mere tool in the hands of human beings such as Timber Hill employees, who could then be said to be responsible for the trades made by the machine.
This is a theme that runs through several of the automated vehicle verdicts discussed, among others, by Helena Whalen-Bridge.Footnote 19 The crucial question in many such cases is whether a driver is responsible for malfunctions in the automated driving devices of these cars in the same way a driver would be responsible for driving with defective brakes or wheels. In the cases Whalen-Bridge discusses, the courts are quite clear in their view that the driver is in fact responsible for the behavior of his or her vehicle, even when the autopilot system is doing the driving.Footnote 20 This is comparable to Norwegian verdicts in cases concerning collisions at sea, where various autopilot systems are involved. As far as I have been able to ascertain, the captain or helmsman is always, as a matter of course, seen as responsible for the ship’s course and movements, regardless of any malfunctions in the autopilot system. Navigation systems are viewed as mere tools that should always be used in combination with watchful seamanship.Footnote 21
In the first instance Robot Decision, the court leaned toward adopting an underlying narrative in which the responsibility for the malfunction of the robot was not placed on Timber Hill, which used the robot to make trades on its behalf, but rather on the traders who exploited its imperfection. I cannot conclude with any certainty why this is so, but I suggest that it has more to do with overarching considerations about the legal consequences of conclusions on the legal issues than with any principled notion about the human–robot relationship.
The Court of Appeal agreed with many of the conclusions reached by the Oslo District Court. It concurred with the opinion that the actions of the traders were intentional, and that there was legal causation between the actions of the defendants and the changes to the price of the stock. The Court of Appeal commented that even if it was Timber Hill who effectuated these changes, the defendants knew how the trading robot would respond to their actions, and that this response was the intended result of their trades. The Court of Appeal therefore agreed with the Oslo District Court that the defendants were the active parties in the trades.
At this juncture, the reasoning of the Court of Appeal started to diverge from that presented by the Oslo District Court. The difference of opinion mainly concerned two aspects of the facts of the case. First, the Court of Appeal took care to underline the fact that all the trades made by the defendants were real trades: “The defendants have in fact bought/sold the stocks in the number and at the prices that have been indicated. Their counterpart has received correct information about the trades that were made, both with respect to price and to volume.”Footnote 22 The court went on to say that, while this was the case, there was also the extraordinary circumstance that “the defendants knew how the counterpart would react to their purchase and sale orders and used this knowledge to get a gain for themselves.”Footnote 23 This was, however, as the court pointed out, only possible because the programming in Timber Hill’s trading robot did not take the volumes of the trades into account. Compared to the reasoning of the Oslo District Court, the Court of Appeal placed much more emphasis on the robot’s malfunction, for which the defendants were obviously not responsible.
Second, the Court of Appeal disagreed with the Oslo District Court with regard to the effect that the irregular trades may be said to have had on the market. The Court of Appeal referred to two expert witnesses working on behalf of the court, who both opined that it was Timber Hill’s algorithm, and not the actions of the defendants, which caused an inefficiency in the market, by making the same mistake repeatedly over time. According to both expert witnesses, there was nothing unusual or dishonest in the behavior of the defendants. Far from being harmful to the market, their actions resulted in the discontinuation of Timber Hill’s irrational behavior.
IX Analysis of the Judgment of the Court of Appeal
Turning now to its legal deliberations, the Court of Appeal stated that the only legal provision applicable to the case was the first alternative in chapter 3, section 3–8 of the Statute, which forbids traders to give “incorrect and misleading signals as to the supply of, demand for or price” of the traded stocks. The Court of Appeal confessed to having had doubts about how to adjudicate this question, on the following grounds. On the one hand, the Court of Appeal agreed with the Oslo District Court that the transactions made by the defendants between the first and last trade had no purpose other than bringing about a reaction on the part of Timber Hill’s robot. In this sense, they could be said to have profited by an adjustment of the price that they had themselves caused. It would not be unreasonable, the court noted, to view “the sum” of the actions of the defendants in these transactions as misleading signals. On the other hand, the Court of Appeal found that one must take into consideration that all the trades made by the defendants were real.
In the Court of Appeal’s reversal of the Oslo District Court’s decision, the crucial argument was the following: “The intended reaction from Timber Hill came about because the algorithm Timber Hill was using was not capable of correctly interpreting the information contained in each trade.” This was, the Court of Appeal went on to point out, “a result of insufficient programming of the machine used by Timber Hill, in combination with the fact that the people in charge of overseeing the actions of the machines did not intervene in the trades made by the algorithm.” In this finding, the performance of the trading robot was viewed in analogy with the inadequate performance of a human trader, in the sense that the responsibility was seen as lying with the trader who made the irrational trades. Since the trading robot which executed the transactions did not have a will of its own, the responsibility lay with both the programmersFootnote 24 and the employees who were tasked with overseeing the robot’s performance.Footnote 25
As Hayden White has suggested, there is an ethical aspect to any story.Footnote 26 Viewed in relation to the question of whether the robot should be seen as a mere tool or as an independent actor, the decision of the Court of Appeal can be seen as a correction of an ethical misjudgment in the first instance Oslo District Court’s narrative about the case. The narrative of the Oslo District Court, which substantiated the court’s view that the defendants were culpable, appears to have been informed in part by an ethical analogy between the robot’s malfunction and human impairment. The logic here seems to be that since it is ethically wrong to take advantage of a human being who is obviously not acting in accordance with his or her own best interest, it is also wrong to take advantage of a robot which is obviously not acting in the best interest of the people who use it to act on their behalf.
In the underlying narrative of the Court of Appeal, the ethical assumptions were different. The basic idea of a capitalist market is that everyone acts to the benefit of the market by acting in accordance with their own self-interest. When a trading company uses robots instead of human traders, it is its way of trying to maximize profits. When other traders discover a glitch in the robot, they are acting in the best interest of the market precisely by exploiting this glitch to their advantage, since this will eventually lead to the improvement of the robot, which will increase the efficiency of the market. According to this logic, it does not matter whether the cause of the inefficiency lies with the robot or with the people behind the robot. Neither does it matter whether the cause of the inefficiency is bad programming or human stupidity. The important thing is that the irregularity is eliminated through actions taken in the market. One may, of course, question the ethical soundness of this argument, relying rather heavily as it does on capitalist ideology and its tendency to view egotistical actions as ethically desirable. But the fact of the matter is that the use of trading robots has been increasing in recent years, and they are typically used by large and powerful companies, which makes it harder for small-time traders to make a profit, especially in day-trading. It is therefore not so obvious that human traders would act ethically by reporting suboptimal performances of trading robots instead of exploiting them to their own benefit. No such fair-mindedness would go in the other direction, as no existing trading robot would report a human trader who kept making stupid trades.
X The Decision of the Supreme Court
The majority of the Supreme Court voted to uphold the decision of the Court of Appeal, acquitting the defendants of all charges.Footnote 27 The minority argued that the defendants should be convicted of market manipulation. Judge Webster, writing for the majority, discussed at length whether market manipulation had occurred in the case. As we have seen, a discussion of this kind incorporates underlying narratives and ultimately demands a clarification regarding the nature of the human–robot relationship.
Having gone through multiple sources regarding the legal issues at hand, Judge Webster explored the question of whether manipulation was present in the defendants’ trading activity, or whether it would be more appropriate to say that it was the robot’s inept responses to the defendants’ trades that caused the irregularity in the market. The question here is whether the trades made by the defendants could only have been misinterpreted by an imperfect robot or whether they could also have fooled a rational human trader. Judge Webster made the point that no trader would have been able to ascertain that all the trades made by the defendants were in fact made by the same trader. One would only be able to find out for certain that they were made through the same broker. Therefore, the increased trading activity in the specific stock could conceivably also have given a human trader the impression that the market demand for these stocks had suddenly increased. Judge Webster commented that “a trained eye” would have been required in order to see that the trades made by the defendants did not, in fact, reflect a real increase in market demand for this stock.Footnote 28 The implication is that the malfunction of the robot could be viewed in much the same way that one would view the inexperience of a human trader. In both cases, one would speak of a misinterpretation of the intention behind the trades. Nevertheless, the changes in the price of these stocks did not, according to Judge Webster, come as a result of a normal effect of supply and demand in the market, but as a result of the defendants exploiting the malfunction in the trading robot. Therefore, the changes in the price of the stock, resulting from the defendants’ trading pattern, could justifiably be viewed as “irregular or artificial” under the statute, thereby fulfilling the legal requirement of market manipulation.Footnote 29
Judge Webster’s next point was that the market regularly accepts trading practices that would, strictly speaking, fall under the definition of market manipulation. An example would be cases where a trader did not want to disclose the real nature of his or her interest in a stock, and therefore only purchased small amounts of it in each trade, in order to avoid an increase in the price. Such trades were not punished, nor did the lawmakers intend them to be, according to Judge Webster, who thereby suggested that the trades made by the defendants were not necessarily so different from the kind of trades that are made all the time. All traders respond to movements in the market. In this case, the traders responded to an inefficiency in Timber Hill’s robot, which resulted in an “irrational adjustment of the price” of a certain stock as a response to a specific trading pattern.Footnote 30 Judge Webster commented that this might be viewed not as an act of manipulation on the part of the traders, but as a mere “reaction to an inefficiency in the market.”Footnote 31 This was in line, she continued, with the market’s ordinary way of functioning, where trades were based on predicting and adapting, to the best of one’s ability, to the actions of other traders. She added that the whole case also had to be viewed in light of recent developments in stock markets, where big companies increasingly made use of computer technology in order to increase the efficiency of their trades. This business model was based on a calculation in which the benefits of using trading machines rather than human traders are presumed to make up for exactly the kind of glitches that may occur when rational players respond deftly to the actions of the trading robots. She concluded this line of thought with the comment that “there is good reason to hesitate over imposing penal sanctioned limitations on other investors’ opportunities to adapt to the preprogrammed trading pattern” of companies such as Timber Hill.Footnote 32 Judge Webster’s overall view, then, was that the market irregularities arising from these trades were a consequence of the robot’s programming and not of manipulation on the part of the defendants. The defendants did not put out incorrect information, and they acted openly. Judge Webster therefore voted to reject the appeal and acquit both defendants, even if their actions fit the description of unlawful actions in the Statute.
Judge Tønder, writing for the minority, disagreed with the majority on two main points. First, he found that the defendants’ transactions were dishonest and therefore illegitimate. He opposed the argument that the defendants had, through their actions, revealed a deficiency in the robot’s programming and thereby contributed to the efficient running of the stock exchange: “What the defendants have done, is not only to reveal a weakness in the robot’s programming but to exploit this weakness over time, through a series of transactions, until they were exposed.”Footnote 33 The rightful course of action, on the part of the defendants, would have been to inform the Financial Supervisory Authority of the weakness in the robot and to request a clarification as to whether further trades with this robot would be in accordance with accepted practice.
Second, Judge Tønder resisted the view that the defendants were guilty solely of exploiting an inept actor in the market, which is not illegal. In other words, he did not accept placing human traders and a malfunctioning robot on equal terms. His argument was that the kinds of trades conducted by the defendants would have been quickly discontinued if their counterpart had been human, and that it was therefore only the imperfection in the programming of the robot that allowed this trading pattern to go on for months. Still, the central issue was not the malfunction of the robot, according to Judge Tønder, but the fact that the transactions of the defendants resulted in an artificial price of the traded stocks. It was this continuous artificiality of the price of the stock which was the central legal issue in the case, according to Judge Tønder, and responsibility for this lay exclusively with the defendants, who were, in his view, guilty of market manipulation.Footnote 34
XI Analysis of the Supreme Court Decision
The judicial opinion of the Supreme Court presents us with two different underlying narratives about the case, where the differences in part result from divergent views about how to characterize the abilities of the robot and its role in human–robot interactions. The events of the case, as formulated by Judge Webster, could be narrated in the following way. A major trading company decided to use trading robots in order to optimize their profits. One of these robots had a glitch in its programming which was not discovered by the company’s technicians. Two traders discovered, independently of each other, that a player in the market acted irrationally by increasing its purchase order for certain stocks irrespective of the volume of the trades. The traders responded rationally to this behavior, by using a trading pattern which triggered a response in the trading robot that allowed them to harvest a profit from the transactions. In this story, the blame for the inefficiency is laid on the company using the robot.
The underlying narrative of the minority vote could be formulated as follows. Two day-traders discovered a peculiar reaction by a player in the market and concluded that it must be a robot which was not working properly. Instead of alerting the Financial Supervisory Authority, as they should have done, the traders decided to exploit the malfunctioning robot in order to enrich themselves. By exploiting the glitch in the robot’s programming, the traders were able to generate an artificial price for the stock, which falls under the definition of market manipulation. In this story, the blame is laid on the traders who are exploiting the robot.
From this, we can conclude that the underlying narrative that serves as a basis of the decision to acquit the defendants tends to view the robot as just another trader in the market, whose mistakes cannot be regarded as the responsibility of other traders, who are, on the contrary, entitled to respond to any movement in the market with their own self-interest in mind. The underlying narrative that supports a conviction, on the other hand, sees the robot as a mere instrument in the hands of human traders, and the glitch in the robot as a malfunction on par with any other computer malfunction in the stock exchange system. Viewed in this way, the trades that the defendants made with Timber Hill cannot be viewed as real trades, but must rather be seen as an exploitation of an obvious malfunction in the system, in the same way one would perhaps have seen it if someone discovered a slot machine at a casino that consistently gave a prize every second time it was used. Therefore, the trading pattern of Timber Hill’s robot cannot be viewed as if it were just the stupid actions of an inept trader, but should rather be seen as an error in the system which one has a duty to report.
XII Concluding Analysis
When we consider all the arguments and narratives that were presented in the Robot Decision, it does not seem possible to resolve once and for all how the role of the robot should best be viewed. The view of the robot as either a mere tool or as an independent actor must therefore be seen as a choice. What one chooses is not a small matter, since the two main possibilities, tool or trader, have different legal consequences.
Reviewing the narratives that were put forward in the case, as well as their basis in underlying narratives about the case’s crucial aspects, we notice that they all tend to presuppose a normal situation, from which the circumstances of the case are a deviation. What characterizes the normal situation? Judged by the arguments discussed in the written judgments, it seems clear that the implied normal situation’s most central feature is that the stock market is dominated by rational agents. When the deviation is described, the word “irrational” is invariably used, with the implication that “irrational” behavior in the stock market always undermines its smooth functioning. However, the notion of “irrationality,” when used about the robot, differs from what would have been the case if it had been used about a human being. If we imagine an irrational human trader, who made a series of very bad decisions over time without being able to learn from his or her mistakes, the situation would surely have been very different from the one we have been dealing with here. For example, the actions of such a person would have been unlikely to cause an extraordinary stock market break. It is also hard to imagine that such actions would result in a criminal process against this person’s trading counterparts. If such a person were acting on their own, they would probably have been allowed to go on trading until they had lost all their money. If the irrational person had been employed by a trading company, they would most likely have been discharged very quickly. Had it turned out that the irrational trades were a consequence of mental illness, the most likely scenario would have been that family members intervened to stop the trader’s calamitous behavior.
This leads us to the question of how the irrationality of a human being differs from the irrationality of Timber Hill’s robot. The main difference seems to lie in the predictability of the robot’s irrational trades, which ties in with Dorrit Cohn’s point on the non-transparency of minds mentioned above. Whereas an irrational human trader would most likely be less predictable than a rational trader, the irrational robot is entirely predictable, which is of course the only reason why the robot was vulnerable to the kind of exploitation that the defendants engaged in. This difference appears to affect the very notion of a “trade,” i.e., under what conditions one may say that a trade has occurred. The underlying narrative that supports the conclusion that the two defendants should be convicted relies upon the view that their transactions cannot be regarded as real trades, but must instead be seen as a kind of system error on a par with what would have been the case if there had been a malfunction in the stock exchange’s own computer system. The narrative that underlies the acquittal of the defendants, on the other hand, is more inclined to view the transactions as real trades, where the responsibility for the actions of the robot lies with the company using it.
Exploring this question further, we may ask whether the noted difference between robotic and human irrationality must mean that there is also a difference between their rational actions in the market. This point connects, of course, with the wide-ranging philosophical debate concerning the question of whether machines can think.Footnote 35 For the purposes of this chapter, it suffices to note that the actions of the trading robot differ from the activities of a human trader in two significant respects. First, the machine is entirely dependent on its programming, which precludes any notion of choice and judgment. Second, the machine is able to process much larger amounts of information far more quickly and accurately than would ever be possible for a human. The question is how these differences affect the normal functioning of the stock market. Ultimately, in the final stage of the Robot Decision, the judgment of the Supreme Court adopted the underlying narrative that the trading robot is not an independent actor in the market, but a tool in the hands of the real traders at Timber Hill.
As regards the question of what constitutes a disruption of the stock market’s normal functioning, it is perfectly possible to make the argument that the real disruption to markets occurred with the introduction of trading robots, and not with individual cases of malfunctioning robots. According to a 2012 article by the business journalist David Potts of the Sydney Morning Herald, automated trading has resulted in “wild price swings” on Wall Street.Footnote 36 Because of their rapid calculation capacities, and because they are permitted to bypass the agency of a broker, robot traders are directly connected to the stock exchange system and can act on new information in the blink of an eye, making hundreds of trades in a millisecond. Because of this, Potts calls trading robots “the ultimate inside traders.”Footnote 37 According to the stock market analyst Dale Gillham, trading robots “make the market much more volatile and unpredictable” because of their high-speed trading and their ability to strategically cancel transactions “a millisecond before the market opens.”Footnote 38
Is this not precisely the kind of situation that evokes the nightmare scenario about robots taking over the world because of their superior abilities? Potts alludes to these narratives at the outset of his article: “Robots don’t have to take over the world when they’ve got sharemarkets in their clutches already.”Footnote 39 Compared with the performance of trading robots, especially as they have been developed in the years after the Robot Decision, a human trader is slow and prone to make mistakes. No one would view such mistakes as irrational or disruptive to the market. Inept traders and their exploitation by superior traders are everyday phenomena in the stock market. As we have seen, robots can also make mistakes, but they differ from the kinds of mistakes made by humans, as witnessed by the case discussed in this chapter. The Robot Decision suggests that the problem has never been that bad or irrational trades have been exploited. The issue running through the entire case is how to deal with the kind of irrational trades that only a robot could make. This problem inevitably leads to the question of how one should deal with the kind of rational trades that only a robot could make. The analysis has highlighted that the issue at hand in the Robot Decision is symptomatic of much larger problems which are inherent to the use of trading robots. Trading robots behave very differently from human traders, both when they act rationally and when they act irrationally. The analysis of the judgments in the Robot Decision does not warrant the conclusion that anxiety about robots taking over the world has influenced the courts’ adjudication. Still, the final decision of the Supreme Court does suggest an unwillingness to allow robots the freedom to use their superior computational skills to outperform human traders, while at the same time denying human traders the freedom to use their human ingenuity to exploit the kind of weaknesses that are only found in robots.
I Introduction
The technology era we now inhabit encompasses the Internet of Things, in which everyday objects send and receive data without human intervention.Footnote 1 But despite this presence in daily life, evidence of strong negative reactions from people in communities with autonomous vehicles (AVs) suggests that concerns remain. Fatalities caused by self-driving cars have been reported.Footnote 2 In the United States, Uber’s pilot self-driving cars were met with rude gestures and forced to stop by other drivers, who drove up close to their rear bumpers, and Google’s autonomous-vehicle unit, Waymo, experienced similar issues in which people slashed vehicle tires and even pulled guns on safety drivers.Footnote 3 In Singapore, residents have concerns about safety, including the ability of vehicles to react to and evaluate traffic situations and to follow traffic rules.Footnote 4 Among academics, there are concerns regarding AV risks and unintended consequences.Footnote 5 This chapter considers the case of Singapore, which has been testing the use of AVs. Using surveys and newspaper reports, the chapter explores the rhetorical devices used to frame relevant discussion, focusing on the concepts of narrative and narrative argument. The chapter identifies the narratives used to assert the potential benefits AVs offer and to address the concerns and fears they raise, thereby justifying the presence of AVs on the streets.
Narrative is used as the central instrument of inquiry in the chapter because this form of discourse is a fundamental way in which reality is understood and constructed,Footnote 6 and because it plays a particular role in the public discourse examined in the chapter.Footnote 7 The definition of narrative is contested, but for purposes of this chapter, “narrative” is defined simply as a representation of an event.Footnote 8 Some definitions of narrative use additional or expanded elements,Footnote 9 but without delving into the issue of narrativity,Footnote 10 this chapter adopts a more minimalist definition of narrative in order to identify the narrative character of public discussion of AVs.
The narratives considered here take place in the context of public discussions of the merits and drawbacks of AVs, and can therefore be understood as narrative argument, i.e., arguments relying to some degree on narrative. Concepts underlying narrative argument can be traced back to ancient rhetoric,Footnote 11 but they were developed in more modern times by Walter Fisher, who is credited with distinguishing between the rational world paradigm and the narrative world paradigm.Footnote 12 In the rational paradigm, humans are essentially rational beings, and the paradigm for human decision-making and communication is argument, understood as clear-cut, inferential structures.Footnote 13 The narrative paradigm presupposes that humans are storytelling creatures, and that the paradigm for human decision-making and communication is “good reasons,” including narrative probability, an internally coherent story, and narrative fidelity, a story consistent with lived experience.Footnote 14 The narrative paradigm can be considered the synthesis of two traditional strands of rhetoric, the argumentative, persuasive theme, and the literary, esthetic theme,Footnote 15 which makes it well-suited to analysis of narrative arguments.
The narrative arguments explored in the chapter occur in the wider Singapore community, and they therefore comprise narratives in the public space.Footnote 16 Bruce Weal has suggested why narratives perform a useful function in the public sphere. First, stories proceed via the actions of characters, and stories display the values of those characters; in a conflict of positions, the fact that one character prevails is an argument for that character’s values.Footnote 17 Second, narratives engage audience attitudes and understandings because the story form is more easily comprehended by most audiences compared to technical arguments.Footnote 18 This point echoes Fisher, who asserted that decisions of a public nature are subject to public narratives, which members of the public can participate in if they are sufficiently informed, because unlike expert subject matter, the public can assess narrative probability and fidelity.Footnote 19
The field of narrative argument is a growing one with its own disagreements, e.g., the degree to which the traditional analysis of argument must accommodate narrative.Footnote 20 Paula Olmos has identified different categories of narrative argument, two of which are: (1) primary or core narratives, which assume that someone has been given the responsibility to give a plausible account of facts unknown or under discussion via narrative devices; and (2) secondary or digressive narrative, in which narratives are not the main event, but are related to a conclusion or claim, and their relevance is either fully expressed or left to the audience.Footnote 21 As explored below, narratives regarding AVs in Singapore are less about plausible versions of contested facts and more about contested views about how AVs function and how to evaluate the benefits and risks they pose; as such, AV narratives would fall within the category of secondary or digressive narratives.
I.A Methodology and Terminology
To explore narratives regarding AVs in Singapore, the chapter considers both research studies on public opinion in Singapore and newspaper coverage. The research studies help establish opinions and narratives within the public sphere, and while the studies appear to be commercially oriented and display some bias in favor of artificial intelligence (AI) and AVs, they in turn also help establish the narratives of commercially oriented entities.
After reviewing the studies, the chapter provides a detailed analysis of Singapore newspaper reports. Examining newspaper coverage is a common methodology in socio-legal research.Footnote 22 In this chapter, local newspapers form the narrative “topos” for analysis.Footnote 23
The Factiva database was used to identify newspaper articles on AVs in Singapore from January 2014 to March 2021, using the Factiva search function that gathers articles related to AVs. This search produced an initial group of 67 newspaper articles in the relevant time frame. Different words were used for AVs in these articles, and these words arguably contain different orientations toward AV risks and benefits. For example, “driverless” vehicles might suggest a greater concern regarding the vehicles, as the word emphasizes the lack of a driver and the associated risks of proceeding without one, while “autonomous” suggests that the vehicle is capable of functioning on its own. To determine chapter terminology regarding AVs, the frequency of terminology use was reviewed. Within the group of 67 articles, “autonomous” was used more frequently (59) than “driverless” (39), “self-driving” (50), and “automated” (5). The terms “autonomous” and “driverless” both appeared first in 2014, but if “autonomous” and “automated” are combined, a term utilizing the root “auto” becomes even more clearly the preferred choice (64). The chapter therefore adopts the phrase “autonomous vehicle” (AV), with occasional deviations to incorporate different usage in original texts, but with the understanding that the term “autonomous vehicle” may contain a pro-AV bias.
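The tally described above amounts to counting, for each candidate term, how many of the 67 articles mention it at least once. Purely as an illustration of that step, the following minimal sketch shows one way such a count might be performed; the folder name, file format, and simple substring matching are assumptions made for the illustration, not features of the chapter’s actual method or of the Factiva export.

```python
# A minimal, illustrative sketch of the term tally described above.
# Assumptions (not part of the chapter's method): the 67 articles have been
# saved as plain-text files in a folder named "factiva_articles", and a
# case-insensitive substring match is an adequate proxy for a term's use.
from pathlib import Path

TERMS = ["autonomous", "driverless", "self-driving", "automated"]

def articles_mentioning(term: str, folder: str = "factiva_articles") -> int:
    """Count how many articles mention the given term at least once."""
    count = 0
    for article in Path(folder).glob("*.txt"):
        text = article.read_text(encoding="utf-8").lower()
        if term in text:
            count += 1
    return count

if __name__ == "__main__":
    for term in TERMS:
        print(f"{term}: {articles_mentioning(term)} articles")
```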
For purposes of performing narrative analysis, the chapter excluded publications based in jurisdictions outside of Singapore, as they appear less likely to reflect Singapore opinion. An exception was made for IEEE Spectrum, a magazine published by the Institute of Electrical and Electronics Engineers, as it contained detail regarding commercial entities not available in local publications. The resulting 51 articles were analyzed qualitatively for narratives and narrative argument. Chapter analysis in the following sections is organized into three categories, depending on whether an article primarily represented the views of the public, government entities, or commercial entities. An article was placed in a given category if more than 50 percent of its content comprised the opinions or activities of the relevant group.
II Research Studies and Surveys
Two relatively recent surveys contain information relevant to attitudes about AVs in Singapore. In 2019, the Boston Consulting Group (BCG) reported the results of a survey (“BCG Survey”) of citizen perspectives on the use of AI in government, based on the responses of 14,000 internetFootnote 24 users in different jurisdictions, including Singapore.Footnote 25 The BCG Survey asked participants how comfortable they were if certain decisions were made by a computer rather than a human being, what concerns they had regarding the use of AI by governments, and how concerned they were regarding the impact of AI on the economy and jobs.Footnote 26 Overall, the findings indicated that citizens were most supportive of using AI for tasks such as transport, traffic optimization, and predictive maintenance, but citizens did not support the use of AI for sensitive decisions associated with the justice system, such as parole board and sentencing recommendations.Footnote 27
Noting Singapore’s “Smart Nation” and Digital Government Group, the BCG Survey characterized Singapore as a case study in how to promote the application of AI technologies across the government.Footnote 28 Characterizing Singapore as a positive AI case study also indicates the survey’s pro-AI orientation and its aim of promoting the use of AI in government. The BCG Survey’s orientation is reflected in how questions were posed, e.g., “[w]hen is it acceptable to use ‘black box’ deep-learning models, where the logic used … cannot possibly be explained or understood,”Footnote 29 as opposed to asking whether this kind of AI should be used at all. The BCG Survey’s pro-AI orientation is also illustrated in its use of what the chapter calls the “inevitability narrative,” the narrative that AI or AVs are inevitable and should just be accepted and managed. An opinion piece by the Partner and Managing Director of BCG, Singapore, while highlighting key points from the survey, asserted that the “AI genie is out of its bottle, and no amount of wishing it were otherwise will turn back the tide of AI innovation.”Footnote 30 The inevitability narrative occurs primarily in the narratives of commercial entities, and it is analyzed in this Section II, as well as in Sections III.A.4 and III.B.3.
A second study conducted by the insurance company American International Group (“AIG Survey”) focused squarely on attitudes regarding AVs, and this study segregated data on respondents from the United States, the United Kingdom, and Singapore.Footnote 31 The answers of the Singapore respondents indicate that one in five adults self-identified as the current driver of a vehicle with automated assistance systems such as emergency braking, lane departure avoidance, or features that make the vehicle capable of self-driving part of the time, and two-thirds of Singapore drivers said that autonomous features had a positive influence on their decision to purchase the car.Footnote 32 A total of 49 percent of Singapore adults who did not currently drive a vehicle with autonomous features said they thought they would buy, rent, share, or travel in a vehicle with those features, although 25 percent said they would not.Footnote 33
Respondents were concerned about safety. As the AIG Survey put it, the “general public is especially concerned about safety.”Footnote 34 Singapore respondents cited safer roads as the second-most appealing benefit for AVs, but there was divided opinion regarding sharing the road with driverless vehicles: 46 percent said they would be comfortable, and 29 percent said they would be uncomfortable.Footnote 35 Only 32 percent of Singapore drivers thought that driverless cars would be safer than the average driver, and when asked if driverless cars would be safer than their own driving, only 22 percent said yes.Footnote 36
Security is a related concern, and adults in all three countries saw security as a “significant barrier” to AV adoption.Footnote 37 A total of 78 percent of Singaporean respondents expressed concern about hackers taking control of AVs, and 73 percent were concerned about the privacy of personal data such as where they travel and when.Footnote 38 A total of 47 percent of Singaporeans said their biggest concern about privacy would be a breach of personal information, such as credit card data stored in the car.Footnote 39 Another issue included the car overhearing private conversations (10 percent),Footnote 40 a concern not unheard of in Singapore, where taxis can audio-record customer conversations.Footnote 41 The AIG Survey noted that AVs are susceptible to “cracking,” in which outsiders take control of the car, and that sophisticated software could take control of a car and cause it to sense that it is located in the wrong place, or “see” something on the road that isn’t there.Footnote 42 A “less immediate but equally real risk” would be less invasive hacking to gain access to information stored in the vehicle.Footnote 43
Like the BCG Survey, the AIG Survey is pro-AV. The AIG Survey stated that AVs “promise the potential of greatly reducing the number of deaths attributable to automobiles (currently about 40,000 per year in the United States) and injuries from vehicle crashes. Over 90 percent of today’s roadway deaths and injuries are due to human error.”Footnote 44 These figures are accurate statistics, but the assertion assumes that AVs would not commit any “human errors,” and that they would not commit any distinctively AV errors, i.e., errors that humans would not commit. The AIG Survey also asserted the inevitability narrative, stating that “[i]nevitably, the role of the traditional driver will decrease and the role of technologies will increase.”Footnote 45
III Newspaper Articles
The majority of Singapore newspaper articles addressed the views or activities of the government or commercial entities. Of the few articles to address public opinion, one welcomed the idea of AVs on Sentosa, a small island close to Singapore that has been developed as a tourist and entertainment destination, because AVs would be “hassle-free” and more convenient for families with children, could help with long queues, and could be “exciting.”Footnote 46 One view endorsing AVs noted that during a morning commute in which the commuter was focused on his daily activities, “I don’t want to speak to anyone. I would even prefer hailing a driverless car to work to hiring one with a driver.”Footnote 47 However, some newspaper articles regarding public opinion indicated concerns and fears regarding AVs, e.g., safety issues needed to be “ironed out.”Footnote 48 In the context of automated buses, a school bus driver asked whether “parents of young school children would trust driverless technology more than bus drivers and their sidekicks, the ‘bus aunties.’”Footnote 49 There was also the concern regarding jobs for drivers, and that “job disruption for bus drivers may occur sooner than for taxi drivers.”Footnote 50
In contrast to the bright futures asserted in government and commercial narratives reviewed below, one expert noted that if he was “taking the bus on a daily basis, and the bus is leaving the bus bay, I can wave my hand and the driver can stop and open the door. With the driverless bus, I don’t think this is going to happen. Even though Singapore has been very aggressive in promoting driverless technology, I do not know if this is the future society we’d like to have.”Footnote 51
III.A Government Entities
Government discussions of AVs assert narrative arguments regarding the role of the government in pushing for AV development, the reasons for this, and the activities involved in working together with commercial partners to support AV usage in Singapore. Narrative arguments also addressed liability and rules or guidelines regarding AVs, as well as the careful testing of AVs and the restriction of their movement.
III.A.1 AV Benefits
The emphasis in Singapore is less on AVs for personal use and more on AVs for community use, an approach which makes sense given population density in the city-state, but which also increases the risk of injury if there is an accident. In 2015, the Ministry of Transport’s (MoT) Permanent Secretary and Chairman of the Committee on Autonomous Road Transport for Singapore (CARTS) stated that it was not “the replacement of one driven car today by a driverless car tomorrow that excites us. What we’re interested in is the introduction of new mobility and transportation concepts that can enhance commuter mobility, and the overall public transport experience, especially for the first- and last-mile travel.”Footnote 52 One 2014 article asked readers to imagine a “completely car free town and residents taking ‘personalized MRTs’ in the form of driverless pods running underground from under their block to public transport nodes.”Footnote 53 The reference to “personalized MRTs” would be an appealing concept to many Singaporeans. MRT stands for Mass Rapid Transit, and as this public transportation is crowded at commuting times, it is anything but personalized. If a mode of transportation like the MRT could be personalized and offer a way from the user’s home to other public transportation, that would be a significant improvement. This article describes a utopian AV future: “In our dream town, its surface would be dominated by green and open spaces for residents … and free of the smoke, noise, congestion and safety concerns posed by vehicles today.”Footnote 54 Regarding the trial of driverless buses, the Chief Technology Officer of the Land Transport Authority (LTA) noted that while most AV technology focuses on self-driving cars, “Singapore’s need for high-capacity vehicles to address commuters’ peak-hour demands presents an opportunity for companies … to develop autonomous buses ….”Footnote 55
Beyond the benefits of AVs to commuters such as better mobility as well as safe and less congested roads, the advantages of connected cars were discussed. For example, an opinion piece noted that by having “information on a smart car’s performance, a carmaker can predict when the car requires maintenance,” which prevents manufacturers from over-investing in maintenance labor and parts, but also “delights customers as it shortens the time taken for maintenance.”Footnote 56 The real value of connected devices such as AVs lies in the insights provided by “the data they generate.”Footnote 57 This opinion piece presented a positive narrative and did not address potential concerns regarding AV data such as hacking and cybercrime.
III.A.2 Government Support for AVs
The government’s supportive role for AVs is illustrated by a 2014 article, which noted that previous development of AVs had been done by disparate organizations.Footnote 58 This disorganized state of affairs was to be replaced by the Singapore Autonomous Vehicle Initiative (SAVI), in which the LTA and the Agency for Science, Technology and Research (A*STAR) would jointly oversee “the setting up of a technology platform to spur research and development as well as the testing of AV technology, applications and solutions.”Footnote 59 CARTS was also formed to “chart the strategic direction and study opportunities for AVs ….”Footnote 60 Among the possibilities mentioned were transport networks such as driverless buses, or intra-town shuttles in future residential developments.Footnote 61 Fares were anticipated to be “competitive.”Footnote 62
The narrative that Singapore was pushing for AV development arises regularly, often via literal use of the word “push.” For example, the launch of the self-driving vehicle (SDV) research center and circuit was “part of the Government’s push towards a car-lite Singapore.”Footnote 63 To “push the development of self-driving technology” in Singapore, the LTA installed equipment aimed at supporting and monitoring the testing of driverless vehicles at One-North in 2016.Footnote 64 It was noted in 2017 that a project to trial driverless trucks on the industrialized Jurong Island was “one of several involving autonomous vehicle technology initiatives in Singapore, as the country pushes ahead to roll out driverless vehicles.”Footnote 65 The “push for an AV transport system in Singapore” is part of the country’s Smart Nation initiatives, intended to also impact matters such as electronic payments and digital identity.Footnote 66
Part of the Singapore narrative regarding AVs is that it is either the first country to achieve certain kinds of AV success, or one of the more conducive countries for AVs. For example, Singapore is the first country to “actively incorporate AV into future town-planning.”Footnote 67 It was noted in 2014 that Singapore has been on the “forefront in testing transport concepts and transport technologies over the past three decades.”Footnote 68 Guests to the tourist attraction Gardens by the Bay in 2015 were able to “test out the first fully-operational self-driving vehicle in Asia during a 2-week trial.”Footnote 69 AV testing at One-North in 2015 was “the first public road network in Singapore for the testing of driverless vehicles.”Footnote 70 Driverless buses in Jurong West continued Singapore’s “bid to take the lead in self-driving vehicles,” the “first of its kind in Singapore.”Footnote 71 It was noted in 2019 that Singapore was an early champion of AVs and was ranked “first among 20 countries for policy and legislation regarding self-driving vehicles in KPMG’s Autonomous Vehicles Readiness Index.”Footnote 72 In February 2019, it was noted that the Economic Development Board was setting its sights on Singapore to take “a leading role in developing and deploying autonomous vehicles and smart mobility systems.”Footnote 73 In December 2019, it was observed that tests on driverless cars using a 5G network would be the first time this was done in Singapore.Footnote 74
Why should Singapore play the role of AV advocate? AVs can assist Singapore to “radically transform land transportation in Singapore to address our two key constraints – land and manpower.”Footnote 75 Characterization of Singapore as a small country with limited resources is a regular refrain in public discourse,Footnote 76 and it contributes to AV narratives as well. Singapore’s focus on the use of AVs in public transportation would “reduce reliance on private vehicles,” and allow the saved road space to be used for other purposes.Footnote 77
Driverless technology can also alleviate manpower concerns.Footnote 78 The adoption of AVs in the United States has “caused a stir because of the number of drivers who could be put out of a job,” but Singapore faces challenges in attracting drivers.Footnote 79 Driverless buses could address the shortage of local bus drivers,Footnote 80 and driverless trucks were trialed in part because efficient freight movement is “critical” to Singapore’s port activity.Footnote 81
III.A.3 Addressing Issues Posed by AVs
Newspaper reports also contained narratives responsive to issues and concerns regarding AVs, such as the testing and trialing of AVs, and rules regarding legal responsibility. It was noted in 2014 that the LTA was working on a framework to allow AVs that “meet safety standards to be tested on all public roads” in the following year.Footnote 82 This position asserts that only safe vehicles will be tested, thereby protecting the public. A 2015 article noted that the MoT had unveiled “a slew of ongoing and upcoming self-driving trials” in locations including One-North, Gardens by the Bay, Sentosa, and West Coast Road.Footnote 83 Visitors to the Gardens could test out the SDVs during a two-week trial, and after this trial “further tests will be done before the vehicles are deployed in the Gardens.”Footnote 84 Tests for A*STAR’s self-driving car were done in urban areas, with plans to “test it on highways and in parking scenarios in the future.”Footnote 85 But to get on the road, AVs in trials had to adhere to the LTA’s requirements and could not go outside of the test area.Footnote 86 In some trials, an alert sounded if vehicles went outside of the test area.Footnote 87 It was noted in 2017 that driverless vehicles could ply a wider area, adding four times the previous area, but that those who “wish to conduct trials in mixed-use and residential estates in Dover and Buona Vista will need to demonstrate to LTA and Traffic Police that they are able to handle more dynamic traffic environments in autonomous mode.”Footnote 88
Trials for driverless buses were discussed together with a description of Nanyang Technological University’s (NTU) Centre of Excellence for Testing and Research of Autonomous Vehicles, which replicated Singapore road conditions through features such as a rain simulator and a flood zone.Footnote 89 The trial was supported by the Singapore Mass Rapid Transit (SMRT), which was to “play a key role in determining the road worthiness of autonomous vehicles on public roads.”Footnote 90 Start-ups “from around the world” came to the purpose-built track that recreates an urban environment, to “test how autonomous vehicles cope” with those challenges.Footnote 91 One vehicle’s quirky design, which looked more like a “giant robotic bug,” was intentional, because in order “for the public to know that this is different to conventional cars, it needs to be noticeably different on first impressions, and stand out in comparison to other cars.”Footnote 92 The public may want to know that a vehicle is an AV as a matter of general knowledge, but the public may also need to know so that they can be on the lookout for potentially dangerous situations. Regarding the conducting of AV trials, the LTA stated in 2019 that it would “engage local grassroots and community leaders ahead of time if there were plans to conduct AV trials in their specific constituencies,” and that “public safety will continue to be the top priority for all autonomous vehicle trials.”Footnote 93 Further expansion of trials would be permitted “after the AVs pass stringent competency tests.”Footnote 94
Trials were sometimes reported to be conducted without passengers, thereby lowering risks to persons, e.g., in ComfortDelGro’s trial of self-driving shuttle buses in 2018. During the initial stage of this trial, “the shuttle will not take any passengers.”Footnote 95 Once the trial management team was satisfied that “the shuttle is ready for commuter trials, passengers will be able to start boarding the vehicle.”Footnote 96 Trials were conducted for commercial vehicles as well, e.g., “the design and trials for autonomous truck platooning, which comprises a human-driven truck and one or more driverless vehicles, will be carried out over a three-year period ….”Footnote 97
Newspaper reports of trials have at times also discussed the topic of safety drivers, which suggests that there are concerns that the AVs may not be sufficiently safe on their own. In the 2015 trials at the Gardens by the Bay, it was noted that “there will be a trained staff stationed in each vehicle to guide passengers and gather insights on commuter behavior, passenger feedback and the performance of the vehicle.”Footnote 98 In Grab’s “Robo-Car,” which the public could book for free, a safety driver as well as a support engineer were present in the car “to observe system performance and ensure the passenger’s comfort and safety.”Footnote 99 The presence of two individuals beyond the passengers in the small space of a taxi indicates significant concerns about safety. The self-driving shuttle bus trials at the National University of Singapore (NUS) in 2018 also had a safety engineer on board.Footnote 100 In 2019, the creation of guidelines for fully autonomous vehicles was announced, together with the statement that all AVs being tested in Singapore require a safety driver “who takes control of the vehicle if necessary.”Footnote 101
One of the challenges encountered by AVs in Singapore is driving in bad weather.Footnote 102 Singapore encounters periods of heavy wind and rain,Footnote 103 and in the 2016 partnership between Grab and nuTonomy, the plan was to have a safety driver who would take over if it started to rain heavily.Footnote 104 The weather challenge was incorporated into the SDV research center and circuit set up by the LTA and the Jurong Town Council, where driverless vehicles could be tested under traffic conditions.Footnote 105 Senior Minister of State for Transport Josephine Teo observed that the center and circuit could help Singapore develop standards and put SDVs on the roads.Footnote 106 The creation of the Singtel Cyber Security Institute was announced in 2019, a research center where researchers would be able to “put the solutions they have developed through rigorous testing and prototyping.”Footnote 107
The safety issues posed by AVs are also addressed in discussions of AV navigation mechanisms. AVs tested in Gardens by the Bay had laser technology to “scan the surroundings and register the position of the vehicle. It is able to detect obstacles, such as a person walking into its path.”Footnote 108 Camera lenses are located at the front and back of the vehicle for video capture, sensor fusion can choose the best navigation techniques to suit various road conditions, and radio frequency identification can be placed at different locations in Gardens by the Bay to support navigation.Footnote 109 Proposed automated buses in 2017 would have radar and sonars to detect other vehicles and pedestrians.Footnote 110 Prime Minister Lee Hsien Loong and Minister for Trade and Industry S. Iswaran “hitched a ride” in A*STAR’s self-driving car, which used laser sensors and A*STAR’s own algorithm “to ensure a safe driving experience.”Footnote 111
In a demonstration, this AV was shown to have the ability to detect traffic lights, stop lines, “and objects as small as a child. It is even able to function in complete darkness.”Footnote 112 The use of the image of a child is significant, as one of the concerns regarding AVs is that if they do not detect pedestrians, they could hit them and cause injury. Children could be more vulnerable to injury from AVs compared to adults, a theme that arose above in connection with automated school buses. The presence of a child in narratives regarding AVs can therefore indicate fear, but children are also put to other uses in these narratives. The need for safeguards is contextualized in a more palatable manner via the observation that “[y]ou really don’t want your five-year-old jumping into a self-driving car and then taking off to Disneyland.”Footnote 113 This narrative acknowledges a fear regarding AVs, but inserts a cheerful, almost cartoon-like story of a mischievous child, with the happy ending of arriving safely at Disneyland.
III.A.4 Regulation and Liability
It was noted early in Singapore’s engagement with AVs that SAVI would “look into regulations required for the mass adoption of such vehicles, such as liability issues when accidents happen and infrastructure requirements.”Footnote 114 In the context of constructing infrastructure, CCTVs were put into place along a test route, to identify challenges and because “footage can also serve as evidence in an investigation if an accident occurs.”Footnote 115 When Grab introduced a self-driving “Robo-Car” for testing in 2016, users had to be above the age of 18 and sign a liability waiver before riding.Footnote 116 Legal and insurance experts opined in December 2016 that liability issues involving AV technology were unclear.Footnote 117 The then Dean of the NUS Faculty of Law, Simon Chesterman, noted that criminal law focused on the driver of the vehicle, and that the lack of a driver posed “a real regulatory challenge.”Footnote 118
An accident involving a self-driving car did occur in Singapore on October 18, 2016.Footnote 119 One of nuTonomy’s self-driving cars hit a lorry in Biopolis Drive while on a test drive. The vehicle had two engineers on board, and one of them was behind the wheel as a safety driver.Footnote 120 The vehicle was driving at a low speed and changing lanes when the accident occurred.Footnote 121 No one was hurt,Footnote 122 but the right bumper of the self-driving car was damaged and the lorry had a dent in the side.Footnote 123 The Traffic Police and LTA investigated the accident, and the company conducted its own investigation.Footnote 124 Following the accident, nuTonomy put its tests of driverless cars on hold, although tests by three other agencies, A*STAR, Delphi, and the Singapore-MIT Alliance for Research and Technology, continued.Footnote 125 Also following the accident, the Executive Director of the Energy Research Institute @ NTU said that his researchers would spend more time identifying possible safety compromises and running simulations on the buses being trialed at NTU to ensure safety.Footnote 126
Having investigated the accident, nuTonomy reported the following month that “an extremely rare combination of software anomalies” affected how the vehicle detected and responded to other nearby vehicles when changing lanes.Footnote 127 There was no discussion of why the two safety engineers were not able to prevent the accident. The company reported that it had made improvements to its software system to eliminate the anomalies responsible for the accident, and that extensive tests had been performed using computer simulations and private roads to ensure safe operation going forward.Footnote 128 The company also reported that it had resumed trials.Footnote 129
The need for additional regulation has been acknowledged in Singapore, with changes to, e.g., the Road Traffic Act in 2017.Footnote 130 The changes included penalties for private-hire drivers operating without a proper license or adequate insurance.Footnote 131 Without identifying particular AV issues, it was stated that while AVs can enhance the efficiency and convenience of Singapore’s land transport system, “the Government cannot take a ‘completely laissez-faire approach.’”Footnote 132 Singapore would therefore adopt a “balanced, light-touch regulatory stance that protects the safety of passengers and other road users, and yet ensures that these technologies can flourish.”Footnote 133
Newspaper reports presented some competing narratives regarding the regulation of safety and risk. The Auto Insurance Head of AIG said that AVs could make the roads safer because of the large proportion of accidents caused by human error, and that other features such as collision avoidance systems have reduced accidents significantly.Footnote 134 However, NTUC (National Trades Union Congress) Income’s general insurance and health general manager said that repair costs could be higher.Footnote 135 The creation of technical guidelines for AVs covering areas such as vehicle behavior and safety was announced in 2019, which came “after a year of discussions between representatives from the autonomous vehicle industry, government agencies, as well as research institutes and institutes of higher learning.”Footnote 136 As noted by a professor at NUS’s Advanced Robotics Centre, the guidelines were not rules, but they could be a basis for formulating regulations for AVs.Footnote 137 Permanent Secretary for Transport Loh Ngai Seng, Chairman of CARTS, said that he hoped that Technical Reference 68, a set of guidelines covering areas such as vehicle behavior and safety as well as cyber security, would “guide industry players in the safe and effective deployment of autonomous vehicles in Singapore.”Footnote 138
How might narrative arguments regarding AVs interact with Singapore’s regulatory approach? Singapore has pushed for AV development, and given safety concerns, that would support a stricter approach with comprehensive regulation. However, a narrative that AVs are not inevitable, and that they would only be allowed if they pass rigorous testing etc., suggests that AVs do not need strict legal regulation, because testing and trial regimes ensure safe operation. Newspaper reports in fact suggest that government discussions of AVs did not assert that AV development was inevitable. Widespread use of AVs was characterized in 2015 as “possible in the next 10 years.”Footnote 139 The study done on Sentosa would enable the venue to “decide whether the driverless vehicles will become a permanent feature after the trial,” and the entire study on Sentosa should produce insights that “will also help authorities evaluate the possibility of deploying similar self-driving shuttle systems for intra-town in other parts of Singapore in the future.”Footnote 140 The driverless truck trials in 2017 took place in two phases, with the first phase conducted by companies in their respective countries, and “depending on those outcomes, MOT and PSA Corporation will then select one of the companies” for Phase Two, which would involve further local trials and development.Footnote 141 Regarding driverless electric buses slated for trial in 2018, the SMRT Chief Executive Officer (CEO) stated that AVs “are expected to be fielded in larger scale under the future land transport master plan,” and that they would “leverage our extensive experience operating and maintaining buses to support the eventual deployment of autonomous vehicles safely on our roads,” but that “if successful” the buses “will serve commuters in the coming years,” and no timeline was provided.Footnote 142 Even when discussing progress in AV development, government discussions tended to conceive of the process in steps, e.g., regarding driverless trucks using a platoon approach with a human-driven lead truck with a convoy of driverless trucks, “it is timely that we move on to the next steps in developing truck platooning technology.”Footnote 143
III.B Commercial Entities
In the Singapore context, commercial entities have paired up with government entities to develop AVs, and their narratives revolve around commercial success, AV advantages, and AV inevitability.
III.B.1 Commercial Success
Highlighting the theme that AVs could provide seamless first and last mile connectivity for commuters, a joint venture between the government transportation entity SMRT Services and the company 2getthere Holding was announced on April 20, 2016.Footnote 144 The Singapore-based joint venture planned to market, install, operate, and maintain AV systems for customers in Singapore and the Asia-Pacific, and aimed to commercialize 2getthere’s “third-generation Group Rapid Transit Vehicle system in Singapore by the end of the year.”Footnote 145 It was announced in January 2017 that agreements were signed with two automotive companies, Scania and Toyota Tsusho, to develop and test an autonomous truck platooning system,Footnote 146 and a partnership was formed in April 2017 between the LTA and ST Kinetics to develop and trial autonomous buses.Footnote 147
Singapore newspapers gave significant coverage to the local start-up nuTonomy, which was expected to start limited commercial service by 2018.Footnote 148 The LTA signed agreements with nuTonomy, as well as the UK company Delphi Automotive Systems, to make AVs a reality.Footnote 149 Grab introduced a “Robo-Car” in 2016,Footnote 150 and announced its partnership with nuTonomy, the first company in the world to try out self-driving taxis in public, three days after raising $750 million in funding.Footnote 151
III.B.2 AV Advantages
There was occasional coverage of commercial entities extolling the virtues of their products, and these narrative advertisements echo some of the advantages of AVs noted in government narratives. One 2018 article regarding an Audi AV asked, “What would you do with an extra hour of your life every day?”Footnote 152 If you’re someone who loves to drive, “then autonomous driving might not be for you,” but in Singapore, “we experience traffic jams daily,” and AVs give the driver the choice to “clear … e-mails or spend time interacting with … friends and family.”Footnote 153 This discussion assumes that the AV is at the most advanced level and does not require the attention of the driver: “Once all the conditions are met and the systems are engaged, it leaves the driver free to take hands off the wheel and do other things.”Footnote 154
III.B.3 Inevitability
The inevitability narrative favored by commercial entities makes a strong appearance in the research studies and surveys discussed at the beginning of the chapter, and inevitability also appears in newspaper coverage of commercial entities. The CEO of MooVita, creator of AV MooAV, suggested that cars like MooAV “will become a common sight in Singapore.”Footnote 155 The CEO of taxi company ComfortDelGro stated that the operational experience gained in AV trials would be invaluable “as we prepare for a future where autonomous vehicles … become an integral part of our daily commute.”Footnote 156
There are even instances of a commercial entity attributing inevitability to the Singapore government. For example, local start-up nuTonomy described how favorable the AV environment is in Singapore, stating that they see Singapore as “one of the best markets in the world for this technology … [Singapore wants] it to happen, and they’re going to make sure it does.”Footnote 157 However, this statement attributes an inevitability to the Singapore government which is not reflected in the government narratives analyzed above.
A related but slightly different narrative argument is raised in commercial entities’ discussion of regulatory approaches. In a 2018 article, Audi acknowledged there are hurdles to overcome in AV development, because although autonomous driving is a reality, the question is “whether or not you’ll be allowed to do it ….”Footnote 158 The article noted two legislative barriers: “whether autonomous cars are allowed at all, and what drivers are allowed to do while the car drives itself.”Footnote 159 Audi said it planned to seek approval from the LTA for its “Audi AI Traffic Jam Pilot.”Footnote 160 Another 2018 article noted that the establishment of a Japanese start-up in Singapore was attributed to Singapore’s “support in removing regulatory barriers and promoting testing.”Footnote 161 Companies can build technology, but if the market does not accept it, or “the government does not allow us to introduce the car, then all it is is an interesting toy.”Footnote 162 The commercial message here is that AVs are here, but short-sighted regulation could impede consumer access to them. In particular, the toy image suggests that imprudent regulation could trivialize a major development, one that has already arrived.
IV Conclusion
The chapter has argued that research surveys and newspaper articles suggest a distinct group of narrative arguments regarding AVs in Singapore. Public opinion included some views that AVs would bring positive outcomes, such as convenience and task completion without the need to interact with a human, but concern and fear were also expressed, primarily about the safety of AVs with some discussion of job loss. Government and commercial entities expressed reassuring narratives, such as those emphasizing AV testing and controlled pilot projects. The Singapore government was portrayed, by itself and by its commercial partners, as pushing for AV development in order, among other reasons, to address Singapore’s need to manage resources in short supply, such as truck drivers and land space.
Narratives of government and commercial entities often complemented each other, and in newspaper articles, the government and commercial positions were regularly intertwined. These narratives were frequently upbeat, and when they addressed safety concerns, they did not necessarily acknowledge the reasons why there would be any concerns. There is, however, a difference between government and commercial narratives regarding AVs: commercial entities asserted an inevitability narrative, while government entities did not. According to the inevitability narrative, there is no stopping technological advances like AVs and their composite parts such as AI, so countries and the public should simply accept that and focus on managing the risks. This narrative argument conflicts at a fundamental level with a different narrative regarding how government and law function, that government officials are responsible for determining what technology can be used in their jurisdiction and implementing rules regarding it, including prohibitions if warranted. The government’s rejection of the inevitability narrative supports a view of law and government in which government officials decide the degree and pace of AV development. However, Singapore has not adopted a strict regulatory approach, and has opted instead for light touch regulation. As a narrative argument, the rejection of inevitability does not dictate a particular regulatory approach, and is consistent with either light touch or strict regulation.
The end of our foundation is the knowledge of causes and the secret motions of things; and the enlarging of the bounds of human empire, to the effecting of all things possible.Footnote 1
I Introduction
On October 15, 2001, a coach driver wanting to make a right turn stopped to give the right of way to a mother and her 5-year-old son on a bike crossing. After the mother had reached the other side of the crossing, she made a gesture to the driver. He accelerated and ran over the boy, who had fallen in the middle of the crossing. The boy died of his injuries. In court, the driver explained that the gesture made him assume that the boy had crossed safely. The Dutch lower court, appellate court, and Supreme Court found that his claim that he had based his understanding on the gesture was irrelevant, but this chapter asserts that the driver’s hermeneutic (mis)understanding of the mini-narrative of the human gesture is quite relevant. Because Article 6 of the Dutch Road Traffic Act 1994, applicable to traffic accidents resulting in grave bodily injury or death, is based on culpa lata, i.e., behavior less careful than that of the average person, the presumption of innocence allows a defendant to plead not guilty based on his or her interpretation of another person’s action. It appeared that the disastrous consequence of the boy’s death occasioned application of a stricter standard, that of culpa levis, i.e., whether the defendant behaved as the most careful person possible.Footnote 2 As Ferry de Jong suggests, when it comes to determining culpa (guilt) and dolus (intentionality) in any specific criminal case, a “hermeneutics of the situation”Footnote 3 is required to gauge whether or not actus reus and mens rea can be established. In this chapter, hermeneutics refers not only to individual interpretations of actions or meaning, but also to the criteria or framework used to produce such interpretations. A hermeneutics of the situation stresses the connection between this process of meaning-giving and the situation in which the process occurs.Footnote 4 This process is difficult enough in traffic accidents involving traditional cars, and it will become even more difficult if the car is a robot.
An autonomous vehicle is a robot, and a robot is understood here as “an engineered machine that senses, thinks, and acts.”Footnote 5 In view of the increased use of automated vehicles, referred to in the chapter as Automated Driving Systems (ADS), the need for a hermeneutics of the situation has become even more acute. When ascertaining the degree of criminal fault when ADS are involved in traffic accidents, we have to face the unpleasant truths that so far legislation lags behind and current versions of legal codes may fall short. Criminal law concepts dealing with intent and causality therefore need a new, careful scrutiny, because ADS have their own hermeneutics, one which is not easily comprehensible to the driver. ADS hermeneutics are based on their programming, i.e., their algorithms, and this introduces novel understandings of what it means to act – hermeneutical as well as narratological.Footnote 6
In addition to drivers of ADS, legislators may also find the logic of new technologies fuzzy. The question of hermeneutically understanding technology at the legislative level is outside the scope of this chapter, and limited space does not permit me to elaborate. It can be noted, however, that any legislative choice regarding ADS in criminal law will influence future criminal charges, which are themselves always already mini-narratives of forms of reprehensible human behavior, mala prohibita.Footnote 7 Both future legislation and pending concrete cases are in need of an informed hermeneutics of the situation, disciplinary and factual, not least because hermeneutic misunderstanding may be an impediment to the right to a fair trial.
Many disciplines were engaged in the development and construction of ADS before jurists became involved. The difficulties of how to interpret and understand the disciplinary other may easily lead to miscommunication when artificial intelligence (AI) experts who are not jurists must deal with jurists who are not AI experts.Footnote 8 In addition to problems of translation between disciplines, responsibility gaps may occur, “circumstances in which a serious accident happens and nobody can be reasonably held responsible or accountable due to the unpredictability or opaqueness of the process leading to the accident,” technological opaqueness included.Footnote 9 For example, in 2020, a former member of the EU Parliament, Marietje Schaake, had a conversation with an entrepreneur. The entrepreneur told her that one of his engineers working on the design of ADS had asked him whom he would prefer to be killed in case of a collision involving an ADS, a baby or an elderly person, because such options had to be built into the software.Footnote 10 This brings to mind the ethical-philosophical thought experiment called the “Weichenstellersfall” or trolley problem. A train runs out of control and will kill hundreds of people in a nearby train station unless it is diverted to a side track, but on that track there are five workmen who will be killed as a consequence. What should be done? Do you divert the train or not? Even more complicated is the problem’s elaboration in the fat man example: what if you are standing on a bridge and the only way to stop the train is to push the fat man next to you onto the track?Footnote 11 Translated to the topic of ADS, when there is imminent danger, the human driver and/or the ADS have to decide between two evils and choose to kill either one person or the other(s). Any human driver killing one individual in order to save the other(s) will be acting unlawfully, but would that person also be acting culpably? Furthermore, if a democratic state under the rule of law can never weigh the life of one citizen against that of another and prohibits any distinction on the basis of age, gender, and sex, why would we allow an engineer to do just that when programming an ADS? Understanding our fellow human beings and their actions is difficult enough, but understanding, let alone arguing with, an algorithm not of one’s own design is even more so. Technological advances in driving may be intended to reduce the complexity of the human task of driving a vehicle in contemporary traffic – the technological narrative of progress – but may in fact complicate it if such innovation demands that the human be on the alert for any surprise in the form of an error in the algorithmic and/or computational system, causing the vehicle to deviate from its intended course. While research is being done on how human drivers understand and use specific types of ADS, the current human driver-passenger may be hermeneutically challenged. How and when does she recognize that she needs to resume control?
While criminal law does not solely represent the pursuit of moral aims, new AI technologies force us to consider ethical issues in relation to hermeneutical and narratological ones, and to grapple with the criminal liability of ADS. To this end, the chapter incorporates different interdisciplinary lenses, including narratology. The chapter is inspired by the epistemological claim on human knowledge and progress voiced in Francis Bacon’s utopian narrative The New Atlantis, because the fundamental philosophical questions “What is it? What do you mean? How do you know?” apply in technological surroundings as much as in criminal law surroundings. The actors involved have to be able to clearly express their stories, paying careful attention not only to what they are saying and claiming, but also to how they tell their stories.Footnote 12 These ontological, hermeneutical, and methodological questions are therefore narratological questions as well.
In Section II, this chapter addresses the interdisciplinary issues of integrating knowledge, translating between disciplines, and responsibility gaps, as a prolegomenon to Section III, which focuses on criminal liability. In Section IV, the human–robot/ADS interaction is discussed in the context of issues raised by the concept of dolus eventualis. To conclude, Section V returns to the need for a hermeneutics of the situation that adequately addresses ADS challenges.
II Interdisciplinary Observations on the Interrelation of Technology and Law
II.A Whose Department?
The legal implementation of technology is too important to leave to technologists alone. This chapter therefore turns to philosophical thought on technology, in part to prevent us from falling into the trap of Francis Bacon’s idola tribus, i.e., our tendency to readily believe what we prefer to be true.Footnote 13 The idola tribus makes us see what our rationalizations allow. This approach is the easy way out when we do not yet fully understand the effects and consequences of new technologies, but the moment is not far away when ADS becomes fully capable of independent, unsupervised learning, and we should consider Samuel Butler’s visionary point on the side-effect of machine-consciousness, i.e., “the extraordinary rapidity with which they are becoming something very different to what they are at present.”Footnote 14 When that happens, who or what will be in control?
An epistemology based on algorithmic knowledge, while helpful in many applications to daily life, runs the risk of introducing forms of instrumentalism and reductionism. Behind such “substitutive automation” is the “neoliberal ideology … [in which] dominant evaluative modes are quantitative, algorithmic, and instrumentalist, focused on financialized rubrics of productivity.”Footnote 15 The greater the complexity of the issue, the greater the risks posed by algorithmic knowledge. Scientific work in these modes of analysis often disregards the fact that a human being is the source of the data: as the object of the algorithms when data is gathered to run the device, and as the engineer and designer who decides what goes into the programming process. Human fallibility is often disregarded, but ontological perfection, whether of humans or of technologies, is not of this world. While both humans and AI learn by iteration, their individual awareness of past and present danger is not identical, or should we say, not identically programmed.
Some Dutch examples may illustrate the difficulties in relying exclusively on algorithmic knowledge. In 2018, the advanced braking system of a Volvo truck failed because the camera system did not recognize a stationary truck in front of it in the same lane.Footnote 16 In the subsequent crash into the back of that truck, the driver of the Volvo was crushed to death. In a 2017 case, the warning system of a 2014 model Tesla failed to respond to another vehicle that changed lanes; the Tesla did not reduce its speed in due time and hit the side of the other vehicle. The manufacturer admitted that the 2014 model worked well when it came to detecting vehicles right in front of the Tesla, but not when those vehicles made sudden moves.Footnote 17 But that is not an uncommon event in traffic, is it?
The examples show that data-driven machines run the risk of incorporating forms of “epistemological tyranny.”Footnote 18 The human is reduced to the sum of its “dividual” parts, selectively used depending on its user’s needs.Footnote 19 Our making sense of the relations between individuals and their machines is then reduced to connecting the dots. If manufacturers focus on the development of new technologies rather than on the legal frameworks within which their products are going to be handled, any opacity as far as product information is concerned can lead to someone, somewhere, avoiding compliance with the law. We should therefore probe the “narrative of computationalist supremacy.”Footnote 20 The humanities can help provide guidance at the meta-level of juridical-technological discourse, because behind any form of “algorithmic imperialism,”Footnote 21 there is also linguistic imperialism that prioritizes one language of expertise above the other.Footnote 22
Under the influence of Enlightenment thought, the stereotypical or stock story of modern technology, its constitutive narrative, founded as it is in the natural sciences, has been the narrative of human progress.Footnote 23 Its darker side-effects have often been pushed into the background until something went seriously wrong. But it is a mistake to regard technology “as something neutral.”Footnote 24 If we look upon technology as production only, we may be reduced to Deleuzian dividuals, ready to be ordered by others, be they machines or humans, both in technology and law; then “‘[t]he will to mastery’ will prevail and we have to wait and see who gets in control at the level of production.”Footnote 25 While the heyday of legal positivism is behind us, its referential paradigm may well resurface, if for lack of information or understanding we all too readily accept at face value what is held before us as technology. The consequence may be uninformed and unethical applications of technology, without proper legal protection of the humans impacted by it.
This chapter does not promote Luddism. It does, however, highlight the risks involved in a positivist view of both law and technology, i.e., the value-free, unmediated application of any form of code, as opposed to the value-laden human enterprises that they are. As Lawrence Lessig put it, “Code is never found; it is only ever made, and only ever made by us.”Footnote 26 Technology should not be put to use for the simple reason that it is available, and one risk of modern technologies is that if something can be done, someone, somewhere, at some point in time, will actually do it, whatever the consequences. This attitude is brilliantly and cynically voiced in Tom Lehrer’s 1965 song “Wernher von Braun”: “‘Once the rockets are up, who cares where they come down? That’s not my department,’ says Wernher von Braun.”Footnote 27 Careful attention regarding the what, the how, and the why of ADS technology is required. The what of the algorithm, the logic of the if … then, does not coincide with the how of its juridical-technical implementation, let alone the how of its technical discourse. This is no small matter if we think of the if … then structure of the criminal charge in terms of punitive consequences for human behavior involving ADS, and the narratives a defendant would need to steer clear of criminal responsibility.
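The contrast can be made concrete with a deliberately schematic sketch – my own illustration, with invented names and thresholds, reflecting neither an actual system nor an actual statute. The surface form “if … then” is the same in both functions, but what fills the conditions differs fundamentally.

```python
# Schematic, invented illustration: an algorithmic if ... then placed next to
# the if ... then of a criminal charge. Neither reflects a real ADS or statute.

def emergency_brake(obstacle_detected: bool, time_to_collision_s: float) -> bool:
    # Algorithmic if ... then: a factual trigger and a mechanical consequence.
    if obstacle_detected and time_to_collision_s < 2.0:
        return True  # brake
    return False

def conviction_possible(actus_reus_proven: bool, mens_rea_proven: bool,
                        defence_established: bool) -> bool:
    # Juridical if ... then: each condition is itself the outcome of
    # interpretation, evidence, and narrative, not a sensor reading.
    if actus_reus_proven and mens_rea_proven and not defence_established:
        return True  # a conviction may follow
    return False
```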
II.B The Need to Integrate Knowledge
Mono-disciplinary approaches reinforce scientific dichotomies that preclude the necessary risk assessments. They bring us back to the Erklären-Verstehen controversy, as it is called in the nineteenth-century German philosophical tradition: the claim that explanation (Erklären) is reserved to the natural sciences because it pertains only to facts, whereas the humanities can only attribute meaning through hermeneutic understanding (Verstehen). This dichotomy has had far-reaching implications for the epistemological differentiation of knowledge into separate academic disciplines, with each discipline developing its own language and methodology, outlook, goals, and concepts, and each discipline functioning in a different cultural and social context of knowledge production. The interdisciplinary approach advocated here can show that in all epistemological environments, “[d]isciplinary lenses inevitably inform perception.”Footnote 28 An interdisciplinary approach also calls for an appreciation of the fact that any discipline’s or field of expertise’s narratives cannot be understood other than within their cultural and normative universe, the nomos of their origin and existence.Footnote 29
To see the connection between ADS technology and narratology, we could ask what the new technologies’ rhetoric, scripts, and stock stories have been so far, and specifically, what the main narrative thrust of technology is and what it means for the non-specialist addressee. Any field of knowledge “must always be on its guard lest it mistake its own linguistic conventions for objective laws.”Footnote 30 Debate is essential, and engineers and jurists alike need guidance regarding the production and reception of narratives in their respective fields. One such form of guidance is Benjamin Cardozo’s claim that legal professionals need to develop a linguistic antenna sensitive to peculiarities beyond the level of the signifier, because the form and content, the how and the what of a text, are interconnected.Footnote 31 Concepts from narratology can assist in accomplishing this task. All professionals benefit if they learn to differentiate, first, between narrative in the sense of story, or what is told, and discourse, or how it is told. For jurists working in criminal law, it is important, second, to realize that story comprises both events, understood here as either actions or happenings, and the characters that act themselves or get caught up in happenings, and that all of this occurs in specific settings that influence meaning.
Precisely because disciplinary lenses influence us, translating between collaborating disciplines must be undertaken. To the legal theorist James Boyd White, interdisciplinarity is itself a form of translation. He claims that resolving the tensions between disciplines “always involves the establishment of a relation between two systems of language and of life, two discourses, each with its own distinctive purposes and methods, its own ways of constructing the social relations through which it works, and its own set of claims, silences, and meanings.”Footnote 32 At the core of translation as a mode of thought, then, is the claim that we should be alert to the possibilities and limitations of any professional discourse or disciplinary language of expertise. These possibilities and limitations are tied to the context of claims of meaning, and to the cultural and social effects of specific language uses. Translation requires that we address the fundamental difference between the narrative and the analytical, between “the mind that tells a story, and the mind that gives reason” because “one finds its meaning in representations of events as they occur in time, in imagined experience; the other, in systematic or theoretical explanations, in the exposition of conceptual order or structure.”Footnote 33 When transposed to the subject of conceptual thought, the need for attention to language and narrative becomes acute. What, to start with, is “a concept”? White found “concept” a problematic term, because the underlying premise is once again the referentiality of language, one that implies transparency of the semantic load of a concept in one disciplinary language and, following this, unproblematic translation of a concept into another. Such a view is imperialistic, based as it is on the supposition that the “conceptual world … is supposed to exist on a plane above and beyond language, which disappears when its task is done.”Footnote 34
One central example of translation in the context of human–ADS interactions is the concept of driver, currently presumed to be a human driver. Even at current levels of ADS development, and certainly in a future of full ADS automation, a legal concept of the driver premised on a human being is no longer appropriate. Feddes suggests that “the human is a passenger, the automation is the legal driver.”Footnote 35 If this is correct, attribution of legal responsibility in human–ADS interactions would require ADS to be able to handle any situation that crops up.
A Dutch case on the concept of driver illustrates arguments regarding who the driver is in a human–ADS interaction. The driver of a 2017 Tesla Model X was fined €230 in an administrative sanction for using a hand-held mobile phone while driving.Footnote 36 Before the county court, he claimed that because the autopilot was activated, he could no longer be legally considered the driver, and therefore driving and using a hand-held phone did not constitute the simultaneous act prohibited in Article 61A of the Traffic Rules and Signs Regulations 1990.Footnote 37 This narrative did not save the day. The county court found the defendant’s appeal unfounded because Article 1 of the Road Traffic Act 1994 applied. The defendant had stated that while seated in the driver’s seat with the autopilot activated, he regularly held the steering wheel, but he did this because the system disengages itself if the driver does not react to the three auditory warnings given by the vehicle when it notices that the driver is not holding the wheel.Footnote 38 He was found to be the legal driver of the vehicle and not a passenger, in part because drivers are “all road users excepting pedestrians” according to Dutch law.Footnote 39 Like the Netherlands, many legal systems lack a codified definition of the term “driver,” which leads courts to define the term in context.
The defendant’s other argument in this case, that Dutch legislation should be amended to provide a definition, did not help him either, because in criminal cases future-oriented contextual interpretation is prohibited. On appeal, the defendant introduced a new element to his narrative: that a driver using an autopilot is similar to and should be treated like a driving instructor. Since a driving instructor is not the actual driver, he or she is allowed to use a hand-held mobile phone. This narrative forced the Court of Appeal to elaborate on the doctrinal distinction made in the Road Traffic Act 1994 and the Traffic Rules and Signs Regulations 1990 between the actual driver and the legal driver. Article 61A of the Traffic Rules and Signs Regulations 1990, the provision used for the administrative charge against the defendant, pertained to the actual driver, not to the instructor or examiner. Activating and using the autopilot, as the defendant had done, made the defendant the actual driver, as his vehicle was not a fully automated ADS. On this reasoning, Article 61A applied, and the Court of Appeal upheld the judgment.Footnote 40 There is, it seems, nothing automatic in autopilots yet!
A final, comparative question regarding translation is whether the process of ADS construction reflects unconscious biases. Suppose an ADS is of US American design. Surely the designer had US American law at the back of his mind during construction? Does such a vehicle fully comply with the demands of civil-law European systems and the mindsets of European users? An interdisciplinary approach regarding technology and law compels us to think through incompatibilities, while at the same time urging us to integrate their disciplinary discourses as much as possible. Rather than continuing a “‘black box’ mentality,”Footnote 41 we should promote “technologies of humility,”Footnote 42 to preclude technological languages from imposing their conceptual framework to the exclusion of other languages.
II.C Mind the Gap
As noted above, a responsibility gap arises when a serious accident happens but nobody can reasonably be held responsible. Responsibility gaps can arise because of the gaps between disciplinary fields. An example of minding the disciplinary gaps is Santoni de Sio’s attention to ethical issues, in which he urges the integration of different disciplines. He observed that the Dutch Ministry of Infrastructure and Environment divides ethical issues in ADS into three levels: the operational level concerning the programming of automated vehicles; the tactical level of road traffic regulations; and the strategic level of how to deal with the societal impact of ADS.Footnote 43 For ADS, integration “should be done in such a way that ‘meaningful human control’ over the behaviour of the system is always preserved.”Footnote 44 The simple fact that a human is present is not in itself “a sufficient condition for being in control of an activity.”Footnote 45 This is the case because of the complexity of all the causal relations and correlations involved, and because “meaningful” control is not equivalent to “direct” control, i.e., when the driver directly controls the ADS’s full operation. Confusing meaningful and direct control can easily lead to either over-delegation, as when the driver of an ADS overestimates the vehicle, or under-delegation, where the driver overestimates his or her own driving capacities in an ADS context.Footnote 46 The need to clearly define the scope of the driver’s actual freedom to act is also inextricably connected to the notion of volition in criminal law.
III Criminal Liability
III.A Freedom to Act?
Human autonomous agency is inextricably connected to consciousness and to the capacity for rational thought. With these comes free will, manifesting in criminal law, first, as the self-determination to deliberately do the right thing and abstain from what is wrong, e.g., mala per se such as murder, and mala prohibita or what the law prohibits, and, second, as the criterion for assigning legal personhood. When it comes to attributing criminal liability, the first requirement is actus reus, the voluntary act or omission to act that the law defines as prohibited. Historically, the free will necessary for a voluntary act has been defined in numerous ways. It can mean that man is free to decide to go either left or right, even if there is no specific reason to do either. One has freedom to act if one is able to do whatever one decides, the liberum arbitrium indifferentiae.Footnote 47 Free will can also be seen when one is free to decide not to act at all. This is the precursor and precondition of the legal freedom to act in that it presupposes the mental ability to decide whether or not to do this, that, or the other.Footnote 48 That man is aware of having a will is not deemed enough, because being conscious of something is not evidence of its existence.
What are the necessary and sufficient conditions of a voluntary act in the context of ADS, and what are the legal consequences of those conditions? The lack of free will is still widely regarded as the axe at the root of the criminal law tree. The question today in human–robot relations is whether or not free will and forms of technological determinism can be reconciled, theoretically and practically. Is free will compatible with empirically provable determinants of action? If so, then free will is perhaps compatible with machine-determined action, and therefore with legal causality. The necessary condition for free will is that an actor, in doing what he did, could have decided otherwise. In the law, we normally start from the premise that free will is a postulate that holds for the majority of ordinary human beings, as opposed to an empirically provable fact, because statistically speaking that is usually the situation. This approach leads to the traditional position that those suffering from mental illness are not free, and hence not or only partly responsible. The law’s beginning assumption of free will also leads to the impossibility of punishing those about whom one cannot say anything other than that we do not know whether their will was hampered or not. Practically speaking, free will is established when a state of exception, e.g., insanity in humans, does not occur.
Two opposing views regarding the application of these ideas to ADS could be entertained. The first is that if an ADS is an agent capable of learning, in the sense of adapting its actions to new information, it could be held criminally responsible, with or without attributing consciousness of the human type, because the algorithmic reasoning skills and autonomy of the ADS would suffice. The second is that if charges are brought against the human driver, one could argue that the ADS provides a defense based on the state of exception approach to free will discussed above. The human driver does not know the mind of the ADS and cannot probe its technological sanity, partly because the ADS is a device programmed to act in response to its environment, but not programmed by the driver.
Both views are connected to the question of a possible form of legal personhood for AI, another condition for the imposition of legal responsibility. As a status conferred by law on humans and entities such as corporations, legal personhood is a construct. In everyday life, it is relatively easy to recognize a fellow human being if you meet one. We then recognize the rights and responsibilities of that independent unit, and we distinguish among different entities with legal personhood, e.g., between a toddler without and an adult with legal obligations. Things are already more difficult regarding artificial persons such as corporations, in terms of the information required to assess what the artificial person’s rights and obligations are, and the inquiry becomes more fraught regarding ADS.Footnote 49 Another issue is that as a matter of legal doctrine, most countries have a closed system of legal personhood. Adding to it may not be as easy as, e.g., the European Parliament seemed to think when, in 2017, it spoke about personhood in the form of an “electronic personality” for robotsFootnote 50 without explaining which form it could or should take. The European Commission subsequently declined to grant such legal status to AI devices.Footnote 51
The issues of legal personhood and voluntariness are related. Voluntariness of the actus reus of any criminal charge is an issue for ADS. We assume that humans have volition because they do most of the time, and so the law does not always explicitly address the question of human volition. However, voluntary participation in an action is intimately connected to the Enlightenment model of thought that has individual autonomy at its heart and informs our current understandings of law. The requirement for voluntariness therefore prompts the issue of legal personhood to return with a vengeance, because the actus reus of a criminal charge, as the outwardly visible activity subject to our human understanding and judgment, is understood to be one committed by a legally capable and responsible person, unless otherwise proved. In short, the basic proposition of criminal law is that if one has legal personhood, one can be held responsible, if there is sufficient evidence and if the actus reus is accompanied by mens rea, the guilty mind. Legal personhood and voluntariness are elements that therefore remain inevitably entangled in any discussion of criminal liability and ADS.
III.B Which Guilt and Whose Guilty Mind?
Mens rea, the requisite mental state that accompanies the actus reus, is required for criminal responsibility, and a precise articulation of mens rea is in turn required by substantive due process. But because criminal law regarding ADS is currently under-developed, we should be even more aware than usual of the doctrinal differences regarding mens rea terminology at different levels. In particular, when comparing legal systems, legal concepts applicable in common law settings cannot immediately be translated to civil law surroundings. In any discussion of mens rea and ADS, we are always dealing with contested definitions and fundamental differences involving the mental pictures that jurists have of their own civil law and common law concepts. Comparative research on ADS is needed, but seemingly similar concepts may be false friends.
Regarding culpability, the US American Model Penal CodeFootnote 52 distinguishes between acting purposely, knowingly, recklessly, and negligently, with negligence occurring when one fails to exercise the care that the average prudent person would exercise under the same conditions. Culpable criminal negligence in this framework is recklessness or carelessness that results in death or injury of another person, and it implies that the perpetrator had a thoughtless disregard of the consequences or an indifference to other people’s safety. The inclusion of negligence in the Model Penal Code was controversial, because purpose, knowledge, and recklessness entail the conscious disregard of the risk of harm, i.e., subjective liability, whereas negligence does not, because the risk of harm is one that the actor ought to have been aware of, but was in fact not. Culpability as negligence is therefore often thought to result in objective, i.e., strict, liability. For many jurists, negligent criminal culpability sits uneasily with the requirement of “some mental posture toward the harm.”Footnote 53 In the criminal law of England and Wales, “there is to be held a presumption … that some element of ‘mens rea’ will be required for conviction of any offense, unless it is excluded by clear statutory wording.”Footnote 54 Various forms of mens rea found in statutory definitions and case law presume either: intention, direct or oblique, i.e., acting in the knowledge that a specific result will or is almost certain to occur; recklessness, either subjective, i.e., foreseen by the actor, or objective, i.e., the reasonable person threshold; or negligence, a deviation from the reasonable care standard of behavior. While recklessness resembles negligence, negligence does not coincide with recklessness.
In German criminal law, recklessness is not a separate concept. It finds a place within the concept of intention as the condition for criminal liability. Intention and negligence are the defining concepts. In this system, the form of liability relevant to ADS could be dolus eventualis, a concept which resembles the related common law concepts of recklessness and negligence, but which, unlike advertent negligence, does not rest on the actor’s belief that the harmful result would not occur. Dolus eventualisFootnote 55
affirms intention in cases in which the actor foresaw a possible but not inevitable result of her actions (the element of knowledge) and also approved of, or reconciled herself to, the possible occurrence of that result (the volitional or dispositional element). This is contrasted with cases in which the volitional element said to be essential to all forms of intention is missing because the actor earnestly relied on the non-occurrence of the result foreseen as possible.
Two examples may illustrate the difference between intention and negligence, and the role of dolus eventualis. An example of a missing volitional element was presented in a Dutch case of allegedly reckless driving. The defendant driver was driving at double the maximum speed, and the case involved a collision that killed the five passengers of the other car. The driver was charged with homicide. The Dutch Supreme Court judged him to be extremely negligent, but held that his act was not intentional as he had not consciously accepted the possible outcome of himself being killed by his own speeding, i.e., he relied on precisely the non-occurrence of an accident.Footnote 56 In a comparable German case, two persons were involved in an illegal street race which ended in an accident that killed the driver of another car who relied on the green light. The defendants were charged with murder, and the judicial debate focused on whether they had accepted the possible danger to themselves knowingly and willingly, and had been indifferent, “gleichgültig” as the Bundesgerichtshof later called it, to the possible fate of others in case of an accident. The Berlin Landgericht pronounced a life sentence, then the Bundesgerichtshof revised the sentence on a technical matter, the Landgericht then stuck to its earlier decision, and in the second revision the Bundesgerichtshof confirmed the sentence.Footnote 57 The driver was convicted.
The dispositional element of dolus eventualis as indifference to what the law demands of us was developed by Karl Engisch in the 1930s, and it became the criterion to distinguish between intention and negligence.Footnote 58 In the 1980s, Wolfgang Frisch developed a risk-recognition theory. He thought of intention in terms of “an actor’s realisation, at the time of acting, that a risk exists that the offence might occur, which risk the legal order regards as unacceptable.”Footnote 59 Intentional action requires that the actor was aware of and deliberately created a public wrong. Greg Taylor elaborated on Frisch’s theory by means of an example in which a car driver overtaking another car on a blind corner either relies on the non-occurrence of an accident or is indifferent to the outcome. Taylor asserted that “[c]learly, by overtaking when it is not safe to do so, she creates a risk, and one which is legally unacceptable as well … Rather, the legal system condemns her conduct as unacceptable because, and as soon as, it creates a situation of danger beyond the ordinary risks of the road; it does not wait to see whether anyone is actually killed as a result of it.”Footnote 60
What issues are raised if dolus eventualis is applied to human driver or ADS defendants? If the foreseeability of an abstract risk is what is legally unacceptable, the distinction between negligence and dolus eventualis blurs and there is a shift in the direction of strict liability for the human driver of an ordinary car as well as for the human driver of an ADS, or the ADS itself if we accept the consequences of its self-learning. In terms of evidence, it then becomes more difficult to distinguish between the advertent negligence of the driver in the Dutch example above, on the one hand, and intention in the form of dolus eventualis, on the other. The question will then be whether we make the doctrinal move from culpa to dolus eventualis and/or strict liability in accidents involving ADS.
IV AI and the Human: Whose Liability, Which Gap?
Societal views often differ strongly from legal decisions on the concepts of recklessness and negligence, precisely because the death of innocent people is involved. But when is an occurrence a deliberate act warranting characterization as intentional, and when is it merely an event that does not warrant criminal liability? The answer depends on the hermeneutic judicial act of evaluating facts and circumstances, and this major challenge arises in all ADS cases, not only because the information in the file may be sparse.
Identifying the actus reus and mens rea for purposes of determining wrongfulness and culpability in individual ADS cases also creates major challenges for legislators pondering policy. As Abbott and Sarch suggest, “punishing AI could send the message that AI is itself an actor on par with a human being,” and “convicting AI of crimes requiring a mens rea like intent, knowledge, or recklessness would violate the principle of legality.”Footnote 61 The authors develop answers to what they call the “Eligibility Challenge,” i.e., what entities connected to ADS, including AI, are eligible for liability.Footnote 62 The simplest solution would be the doctrine of respondeat superior,Footnote 63 i.e., the human developers are responsibleFootnote 64 if and when they foresee the risk that an AI will cause the death of a person, because that would be reckless homicide. The second solution is strict, no-fault liability of a defendant, and the third solution is to develop a framework for defining new mens rea terms for AI, which “could require an investigation of AI behavior at the programming level.”Footnote 65 In court, judges could then be asked to further develop the relevant mens rea. However, the task of constructing a hermeneutics of the situation at the programming level would not immediately alleviate the judge’s evidentiary job. The interdisciplinary challenges of translation noted in Section II would still be present, and they probably require additional technological expertise in order to gauge the narratives told in court by the parties involved.Footnote 66
Issues are also raised by a focus on legal responsibility for AI, because per Mary Midgley, what “actually happens to us will surely still be determined by human choices. Not even the most admirable machines can make better choices than the people who are supposed to be programming them.”Footnote 67 This issue arises even in inquiries into negligence and dolus eventualis, because whileFootnote 68
humans may classify other drivers as cautious, reckless, good, and impatient, for example, driverless cars may eschew discrete categories … in favor of tracking the observed behavior of every single car ever encountered, with that data then uploaded and shared online – participating in the collective development of a profile for every car and driver far in excess of anything humanly or conceptually graspable.
This chapter argues that human agency matters at all levels of evaluating an ADS. Abbott and Sarch assert thatFootnote 69
[o]ne conceivable way to argue that an AI (say, an autonomous vehicle) had the intention (purpose) to cause an outcome (to harm a pedestrian) would be to ask whether the AI was guiding its behavior so as to make this outcome more likely (relative to its background probability of occurring). Is the AI monitoring conditions around it to identify ways to make this outcome more likely? Is the AI then disposed to make these behavioral adjustments to make the outcome more likely (either as a goal in itself, or as a means to accomplishing another goal)? If so, then the AI plausibly may be said to have the purpose of causing that outcome.
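Read computationally, the test Abbott and Sarch propose could be understood as asking what the system’s objective function rewards. A minimal, hypothetical sketch – the names and structure are mine, not Abbott and Sarch’s or any manufacturer’s – of the kind of planner their questions would be put to:

```python
# Hypothetical sketch: a planner "guides its behaviour so as to make an outcome
# more likely" only in the sense that a human-written objective scores candidate
# actions. Names and structure are invented for illustration.

def plan(candidate_actions, observed_conditions, outcome_probability, outcome_weight):
    """Return the action scoring highest under the designer's objective.

    outcome_probability(action, observed_conditions) estimates how likely the
    outcome in question becomes if the action is taken; outcome_weight is the
    value a human designer attached to that outcome (positive if sought,
    negative if to be avoided).
    """
    def score(action):
        return outcome_weight * outcome_probability(action, observed_conditions)
    return max(candidate_actions, key=score)
```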
However, humans create AI programmes. The potential to programme ADS in a certain way, and the decision of whether to do that or not, brings us back to the case of the trolley discussed in Section I, and it supports the position that human agency is relevant to evaluating ADS. Another way of considering the role of humans in ADS is provided by what Philippa Foot calls the “doctrine of the double effect,” “the distinction between what a man foresees as a result of his voluntary action and what, in the strict sense, he intends”; in other words, he “intends in the strictest sense both those things that he aims at as ends and those that he aims at as means to his ends.”Footnote 70 Per Foot, the thesis is that it is “sometimes permissible to bring about by oblique intention what one may not directly intend.”Footnote 71 But can a human inside an ADS exercise free will when it comes to the vehicle’s actions?
Could we turn the tables on an ADS, and say that in the current state of the art there is always the abstract risk that such a vehicle will swerve out of the control of its human driver, on account of newly developed intent or on some other basis, and that because the human driver is unable to anticipate such actions in a way that would allow prevention,Footnote 72 the risk is agent-relative to the manufacturer-engineer-designer and should be allocated solely to them, i.e., Abbott and Sarch’s first solution?Footnote 73 This would avoid the question of whether ADS can act intentionally in criminal law, as the risk would be independent of the mental state of the human driver. Depending on the jurisdiction, it may also bring back questions of legal personhood regarding corporate entities.
If the focus of liability is on the manufacturer-engineer-designer, how should liability be understood if an ADS device containing algorithms thinks for itself and gains a certain autonomy? Mary Shelley’s fictive monster constructed by Victor Frankenstein began to think for itself. How would manufacturer-engineer-designer liability for future actions not included in the original programming be understood, e.g., when the machine learning is unsupervised? If we want to distribute risk evenly, we would probably need empirical research to do the math regarding the probability of harm in terms of percentages. For the legislator, the need for refined probabilities of risk could mean an increase in highly refined regulatory offenses. This approach would require a novel definition – or should we say concept? – of conduct, depending on whether there is any active role left for the human driver-passenger. In narratological terms, the driver finds herself in an inbuilt plot of a technological narrative from which she cannot escape; she cannot constrain the non-human actant other than by trying to take over the system when she sees something go wrong, and only if she sees it in time. Thinking about ADS in this way would mean that many advantages of the automatic part of automated driving systems are done away with, and yet the driver still constantly faces the risk of a future criminal charge.
V Conclusion: The Outward and Inward Appearances of Intention
This chapter argues for the development of a hermeneutics of the situation to address the issues raised by ADS. As surveyed in the chapter, the issues are many. The factum probandum with regard to foresight and the dispositional element included in the concept of dolus eventualis are surrounded by challenges. In accidents involving ADS, the debates regarding what the evidence shows in concrete cases will be massive. How is one to decide that a specific human or non-human defendant’s disposition suffices for a conviction? These legal determinations will require a careful distinction between the outward appearance, i.e., apparently careless driving, and the legal carelessness of the driver, i.e., his or her indifference to the outcome. The externally ascertainable aspects of any defendant’s action must be taken into consideration in order to make a coherent finding on the elements “knowingly and willingly” of intent.
Some final examples illustrate the importance of the distinction between outward appearance and inward intent or carelessness. Intelligent Traffic Light Control systems can perceive traffic density by means of floating car data from apps and then decide who gets right of way; they are based on the algorithmic ideal of the traffic light talking back to the vehicle. Numerous cases have occurred of an ADS braking spontaneously in situations where traffic did not require it, merely because the autopilot thought it recognized the location as one where it had braked earlier. This ADS response is literally a hermeneutics of the situation, but technically a false positive, and the human involved may suffer the consequences.
In a 2019 Dutch criminal case, the defendant’s vehicle had swerved from its lane and collided head-on with an oncoming car. Based on Article 6 of the Road Traffic Act, the defendant was subject to the primary charge of culpable behavior in that he caused a traffic accident by his recklessness, or at a minimum the subsidiary charge that he caused the accident by his considerably careless and/or inattentive behavior, and as a result a person was killed.Footnote 74 The defendant pleaded not guilty, arguing that the threshold test for recklessness and/or carelessness had not been met, as he had taken his eye off the road for only a few seconds because he had assumed that the Autosteer system of his Tesla was activated. This position was not given any weight by the court. The defendant was found guilty because his lawyer admitted his client had taken his eye off the road for four to five seconds, and this action was characterized as “considerable” inattentiveness.Footnote 75
In the well-known Vasquez case in the United States, an investigation by the National Transportation Safety Board suggested that the driver had been visually distracted. Generally speaking, distraction is “a typical effect of automation complacency,”Footnote 76 and it suggests the need for driver training. But in this case, the driver had presumably been gazing downward to the bottom of the center console for 34 percent of the time that the ADS was moving, 31.5 minutes, and about “6 seconds before the crash, she redirected her gaze downward, where it remained until about 1 second before the crash,” so that there was no time to react and avoid the crash.Footnote 77 The driver had supposedly been streaming a television show on her mobile phone during the entire trip.Footnote 78 The vehicle “was designed to operate in autonomous mode only on pre-mapped, designated routes.”Footnote 79 Did the fact that it was a test drive, and a short one at that, on a test road, make the driver behave irresponsibly by watching television while driving? Technical issues with regard to the vehicle and/or the company’s instruction of its employees aside, any driver of a non-automatic vehicle who acts in this way will probably be held criminally responsible, at the very least for behaving negligently. The difference between a traditional driver and a human operator of an ADS has not made a great difference in court verdicts yet, in part because inattentiveness attracts liability of some sort. It is, after all, always a human driver who sets the ADS into motion.
Precisely because it is a mental phenomenon, the general concept of intent, as Ferry de Jong contends, is “an essentially ‘normative’ phenomenon.”Footnote 80 It “designates … a criminally relevant manifestation of intentional directedness between a subject and the social-life world,” so that “this intention externalizes itself in the action performed and is thereby rendered amenable to interpretation,” which as a “rule-guided process consists of a pre-eminently hermeneutic activity: by way of outward indications, the internal world of intentions and perceptions … is reconstructed.”Footnote 81 If the liability of ADS is to be hermeneutically ascertained, rather than explained by means of, e.g., statistical evidence on traffic accidents in specific locations that invite some people’s dangerous driving, a hermeneutics of the situation in at least two forms is required. First, in court surroundings, the situation would include the doctrinal, conceptual situation of a specific case, a “hermeneutics of the [legal] signification,”Footnote 82 a thorough investigation of the defendant’s acts and omissions, and the situation of technology in the sense of the state-of-the-art of the vehicle involved. Second, on the meta-level, such hermeneutics would include a debate on the acceptance of various forms of criminal liability in relation to forms of legal personhood, its technological thresholds and machine autonomy, and societal views on the subject.
A hermeneutics of the situation for ADS is necessarily interdisciplinary. The humanities can contribute to the construction of a hermeneutics of the situation partly by means of narratological insights, because insight is needed into the analysis of narratives, both as story, the what, and as discourse, the how, in the pre-trial phase and in court, as well as into the narrative structure of technological proposals and their underlying arguments. As long as technological devices are not fully predictable, explanation must be complemented by understanding. To the French philosopher Paul Ricoeur, “narrative is ‘imitation of action’ (mimesis),”Footnote 83 which means that “to say what an action is, is to say why it is done.”Footnote 84 In legal surroundings, narratives of judgment therefore address intent and legal imputation. The humanities can also contribute to a hermeneutics of the situation because the technological context of ADS raises questions about the ethics of programming. There is good reason to add a legal-hermeneutic methodology of understanding when deciding ADS cases, lest our technological “swerve” swerve out of control, and we gain no further knowledge of causes and the secret motions of things, as Bacon urged us to.Footnote 85