Artificial intelligence (AI) and concerns about its potential impact on humanity have been with us for more than half a century. The term entered the discourse in 1956 at a Dartmouth College workshop; early research explored topics like proving logic theorems, deducing the molecular structure of chemical samples, and playing games such as draughts. A dozen years later, Stanley Kubrick’s film 2001: A Space Odyssey offered an iconic vision of a machine empowered to override the decisions of its human counterparts: the HAL 9000’s eerily calm voice explaining why a spacecraft’s mission to Jupiter was more important than the lives of its crew.
Both AI and the fears associated with it advanced swiftly in subsequent decades. Though worries about the impact of new technology have accompanied many inventions, AI is unusual in that some of the starkest recent warnings have come from those most knowledgeable about the field – Elon Musk, Bill Gates, and Stephen Hawking, among others. Many of these concerns are linked to ‘general’ or ‘strong’ AI, meaning the creation of a system capable of performing any intellectual task that a human could – a prospect that raises complex questions about the nature of consciousness and self-awareness in a non-biological entity.
The possibility that such an entity might put its own priorities above those of humans is non-trivial, but this book focuses on the more immediate challenges raised by ‘narrow’ AI – meaning systems that can apply cognitive functions to specific tasks typically undertaken by a human.Footnote 1 A related term is ‘machine learning’, a subset of AI that denotes the ability of a computer to improve on its performance without being specifically programmed to do so.Footnote 2 The program AlphaGo Zero, for example, was merely taught the rules of the notoriously complex board game Go; using that basic information, it developed novel strategies that have established its superiority over any human player.Footnote 3
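To make that idea of improvement without explicit programming concrete, the following is a minimal sketch of self-play reinforcement learning. It is emphatically not AlphaGo Zero’s actual method (which combined deep neural networks with Monte Carlo tree search), only a toy analogue: the program is given nothing but the rules of a trivial game – single-pile Nim, where players alternately take one to three stones and whoever takes the last stone wins – and improves purely from the outcomes of games against itself. The game, learning rate, and episode count are illustrative choices.

```python
# Self-play reinforcement learning in miniature: the agent knows only the
# rules of single-pile Nim and learns a policy from its own games.
import random
from collections import defaultdict

ACTIONS = (1, 2, 3)        # legal moves: take 1, 2, or 3 stones
Q = defaultdict(float)     # Q[(stones_remaining, action)] -> learned value
ALPHA, EPSILON = 0.1, 0.1  # learning rate and exploration rate

def choose(stones, explore=True):
    legal = [a for a in ACTIONS if a <= stones]
    if explore and random.random() < EPSILON:
        return random.choice(legal)          # occasionally try something new
    return max(legal, key=lambda a: Q[(stones, a)])

def self_play_episode(start=21):
    stones, history = start, []
    while stones > 0:
        action = choose(stones)
        history.append((stones, action))     # moves alternate between players
        stones -= action
    # The player who took the last stone won. Credit each move with the
    # eventual outcome, flipping the sign for the opposing player's moves.
    reward = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward

for _ in range(50_000):
    self_play_episode()

# The winning strategy (leave the opponent a multiple of four stones) was
# never programmed in, yet greedy play now recovers it:
print([choose(s, explore=False) for s in (5, 6, 7, 9)])  # expected: [1, 2, 3, 1]
```

Nothing in the code encodes the optimal strategy; it emerges from experience alone, which is the sense in which such a system ‘improves on its performance without being specifically programmed to do so’.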
The field of AI and law is fertile, producing scores of books, thousands of articles, and at least two dedicated journals.Footnote 4 In addition to the more speculative literature on what might be termed robot consciousness,Footnote 5 much of this work describes recent developments in AI systems,Footnote 6 their actual or potential impact on the legal profession,Footnote 7 and normative questions raised by particular technologies – driverless cars,Footnote 8 autonomous weapons,Footnote 9 governance by algorithm,Footnote 10 and so on. A still larger body of writing overlaps with the broader fields of data protection and privacy, or law and technology more generally.
The bulk of that literature tends to concentrate on the activities of legal practitioners, their potential clients, or the machines themselves.Footnote 11 The objective here, by contrast, is to focus on those who seek to regulate such activities and the difficulties that AI systems pose for government and governance. Rather than taking specific actors or activities as the starting point, this book emphasizes the structural problems that AI poses for meaningful regulation as such.
The term ‘regulation’ is chosen cautiously. Depending on context, its meaning can range from any form of behavioural control, whatever the origin, to the specific rules adopted by government that are subsidiary to legislation.Footnote 12 In the United States, regulation is often cast as a burden, the antithesis of free markets; in the academic literature, competing visions posit it either as an infringement of private autonomy or as a collaborative enterprise.Footnote 13 Across the various definitions, much of the literature discusses the different roles that specific regulators can and should play in economic and political activities.
For present purposes, the focus will be on public control of a set of activities.Footnote 14 This embraces two important aspects. The first is the exercise of control, which may be through rules, standards, or other means, including supervised self-regulation. The second is that such control is exercised by one or more public bodies. These may be the executive, the legislature, the judiciary, or other governmental or intergovernmental entities, but the legitimacy of this form of regulation lies in its connection – however loose – to institutions of the state. The emphasis on public control also highlights what is to be avoided: a set of activities that would normally be regulated falling outside the effective jurisdiction of any public entity because they are undertaken by AI systems. Regulation need not, however, be undertaken purely through law in the narrow sense of the command of a sovereign backed by sanctions.Footnote 15 It also includes economic incentives such as taxes or subsidies, recognition or accreditation of professional bodies, and other market-based mechanisms.Footnote 16
One question that arises in this context is the extent to which AI systems themselves might have a role to play in regulation.Footnote 17 A central argument of the book, however, is that primary responsibility for regulation must fall to states. This embraces both a negative and a positive aspect. The negative aspect is that, in the near term, states should not outsource inherently governmental functions to entities (AI or otherwise) that are beyond their control.Footnote 18 The positive aspect is that, moving forward, effective management of the risks associated with AI will require international co-operation and co-ordination. Primary does not mean exclusive responsibility, however. Technology companies already play an outsized role in determining standards; this role will doubtless expand as AI systems become more complex. Yet the legitimacy of those standards and their incorporation into regulatory structures will be greatest, and they will be most effective, when endorsed by publicly accountable institutions.
The book is written for a global audience, but it is striking that the vast majority of the published material relies almost exclusively on the laws of Europe and the United States. That is understandable, given the economic importance of these jurisdictions and their influence in establishing global standards, directly or indirectly, in many fields related to technology. The two regimes also offer interesting points of comparison, with human rights concerns shaping the European response while market-based approaches hold sway in the United States. In the field of AI, however, China is – or soon will be – the dominant actor.Footnote 19 The book therefore examines the Chinese approach and the relationship between that dominance and the far more limited regulation within China. Another prominent Asian jurisdiction considered is Singapore, which has long sought to position itself as a rule-of-law hub to attract investment. As in the case of data protection law,Footnote 20 Singapore’s government has explicitly set the goal of regulation as being to attract and encourage AI innovation.Footnote 21
Such a public law perspective has been sorely lacking in debates over regulation of AI to date, while international law and institutions have been left out almost entirely.Footnote 22 The book builds on the author’s past work looking at public authority in times of crisis – ranging from humanitarian intervention and transitional administration, when a state turns on its population or collapses entirely,Footnote 23 to the outsourcing of security to private actors and the expansive powers asserted by intelligence agencies in response to terrorism.Footnote 24 AI may not yet pose a threat on such a scale, but lessons on how to manage risk, draw red lines, and preserve the legitimacy of public authority are useful now – and will be essential if it ever does.
Outline of the Book
The book is organized around the following sets of problems: How should we understand the challenges to regulation posed by the technologies loosely described here as ‘AI systems’? What regulatory tools exist to deal with those challenges and what are their limitations? And what more is needed – rules, institutions, actors – to reap the benefits offered by AI while minimizing avoidable harm?
Part I groups the challenges to regulation into three broad categories.
The first, considered in chapter one, is speed. Since computers entered the mainstream in the 1960s, the efficiency with which data can be processed has raised regulatory questions. This is well understood with respect to privacy. Data that was notionally public – divorce proceedings, say – had long been protected through the ‘practical obscurity’ of paper records.Footnote 25 When such material was available in a single hard copy in a government office, the chances of one’s acquaintances or employer finding it were remote. Yet when it was computerized and made searchable through what ultimately became the Internet, practical obscurity disappeared. Today, high-speed computing poses comparable threats to existing regulatory models in areas from securities regulation to competition law, merely by enabling lawful activities – trading in stocks, or comparing and adjusting prices, say – to be undertaken more quickly than previously thought possible. Many of these questions are practical rather than conceptual and apply to technologies other than AI. Nevertheless, current approaches to slowing down decision-making – through circuit-breakers that halt trading, for example, as sketched below – will not address all of the problems raised by the speed of AI systems.
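To make the existing approach concrete, the following is a minimal sketch of market-wide circuit-breaker logic. The thresholds and halt durations are hypothetical placeholders, loosely inspired by exchange-style rules rather than drawn from any actual rulebook.

```python
# An illustrative circuit breaker: trading halts when prices fall too far,
# too fast. Thresholds and halt durations are invented for this sketch.
from dataclasses import dataclass

# (fractional drop from reference price, halt in minutes); None ends the session.
LEVELS = ((0.07, 15), (0.13, 15), (0.20, None))

@dataclass
class CircuitBreaker:
    reference_price: float  # e.g. the previous session's closing price
    tripped: int = 0        # number of levels already triggered today

    def check(self, price: float):
        """Return minutes to halt, None to close the session, or 0 to continue."""
        drop = (self.reference_price - price) / self.reference_price
        # Scan from the deepest untripped level down, so a sudden crash
        # triggers the most severe applicable response.
        for level in reversed(range(self.tripped, len(LEVELS))):
            threshold, halt = LEVELS[level]
            if drop >= threshold:
                self.tripped = level + 1
                return halt
        return 0

cb = CircuitBreaker(reference_price=100.0)
for price in (95.0, 92.5, 86.0, 79.0):
    action = cb.check(price)
    if action is None:
        print(f"price {price}: trading closed for the day")
    elif action:
        print(f"price {price}: halt trading for {action} minutes")
    else:
        print(f"price {price}: trading continues")
```

Even a mechanism of this sort presupposes that the harm manifests as a visible price move on a single venue; activity that is fast, distributed across systems, and individually lawful is far harder to interrupt.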
A second set of challenges arises from the increasing autonomy of those systems, which exposes gaps in regulatory regimes that assume the centrality of human actors. Yet surprisingly little attention has been given to what is meant by ‘autonomy’ and its relationship to those gaps. Driverless vehicles and autonomous weapon systems are the most widely studied examples, but related issues arise in algorithms that allocate resources or determine eligibility for programmes in the private or public sector. Chapter two develops a novel typology that distinguishes three lenses through which to view the regulatory issues raised by autonomy: the practical difficulties of managing risk associated with new technologies, the morality of certain functions being undertaken by machines at all, and the legitimacy gap that opens when public authorities delegate their powers to algorithms.
Chapter three turns to the increasing opacity of AI. As computer programs become ever more complex, the ability of non-specialists to understand them diminishes. Opacity may also be built into programs by companies seeking to protect proprietary interests. Both kinds of system can nonetheless be explained, albeit with recourse to experts or an order to reveal their internal workings. Yet a third kind of system may be naturally opaque: some machine learning techniques are difficult or impossible to explain in a manner that humans can comprehend. This raises concerns when the process by which a decision is made is as important as the decision itself. For example, a sentencing algorithm might produce a ‘just’ outcome for a class of convicted persons. Unless the justness of that outcome for an individual defendant can be explained in court, however, it is, quite rightly, subject to legal challenge. Separate concerns are raised by the prospect that AI systems may mask or reify discriminatory practices or outcomes. The contrast between explainable and opaque systems is sketched in the toy example below.
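As a rough illustration of that gap, the following sketch (using scikit-learn, assumed installed, on entirely made-up toy data) trains two models on the same inputs. One can recite its reasoning as rules a court could examine; the other’s ‘reasons’ are nothing more articulate than matrices of learned weights.

```python
# Two models, same toy data: one explainable by inspection, one not.
# The features and labels are invented solely for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

# Hypothetical features: [prior_offences, age]; label 1 = 'high risk'.
X = [[0, 45], [1, 52], [4, 23], [6, 19], [0, 30], [5, 40], [2, 60], [7, 21]]
y = [0, 0, 1, 1, 0, 1, 0, 1]

# A shallow decision tree yields rules that can be read out in court.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["prior_offences", "age"]))

# A small neural network fits the same data, but its 'explanation' is
# only its weight matrices.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X, y)
print([w.shape for w in net.coefs_])  # e.g. [(2, 16), (16, 1)]
```

The third category described above – naturally opaque systems – only deepens the problem: even a developer with full access to those weights may be unable to say why a particular input produced a particular output.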
This is, of course, a non-exhaustive list of the challenges posed by AI. Among others on the horizon are the likely displacement of large segments of the workforce and the possibility of artificial general intelligence raising meaningful questions about the rights of ‘smart robots’.Footnote 26 Nor does this study seek to examine the broader ethical implications of AI taking on greater roles in society, or the regulation of cyberspace, virtual worlds, and so on.Footnote 27 Similarly, it will not attempt to cover fully the potential impact of blockchain or distributed ledger technology.Footnote 28 The more modest aim is to use the problems identified in this part to highlight gaps in existing regulatory models with a view to seeing whether the tools at our disposal can fill them.
Part II, then, turns to those tools. Chapter four examines how existing laws can and should apply to emerging technology through attribution of responsibility. Legal systems typically seek to deter identifiable persons – natural or juridical – from certain forms of conduct, or to allocate losses to those persons. Responsibility may be direct or indirect: key questions are how the acts and omissions of AI systems can and should be understood. Given the complexity of those systems, novel approaches to responsibility have been proposed, including special applications of product liability, agency, and causation. More important and less studied is the role that insurance can play not only in compensating harm but also in structuring incentives for action. Another approach is to limit the ability to avoid responsibility, drawing on the literature on outsourcing and the prohibition on transferring certain forms of responsibility – most notably the exercise of discretion in the public sector.
As AI systems operate with greater autonomy, however, the idea that they might themselves be held responsible has gained credence. On its face, the idea of giving those systems a form of independent legal personality may seem attractive. Yet chapter five argues that this is both too simple and too complex. It is simplistic in that it lumps a wide range of technologies together in a single legal category ill-suited to the task; it is overly complex in that it implicitly or explicitly embraces the anthropomorphic fallacy that AI systems will eventually assume full legal personality in the manner of the ‘robot consciousness’ arguments mentioned earlier. Though the emergence of general AI is a conceivable future scenario – and one worth taking precautions against – it is not a sound basis for regulation today.
Notions of foreseeability underpin another tool that has been embraced as a means of limiting the risks associated with AI: transparency. Chapter six considers the manner in which transparency and the related concept of ‘explainability’ are being elaborated, notably the ‘right to explanation’ in the European Union (EU) and a move towards explainable AI (XAI) among developers. These are more promising than the arguments for legal personality, but the limits of transparency are already beginning to show as AI systems demonstrate abilities that even their programmers struggle to understand. That is leading regulators to cede ground and settle for explanations of adverse decisions rather than transparency of decision-making processes themselves – an approach sketched below. Such a backward-looking approach relies on individuals knowing that they have been harmed – which will not always be the case – and should be supplemented with forward-looking mechanisms like impact assessments, audits, and an ombudsperson.
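The following sketch illustrates that narrower, decision-by-decision form of explanation: given only black-box access to a model, probe which change to the input would have flipped an adverse outcome – a simple ‘counterfactual’ explanation. The loan-scoring function and its weights are invented stand-ins for a system whose internals are unavailable or incomprehensible.

```python
# A minimal counterfactual explanation of a single adverse decision,
# using only black-box access. The scoring rule is a made-up stand-in.
def black_box(applicant):
    score = (0.4 * applicant["income"] / 1000
             - 0.8 * applicant["defaults"]
             + 0.02 * applicant["years_employed"])
    return score > 2.0  # True = application approved

def explain_rejection(applicant, candidate_changes):
    """Find a single-feature change that would have flipped the decision."""
    for feature, values in candidate_changes.items():
        for value in values:
            if black_box({**applicant, feature: value}):
                return (f"Would be approved if {feature} were {value} "
                        f"(currently {applicant[feature]}).")
    return "No single-feature change examined flips the decision."

applicant = {"income": 4000, "defaults": 1, "years_employed": 3}
print(black_box(applicant))  # False: an adverse decision, but why?
print(explain_rejection(applicant, {
    "defaults": [0],
    "years_employed": [5, 10],
    "income": [5000, 6000, 7000],
}))  # -> approval would require income of 7000
```

Such an explanation is only generated once a decision has been made and contested – which is precisely why the chapter argues for supplementing it with forward-looking impact assessments and audits.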
The final part of the book considers the rules and institutions required to address the inadequacies of existing tools and regulatory bodies.
As the preceding chapters demonstrate, existing norms, suitably interpreted, are able to deal with many of the challenges presented by AI. But not all. Chapter seven begins with a survey of guides, frameworks, and principles put forward by states, industry, and intergovernmental organizations. These diverse efforts have led to a broad consensus on half a dozen norms that might govern AI. Far less energy has gone into determining how these might be implemented – or if they are even necessary. Rather than contribute to norm proliferation, the chapter focuses on why regulation is necessary, when regulatory changes should be made, and how it would work in practice. Two specific areas for law reform concern the weaponization of AI and the victimization of AI itself. Regulating general AI is particularly difficult in that it confronts many ‘unknown unknowns’, but uncontrollable or uncontainable AI could pose a threat far more serious than lethal autonomous weapon systems. Additionally, however, there will be a need to prohibit some conduct in which increasingly lifelike machines are the victims – comparable, perhaps, to animal cruelty laws.
The answers that each political community finds to the law reform questions posed may differ, but a larger threat in the very near future is that AI systems capable of causing harm will not be confined to one jurisdiction – indeed, it may be impossible to link them to a specific jurisdiction at all. This is not a new problem in cybersecurity, but divergent national approaches will pose barriers to effective regulation, exacerbated by the speed, autonomy, and opacity of AI systems. For that reason, some measure of collective action, or at least co-ordination, is needed. Lessons may be learned from efforts to regulate the global commons, as well as moves to outlaw at the international level certain products (weapons and drugs, for example) and activities (such as slavery and child sex tourism). The argument advanced here is that regulation, in the sense of public control, requires the active involvement of states. To co-ordinate those activities and enforce global ‘red lines’, chapter eight posits a hypothetical International Artificial Intelligence Agency (IAIA), modelled on the agency created after the Second World War to promote peaceful uses of nuclear energy while deterring or containing its weaponization and other harmful effects.
Chapter nine turns to the possibility that the AI systems challenging the legal order may also offer at least part of the solution. Here, China, which has among the least developed rules to regulate conduct by AI systems, is at the forefront of using that same technology in the courtroom. This is a double-edged sword, however, as its use implies a view of law that is instrumental, with parties to proceedings treated as means rather than ends. That, in turn, raises fundamental questions about the nature of law and authority: at base, whether law is reducible to code that can optimize the human condition or if it must remain a site of contestation, of politics, and inextricably linked to institutions that are themselves accountable to a public. For many of the questions raised, the rational answer will be sufficient; but for others, what the answer is may be less important than how and why it was reached, and whom an affected population can hold to account for its consequences.
Precaution vs Innovation
Underlying the question of regulation is the need to balance precautionary steps against unnecessarily constraining innovation. A government report in Singapore, for example, highlighted the risks posed by AI, but concluded that ‘it is telling that no country has introduced specific rules on criminal liability for artificial intelligence systems. Being the global first-mover on such rules may impair Singapore’s ability to attract top industry players in the field of AI.’Footnote 29
These concerns are well-founded. As in other areas of research, overly restrictive laws can stifle innovation or drive it elsewhere. Yet the failure to develop appropriate legal tools risks allowing profit-motivated actors to shape large sections of the economy around their interests to the point that regulators will struggle to catch up. This has been particularly true in the field of information technology. Social media giants like Facebook, for example, monetized users’ personal data while data protection laws were still in their infancy.Footnote 30 Similarly, Uber and other first-movers in what is now termed the sharing or ‘gig’ economy exploited platform technology before rules were in place to protect workers or maintain standards.Footnote 31 As Pedro Domingos once observed, people worry that computers will get too smart and take over the world; the real problem is that the computers are too stupid and they’ve taken it over already.Footnote 32
Much of the literature on AI and the law focuses on a horizon that is either so distant that it blurs the line with science fiction or so near that it plays catch-up with the technologies of today. That tension between presentism and hyperbole is reflected in the history of AI itself, with the term ‘AI winter’ coined to describe the mismatch between the promise of AI and its reality.Footnote 33 Indeed, it was evident back in 1956 at Dartmouth when the discipline was born. To fund the workshop, John McCarthy and three colleagues wrote to the Rockefeller Foundation with the following modest proposal:
We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 … The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve [the] kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.Footnote 34
Over the subsequent decades, enthusiasm for and fear of AI have waxed and waned in almost equal measure. In an interview in Paris Review a few years after the Dartmouth gathering, Pablo Picasso memorably dismissed the new mechanical brains as useless: ‘They can only give you answers,’ he scoffed.Footnote 35 As countries around the world struggle to capitalize on the economic potential of AI while minimizing avoidable harm, a book like this cannot hope to be the last word on the topic of regulation. But by examining the nature of the challenges, the limitations of existing tools, and some possible solutions, it hopes to ensure that we are at least asking the right questions.