I. Introduction
In the conventional picture, international law emanates from treaties that states conclude or customs they observe. States comply with binding international law and ensure compliance in the domestic context. In this picture, states agree on the law in a ‘top-top’ process before it trickles down to the domestic legal order, where it is implemented. Norms made in other ways are considered ‘soft’ – meaning that they provide mere guidance but are technically not binding – or irrelevant to international law.
Obviously, there is room for nuance in the conventional take on international law and its sources. Soft law, for instance, can acquire authority that comes close to binding character.Footnote 1 It can also serve to interpret binding law that would otherwise remain ambiguous.Footnote 2 However, traditional international law ignores that law is also created outside of its formal processes. Norms can notably consolidate independently of the will of states in speedy, subcutaneous processes. They can diffuse subliminally across the world into municipal laws, which incorporate them and make them binding domestically. In this informal process, international law enters the stage late, if at all; it can only retrace the law that has already been locked in domestically. The process resembles ‘bottom-up international law’,Footnote 3 though its character is more ‘bottom to bottom’ and ‘transnational’. It shall be referred to as ‘norm diffusion’ in this chapter and is illustrated through the creation of norms governing Artificial Intelligence (AI).
The informal process of law creation described above is far from ubiquitous. It can be hard to trace, for when international law codifies or crystallizes ‘new’ norms, it tends to obscure their origin in previous processes of law creation. It is also messy, for it does not adhere to the hierarchies that distinguish conventional international law. All the more reason, then, to discuss norm diffusion in order to complement the picture of international law and its sources.
The present chapter could have examined norm diffusion in the current global public health crisis. In the COVID-19 pandemic, behavioural norms informed by scientific expertise seem to take shape rapidly, diffuse globally, and become incorporated into domestic law, while international lawyers are only now beginning to discuss a more suitable legal framework. However, rather than engaging with this ongoing, chaotic normative process in public health, this chapter discusses a more mature and traceable occurrence of norm diffusion, namely the regulation of AI. The European Commission’s long-awaited proposal of April 2021 for a regulation on AI marks the perfect occasion to illustrate the diffusion of AI norms.
This chapter proceeds in three steps. First, it examines the creation of ethical norms designed to govern AI (Section II). Second, it investigates the diffusion of such norms into domestic law (Section III), examining the European Commission’s recent legislative proposal to show how it absorbs ethical norms on AI. This examination likewise sheds light on the substance of AI norms; Section III can also be read on its own, in other words, without regard to international law-making, if one wishes to learn only about the origins and the substance of the European Union regulation in the offing. Third, the chapter discusses how the process of norm diffusion described in Sections II and III sidelines international law (Section IV). Section V concludes and offers an outlook.
II. The Creation of Ethical Norms on AI
The creation of ethical norms governing AI has taken many forms over a short period of time. It began with robotics. Roughly 50 years ago, Isaac Asimov’s science fiction showed how ambiguous certain ethical axioms were when applied to intelligent robots.Footnote 4 Since then, robotics has made so much progress that scientists have begun to take an interest in ethical principles for the field. Such principles, prominently enunciated in the United Kingdom in 2010, addressed the potential harm caused by robots, responsibility for damage, fundamental rights in the context of robotics, and several other topics, including safety/security, deception, and transparency.Footnote 5 The same or similar aspects turned out to be relevant for AI after it had re-awakened from hibernation. Two initiatives were significant in this regard, namely the launch of the One Hundred Year Study on Artificial Intelligence at Stanford University in 2014Footnote 6 and an Open LetterFootnote 7 signed by researchers and entrepreneurs in 2015.Footnote 8 Both initiatives sought to guide research toward beneficial and robust AI.Footnote 9 In their wake, the IEEE, an organization of professional engineers, embarked in 2015 on a broad public initiative aimed at pinning down the ethics of autonomous systems;Footnote 10 a group of AI professionals gathered to generate the Asilomar principles for AI, published in 2017;Footnote 11 and an association of experts put forward ethical principles for algorithms and programming.Footnote 12 This push to establish ethical norms occurred in lockstep with the significant technological advances in AI,Footnote 13 and it must be understood against that background.
In parallel, a discussion began to take shape within the framework of the Convention on Certain Conventional Weapons (CCW)Footnote 14 in Geneva. This discussion soon shifted its focus to the use of force by means of autonomous systems.Footnote 15 It notably zeroed in on physically embodied weapons systems – a highly specialized type of robot – and refrained from considering disembodied weapons, sometimes called cyberweapons.Footnote 16 The focus on embodimentFootnote 17 had the effect of keeping AI out of the limelight in Geneva for a long time.Footnote 18 As a broader consequence, the international law community became fixated on a narrow and exotic aspect – physical (‘kinetic’) autonomous weapons systems – while technological development proceeded on a much broader front. Despite their narrow focus, the seven years of discussions in Geneva have yielded few concrete results, other than a great deal of publicity.Footnote 19
At about the same time, autonomous cars also became the subject of ethical discussion. This discussion, however, soon got bogged down in largely theoretical, though fascinating, ethical dilemmas, such as the trolley problem.Footnote 20 Yet unlike those gathered in Geneva to ponder autonomous weapons systems, those intent on putting autonomous cars on the road were pragmatic: they found ways of generating meaningful output that could be implemented.Footnote 21
In 2017, the broader public beyond academic and professional circles became aware of the promises and perils of AI. Civil society began to discuss the ethics of AI and soon produced tangible output.Footnote 22 Actionable principles were also proposed on behalf of womenFootnote 23 and labour,Footnote 24 and AlgorithmWatch, a now notable non-governmental organization, was founded.Footnote 25
In step with civil society, private companies adopted ethical principles concerning AI.Footnote 26 Such principles took different shapes depending on companies’ fields of business. They embody a certain degree of self-commitment, though one that is not subject to outside verification.Footnote 27 Parts of the private sector and the third sector have also joined forces, most prominently in the Partnership on AI and its tenets on AI.Footnote 28
This development has not come to a halt. Various organizations continue to mull over ethical norms to govern AI.Footnote 29 However, most early proponents of such norms have moved from the formation stage to the implementation stage. Private companies are currently applying the principles to which they unilaterally subscribed. After issuing one of the first documents on ethical norms,Footnote 30 the IEEE is now developing concrete technical standards to be applied by developers to specific applications of AI.Footnote 31 ISO, another standardization organization, is currently setting such standards as well.Footnote 32 Domestic courts and authorities are adjudicating the first cases on AI.Footnote 33
At this point, it is worth pausing for a moment. The current section sketched a process in which multiple actors shaped and formed ethical norms on AI and are now implementing them. (As Section IV will explain, states have not been absent from this process.) This section could now go on to distil the essence of the ethical norms, which would make sense as they remain unconsolidated and fuzzy. But much important work has already been done in this direction.Footnote 34 In fact, for present purposes, no further efforts are necessary because, vague as the norms remain, they have now begun to merge into domestic law. However, the diffusion of ethical norms is far from a linear and straightforward process with clear causes. Instead, it is multidirectional, multivariate, gradual, and open-ended, with plenty of back and forth. Hence, the next section, which looks at norm diffusion from the receiving end – that is, from the perspective of states and domestic law – is best read as a continuation of the present one. The developments outlined here have also occurred in parallel to those in municipal law, which are the topic of the next section.
III. Diffusion of Ethical Norms into Domestic Law: The New Regulation of the European Union on AI
A first sign of diffusion into domestic law is states’ initial engagement with the ethics of AI. For some states, including China, France, Germany, and the United States, such engagement began relatively early with the adoption of AI strategiesFootnote 35 in which ethical norms figured more or less prominently. The French president, for instance, committed to establishing an ethics framework.Footnote 36 China, in its strategy, formulated the aim to ‘[d]evelop laws, regulations, and ethical norms that promote the development of AI’.Footnote 37 Germany’s strategy tasked a commission with producing recommendations concerning ethics.Footnote 38 The US strategy, meanwhile, was largely silent on ethics.Footnote 39
Some state legislative organs also addressed the ethics of AI early on; most notable is the comprehensive report published by the United Kingdom House of Lords in 2018.Footnote 40 Among other things, it recommended elaborating an AI code to provide ethical guidance and a ‘basis for statutory regulation, if and when this is determined to be necessary’.Footnote 41 The UK report also suggested five ethical principles as a basis for further work.Footnote 42 In a similar vein, the Villani report, which had preceded the French presidential strategy, identified five ethical imperatives.Footnote 43
In the EU, a report drafted within the European Parliament in 2016 drew attention to the need to examine ethics further.Footnote 44 It dealt with robotics, because AI was not yet a priority, and included a code of rudimentary ethical principles to be observed by researchers. In 2017, the European Parliament adopted the report as a resolution,Footnote 45 putting pressure on the Commission to propose legislation.Footnote 46 In 2018, the Commission published a strategy on AI with a threefold aim, one element of which was to ensure ‘an appropriate legal and ethical framework’.Footnote 47 The Commission consequently mandated a group of experts, which suggested guidelines for ‘trustworthy’ AI one year later.Footnote 48 These guidelines explicitly drew on work previously done within the institutions,Footnote 49 and they refrained from interfering with the lex lata,Footnote 50 including the General Data Protection Regulation.Footnote 51
In early 2020, following up on the guidelines for trustworthy AI, the Commission published a White Paper on AI,Footnote 52 laying the foundation for the legislative proposal to be tabled a year later. The White Paper, which attracted much attention,Footnote 53 recommended a horizontal approach to AI, with general principles included in a single legislative act applicable to any kind of AI; it thus rejected the alternative of adapting existing (or adopting several new) sectorial acts. The White Paper suggested regulating AI based on risk: the higher the risk of an AI application, the more regulation would be necessary.Footnote 54
On 21 April 2021, based on the White Paper, the Commission presented a Proposal for a regulation on AI.Footnote 55 The Proposal marks a crucial moment, for it represents the first formal step – globally, it seems – in a process that will ultimately lead to binding domestic legislation on AI. It is a sign of the absorption of ethical norms on AI by domestic law – in other words, of norm diffusion. While the risk-based regulatory approach adopted from the White Paper was by and large absent from the ethics documents discussed in the previous section, many of the substantive obligations in the proposed regulation reflect those ethical norms.
The Commission proposed distinguishing three categories of AI: certain ‘practices’ of AI, which the proposed regulation prohibits; high-risk AI, which it regulates in depth; and low-risk AI, which merely has to be flagged.Footnote 56 While the prohibition against using AI in specific ways (banned ‘practices’)Footnote 57 attracts much attention, in practice the regulation of high-risk AI will be more relevant. Annexes II and III to the proposed regulation determine whether an AI qualifies as high-risk.Footnote 58 The proposed regulation imposes a series of duties on those who place such high-risk AI on the market.Footnote 59
The flip side of the regulatory focus on risky AI is that not all AI is subject to the same degree of regulation. Indeed, the vast majority of AI is subject merely to the duty to ensure some degree of transparency. However, an AI that appears to qualify as low-risk under the proposed regulation today could become high-risk after a minor change in its intended use. Given the versatility of AI, the duties applicable to high-risk AI therefore have to be factored in even when AI is developed in low-risk domains. One example is an image recognition algorithm that per se qualifies as low-risk under the regulation; if it were later used for facial recognition, the more onerous duties concerning high-risk AI would become applicable. Such a development must be anticipated at an early stage to ensure compliance with the regulation throughout the life cycle of the AI. Regulatory spill-over from high-risk into low-risk domains of AI is thus likely, and the proposed regulation exerts a broader compliance pull than its specific, narrow focus on high-risk AI might suggest at first glance.
Categorization aside, the substantive duties imposed on those who put high-risk AI on the market are most interesting from the perspective of ethical norm diffusion. The proposed regulation includes four bundles of obligations.
The first bundle concerns data and is laid down in Article 10 of the proposed regulation. When AI is trained with data (though not only thenFootnote 60), Article 10 requires ‘appropriate data governance and management practices’, in particular concerning design choices; data collection; data preparation; assumptions concerning what the data measure and represent; assessment of the availability, quantity, and suitability of the data; ‘examination in view of possible bias’; and identification of gaps and shortcomings. In addition, the data itself must be relevant, representative, free of errors, and complete. It must also have ‘appropriate statistical properties’ regarding the persons on whom the AI is used. And it must take into account the ‘geographical, behavioural or functional setting’ in which the AI will be used.
The duties laid down in Article 10 mirror existing ethical norms, notably the imperative to avoid bias. The IEEE’s Charter discussed the issue of data bias.Footnote 61 An early set of principles addressed to professionals featured the avoidance of bias prominently and recommended keeping a description of data provenance.Footnote 62 The Montreal Declaration recommended avoiding discrimination,Footnote 63 while the Toronto Declaration on human rights and machine learning had bias and discrimination squarely in view.Footnote 64 Likewise, some of the ethical norms the private sector had adopted addressed bias.Footnote 65 However, the ethical norms discussed in Section II generally refrained from addressing data and its governance as comprehensively as Article 10 of the proposed regulation; instead, they focused directly on the avoidance of bias and discrimination.
The second bundle of obligations concerns transparency and is contained in Article 13 of the proposed regulation. The critical duty of Article 13 requires providers to ‘enable users to interpret [the] output’ of high-risk AI and to ‘use it appropriately’.Footnote 66 The article further stipulates that providers have to furnish information that is ‘concise, complete, correct and clear’,Footnote 67 in particular regarding the ‘characteristics, capabilities and limitations of performance’ of a high-risk AI system.Footnote 68 These duties specifically relate to any known or foreseeable circumstance, including foreseeable misuse, which ‘may lead to risks to health and safety or fundamental rights’, and to the system’s performance regarding the persons on whom it is intended to be used.Footnote 69
Transparency is an equally important desideratum of ethical norms, though it is sometimes addressed in terms of explainability or explicability. The IEEE’s CharterFootnote 70 and the Asilomar principlesFootnote 71 emphasized transparency to different degrees. Other guidelines encourage the production of explanationsFootnote 72 or appropriate and sufficient information,Footnote 73 or call for extensive transparency, justifiability, and intelligibility.Footnote 74 These references make it evident that ethical norms, though they are heterogeneous and vague, are in the process of being absorbed by EU law (norm diffusion).
The third bundle of obligations is contained in Article 15 of the proposed regulation. It requires high-risk AI to display an ‘appropriate level’ of accuracy, robustness, and cybersecurity.Footnote 75 Article 15 refrains from adding much detail, but it states that the AI must be resilient to deleterious environmental influences and to attempts by nefarious third parties to game it.Footnote 76
As with the first and second bundles, the aspects of high-risk AI addressed by Article 15 can be traced back to various ethical norms. The high-level principles of effectiveness and awareness of misuse in the IEEE’s Charter covered similar aspects.Footnote 77 The Asilomar principles addressed ‘safety’, but in a rather generic fashion.Footnote 78 Other principles emphasized both the need for safety in all things related to AI and the importance of preventing misuse.Footnote 79 Others focused on prudence, which more or less includes the aspects covered by Article 15.Footnote 80 Parts of the private sector also committed themselves to safe AI.Footnote 81
The fourth bundle contains obligations of a procedural or managerial nature. The proposed regulation places confidence in procedure to cope with the high risks of AI. This trust in procedure goes so far that substantive issues are addressed only procedurally. One example is one of the cardinal obligations of the proposed regulation, namely the duty to manage risks according to Article 9. Article 9 obliges providers to maintain a comprehensive risk management system throughout the life cycle of high-risk AI. It aims at reducing the risks posed by the AI to a level that is ‘judged acceptable’, even under conditions of foreseeable misuse.Footnote 82 The means of reducing the risks are design, development, testing, mitigation and control measures, and the provision of information to users. Instead of indicating which risks are to be ‘judged acceptable’, Article 9 trusts that risk reduction will result from a series of diligently executed, proper steps. However, procedural rules are not substantive rules; in and of themselves, they contain no substantive guidance. In essence, Article 9 entrusts providers with the central ‘judgement’ of what is ‘acceptable’. Providers are thus granted liberty, and their obligations seem correspondingly less onerous. At the same time, this liberty burdens them, for courts might not always validate their ‘judgement’ of what was ‘acceptable’ after harm has occurred. Would, for instance, private claims brought against the provider of an enormously beneficial AI be rejected once exceptionally high risks, which the provider had managed and judged acceptable, have materialized?
Trust in procedure is also a mainstay of other provisions of the proposed regulation. An assessment of conformity with the proposed regulation has to be undertaken, but, here again, providers carry it out themselves in all but a few cases.Footnote 83 Providers have to register high-risk AI in a new EU-wide database.Footnote 84 Technical documentation and logs must be kept.Footnote 85 Human oversight is required – a notion that has a procedural connotation.Footnote 86 The regulation does not require substantive ‘human control’ as discussed within CCW for autonomous weapons systems.Footnote 87 Discrimination is not directly prohibited, but procedural transparency is supposed to contribute to preventing bias.Footnote 88 Such transparency may render high-risk AI interpretable, but a substantive right to explicable AI is missing.Footnote 89
The procedural and managerial obligations in the fourth bundle cannot easily be traced back to ethical norms, precisely because of their procedural nature. Ethical norms are, in essence, substantive norms. Procedural obligations are geared towards implementation, yet implementation is not the standard domain of ethics (except for applied ethics, which has yet to reach AIFootnote 90). Hence, while certain aspects of the fourth bundle mirror ethical norms – for example, the requirement to keep logsFootnote 91 – no ethical norm has called for a comprehensive risk management system.
Overall, the proposed regulation offers compelling evidence of norm diffusion, at least to the extent that it reflects ethical norms on AI. It addresses the three most pressing concerns related to AI of the machine learning type, namely bias due to input data, opacity that hampers predictability and explainability, and vulnerability to misuse (gaming, etc.).Footnote 92 In addressing these concerns, the proposed regulation remains relatively lean. It notably refrains from taking on broader concerns with which modern AI is often conflated, namely dominant market power,Footnote 93 highly stylized concepts,Footnote 94 and the general effects of technology.Footnote 95
However, the proposed regulation does not address the main concerns about AI, namely bias and opacity, head-on. It takes a gentle, procedural approach, tackling bias indirectly through data governance and transparency and remedying opacity through interpretability. It entrusts providers with the management of the risks posed by AI and with the judgement of what is tolerable. Providers consequently bear soft duties. In relying on soft duties, the regulation extends the life of ethical norms and continues their indulgent approach. It thus perpetuates the character of ethical norms, which lack the commitment of hard law.
On the one hand, it may seem unexpected that ethical norms live on to this extent, given that the new law on AI is laid down in a directly applicable, binding Union regulation. On the other hand, it is not all that surprising, because a horizontal legislative act that regulates all kinds of AI in one go is necessarily less specific on substance than several sectorial acts addressing individual applications would be. (The adoption of several sectorial acts would, however, have had other disadvantages.) Yet this approach raises the question of whether the proposed regulation can serve as a basis for individual, private rights: will natural persons, market competitors, etc. be able to sue providers of high-risk AI for violation of the procedural, managerial obligations incumbent on them under the regulation?Footnote 96
IV. International Law Sidelined
International law has not simply ignored the rise of AI while ethics filled the void and laid down the norms. International law – especially of the soft type – and ethical principles overlap and are not always easily distinguishable. Yet even international soft law has been lagging behind considerably. It took until late spring 2019 for the Organisation for Economic Co-operation and Development (OECD) to adopt a resolution spelling out five highly abstract principles on AI.Footnote 97 While these principles address opacity (under transparency and explainability) and robustness (including security and safety), they ignore the risk of bias, referring only generically to values and fairness. When the OECD was adopting its non-binding resolution, the European Commission’s White PaperFootnote 98 was already in the making. Like the White Paper, the OECD resolution recommended a risk-based approach.Footnote 99 Additionally, the OECD hosts a recent political initiative, the Global Partnership on Artificial Intelligence,Footnote 100 which has so far produced a procedural report.Footnote 101
Regional organizations have been more alert to AI than universal ones. Certain sub-entities of the Council of Europe notably examined AI within their specific purview. In late 2018, a commission within the Council of Europe adopted a set of principles governing AI in the judicial system;Footnote 102 within the framework of the Council of Europe’s data protection convention, certain principles focussing on data protection and human rights were approved in early 2019.Footnote 103 At the highest level of the Council of Europe, the Committee of Ministers recently adopted a recommendationFootnote 104 which discussed AI (‘algorithmic systems’,Footnote 105 as it calls it) in depth from a human rights perspective. The recommendation drew the distinction between high-risk and low-risk AI that the proposed Union regulation also adopted.Footnote 106 In large part, it mirrors the European Union’s approach developed in the White Paper and the proposed regulation, which is not surprising given the significant overlap in the two organizations’ membership.
On the universal level, processes to address AI have moved at a slower pace. The United Nations Educational, Scientific and Cultural Organization is only now discussing a resolution addressing values, principles, and fields of action on a highly abstract level.Footnote 107 The United Nations published a High-Level Report in 2019,Footnote 108 but it dealt with digital technology and its governance from a general perspective. Hence, the values it listsFootnote 109 and the recommendations it makesFootnote 110 appear exceedingly abstract from an AI point of view. The three models of governance suggested in the report, however, break new ground.Footnote 111
In a nutshell, most international law on AI arrives too late. Domestic implementation of ethical norms is already in full swing. Legislative acts, such as the proposed regulation of the EU, are already being adopted. Court and administrative cases are being decided. Meanwhile, standardization organizations are settling the technical – and not-so-technical – details. Still, the international law on AI, all of which is soft (and hence not always distinguishable from ‘ethical norms’), is far from useless. The Council of Europe’s recommendation on algorithmic systemsFootnote 112 added texture and granularity to the existing ethical norms. Instruments that may eventually be adopted on the universal level may spread norms on AI across the global south and shave off some of the Western edges that the norms (and AI itself) still carry.Footnote 113
However, the impact of the ethical norms on AI is more substantial than international legal theory suggests. The ethical norms consolidated outside the traditional venues of international law and are by now diffusing into domestic law. International law is a bystander in this process. Even if the formation of formally binding international law on AI were attempted at some point,Footnote 114 a substantial treaty would be hard to achieve, as domestic legislatures would have locked in legislation by then. A treaty could only re-enact a consensus established elsewhere – in ethical norms and domestic law – which would reduce its compliance pull.
V. Conclusion and Outlook
This chapter explained how ethical norms on AI came into being and are now being absorbed by domestic law. The European Union’s new proposal for a regulation on AI illustrated this process of ‘bottom-to-bottom’ norm diffusion. While soft international law contributed to forming the ethical norms, it neither created them nor formed their basis in a formal, strict legal sense.
This chapter by no means suggests that law always functions or is created in the way illustrated above. Undoubtedly, international law is mainly formed top-down through the classical sources, and in that case it also exercises compliance pull. However, in domains such as AI, where private actors – including multinational companies and transnational or domestic non-governmental organizations – freely shape the landscape, a transnational process of law creation takes place. In such cases, states tend to realize that ‘their values’ are at stake only when it is already too late. Hence, states and their traditional way of making international law are sidelined. It is not ill will, however, that drives the process of norm diffusion described in this chapter; states are not deliberately pushed out of the picture. Instead, ethical norms arise from the need of private companies and individuals for normative guidance – which international law is notoriously slow to deliver. When international law finally delivers, it does not set the benchmark but only retraces ethical norms. It does, however, at least serve to make them more durable, if not inalterable.
The discussion about AI in international law has so far been about the international law that should, in a broad sense, govern AI. Answers were sought to the question of how bias, opacity, robustness, and other concerns could be addressed and remedied through law. However, a different dimension of international law has been left out of the picture. Except for the narrow discussion about autonomous weapons systems within the CCW, international lawyers have mainly neglected what AI means for international law itself and the concepts at its core.Footnote 115 The next step therefore has to include a re-assessment of central notions of international law in the light of AI: territoriality and jurisdiction, due diligence duties concerning private actors, the control that is central to responsibility of all types, and precaution will need to be recalibrated accordingly.