1. INTRODUCTION AND PURPOSE OF STUDY: ETHICS GUIDELINES AS A TOOL FOR GOVERNANCE
Much like Karl Renner looked for the principal content of property law in times of technology-driven societal transformation in industrializing Western Europe,Footnote 1 contemporary society is seeking its proper forms of governance in a digital transformationFootnote 2 driven by platformization,Footnote 3 datafication,Footnote 4 and algorithmic automation.Footnote 5 Much like Eugen Ehrlich proposed a study of the living law,Footnote 6 paralleled by Roscoe Pound’s separation of law in books from law in action,Footnote 7 contemporary governance of artificial intelligence (AI) is also separable in terms of hard and soft law.Footnote 8 This article can be read in light of these foundational socio-legal scholars, who shaped the sociology of law as a scientific discipline and inspired much thought on the relationship between social change, law, and new technology.Footnote 9
In its communication from April 2018,Footnote 10 the EU adopted an explicit strategy for AI and appointed the High-Level Expert Group on AI, consisting of 52 members, to provide advice on both investments and ethical-governance issues in relation to AI in Europe. In April 2019, the expert group published the Ethics Guidelines for Trustworthy Artificial Intelligence (hereinafter the Ethics Guidelines),Footnote 11 which—despite explicitly pointing out that the guidelines do not deal with legal issues—clearly indicate issues of responsibility, transparency, and data protection as entirely central parts of the development of trustworthy AI. Over the last few years, a number of ethics guidelines relating to AI have been developed by companies, research associations, and government representatives.Footnote 12 Many overlap in part with already-existing legislation, but it is often unclear how, more precisely, the legislation and guidelines are intended to interact. In particular, the way in which the principled standpoints are intended to be implemented is often unclear. In other words, the ethics guidelines focus on normative standpoints but are often weak from a procedural perspective. The Ethics Guidelines of the EU Commission’s expert group are a clear sign of an ongoing governance challenge for the EU and its Member States. Interestingly, during her candidature, Ursula von der Leyen, the new president of the EU Commission, stated that, during her first 100 days in office, she would “put forward legislative proposals for a coordinated European approach to the human and ethical implications of AI.”Footnote 13 Consequently, in February 2020, the European Commission issued a digital strategy, including proposals for empowering excellence and trust in AI, and a White Paper on AI.Footnote 14 At the same time, the EU Commission’s take on AI development and governance signals a global trend in governmental and jurisdictional approaches: seeing both societal and industrial benefits in AI in tandem with ethical and legal concerns that need to be addressed and governed. Such development and governance have earlier been described as being “inevitably and dynamically intertwined” with regard to emerging high-potential/high-risk technologies.Footnote 15
Part of the challenge for the EU arguably consists of balancing regulation against the trust placed in technical innovation and overall societal development, to which AI and its methods can contribute, and which it is therefore undesirable to undermine with unbalanced or hastily introduced regulation. As societal use of, and dependency on, AI and machine learning increase, society increasingly needs to understand the negative consequences and risks, how interests and power are distributed, and what needs exist for both legal and other types of governance.
This article focuses on ethics guidelines as tools for governance, points to their interplay with legal tools for governance, and discusses the particular features of AI development that have given ethics issues such a prominent position. Particular focus is placed on the Ethics Guidelines for “trustworthy AI” as well as the Commission’s White Paper on AI. First, the article addresses the definitional struggles around the concept of AI, in order to clarify the relationship between the definition and the governance of AI. Since the actual definition of AI is highly debatable, and may depend on the disciplinary field of the person making the definition, the chosen definition will arguably affect how AI is governed. Here, the article argues for the need to regard the technologies in their applied context and in their interplay with human values and societal expressions, underlined not least by machine learning’s dependence on large amounts of data or examples as its foundation. Second, the key features of the ethical approach to AI governance are outlined, addressing some of the critique it has received, with a particular focus on the EU. This must arguably be placed in a broader context of governance tools that nevertheless often share some principle-based central values relating to the control of data, the degree of reasonable transparency, and how responsibility should be allocated; a brief comparison with Chinese and Japanese guidelines is also provided. Finally, the article concludes with a socio-legal perspective on ethics guidelines as a form of governance over AI development. Governance through ethics guidelines is highly dependent on recent insights from critical AI studies about the societal challenges relating to fairness, accountability, and transparency.Footnote 16 At the same time, the governance issue must inevitably deal with temporal aspects of the difference between how legislation is formed and how rapid the development of the underlying elements of AI has been.
2. WHAT IS AI?
Despite—or perhaps because of—the increased attention that AI and its methods are receiving in multidisciplinary research, media, and policy work, there is no clear consensus on how AI should best be defined. This seems to be the case with regard not only to public perceptions,Footnote 17 but also to computer scienceFootnote 18 and law.Footnote 19 For example, Gasser and Almeida establish that one cause of the difficulty of defining AI from a technical perspective is that AI is not a single technology, but rather “a set of techniques and subdisciplines ranging from areas such as speech recognition and computer vision to attention and memory, to name just a few.”Footnote 20 A number of definitions have been put forward, both within research and in government agency reports, but a major challenge is that the methods constitute a moving and changing field. I would here like to emphasize the dynamics of this conceptual construct as discussed within traditional AI research, offer some central aspects that can nevertheless be highlighted, and show what the High-Level Expert Group concentrates on.
In conjunction with the High-Level Expert Group publishing the Ethics Guidelines, a definition document was also published, aimed at clarifying certain aspects of AI as a scientific discipline and as a technology.Footnote 21 One purpose it highlights is to avoid misunderstandings, to achieve a commonly shared knowledge of AI that can be used fruitfully also by non-experts, and to indicate details that may contribute to the discussion of the Ethics Guidelines. The High-Level Expert Group takes as its first starting point the definition provided in the EU Commission’s communication on AI in Europe, published in April 2018, which is then developed further:
Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions—with some degree of autonomy—to achieve specific goals.
AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications). (The High-Level Expert Group, 2019b, p. 1)
This definition concentrates particularly on autonomy—namely that there is a measure of agency in AI systems—and points out that the systems can consist of both physical robots and pure software systems. At the same time, the examples provide a clear indication of what is being aimed at and, by extrapolation, what the governance objects of the Ethics Guidelines consist of. As a software-based category, the group points to voice assistants, image-analysis software, search engines, and speech and face-recognition systems, while, for hardware-based applications, it indicates advanced robots, autonomous cars, drones, and the connected devices that are seen as part of the Internet of Things. As autonomy is emphasized, this can be interpreted as not applying to all drones or all connected devices—only to those that have an autonomous or even learning element. What characterizes an “advanced” robot does not lend itself to a simple demarcation, and we can expect it to change over time. This is clearly a “moving target” that seems to be an inherent element of AI, sometimes described as an “odd paradox” or the “AI effect.”Footnote 22
The High-Level Expert Group also notes that an explicit part of AI is the concept of intelligence, a particularly elusive element that has been included since the field originated. Legg and Hutter, for example, gather together more than 70 different definitions of the concept of intelligence itself.Footnote 23 In addition to listing a number of psychological definitions, they also show how the definitions used in AI research have focused on different inherent aspects, with differing emphasis on problem-solving, improvement and learning over time, good performance in complex environments, or the generalizability of achieving domain-independent skills that are needed to manage a number of domain-specific problems. The concept of intelligence also evokes a number of human associations, such as the ability to have feelings and self-awareness, which cannot be said to be a living part of the methods and technologies driving the explosion of applied AI today, and which are thus not central objects for governance through ethics guidelines. It can therefore be established that contemporary AI primarily includes a number of technologies and analysis methods gathered under the umbrella concept of “artificial intelligence,” namely machine learning, natural language processing, image recognition, “neural networks,” and deep learning. Machine learning in particular—expressed in simple terms, methods for making computers “learn” from data without being explicitly programmed for the particular task—has developed rapidly in just the last few years through access to historically unparalleled amounts of digital data and increasing analytical processing power. This has led to contemporary AI generally referring to “the computational capability of interpreting huge amounts of information in order to make a decision, and is less concerned with understanding human intelligence, or the representation of knowledge and reasoning,” according to Virginia Dignum, a professor in AI and ethics who is also a member of the High-Level Expert Group.Footnote 24
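To make the contrast with conventional programming concrete, the following is a minimal, hypothetical sketch of supervised machine learning (my own illustration, not drawn from the expert group’s documents), using the scikit-learn library: the program is given labelled examples rather than explicit rules, and induces its decision behaviour from the data.

```python
# Minimal sketch of supervised machine learning: no task-specific rules are
# written; the model induces them from labelled examples. Data are synthetic
# and illustrative only; assumes scikit-learn is installed.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training examples: [hours_of_use, error_count] per case,
# labelled 1 if the case was escalated, 0 if not.
X_train = [[1, 0], [2, 1], [8, 5], [9, 7], [3, 1], [7, 6]]
y_train = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)        # the "learning" step: behaviour is induced from data

print(model.predict([[8, 6]]))     # e.g. [1] - a decision no programmer hand-coded
```

The data dependency discussed throughout this article is visible even in this toy case: the model’s behaviour is entirely a function of the examples it was given.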
The complexity of the conceptual construct has led the High-Level Expert Group to put forward a fairly complex definition, which expands the EU Commission’s first definition. It also places the AI functionality in its systemic context—namely the fact that it is often part of a larger wholeFootnote 25—notes the division of the data from which machines learn into structured and unstructured data, and stresses that AI systems are primarily goal-driven, set to achieve something that a human being has defined:
Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions.Footnote 26
There are thus differing aspects to consider when defining AI as a challenge to regulation, where the most central ones for today’s development and use of AI tend to concern (1) autonomy/agency, (2) self-learning from large amounts of data (or “adaptability”), and (3) the degree of generalizable learning. Finally, as a step towards a wider social sciences-based discussion, and in light of the challenges that AI has displayed in its implementation and interaction with society’s values and structures, it can be argued that there are multidisciplinary advantages in not leaning too heavily towards a computer science-based definition of AI. The definition is in itself a form of conceptual control that impacts the regulation debate, and we therefore need to be both careful and multidisciplinary when making definitions.Footnote 27
3. EU—TRUSTWORTHY AI
If we first look at the discussions about AI and ethics held in the global arena, we can establish that this is currently a lively subject among academics and policy-oriented bodies. Ethics guidelines in particular, as a governance tool, have seen very strong development over the last few years. For example, a study of the global AI ethics landscape, published in 2019, identified 84 documents containing ethical principles or guidelines for AI.Footnote 28 The study concluded that there is relative unanimity globally on at least five principle-based approaches of an ethical character: (1) transparency, (2) justice and fairness, (3) non-harmful use, (4) responsibility, and (5) integrity/data protection. At the same time, the study establishes that there are considerable differences in how these principles are interpreted; why they are considered important; what issue, domain, or actors they relate to; and how they should be implemented. The single most common principle is “transparency”—a particularly multifaceted concept, it seems.Footnote 29
Meanwhile, the ethics researcher Thilo Hagendorff considers that the weak point of ethics guidelines is that AI ethics—like ethics in general—lacks mechanisms for creating compliance or for implementing its own normative claims.Footnote 30 According to Hagendorff, this is also the reason why ethics is so appealing to many companies and institutions. When companies and research institutes formulate their own ethics guidelines, repeatedly introduce ethical considerations, or adopt ethically motivated self-commitments, Hagendorff argues that this counteracts the introduction of genuinely binding legal frameworks. He thus places great emphasis on the avoidance of regulation as a main aim of the AI industry’s ethics guidelines. Mark Coeckelbergh, professor of media and technology philosophy, who is also a member of the High-Level Expert Group, expresses similar risks: “that ethics are used as a fig leaf that helps to ensure acceptability of the technology and economic gain but has no significant consequences for the development and use of the technologies.”Footnote 31 Even if this reminder has merits, and the risk is real—it is indubitably an incentive for many companies to avoid tougher regulation by pointing to “self-regulation” and the development of internal policies with weak actual implementation—there may yet be other reasons why ethics has been emphasized so heavily as a tool for governance within AI development. Even though self-regulation is surely used as an argument against the intervention of concrete legislation, the question remains whether the rapid growth of the AI field does not play just as important a role: this particular field may have required a softer approach while waiting for critical research to catch up and offer a stable foundation for potent regulation. The question is, however, what a codification of AI ethics would involve, and which parts of it would be best suited for legislation.
3.1 Ethics Guidelines for Trustworthy AI
In April 2018, the EU adopted a strategy for AI and appointed the High-Level Expert Group, with its 52 members, to provide advice on both investments and ethical-governance issues in relation to AI in Europe. In December 2018, the Commission presented a co-ordinated plan—“Made in Europe”—which had been developed with the Member States to promote the development and use of AI in Europe. For example, the Commission expressed an intention that all Member States should have their own strategies in place by the middle of 2019, which did not completely materialize. The expert group was appointed via an open call and consists of a fairly mixed group of researchers and university representatives (within areas such as robotics, computer science, and philosophy), representatives of industry (such as Zalando, Bosch, and Google), and civil-society organizations (such as Access Now,Footnote 32 ANEC,Footnote 33 and BEUCFootnote 34). The composition has not escaped criticism, however. For example, in May 2019, Yochai Benkler, a professor at Harvard Law School—perhaps most famous for his optimistic writings on collaborative economies, focusing on phenomena such as Wikipedia, Creative Commons, and open-source code—expressed a fear that representatives of industry were being allowed too much control over regulatory issues governing AI.Footnote 35 Benkler drew parallels between the EU Commission’s expert group, Google’s failed committee for AI ethics issues, and Facebook’s investment in a German AI and ethics research centre. Similarly, the technology and law researcher Michael Veale criticizes the High-Level Expert Group—focusing on the set of policy and investment recommendationsFootnote 36 published after the Ethics Guidelines—for failing to address questions of power and infrastructure, as well as organizational factors (including business models), in contemporary data-driven markets.Footnote 37 When the Ethics Guidelines were published, they were also criticized by members of the expert group itself. Thomas Metzinger, a philosopher at the Johannes Gutenberg University Mainz, critically described the process as “ethics washing” in an opinion piece in which he described how drafts of prohibitions against certain areas of use, such as autonomous weapons systems, had been toned down by representatives of industry and their allies, ending up in softer and more permissive wordings.Footnote 38
The Ethics Guidelines have had a clear impact on the subsequent White Paper on AI from the EU Commission (see below), but it remains to be seen what importance and impact all of these sources will have on European AI development. The Ethics Guidelines point out that trustworthy AI has three components, which should be in place throughout the entire life-cycle of AI:
a. it should be legal and comply with all applicable laws and regulations;

b. it should be ethical and safeguard compliance with ethical principles and values; and

c. it should be robust, from both a technical and a societal viewpoint, as AI systems can cause unintentional harm, despite good intentions.
The guidelines focus on ethical issues (b) and robustness (c), but leave legal issues (a) outside their explicit scope. They do so despite the fact that issues that are fairly well anchored in law, such as responsibility, anti-discrimination, and—not least—data protection, still fall within the framework for ethics. As the expert group itself establishes, many parts of AI development and use in Europe are already covered by existing legislation: the Charter of Fundamental Rights, the General Data Protection Regulation (GDPR), the Product Liability Directive, directives against discrimination, consumer-protection legislation, and so on. Even though ethical and robust AI is often already reflected to some extent in existing laws, its full implementation may reach beyond existing legal obligations.
The expert group provides four ethical principles constituting the “foundation” of trustworthy AI: (1) Respect for human autonomy; (2) Prevention of harm; (3) Fairness; and (4) Explicability. For the realization of trustworthy AI, the group further addresses seven main prerequisites, which, it argues, must be evaluated and managed continuously during the entire life-cycle of the AI system:
1. Human agency and oversight

2. Technical robustness and safety

3. Privacy and data governance

4. Transparency

5. Diversity, non-discrimination, and fairness

6. Societal and environmental wellbeing

7. Accountability.
As mentioned, although the guidelines emphasize that they focus on ethics and robustness, and not on issues of legality, it is interesting to note that both anti-discrimination (5) and the protection of privacy (3) are developed as two of the seven central ethical prerequisites for the implementation of trustworthy AI. In the investment and policy recommendations also published by the expert group, the group recommends features such as a risk-based approach that is both proportional and effective in guaranteeing that AI is legal, ethical, and robust in its adaptation to fundamental rights.Footnote 39 Interestingly, the expert group calls for a comprehensive mapping of relevant EU regulations to be carried out, in order to assess the extent to which the various regulations still fulfil their purposes in an AI-driven world. It highlights that new legal measures and control mechanisms may be needed to safeguard adequate protection against negative effects, and to enable proper supervision and implementation.
The Ethics Guidelines argue for the need for processes to be transparent in the sense that the capacities and purpose of AI systems should be “openly communicated, and decisions—to the extent possible—explainable to those directly and indirectly affected.”Footnote 40 A key reason is to build and maintain users’ trust. In the literature on ethics guidelines targeted at AI, it has been argued that transparency is not an ethics principle in itself, but rather a “pro-ethical condition”Footnote 41 for enabling or impairing other ethical practices or principles. As argued in a study on the socio-legal relevance of AI, several contradictory interests can be linked to the issue of transparency.Footnote 42 Consequently, there are reasons other than pure technical complexity why certain approaches may be of a “black box” nature, not least the corporate interests of keeping commercial secrets and holding intellectual property rights.Footnote 43 Furthermore, the Ethics Guidelines contain an assessment list for practical use by companies. During the second half of 2019, over 350 organizations tested this assessment list and sent feedback. The High-Level Expert Group revised its guidelines in light of this feedback and presented its final Assessment List for Trustworthy Artificial Intelligence in July 2020.
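What “explainable to those directly and indirectly affected” can mean in technical practice may be illustrated by the following minimal sketch (my own, not a method prescribed by the Ethics Guidelines): a model-agnostic probe of which input features drive an otherwise opaque model’s decisions, here using permutation importance from scikit-learn on synthetic data.

```python
# Minimal, hypothetical sketch of one technical sense of "explainability":
# probing which input features drive an opaque model's decisions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                   # three synthetic input features
y = (X[:, 0] + 0.1 * X[:, 2] > 0).astype(int)   # outcome mostly driven by feature 0

model = RandomForestClassifier(random_state=0).fit(X, y)   # the "black box"

# Permutation importance: shuffle one feature at a time and measure how much
# predictive performance drops - a rough, model-agnostic explanation.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["feature_0", "feature_1", "feature_2"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

Even such a simple technique shows why transparency is better described as a pro-ethical condition than as a principle in itself: the probe indicates which features matter, not whether their influence is acceptable.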
3.2 The White Paper on AI
As mentioned, Commission President Ursula von der Leyen announced in her political GuidelinesFootnote 44 a co-ordinated European approach to the human and ethical implications of AI, as well as a reflection on the better use of big data for innovation. The White Paper on AI from February 2020 can be seen in light of this commitment. In the White Paper, the Commission expresses its support for a regulatory and investment-oriented approach with what it calls a “twin objective of promoting the uptake of AI and of addressing the risks associated with certain uses of this new technology,” and states that the purpose of the White Paper is to set out policy options for achieving these objectives.Footnote 45 A key proposal in the White Paper is a risk-based, sector-specific approach to regulating AI, in which high-risk applications are distinguished from all other applications. First, a high-risk sector is one where “significant risks can be expected,” which may initially include “healthcare; transport; energy and parts of the public sector.”Footnote 46 In addition, the application should be used in such a manner that “significant risks are likely to arise,” which makes the test cumulative, as sketched below. This proposal amounts to a binary, either/or approach to risk, and more nuanced alternatives have been proposed elsewhere, for example by the German Data Ethics Commission.Footnote 47
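The cumulative logic can be made explicit in a schematic formalization (my own, not taken from the White Paper itself); the sector list follows the White Paper’s initial examples, while the function and predicate names are purely illustrative:

```python
# Schematic formalization (illustrative only) of the White Paper's cumulative
# high-risk test: an application counts as high-risk only if BOTH the sector
# and the manner of use are risky.
HIGH_RISK_SECTORS = {"healthcare", "transport", "energy", "public_sector"}

def is_high_risk(sector: str, use_likely_to_raise_significant_risks: bool) -> bool:
    """Cumulative test: high-risk sector AND high-risk manner of use."""
    return sector in HIGH_RISK_SECTORS and use_likely_to_raise_significant_risks

print(is_high_risk("healthcare", True))    # True  -> stricter requirements apply
print(is_high_risk("healthcare", False))   # False -> outside the stricter regime
print(is_high_risk("retail", True))        # False -> outside, regardless of use
```

The binary outcome of such a test is precisely what the more graduated, multi-level risk models proposed elsewhere, such as that of the German Data Ethics Commission, seek to avoid.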
There is a clear value base in the White Paper, with a particular focus on the concept of trust: “Given the major impact that AI can have on our society and the need to build trust, it is vital that European AI is grounded in our values and fundamental rights such as human dignity and privacy protection.”Footnote 48 An expressed aim of the EU’s policy framework is to mobilize resources to achieve an “ecosystem of excellence” along “the entire value chain.” A key element of a future regulatory framework for AI in Europe is to create a “unique ecosystem of trust,” which is described as a policy objective in itself. The Commission’s hope is that a clear European regulatory framework would “build trust among consumers and businesses in AI, and therefore speed up the uptake of the technology.”Footnote 49
The White Paper clearly addresses the human-centric approach set out in the Communication on Building Trust in Human-Centric AI, which is also a central part of the Ethics Guidelines discussed above. The White Paper states that the Commission will take into account the input obtained during the piloting phase of the Ethics Guidelines prepared by the High-Level Expert Group on AI. Interestingly enough, the Commission concludes that requirements regarding transparency, traceability, and human oversight are not specifically covered under current legislation in many economic sectors. This lack of transparency, the Commission argues, makes it “difficult to identify and prove possible breaches of laws, including legal provisions that protect fundamental rights, attribute liability and meet the conditions to claim compensation.”Footnote 50
3.3 Asian Comparison
From an Asian socio-legal perspective, the Chinese and Japanese developments on AI policy and governance are significant, but will only briefly be addressed here. The core of China’s AI strategy can be found in the New Generation Artificial Intelligence Development Plan (AIDP), issued by China’s State Council in July 2017, and in Made in China 2025, released in May 2015.Footnote 51 For example, a goal expressed in the AIDP is to establish initial ethical norms, policies, and regulations related to AI development in China by 2020, to be further codified by 2025.Footnote 52 This includes participation in international standard-setting as well as deepening international co-operation on AI laws and regulations. In 2019, a National Governance Committee for the New Generation Artificial Intelligence was established, which published a set of governance principles.Footnote 53 In May 2019, another set, the so-called Beijing AI Principles, was released by the Beijing Academy of Artificial Intelligence, depicting the core of AI development as the realization of beneficial AI for humankind and nature. These Principles have been supported by various elite Chinese universities and companies, including Baidu, Alibaba, and Tencent.Footnote 54
In Japan, an expert group at the Japanese Cabinet Office has elaborated the Social Principles of Human-Centric AI (Social Principles), which were published in March 2019 after public comments were solicited. Comparing the Japanese and European initiatives, a recent study concludes that common elements of both notions of governance include that AI should be applied in a manner that is “human-centric” and should be committed to the fundamental (constitutional) rights of individuals and democracy.Footnote 55 A particular difference, however, according to Kozuka, is that Japan’s Social Principles are more policy-oriented, while the European Ethics Guidelines take a rights-based approach. Interestingly, Kozuka—with references to Lawrence Lessig—concludes that “the role of the law as a mechanism of implementation will shrink and be substituted by the code as the use of AI becomes widespread.”Footnote 56 This notion is particularly meaningful in relation to automated policy implementation on large-scale digital platforms, shaping both human and institutional behaviour.Footnote 57
4. CONCLUSIONS
This article has put forward the difficulty of defining AI as one of the regulatory challenges that follow from the implementation and development of AI. While the AI field, historically visionary and today heterogeneous, arguably provides favourable conditions for research and development, the conceptual fuzziness creates a challenge for regulation and governance. As much recent critical research shows, it is perhaps the data dependency of today’s machine learning, in combination with a complexity that undermines explainability, that creates the risk of societal imbalances being not only reproduced but also reinforced, while at the same time evading detection. Furthermore, the article provides an account of the recent boom in ethics guidelines as a tool for governance in the field of AI, with particular focus on the EU. Finally, three main concluding statements can be made from the perspective of law and society.
4.1 The Temporality Issue of Technology and Law
History teaches us that regulatory balancing is difficult, especially in times of rapid technological change in society. At the same time, legal scholars such as Karl Renner, who analyzed the property law of Western Europe’s industrialization, also teach us that law can be an extremely dynamic and adaptive organism. It is conceivable that central parts of the Ethics Guidelines may be formalized in European and national legislation and regulation, focusing on the importance of (“an ecosystem of”) trust. The interpretation of existing legislation in light of the functionalities, possibilities, and challenges of AI systems is also a matter of serious concern, associated with major difficulties. Even though the “legal lag” is more complex than it may seem,Footnote 58 the speed of change in particular remains a difficult challenge in relation to the inertia of traditional regulation.Footnote 59 Legislative processes aimed at learning technologies with increasing agencyFootnote 60 require reflection, critical studies, and more knowledge in order to find the desirable societal balances between various interests. Transparency, traceability, and human oversight, especially, are not clearly covered—or understood—under current legislation in many economic sectors. The temporal gap between new technologies and well-fitted regulation, in combination with the many-headed balancing of interests, is very likely a significant reason why governance in the area is currently so heavily characterized by ethics guidelines.
4.2 From Principles to Process
The White Paper signifies an ongoing process of evaluating where the principled take on AI governance, expressed in a multitude of ethics guidelines, can find a balanced formalization in law. This is also signified by the work conducted by the High-Level Expert Group, which revisited and assessed the Ethics Guidelines jointly with companies and stakeholders during 2019 and 2020. The Member States’ supervisory authorities and agencies could be addressed specifically here too, in the sense that they will very likely be the ones to carry out relevant parts of any regulatory approach to AI focusing on what the High-Level Expert Group has expressed as a need for “explicability”—that is, auditability and explainability. This particular aspect of transparency stresses the need both for methodological developmentFootnote 61 and, likely, for closer collaboration between relevant supervisory authorities than is often the case at Member-State level.
The great range of ethics guidelines still displays a core of principle-based values but—being ethical guidelines—is relatively poor in procedural arrangements compared to law. This can be understood as an expression of how quick society’s transition towards data-dependent and applied AI has been, a transition in which the principle stage is essential. The subsequent procedural stage is necessary, however, both to strengthen the chances of implementing the principle-based values and to formalize them in legislation, assessment methodologies, and standardization. If one regards the growth in ethics guidelines as an expression of the rapidity of the development of AI methods, the procedural stage is an expected second stage. However, if one regards the ethics guidelines as industry’s reluctance to accept regulation of its activities—as a soft version of legislation that is intended to be toothless—then the procedural stage will meet considerable resistance. Perhaps the failure to address the power structures of contemporary data-driven platform markets, emphasized by critics, is a sign of the regulatory struggles to come in the leap from principles to process.
4.3 The Multidisciplinary AI Research Need
Contemporary data-dependent AI should not be developed in technological isolation, without continuous assessments from the perspectives of ethics, culture, and law.Footnote 62 Furthermore, given the applied status of AI, it is imperative that humanistic and social-scientific AI research be stimulated jointly with technological research and development. Given the learning aspects of data-dependent AI, there is an interaction at hand in which human values and societal structures constitute the training data. This means that social values and informal norms may be reproduced or even amplified—sometimes with terrible outcomes, as sketched below. From an empirical perspective, one could conclude that it is often human values and skewed social structures that lead to automated failures. In applied AI, learning simply arises not only from good and balanced examples, but also from the less proud sides of humanity: racism, xenophobia, gender inequalities, and institutionalized injustices.Footnote 63 The challenge here will thus be to sort normatively among the underlying data or, alternatively, to take a normative grip on the automation and scalability of self-learning technologies, so that their reproductive and amplifying tendencies produce something better and more balanced than the underlying material. There is, therefore, a multidisciplinary need for research in this field that requires collaboration between the mathematically informed computer-science disciplines, which have deep insights into how AI systems are built and operate, and the humanities and social science-oriented disciplines, which can theorize and understand their interaction with cultures, norms, values, and attitudes, or the meanings and consequences for power relations, states, and regulation.
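How skewed historical data come to be reproduced by a learning system can be shown with a deliberately simplified sketch (my own; the data, the “group” feature, and the hiring labels are entirely synthetic and illustrative):

```python
# Deliberately simplified, synthetic illustration of bias reproduction:
# a model trained on skewed historical decisions learns to repeat them.
from sklearn.linear_model import LogisticRegression

# Each historical case: [qualification_score, group] (group: 0 or 1).
# Group 1 candidates were rejected despite equal qualification scores.
X = [[7, 0], [7, 1], [8, 0], [8, 1], [6, 0], [6, 1], [9, 0], [9, 1]]
y = [1, 0, 1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# Two equally qualified candidates, differing only in group membership:
print(model.predict([[8, 0], [8, 1]]))   # e.g. [1 0] - the past bias is mirrored
```

Nothing in the algorithm itself is malicious; the skew lies entirely in the examples, which is precisely why the normative sorting of underlying data discussed above becomes a governance question.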
In conclusion, AI development within the European administration has taken a value-based and ethics-focused direction, with a focus on trustworthiness and human-centric design. It is an answer to the question of how to look at AI and its qualities, and it is here found commendable: the precision of self-learning and autonomous technologies needs to be assessed in its interaction with the values of society. It is a normative definition with bearing on future lines of development—a good AI is a socially entrenched and trustworthy one.