
Anthropology and the AI-Turn in Global Governance

Published online by Cambridge University Press:  16 August 2021

Maria Sapignoli*
Affiliation:
Assistant Professor, Department of Philosophy ‘Piero Martinetti,’ University of Milan, Milan, Italy.


Type
Essay
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © Maria Sapignoli 2021

Under the banner “AI (artificial intelligence) for good,” new technologies are becoming increasingly central to the agendas of global and regional institutions, as technologies to be embraced and regulated at the same time. This is indicated by the 2018 UN Secretary-General's Strategy on New Technology,Footnote 1 and by the recent European Commission proposal to regulate artificial intelligence systems.Footnote 2 In this essay, I discuss how anthropology and its ethnographic method could contribute to our understanding of the AI-turn in global governance by shedding greater light on the effects that the use of this technology has on society, the work of institutions, and the production and application of international law. I argue that engaging ethnographically with AI techniques and knowledge could also bring about a transformation in governance, policy-making, and anthropological theory.

The AI-Turn in Global Governance

Artificial intelligence can be described and enacted in different ways: by computer scientists, in technical and mathematical terms, as computational processes, including those derived from machine learning, statistics, or other forms of data processing; as an artificial neural network that can classify data and make predictions in ways that cannot yet be fully explained; and as the simulation of human intelligence, whereby technology is refined in order to imitate human reasoning, problem solving, and decision-making. The person in the street might point to the AI technologies present in daily life, such as Apple watches, phone apps, Amazon Alexa, and Tesla cars. Some consider AI a threat because it enables practices such as surveillance, predictive policing, and control over labor.

In the social sciences, AI is seen as related to global systems of power, composed of material infrastructures, supply chains, labor, classification, data, and so on, that in turn depend on political and social structures.Footnote 3 That technologies are embedded in the social contexts that produce them now seems broadly accepted. These sociotechnical assemblages inform the ways that humanity sees, engages with, and knows the world. Approaching AI as culture,Footnote 4 as discussed below, is an ideal terrain for anthropology; out of ethnographic attention comes a critical project that looks closely at AI's impacts as a knowledge system.

A symposium published in this journal in 2020 discussed how AI will affect international law and how international law might guide states’ decisions on how to regulate AI technologies.Footnote 5 This turn to AI is inspired by the hope that global institutions, in some ways weakened by state-centric structures of governance, can rely on new technologies to help ensure human rights compliance and make humanitarian crises more visible. AI is taken to offer possibilities for documenting, verifying, and monitoring human rights through the analysis of large amounts of data, as well as to enable data gathering in contexts where access is extremely difficult. Its use promises to help international organizations respond to crises more efficiently through the production of readily accessible information and refined, efficient decision-making. AI in global governance has also shifted attention from “compliance” to “prevention” of human rights violations.Footnote 6 It aims to detect human rights abuses through big data analytics and machine learning-enabled “trigger warnings.” In this way, the AI-turn in global governance seeks to answer the criticism of the United Nations as an institution that has too often been unresponsive to humanity's crises.

Machine learning technologies have already been incorporated into many UN initiatives, such as education, health, food delivery, peace, diplomacy, security, refugee management, humanitarian aid, human rights, environmental monitoring, sustainable development goals, and humanitarian crisis response.Footnote 7 These initiatives are turning to the use of “real time data” and “crisis mapping” to develop “quick and time-efficient policies,”Footnote 8 and are thereby refining ways that global diplomacy and international institutions relate to states, civil society, and the private sector.Footnote 9 This is occurring in a context in which more and more states are using new digital tools for systematically surveilling, documenting, and discrediting or intimidating human rights activists.Footnote 10

The Private Sector's Role in the Global Governance of AI

Another important actor in the AI-turn in global governance is the private sector, particularly tech companies, which figure both as regulators and as perpetrators of human rights violations. Their involvement can take the form of joint ventures and investments in the development of new technologies, including the establishment of multi-donor pooled funds for innovation. It can also take the form of private sector donations through the practice of “data philanthropy,” the voluntary sharing of data for the public good. The United Nations, together with its corporate partners, is developing technologies that present both important opportunities and risks in the administration of programs and the legal geography of global governance.Footnote 11

Recent private sector initiatives use technologies to shift attention to human rights violations from post-facto response to ante-facto prediction, raising the question of who decides which norms regulate, and which data make up, the world from which AI systems learn. For example, Microsoft formed a partnership in 2017 with the Office of the UN High Commissioner for Human Rights to develop and make use of advanced information technology to “predict, analyze, and respond to critical human rights situations.” Their joint project, “Rights View Dashboard,” has the main objective of connecting, integrating, and processing different data sources, including real-time data on human rights violations from NGOs, activists, governments, UN country teams, and missions around the world, into a single dashboard. Through verification and analysis, the project aims to issue early warnings and enable “swifter response in crisis situations,” while offering “smart data to guide response.”Footnote 12 AI technologies and their architects clearly play a key role in guiding the eyes of institutions, creating regimes of visibility and invisibility, and shaping human rights practices and knowledge.
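The design choices buried in such a pipeline are easiest to see in code. The sketch below is a deliberately minimal, hypothetical reconstruction of the general logic of a corroboration-based early-warning dashboard; it is not the actual (non-public) Rights View system, and every name, source category, and threshold in it is an invented assumption.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Report:
    region: str
    source: str    # e.g., "NGO", "UN country team" (invented categories)
    incident: str

def early_warnings(reports, min_sources=3):
    """Flag (region, incident) pairs corroborated by enough distinct sources.

    A toy stand-in for "verification and analysis": corroboration here is
    simply the number of distinct source types reporting the same incident.
    """
    corroboration = defaultdict(set)
    for r in reports:
        corroboration[(r.region, r.incident)].add(r.source)
    return [
        {"region": region, "incident": incident, "sources": sorted(sources)}
        for (region, incident), sources in corroboration.items()
        if len(sources) >= min_sources
    ]

reports = [
    Report("Region A", "NGO", "arbitrary detention"),
    Report("Region A", "UN country team", "arbitrary detention"),
    Report("Region A", "activist network", "arbitrary detention"),
    Report("Region B", "government", "displacement"),
]
print(early_warnings(reports))  # only Region A crosses the threshold
```

Even in this toy version, the choice of `min_sources` and of what counts as a distinct source determines which crises become visible on the dashboard and which remain invisible, which is precisely the kind of design decision that deserves ethnographic attention.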

The use of new technologies for human rights and in international governance has raised serious concerns regarding security, privacy, surveillance, liability, the reproduction and production of inequalities, and unequal access to justice. Private companies can sometimes resemble international human rights tribunals in their practices of issuing binding rulings and non-binding recommendations, developing answers to procedural questions, and setting new standards that they then apply.Footnote 13 Engineers, designers, and computer scientists step into the roles of human rights practitioners and advocates when they translate policy into code, decide what platform content is legitimate, create algorithms, and design models to predict where and when human rights violations will occur. These experts are aware that they are valuable resources to their companies and often make their voices heard when their employers do not follow their own principles of social good. They may risk their jobs by speaking out against tools that end up discriminating against certain groups of people, or against the signing of troubling contracts with state militaries and police agencies.Footnote 14

Ethnographic research and anthropological theory can contribute to existing analyses of the role of the private sector by critically studying how companies translate (or fail to translate) human rights principles during the early stages of product design. By situating technologies in their local contexts while also analyzing their global circulation, ethnography can offer insights before those technologies are deployed and cause harm. It can also demonstrate that deploying technology to solve problems of governance is risky, because technological systems may reflect the structural privileges of those who design them,Footnote 15 those who create the worlds from which machines learn.

AI as a Knowledge Environment

Digital practices influence the ways that institutions interact with and understand the populations they serve. The reality from which the technology learns, and the new reality the technology produces, are designed by composite figures of humans learning collaboratively with algorithms, and algorithms with algorithms, as well as by the data and models used to train machines. The new reality is constructed from a normative reasoning made of “if/then” statements that simulate possible actions and calculate their consequences, with infinite outputs and future possibilities. As these systems seek to achieve order out of complexity, they reduce infinite realities by selecting those that, theoretically at least, matter most. In other words, AI can only observe what its code allows it to construct as it evolves as a system. AI, therefore, has its limits.
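To make that reduction concrete, here is a minimal sketch of “if/then” normative reasoning, with a hand-coded world model standing in for the data and proxies an actual system would use; every action, consequence, and weight is an invented assumption.

```python
# A toy "if/then" decision system: enumerate candidate actions, look up the
# consequences the code encodes for each, and rank actions by a weighted
# score. All names and numbers are illustrative assumptions.

WORLD_MODEL = {
    # if this action is taken, then these are its consequences, as coded
    "deploy aid convoy": {"civilians_reached": 0.7, "staff_risk": 0.4},
    "remote cash transfer": {"civilians_reached": 0.5, "staff_risk": 0.1},
    "do nothing": {"civilians_reached": 0.0, "staff_risk": 0.0},
}

# The weights encode a normative judgment: how much staff risk is acceptable
# per civilian reached? Changing them changes which action "matters most."
WEIGHTS = {"civilians_reached": 1.0, "staff_risk": -0.8}

def score(consequences):
    return sum(WEIGHTS[key] * value for key, value in consequences.items())

ranked = sorted(WORLD_MODEL, key=lambda action: score(WORLD_MODEL[action]),
                reverse=True)
print(ranked)  # ['remote cash transfer', 'deploy aid convoy', 'do nothing']
```

Consequences the model does not encode, say the long-term effects of cash transfers on local markets, simply do not exist for the system: it can only observe what its code allows it to construct.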

These sociotechnical assemblages can leave many unrepresented in the “digital smoke signals”Footnote 16 that global organizations use to understand the conditions of the populations they serve. The emphasis on “big data” also masks disparities in power among social groups and regions of the world. Data that are missing, incomplete, or prone to error are not represented in AI-based solutions and predictions.Footnote 17 Much like the indicators explored by Sally Engle Merry and others,Footnote 18 new technologically sophisticated practices can reproduce historical inequalities as well as unintentionally create new ones. Furthermore, the data-subjects that emerge from computational reasoning, our “digital selves,” are transitory and out of the subjects’ control. They depend on the algorithms that correlate data and translate them, and on the data and proxies that build them in a specific moment.Footnote 19 In turn, algorithms are “prisms that both reflect and reconfigure social dynamics.”Footnote 20 Just like indicators, algorithms are not inherently good or bad as modes of governance, “but contribute to the ways in which the world is understood and decisions are made in the global arena.”Footnote 21 However, algorithms go a step further since they combine big data with automated processing aimed at facilitating decision-making, thereby increasing the illusion of objectivity.
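The consequence of such data gaps can be shown with a minimal simulation (all numbers invented): two regions with the same underlying incident rate but very different reporting coverage, observed by an analyst who sees only what is reported.

```python
import random

random.seed(0)

# Toy simulation of the "missing data" problem: the true incident rate is
# identical in both regions, but reporting coverage is not. An analyst who
# sees only reported incidents will conclude the remote region is safer.
TRUE_RATE = 0.3                                # invented underlying rate
COVERAGE = {"connected": 0.9, "remote": 0.1}   # invented reporting rates

def observed_rate(coverage, n=100_000):
    """Fraction of n trials in which an incident both occurs and is reported."""
    reported = sum(
        1 for _ in range(n)
        if random.random() < TRUE_RATE and random.random() < coverage
    )
    return reported / n

for region, cov in COVERAGE.items():
    print(region, round(observed_rate(cov), 3))
# connected ~0.27, remote ~0.03: a gap in data reads as a gap in reality
```

The disparity in the printed rates is produced entirely by reporting coverage, not by any difference in the world, which is the sense in which missing data are not represented in AI-based solutions and predictions.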

New technologies applied to the world's problems play a significant role as knowledge producers about humanity and its diversity. The AI-turn of the United Nations, through the power of digitization, big data analytics, and automated decision-making, has contributed to the classification, prioritization, and production of humanities and normativities in global governance. As I now discuss, anthropological theories and methods have the potential to elucidate the workings of and knowledge embedded and produced by AI systems.

The Anthropology of AI in International Law

Ethnography is a promising method for mapping the social life of an AI system in human rights practice, from development to application to its effects. It provides the tools to study collaborationsFootnote 22 among the experts (lawyers, activists, social scientists, computer scientists, and so on) who create, regulate, implement, and monitor technologies. Such technologies “do not work in a vacuum, but rather depend upon a complex network of expertise, maintenance, and governance that often embody structural inequalities”Footnote 23 and travel across jurisdictions and contexts. They must be understood in these expansive contexts in order to grasp what is included in or excluded from the algorithms and the criteria of decision-making, both in development and in application.

Machine learning and its normative reasoning constitute a powerful governing rationality. In a way, these systems promise to discover unseen patterns in human interaction through processes that resemble the inductive method of anthropology, with large amounts of information forming the basis of analysis and interpretation. The classification and translation of a world that has to be made machine readable through datasets, including the proxies through which those datasets claim to measure the world, lend themselves to traditional anthropological questions about how classifications get made and what their significance is in social worlds. What do AI systems produce? How do they interact with the classified? How does the universalistic logic and language of programming and computing get translated and appropriated by specific realities? And how does algorithmic data processing influence global governance and collective life?

Resembling the way anthropologists have approached the study of human rights as culture,Footnote 24 Nick Seaver writes about algorithms, not as technologies affecting or being affected by culture, but as culture.Footnote 25 In his view, algorithms as culture materialize values and meanings. They are not singular technical objects that enter into many different cultural interactions, but are rather “unstable objects, culturally enacted by the practices people use to engage with them.”Footnote 26 As culture, they are composed of collective human thoughts and practices.

Studying the AI-turn in international governance ethnographically would entail an ethics of realism: one that takes into consideration and reveals the cultural, political, and economic contexts in which AI programs are embedded, and how their applications are translated into diverse cultural contexts and jurisdictions with different consequences. It would be an approach that engages collaboratively with experts to reflect on and observe the design of AI and the values embedded in its production. In recent years, companies and international and regional institutions have issued numerous guidelines and regulations for an ethical and human rights-based approach to AI. But there are limits to ethical and human rights-based frameworks that must be identified and addressed. Moreover, AI legal and ethical guidelines intended to apply on a global scale have been produced mainly in Western countries and by big-tech companies, leaving many voices out. All too often, companies’ self-regulating ethical frameworks decide in practice what ethical AI means.

AI's outcomes are unknown: they are predictions of predictions, difficult to explain and therefore hard to anticipate. Legal framing alone can hardly mitigate AI's worst effects. An empirical study that produces a detailed, descriptive understanding of these processes could give access to the less obvious embedded practices and harms that emerge from AI applications in different contexts.

Conclusion and a Way Forward

Anthropologists investigating the AI-turn of global governance will likely find themselves having to come to terms with five emergent phenomena: (1) the growing role of data technicians in developing digital technologies applied to a myriad of the world's problems, including the development of international law and the importance of collaboration in the work being conducted; (2) greater private sector participation in, and responsibility for, human rights and global governance, often in ways that are inseparable from corporate goals of image production and profitability; (3) the invisible hand of automatic decision-making affecting targeted population(s); (4) the digitization and semi-automatization of bureaucratic practices; and (5) the creation of algorithmically-interpreted data identities that change every time new data enters the system.

The United Nations still lacks internal policies and audit mechanisms for assessing the impacts of AI and the digitalization of its bureaucracy. The use of new technologies for the development of human rights standards is an emerging field, one that goes beyond the identification of human groups as distinct human rights-claimants and focuses instead on technologies as actors that operate in important ways on human lives. If we do not critically and empirically analyze these processes, as Philip Alston put it, “there is a real risk here that the rule of web design will replace the rule of law.”Footnote 27

The ethnography of new technologies can offer an environment for theoretical innovation. Engaging ethnographically with AI techniques and knowledge will not only generate new insights about the nature of knowledge production and decision-making, but could also bring about a transformation of anthropological theory. Social science methods are more than just incremental techniques for understanding the world; they are also social phenomena in and of themselves, both because they emerge from particular social worlds that organize ontologies and epistemologies in their own particular ways, and because they actively participate in the social worlds they were designed to comprehend. Just as social scientists have engaged in a critical analysis of the rule of law, they should now unpack the rule of AI design, and hopefully mitigate its consequences, both intended and unintended. An anthropology of these new technologies will be challenged to address the nature of governance and data-driven governing rationalities in the unfolding twenty-first century.

References

4 Nick Seaver, What Should an Anthropology of Algorithms Do?, 33 Cult. Anthropology 375 (2018).

6 Cf. Galit A. Sarfaty, Can Big Data Revolutionize International Human Rights Law?, 39 U. Pa. J. Int'l L. 73 (2017).

7 See, for example, the work of UN Global Pulse, the adoption of AI-based solutions by the UN Children's Fund (UNICEF) and the UN High Commissioner for Refugees, and the expansion in the use of AI by the World Food Programme.

9 Deeks, supra note 5.

10 Molly K. Land & Jay D. Aronson, New Challenges for Justice and Accountability, 16 Ann. Rev. of Law & Soc. Sci. 223, 232 (2020).

11 Sapignoli, supra note 8, at 7.

13 Laurence Helfer & Molly K. Land, Is the Facebook Oversight Board an International Human Rights Tribunal?, Lawfare (May 13, 2021).

14 Maria Sapignoli & Ronald Niezen, Global Legal Institutions, in The Oxford Handbook of Law and Anthropology (Marie-Claire Foblets et al. eds., 2020).

15 Land & Aronson, supra note 10, at 232.

16 Steve Lohr, Searching Big Data for ‘Digital Smoke Signals’, N.Y. Times (Aug. 7, 2013).

17 Sapignoli, supra note 8, at 6.

20 Angèle Christin, The Ethnographer and the Algorithm: Beyond the Black Box, 49 Theory & Soc'y 897 (2020).

21 Merry, supra note 18, at 36.

22 Annelise Riles, From Comparison to Collaboration: Experiments with a New Scholarly and Political Form, 78 Law & Contemp. Probs. 147 (2015).

23 Land & Aronson, supra note 10, at 236.

24 Jane Cowan, Culture and Rights After Culture and Rights, 108 Am. Anthropologist 9 (2006).

25 Seaver, supra note 4.

26 Id.

27 UN General Assembly, 74th Session, Item 72(b), online streaming (Oct. 18, 2019).