
12 - Against Procedural Fetishism in the Automated State

from Part III - Synergies and Safeguards

Published online by Cambridge University Press: 16 November 2023

Zofia Bednarz, University of Sydney
Monika Zalnieriute, University of New South Wales, Sydney

Summary

This chapter offers a synthesis on the role the law has to play in Automated States. Arguing for a new research and regulatory agenda on AI and ADM beyond the artificial ‘public’ and ‘private’ divide, it seeks to identify the new approach and safeguards necessary to make AI companies and the Automated States accountable to their customers, citizens, and communities. I argue that emphasis on procedural safeguards alone – or what I call procedural fetishism – is not enough to counter the unprecedented levels of AI power in the Automated States. Only by shifting our perspective from the procedural to the substantive can we search for new ways to regulate the future in the Automated States. The chapter concludes the collection with an elaboration of what more substantive regulation should look like: creating a global instrument on data privacy; redistributing wealth and power by breaking up and taxing AI companies; increasing public scrutiny and adopting prohibitive laws; democratizing AI companies by making them public utilities; and giving people a say in how these companies should be governed. Crucially, we must also decolonize future AI regulation by recognizing colonial practices of extraction and exploitation and by paying attention to the voices of Indigenous peoples and communities of the so-called Global South. With all these mutually reinforcing efforts, the new AI regulation will debunk the corporate and state agenda of procedural fetishism and establish a new social contract for the age of AI.

Type: Chapter
Information: Money, Power, and AI: Automated Banks and Automated States, pp. 221–240
Publisher: Cambridge University Press
Print publication year: 2023
Creative Commons
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

12.1 Introduction

The infamous Australian Robodebt scheme and the application of the COMPAS tool in the United States are just a few examples of abuse of power in the Automated State. However, our efforts to tackle these abuses have largely failed: corporations and states have used AI to influence many crucial aspects of our public and private lives – from our elections to our personalities and emotions, and from environmental degradation through the extraction of global resources to labour exploitation. And we do not know how to tame them. In this chapter I suggest that our efforts have failed because they are grounded in what I call procedural fetishism – an overemphasis on procedural safeguards and the assumption that transparency and due process can temper power and protect the interests of people in the Automated State.

Procedural safeguards, rules, and frameworks play a valuable role in regulating AI decision-making and directing it towards accuracy, consistency, reliability, and fairness. However, procedures alone can be dangerous, legitimizing excessive power and obfuscating the largest substantive problems we face today. In this chapter, I show how procedural fetishism redirects the public from more substantive and fundamental questions about the concentration and limits of power to procedural micro-issues and safeguards in the Automated State. Such redirection merely reinforces the status quo. Procedural fetishism detracts from questions of substantive accountability and obligations by diverting attention to ‘fixing’ procedural micro-issues that have little chance of changing the political or legal status quo. The regulatory efforts and scholarly debate, plagued by procedural fetishism, have been blind to colonial AI extraction practices, labour exploitation, and the dominance of US tech companies, as if these did not exist. Procedural fetishism – whether corporate or state – is dangerous. Not only does it defer social and political change, it also legitimizes corporate and state influence and power under an illusion of control and neutrality.

To rectify the imbalance of power between people, corporations, and states, we must shift the focus from soft law initiatives to substantive accountability and tangible legal obligations for AI companies. Imposing data privacy obligations directly upon AI companies with an international treaty is one (but not the only) option. The viability of such an instrument has been doubted: human rights law and international law, so the argument goes, are state-centric. Yet, as data protection law illustrates, we already apply (even if poorly) certain human rights obligations to private actors. Similarly, the origins of international law date back to powerful corporations that were the ‘Googles’ and ‘Facebooks’ of their time. In parallel to such a global instrument on data privacy, we must also redistribute wealth and power by breaking up and taxing AI companies, increase public scrutiny by adopting prohibitive laws, and democratize AI technologies by making them public utilities. Crucially, we must recognize colonial AI practices of extraction and exploitation and pay attention to the voices of Indigenous peoples and communities of the so-called Global South. With all these mutually reinforcing efforts, a new AI regulation will resist procedural fetishism and establish a new social contract for the age of AI.

12.2 Existing Efforts to Tame AI Power

Regulatory AI efforts cover a wide range of policies, laws, and voluntary initiatives: domestic constitutions, laws, and judicial decisions at the national level; regional and international instruments and jurisprudence; self-regulatory initiatives; and transnational non-binding guidelines developed by private actors and NGOs.

Many recent AI regulatory efforts aim to tackle private tech power with national laws. For example, in the United States, five bipartisan bills collectively referred to as ‘A Stronger Online Economy: Opportunity, Innovation and Choice’ have been proposed to restrain tech companies’ power and monopolies.Footnote 1 In China, AI companies once seen as untouchable (particularly Alibaba and Tencent) faced a tough year in 2021.Footnote 2 For example, the State Administration for Market Regulation (SAMR) took aggressive steps to rein in monopolistic behaviour, levying a record US$2.8 billion fine on Alibaba.Footnote 3 AI companies are also facing regulatory pressure in Australia targeting anti-competitive behaviour.Footnote 4

At a regional level, perhaps the strongest example of AI regulation is in the European Union, where several prominent legislative proposals have been tabled in recent years. The Artificial Intelligence ActFootnote 5 and the Data Governance ActFootnote 6 aim to limit the use of AI and ADM systems. These proposals build on the EU’s strong track record in the area: the EU General Data Protection Regulation (GDPR),Footnote 7 for example, has regulated the processing of personal data. With its binding laws and regulations, the EU has been leading AI regulatory efforts on a global scale.

On an international level, many initiatives have attempted to draw the boundaries of appropriate AI use, often resorting to the language of human rights. For example, the Organisation for Economic Co-operation and Development (OECD) adopted its AI Principles in 2019,Footnote 8 which draw inspiration from international human rights instruments. However, despite the popularity of human rights discourse in AI regulation, international human rights instruments, such as the International Covenant on Civil and Political RightsFootnote 9 or the International Covenant on Economic, Social and Cultural Rights,Footnote 10 are not directly binding on private companies.Footnote 11 Instead, various networks and organizations try to promote human rights values among AI companies.

However, these efforts have to date had limited success in taming the power of AI and in dealing with global AI inequalities and harms. This weakness stems from the proceduralist focus of AI regulatory discourse: proponents have assumed that procedural safeguards, transparency, and due process can temper power and protect the interests of people against the power wielded by AI companies (and the State) in the Automated State. Such assumptions stem from the liberal framework – focused on individual rights, transparency, due process, and procedural constraints – which AI scholarship and regulation have so far embraced without questioning its capacity to tackle power in the Automated State.

These assumptions are closely related to the normative foundations of AI and automated decision-making systems (ADMS) governance, which stem, in large part, from a popular analogy between tech companies and states: AI companies exert quasi-sovereign influence over commerce, speech and expression, elections, and other areas of life.Footnote 12 It is also this analogy, with the power of the state as the starting point, that leads to the proceduralist focus and emphasis in AI governance discourse: just as due process and safeguards constrain the state, they must now also apply to powerful private actors like AI companies. Danielle Keats Citron’s and Frank Pasquale’s early groundbreaking calls for technological due process have been influential: they showed how constitutional principles could be applied to technology and automated decision-making – by administrative agencies and private actors alike.Footnote 13 The construction of various procedural safeguards and solutions, such as testing, audits, algorithmic impact assessments, and documentation requirements, has dominated the AI decision-making and ADMS literature.Footnote 14

Yet, by devoting all our energy to these procedural fixes, we miss the larger picture and are blind to our own coloniality: we rarely (if at all) discuss US dominance in the AI economy, and we seldom mention the resource exploitation and environmental degradation caused by AI and ADMS technologies. We rarely ask how AI technologies reinforce existing global power disparities between the so-called Global South and the Imperialist West/North, or how they contribute to climate disaster, the exploitation of people, and the extraction of resources in the so-called Global South. These substantive issues matter, and arguably matter more than the design of a particular AI auditing tool. Yet, we are too busy designing the procedural fixes.

To be successful, AI regulation must resist what I call procedural fetishism – a strategy, employed by AI companies and state actors, to redirect the public from more substantive and fundamental questions about the concentration and limits of power in the age of AI to procedural safeguards and micro-issues. This diversion reinforces the status quo and Western dominance, and accelerates environmental degradation and the exploitation of postcolonial peoples and resources.

12.3 Procedural Fetishism

Proceduralism, in its broadest sense, refers to ‘a belief in the value of explicit, formalized procedures that need to be followed closely’,Footnote 15 or ‘the tendency to believe that procedure is centrally important’.Footnote 16 The term is often used to describe the legitimization of rules, decisions, or institutions through the process used to create them, rather than by their substantive moral value.Footnote 17 This trend towards proceduralism – or what I call procedural fetishism – also dominates our thinking about AI: we believe that having certain ‘safeguards’ for AI systems is inherently valuable, that those safeguards tame power and provide sufficient grounds to trust the Automated State. However, procedural fetishism undermines our efforts towards justice for several reasons.

First, procedural fetishism offers an appearance of political and normative neutrality, which is convenient to AI companies and policymakers, judges, and regulators alike. Proceduralism allows various actors to ‘remain agnostic towards substantive political and moral values’ when ‘faced with the pluralism of contemporary societies’.Footnote 18 At the ‘heart’ of all proceduralist accounts of justice, therefore, is the idea that, as individual members of a pluralist system, we may agree on what amounts to a just procedure (if not a just outcome), and ‘if we manage to do so, just procedures will yield just outcomes’.Footnote 19 However, procedural fetishism enables various actors not only to remain agnostic, but to avoid confronting hard political questions. For example, the courts engage in procedural fetishism to appear neutral and to avoid tackling the politically difficult questions of the necessity, proportionality, and legitimacy of corporate and state surveillance practices, coming up instead with procedural band-aids.Footnote 20 The focus on procedural safeguards provides a convenient way to create an appearance of regulatory effort without actually prohibiting any practices or conduct.

A good example of this neutralizing appearance of procedural fetishism is found in AI governance’s blind eye to very important policy issues impacted by AI, such as climate change, environmental degradation, and the continued exploitation of resources from so-called Third World countries. The EU- and US-dominated AI debate has focused on inequalities reinforced through AI in organizational settings in business and public administration, but it has largely been blind to the inequalities of AI on a global scale,Footnote 21 including the global outsourcing of labourFootnote 22 and the flow of capital through colonial and extractive processes.Footnote 23 While it is the industrial nations in North America, Europe, and East Asia who compete in the ‘race for AI’,Footnote 24 AI and ADM systems depend on global resources, most often extracted from the so-called Global South.Footnote 25 Critical AI scholars have analyzed how the production of capitalist surplus for a handful of big tech companies draws on large-scale exploitation of the soil, minerals, and other resources.Footnote 26 Other critical scholars have described the processes of extraction and exchange of personal data itself as a form of dispossession and data colonialism.Footnote 27 Moreover, AI and ADM systems have also been promoted as indispensable tools in international development,Footnote 28 but many have pointed out how those efforts often reinforce further colonization and extraction.Footnote 29 Procedural fetishism also downplays the human labour involved in AI technologies, which draw on the underpaid, racialized, and not at all ‘artificial’ human labour primarily of the so-called Global South. The AI economy is one in which highly precarious working conditions for gig economy ‘click’ workers are necessary to the business models of AI companies.

12.3.1 Legitimizing Effect of Procedural Fetishism

Moreover, procedural fetishism is used strategically not only to distract from power disparities but also to legitimize unjust and harmful AI policies and actions by exploiting people’s perceptions of legitimacy and justice. As early as the 1980s, psychological research undermined the traditional view that substantive outcomes drove people’s perception of justice, showing that perception turns more on the procedure for reaching the substantive outcome.Footnote 30 Many of the ongoing proceduralist reforms, such as Facebook’s Oversight Board, are primarily conceived for this very purpose – to make it look as though Facebook is doing the ‘right thing’ and delivering justice, irrespective of whether substantive policy issues change or not. Importantly, such corporate initiatives divert attention from the problems caused by the global dominance of the AI companies.Footnote 31

The language of ‘lawfulness’ and constitutional values, prevalent in AI governance debates, works as a particularly strong legitimizing catalyst in both public and policy debates. As critical scholars have pointed out, using terminology typically employed in the context of elected democratic governments misleads, for it infuses AI companies with democratic legitimacy and conflates corporate interests with public objectives.Footnote 32

In the following sections, I suggest that this language is prevalent not by accident, but through sustained corporate efforts to legitimize corporate power and business models, to avoid regulation, and to enhance reputations for commercial gain. AI companies often come up with private solutions to develop apparent safeguards against their own abuse of power and to increase their transparency to the public. Yet, as I have argued earlier, many such corporate initiatives are designed to obfuscate and misdirect policymakers, researchers, and the public in a bid to strengthen the corporate brand and avoid regulation and binding laws.Footnote 33 AI companies have also successfully corporatized and attenuated the laws and regulations that bind them. Through many procedures, checklists, and frameworks, corporate compliance with existing binding laws has often been a strategic performance, devoid of substantial change in business practices. Such compliance has worked to legitimize business policy and corporate power to the public, regulators, and the courts. In establishing global dominance, AI companies have also been aided by governments.

12.3.2 Procedural Washing through Self-Regulation

First, corporate self-regulatory AI initiatives are often cynical marketing and social branding strategies to increase public confidence in corporate operations and create a better public image.Footnote 34 AI companies often self-regulate selectively, disclosing and addressing only what is commercially desirable for them. For example, when creating an Advanced Technology External Advisory Council (Council) in 2019 to implement Google’s AI Principles,Footnote 35 Google refused to reveal the internal processes that led to the selection of a controversial member, anti-LGBTI advocate and climate change denial sponsor Kay Coles James.Footnote 36 While employee activism forced Google to rescind the Council, this episode ironically showed Google’s unwillingness to publicly share the selection criteria of its AI governance boards.

Second, AI companies self-regulate only if it pays off for them in the long run, so profit is the main concern.Footnote 37 For example, in 2012 IBM provided police forces in the Philippines with video surveillance technology which was used to perpetuate President Duterte’s war on drugs through extrajudicial killings.Footnote 38 At the time, IBM defended the deal with the Philippines, saying it ‘was intended for legitimate public safety activities’.Footnote 39 The company’s practice of providing authoritarian regimes with technological infrastructure is not new and dates back to the 1930s, when IBM supplied the Nazi Party with unique punch-card technology that was used to run the regime’s censuses and surveys to identify and target Jewish people.Footnote 40

Third, corporate initiatives also allow AI companies to pre-empt regulation of their activities. A good example of pro-active self-regulation is Facebook’s Oversight Board, which reviews individual decisions, not overarching policies. Attention is thus diverted away from critiquing the legitimacy or appropriateness of Facebook’s AI business practices themselves and focused instead on Facebook’s ‘transparency’ about them. The appropriateness of the substantive AI policies themselves is obfuscated, or even legitimated, through micro procedural initiatives with little power to change the status quo. In setting up the board, Facebook has attempted not only to stave off regulation, but also to position itself as an industry regulator by inviting competitors to use the Oversight Board as well.Footnote 41 AI companies can then depict themselves as their own regulators.

12.3.3 Procedural Washing through Law and the Help of the State

Moreover, AI companies (and public administrations) have also exploited the ambiguity of the laws regulating their behaviour through performative compliance. Often, policymakers have compounded this problem by creating legal provisions that advance the proceduralist agenda of corporations, including via international organizations and international law, and regulators and courts have enabled corporatized compliance in applying these provisions by focusing on the quality of procedural safeguards.

For instance, Ari Ezra Waldman has shown how the regulatory regime of data privacy, even under the GDPR – the piece of legislation that has gained a reputation as the strongest and most ambitious law in the age of AI – has been ‘managerialized’: interpreted by compliance professionals, human resource experts, marketing officers, outside auditors, in-house and firm lawyers, systems engineers, technologists, and salespeople to prioritize values of efficiency and innovation in the implementation of data privacy law.Footnote 42 As Waldman has argued, many symbolic structures of compliance are created; yet, apart from an exhaustive suite of checklists, toolkits, privacy roles, and professional training, there is hardly any substantive action to enhance consumer protection or minimize online data breaches.Footnote 43 These structures comply with the law in name but not in spirit, yet they are treated by lawmakers and judges as best practice.Footnote 44 The law thus fails to achieve its intended goals as the compliance metric developed by corporations becomes dominant,Footnote 45 and the ‘mere presence of compliance structures’ is assumed to be ‘evidence of substantive adherence with the law’.Footnote 46 Twenty-six recent studies analyzed the impact of the GDPR and US data privacy laws, and none found any meaningful influence of these laws on people’s data privacy protection.Footnote 47

Many other laws have themselves been designed in the spirit of procedural fetishism, enabling corporations to avoid liability without changing their substantive policies, simply by establishing prescribed procedures. Known as ‘safe harbours’, such laws allow companies to escape liability merely by following a prescribed procedure. For example, under the traditional notice-and-consent regime in the United States, companies avoid liability as long as they post their data use practices in a privacy policy.Footnote 48

Regulators and the courts, by emphasizing procedural safeguards, also engage in performative regulation, grounded in procedural fetishism, that limits pressure for stricter laws by convincing citizens and institutions that their interests are sufficiently protected, without inquiring into the substantive legality of corporate practices. A good example is the Federal Trade Commission’s (FTC) audit and ‘assessment’ requirements, which allow corporations to demonstrate compliance through checklists.Footnote 49 Similar procedural fetishism is also prevalent in jurisprudence that assesses specific state practices not by reference to their effectiveness in advancing the proclaimed goals, but purely by the stringency of the procedures governing the practice.Footnote 50

12.3.4 Procedural Washing through State Rhetoric and International Law

Procedural washing by AI companies has also been aided by executive governments – both through large amounts of public funding and subsidization for these companies, and through the development of laws, including international laws, that suit corporate and national agendas. Such support is not one-sided, of course: the state expands its economic and geopolitical power through technology companies. All major powers, including the United States, the European Union, and China, have been active in promoting their AI companies. For example, the mutually beneficial and interdependent relationship between the US government and information technology giants has been described as the information-industrial complex, the data-industrial complex, and so on.Footnote 51 These insights build on the work of Herbert Schiller, who described the continuous subsidization of private communications companies by the US government back in the 1960s and 1970s.Footnote 52 Grounding their work in these classical insights, Powers and Jablonski describe how the dynamics of the information-industrial complex have catalyzed the rapid growth of information and communication technologies within the global economy while firmly embedding US strategic interests and companies at the heart of the current neoliberal regime.Footnote 53 Such a central strategic position necessitates continuous action and support from the US government.

To maintain the dominance of US AI companies internationally, the US government aggressively promotes the global free trade regime, intellectual property enforcement, and other policies that suit US interests. For example, the dominance of US cultural and AI products and services worldwide is secured via the free flow of information doctrine at the World Trade Organization, which the US State Department pushed through the GATT, GATS, and TRIPS.Footnote 54 The free flow of information doctrine allows US corporations to collect and monetize the personal data of individuals from around the world. In this way, data protection and privacy are not treated as part of the ‘universal’ values of the Internet, whereas strong intellectual property protection is not only viable and doable, but also strictly enforced globally.

Many other governments have also been complicit in this process. For example, the EU AI Act, despite its declared commitment to ‘human-centred AI’, is silent about the environmental degradation and social harms that occur in other parts of the world because of the large-scale mineral and resource extraction and energy consumption necessary to produce and power AI and digital technologies.Footnote 55 The EU AI Act is also silent on the conditions under which AI is produced and on the coloniality of the AI political economy: it does not address precarious working conditions and global labour flows. The EU AI Act is thus also plagued by procedural fetishism: it does not seek to improve the global conditions for environmentally sustainable AI production. In short, the United States and the EU, at least, have prioritized inaction: self-regulation over regulation, non-enforcement over enforcement, and judicial acceptance over substantive resistance. While stressing the differences between US and EU regulatory approaches has been popular,Footnote 56 the end result has been very similar in both: the tech companies collect and exploit personal data not only for profit, but for political and social power.

In sum, procedural fetishism in AI discourse is dangerous because it creates an illusion of normative neutrality. Our efforts at constraining AI companies are replaced with a corporate vision of the division of power and wealth between the corporations and the people, masked under a veil of neutrality.

12.4 The New Social Contract for the Age of AI

The new social contract for the age of AI must try something different: it must shift the focus from soft law initiatives and performative corporate compliance to substantive accountability and tangible legal obligations for AI companies. Imposing directly binding data privacy obligations on AI companies with an international treaty is one (but not the only!) option. Other parallel actions include breaking up and taxing tech companies, increasing competition and public scrutiny, and democratizing AI companies by involving people in their governance.

12.4.1 International Legally Binding Instrument Regulating Personal Data

One of the best ways to tame AI companies is via the ‘currency’ with which people often ‘pay’ for their services – personal data. The new social contract should not merely be concerned with the procedures that AI companies should follow in continuing to exploit personal data. Instead, it should impose substantive limits on corporate AI action: specifying the circumstances in which data cannot be collected and used, and how and when it can be exchanged, and banning manipulative technologies and biometrics to ensure mental welfare and social justice.

Surely, domestic legislators should develop such laws (and I discuss this below too). However, given that tech companies exploit our data across the globe, we need a global instrument to lead our regulatory AI efforts. Imposing directly binding obligations on AI companies with an international treaty should be one (but not the only!) option. While the exact parameters of such a treaty are beyond the scope of this chapter, I would like to rebut one misleading argument, often used by AI companies: that private companies cannot have direct obligations under international law.

The relationship between private actors and international law has been a subject of intense political and scholarly debate for over four decades,Footnote 57 since the first attempts to develop a binding international code of conduct for multinational corporations in the 1970s.Footnote 58 The most recent efforts, in a process that started with the so-called Ecuador Resolution in 2014, have led to the ‘Third Revised Draft’ of the UN Treaty on Business and Human Rights, released in 2021.Footnote 59 The attempts to impose binding obligations on corporations have not yet been successful because of enormous political resistance from private actors, for whom such developments would be costly. Corporate resistance spans many fronts; here I can focus only on debunking the corporate myth that such constitutional reform is not viable, and even legally impossible, because of the state-centric nature of human rights law. Yet, as data protection law, discussed above, illustrates, we already apply (even if poorly) certain human rights obligations to private actors. We can and should demand more from corporations in other policy areas.

Importantly, we must understand the role of private actors under international law. Contrary to the popular myth that international law was created by and for nation-states, ‘[s]ince its very inception, modern international law has regulated the dealings between states, empires and companies’.Footnote 60 The origins of international law itself date back to powerful corporations that were the Googles and Facebooks of their time. Hugo Grotius, often regarded as the father of modern international law, was himself counsel to the Dutch East India Company – the largest and most powerful corporation in history. In this role, Grotius’ promotion of the principle of the freedom of the high seas and his views on the status of corporations were shaped by the Dutch East India Company’s interest in ensuring the security and efficacy of its trading routes.Footnote 61 As Peter Borschberg explains, Grotius crafted his arguments to legitimize the rights of the Dutch to engage in the East Indies trade and to justify the Dutch Company’s violence against the Portuguese, who claimed exclusive rights to the Eastern Hemisphere.Footnote 62 In particular, Grotius aimed to justify the Dutch seizure of the Portuguese carrack Santa Catarina in 1603:

[E]ven though people grouped as a whole and people as private individuals do not differ in the natural order, a distinction has arisen from a man-made fiction and from the consent of citizens. The law of nations, however, does not recognize such distinctions; it places public bodies and private companies in the same category.Footnote 63

Grotius argued that the moral personality of individuals and collections of individuals does not differ, including in what was, for Grotius, their ‘natural right to wage war’. Grotius concluded that ‘private trading companies were as entitled to make war as were the traditional sovereigns of Europe’.Footnote 64

Therefore, contrary to the popular myth, convenient to AI companies, the ‘law of nations’ has always been able to accommodate private actors, whose greed and search for power gave rise to many concepts of modern international law. We must recognize this relationship and impose hard legal obligations related to AI on companies under international law precisely to prevent tech companies’ greed and predatory actions, which have global consequences.

12.4.2 Increased Political Scrutiny and Novel Ambitious Laws

We must also abolish the legislative regimes that have in the past established safe harbours for AI companies, such as the EU–US Transatlantic Privacy Framework,Footnote 65 previously known as Safe Harbour and Privacy Shield. Similarly, regimes based on procedural avoidance of liability, such as the one under Section 230 of the US Communications Decency Act 1996, should be reconsidered. This provision provides that websites should not be treated as the publisher of third-party (i.e., user-submitted) content, and it is particularly useful for platforms like Facebook.

Some of the more recent AI regulatory efforts might be displaying the first seeds of substantively focused regulation. For example, moratoria have been issued on the use of facial recognition technologies across many municipalities and cities in the United States, including the state of Oregon and New York City.Footnote 66 In the EU, too, some of the latest proposals display an ambition to ban certain uses and abuses of technology. For example, the Artificial Intelligence Act provides a list of ‘unacceptable’ AI systems and prohibits their use. The Artificial Intelligence Act has been subject to criticism about its effectiveness,Footnote 67 yet its prohibitive approach can be contrasted with earlier EU regulations, such as the GDPR, which did not proclaim that certain areas should not be automated, or that some data should not be processed at all or fall into the hands of tech companies. On an international level, the OECD has recently announced a landmark international tax deal, in which 136 countries and jurisdictions representing more than 90 per cent of global GDP agreed to a minimum corporate tax rate of 15 per cent on the biggest international corporations, effective from 2023.Footnote 68 While this does not tackle tech companies’ business practices, it is aimed at a fairer redistribution of wealth, which must also be a focus of the new social contract if we wish to restrain the power of AI.

12.4.3 Breaking Up AI Companies and the Public Utilities Approach

We must also break up AI companies, many of which have grown so large that they are effectively gatekeepers in their markets. Many scholars have recently proposed ways to employ antitrust and competition law to deal with and break up big tech companies,Footnote 69 and such efforts are also visible at the political level. For example, in December 2020, the EU Commission published a proposal for two new pieces of legislation: the Digital Markets Act (DMA) and the Digital Services Act (DSA).Footnote 70 The proposal aims to ensure that platform giants, such as Google, Amazon, Apple, and Facebook, operate fairly, and to increase competition in digital markets.

We already have legal instruments for breaking up the concentration of power in the AI sector: for example, the US Sherman Act 1890 makes monopolization unlawful.Footnote 71 And we must use the tools of competition and antitrust law (but not only them!) to redistribute wealth and power. While sceptics argue that a Sherman Act case against Amazon, Facebook, or Google would not improve economic welfare in the long run,Footnote 72 we must start somewhere. For instance, as Kieron O’Hara has suggested, we could prevent anticompetitive mergers and require tech giants to divest companies they acquired to stifle competition, such as Facebook’s acquisitions of WhatsApp and Instagram.Footnote 73 We could also ring-fence giants into particular sectors; Amazon’s purchase of Whole Foods Market (a supermarket chain), for example, would likely be prevented by that strategy. We could also force tech giants to split their businesses into separate corporations.Footnote 74 Amazon, for instance, would be split into its e-commerce platform, physical stores, web services, and advertising business.

However, antitrust reforms should not obscure the more radical solutions suggested by critical scholars. For example, digital services could be conceived as public utilities: either as closely regulated private companies or as government-run organizations, administered at municipal, state, national, or regional levels.Footnote 75 While the exact proposals for the ‘public utility’ approach vary, they aim at placing big AI companies (and other big enterprises) under public control.Footnote 76 This provides a strong alternative to market-driven solutions for restoring competition in the technology sector, and has more potential to address the structural problems of exploitation, manipulation, and surveillance.Footnote 77

12.4.4 Decolonizing Technology Infrastructure

We should also pay attention to the asymmetries in economic and political power on a global scale. These include US dominance in digital technologies and AI; US influence in shaping international free trade and intellectual property regimes; the rising influence of China; and the EU’s ambitions to set global regulatory standards in many policy areas, with businesses and public bodies in the so-called Global South on the receiving end of Brussels’ demands about what ‘ethical’ AI is and how ‘data protection’ must be understood and implemented.Footnote 78

We should also incorporate Indigenous epistemologies, which provide strong conceptual alternatives to the dominant AI discourse. Decolonial ways to theorize, analyze, and critique AI and ADMS must be part of our new social contract for the age of AI,Footnote 79 because people in the so-called Global South relate very differently to major AI platforms than those who live and work where these companies are headquartered.Footnote 80 A good example in this regard is the ‘Technologies for Liberation’ project, which studies how queer, trans, two-spirit, black, Indigenous, and people of colour communities are disproportionately impacted by surveillance technologies and criminalization.Footnote 81 Legal scholars must reach beyond our comfortable Western, often Anglo-Saxon, position and bring forward the perspectives of those who have been excluded and marginalized in the development of AI and ADMS tools.

Decolonization, however, must also happen in the laws themselves. For example, the EU’s focus on regulating AI and ADMS as a consumer ‘product-in-use’ requiring individual protection is hypocritical and undermines its claims to regulate ‘ethical’ AI, for it completely ignores the exploitative practices and global implications of AI production and use. These power disparities and this exploitation must be recognized and officially acknowledged in the new laws.

Finally, we need novel spaces for thinking about, creating, and developing the new AI regulation – spaces that are not dominated by procedural fetishism. A good example of possible resistance, promoted by decolonial data scholars, is the Non-Aligned Technologies Movement (NATM) – a worldwide alliance of civil society organizations which aims to create ‘techno-social spaces beyond the profit-motivated model of Silicon Valley and the control-motivated model of the Chinese Communist Party. NATM does not presume to offer a single solution to the problem of data colonialism; instead it seeks to promote a collection of models and platforms that allow communities to articulate their own approaches to decolonization’.Footnote 82

12.5 Conclusion

The new social contract for the age of AI must incorporate all these different strategies – we need a new framework, not just quick procedural fixes. Alone, none of these strategies might achieve substantive policy change. Together, however, acting in parallel, the proposed changes will enable us to start resisting the corporate and state agenda of procedural fetishism. In the digital environment dominated by AI companies, procedural fetishism is an intentional strategy to obfuscate the implications of concentrated corporate power. AI behemoths legitimize their practices through procedural washing and performative compliance, diverting the focus onto the procedures they follow, both for commercial gain and to avoid their operations being tempered by regulation. They are also assisted by states, which enable corporate dominance via laws and legal frameworks.

Countering corporate procedural fetishism requires, first of all, returning the focus to the substantive problems in the digital environment. In other words, it requires paying attention to the substance of tech companies’ policies and practices – to their power – and not only to their procedures. This requires a new social contract for the age of AI. Rather than buying into procedural washing, as companies intend for us to do, we need new binding, legally enforceable mechanisms to hold AI companies to account. We have many options, and we need to act on all fronts. Imposing data privacy obligations directly on AI companies with an international treaty is one way. In parallel, we must also redistribute wealth and power by breaking up and taxing tech companies, increase public scrutiny by adopting prohibitive laws, and democratize and decolonize big tech by giving people the power to determine how these companies should be governed. We must recognize that AI companies exercise global dominance with significant international and environmental implications. This aspect of technology is bound up with the global economic structure, and therefore cannot be solved in isolation: it requires systemic changes to our economy. A crucial step in this direction is developing and maintaining AI platforms as public utilities, which operate for the public good rather than profit. The new social contract for the age of AI should de-commodify data relations, rethink behavioural advertising as the foundation of the Internet, and reshape social media and internet search as public utilities. With all these mutually reinforcing efforts, we must debunk the corporate and state agenda of procedural fetishism and demand basic tangible constraints as the foundation of the new social contract in the Automated State.

Footnotes

* This chapter incorporates and adapts arguments advanced in my other work on procedural fetishism, and in particular M. Zalnieriute, ‘Against Procedural Fetishism: A Call for a New Digital Constitution’ (2023) Indiana Journal of Global Legal Studies, 30(2), 227–64. I thank Angelo Golia, Gunther Teubner, Sofia Ranchordas, and Tatiana Cutts for invaluable feedback.

1 The bills include the American Innovation and Choice Online Act, the Platform Competition and Opportunity Act, the Ending Platform Monopolies Act, the Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, and the Merger Filing Fee Modernization Act, see House Lawmakers Release Anti-Monopoly Agenda for ‘A Stronger Online Economy: Opportunity, Innovation, Choice’, U.S. House Judiciary Committee (2021) <https://judiciary.house.gov/news/documentsingle.aspx?DocumentID=4591> (last visited 13 October 2021).

2 Charlie Campbell, ‘How China Is Cracking Down on Its Once Untouchable Tech Titans’ Time (2021) <https://time.com/6048539/china-tech-giants-regulations/> (last visited 13 October 2021).

3 Andrew Ross Sorkin et al, ‘Alibaba’s Big Fine Is a Warning Shot’ (12 April 2021) The New York Times <www.nytimes.com/2021/04/12/business/dealbook/alibaba-fine-antitrust.html> (last visited 23 September 2022).

4 John Davidson, ‘Big Tech Faces Tough New Laws under ACCC Plan’, Australian Financial Review (2021) <www.afr.com/technology/big-tech-faces-tough-new-laws-under-accc-plan-20210905-p58p0r> (last visited 13 October 2021).

5 Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts COM (2021) 206 final.

6 Proposal for a Regulation of the European Parliament and of the Council on European data governance (Data Governance Act) COM (2020) 767 final.

7 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (2016) OJ L 119/1.

8 ‘OECD Principles on Artificial Intelligence – Organisation for Economic Co-operation and Development’ <www.oecd.org>.

9 International Covenant on Civil and Political Rights, opened for signature 19 December 1966, 999 U.N.T.S. 171 (entered into force 23 March 1976); G.A. Res. 2200, U.N. GAOR, 21st Sess., Supp. No 16, at 52, U.N. Doc. A/6316 (1967).

10 International Covenant on Economic, Social, and Cultural Rights, opened for signature 16 December 1966, 993 U.N.T.S. 3 (entered into force 23 March 1976) [hereinafter ICESCR].

11 Monika Zalnieriute, ‘From Human Rights Aspirations to Enforceable Obligations by Non-State Actors in the Digital Age: The Case of Internet Governance and ICANN’ (2019) 21 Yale Journal of Law & Technology 278.

12 For literature making such analogies see Julie E Cohen, Between Truth and Power: The Legal Constructions of Informational Capitalism (2019); Julie E Cohen, ‘Law for the Platform Economy’ (2017) 51 UCD Law Review 133, 199; Hannah Bloch-Wehba, ‘Global Platform Governance: Private Power in the Shadow of the State’ (2019) 72 SMU Law Review 27, 29; Rory Loo, ‘Rise of the Digital Regulator’ (2017) 66 Duke Law Journal 1267.

13 Danielle Keats Citron, ‘Technological Due Process’ (2007) 85 Washington University Law Review 1249. Citron’s original work did not focus on tech platforms, but argued that administrative agencies’ use of technology should be subjected to due process; see also Danielle Keats Citron and Frank Pasquale, ‘The Scored Society: Due Process for Automated Predictions Essay’ (2014) 89 Washington Law Review 1, arguing for due process for automated credit scoring.

14 See, e.g., Margot Kaminski and Gianclaudio Malgieri, ‘Algorithmic Impact Assessments under the GDPR: Producing Multi-Layered Explanations’ International Data Privacy Law 125–26 <https://scholar.law.colorado.edu/faculty-articles/1510>; Deven R Desai and Joshua A Kroll, ‘Trust but Verify: A Guide to Algorithms and the Law’ (2017) 31 Harvard Journal of Law & Technology 1, 10 (arguing for ex ante testing of AI and ADMS technologies); Andrew D Selbst, ‘Disparate Impact in Big Data Policing’ (2017) 52 Georgia Law Review 109, 169 (arguing for Algorithmic Impact Statements); Andrew D Selbst and Solon Barocas, ‘The Intuitive Appeal of Explainable Machines’ (2018) 87 Fordham Law Review 1085 at 1100–5 (arguing for algorithmic impact assessments and recoding requirements).

15 Jens Steffek, ‘The Limits of Proceduralism: Critical Remarks on the Rise of “Throughput Legitimacy”’ (2019) 97 Public Admin 784 at 784.

16 Paul MacMahon, ‘Proceduralism, Civil Justice, and American Legal Thought’ (2013) 34 University of Pennsylvania Journal of International Law 545 at 559.

17 Jordy Rocheleau, ‘Proceduralism’ in Deen K Chatterjee (ed), Encyclopedia of Global Justice (2011) 906 <http://link.springer.com/10.1007/978-1-4020-9160-5_367> (last visited 2 June 2021).

18 Steffek, ‘The Limits of Proceduralism’ at 784.

19 Emanuela Ceva, ‘Beyond Legitimacy: Can Proceduralism Say Anything Relevant about Justice?’ (2012) 15 Critical Review of International Social and Political Philosophy 183 at 191.

20 Monika Zalnieriute, ‘Big Brother Watch and Others v. the United Kingdom’ (2022) 116 American Journal of International Law 585; Monika Zalnieriute, ‘Procedural Fetishism and Mass Surveillance under the ECHR: Big Brother Watch v. UK’ Verfassungsblog: On Matters Constitutional (2021) <https://verfassungsblog.de/big-b-v-uk/> (last visited 9 August 2021).

21 Padmashree Gehl Sampath, ‘Governing Artificial Intelligence in an Age of Inequality’ (2021) 12 Global Policy 21.

22 Aneesh Aneesh, ‘Global Labor: Algocratic Modes of Organization’ (2009) 27 Sociological Theory 347.

23 Nick Couldry and Ulises A Mejias, The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism (2019) <https://doi.org/10.1093/sf/soz172> (last visited 23 September 2022).

24 Kathleen Walch, ‘Why the Race for AI Dominance Is More Global Than You Think’ Forbes <www.forbes.com/sites/cognitiveworld/2020/02/09/why-the-race-for-ai-dominance-is-more-global-than-you-think/> (last visited 23 September 2022).

25 Kate Crawford, The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (2021).

27 Nick Couldry and Ulises Ali Mejias, ‘The Decolonial Turn in Data and Technology Research: What Is at Stake and Where Is It Heading?’ (2023) 26(4) Information, Communication & Society 786–802; Couldry and Mejias, The Costs of Connection; Jim Thatcher, David O’Sullivan, and Dillon Mahmoudi, ‘Data Colonialism through Accumulation by Dispossession: New Metaphors for Daily Data’ (2016) 34 Environment and Planning D 990.

28 Jolynna Sinanan and Tom McNamara, ‘Great AI Divides? Automated Decision-Making Technologies and Dreams of Development’ (2021) 35 Continuum 747.

29 Couldry and Mejias, ‘The Decolonial Turn in Data and Technology Research’; Michael Kwet, ‘Digital Colonialism: US Empire and the New Imperialism in the Global South’ (2019) 60 Race & Class 3; Shakir Mohamed, Marie-Therese Png, and William Isaac, ‘Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence’ (2020) 33 Philosophy & Technology 659.

30 Tom R Tyler, ‘Why People Obey the Law’ (2006), 5, 9 <www.degruyter.com/document/doi/10.1515/9781400828609/html> (last visited 23 September 2022) (summarizing the procedural justice literature suggesting that process heavily influences perception of legitimacy).

31 Victor Pickard, Democracy without Journalism?: Confronting the Misinformation Society (2019), 17.

32 Salomé Viljoen, ‘The Promise and Limits of Lawfulness: Inequality, Law, and the Techlash’ (2021) 2 Journal of Social Computing 284.

33 Monika Zalnieriute, ‘“Transparency-Washing” in the Digital Age: A Corporate Agenda of Procedural Fetishism’ (2021) 8 Critical Analysis of Law 39.

34 Christina Garsten and Monica Lindh De Montoya, ‘The Naked Corporation: Visualization, Veiling and the Ethico-politics of Organizational Transparency’ in Christina Garsten and Monica Lindh De Montoya (eds), Transparency in a New Global Order: Unveiling Organizational Visions 79–96; See also Ivan Manokha, ‘Corporate Social Responsibility: A New Signifier? An Analysis of Business Ethics and Good Business Practice’ (2004) 24 Politics 56.

35 Kent Walker, ‘An External Advisory Council to Help Advance the Responsible Development of AI’, Google (2019) <https://blog.google/technology/ai/external-advisory-council-help-advance-responsible-development-ai/> (last visited 17 June 2020).

36 Scott Shane and Daisuke Wakabayashi, ‘“The Business of War”: Google Employees Protest Work for the Pentagon’ (30 July 2018) The New York Times <www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html> (last visited 24 October 2018).

37 See Beth Stephens, ‘The Amorality of Profit: Transnational Corporations and Human Rights’ (2002) 20 Berkeley Journal International Law 45.

38 George Joseph, ‘Inside the Video Surveillance Program IBM Built for Philippine Strongman Rodrigo Duterte’, The Intercept (2019) <https://theintercept.com/2019/03/20/rodrigo-duterte-ibm-surveillance/> (last visited 17 June 2020).

40 Edwin Black, IBM and the Holocaust: The Strategic Alliance between Nazi Germany and America’s Most Powerful Corporation, Expanded Edition (2001).

41 Karissa Bell, ‘Facebook Wants “Other Companies” to Use the Oversight Board, Too’ Engadget (2021) <www.engadget.com/facebook-oversight-board-other-companies-202448589.html> (last visited 6 October 2021).

42 Ari Ezra Waldman, ‘Privacy Law’s False Promise’ (2019) 97 Washington University Law Review 773 at 778.

43 Ibid at 5.

44 Lauren B Edelman, Working Law: Courts, Corporations, and Symbolic Civil Rights (University of Chicago Press, 2016); Waldman, ‘Privacy Law’s False Promise’.

45 Waldman, ‘Privacy Law’s False Promise’.

46 Ibid at 792–94.

47 Filippo Lancieri, ‘Narrowing Data Protection’s Enforcement Gap’ (2022) 74 Maine Law Review 15.

48 Joel R Reidenberg et al, ‘Disagreeable Privacy Policies: Mismatches between Meaning and Users’ Understanding’ (2015) 30 Berkeley Technology Law Journal 1 at 41.

49 Chris Jay Hoofnagle, Federal Trade Commission Privacy Law and Policy 166 (2016).

50 Zalnieriute, ‘Big Brother Watch and Others v. the United Kingdom’; Zalnieriute, ‘Procedural Fetishism and Mass Surveillance under the ECHR’ at 185–92.

51 See, e.g., Shawn M Powers and Michael Jablonski, The Real Cyber War: The Political Economy of Internet Freedom, 1st ed (2015).

52 Herbert Schiller, Mass Communications and American Empire, 2nd ed (1992) 63–75.

53 Powers and Jablonski, The Real Cyber War at 47.

54 Herbert I Schiller, Culture, Inc: The Corporate Takeover of Public Expression (1991) 118; Schiller, Mass Communications and American Empire at 93.

55 Mark Coeckelbergh, ‘AI for Climate: Freedom, Justice, and Other Ethical and Political Challenges’ (2021) 1 AI Ethics 67 at 67–72; Payal Dhar, ‘The Carbon Impact of Artificial Intelligence’ (2020) 2 Nature Machine Intelligence 423 at 423–25; Emma Strubell, Ananya Ganesh, and Andrew McCallum, ‘Energy and Policy Considerations for Modern Deep Learning Research’ (2020) 34 Proceedings of the AAAI Conference on Artificial Intelligence 13693.

56 James Q Whitman, ‘The Two Western Cultures of Privacy: Dignity versus Liberty’ (2003) 113 Yale Law Journal 1151; See, e.g., Giovanni De Gregorio, ‘Digital Constitutionalism across the Atlantic’ (2022) 11 Global Constitutionalism 297; Oreste Pollicino, Judicial Protection of Fundamental Rights on the Internet: A Road towards Digital Constitutionalism? (2021).

57 See, e.g., Steven Bittle and Laureen Snider, ‘Examining the Ruggie Report: Can Voluntary Guidelines Tame Global Capitalism?’ (2013) 2 Critical Criminology 177; Olivier de Schutter, ‘Towards a New Treaty on Business and Human Rights’ (2016) 1 Business & Human Rights Journal 41; Frédéric Mégret, ‘Would a Treaty Be All It Is Made Up to Be?’, James G Stewart (2015) <http://jamesgstewart.com/would-a-treaty-be-all-it-is-made-up-to-be/> (last visited 10 September 2020); John G Ruggie, ‘Get Real or We’ll Get Nothing: Reflections on the First Session of the Intergovernmental Working Group on a Business and Human Rights Treaty’, Business & Human Rights Resource Centre (2020) <www.business-humanrights.org> (last visited 10 September 2020).

58 The Commission on Transnational Corporations and the United Nations Centre on Transnational Corporations (UNCTNC) were established in 1974; the UN, Draft Code on Transnational Corporations, in UNCTC, Transnational Corporations, Services and the Uruguay Round, Annex IV at 231, was presented in 1990. For the history of the controversy of the issue at the UN, see Khalil Hamdani and Lorraine Ruffing, United Nations Centre on Transnational Corporations: Corporate Conduct and the Public Interest (2015) <www.taylorfrancis.com/books/9781315723549> (last visited 18 September 2020).

59 Binding Treaty, Business & Human Rights Resource Centre <www.business-humanrights.org/en/big-issues/binding-treaty/> (last visited 25 September 2022) (providing the latest developments and progress on the UN Treaty on Business and Human Rights); U.N. Human Rights Council, ‘Open-Ended Intergovernmental Working Group on Transnational Corporations and Other Business Enterprises with Respect to Human Rights’, OHCHR <www.ohchr.org/en/hr-bodies/hrc/wg-trans-corp/igwg-on-tnc> (last visited 25 September 2022); Elaboration of an International Legally Binding Instrument on Transnational Corporations and other Business Enterprises with Respect to Human Rights, 26th Sess., U.N. Doc. A/HRC/26/L.22/Rev.1 (2014) <https://ap.ohchr.org/documents/dpage_e.aspx?si=A/HRC/RES/26/9> (last visited 25 September 2022) (resolution adopted by twenty votes in favour, thirteen abstentions, and fourteen against).

60 José-Manuel Barreto, ‘Cerberus: Rethinking Grotius and the Westphalian System’, in Martti Koskenniemi, Walter Rech, and Manuel Jiménez Fonseca (eds), International Law and Empire: Historical Explorations (2016) 149–76, arguing that ‘international law does not only regulate the relations between nation states’ but that ‘[s]ince its very inception, modern international law has regulated the dealings between states, empires and companies’; Erika R George, ‘The Enterprise of Empire: Evolving Understandings of Corporate Identity and Responsibility’ in Jena Martin and Karen E Bravo (eds), The Business and Human Rights Landscape: Moving Forward, Looking Back (2015) 19 <www.cambridge.org/core/books/business-and-human-rights-landscape/enterprise-of-empire/100EFD4FBD897AAC4B3A922E1DAB0D3A> (last visited 25 September 2022).

61 See Antony Anghie, ‘International Law in a Time of Change: Should International Law Lead or Follow?’ (Grotius Lecture, ASIL 2010) (2010) 26 American University International Law Review 1315; John T Parry, ‘What Is the Grotian Tradition in International Law?’ (2013) 35 University of Pennsylvania Journal of International Law 299 at 326–327, 337.

62 See Peter Borschberg, ‘The Seizure of the Sta. Catarina Revisited: The Portuguese Empire in Asia, VOC Politics and the Origins of the Dutch-Johor Alliance (1602–c.1616)’ (2002) 33 Journal of Southeast Asian Studies 31.

63 Hugo Grotius, Commentary on the Law of Prize and Booty (2006) 302.

64 Richard Tuck, The Rights of War and Peace: Political Thought and the International Order from Grotius to Kant (2001) 85.

65 The White House, ‘United States and European Commission Announce Trans-Atlantic Data Privacy Framework’ (2022) <www.whitehouse.gov/briefing-room/statements-releases/2022/03/25/fact-sheet-united-states-and-european-commission-announce-trans-atlantic-data-privacy-framework/> (last visited 25 September 2022).

66 Monika Zalnieriute, ‘Burning Bridges: The Automated Facial Recognition Technology and Public Space Surveillance in the Modern State’ (2021) 22 Columbia Science and Technology Law Review 314.

67 Michael Veale and Frederik Zuiderveen Borgesius, ‘Demystifying the Draft EU Artificial Intelligence Act – Analysing the Good, the Bad, and the Unclear Elements of the Proposed Approach’ (2021) 22 Computer Law Review International 97; Vera Lúcia Raposo, ‘Ex machina: Preliminary Critical Assessment of the European Draft Act on Artificial Intelligence’ (2022) 30 International Journal of Law and Information Technology 88; Lilian Edwards, Expert Opinion: Regulating AI in Europe. Four Problems and Four Solutions (2022) <www.adalovelaceinstitute.org/report/regulating-ai-in-europe/> (last visited 25 September 2022).

68 Organisation for Economic Co-operation and Development, International Community Strikes a Ground-Breaking Tax Deal for the Digital Age (2021) <www.oecd.org/tax/international-community-strikes-a-ground-breaking-tax-deal-for-the-digital-age.htm> (last visited 25 September 2022).

69 See, e.g., Manuel Wörsdörfer, ‘Big Tech and Antitrust: An Ordoliberal Analysis’ (2022) 35 Philosophy & Technology 65; Zephyr Teachout, Break ‘Em Up: Recovering Our Freedom from Big Ag, Big Tech, and Big Money (2020); Nicolas Petit, Big Tech and the Digital Economy: The Moligopoly Scenario (2020) <https://cadmus.eui.eu/handle/1814/68567> (last visited 25 September 2022); Dina Srinivasan, ‘The Antitrust Case against Facebook: A Monopolist’s Journey towards Pervasive Surveillance in Spite of Consumers’ Preference for Privacy’ (2019) 16 Berkeley Business Law Journal 39.

70 See Giorgio Monti, The Digital Markets Act – Institutional Design and Suggestions for Improvement (2021); Luis Cabral et al, The EU Digital Markets Act: A Report from a Panel of Economic Experts (2021).

71 The Sherman Antitrust Act of 1890 (26 Stat. 209, 15 U.S.C. §§ 1–7).

72 Robert W Crandall, ‘The Dubious Antitrust Argument for Breaking Up the Internet Giants’ (2019) 54 Review of Industrial Organization 627 at 645–49.

73 Kieron O’Hara, ‘Policy Question: How Can Competition against the Tech Giants Be Fostered?’ in Four Internets (2021) 117–19 <https://oxford.universitypressscholarship.com/10.1093/oso/9780197523681.001.0001/oso-9780197523681-chapter-10> (last visited 7 October 2021).

74 Teachout, Break ‘Em Up: Recovering Our Freedom from Big Ag, Big Tech, and Big Money.

75 Dan Schiller, ‘Reconstructing Public Utility Networks: A Program for Action’ (2020) 14 International Journal of Communication 12; Vincent Mosco, Becoming Digital: Toward a Post-Internet Society (2017); James Muldoon, Platform Socialism: How to Reclaim Our Digital Future from Big Tech (2022).

76 Thomas M Hanna and Michael Brennan, ‘There’s No Solution to Big Tech without Public Ownership of Tech Companies’, Jacobin (2020) <https://jacobin.com/2020/12/big-tech-public-ownership-surveillance-capitalism-platform-corporations> (last visited 25 September 2022).

77 James Muldoon, ‘Do Not Break Up Facebook – Make It a Public Utility’, Jacobin (2020) <https://jacobin.com/2020/12/facebook-big-tech-antitrust-social-network-data> (last visited 25 September 2022).

78 For more on the EU’s influence in setting regulatory standards, see Anu Bradford, The Brussels Effect: How the European Union Rules the World (2020).

79 Abeba Birhane, ‘Algorithmic Injustice: A Relational Ethics Approach’ (2021) 2 Patterns 100205; Jason Edward Lewis et al, Indigenous Protocol and Artificial Intelligence Position Paper (2020) <https://spectrum.library.concordia.ca/id/eprint/986506/> (last visited 25 September 2022); Stefania Milan and Emiliano Treré, ‘Big Data from the South(s): Beyond Data Universalism’ (2019) 20 Television & New Media 319.

80 R Grohmann and WF Araújo, ‘Beyond Mechanical Turk: The Work of Brazilians on Global AI Platforms’ in Pieter Verdegem (ed), AI for Everyone?: Critical Perspectives (2021) 247–66; Mary L Gray and Siddharth Suri, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (2019).

81 Brenda Salas Neves and Mihika Srivastava, ‘Technologies for Liberation: Toward Abolitionist Futures’, Astraea Foundation (2020) <www.astraeafoundation.org/FundAbolitionTech/> (last visited 25 September 2022). Also important here is the broader ‘design justice’ movement; see Sasha Costanza-Chock, Design Justice: Community-Led Practices to Build the Worlds We Need (2020) <https://library.oapen.org/handle/20.500.12657/43542> (last visited 25 September 2022).

82 Non Aligned Technologies Movement, <https://nonalignedtech.net/index.php?title=Main_Page> (last visited 25 September 2022).
