As its name indicates, algorithmic regulation relies on the automation of regulatory processes through algorithms. Examining the impact of algorithmic regulation on the rule of law hence first requires an understanding of how algorithms work. In this chapter, I therefore start by focusing on the technical aspects of algorithmic systems (Section 2.1), and complement this discussion with an overview of their societal impact, emphasising their societal embeddedness and the consequences thereof (Section 2.2). Next, I examine how and why public authorities rely on algorithmic systems to inform and take administrative acts, with special attention to the historical adoption of such systems, and their impact on the role of discretion (Section 2.3). Finally, I draw some conclusions for subsequent chapters (Section 2.4).
This Element endeavors to enrich and broaden Southeast Asian research by exploring the intricate interplay between social media and politics. Employing an interdisciplinary approach and grounded in extensive longitudinal research, the study uncovers nuanced political implications, highlighting social media's dual role in fostering grassroots activism and enabling autocratic practices of algorithmic politics, notably in electoral politics. It underscores social media's alignment with communicative capitalism, in which algorithmic marketing culture overshadows public discourse and perpetuates affective binary mobilization that benefits both progressive and regressive grassroots activism. Social media can facilitate oppositional forces but is susceptible to authoritarian capture. The rise of algorithmic politics also exacerbates polarization through algorithmic enclaves and escalates disinformation, furthering trends of autocratization. Beyond Southeast Asia, the Element provides analytical and conceptual frameworks for comprehending the mutual algorithmic/political dynamics amid the contestation between progressive forces and the autocratic shaping of technological platforms.
This final chapter distils the material in this book into a set of concluding statements. It summarizes the lessons learned, which can be viewed as guidelines for research practice.
In 1995 Amazon started as a small online bookseller. It is now the largest bookseller in the US and one of the largest companies in the world, due, in part, to its implementation of algorithms and access to user data. This Element explains how these algorithms work, and specifically how they recommend books and make them visible to readers. It argues that framing algorithms as felicitous or infelicitous allows us to reconsider the imagined authority of an algorithm's recommendation as a culturally situated performance. It also explores the material effects of bookselling algorithms on the forms of labor of the bookstore. The Element ends by considering future directions for research, arguing that the bookselling industry would benefit from an investment in algorithmic literacy.
We study the problem of fitting a piecewise affine (PWA) function to input–output data. Our algorithm divides the input domain into finitely many regions whose shapes are specified by a user-provided template and such that the input–output data in each region are fit by an affine function within a user-provided error tolerance. We first prove that this problem is NP-hard. Then, we present a top-down algorithmic approach for solving the problem. The algorithm considers subsets of the data points in a systematic manner, trying to fit an affine function for each subset using linear regression. If regression fails on a subset, the algorithm extracts a minimal set of points from the subset (an unsatisfiable core) that is responsible for the failure. The identified core is then used to split the current subset into smaller ones. By combining this top-down scheme with a set-covering algorithm, we derive an overall approach that provides optimal PWA models for a given error tolerance, where optimality refers to minimizing the number of pieces of the PWA model. We demonstrate our approach on three numerical examples that include PWA approximations of a widely used nonlinear insulin–glucose regulation model and a double inverted pendulum with soft contacts.
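For readers who want a concrete picture, the following is a minimal sketch of the top-down idea described in this abstract, not the authors' implementation: it fits affine pieces by least squares and, when the error tolerance is violated, splits the subset and recurses. The template regions, true unsatisfiable-core extraction, and the set-covering step are omitted (a simple median split stands in for the core-based split), and all names and data are illustrative.

```python
# Minimal sketch of the top-down fitting idea, NOT the paper's algorithm:
# fit an affine function to a subset by least squares; if the tolerance is
# violated, split the subset and recurse on the parts.
import numpy as np

def affine_fit(X, y):
    """Least-squares affine fit y ~ A @ x + b; returns (coeffs, max absolute residual)."""
    A = np.hstack([X, np.ones((X.shape[0], 1))])      # append bias column
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    residuals = np.abs(A @ coeffs - y)
    return coeffs, residuals.max()

def fit_pwa(X, y, tol):
    """Recursively partition the data until every piece admits an affine fit within tol."""
    coeffs, err = affine_fit(X, y)
    if err <= tol or X.shape[0] <= X.shape[1] + 1:    # piece is good enough (or too small to split)
        return [(X, coeffs)]
    # crude stand-in for core extraction: split at the median of the first input coordinate
    order = np.argsort(X[:, 0])
    mid = X.shape[0] // 2
    left, right = order[:mid], order[mid:]
    return fit_pwa(X[left], y[left], tol) + fit_pwa(X[right], y[right], tol)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
    y = np.abs(x[:, 0]) + 0.01 * rng.standard_normal(200)   # |x| is PWA with two pieces
    pieces = fit_pwa(x, y, tol=0.05)
    print(f"number of pieces: {len(pieces)}")
```

Unlike the approach in the abstract, this sketch offers no optimality guarantee on the number of pieces; it only illustrates the fit-then-split structure.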
While governments have long discussed the promise of delegating important decisions to machines, actual use often lags. Consequently, we know little about the variation in the deployment of such delegations in large numbers of similar governmental organizations. Using data from crime laboratories in the United States, we examine the uneven distribution over time of a specific, well-known expert system for ballistics imaging across a large sample of local and regional public agencies; an expert system is an inference engine joined with a knowledge base. Our statistical model is informed by the push-pull-capability theory of innovation in the public sector. We test hypotheses about the probability of deployment and provide evidence that the use of this expert system varies with the pull of agency task environments and the enabling support of organizational resources—and that the impacts of those factors have changed over time. Within this context, we also present evidence that general knowledge of the use of expert systems has supported the use of this specific expert system in many agencies. This empirical case and this theory of innovation provide broad evidence about the historical utilization of expert systems as algorithms in public sector applications.
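As a purely illustrative aside on the definition above (an expert system as an inference engine joined with a knowledge base), the sketch below shows a minimal forward-chaining engine over a toy rule base. It is not the ballistics-imaging system discussed in the text; all facts and rules are invented.

```python
# Toy expert system: a knowledge base of if-then rules plus a forward-chaining
# inference engine. Purely illustrative; facts and rules are invented.
from typing import List, Set, Tuple

KnowledgeBase = List[Tuple[Set[str], str]]   # each rule: (set of premises, conclusion)

def forward_chain(rules: KnowledgeBase, facts: Set[str]) -> Set[str]:
    """Apply rules repeatedly until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Hypothetical rule base: toolmark features -> candidate match -> reported match
rules: KnowledgeBase = [
    ({"striation_pattern_similar", "caliber_matches"}, "candidate_match"),
    ({"candidate_match", "examiner_confirms"}, "reported_match"),
]
print(forward_chain(rules, {"striation_pattern_similar", "caliber_matches"}))
```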
Within Holocaust studies, there has been an increasingly uncritical acceptance that by engaging with social media, Holocaust memory has shifted from the ‘era of the witness’ to the ‘era of the user’ (Hogervorst 2020). This paper starts by problematising this proposition. This claim to a paradigmatic shift implies that (1) the user somehow replaces the witness as an authority of memory, which neglects the wealth of digital recordings of witnesses now circulating in digital spaces, and (2) agency online is solely human-centric, a position that ignores the complex negotiations between corporations, individuals, and computational logics that shape our digital experiences. This article proposes instead that we take a posthumanist approach to understanding Holocaust memory on, and with, social media. Adapting Barad's (2007) work on entanglement to memory studies, we analyse two case studies on TikTok, the #WeRemember campaign and the docuseries How To: Never Forget, to demonstrate: (1) the usefulness of reading Holocaust memory on social media through the lens of entanglement, which offers a methodology that accounts for the complex network of human and non-human actants involved in producing this phenomenon and simultaneously shaped by it; and (2) that professional memory institutions and organisations are increasingly acknowledging the use of social media for the sake of Holocaust memory. Nevertheless, we observe that in practice the significance of technical actancy is still undervalued in this context.
Network science is a broadly interdisciplinary field, pulling from computer science, mathematics, statistics, and more. The data scientist working with networks thus needs a broad base of knowledge, as network data calls for—and is analyzed with—many computational and mathematical tools. One needs a good working knowledge of programming, including data structures and algorithms, to analyze networks effectively. In addition to graph theory, probability theory is the foundation for any statistical modeling and data analysis. Linear algebra provides another foundation for network analysis and modeling because matrices are often the most natural way to represent graphs. Although this book assumes that readers are familiar with the basics of these topics, here we review the computational and mathematical concepts and notation that will be used throughout the book. You can use this chapter as a starting point for catching up on the basics, or as a reference while delving into the book.
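As a small illustration of the point that matrices are often the most natural way to represent graphs, the snippet below stores a toy undirected graph as an adjacency matrix and reads off degrees and walk counts with linear algebra. The example graph is invented for illustration.

```python
# A graph stored as an adjacency matrix, so that linear-algebra operations
# answer basic graph questions. Toy example only.
import numpy as np

# Undirected 4-cycle: edges (0,1), (1,2), (2,3), (3,0)
A = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
])

degrees = A.sum(axis=1)                    # node degrees from row sums
walks2 = np.linalg.matrix_power(A, 2)      # entry (i, j) counts walks of length 2 from i to j
print("degrees:", degrees)
print("walks of length 2:\n", walks2)
```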
As governments increasingly adopt algorithms and artificial intelligence (AAI), we still know comparatively little about citizens’ support for algorithmic government. In this paper, we analyze how many, and what kinds of, reasons for government use of AAI citizens support. We use a sample of 17,000 respondents from 16 OECD countries and find that opinions on algorithmic government are divided. A narrow majority of people (55.6%) support a majority of reasons for using algorithmic government, and this is relatively consistent across countries. Results from multilevel models suggest that most of the cross-country variation is explained by individual-level characteristics, including age, education, gender, and income. Older and more educated respondents are more accepting of algorithmic government, while female and low-income respondents are less supportive. Finally, we classify the reasons for using algorithmic government into two types, “fairness” and “efficiency,” and find that support for them varies based on individuals’ political attitudes.
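As a hedged sketch of the kind of multilevel model referred to here (individual-level predictors with random country intercepts), the snippet below fits a random-intercept model on simulated data. The variable names, coding, and simulated relationships are assumptions for illustration only, not the survey's actual data, model, or results.

```python
# Illustrative multilevel (random-intercept) model on simulated survey-like data.
# Not the authors' analysis; all variables and effects are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n, n_countries = 2000, 16
df = pd.DataFrame({
    "country": rng.integers(0, n_countries, n),
    "age": rng.integers(18, 80, n),
    "education": rng.integers(0, 4, n),     # hypothetical ordinal schooling level
    "female": rng.integers(0, 2, n),
    "income": rng.normal(0, 1, n),
})
country_effect = rng.normal(0, 0.2, n_countries)[df["country"]]
df["support"] = (0.01 * df["age"] + 0.2 * df["education"] - 0.15 * df["female"]
                 + 0.1 * df["income"] + country_effect + rng.normal(0, 1, n))

# Linear mixed model: individual-level predictors, random intercept per country
model = smf.mixedlm("support ~ age + education + female + income",
                    data=df, groups=df["country"])
print(model.fit().summary())
```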
Services offered by genealogy companies are increasingly underpinned by computational remediation and algorithmic power. Users are encouraged to employ a variety of mobile web and app plug-ins to create progressively more sophisticated forms of synthetic media featuring their (often deceased) ancestors. As the promotion of deepfake and voice-synthesizing technologies intensifies within genealogical contexts – aggrandised as mechanisms for ‘bringing people back to life’ – we argue it is crucial that we critically examine these processes and the socio-technical infrastructures that underpin them, as well as their mnemonic impacts. In this article, we present a study of two AI-enabled services released by the genealogy company MyHeritage: Deep Nostalgia (launched 2020), and DeepStory (2022). We carry out a close critical reading of these services and the outputs they produce, which we understand as examples of ‘remediated memory’ (Kidd and Nieto McAvoy 2023) shaped by corporate interests. We examine the distribution of agency, in which these platforms' promotion of unique and personalised experiences comes into tension with the propensity of algorithms to homogenise. The analysis intersects with nascent ethical debates about the exploitative and extractive qualities of machine learning. Our research unpacks the social and (techno-)material implications of these technologies, demonstrating an enduring individual and collective need to connect with our past(s), and to test and extend our memories and recollections through increasingly intense and proximate new media formats.
Chapter 3 expands on the diabolical aspects of the contemporary political soundscape and develops initial deliberative responses to its key problematic aspects. These aspects include an overload of expression that overwhelms the reflective capacities of listeners; a lack of argumentative complexity in political life; misinformation and lies; low journalistic standards in “soft news”; cultural cognition, which means that an individual’s commitment to a group determines what gets believed and denied; algorithms that condition what people get to hear (which turn out to fall short of creating filter bubbles in which they hear only from the like-minded); incivility; and extremist media. The responses feature reenergizing the public sphere through means such as the cultivation of spaces for reflection both online and offline, online platform regulation and design, restricting online anonymity, critical journalism, media literacy education, designed forums, social movement practices, and everyday conversations in diverse personal networks. Formal institutions (such as legislatures) and political leaders also matter.
Public decision-makers incorporate algorithmic decision aids, often developed by private businesses, into the policy process, in part, as a method for justifying difficult decisions. Ethicists have worried that over-trust in algorithmic advice, and concerns about punishment for departing from an algorithm’s recommendation, will result in over-reliance and harm democratic accountability. We test these concerns in a set of two pre-registered survey experiments in the judicial context conducted on three representative U.S. samples. The results show no support for the hypothesized blame dynamics, regardless of whether the judge agrees or disagrees with the algorithm. Algorithms, moreover, do not have a significant impact relative to other sources of advice. Respondents who are generally more trusting of elites assign greater blame to the decision-maker when they disagree with the algorithm, and they assign more blame when they think the decision-maker is abdicating their responsibility by agreeing with an algorithm.
Objections to digital self-normativity primarily concern whether a moral dimension is embedded in the normative function of algorithms, and the increase in predictive power connected to the automatic implementation of norms. These matters concern secondary-level rules of implementation and practice but are often thought to reflect the moral dimension of digital primary norms. There appears to be no comparable continuum between self-made private rules and the international or domestic legal instruments governing digital human rights. I term this absence the idealism abyss; that is, the idealistic nature inherent in human rights articulated by positive legal instruments is not carried uninterrupted into the self-normativity of digital agents. Once the self-normativity of digital private enterprises becomes justified, the idealism abyss leads to the necessity of self-constitutionality. In this case, primary and secondary self-regulation form one logical structure. Rejecting the idealism abyss instead yields a picture in which self-made secondary norms rely on primary (constitutional) level norms originating from the non-digital realm, though their content may change in the course of transposition.
Chapter 7 shows how RIO can facilitate algorithmic case selection. We outline how algorithms can be used to select cases for in-depth analysis and provide two empirical analyses to illustrate how RIO facilitates a deeper understanding of how cases relate to one another within the model space, and how they align with the theoretical motivations for different case selection strategies.
The gradual digitization of EU migration policies is turning external borders into AI-driven filters that limit access to fundamental rights for people from third countries according to risk indicators. An unshakeable confidence in the reliability of technological devices and their ability to predict the future behaviour of incoming foreigners is leading towards the datafication of EU external frontiers. What happens if the supposedly infallible algorithms are wrong? The article aims to understand the consequences of algorithmic errors on the lives of migrants, refugees and asylum seekers arriving in the European Union. This contribution investigates the socio-political implications of deploying data-driven solutions at the borders in an attempt to problematize the techno-solutionist approach of EU migratory policies and its fundamental rights impact on affected individuals.
We outline a theory of algorithmic attention rents in digital aggregator platforms. We explore how, as platforms grow, they become increasingly capable of extracting rents from a variety of actors in their ecosystems—users, suppliers, and advertisers—through their algorithmic control over user attention. We focus our analysis on advertising business models, in which attention harvested from users is monetized by reselling it to suppliers or other advertisers, though we believe the theory is relevant to other online business models as well. We argue that regulations should mandate the disclosure of the operating metrics that platforms use to allocate user attention and shape the “free” side of their marketplace, as well as details on how that attention is monetized.
This chapter looks closely at the influence of online news, especially social media, echo chambers, fake news, populism, political polarisation and foreign propaganda. There is a lack of information, but what there is does not fit well with current worries and concerns about the political content and effects of the new media, and it suggests a different set of conclusions.
Providing a graduate-level introduction to discrete probability and its applications, this book develops a toolkit of essential techniques for analysing stochastic processes on graphs, other random discrete structures, and algorithms. Topics covered include the first and second moment methods, concentration inequalities, coupling and stochastic domination, martingales and potential theory, spectral methods, and branching processes. Each chapter expands on a fundamental technique, outlining common uses and showing them in action on simple examples and more substantial classical results. The focus is predominantly on non-asymptotic methods and results. All chapters provide a detailed background review section, plus exercises and signposts to the wider literature. Readers are assumed to have undergraduate-level linear algebra and basic real analysis, while prior exposure to graduate-level probability is recommended. This much-needed broad overview of discrete probability could serve as a textbook or as a reference for researchers in mathematics, statistics, data science, computer science and engineering.
Let G be a complex classical group, and let V be its defining representation (possibly plus a copy of the dual). A foundational problem in classical invariant theory is to write down generators and relations for the ring of G-invariant polynomial functions on the space $\mathcal P^m(V)$ of degree-m homogeneous polynomial functions on V. In this paper, we replace $\mathcal P^m(V)$ with the full polynomial algebra $\mathcal P(V)$. As a result, the invariant ring is no longer finitely generated. Hence, instead of seeking generators, we aim to write down linear bases for bigraded components. Indeed, when G is of sufficiently high rank, we realize these bases as sets of graphs with a prescribed number of vertices and edges. When the rank of G is small, there arise complicated linear dependencies among the graphs, but we remedy this setback via representation theory: in particular, we determine the dimension of an arbitrary component in terms of branching multiplicities from the general linear group to the symmetric group. We thereby obtain an expression for the bigraded Hilbert series of the ring of invariants on $\mathcal P(V)$. We conclude with examples using our graphical notation, several of which recover classical results.