
Why do citizens support algorithmic government?

Published online by Cambridge University Press:  20 May 2024

Dario Sidhu
Affiliation:
Department of Political Science, UCLA, Los Angeles, CA, USA
Beatrice Magistro*
Affiliation:
Division of the Humanities and Social Sciences, California Institute of Technology, Pasadena, CA, USA
Benjamin Allen Stevens
Affiliation:
Forum Research, Toronto, ON, Canada
Peter John Loewen
Affiliation:
Munk School of Global Affairs and Public Policy, University of Toronto, Toronto, ON, Canada
*
Corresponding author: Beatrice Magistro; Email: [email protected]

Abstract

As governments increasingly adopt algorithms and artificial intelligence (AAI), we still know comparatively little about citizens’ support for algorithmic government. In this paper, we analyze how many and what kind of reasons for government use of AAI citizens support. We use a sample of 17,000 respondents from 16 OECD countries and find that opinions on algorithmic government are divided. A narrow majority of people (55.6%) support a majority of reasons for using algorithmic government, and this is relatively consistent across countries. Results from multilevel models suggest that most of the cross-country variation is explained by individual-level characteristics, including age, education, gender, and income. Older and more educated respondents are more accepting of algorithmic government, while female and low-income respondents are less supportive. Finally, we classify the reasons for using algorithmic government into two types, “fairness” and “efficiency,” and find that support for them varies based on individuals’ political attitudes.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press

Introduction

Around the world, governments are looking for ways to better deliver services. This has multiple motivations, but chief among them are making government more efficient and less costly, increasing government effectiveness, and reducing bias in service delivery. To do so, governments are increasingly relying on digital tools. Over the past two decades, “e-government” has experienced significant growth, with governments offering an increasingly broad set of services through digital channels. While this has expanded into a wide variety of use cases, the most common applications for e-government remain basic services such as registering new businesses, paying utilities and fees, and applying for government documents (United Nations 2020). As a result, this trend towards e-government and digital service provision has for the most part not significantly changed how governments make decisions. While this has been somewhat controversial at times, debate has focused primarily on issues like the accessibility of digital services to diverse groups in society (Leist and Smith 2014), as well as cybersecurity around high-stakes issues such as online voting (Lee 2020).

Increasingly, however, more sophisticated digital tools, in the form of algorithmic and automated decision-making and other applications of artificial intelligence, are being deployed by governments. We refer to these practices as algorithmic government. One comprehensive assessment found that 45% of federal departments, agencies, and sub-agencies in the United States have already experimented with artificial intelligence (AI) and machine learning tools (Engstrom et al. 2020). Governments in many other countries are similarly increasingly using algorithms as part of their toolkit.

Given the accountability challenges inherent in algorithmic government, as well as the potential for adverse outcomes, a major priority is to understand the conditions under which citizens will support the wide-scale deployment of algorithms in the final decisions – or even in a major part of the decisions – that government makes. At the core of this is an assumption that citizen consent will be – or at least, should be – necessary for governments to move forward with the widespread use of algorithms in decision-making across various domains. As policy entrepreneurs seek to advance the use of algorithms, then, they will do well to understand both citizens’ apprehensions and their reasons for support. While a growing literature has found that several variables related to citizen perceptions of algorithmic decision-making – including fairness, accountability, transparency, explainability, and trust in AI – affect the overall acceptance of different algorithmic decision-making systems (Grimmelikhuijsen 2022; Lünich and Kieslich 2022; Shin 2020), we provide some of the first cross-national evidence on the reasons why citizens support or oppose the use of algorithms in government, what socio-demographic factors explain individual variation in support, and how such support varies across societies.

Algorithmic government has the potential to help address some critical governance challenges. It can help to keep up with the increasing complexity of society and the economy, mitigate individual behavioral biases in the decision-making process, and incorporate learning directly into government. Consider three examples. First, governments making small business loan allocations can theoretically improve rates of return by letting machine learning algorithms, drawing on more than credit scores and human-evaluated business plans, select loan recipients. They can also potentially address racial or other bias in loan allocations by leaving final decisions to a (monitored) algorithm. Second, doctors can make decisions about optimal treatment or about procedure queuing and allocation through AI, rather than through current techniques. Finally, tax authorities can rapidly increase tax assessments and auditing through both machine learning and AI. There is, in short, a lot of upside to governments using algorithms and AI.

However, this shift from pure digital service provision to government by algorithm poses a different, and arguably more serious, set of controversies than traditional forms of e-government. New forms of AI and algorithmic government differ from previous digital innovations in crucial ways (Engstrom and Ho 2020). In particular, algorithmic decision-making is far more difficult to explain, since the relation between inputs and outputs of a given algorithm is often non-intuitive and in many ways not directly explicable. This attribute of algorithms poses significant challenges for accountability. Unlike other digital tools used by governments, algorithms are often used to make decisions rather than just to provide services, including decisions on how to allocate (scarce) resources. In so doing, algorithmic government represents a move of artificial intelligence “to the center of the redistributive and coercive power of the state” (Engstrom and Ho 2020).

While well-designed algorithms can help address biases in human decision-making, algorithms can themselves lead to biased outcomes. Well-documented cases in criminal justice and health care, among other sectors, have demonstrated the potential for algorithmic decision-making to lead to racially inequitable outcomes (Angwin et al. 2016; Obermeyer et al. 2019). A set of very recent controversies around the world have further demonstrated the potential for adverse outcomes from algorithmic government. In the United Kingdom, owing to disruption to examinations caused by the COVID-19 pandemic, school grades in 2020 were partly assigned by an algorithm. This algorithm included past school performance as one input into its decision-making, leading to charges of bias (Katwala 2020). In Austria, the state employment agency planned to roll out a sorting algorithm assigning different tranches of job-seekers different likelihoods of finding work. In 2020, after finding that this algorithm gave lower scores to women and people with disabilities, the Austrian data protection authority declared the system illegal (Szigetvari 2020). In extreme cases, algorithmic government could represent a new tool of autocratic control by authoritarian governments. The development of China’s ‘Social Credit’ system, as well as its response to the COVID-19 pandemic, have stoked particular fears in this regard (Mozur et al. 2020). Partly as a result of the potential for bias and related concerns, there has recently been something of a backlash against algorithmic government in some countries (Simonite 2020).

Using data from surveys carried out in 16 OECD countries, we provide some of the first representative evidence on the reasons why citizens support or oppose the use of algorithms in government, and how this varies across individuals and countries. Our study has several notable findings. In general, views on algorithmic government are divided. A narrow majority of survey respondents (55.6%) support a majority of the eight reasons for algorithmic government presented to them. On average, respondents accept 4.44 reasons for the use of algorithms. Support for algorithmic government varies modestly between countries. The proportion of respondents who agree with a majority of the reasons ranges from a low of 44.3% in France to a high of 67.2% in Italy, with only four countries falling below 50%. However, results from multilevel models with country random effects suggest that most of the cross-country variation is explained by individual-level characteristics, including age, education, gender, and income: older, more educated, male, and higher-income respondents are more accepting of algorithmic government.

The various reasons for algorithmic government differ in the support they receive from the public. We divide these reasons into two types: “efficiency” reasons that focus on making government more efficient (e.g., “to make decisions which will be a better use of government money”) and “fairness” reasons focused on increasing equity (e.g., “to make sure decisions are not influenced by officials’ biases”). Support for these types of reasons varies according to individuals’ political attitudes. Individuals self-identifying with right-wing views tend to support efficiency reasons for algorithmic government, while individuals holding left-wing views support using algorithms for fairness reasons. In addition, individuals displaying populist attitudes are less supportive of fairness reasons.

These results underscore the need for a more sophisticated understanding of public support for algorithmic government. In particular, our findings suggest that support for algorithmic government is not monolithic, and different reasons for its use find different levels of popular support. The lack of systematic differences in support across countries when accounting for individual-level characteristics suggests that some a priori plausible drivers of support for algorithms, such as past experience with digital government, may matter very little. Finally, our results on partisan differences in support for algorithmic government open up a rich field for further research.

Public attitudes towards algorithms and artificial intelligence

The question of public attitudes towards algorithmic government is gaining more attention among public policy scholars due to the increasingly common deployment of algorithms in government decision-making. The available evidence suggests mixed levels of support overall for algorithmic government and artificial intelligence more broadly.

In a nationally representative survey of Americans carried out by the Center for the Governance of AI, 41% of respondents reported somewhat or strongly supporting the development of AI (Zhang and Dafoe 2019). In the same survey, a minority (22%) somewhat or strongly opposed developing AI. More directly, a large-scale 2019 survey in several European countries asked respondents to choose between two statements: “I prefer that algorithms judge me instead of humans. They make more objective decisions that are the same for everyone.” or “Algorithms might be objective, but I feel uneasy if computers make decisions about me. I prefer humans making those decisions.” Across the surveyed countries, 64% of respondents decided in favor of the second statement, whereas the first statement was selected by only 16% of respondents (Grzymek and Puntschuh 2019). In another survey in seven European countries, one in four respondents responded affirmatively to a question asking whether they would let an artificial intelligence “make important decisions about the running of the country” (IE Center for the Governance of Change 2019).

In the existing literature on attitudes towards algorithmic decision-making, there is some tension between the ideas of algorithm aversion and algorithm appreciation. Algorithm aversion refers to findings that people often discount algorithmic advice and distrust algorithmic decision-making, even when algorithms outperform their own judgment or that of other humans (Acikgoz et al. 2020; Burton et al. 2019; Dietvorst and Bharti 2020). The concept of algorithm appreciation is supported by experiments with contrary findings, which suggest that people are more likely to adhere to advice when they think it comes from an algorithm rather than a person, in areas ranging from predicting geopolitical events and music popularity to romantic matches (Araujo et al. 2020; Logg et al. 2019; Schlicker et al. 2021). Based on research done to date, a tentative synthesis might be that people do not necessarily distrust algorithms more than humans, but discount their advice much more heavily after seeing them err (Dietvorst et al. 2015; Prahl and Van Swol 2017).

There remain several important questions about support for algorithms on which we know relatively little. For one, who supports algorithmic government, and what individual-level characteristics predict support for algorithms? How does support for algorithmic government vary across societies? Finally, how many and what kind of reasons for government use of AAI do citizens support? Existing research provides some suggestive, but no definitive, answers to these questions.

Correlates of support for algorithms

There is no systematic evidence on the individual-level correlates of support for algorithmic government, though the available research suggests that gender, age, and education may be significant determinants of support for algorithms. Representative survey research from the United States on attitudes towards developing AI finds majority support for AI among male respondents, college-educated individuals, and high-income households, as well as those with an educational background in computer science and engineering (Zhang and Dafoe 2019). Additional survey research on attitudes toward automated decision-making (ADM) in the Netherlands provides some further insight into personal attributes that may predict support for algorithms (Araujo et al. 2020). This research finds that older individuals find ADM less useful and more risky. Higher levels of belief in economic equality are also associated with higher perceptions of the usefulness and fairness of ADM. Research on the Social Credit System in China, which prominently features elements of algorithmic government, suggests high levels of support among affluent, urban, and older individuals (Kostka 2019).

Albarrán Lozano et al. (2021) analyze the perception of artificial intelligence by individuals in Spain and the factors associated with it. They detect a significant gender gap and find that people hold more negative attitudes if they are not interested in scientific discoveries and technological developments and if they do not find AI and robots useful at work.

There is some emerging evidence that support for the use of algorithms may vary by ethnic background. A Pew survey found that while only 25% of white respondents thought that the use of algorithms to create personal finance scores would be fair, 45% of African American respondents thought it would be fair. Conversely, while only 49% of whites thought using algorithms to create criminal risk scores would be unfair, 61% of African Americans thought so (Smith 2018). One interpretation of these findings is that those most at risk of algorithmic bias (i.e., African Americans in the criminal justice system) may be most skeptical of their use. On the other hand, some evidence also suggests that people may recognize the potential for algorithms to mitigate racial bias. In one survey experiment, African American subjects were significantly more supportive than white Americans of automated decision-making in the case of red light cameras, specifically when primed about racial representation in the local police force (Miller and Keiser 2020). These results could suggest that some people may prefer algorithmic government if the alternative is discretionary, and potentially biased, action by humans.

Existing research has not investigated other individual-level characteristics that may matter in determining support for algorithms. In particular, to our knowledge, no existing study has examined whether and how support for algorithms varies by political affiliation, a plausibly significant factor.

Some research suggests that support for algorithmic government, and the use of algorithms more broadly, depends on the specific domain in which algorithms are used. One survey on public attitudes toward computer algorithms in the United States found that while a majority of respondents do not support algorithmic decision-making in general, levels of support vary by issue area (Smith 2018). Respondents were more likely to find algorithms acceptable in criminal risk assessments for people up for parole and for the purpose of resume screening, and less likely to support algorithms used to perform video analysis of job interviews or to create personal finance scores.

Survey evidence from European countries also suggests variation in support by domain. While a majority of respondents found it acceptable for a computer to make decisions on its own when it comes to spell-checking, only 6% of respondents felt the same way about pre-selecting job candidates (Grzymek and Puntschuh 2019). More broadly, the same survey also found that while 27% of respondents associated the term algorithms with “efficient decisions” and 25% with “accurate decisions,” only 11% of respondents associated algorithms with fair decisions.

While evidence suggests that support may differ systematically between different use cases for algorithms, it is unclear why this is the case. In particular, the survey evidence reviewed above hints at the notion that fairness concerns, contrasted with accuracy or efficiency concerns, may matter to the public. The present study tackles this issue directly.

Variation in public opinion on algorithms between countries

Evidence for differences in support for algorithms across countries is almost non-existent. One of the few sources of information on this topic again comes from a survey carried out across European countries, with representative data available for Germany, France, Italy, Poland, Spain, and the United Kingdom (Grzymek and Puntschuh 2019). This research found that Polish respondents were the most accepting of algorithms: Poland was the only surveyed country in which an absolute majority of the population (56%) associated more benefits than problems with the use of algorithms. In both Spain and the United Kingdom, close to half of respondents also saw significant benefits from using algorithms. Of all populations surveyed, French respondents were the most skeptical of algorithms. While valuable descriptively, this evidence does not clearly demonstrate that country-level factors, such as culture, political arrangements, and past histories with digital governance, determine public support for algorithmic government. It is possible that these cross-national differences mask differences found at the individual level.

What reasons do people support the use of algorithms for?

The existing literature highlights a number of factors that influence people’s attitudes toward the use of algorithms. Different studies find that several variables related to citizen perceptions of algorithmic decision-making – including fairness, accountability, transparency, explainability, and trust in AI – affect the overall acceptance of algorithmic decision-making systems (Grimmelikhuijsen 2022; Lünich and Kieslich 2022; Shin 2020).

For one, the explainability of algorithms is often raised as an important concern for algorithmic accountability: the opacity of algorithmic decision-making and the often non-intuitive relation between inputs and outputs make algorithmic decisions difficult to scrutinize (Engstrom and Ho 2020). Some evidence suggests that measures to increase explainability can increase the acceptability of algorithms. One experiment found that while people distrust algorithmic recommendations in general, explaining how recommender algorithms make their decisions can alleviate this distrust (Yeomans et al. 2019).

The level of human oversight is another factor that is both often highlighted as an important component of accountability and appears to influence perceptions of algorithmic government. One experiment on the impact of algorithmic government on perceptions of governing legitimacy in the EU finds that when algorithmic decision-making systems are the sole decision-maker, rather than making decisions jointly with humans, respondents tend to perceive them as illegitimate (Starke and Lünich 2020).

Along similar lines, the existing literature highlights transparency as a key ingredient for algorithmic accountability (De Fine Licht and De Fine Licht 2020; Schmidt et al. 2020). At the same time, direct evidence for an impact of transparency on algorithm approval is limited.

Some evidence also suggests that the type and quality of information used by algorithms influences whether or not people approve of them. In one experiment, researchers found that people were particularly skeptical of algorithms when they incorporated information that could serve as a proxy for other information, such as gender or ethnicity (BIT 2020). The same study also found that people exhibited greater trust in algorithms when the algorithm was known to be more accurate, suggesting that the accuracy of algorithms also matters to people.

Specific task characteristics also appear to influence whether or not individuals approve of algorithms performing them. Some evidence suggests that whether a task is considered “mechanical” versus “human” may matter for how people think about algorithms performing it (Lee 2018). Similarly, some evidence suggests that algorithms are trusted more for tasks that appear more objective (Castelo et al. 2019).

Finally, research has also investigated whether the personalization of algorithms matters for approval of their use. In one study on medical artificial intelligence, resistance to the use of AI was stronger among consumers who perceived themselves to be more unique, a tendency that was mitigated when the care provided by AI was framed as personalized or supported by humans (Longoni et al. 2019).

In sum, research on public attitudes towards algorithmic government is growing, but there remain many open questions about the determinants of support for algorithms, the reasons for which people support their use, and the interaction between these two domains. The present study is an attempt to help fill these gaps.

Research questions, data, and empirical strategy

In this study we answer a series of key questions: how many and what kind of reasons for government use of AAI do citizens support? What individual-level socio-demographic characteristics predict varying levels of support? And finally, how does such support vary between countries? To answer these questions, we combine two surveys that explored respondents’ views on algorithms, automation, and AI and their related policy preferences: a pilot survey conducted among 1,995 Canadian citizens in May and June 2019, and a large comparative survey of 15,414 Europeans from 15 countries in March and April 2020, with between 775 and 983 responses per country.[1] After using listwise deletion to remove subjects with missing information, we end up with 1,876 complete responses in Canada and 15,035 complete responses in Europe, for a total of 16,911 respondents. The Canadian survey was fielded on the Qualtrics platform, using an online survey sample provided by Qualtrics, drawn from multiple panels with quotas for age, gender, and region, providing a representative sample of the Canadian population. The European survey was also fielded on the Qualtrics platform, using a sample provided by Dynata, with quotas for age, gender, education, and region in each country to make it representative. Our basic instrument is a presentation of eight reasons for governments to use algorithms or AI to make decisions. For each, we ask respondents to indicate whether the reason is acceptable or unacceptable. The reasons were preceded by an explanation of algorithms and artificial intelligence, with some examples. This description was as follows, followed by the reasons (see Table 1 for descriptive statistics):[2]

Table 1. Descriptive statistics of different reasons for governments to use algorithms and artificial intelligence

Governments are increasingly looking to algorithms and artificial intelligence to improve the work that they do. By algorithms, we mean a step-by-step procedure for solving a problem or answering a question, which is undertaken by a computer, rather than a human decision maker. Algorithms and artificial intelligence describe a lot of different technologies. For now, however, we’d like you to think about the reasons why government might use these technologies.

For example, a government might use a series of algorithms to determine whether a person should have their tax filing audited. Or, an official might use an algorithm to decide whether a small business should receive a government loan, who should be prioritized in a hospital waiting room, or who should receive extra financial aid for college or university. In each case, the algorithm would take the place of a human decision maker. Artificial intelligence might be used to help learn from and improve those decisions.

Below are eight different reasons why governments might use algorithmic decision making and artificial intelligence to make decisions. Please tell us which reasons you think are acceptable or unacceptable, or whether you are just not sure about them.

How acceptable are the following reasons for governments to use algorithms and AI to make decisions?

Our empirical strategy is as follows. We first present descriptive evidence of how many and which reasons received the greatest amount of support. We follow this by examining the kinds of reasons citizens support, presenting an empirically defensible grouping of reasons under two categories: efficiency and fairness. We then explore which types of individuals are more likely to support these reasons, both overall and across our two groupings.[3]
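As a rough illustration of the survey pooling and listwise deletion described above, consider the following sketch. It is a minimal sketch, not the authors’ replication code: the file names, column names, and variable coding are hypothetical stand-ins (the actual materials are in the Journal of Public Policy Dataverse).

```python
import pandas as pd

# Hypothetical file and column names standing in for the two surveys.
canada = pd.read_csv("canada_2019.csv")   # 1,995 Canadian respondents
europe = pd.read_csv("epis_2020.csv")     # 15,414 European respondents
pooled = pd.concat([canada, europe], ignore_index=True)

# Covariates used in the models, harmonized across the two surveys.
# "Not sure" answers and refusals on the eight reason items are kept here
# and handled by the binary recoding described in the Results section.
covariates = ["age", "gender", "income", "education",
              "left_right", "populism", "country"]

# Listwise deletion on the covariates: keep only respondents with complete
# information, which in the paper yields 16,911 complete responses.
complete = pooled.dropna(subset=covariates)
```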

Results

What reasons for algorithmic government exhibit the greatest support among citizens?

To begin, Fig. 1 presents the frequency with which each of the eight reasons was deemed acceptable as a reason to use algorithms and AI to make decisions. In this figure, and in all of the analyses that follow, we treat acceptability as a binary measure, with those answering “not sure” or those who refused the question grouped together with those who answered “unacceptable”. Arguably, this sets a higher bar for measuring acceptance, as it assumes that those who are ambivalent or unsure are more likely to oppose than support a reason.[4]
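To make the coding rule concrete, here is a minimal sketch on toy data with hypothetical response labels; the commented alternative at the end corresponds to the robustness coding reported in Appendix E.

```python
import pandas as pd

# Toy data standing in for the eight survey items; labels are hypothetical.
df = pd.DataFrame({
    "reason_1": ["acceptable", "not sure", "unacceptable", "acceptable"],
    "reason_2": ["acceptable", "acceptable", "not sure", "unacceptable"],
})
reason_cols = ["reason_1", "reason_2"]  # reason_1 .. reason_8 in the survey

for col in reason_cols:
    # "not sure", "unacceptable", and refusals (NaN) are all coded 0
    df[col] = (df[col] == "acceptable").astype(int)

# Number of reasons deemed acceptable (0-8 in the full data)
df["n_acceptable"] = df[reason_cols].sum(axis=1)

# Appendix E alternative: acceptable = +1, unacceptable = -1, not sure /
# refused = 0, with the combined scale rescaled to 0-1 afterwards.
```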

Figure 1. Frequency of support for reasons for government use of algorithms and artificial intelligence.

Two key observations emerge from these data. First, all reasons but one receive majority support from our respondents. Second, there is nonetheless substantial variation across the reasons supported. For example, 47% of respondents believe that using AAI to reduce the time required to make decisions is acceptable. By contrast, an appreciably larger 64% support using AAI to reduce fraud against the government.

How many reasons do citizens support?

Moving beyond individual questions, Fig. 2 presents the distribution of the number of reasons individuals deemed acceptable. The bimodality here suggests that respondents are polarized on the issue. A narrow majority of people (55.6%) support a majority of reasons for using algorithmic government (at least 5 out of 8), including 19% who find every reason acceptable. On the other hand, 44.4% of respondents find fewer than a majority of reasons acceptable, and a significant proportion (19%) find none of the reasons acceptable. This indicates that there is potentially more opposition or uncertainty when we do not treat support for government use of AAI as a monolithic bundle, but instead consider specific sets of reasons for which individuals may support or oppose it.

Figure 2. Frequency of the number of reasons for government use of algorithms and artificial intelligence individuals deem acceptable.

How does support vary between countries?

The distribution of the number of reasons supported is relatively similar across countries, as Figs. 3 and 4 illustrate. The proportion of respondents who agree with a majority of the reasons ranges from a low of 44.3% in France to a high of 67.2% in Italy, with only four countries falling below 50% (Belgium (FR), Denmark, France, and Norway). The distribution of respondents between the extremes is notably similar across countries, while the proportions at the extremes vary more. At the low end, between 9.4% (Canada) and 27.3% (Denmark) of respondents in each country found no reason acceptable, and at the high end, between 13.6% (Denmark) and 27.0% (Italy) of respondents found all eight reasons acceptable.

Figure 3. Frequency of support for reasons for government use of algorithms and artificial intelligence by country (All to Germany).

Figure 4. Frequency of support for reasons for government use of algorithms and artificial intelligence by country (Greece to UK).

What kinds of reasons for government use of AAI do citizens support: efficiency or fairness?

We can organize the reasons for supporting government use of AAI into two broad categories. The first category addresses government efficiency and effectiveness and captures five items (see Table 2 below). These are items addressing, for example, the speed and cost of decisions, the size of government, and the issue of fraud. The second dimension addresses issues of fairness, as captured by three items (see Table 2). These are items concerned with using AAI to flatten out the effects of recipients’ identities or officials’ biases.[5]

Table 2. Organization of 8 reasons for government use of algorithms and artificial intelligence into two categories: efficiency and fairness
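The subscale construction, and the reliability check reported in footnote 5, can be sketched as follows. The item-to-subscale assignment shown here is a hypothetical placeholder for Table 2, and the data are simulated; only the standard Cronbach’s alpha formula is taken as given.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = (k / (k - 1)) * (1 - sum of item variances / variance of sum)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Simulated 0/1 items; in the paper these are the eight recoded reasons.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(0, 2, size=(200, 8)),
                  columns=[f"reason_{i}" for i in range(1, 9)])

# Hypothetical assignment: five efficiency items, three fairness items.
efficiency = df[["reason_1", "reason_2", "reason_3", "reason_4", "reason_5"]]
fairness = df[["reason_6", "reason_7", "reason_8"]]

print(cronbach_alpha(df))          # the paper reports 0.88 for the full scale
print(cronbach_alpha(efficiency))  # the paper reports 0.82
print(cronbach_alpha(fairness))    # the paper reports 0.72
```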

Which individual-level characteristics predict support for government use of AAI?

The final set of results considered in this paper concerns the characteristics of those who are supportive of using AAI in government decision-making. Here, we want to model some of the individual-level characteristics of respondents that might both explain their support for AAI in government and be familiar to scholars of political behavior and public policy. Our basic approach is the following. We model an individual’s aggregated acceptability score as a function of their age, gender, income, education, left-right ideological self-placement, and populist sentiments (see Appendix A for a description of all independent variables and basic summary statistics). There are a number of ready explanations for why some individuals may be more supportive of AAI than others. For example, those with greater education may better understand the potential of AAI, and so support it; equally, one could argue that they are more likely to understand the risks and drawbacks of AAI, and so be more opposed. All of this is to say that we take an empirical approach of – at this point – letting the data and models reveal the relationship between individual-level characteristics and support for AAI, rather than treating this as an exercise in hypothesis testing.

We present three separate models. The first, Model 1 in Table 3, estimates the number of reasons deemed acceptable from the complete set. Model 2 estimates the number of efficiency reasons found to be acceptable. Model 3 estimates the number of fairness reasons found to be acceptable. In each case, we rescale the dependent variable from 0 to 1 and estimate a multilevel linear model with country random effects. Coefficients are, as a consequence, easily interpretable.
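A sketch of Model 1 along these lines is below, using statsmodels and assuming a pooled data frame with the recoded acceptability count and harmonized covariates; the variable names are hypothetical, and the exact estimation details are in the replication materials.

```python
import statsmodels.formula.api as smf

# `complete` is assumed to hold the pooled data with hypothetical columns:
# n_acceptable (0-8), age, female, education, low_income, left_right,
# populism, and country.
complete["support"] = complete["n_acceptable"] / 8.0  # rescale DV to 0-1

# Multilevel linear model with country random intercepts (Model 1);
# Models 2 and 3 swap in the efficiency and fairness subscale counts.
model = smf.mixedlm(
    "support ~ age + female + education + low_income + left_right + populism",
    data=complete,
    groups=complete["country"],
)
result = model.fit()
print(result.summary())
```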

Table 3. Coefficients from multilevel linear regressions of support for algorithmic government, combined European Political Institutions Survey (EPIS), and Canadian data. Standard errors in parentheses

Note: *(p < 0.05), **(p < 0.01), ***(p < 0.001).

Model 1 suggests that several factors positively predict agreement with all justifications for AAI. Older respondents are more likely to accept more reasons, as are those with a higher level of education. By contrast, those who report a female gender are less accepting of reasons for AAI, as are those with a low income. Left-right placement and populism do not have any discernible effect when all of the potential reasons for use of AAI are included.

The full-range effects of education and income are similar, while the full-range effect of age is larger still, as shown in Fig. 5. The oldest person in our dataset accepts 0.15 (again on a scale from 0 to 1) more reasons for algorithmic government than the youngest person; on the original count, that corresponds to roughly 1.2 of the eight reasons (0.15 × 8). Women accept on average 0.04 fewer reasons for government’s use of AAI. Finally, low-income individuals support on average 0.08 fewer reasons for algorithmic government, while highly educated individuals support 0.08 more reasons than lower-educated individuals. These estimated effects are not only statistically significant but also substantively notable.[6]

Figure 5. Coefficients from multilevel linear regressions of support for algorithmic government.

Breaking out the efficiency and fairness subscales provides further insights. Acceptability of AAI on efficiency or fairness grounds alone shares the same correlates and substantive effects as the full scale, with important differences emerging for the left-right and populism variables.

When only efficiency reasons are considered, right-wing respondents are more supportive of the use of AAI, while when only fairness reasons are considered, left-wing respondents are more supportive. In addition, when only fairness reasons are considered, respondents who hold anti-elite attitudes are less supportive of the deployment of AAI. This is intuitive: populists may be less concerned about whether government is being fair, particularly if that fairness relates to minority groups. These findings also suggest that there may be other political variables, especially those related to citizens’ general or higher-order preferences vis-à-vis government, which may explain differential rates of acceptance across sets of reasons.

Discussion

The results presented above shed light on public attitudes toward algorithmic government and provide insights into the factors that shape citizens’ support for its implementation. One of the key findings is the division in public support for algorithmic government, with a narrow majority of respondents (55.6%) expressing acceptance of a majority of the presented reasons for its implementation. This suggests that the public does not uniformly embrace algorithmic decision-making but instead evaluates it based on specific contexts and justifications. The classification of reasons into two categories – efficiency and fairness – provides a nuanced understanding of citizens’ perspectives. The finding that individuals who identify with right-wing views are more supportive of efficiency reasons, while those with left-wing views favor fairness reasons, highlights the ideological foundations of public support. The alignment between political ideologies and support for different reasons suggests that policy entrepreneurs and decision-makers may tailor their communication strategies to resonate with the values and priorities of specific ideological groups. Recognizing that different segments of the population are drawn to different aspects of algorithmic government can lead to more effective engagement and governance strategies.

Furthermore, our results show that most of the cross-country variation is explained by individual-level characteristics, including age, education, gender, and income. Younger, female, low-income, and less educated respondents are overall less supportive of algorithmic government. This finding aligns with the notion that education likely plays a crucial role in shaping perceptions of technology and its potential benefits. It suggests that there may be a role for comprehensive educational campaigns to increase public awareness about the benefits, limitations, and potential risks associated with algorithmic government. Clear and accessible communication can help bridge the knowledge gap and dispel misconceptions. At the same time, gender, age, education, and income disparities in support highlight important equity considerations. The fact that support for algorithmic government is stronger among older, male, wealthy, and highly educated individuals suggests a potential bias in the design and implementation of these technologies (Angwin et al. 2016; Obermeyer et al. 2019; Szigetvari 2020). Policymakers need to ensure that algorithmic systems are developed with inclusivity in mind and are not inadvertently catering to specific demographic groups.

Conclusion

The use of algorithms and AI in government decision-making raises both opportunities and quandaries. Proponents of AI believe that algorithmic decision-making will bring several benefits, including increases in accuracy and efficiency, and a reduction in human bias. On the other hand, skeptics worry that these systems may simply reinforce existing biases and disparities. While these algorithms are already all around us, from book and movie recommendations to news stories selected for their presumed relevance, they are increasingly also being used in many areas of government, including traffic management and customer service centers, tax audit targeting, policing resource allocations, and health care service optimization. Although these tools are likely to lead to an increase in efficiency and effectiveness in government policy, their use remains contentious. In particular, key concerns include the perpetuation of bias, negative effects on employment, and privacy, ethical, and transparency issues.

While the use of AI in government is becoming increasingly common, we know comparatively little about people’s attitudes towards the use of AI by governments and the reasons why they may find such uses acceptable. We proceed from the assumption that citizen support will be fundamental both for governments to move forward with the widespread use of algorithms in decision-making across various domains and for that use to be effective. In this study, we explore a series of key questions: How many and what kind of reasons for government use of AI do citizens support? What individual-level socio-demographic characteristics predict varying levels of support? And finally, how does such support vary between countries?

Our study is one of the first to provide cross-country evidence on the reasons why citizens support or oppose the use of algorithms in government, and how this varies across individuals and countries. It emerges that views on algorithmic government are fairly divided and that individual-level differences, including those based on age, gender, education, income, and political preferences, appear to be the largest drivers of such variation. In particular, political attitudes appear to be correlated with different sets of reasons to support algorithmic government: right-wing respondents are more accepting of efficiency reasons, left-wing respondents are more accepting of fairness reasons, and populist respondents are less accepting of fairness reasons. This suggests that other political variables that we do not currently consider, including trust in government and perceptions of government capabilities, may differentially affect support for AAI. The lower support from specific demographic groups could reflect a combination of factors. While it might indicate skepticism about the inherent fairness and transparency of algorithms, it could also be attributed to concerns related to the digital literacy divide. Future research should delve deeper into the reasons behind the skepticism among these groups; investigating whether it is rooted in a perceived bias in algorithmic decision-making is crucial.

Our findings have important implications for governments as they decide how to deploy AAI in an increasing number of domains. In particular, our results suggest that citizens’ support for government use of AAI in different applications will ultimately vary based on how governments in power communicate their reasons for doing so. Future studies should account for the potential asymmetry in the importance of reasons when assessing attitudes towards algorithmic decision-making by governments. The potential for distinct levels of salience across different reasons calls for deeper exploration, where even the endorsement of a single reason out of many can hold the key to fostering support. This possibility further highlights the pivotal role of policy-makers, who, through strategic emphasis on particular reasons, can encourage public support for algorithmic government. Future research could explore the trade-offs that individuals are willing to make between different reasons; in particular, conjoint experiments would allow the relative salience of each reason to be measured more accurately. Understanding these trade-offs can provide valuable insights into the conditions under which individuals are more likely to embrace algorithmic government, thus adding depth to our comprehension of broader public sentiment.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/S0143814X24000114

Data availability statement

Replication materials are available in the Journal of Public Policy Dataverse at https://doi.org/10.7910/DVN/BDS2RX

Competing interests

The authors declare none.

Footnotes

[1] Austria, Belgium (FR), Belgium (NL), Denmark, Finland, France, Germany, Greece, Ireland, Italy, the Netherlands, Norway, Portugal, Spain, Sweden, and the UK. The two languages of Belgium were sampled separately.

[2] In the EU, individuals should not be subject to a decision based solely on automated processing, but there are some exceptions, including when the individual consents to a decision based on the algorithm (see the following for more information: https://commission.europa.eu/law/law-topic/data-protection/reform/rules-business-and-organisations/dealing-citizens/are-there-restrictions-use-automated-decision-making_en). For this reason, we use hypothetical language in the prompt.

[3] It is important to note that the surveys conducted across the 15 European countries and Canada, while distinct, shared a core set of identical questions, particularly those that formed the basis of our dependent variables on support for algorithmic government. This ensured a high degree of comparability in our primary data across regions. For the independent variables that differed slightly between the two surveys (education, income, and populism), we employed a careful harmonization process to ensure consistency and comparability in our analysis. Detailed information about the harmonization methodology and the specific adjustments made to the independent variables can be found in Appendix B. This approach was instrumental in maintaining the integrity and coherence of our cross-country comparison.

[4] In Appendix E we also show regression results where we code not sure/don’t know/refusal as a neutral option (0), with acceptable as +1 and unacceptable as −1; the combined scale is then rescaled to 0–1. Results do not substantively change.

[5] The Cronbach’s alpha coefficients are 0.88, 0.82, and 0.72 for the full, efficiency, and fairness scales, respectively.

[6] While it is not a central goal of this paper, it is potentially interesting to ask whether variation can be explained at the country level in our cases. We present results considering the intra-class correlation (ICC) in Appendix D, according to various country-level variables. The results suggest that only about 2% of the variance is due to country-level differences, and that most of the cross-country variation is explained by individual-level characteristics, including age, education, gender, and income.
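For reference, the ICC from a model like the one sketched in the Results section can be recovered from the estimated variance components. This is a minimal illustration under the assumption of a single random intercept per country.

```python
from statsmodels.regression.mixed_linear_model import MixedLMResults

def intraclass_correlation(result: MixedLMResults) -> float:
    """ICC = between-country variance / (between-country + residual variance)."""
    var_country = float(result.cov_re.iloc[0, 0])  # random-intercept variance
    var_resid = result.scale                       # residual variance
    return var_country / (var_country + var_resid)

# Applied to the fitted model from the earlier sketch:
# intraclass_correlation(result)  # roughly 0.02 per Appendix D
```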

References

Acikgoz, Y., Davison, K. H., Compagnone, M., and Laske, M. 2020. “Justice Perceptions of Artificial Intelligence in Selection.” International Journal of Selection and Assessment 28 (4): 399–416.
Albarrán Lozano, I., Molina, J. M., and Gijón, G. 2021. “Perception of Artificial Intelligence in Spain.” Telematics and Informatics 63: 101672.
Angwin, J., et al. 2016. “Machine Bias.” ProPublica, May 23, 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. Retrieved October 4, 2020.
Araujo, T., Helberger, N., Kruikemeier, S., and de Vreese, C. H. 2020. “In AI We Trust? Perceptions About Automated Decision-Making by Artificial Intelligence.” AI & Society 35 (3): 611–23.
Behavioural Insights Team (BIT). 2020. “The Perception of Fairness of Algorithms and Proxy Information in Financial Services.” A report for the Centre for Data Ethics and Innovation from the Behavioural Insights Team.
Burton, J. W., Stein, M. K., and Jensen, T. B. 2019. “A Systematic Review of Algorithm Aversion in Augmented Decision Making.” Journal of Behavioral Decision Making 33 (2): 220–39.
Castelo, N., Bos, M. W., and Lehmann, D. R. 2019. “Task-Dependent Algorithm Aversion.” Journal of Marketing Research 56 (5): 809–25.
De Fine Licht, K., and De Fine Licht, J. 2020. “Artificial Intelligence, Transparency, and Public Decision-Making.” AI & Society 35: 917–26.
Dietvorst, B., and Bharti, S. 2020. “People Reject Algorithms in Uncertain Decision Domains Because They Have Diminishing Sensitivity to Forecasting Error.” Psychological Science 31 (10): 1302–14.
Dietvorst, B., Simmons, J. P., and Massey, C. 2015. “Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err.” Journal of Experimental Psychology: General 144 (1): 114–26.
Engstrom, D., Ho, D., Sharkey, C. M., and Cuellar, M. F. 2020. “Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies.” Report submitted to the Administrative Conference of the United States, Stanford Law School.
Engstrom, D., and Ho, D. 2020. “Algorithmic Accountability in the Administrative State.” Yale Journal on Regulation 37 (3): 800–54.
Grimmelikhuijsen, S. 2022. “Explaining Why the Computer Says No: Algorithmic Transparency Affects the Perceived Trustworthiness of Automated Decision-Making.” Public Administration Review 83 (2): 241–62.
Grzymek, V., and Puntschuh, M. 2019. “What Europe Knows and Thinks About Algorithms.” Bertelsmann Stiftung, Discussion Paper Ethics of Algorithms #10. http://aei.pitt.edu/102582/1/WhatEuropeKnowsAndThinkAboutAlgorithm.pdf. Retrieved October 4, 2020.
IE Center for the Governance of Change. 2019. “European Tech Insights 2019.” https://docs.ie.edu/cgc/European-Tech-Insights-2019.pdf. Retrieved October 4, 2020.
Katwala, A. 2020. “An Algorithm Determined UK Students’ Grades. Chaos Ensued.” WIRED, August 15, 2020. https://www.wired.com/story/an-algorithm-determined-uk-students-grades-chaos-ensued/. Retrieved October 4, 2020.
Kostka, G. 2019. “China’s Social Credit Systems and Public Opinion: Explaining High Levels of Approval.” New Media & Society 21 (7): 1565–93.
Lee, M. K. 2018. “Understanding Perception of Algorithmic Decisions: Fairness, Trust, and Emotion in Response to Algorithmic Management.” Big Data & Society 5 (1). https://doi.org/10.1177/2053951718756684
Lee, T. B. 2020. “Why Experts Are Overwhelmingly Skeptical of Online Voting.” Ars Technica, September 3, 2020. https://arstechnica.com/tech-policy/2020/09/why-experts-are-overwhelmingly-skeptical-of-online-voting/. Retrieved October 4, 2020.
Leist, E., and Smith, D. 2014. “Accessibility Issues in E-Government.” In Electronic Government and the Information Systems Perspective. EGOVIS 2014, Vol. 8650 of Lecture Notes in Computer Science, eds. Ko, A. and Francesconi, E. Cham: Springer.
Logg, J. M., Minson, J., and Moore, D. A. 2019. “Algorithm Appreciation: People Prefer Algorithmic to Human Judgment.” Organizational Behavior and Human Decision Processes 151: 90–103.
Longoni, C., Bonezzi, A., and Morewedge, C. K. 2019. “Resistance to Medical Artificial Intelligence.” Journal of Consumer Research 46 (4): 629–50.
Lünich, M., and Kieslich, K. 2022. “Exploring the Roles of Trust and Social Group Preference on the Legitimacy of Algorithmic Decision-Making vs. Human Decision-Making for Allocating COVID-19 Vaccinations.” AI & Society. https://doi.org/10.1007/s00146-022-01412-3
Miller, S. M., and Keiser, L. R. 2020. “Representative Bureaucracy and Attitudes Toward Automated Decision Making.” Journal of Public Administration Research and Theory 31 (1): 160–5.
Mozur, P., Zhong, R., and Krolik, A. 2020. “In Coronavirus Fight, China Gives Citizens a Color Code, With Red Flags.” New York Times, August 7, 2020. https://www.nytimes.com/2020/03/01/business/china-coronavirus-surveillance.html. Retrieved October 4, 2020.
Obermeyer, Z., Powers, B., Vogeli, C., and Mullainathan, S. 2019. “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations.” Science 366 (6464): 447–53.
Prahl, A., and Van Swol, L. 2017. “Understanding Algorithm Aversion: When Is Advice from Automation Discounted?” Journal of Forecasting 36 (6): 691–702.
Schlicker, N., Langer, M., Ötting, S. K., Baum, K., König, C. J., and Wallach, D. 2021. “What to Expect from Opening up ‘Black Boxes’? Comparing Perceptions of Justice Between Human and Automated Agents.” Computers in Human Behavior 122: 106837.
Schmidt, P., Biessmann, F., and Teubner, T. 2020. “Transparency and Trust in Artificial Intelligence Systems.” Journal of Decision Systems 29 (4): 260–78.
Shin, D. 2020. “User Perceptions of Algorithmic Decisions in the Personalized AI System: Perceptual Evaluation of Fairness, Accountability, Transparency, and Explainability.” Journal of Broadcasting & Electronic Media 64 (4): 541–65.
Simonite, T. 2020. “Europe Limits Government by Algorithm. The US, Not So Much.” WIRED, February 7, 2020. https://www.wired.com/story/europe-limits-government-algorithm-us-not-much/. Retrieved October 4, 2020.
Smith, A. 2018. “Public Attitudes Towards Computer Algorithms.” Pew Research Center, November 16, 2018. https://www.pewresearch.org/internet/2018/11/16/public-attitudes-toward-computer-algorithms/. Retrieved October 4, 2020.
Starke, C., and Lünich, M. 2020. “Artificial Intelligence for EU Decision-Making: Effects on Citizens’ Perceptions of Input, Throughput & Output Legitimacy.” Data & Policy 2: E16. https://doi.org/10.1017/dap.2020.19
Szigetvari, A. 2020. “Datenschutzbehörde kippt umstrittenen AMS-Algorithmus” [Data protection authority overturns controversial AMS algorithm]. Der Standard, August 20, 2020. https://www.derstandard.at/story/2000119486931/datenschutzbehoerde-kippt-umstrittenen-ams-algorithmus. Retrieved October 4, 2020.
United Nations. 2020. “E-Government Survey 2020. Digital Government in the Decade of Action for Sustainable Development.” Department of Economic and Social Affairs. https://publicadministration.un.org/egovkb/Portals/egovkb/Documents/un/2020-Survey/2020%20UN%20E-Government%20Survey%20(Full%20Report).pdf. Retrieved October 4, 2020.
Yeomans, M., Shah, A. K., Mullainathan, S., and Kleinberg, J. 2019. “Making Sense of Recommendations.” Journal of Behavioral Decision Making 32: 403–14.
Zhang, B., and Dafoe, A. 2019. “Artificial Intelligence: American Attitudes and Trends.” Center for the Governance of AI, Future of Humanity Institute. https://ssrn.com/abstract=3312874. Retrieved October 4, 2020.