Artificial intelligence-based algorithms are already used extensively by public institutions, and their use will likely expand. We argue that decisions made by AI algorithms cannot count as public decisions, namely, decisions made in the name of the public, and that this fact bears on the legitimacy and desirability of deploying AI in lieu of public officials. More specifically, the extensive use of AI in public institutions distances the public from decisions and, hence, undermines the public's authorship over those decisions.
The assumption underlying the analysis in the preceding chapters was that there are cases where the value of a decision-making process hinges not only on the quality of the output, namely, the quality of the resulting decisions, but also on who makes the decisions and how they are made. There it was argued that public decisions are characterized by the fact that they are made in the name of all and, further, that the legitimacy and the value of such decisions hinge on the agent who makes them. This chapter identifies more concrete ramifications of this observation. More specifically, it argues that to count as being made in our name, decisions must be publicly discussed, openly debated, and, more concretely, satisfy three conditions: transparency, participability, and challengeability. Decisions that fail to meet these conditions cannot be characterized as genuinely public decisions. The chapter argues that these conditions are often not met (or not sufficiently met) by AI algorithms. Further, it argues that the source of this failure is structural, rather than contingent or transitory.