
Part II - Respecting Persons, What We Owe Them

Published online by Cambridge University Press:  17 May 2021

Alan Rubel, University of Wisconsin, Madison
Clinton Castro, Florida International University
Adam Pham, California Institute of Technology


Type: Chapter
Information: Algorithms and Autonomy: The Ethics of Automated Decision Systems, pp. 43–96
Publisher: Cambridge University Press
Print publication year: 2021
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

3 What Can Agents Reasonably Endorse?

In Chapter 2, we offered an account of autonomy that is both compatible with a broad range of views and ecumenical in that it incorporates important facets of competing views. The key features of our account are that autonomy demands both procedural independence (i.e., competence and authenticity) and substantive independence (i.e., social and relational conditions that nurture and support persons in acting according to their values as they see fit, without overweening conditions on acting in accord with those values). Our next task is to draw on that conception of autonomy to better understand and evaluate algorithmic decision systems. Among those that we will consider are ones we introduced in Chapter 1, including risk assessment algorithms such as COMPAS and K-12 teaching evaluation systems such as EVAAS.

How, though, do we get from an account of an important moral value such as autonomy to an evaluation of complex socio-technical systems? We will do that by offering a view of what it takes to respect autonomy and to respect persons in virtue of their autonomy, drawing on a number of different normative moral theories. Our argument will proceed as follows. We start with a description of another K-12 teacher evaluation case – this one from Washington, DC. We then consider several puzzles about the case. Next, we provide our account of respecting autonomy and what that means for individuals’ moral claims. We will explain how that conception can help us understand the DC case, and we will offer a general account of the moral requirements of algorithmic systems.Footnote 1 Finally, we will explain how our view sheds light on our foundational cases (i.e., Loomis, Wagner, and Houston).

3.1 IMPACT: Not an Acronym

In 2007, Washington, DC sought to improve its public school system (“DC schools”) by implementing an algorithmic teacher assessment tool, IMPACT, whose aim was to identify and remove ineffective teachers. In 2010, teachers with IMPACT scores in approximately the bottom 2 percent were fired; in 2011, teachers with IMPACT scores in approximately the bottom 5 percent were fired.Footnote 2

There is a plausible argument for DC schools using IMPACT. The algorithm uses complex, data-driven methods to find and eliminate inefficiencies, and it purports to do this in an objective manner. Its inputs are measurements of performance and its outputs are a function of those measurements. Whether teachers have, say, ingratiated themselves to administrators would carry little weight in the decision as to whether to fire them. Rather, it is (ostensibly) their effectiveness as teachers that grounds the decision. Using performance measures and diminishing the degree to which personal favor and disfavor affect evaluation could plausibly generate better educational outcomes.

Nonetheless, DC schools’ use of IMPACT was problematic. This is in part because IMPACT’s conclusions were epistemically flawed. A large portion of a teacher’s score is based on a value-added measure (VAM) that seeks to isolate and quantify a teacher’s individual contribution to student achievement on the basis of annual standardized tests.Footnote 3 However, VAMs are poorly suited for this measurement task.Footnote 4 DC teachers work in schools with a high proportion of low-income students. At the time IMPACT was implemented, even in the wealthiest of the city’s eight wards (Ward 3) nearly a quarter of students were from low-income families, and in the poorest ward (Ward 8), 88 percent of students were from low-income families.Footnote 5 As one commentary on IMPACT notes, low-income students face a number of challenges that influence their ability to learn:

These schools’ student bodies are full of kids dealing with the toxic stress of poverty, leaving many of them homeless, hungry, or sick due to limited access to quality healthcare. The students are more likely to have an incarcerated parent, to be deprived of fresh or healthy food, to have spotty or no internet access in their homes, or to live in housing where it is nearly impossible to find a quiet place to study.Footnote 6

Given the challenges of their students, it is not surprising that fewer teachers in Ward 8 than Ward 3 are identified by IMPACT as “high performing.”Footnote 7

The effects of poverty are confounding variables that affect student performance on standardized tests. For this reason, we cannot expect VAMs – which use only annual test scores to assess a teacher’s individual contribution to student achievement – to reliably find the signal of bad teaching through the noise of student poverty. Indeed, the American Statistical Association warns that studies on VAMs “find that teachers account for about 1% to 14% of the variability in test scores, and that the majority of opportunities for quality improvement are found in the system-level conditions.”Footnote 8 The American Statistical Association also notes that “[VAMs] have large standard errors, even when calculated using several years of data. These large standard errors make rankings [of teachers] unstable, even under the best scenarios for modeling.”Footnote 9
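The American Statistical Association’s point about large standard errors can be made vivid with a small simulation. The sketch below is purely illustrative and not a model of IMPACT itself: the teacher count, variance share, and firing cutoff are all hypothetical. It assumes teachers account for roughly 10 percent of score variance (the middle of the ASA’s 1–14 percent range) and checks how stable a bottom-5-percent list is across two simulated years.

```python
import random

random.seed(0)

N = 1000                # hypothetical number of teachers
TRUE_VAR_SHARE = 0.10   # assume teachers explain ~10% of score variance (ASA: 1%-14%)

# Each teacher has a fixed "true effect"; a yearly score adds noise from
# everything else (student poverty, cohort composition, test measurement error).
true_effect = [random.gauss(0, TRUE_VAR_SHARE ** 0.5) for _ in range(N)]

def yearly_score(t):
    return true_effect[t] + random.gauss(0, (1 - TRUE_VAR_SHARE) ** 0.5)

year1 = [yearly_score(t) for t in range(N)]
year2 = [yearly_score(t) for t in range(N)]

# Which teachers land in the bottom 5% each year?
cutoff = int(N * 0.05)
bottom1 = set(sorted(range(N), key=lambda t: year1[t])[:cutoff])
bottom2 = set(sorted(range(N), key=lambda t: year2[t])[:cutoff])

overlap = len(bottom1 & bottom2) / cutoff
print(f"Overlap of bottom-5% lists across two simulated years: {overlap:.0%}")
```

Under these assumptions the two years’ bottom-5-percent lists share only a small fraction of their members, which is the ranking instability the ASA describes: mostly the lists pick out different teachers, not persistently ineffective ones.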

So IMPACT suffers from an epistemic shortcoming. Is there also a moral problem? One possibility is that IMPACT poses a moral problem in that it harms teachers; it is harmful for teachers to lose their jobs, and IMPACT scores are the basis for that loss and harm. This, however, is not enough to conclude that there is a moral wrong. Firing teachers can be justified (e.g., for cause) even though it harms them, and use of IMPACT may create enough student benefit to justify risking some harm to teachers. Moreover, IMPACT is not obviously unfair; its epistemic flaws may be evenly distributed among teachers.

If there is something wrong about using IMPACT, what does that have to do with the epistemic problem, and what does it have to do with autonomy? We will argue that a teacher who is fired is wronged when that firing is based on a system that they could not reasonably endorse. We explain the general argument in the next section. We then apply that argument to IMPACT and our other polestar cases in the remainder of the chapter.

3.2 Autonomy, Kantian Respect, and Reasonable Endorsement

In Chapter 2, we explained that autonomy and self-governance involve (among other things) the capacity to develop one’s own conception of value and sense of what matters, and the ability to realize those values by guiding one’s actions and decisions according to one’s sense of value. We explained the relationship of this conception to Kantian views.Footnote 10

The issue we are addressing here, though, is what kinds of moral requirements are grounded in autonomy. How, in other words, does autonomy ground persons’ moral claims? There are a number of different ways to address this question.

Let’s begin with a prominent account by Christine Korsgaard.Footnote 11 The basic idea of autonomy – that is, that each of us, in our capacity as autonomous beings, develops conceptions of value for ourselves and acts on those conceptions – is that people are self-legislators. By engaging in self-legislation, we understand our capacity to determine what matters for ourselves as a source of value. If we treat this capacity as a source of value, then it is the capacity itself (not, say, our own egoism) that must be valuable. Hence, any instances of that capacity (not just our own) must also be a source of value. So, because we are autonomous, we must value (which is to say, respect) autonomy generally. In other words, the premise that the capacity to self-legislate grounds value in one’s own case entails the conclusion that a similar capacity to self-legislate must also ground value in others’ cases.

A different way to ground the value of autonomy is its connection to well-being. Individuals have the capacity to develop their own sense of value; they are generally well positioned to understand how to advance that value, and the ability to do so (within reasonable parameters) is an important facet of their well-being. Because we have good reasons to promote well-being in ourselves and others, we therefore have good reasons to respect autonomy in ourselves and others. This is the line of reasoning that a utilitarian such as John Stuart Mill can use in support of respecting autonomy.Footnote 12

Views like these link the concept of autonomy to the moral value of respecting autonomy. But what does respect for autonomy require? Returning to Kant, there are different but (roughly) equivalent ways to spell this out. One way to respect autonomy is to abide by the second, Humanity Formulation of the Categorical Imperative:

Humanity Formulation: So act that you use humanity, whether in your own person or in the person of any other, always at the same time as an end, never merely as a means.Footnote 13

Treating something as an end requires treating it as something that is valuable in its own right; treating something “merely” as a means involves treating it as solely an instrument for the promotion of an end, without also treating it as an end itself. Treating someone as an end-in-themselves requires that we take seriously their ability to make sense of the world and their place in it, to determine what matters to them, and to act according to their own understanding and values (to the extent that they see fit). They may not be considered solely in terms of how they advance values of others.

The Humanity Formula is, of course, vague and has long been the subject of dispute. Derek Parfit can provide some help in specifying it. He offers the following principle as the core idea of the Humanity Formula:

Consent Principle: It is wrong to treat anyone in any way to which this person could not rationally consent.Footnote 14

Famously, Kant gave several formulations of the Categorical Imperative, of which the Humanity Formula is just one. According to another formulation,

Formula of Universal Law: Act only in accordance with that maxim through which you can at the same time will that it become a universal law.Footnote 15

A “maxim” is a principle that connects an act to the reasons for its performance. Suppose one makes a donation to Oxfam. Their maxim might be, “In order to help reduce world hunger, I will contribute fifty dollars a month to Oxfam.” An act is morally permissible when its maxim is universalizable, that is, if (and only if) every rational person can consistently act on it.

The Formula of Universal Law is often compared to the Golden Rule. This is a comparison Kant would be loath to accept since he rejected the Golden Rule as a moral principle. Parfit, in his inimitable style, thinks that “Kant’s contempt for the Golden Rule is not justified.”Footnote 16 And indeed, Parfit offers a reconstruction of the Golden Rule that incorporates the core ideas of the Formula of Universal Law as follows:

Golden Rule: We ought to treat everyone as we would rationally be willing to be treated if we were going to be in all of these people’s positions, and would be relevantly like them.Footnote 17

As consideration of the Humanity Formula, the Consent Principle, the Formula of Universal Law, and the Golden Rule helps to lay bare, respecting autonomy involves both an element of treating others in ways to which they can agree (because it aligns with their ends, for example) and an element of understanding how others’ positions are relevant to which ends one adopts as one’s own.

Hence, there are two facets to this “golden rule” style formulation. The first has to do with a person’s ability to develop and endorse their sense of value and act accordingly, including what treatment they would willingly subject themselves to. This facet of the rule explains wrongs that are associated with deception. Deception is wrong (when it is) in part because it circumvents an agent’s ability to make decisions according to their own reasons. Likewise, paternalism is an affront (when it is) because it supplants a person’s ability to act on their own reasons based on a degree of distrust of their agency.Footnote 18

The second facet of the Golden Rule has to do with the treatment of others. Autonomy can underwrite moral claims only to the extent that it is used to ends that are compatible with others’ reasonable interests. The requirement that we consider how we would be rationally willing to be treated if we were relevantly similar to others, and in similar circumstances, is a way of making vivid others’ reasonable interests. It also echoes Joel Feinberg’s understanding of “autonomy as ideal” (as we discussed in Section 2.2.2). Autonomy as ideal recognizes that people can exercise autonomy badly (such that facets of autonomy are not necessarily virtues) and that people are parts of larger communities. Hence, Feinberg explains, the ideal of an autonomous person requires that their self-governance be consistent with the autonomy of others in their community.Footnote 19 This, in turn, reflects Kant’s understanding that morally right action requires that the action can coexist with everyone else’s ability to exercise freedom under universal moral law.Footnote 20

Feinberg’s understanding of autonomy as ideal is reflected in two other conceptions of respecting autonomy that are useful in developing our view. The first comes from John Rawls. In developing his understanding of just political and social systems, Rawls describes people as having two moral powers. The idea is that any person in the original position – which is to say anyone deciding on the basic structure of the society in which they will live, but knowing nothing of their place in it and nothing about their particular characteristics – must possess two powers for their choices to make sense. First, they must be rational. As in our discussion in Chapter 2, “rational” here just means the ability to engage in basic reasoning about means and ends, coupled with some set of basic values and motivations. The idea is that for a person to prefer one social and political structure over another, they must have some basic motivations to ground that preference. If literally nothing mattered to an individual, there would be no basis for their choices. Second, persons must be reasonable. This simply means that they are willing to abide fair terms of social cooperation, so long as others do too. It requires neither subordinating one’s reasonable interests to others nor accepting outlandish demands from others.

Rawls’s view is that people with these two powers in the original position would, for reasons having to do with nothing more than their own self-interest, accept certain social structures as binding. They will advance ends that people endorse (after all, those ends might be their own) and will establish fair terms of social cooperation because they will be in a position where they will have to abide those terms. Now, there are myriad criticisms and limitations of Rawls’s view, but his conception is useful in that it connects procedural autonomy (or psychological autonomy, as we described it in Chapter 2) to respect and social cooperation. Following Kant, Rawls’s view is that persons’ exercise of their own autonomy is important, but justifiable only to the extent that it is compatible with others’. And, hence, principles limiting autonomy can be grounded in fair terms of social cooperation.

A different view comes from Scanlon. Both Scanlon’s and Rawls’s views are grounded in social contract theory. However, Rawls’s target is society’s basic structure, while Scanlon’s main concern is to articulate basic moral principles governing social interaction. Moreover, while Rawls derives principles based on people rationally advancing their own self-interest, Scanlon aims to derive principles based on an account of the reasons one can offer to others to justify conduct. Specifically, Scanlon argues that “[a]n act is wrong if its performance under the circumstances would be disallowed by any set of principles for the general regulation of behavior that no one could reasonably reject as a basis for informed, unforced, general agreement.”Footnote 21 Parfit distills this view into what we might call the “reasonable rejection” criterion: “Everyone ought to follow the principles that no one could reasonably reject.”Footnote 22 This criterion holds the linchpin of morality to be the strength of people’s reasons: If one has good reasons against some principle and actions based on it, but others have weightier reasons for that principle and actions based on it, those weightier reasons should prevail on the grounds that one cannot reasonably reject them.

Rahul Kumar characterizes Scanlon’s contractualism as grounding persons’ legitimate expectations and demands of one another concerning conduct and consideration “as a matter of basic mutual respect for one another’s value as rational self-governors.”Footnote 23 A key facet of Scanlon’s approach, and one that unites the respect-for-persons views we are drawing on here, is that each requires paying attention to individuals and to the separateness of persons.Footnote 24 To understand this, contrast the requirement that actions be based on principles no one could reasonably reject with aggregating views such as some forms of consequentialism (i.e., those that are concerned with aggregated welfare).

As Kumar notes, consequentialist concerns with our actions are subordinate to (i.e., only matter in light of) the results of those actions. However, contractualism (and autonomy-respecting theories generally) focuses on how our actions reflect our relationships with others directly. Consequences, on this view, matter only insofar as they reflect respect for other persons. That is,

[Consequentialism is] concerned with what we do, but only because what we do affects what happens. The primary concern in [consequentialism] is with the promotion of well-being. Contractualism is concerned with what we do in a more basic sense, since the reasons for which we act express an attitude toward others, where what is of concern is that our actions express an attitude of respect for others as persons.Footnote 25

Respecting others as having their own sense of value and being able to order their lives accordingly also makes it the case that their expectations should matter to us, and mutual respect entails that people have legitimate expectations of one another. People have good reason to expect that others will respect them as having their own ends and as being capable of abiding fair terms of social cooperation. Treatment that frustrates that expectation is a failure of respect. As Kumar puts it,

Disappointments of such expectation are (at least prima facie) valid grounds for various appropriate reactive attitudes toward one another. Resentment, moral indignation, forgiveness, betrayal, gratitude – the range and subtlety of reactions we have toward others with whom we are involved in some kind of interpersonal relationship is inexhaustible – all presuppose beliefs about what we can reasonably expect from others.Footnote 26

We will return to the issue of reactive attitudes in Chapter 7, where we examine automated decision systems and responsibility.

So what is the upshot?

Recall that the purpose of this section is to move from the understanding of autonomy developed in Chapter 2 to some of the moral claims that are grounded in respecting autonomy. We have drawn on several views about respecting autonomy, each of which attends to the importance of the principles one wills for oneself and to the incontrovertible fact that humans are social beings, and hence, to the fact that human moral principles require broad and deep social cooperation. Respecting autonomy, in other words, requires both attention to individuals’ conceptions of their own good and some broad conception of social cooperation. Notice, though, that the views we have drawn on have substantial overlap. They may entail some differences in application (though that is an open question and would merit an argument), and they may have slightly different normative grounds for principles. But our project is deliberately ecumenical, and for our purposes the most important thing is the similarity across these views. They all point in the same direction, and they each provide a foundation on which to articulate the following, which captures their key elements, at least to a first approximation.

Reasonable Endorsement Test: An action is morally permissible only if it would be allowed by principles that each person subject to it could reasonably endorse.

According to this test, then, subjecting a person to an algorithmic decision system is morally permissible only if it would be allowed by principles that everyone could reasonably endorse.

It’s worth clarifying a couple of points about the Reasonable Endorsement Test. It differs from the articulations given by Scanlon and Rawls. One reason that Scanlon uses “reasonably reject” is to emphasize that persons must compare the burdens that they must endure under some state of affairs with others’ burdens under that state of affairs. Hence, if a person would reject a social arrangement as burdensome, but their burden is less than others’, there are substantial benefits to the arrangement, the alternatives are at least as burdensome to others, and the overall consequences are not substantially better, then the person’s rejection of the arrangement would be unreasonable.

Our use of “could reasonably endorse” does similar work, as we make clear in our discussion of IMPACT later. However, by focusing on endorsement it leans closer to Parfit’s reformulation of the Golden Rule. Specifically, people can reasonably endorse (i.e., can be rationally willing to be treated according to) principles as either consistent with their own sense of value or as fair terms of social cooperation. Note, too, that actions and principles that do not affect an individual’s personal interests are nonetheless candidates for reasonable endorsement because those individuals can evaluate them as fair terms of social cooperation.

Another thing to note is that each of the formulations we have drawn on, as well as our Reasonable Endorsement Test, will inevitably have important limitations. Recall that each is trying to provide a framework for guiding actions based on a more basic moral value (autonomy). Hence, what matters for each principle is the extent to which it recognizes and respects persons’ autonomy. For the reasons we outlined earlier, we think that the reasonable endorsement principle does at least as well in capturing respect for autonomy as the other formulations.

Finally, even if another principle does a better job reflecting the nature and value of autonomy and providing guidance, our sense is that those views, when applied, will converge with ours (at least in the cases that are of interest here). And in any case, objecting to our larger project on the grounds that a different kind of social agreement principle better captures autonomy and social cooperation warrants an argument for why it would better explain concerns in the context of algorithmic systems.

3.3 Teachers, VAMs, and Reasonable Endorsement

To sum up our view so far: teachers are autonomous persons, and hence they have a claim to incorporate their values into their lives as they see fit. And respecting them requires recognizing them as value-determiners, neither thwarting nor circumventing their ability to act according to those values without good reason. They are also capable of abiding fair terms of social agreement (so long as others do too), and hence “good reasons” for them will be reasons they can endorse as fair terms of social cooperation, which means they can endorse those reasons as either consistent with their own values or as a manifestation of fair social agreement.

Now, what is it to thwart an agent’s ability to act according to their values? One example, discussed earlier, is deceit, in which one precludes an agent’s ability to understand circumstances relevant to their actions. Another way to thwart agency is to create conditions in which agents are not treated according to reasons that they could reasonably endorse, were they given the opportunity to choose how to be treated. That is, precluding persons from acting according to their values (e.g., by deceit) or placing them in circumstances that they cannot endorse as fair is a failure of recognition of them as value-determiners and a form of disrespect.

IMPACT fails to respect teachers in exactly this way (i.e., placing them in circumstances they cannot endorse), for several interrelated reasons.Footnote 27 The reasons are reliability, responsibility, stakes, and relative burden, and they work as general criteria for when people can reasonably endorse algorithmic systems.

Reliability. For the purposes of this project, we will understand reliability in its colloquial sense; that is, as consistent (though not necessarily infallible) accuracy.Footnote 28 We have provided some reasons for why IMPACT is an unreliable tool for the evaluation of teacher efficacy. Now, teachers, like any professionals, can reasonably endorse a system in which they are evaluated based on their efficacy. Moreover, through their training and professionalization, they have endorsed the value of educating students, and fair terms of social cooperation would require that truly ineffective teachers be identified for this reason. But because IMPACT is unreliable, there is some reason to think that it misidentifies teachers as ineffective. Hence, teachers should be loath to endorse being evaluated by IMPACT.

Responsibility. IMPACT’s lack of reliability is not the only way it fails to respect autonomy. Imagine a case where a teacher evaluation system reliably measures student learning. Two teachers score poorly in this year’s assessment. One scores poorly because she did not assign curriculum-appropriate activities, while the other scores poorly because her classroom lacks air-conditioning. Only the first teacher is responsible for her poor scores. The second teacher’s scores are based on factors for which she is not responsible. Teachers could not reasonably endorse such a system.Footnote 29

Given the population many DC teachers were working with – underserved students – IMPACT cannot be understood as tracking only factors for which teachers are responsible. The effects of poverty, abuse, bullying, illness, undiagnosed learning disabilities (resources for addressing these are much more limited in underserved districts), and so on plausibly undermine teacher efficacy. Yet teachers bear no responsibility for those impediments. So, even if the VAMs were reliable, teachers could not reasonably endorse their implementation.

Note that the dimension of responsibility covers not only the factors that teachers can’t be responsible for (e.g., children’s circumstances outside of school), but also factors they shouldn’t be held responsible for. It is not impossible to imagine, for instance, that teachers who bring snacks to every session could motivate their students to get higher test scores, or that teachers who repair their classrooms’ air-conditioners themselves could do the same. Teachers who do (or do not) bring snacks, fix air-conditioners, etc., can be responsible for engaging in (or refraining from) those activities, in the sense that they have the power to engage in (or refrain from) those activities. However, they shouldn’t be held responsible for refraining from the activities mentioned here, as this is not a reasonable ask.

Now, what exactly makes for a reasonable ask? That is a question we cannot answer with an informative general account, as the answer will vary greatly from domain to domain. We simply introduce the dimension of responsibility into our criterion to highlight the fact that algorithmic systems can affect persons for factors they either cannot or should not be responsible for, and that one factor relevant to the question of whether someone may be affected for partaking in (or refraining from) an activity is whether asking them to partake in (or refrain from) that activity is itself reasonable.

Stakes. Perhaps the most important factor in determining whether agents can reasonably endorse an algorithmic decision system is the stakes involved. Suppose that a VAM is set up to provide teachers with lots of information about their own practices but is not used for comparative assessment. The scores are shared with teachers privately and are not used for promotion and firing. Such a system might not be very reliable, or it might measure factors for which teachers are not responsible. Nonetheless, teachers might endorse it despite its limitations because the stakes are low. But if the stakes are higher (work assignments, bonuses, promotions), it is reasonable for the employees to want the system to track factors which can be reliably measured and for which they are responsible.

DC schools’ use of IMPACT is high stakes. Teachers rely on their teaching for a paycheck, and many take pride in what they do. They have sought substantial training and often regard educating students as key to their identities. Having a low IMPACT score might cost a teacher their job and career, and it may well undermine their self-worth. By agreeing to work in particular settings they have formed reasonable expectations that they can continue to incorporate those values into their lives, subject to fair terms of cooperation (e.g., that they do their work responsibly and well, that demand for their services continues, that funding remains available, etc.).

IMPACT does poorly on our analysis. It is not reliable, it evaluates teachers based on factors for which they are not responsible, and it is used for high-stakes decisions. These points are reflected in teacher reactions to IMPACT. For example, Alyson Perschke – a fourth-grade teacher in DC schools – alleged in a letter to Chancellor Kaya Henderson that VAMs are “unreliable and insubstantial.”Footnote 30 Perschke did so well in her in-class observations that her administrators and evaluators asked if she could be videotaped as “an exemplar.”Footnote 31 Yet the same year her VAM dragged her otherwise-flawless overall evaluation down to average. Remarking on this, she says, “I am baffled how I teach every day with talent, commitment, and vigor to surpass the standards set for me, yet this is not reflected in my final IMPACT score.”Footnote 32

Relative Burden. Another factor that is relevant in determining whether persons subject to an algorithmic system can reasonably endorse it is what we will call “relative burden.” It is plausible that IMPACT disproportionately negatively affects teachers from underrepresented groups:

[T]he scarcely mentioned, uglier impact of IMPACT is disproportionately felt by teachers in DC’s poorest wards – at schools toward which minority teachers tend to gravitate. In seeking to improve the quality of teachers, IMPACT manages to simultaneously perpetuate stubborn workforce inequalities and exacerbate an already alarming shortage of teachers of color.Footnote 33

So IMPACT might impose more burdens on members of underrepresented groups. This is another reason – independent of reasons grounded in reliability, responsibility, and stakes – that teachers cannot reasonably endorse IMPACT.

There has been a great deal of discussion about how algorithmic systems may be biased or unfair. However, precisely what those concepts amount to and the degree to which they create moral problems are often unclear. Indeed, different conceptions of fairness can lead to different conclusions about whether a particular system is unfair (cf. Section 3.5).Footnote 34

The problem, stated generally, is that there are many ways in which a decision system can represent subpopulations differently. The data on which a system is built and trained may under- or overrepresent a group. The system may make predictions about facts that have different incidence rates in populations, leading to different false-positive and false-negative rates. It is plausible to describe each of these cases of difference as in some sense “unfair.” But we cannot know as a general matter which conception of fairness is appropriately applied across all cases. Thus, we will need an argument for applying any particular conception. In other words, even once we have determined that an algorithmic system is in some sense unfair, there is a further question as to whether (and why) it is morally problematic.

Put another way, one could frame our argument about the conditions under which agents can reasonably endorse algorithmic systems as an argument about the conditions under which such systems are fair (if everyone could reasonably endorse a system, that system is ipso facto fair). On that framing, fairness is the conclusion of our analysis, so including fairness as a general criterion for whether agents could reasonably endorse a system would make our argument circular.Footnote 35 Simply pointing out differences in treatment and concluding that those differences are unfair is not enough to move our argument forward. Rather, our task here is to identify factors that matter in determining whether people can reasonably endorse a decision system independently of whether it can be characterized as unfair. Among those is relative burden.

All automated systems will distribute benefits and burdens in some way or another.Footnote 36 Smart recommender systems for music, for example, will favor some artists over others and some users’ musical tastes over others. Of course, what matters for our view here is whether those subject to such a system can reasonably endorse it. Relative burden matters if that burden either (a) is arbitrary, such that it has nothing to do with the system in the first place, (b) reflects otherwise morally unjustifiable distinctions, or (c) is a compound burden, which reflects or exacerbates an existing burden on a group.

An example of an arbitrary burden (a) would be a test that systematically scored English teachers more poorly overall than teachers of other subjects and thereby created some social or professional consequences for those teachers. An example of (b) would be a test that, let’s suppose, systematically scored kindergarten teachers who are men lower than others. Perhaps kindergarteners respond less well to even very well-qualified men than they do to women; it would be morally unjustifiable, though, to have evaluation systems reflect that (at least if the stakes are significant). The unjustifiability in that case, however, is not related to some other significant social disadvantage. An example of (c) would be a case where an automated system imposes a burden that correlates with some other significant social burden (in many cases race, ethnicity, gender, or socioeconomic position).

3.4 Applying the Reasonable Endorsement Test

So far, we have argued that for an algorithmic system to respect autonomy in the relevant way, those who are subject to the system must be able to reasonably endorse it. And whether people can reasonably endorse a given system will be a function of its reliability, the extent to which it measures outcomes for which its subjects are responsible, the stakes at hand, and the relative burden imposed by the system’s use. We illustrated our framework by analyzing DC Schools’ use of IMPACT. Here, we turn back to our polestar cases to help make sense of the moral issues underlying them.

3.4.1 Wagner v. Haslam

Recall from Chapter 1 that the plaintiffs in this case (Teresa Wagner and Jennifer Braeuner) were teachers who challenged the Tennessee Value-Added Assessment System (TVAAS), which is a proprietary system similar to IMPACT. Because TVAAS did not test the subjects Wagner and Braeuner taught, they were evaluated based on a school-wide composite score, combined with their (excellent) scores from in-person teaching observations. This composite score dragged their individual evaluations from the highest possible score (as they had received in previous years) to middling scores. As a result, Wagner did not receive a performance bonus and Braeuner was ineligible for consideration for tenure. Moreover, each “suffered harm to her professional reputation, and experienced diminished morale and emotional distress.”Footnote 37

There is a deeper moral issue grounding the legal case. Wagner and Braeuner frame their case in terms of harms (losing a bonus, precluding tenure consideration, and so forth), but those harms matter only because they are wrongful. They are wrongful because TVAAS is an evaluation system that teachers could not reasonably endorse. Wagner and Braeuner’s scores did not reliably track their performances, nor did the scores reflect factors for which they were responsible, as the scores were based on performance in subjects Wagner and Braeuner did not teach. And the stakes in the case are fairly high (there were financial repercussions for Wagner and job security at stake for Braeuner). So, per our account, they were wronged.

There may also be a relative burden issue with TVAAS, though it is not discussed explicitly in the Wagner opinion. A 2015 study of TVAAS found that mathematics teachers across Tennessee were, overall, rated as more effective by TVAAS than their colleagues in English/language arts.Footnote 38 This finding is consistent with two hypotheses: that Tennessee’s math teachers are more effective than its English/language arts teachers, and that TVAAS is systematically biased in favor of math teachers.Footnote 39 If the latter hypothesis is true – and we suspect that it is – then there are teachers, specifically Tennessee’s English/language arts teachers, who have an additional complaint against TVAAS: It imposes a higher relative burden on them because it arbitrarily returns lower scores for non-math teachers. They could not reasonably endorse this arrangement.

3.4.2 Houston Fed of Teachers v. Houston Ind Sch Dist

The Houston Schools case is superficially similar to Wagner in that it involves the use of a proprietary VAM (EVAAS) to evaluate teachers. The school system used EVAAS scores as the sole basis for “exiting” teachers.Footnote 40 The primary concern for our purposes is that Houston Schools did not have a mechanism for catching basic coding and clerical errors. They refused to correct errors on the grounds that doing so would require them to rerun their analysis of the entire school district. That, in turn, would have two consequences. First, it would be costly; second, it would “change all other teachers’ reports.”Footnote 41

The moral foundations of the teachers’ complaints should by now be clear. The stakes here – i.e., losing one’s job and having one’s professional image tarnished – are high. EVAAS is unreliable, having what the court called a “house-of-cards fragility.”Footnote 42 And that unreliability is due to factors for which teachers are not responsible, “ranging from data-entry mistakes to glitches in the code itself.”Footnote 43 Hence, teachers could not reasonably endorse being evaluated under such a system.

We can add a complaint about relative burden, at least for some teachers. EVAAS, like IMPACT, gives lower scores to teachers working in poorer schools. To see this, consider an analysis of EVAAS’s use in Ohio (in 2011–12) conducted by The Plain Dealer and State Impact Ohio, which found the following:

  • Value-added scores were 2½ times higher on average for districts where the median family income is above $35,000 than for districts with income below that amount.

  • For low-poverty school districts, two-thirds had positive value-added scores – scores indicating students made more than a year’s worth of progress.

  • For high-poverty school districts, two-thirds had negative value-added scores – scores indicating that students made less than a year’s progress.

  • Almost 40 percent of low-poverty schools scored “Above” the state’s value-added target, compared with 20 percent of high-poverty schools.

  • At the same time, 25 percent of high-poverty schools scored “Below” state value-added targets while low-poverty schools were half as likely to score “Below.”Footnote 44

In virtue of these findings, it is plausible that – in addition to complaints grounded in reliability, responsibility, and stakes – teachers in low-income schools have a complaint grounded in the uneven distribution of burdens.

And, hence, use of EVAAS is not something teachers subject to it could reasonably endorse.

3.4.3 Wisconsin v. Loomis

Our framework for understanding algorithmic systems and autonomy applies equally well to risk assessment tools like COMPAS.

To begin, COMPAS is moderately reliable. Researchers associated with Northpointe assessed COMPAS as being accurate in about 68 percent of cases.Footnote 45 More important is that COMPAS incorporates numerous factors for which defendants are not responsible. Recall that among the data points that COMPAS takes into account in generating risk scores are prior arrests, residential stability, employment status, community ties, substance abuse, criminal associates, history of violence, problems in job or educational settings, and age at first arrest.Footnote 46 Regardless of how reliably COMPAS’s big and little bars reflect reoffense risk, defendants are not responsible for some of the factors that affect those bars. So, while Loomis did commit the underlying conduct and was convicted of prior crimes, COMPAS incorporates factors for which defendants are not responsible.Footnote 47 For example, the questionnaire asks about the age at which one’s parents separated (if they did); whether one was raised by biological, adoptive, or foster parents; whether a parent or sibling was ever arrested, jailed, or imprisoned; whether a parent or parent-figure ever had a drug or alcohol problem; and whether one’s neighborhood friends or family have been crime victims.Footnote 48 Moreover, even if some factors (e.g., residential stability, employment status, and community ties) are things over which individuals can exercise a degree of control, they are not matters for which individuals should be held responsible in determining sentences for offenses.

Further, the use of COMPAS in Loomis is high stakes. Incarceration is the harshest form of punishment that the state of Wisconsin can impose. This is made vivid by comparing the use of COMPAS in Loomis with its specified purposes. COMPAS is built to be applied to decisions about the type of institution in which an offender will serve a sentence (e.g., lower or higher security), the degree of supervision (e.g., from probation officers or social workers), and what systems and resources are appropriate (e.g., drug and alcohol treatment, housing, and so forth). Indeed, Northpointe warns against using COMPAS for sentencing, and Loomis’s presentence investigation report specifically stated the COMPAS report should be used “to identify offenders who could benefit from interventions and to target risk factors that should be addressed during supervision.”Footnote 49 When the system is used for its intended purposes – identifying ways to mitigate risk of reoffense of persons under state supervision – the stakes are much lower.Footnote 50 Hence, it is more plausible that someone subject to its use could reasonably endorse it in those cases.

One of Loomis’s complaints about COMPAS is that it took his gender into account. The court found that COMPAS’s use of gender was not discriminatory because it served the purpose of promoting accuracy. So Loomis’s claim that he shouldered a higher relative burden under this system was undercut, and the court was – in our opinion – correct in its response to his claim. Because men do commit certain crimes more often than women, removing gender as a factor could result in the systematic overestimation of women’s risk scores.Footnote 51 This does not, however, mean that COMPAS has no issues with respect to the question of relative burden.

To introduce the relative burden issue, let’s turn to a related controversy surrounding COMPAS. In May 2016, ProPublica reported that COMPAS was biased against Black defendants.Footnote 52 Specifically, ProPublica found that COMPAS misidentified Black defendants as high risk twice as often as it did White defendants. Northpointe, the company that developed COMPAS, released a technical report that was critical of ProPublica’s reporting.Footnote 53 They claimed that despite its misidentifying Black defendants as high risk at a higher rate, COMPAS was unbiased. This is because the defendants within risk categories reoffended at the same rates, regardless of whether they are Black or White.

The back and forth between Northpointe and ProPublica is at the center of a dispute over how to measure fairness in algorithmic systems. Northpointe’s standard of fairness is known as “calibration,” which requires that outcomes (in this case reoffense) are probabilistically independent of protected attributes (especially race and ethnicity), given one’s risk score (in this case high risk of reoffense and low risk of reoffense).Footnote 54 In this context, calibration requires that knowing a defendant’s risk score and race should provide the same amount of information with respect to their chances of reoffending as just knowing their score. ProPublica’s standard of fairness, on the other hand, is “classification parity,” which requires that classification error is equal across groups, defined by protected attributes.Footnote 55 As the dispute between ProPublica and Northpointe shows, you cannot always satisfy both standards of fairness.

This might be counterintuitive. ProPublica’s conclusions about COMPAS are a result of the fact that Black defendants and White defendants are arrested and rearrested at different rates, and hence Black and White defendants are counted as “re-offending” at different rates. To better understand how COMPAS can satisfy calibration but violate classification parity, it will be helpful to consider a version of COMPAS with simplified numbers, which we will call “SIMPLE COMPAS.” Note that we choose the numbers here because they loosely approximate ProPublica’s analysis of COMPAS, in which larger numbers of Black defendants were counted as reoffending.

SIMPLE COMPAS. SIMPLE COMPAS sorts defendants into two risk groups: high and low. Within each group, defendants reoffend at the same rates, regardless of race. In the low-risk group, defendants reoffend about 20 percent of the time. In the high-risk group, about 80 percent. The high-risk group is overwhelmingly (but not entirely) Black, and White defendants are predominantly (but not entirely) low risk. Its results are summarized by the following bar chart (Figure 3.1).Footnote 56

Figure 3.1 SIMPLE COMPAS

Now consider three questions about SIMPLE COMPAS.

First, if you randomly select a defendant, not knowing whether they are from the high- or low-risk group, would learning their race warrant suspicion that their chance of reoffending is higher (or lower) than others from the risk group they are in? No. As we have stipulated, defendants within a risk group reoffend at similar rates regardless of race, and this is reflected in the bar chart, where the proportion of reoffenders to non-reoffenders in “Low – Black” is the same as “Low – White”: one in five. Similarly, the proportion of reoffenders to non-reoffenders in “High – Black” is the same as “High – White”: four in five. In virtue of this, SIMPLE COMPAS satisfies the calibration standard of fairness.

Second, if you randomly select a defendant, not knowing whether they are from the high- or low-risk group, would learning their race warrant increasing or decreasing your confidence that they are in the high-risk group? Yes. We have stipulated (tracking the analysis of COMPAS from ProPublica) that the high-risk group is predominantly (80 percent) Black and that most White defendants are in the low-risk group. If the randomly selected defendant is Black, you should increase your confidence that they are from the high-risk group, from about 55 percent (since five out of nine total defendants – 250/450 – are high risk) to about 66 percent (since six out of nine Black defendants – 200/300 – are high risk). If, on the other hand, the randomly selected defendant is White, you should increase your confidence that they are from the low-risk group, from about 44 percent (since four out of nine defendants – 200/450 – are low risk) to about 66 percent (since six out of nine White defendants – 100/150 – are low risk).

Third, if you randomly select a defendant, not knowing whether they are from the high- or low-risk group, should learning their race affect your confidence that they are non-reoffending high-risk (i.e., misidentified as high-risk)? Yes. To see this, just look back to the previous question. If you learn the defendant is Black, you should increase your confidence that they are from the high-risk group. In virtue of this, your confidence that they are in the non-reoffending high-risk group should increase too, from about 11 percent (since one out of nine defendants – 50/450 – are non-reoffending high risk) to about 13 percent (since two out of fifteen Black defendants – 40/300 – are non-reoffending high risk). This may not seem like much, unless we appreciate that learning that a defendant is White should drive your confidence that they are non-reoffending high risk down to about 6.6 percent (since one out of fifteen White defendants – 10/150 – are non-reoffending high risk). This means that, in SIMPLE COMPAS, Black defendants are twice as likely as White defendants to be misidentified as high-risk, which violates classification parity. And this is what ProPublica found: COMPAS misidentifies Black defendants as high risk about twice as often as it does White defendants.Footnote 57 As this should make clear, violating classification parity is one way that an algorithmic system can impose an undue relative burden.
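These three answers can be checked with a short script. In the sketch below, the counts are our reconstruction from the fractions quoted in this section (450 hypothetical defendants: 300 Black and 150 White; 250 high risk and 200 low risk); they are illustrative numbers, not empirical COMPAS data:

```python
# SIMPLE COMPAS counts reconstructed from the fractions in the text
# (hypothetical numbers, not empirical COMPAS data):
# (race, risk group) -> (number of defendants, number who reoffend)
counts = {
    ("Black", "high"): (200, 160),  # 80 percent of high-risk defendants reoffend
    ("White", "high"): (50, 40),
    ("Black", "low"): (100, 20),    # 20 percent of low-risk defendants reoffend
    ("White", "low"): (100, 20),
}

# Calibration holds: within each risk group, reoffense rates are
# identical across races.
for group in ("high", "low"):
    black_rate = counts[("Black", group)][1] / counts[("Black", group)][0]
    white_rate = counts[("White", group)][1] / counts[("White", group)][0]
    assert black_rate == white_rate

# Classification parity fails: the share of each race's defendants who
# do not reoffend but are tagged high risk differs.
def misidentified_share(race):
    high_n, high_reoffend = counts[(race, "high")]
    total = counts[(race, "high")][0] + counts[(race, "low")][0]
    return (high_n - high_reoffend) / total

print(round(misidentified_share("Black"), 3))  # 0.133 (40/300)
print(round(misidentified_share("White"), 3))  # 0.067 (10/150)
```

Within each risk group the reoffense rate is identical across races, so SIMPLE COMPAS is calibrated; yet about 13.3 percent of Black defendants versus about 6.7 percent of White defendants are misidentified as high risk – the twofold gap in relative burden noted above.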

Analyzing SIMPLE COMPAS further shows why this is a matter of relative burden and why this sort of issue is distinct from issues of reliability, responsibility, and stakes. Suppose that in the world of SIMPLE COMPAS, Black and White citizens use illicit drugs at similar rates. However, Black citizens are disproportionately charged with drug crimes because they are more likely to get stopped and searched. Because of this correlation, SIMPLE COMPAS (which does not directly take race into account) is more likely to identify Black defendants as high risk.

If we assume further that the justice system that SIMPLE COMPAS is embedded in has sensible penalties,Footnote 58 then the shortcomings of SIMPLE COMPAS elude the categories of responsibility, reliability, and stakes. SIMPLE COMPAS does not hold defendants responsible for factors not under their control: The vast majority of those tagged as high risk are in fact likely to reoffend, and they are being held responsible for breaking laws that they in fact broke. Similarly, we can’t complain that SIMPLE COMPAS is unreliable; it is well calibrated and predicts future arrests very reliably. Finally, we cannot complain from the perspective of stakes, because – as we have stipulated – the justice system SIMPLE COMPAS is embedded in has sensible penalties. Yet something is wrong with the use of SIMPLE COMPAS, and the problem has to do with relative burden.

To see this, compare SIMPLE COMPAS with EVEN COMPAS.

EVEN COMPAS. EVEN COMPAS sorts defendants into two risk groups: high and low. Within each group, defendants reoffend at the same rates, regardless of race. In the low-risk group, defendants reoffend about 20 percent of the time. In the high-risk group, about 80 percent. The high-risk group and low-risk group are each 50 percent White and 50 percent Black. The sizes of the low- and high-risk groups are such that EVEN COMPAS misclassifies all defendants at the same (low) rate that SIMPLE COMPAS misclassifies Black defendants.

Suppose that EVEN COMPAS, like SIMPLE COMPAS, is embedded in a system that has sensible penalties. The only difference is that the EVEN COMPAS police force does not discriminate, and so EVEN COMPAS does not learn to identify Black defendants as high risk more often than White defendants.

Let us now make two observations. First, note that EVEN COMPAS might not be problematic. That is, given the details of the case, it does not seem like there are obvious objections to its use that we can make: It is an accurate and equitable device. It does misclassify some non-reoffending defendants as high risk, but unless we abandon pretrial risk assessment altogether or achieve clairvoyance, this is unavoidable.

Second, note that SIMPLE COMPAS is intuitively problematic even though – from defendants’ points of view – EVEN COMPAS treats them worse on the whole (i.e., compared to SIMPLE COMPAS, EVEN COMPAS is worse for White defendants and it is not better for Black defendants). What could explain SIMPLE COMPAS being problematic while EVEN COMPAS is not? It cannot be an issue of reliability, responsibility, or stakes. Rather, it is the fact that SIMPLE COMPAS imposes its burdens unevenly: It is systematically worse for Black defendants. Hence, relative burden is a factor in whether people could (or could not) reasonably endorse a system that is distinct from reliability, responsibility, and stakes.
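The contrast can be put in the same numeric terms. The sketch below again uses our hypothetical reconstruction, with EVEN COMPAS’s uniform misclassification rate stipulated to equal SIMPLE COMPAS’s rate for Black defendants; in absolute terms no group does better under EVEN COMPAS, yet only SIMPLE COMPAS distributes the burden unevenly:

```python
# Share of each group misidentified as high risk under each system
# (hypothetical rates from our reconstruction, not empirical data):
simple_compas = {"Black": 40 / 300, "White": 10 / 150}
even_compas = {"Black": 40 / 300, "White": 40 / 300}  # uniform, by stipulation

# In absolute terms, no group fares better under EVEN COMPAS:
assert even_compas["Black"] == simple_compas["Black"]
assert even_compas["White"] > simple_compas["White"]

# Yet only SIMPLE COMPAS imposes its burden unevenly across groups:
def burden_ratio(system):
    return system["Black"] / system["White"]

print(round(burden_ratio(simple_compas), 1))  # 2.0
print(round(burden_ratio(even_compas), 1))    # 1.0
```

The complaint against SIMPLE COMPAS, then, cannot be about the absolute level of misclassification; it is about the ratio.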

Let’s return to Loomis. COMPAS – like the fictional SIMPLE COMPAS – does have an issue with high relative burden. The burden does not happen to be one that negatively affects Loomis. He does not have an individual complaint that he has endured a burden that is relatively higher than that of other people subject to it. However, it does not follow that COMPAS is a system that any person subject to it can reasonably endorse. Rather, because it imposes a greater relative burden on Black defendants than on White defendants, it is one that at least some defendants cannot reasonably endorse.

3.5 Why Not Fairness?

None of the top-line criticisms of algorithmic systems that we offer in this book is that such systems are unfair or biased. This might be surprising, considering the gigantic and expanding literature on algorithmic fairness and bias. It is certainly true that decision systems are in many cases biased and (hence) unfair, and it is also true that unfairness is an extremely important issue in the justifiability of those systems. There are several, related reasons we do not primarily lean on fairness. First is that whether something is fair is often best understood as the conclusion of an argument. As we note in Section 3.3, whether people can reasonably endorse a system to which they are subject can be understood as a criterion for whether that system is fair. Likewise, the component parts of our reasonable endorsement argument can be understood as questions of fairness. So a relative burden that is arbitrary, compound, or otherwise unjustifiable is one way in which a system can be unfair.

The second reason that we don’t lead with fairness is that there is an important ambiguity in conceptions of fairness. The issue is that fairness is a concept that can in some senses be formalized, but in other senses serves as an umbrella concept for lots of more specific moral values. To see why this is important, and why we think it is fruitful not to deploy fairness as a marquee concern, we need to distinguish two broad conceptions of fairness. First is,

Formal fairness: the equal and impartial application of rules.Footnote 59

Formal fairness contrasts with

Substantive fairness: the satisfaction of a certain subset of applicable moral reasons (such as desert, agreements, needs, and side-constraints).Footnote 60

This distinction is straightforward. Any system of rules can be applied equally in a rote or mechanical way and, thus, can arrive at outcomes that are in some sense “fair” so long as those rules are applied without deviation. The rule that deli customers must collect a ticket upon entry and will be served in the order of the ticket numbers is a rule that can be applied in a formally fair way. However, if it is easy to steal tickets, if some tickets are not sequentially numbered, or if some people are unable to stand in line, the rule will be substantively unfair because it fails to satisfy important, applicable moral reasons that the rule does not cover. One might add all kinds of more complicated rules (people may move to the front if they need to, there will be no secondary market in low numbered tickets, people may only buy a defined, reasonable amount, etc.). Each of those additional rules may be applied in a formally fair way. Nonetheless, we cannot ensure substantive fairness by ensuring formal fairness, regardless of how exacting, equal, and impartial an application of rules is. That is because substantive fairness itself just is a conclusion about (a) which moral reasons are applicable and (b) whether those reasons have been proportionally satisfied. And (a) and (b) cannot be answered by an appeal to formal fairness without assuming the answer to the question at hand, viz., what moral reasons ought to apply.

Moreover, it might be logically impossible to simultaneously maximize different facets of substantive fairness. In other words, even if we could agree on a set of substantive moral criteria that are relevant in determining whether a given algorithmic system is fair, it may not be possible to take those substantive criteria and render a formally fair application of rules that satisfies those criteria. To understand why, consider recent work by Sam Corbett-Davies and Sharad Goel.Footnote 61

In the literature on fairness in machine learning, there are three predominant conceptions of fairness. Corbett-Davies and Goel argue that each one is inadequate and that it is impossible to satisfy them all at once. The first is “anti-classification,” according to which algorithms do not take into consideration protected characteristics (race, gender, or close approximations). They argue that anti-classification is an inadequate principle of fairness, on the grounds that it can harm people. For example, because women are much less likely than men to commit violent crimes, “gender-neutral risk scores can systematically overestimate a woman’s recidivism risk, and can in turn encourage unnecessarily harsh judicial decisions.”Footnote 62 Notice that their argument implicitly incorporates a substantive moral theory about punishment, namely that justification for punishment for violent crimes depends on the likelihood of the criminal committing future offenses. Hence, the question of whether “anti-classification” is the right measure of fairness requires addressing a further question of substantive fairness.

The second main conception of fairness in machine learning Corbett-Davies and Goel describe is “classification parity,” which requires that predictive performance of an algorithm “be equal across groups defined by … protected attributes.”Footnote 63 We explained earlier how ProPublica’s examination of COMPAS showed that it violates classification parity. The problem is that when distributions of risk actually vary across groups, achieving classification parity will “often require implicitly or explicitly misclassifying low-risk members of one group as high-risk, and high-risk members of another as low risk, potentially harming members of all groups in the process.”Footnote 64

The third conception, calibration, requires that results are “independent of protected attributes after controlling for estimated risk.” One problem with calibration is the reverse of classification parity. Where there are different underlying rates across groups, calibration will conflict with classification parity (as we discussed earlier). In addition, Corbett-Davies and Goel argue that coarse measures that are well calibrated can be used in discriminatory ways (e.g., by using neighborhood as a proxy for credit-worthiness without taking into account income and credit history).Footnote 65

The upshot of the Corbett-Davies and Goel paper is that the results of using each of the formal definitions of fairness are in some way harmful, discriminatory, or otherwise unjustifiable. But that is simply another way of saying that there are some relevant, applicable moral claims that would not be proportionally addressed in each. In other words, there may be substantive reasons that measures of formal fairness are good. However, because substantive fairness is multifaceted, no single measure of formal fairness can capture it.

Another line of literature attends to the fact that there are different conceptions of substantive fairness in the philosophical literature, each of which has different implications for uses of algorithmic systems.Footnote 66 The fact that there are different conceptions of fairness and that those different conceptions prescribe different uses and constraints for algorithmic systems is largely a function of the scope of substantive fairness. That is, substantive fairness is capacious, and different conceptions of fairness, discrimination, egalitarianism, and the like are component parts of it.

Finally, a conclusion that a system is fair will often be tenuous. That is because a system that renders outcomes that are formally and substantively fair in one context may become substantively unfair when the context changes.Footnote 67 Consider our discussion of COMPAS. We can imagine a risk assessment tool that is strictly used for interventions meant to prevent violence and reoffense with, for example, drug and alcohol treatment, housing, job training, and so forth. Such a system could (let’s suppose) be formally fair and proportionally address relevant moral reasons. However, if that same system is deployed in a punitive way, then a different relevant moral reason is applicable, namely that it is substantively unfair to punish people based on facts for which they are not blameworthy. Addictions, housing insecurity, and unemployment are not conditions for which people are blameworthy. Hence, the new application of the risk algorithm would be substantively unfair, even if the original application is not.

To sum up, substantive fairness is broad and includes a range of relevant moral reasons. The purpose of this project is to examine one important component of relevant moral reasons. Thus, we don’t begin with fairness.

3.6 Conclusion

Our task in this chapter has been to link our conception of autonomy and its value to moral principles that can serve as a framework for when using algorithmic systems is justifiable. We did so by arguing for the Reasonable Endorsement Test, according to which an action is morally permissible only if it would be allowed by principles that each person subject to it could reasonably endorse. In the context of algorithmic systems, that principle is that subjecting a person to an algorithmic decision system is morally permissible only if it would be allowed by principles that everyone could reasonably endorse. From there, we offered several factors for when algorithmic systems are such that people subject to them can reasonably endorse them. Specifically, reasonable endorsement is a function of whether systems are reliable, whether they turn on factors for which subjects are responsible, the stakes involved, and whether they impose unjustified relative burdens on persons.

Notice, though, that these criteria are merely necessary conditions for permissibility based on respect for persons. They are not sufficient. For example, use of an algorithmic system may meet these criteria and yet fail to be justifiable for other reasons. Indeed, one common criticism of algorithmic systems is that they are inscrutable (either because the technology is complex or because access is protected by intellectual property laws). We consider that in Chapter 4.

4 What We Informationally Owe Each Other

In Chapter 2, we articulated our conception of autonomy. We argued for a lightweight, ecumenical approach that encompasses both psychological and personal autonomy. In Chapter 3, we drew on this account to set out conditions that are crucial in determining whether algorithmic decision systems respect persons’ autonomy. Specifically, we argued that algorithmic decision systems are justifiable to the extent that people subject to them can reasonably endorse them. Whether people can reasonably endorse those systems turns on conditions of reliability, responsibility, stakes, and relative burden.

Notice, though, that the conditions set out in Chapter 3 are primarily about how those systems threaten persons’ material conditions, such as whether teachers are fired based on evaluation systems and whether defendants are subject to more stringent conditions based on risk assessment systems. But people are not just passive subjects of algorithmic systems – or at least they ought not to be – and whether use of a system is justifiable overall turns on more than the material consequences of its use.

In this chapter we argue that there is a distinct informational component to respecting autonomy. Specifically, we owe people certain kinds of information and informational control. To get a basic sense of why, consider our understanding of autonomy from Chapter 2, which has two broad facets. Psychological autonomy includes conditions of competence (including epistemic competence) and authenticity. Personal autonomy includes procedural and substantive independence, which at root demands space and support for a person to think, plan, and operate. Further, as we explained in Chapter 2, whether agents are personally autonomous turns on the extent to which they are capable of incorporating their values into important facets of their lives. Respecting an agent’s autonomy requires not denying them what they need to incorporate their values into important facets of their lives. It is a failure of respect to prevent agents from exercising their autonomy, and it is wrongful to do so without sufficiently good reason. Incorporating one’s values into important facets of one’s life requires that one have access to relevant information. That is, autonomy requires having information important to one’s life, and respecting autonomy requires not denying agents that information (and at times making it available). Algorithmic decision systems are often built in a way that prevents people from understanding their operations.Footnote 1 This may, at least under certain circumstances, preclude persons’ access to information to which they have a right.Footnote 2

That is the broad contour of our argument. Our task in the rest of the chapter is to fill that argument in. We begin by describing two new cases, each involving background checks, and we analyze those cases using the Reasonable Endorsement Test we developed in Chapter 3. We then explain important facets of autonomy that are missing from the analysis. To address that gap, we distinguish several different modes of agency, including practical and cognitive agency. We argue that individuals have rights to information about algorithmic systems in virtue of their practical and cognitive agency. Next, we draw on scholarship surrounding the so-called right to explanation in the European Union’s General Data Protection Regulation and explain how those debates relate to our understanding of cognitive and practical agency. Finally, we apply our criteria to our polestar cases.

To be clear, we are not arguing that individuals have a right to all information that is important in understanding their lives, incorporating their values into important decisions, and exercising agency. Rather, we argue that they have a defeasible claim to such information. Our task here is to explain the basis for that claim, the conditions under which it creates obligations on others to respect it, and the types of information the moral claims underwrite. A recent report on ethics in AI systems states, “Emphasis on algorithmic transparency assumes that some kind of ‘explainability’ is important to all kinds of people, but there has been very little attempt to build up evidence on which kinds of explanations are desirable to which people in which contexts.”Footnote 3 We hope to contribute to this issue with an argument about what information is warranted.

4.1 The Misfortunes of Catherine Taylor and Carmen Arroyo

Let’s begin by considering two new cases.

Arkansas resident Catherine Taylor was denied a job at the Red Cross. Her rejection letter came with a nasty surprise. Her criminal background report included a criminal charge for intent to manufacture and sell methamphetamines.Footnote 4 But Taylor had no criminal history. The system had confused her with Illinois resident Catherine Taylor, who had been charged with intent to manufacture and sell methamphetamines.Footnote 5

Arkansas Catherine Taylor wound up with a false criminal charge on her report because ChoicePoint (now a part of LexisNexis), the company providing the report, relied on bulk data to produce an “instant” result when checking her background.Footnote 6 This is a common practice. Background screening companies such as ChoicePoint generate reports through automated processes that run searches through large databases of aggregated data, with minimal (if any) manual oversight or quality control. ChoicePoint actually had enough accurate information – such as Taylor’s address, Social Security number, and credit report – to avoid tarnishing her reputation with mistakes.Footnote 7 Unfortunately for Taylor, the product ChoicePoint used in her case simply was not designed to access that information.Footnote 8

ChoicePoint compounded the failure by refusing to rectify its mistake. The company said it could not alter the sources from which it draws data. So if another business requested an “instant” report on Arkansas Catherine Taylor, the report would include information on Illinois Catherine Taylor.Footnote 9

This is not the only occasion on which Catherine Taylor (of Arkansas) would suffer this kind of error. Soon after learning about the ChoicePoint mix-up, she found at least ten other companies that were providing inaccurate reports about her. One of those companies, Tenant Tracker, conducted a criminal background check for Taylor’s application for federal housing assistance that was even worse than ChoicePoint’s. Tenant Tracker included the charges against Illinois Catherine Taylor and also included a separate set of charges against a person with a different name, Chantel Taylor (of Florida).Footnote 10

Taylor’s case is not special. Another background screening case involving a slightly different technology shows similar problems. It is common for background screeners to offer products that go beyond providing raw information on a subject and produce an algorithmically generated judgment in the form of a score or some other kind of recommendation. “CrimSAFE,” which was developed by CoreLogic Rental Property Solutions, LLC (CoreLogic), is one such product.Footnote 11 CrimSAFE is used to screen tenants. CoreLogic markets it as an “automated tool” that “processes and interprets criminal records and notifies leasing staff when criminal records are found that do not meet the criteria you establish for your community.”Footnote 12

When a landlord or property manager uses CrimSAFE to screen a tenant, CoreLogic delivers a report that indicates whether CrimSAFE has turned up any disqualifying records.Footnote 13 But the report does not indicate what those allegedly disqualifying records are or any information about them (such as their dates, natures, or outcomes). To reiterate, the report only states whether disqualifying records have been found, not what they are. CoreLogic provides neither the purchaser nor the subject of the report any of the underlying details.Footnote 14

Let us now look at a particular case involving CrimSAFE. In July 2015, Carmen Arroyo’s son Mikhail suffered an accident that left him unable to speak, walk, or care for himself.Footnote 15 Carmen was Mikhail’s primary caregiver, and she wanted to have Mikhail move in with her when he was discharged from treatment. For Mikhail to move into his mother’s apartment, he had to be screened by her complex, and so the complex manager had CoreLogic screen Mikhail using CrimSAFE.Footnote 16

CoreLogic returned a report to the apartment complex manager indicating that Mikhail was not fit for tenancy, based on his criminal record.Footnote 17 The report did not specify the date, nature, or outcome of any criminal charges on Mikhail’s record. Further, Mikhail had never been convicted of a crime. Despite being unaware of the date, nature, or outcome of the alleged criminal conduct – and without taking into consideration the question of whether Mikhail was at that point even capable of committing the crimes he had been accused of – the manager adopted CoreLogic’s conclusion and denied Mikhail tenancy.Footnote 18 Hence, Carmen Arroyo was unable to move her severely injured son into her apartment where she could provide the care he needed.

Taylor and the Arroyos have suffered serious harms. And knowing the causes of their misfortunes is of little help in reversing those misfortunes. Decisions based on faulty criminal background reports are rarely overturned after those reports are identified as faulty.Footnote 19 As the National Consumer Law Center puts it, “[Y]ou can’t unring the bell.”Footnote 20

Taylor learned of the problems with her background as her tribulations unfolded. Carmen Arroyo learned of the problem only after being denied the key thing she needed to support her son, though she did eventually learn the reasons for Mikhail being denied tenancy. Many who are denied housing or employment through automated screening do not ever learn why.Footnote 21

One reason people do not find out is that under US law, consumer reporting agencies (companies that provide reports on consumers, such as background checks) do not have to tell the subjects of background checks that they are being screened. The relevant statute in this context is the Fair Credit Reporting Act (FCRA), which requires either notification or the maintenance of strict procedures to ensure that the information is complete and up to date.Footnote 22 This leaves reporting agencies the legal option of leaving the subjects of background searches out of the loop.

Further, many companies that provide background checks maintain that they are not consumer reporting agencies at all, and so that the FCRA does not apply to them. As a result, they neither notify subjects of background checks nor maintain the strict procedures necessary to ensure the information in their systems is complete and up to date. One of the companies responsible for disseminating false information about Catherine Taylor, PublicData.com, simply denies that it is a consumer reporting agency.Footnote 23 When Taylor notified PublicData.com of the errors it had made about her, the company was unwilling to do anything to correct them.Footnote 24 This was a matter of company policy, which is explicit that it “will NOT modify records in any database upon notification of inaccuracies.”Footnote 25

FCRA also requires employers using background checks to disclose that they will be doing background checks and to notify a candidate if adverse action may be taken in response to a background check.Footnote 26 However, employers often do not comply with notice requirements.Footnote 27

4.1.1 Taylor, Arroyo, and the Reasonable Endorsement Test

One way to understand Taylor’s and the Arroyos’ situations is in the terms we spelled out in Chapter 3, namely whether the background reporting systems are ones that people subject to them can reasonably endorse. Both Taylor and Arroyo have experienced considerable material burdens based on algorithmically aided decision systems. Both were held to account by systems that are based on factors for which Taylor and Arroyo are not responsible, and the stakes in each case are high. Hence, one could make the case that the reporting systems are ones that individuals subject to them cannot reasonably endorse as comporting with their material interests. Such an analysis, while compelling, would not be complete.

Something has gone wrong in the Taylor and Arroyo cases beyond the fact that they were materially harmed. This separate consideration is an informational wrong. Taylor and Arroyo did not know (at least initially) what information in their files led to their background check results. Carmen Arroyo did not discover the basis for Mikhail’s check until it was too late to do anything meaningful about it. Taylor lost opportunities before she discovered the reason. Further, in Taylor’s case, several companies providing the misinformation would not fix their files upon learning that they had made a mistake. Finally, both Taylor and Arroyo were left in the dark as to how exactly the results came out the way they did; they were not afforded an understanding of the systems that cost them the opportunities they had sought.

Arroyo has an additional, distinctive complaint. When her son’s application was rejected, the apartment complex did not know the details of the disqualifying conduct because CoreLogic did not supply them. This means that Arroyo was not given enough information about Mikhail’s rejection to even contest the claim. Compare Arroyo’s case with Taylor’s. Taylor at least knew that her file had contained a false drug charge. Knowing what she had been accused of informed her that she had to prove what she had not done. Arroyo lacked even that.

We have mentioned that there is at least some regulation that attempts to address these sorts of issues and that there is a genuine question whether CoreLogic complied with its legal obligations under FCRA (as stated earlier, companies do not always follow the notification requirement). Could full compliance with FCRA bring about practices that Taylor and Arroyo could reasonably endorse? Again, we think not. For one, FCRA does not specify when subjects are owed notification.Footnote 28 So the notification requirement can be met without actually affording data subjects the underlying thing that really matters: time to effectively respond to any false or misleading information in their files and an understanding of where they stand with respect to decisions made about them. These are the claims we address in the following section.

4.2 Two Arguments for Informational Rights

Surely the Taylor and Arroyo cases grate on our intuitions, both because of the harms resulting from their background checks and because of the fact that each was in the dark about those checks. Such intuitions, however, can only take us so far. We need an argument to explain the wrongs adequately. Our argument is that persons’ autonomy interests have a substantial informational component that is distinct from the material components we argued for in Chapter 3. Specifically, respecting the autonomy of persons subject to algorithmic decision systems requires ensuring that they have a degree of cognitive access to information about those systems.

Agency refers to action and the relationship between a person (or other entity) and actions that are in some sense attributable to that person. That relationship may be merely causal (as when a person hands over their wallet at gunpoint), it may be freely willed, it may be deliberately planned, or it may be something else. Hence, agency is broader than autonomy, for a person may be an agent but neither psychologically nor personally autonomous. However, agency is morally important in that persons have claims to exercise agency (and to have room to exercise agency) in light of their (capacity) autonomy. On the relationship between autonomy and agency, Oshana writes: “An autonomous person is an agent – one who directs or determines the course of her own life and who is positioned to assume the costs and the benefits of her choices.”Footnote 29 We return to the relationship between agency and autonomy, and the relation of both to conceptions of freedom, in Chapters 5 and 6.

To make our case, we first need to distinguish two aspects of agency. At base, agency is the capacity (or effective exercise of the capacity) to act. And agents are beings with such capacity.Footnote 30 There is substantial philosophical controversy surrounding conceptions and metaphysics of agency (e.g., whether it is simply a causal relation between an actor and event, whether agency requires intentionality, and the degree to which nonhumans may be agents). We can leave many of those to the side so that we can focus on agency with respect to action and mental states.

The most familiar facet of agency is the ability to act physically in a relatively straightforward way, for example, taking a walk, preparing a meal, or writing an email. A more complex exercise of agency involves taking actions that institute a plan or that realize one’s values (which is to say, exercise agency in such a way that doing so successfully instantiates one’s psychological autonomy). Call this “practical agency.” Exercising practical agency so that it is consistent with one’s preferences and values requires a great deal of information and understanding. So, for example, if it is important to a person to build a successful career, then it is important for them to understand how their profession and organization function, how to get to work, how to actually perform tasks assigned, and so forth. And if that person’s supervisor fails to make available information that is relevant to their job performance, the supervisor fails to respect the person’s practical agency because doing so creates a barrier to the employee incorporating their values into an important facet of their life. Notice that this understanding of practical agency shares similar foundations to the substantive independence requirement of personal autonomy outlined in Chapter 2. Being denied important information about the practicalities of planning and living one’s life undermines the degree to which one has substantive independence from others.

The importance of information to exercising agency does not solely depend on agents’ abilities to use information to guide actions. A second aspect of agency is the ability to understand important facets of one’s life. Call this “cognitive agency.” The distinction between practical agency and cognitive agency tracks Pamela Hieronymi’s view that ordinary intentional agency, in which we exercise control over actions – deciding to take a walk or deciding to prepare a meal – is distinct from “mental agency” (although we use “cognitive agency,” the notion is the same). Mental agency, Hieronymi explains, is the capacity to exercise evaluative control over our mental states (e.g., our attitudes, beliefs, desires, and reactive responses). The difference between ordinary intentional agency and mental agency is the difference between an actor deciding “whether to do” (i.e., whether to take some action in the world beyond oneself) and the actor deciding “whether to believe.” Hieronymi’s view is that agents indeed exercise control – to some degree and within important limits – over how they respond mentally to their circumstances. The scope of one’s evaluative control over one’s mental states and the extent to which one can exercise it effectively are less important to our project than recognizing the domain of cognitive agency.Footnote 31

Cognitive agency grounds moral claims in much the same way as practical agency. Respecting persons as autonomous requires that they be able to incorporate their sense of value into decisions about conducting their lives as a matter of practical agency. Similarly, respecting persons as autonomous requires that they be able to incorporate their sense of value into how they understand the world and their place in it. As Thomas Hill, Jr., has argued, deception is an affront to autonomy regardless of whether that deception changes how one acts because it prevents persons from properly interpreting the world; even a benevolent lie that spares another’s feelings can be an affront because it thwarts that person’s ability to understand their situation.Footnote 32 We can extend Hill’s argument beyond active deception. Denying agents information relevant to important facets of their lives can circumvent their ability to understand their situation just as much as deceit.Footnote 33 In other words, deceit circumvents persons’ epistemic competence and may render their desires and beliefs inauthentic.

One might question here whether practical and cognitive agency are distinctive issues for algorithmic systems. Strictly speaking, the answer is no, because – as we explained in Chapter 1 – many of the arguments we advance in this book are applicable to a wide range of social and technical systems. However, there are several reasons to think that practical and cognitive agency raise issues worth analyzing in the context of algorithmic systems. For one, humans are well adapted to understanding, regulating, and interacting with other humans and human systems, but the same is not true of artificial systems. Sven Nyholm has recently argued that a number of important moral issues arise in the context of human–robot interactions precisely because humans tend to attribute human-like features to robots, when in fact humans have a poor grasp of what robots are like.Footnote 34 The same can be said for algorithmic systems. Relatedly, the informational component of algorithmic systems may be more pronounced than it is for bureaucratic or other primarily human decisions. We may understand the limited, often arbitrary nature of human decisions. But the infirmities of algorithmic systems may be harder for us to reckon with, and we may lack the kinds of heuristics we employ to understand human decision-making.

The view so far is that information is important for practical and cognitive agency, and that claims to such information are grounded in autonomy. Surely, however, it isn’t the case that respecting autonomy requires providing any sort of information that happens to advance practical and cognitive agency. After all, some information may be difficult to provide, may be only modestly useful in fostering agency, or may undermine other kinds of interests. Moreover, some information may be important for exercising practical and cognitive agency even though no one has an obligation to provide it. If one wants to feel better by cooking healthier meals, information about ingredients, recipes, and techniques is important in exercising practical agency over one’s eating habits. However, it is not clear that anyone thwarts another person’s agency by failing to provide that information. What we need, then, is a set of criteria for determining if and when informational interests are substantial enough that persons have claims to that information on the grounds of practical or cognitive agency.

4.2.1 Argument 1: Practical Agency

The first set of criteria for determining whether persons have claims to information about automated decision systems echoes the criteria we advanced in Chapter 3. Specifically, whether an individual has a claim to information about some algorithmic decision system that affects their life will be a function of that system’s reliability, the degree to which it tracks actions for which they are responsible, and the stakes of the decision.

Assume for a moment that Taylor’s problems happen in the context of a reporting system that people cannot reasonably reject on grounds of reliability, responsibility, and stakes. Taylor nonetheless has a claim based on practical agency. To effectively cope with the loss of her opportunities for employment and credit, she needs to understand the source of her negative reports. To that extent, Taylor’s claims to information based on practical agency resemble those of anyone who is subject to credit reports and background checks. And, of course, Taylor did indeed have access to very general information about the nature of background checks and credit reporting. That might have been sufficient to understand that her background check was a factor in her lost opportunity.

We can capture this sense of Taylor’s claims with what we will call the Principle of Informed Practical Agency.

Principle of Informed Practical Agency (PIPA): One has a defeasible claim to information about decision systems affecting one’s life where (a) that information advances practical agency, (b) it advances practical agency because one’s practical agency has been restricted by the operations of that system, (c) the effects of the decision system bear heavily on significant facets of one’s life, and (d) information about the decision system allows one to correct or mitigate its effects.

Surely this principle holds, but it cannot capture the degree to which Taylor’s practical agency was thwarted by ChoicePoint and other reporting agencies. Rather, a key limitation on Taylor’s practical agency is the fact that the reporting agencies systemically included misinformation in her reports. In other words, Taylor’s claims to information are particularly weighty because the background checks purport to be grounded in information for which she is responsible (including criminal conduct) and yet were systemically wrong. Hence, to capture the strength of Taylor’s claims, we can add the following:

Strong Principle of Informed Practical Agency: A person’s claim to information based on the PIPA is stronger in case (e) the system purports to be based on factors for which a person is responsible and (f) the system has errors (even if not so frequent that they, on their own, make it unendorseable).

Knowing that the background checking system conflates the identities of people with similar names, knowing that her own record includes information pertaining to other people with criminal records, and knowing that the system relies on other background checking companies’ databases and thus repopulates her profile with mistaken information can provide Taylor with tools to address those mistakes. That is, she can better address the wrongs that have been visited upon her by having information about the system that makes those wrongs possible. To be clear, a greater flow of information to Taylor does not make the mistakes and harms to her any less wrongful. Even if it is true that a system is otherwise justifiable, respecting autonomy demands support for practical agency so that people may address the infirmities of that system.

What is key for understanding claims based on practical agency is the distinction we make in Chapter 2 between local autonomy (the ability to make decisions about relatively narrow facets of one’s life according to one’s values and preferences) and global autonomy (the ability to structure larger facets of one’s life according to one’s values and preferences). In many contexts, respect for autonomy is local. Informed consent to undergoing a medical procedure, participating as a subject in research, agreeing to licensing agreements, and the like has to do with whether a person can act in a narrow set of circumstances. Our principles of practical agency, in contrast, concern aspects of autonomy that are comparatively global. One rarely (if ever) provides meaningful consent to having one’s data collected, shared, and analyzed for the purposes of background checks and hence enjoys only a little local autonomy over that process.Footnote 35

Individuals have little (if any) power to avoid credit and background checks and hence do not have global autonomy with respect to how they are treated. However, understanding how their information is used, whether there is incorrect information incorporated into background checks, and how that incorrect information precludes them from opportunities may be important (as in Taylor’s case) in order to prevent lack of local autonomy from becoming relatively more global. That is, mitigating the effects of algorithmic systems may allow one to claw back a degree of global autonomy. And that ability to potentially exercise more global autonomy underwrites a moral claim to information.

The two principles of informed practical agency only tell us so much. They cannot, for example, tell us precisely what information one needs. In Taylor’s case, practical agency requires understanding something about how the algorithmic systems deployed by ChoicePoint actually function, who uses them for what purposes, and how they absorb information (including false information) from a range of sources over which they exercise no control and minimal (if any) oversight. But other decision systems and other circumstances might require different kinds of information. The principles also cannot tell us exactly who needs to be afforded information. While the claim to information in this case is Taylor’s, it may be that her advocate, representative, fiduciary, or someone else should be the one who actually receives or accesses the relevant information. Taylor, for instance, might have a claim that her employer learn about the infirmities in ChoicePoint’s and Tenant Tracker’s algorithmic systems. Nor can the principles tell us the conditions under which persons’ claims may be overridden.

The principles discussed so far only address the epistemic side of practical agency. But Taylor is owed more than just information. We can see this by considering one of the most deeply troubling facets of her case: the reluctance of the data controllers involved to fix her mistaken data. One effect of their reluctance is that it undercuts her ability to realize her values, something to which she has a legitimate claim. To capture this, we need – in addition to the principles of informed agency – a principle that lays bare agents’ claim to control.

Principle of Informational Control (PIC): One has a defeasible claim to make corrections to false information fed into decision systems affecting one’s life where (a) one’s practical agency has been restricted by the operations of that system, (b) the effects of the decision system bear heavily on significant facets of a person’s life, and (c) correcting information about the decision system allows one to correct or mitigate its effects.

As before, we need a second principle specifying certain cases where this claim is stronger.

Strong Principle of Informational Control: A person’s claim to correct information based on the PIC is stronger in case the system purports to be based on factors for which a person is responsible.

These principles demand of the systems used in Taylor’s case not only that she be able to learn what information a system is based on, but that she be able to contest that information when it is inaccurate. The claim she has in this case is (just like the principles of informed practical agency) grounded in her agency, that is, her claim to decide what is valuable for herself and to pursue those values so long as they are compatible with respect for the agency and autonomy of others.

Now, the principles of informed practical agency and informational control cannot tell us what a person’s informational claims are in cases where they are unable to exercise practical agency. We consider that next.

4.2.2 Argument 2: Cognitive Agency

Cognitive agency can also ground a claim to information. Consider a difference between the Taylor and Arroyo cases. Or, more specifically, a difference between Taylor’s case once she had experienced several iterations of problems with her background checks and Carmen Arroyo’s case after she had been denied housing with her son. Taylor at some point became aware of a system that treats her poorly and for which she bears no responsibility. Arroyo, in contrast, was precluded from moving her son into her apartment for reasons she was unable to ascertain, the basis for the decision was an error, and the result was odious. Denying tenancy to Arroyo’s son Mikhail is surely an injustice. But that wrong is compounded by its obscurity, which precluded Arroyo from interpreting it properly. That obscurity violates what we call the principle of informed cognitive agency.

Principle of Informed Cognitive Agency (PICA): One has a defeasible claim to information about decision systems affecting significant facets of one’s life (i.e., where the stakes are high).

As before – and for familiar reasons – we will add a second, stronger principle.

Strong Principle of Informed Cognitive Agency: A person’s claim to information based on the PICA is stronger in case the system purports to be based on factors for which a person is responsible.

Arroyo is an agent capable of deciding for herself how to interpret the decision, and she deserves the opportunity to do so. Her ability to understand her situation is integral to her exercise of cognitive agency, and a fact crucial to that understanding is that her ability to care for her son is a function of the vagaries of a background check system.

Cognitive agency is implicated in Arroyo’s case in part because her predicament is based on a system that bears on an important facet of her life (being able to secure a place to live and care for one’s child) and purports to be based on actions for which she is responsible (moving her son, who had been subject to criminal charges, into her apartment). The system, meanwhile, is such that it treats old charges as dispositive even though they were withdrawn and as remaining dispositive regardless of whether the person is at present in any position to commit such a crime at all. The reason such facts about the background check system are important is not because they will allow Arroyo to act more effectively to mitigate its effects. She was unable to act effectively when she was precluded from moving her son into her apartment. Rather, those facts are important for Arroyo to be able to act as a cognitive agent by exercising evaluative control over what to believe and how to interpret the incident.

Notice that the criteria for a claim to information based on cognitive agency appear less stringent than those for practical agency. However, it does not follow that cognitive agency demands more information. Rather, cognitive agency demands different kinds of information. Because practical agency requires information sufficient to effectively act, it may require technical or operational information. Cognitive agency, in contrast, requires only enough information to exercise evaluative control. In the context of background checks, this might require only that one be able to learn that there is an algorithmic system underlying one’s score, that the system has important limitations, that it is relatively unregulated (as, say, compared to FICO credit score reporting), and what factors are salient in determining outcomes.Footnote 36

Of course, that leaves us with the question of what information is necessary to exercise evaluative control. Our answer is whatever information is most morally salient, and the claim to information increases as the moral salience of information increases. So, in the case of Arroyo’s background check, morally salient information includes the fact of an automated system conducting the check and the fact that her son’s current condition did not enter the assessment. It is true that there might be other morally salient information. For example, we can imagine a case where the future business plans of CoreLogic are peripherally morally salient to a case; however, a claim to that information is comparatively weaker and hence more easily counterbalanced by claims CoreLogic has to privacy in its plans.

4.2.3 Objections and Democratic Agency

There are several objections to the view we have set out so far that are important to address here. The first is that it proves too much. There are myriad and expanding ways that algorithmic systems affect our lives, and information about those systems bears upon our practical and cognitive agency in innumerable ways. Hence, the potential scope for individuals’ claims to information is vast.

It is certainly true that the principles of informed practical agency and of informed cognitive agency are expansive. However, the principles have limitations that prevent them from justifying just any old claim to information. To begin, the principles of practical agency require that an algorithmic system restrict an individual’s practical agency. How to determine what counts as a restriction, of course, is an interpretative difficulty. For example, does an algorithmic system that calculates one’s insurance premiums restrict one’s practical agency? What about a system that sets the prices one is quoted for airline tickets? Nonetheless, even on a capacious interpretation, it won’t be just any algorithmic system that restricts one’s practical agency. Another significant hurdle is that the algorithmic system must affect significant facets of a person’s life. Perhaps insurance rates and airline prices clear that hurdle, but it is close. Other systems, such as what political ads one is served in election season, what music is recommended on Spotify, or which route Google Maps suggests to your destination, do not impose restrictions on one’s practical agency.Footnote 37 Finally, the requirement that information allow a person to correct or mitigate the effects of an algorithmic system is a further substantial hurdle to clear. Claims to information that have no such effect would fall under cognitive agency (and as we explain later, information that respects cognitive agency is less onerous to provide).

A second, related, objection is that many people – probably most people – will not wish to use information to exercise practical or cognitive agency. It is cheap, so to speak, to posit a claim to information, but it is pricey for those who deploy algorithmic systems, and the actual payoff is limited. This criticism is true so far as it goes, but it is compatible with the principles we’ve offered. For one, the fact (if it is) that many people will not exercise practical agency does not say much in itself about the autonomy interests one might have in a piece of information. This is much the same as in the case of medical procedures: Few people opt out of care, but information about care remains necessary to respecting their autonomy interests. Moreover, the objection speaks mostly to the strength of individuals’ claims. All else equal, the higher the stakes involved, and the more information can advance practical agency, the stronger the claims. And the more unwieldy it is for entities using algorithmic systems to provide information, the greater are countervailing considerations.

A third objection is that the arguments prove too little. There is presumably a lot of information to which people have some sort of claim, but which does not advance individuals’ practical or cognitive agency. To introduce this objection, let’s start with a claim to information based on cognitive agency. Imagine a person (call him DJ) born into enormous advantage: wealth, social status, educational opportunities, political influence, and so forth. Suppose, however, that these advantages derive almost entirely from a range of execrable practices by DJ’s family and associates: child labor, knowingly inducing addiction to substances that harm individuals and hollow out communities, environmental degradation, and so forth. DJ’s parents, we might imagine, shield him from the sources of his advantage as he grows up, and when he reaches adulthood, he does not inherit any wealth (though of course he retains all the social, educational, and political benefits of his privileged upbringing). The degree to which his ignorance limits his practical agency is not clear, given his advantages.Footnote 38 However, on the view we outline in the previous section, DJ’s parents certainly limit his cognitive agency by continuing to shield him from the sources of his advantage; he is precluded from understanding important facts about his life, as well as the chance to interpret his circumstances in light of those facts.

DJ is not the only person whose cognitive agency is a function of understanding the source of enormous wealth and advantage. Anyone who has an interest in their society’s social, political, financial, and educational circumstances has some claim to understand how DJ’s family’s and associates’ actions bear upon those circumstances. And that is true regardless of whether they are in any position to change things. In other words, it is the fact that DJ’s family’s actions have an important effect on the world that grounds others’ claims to information, not strictly how those actions affect each individual.Footnote 39 But it is difficult to see how the importance of that information is a function of either practical or cognitive agency.

With that in mind, let’s return to algorithmic systems. In path-breaking work, Latanya Sweeney examined Google’s AdSense algorithm, which served different advertisements, and different types of advertisements, based on names of search subjects.Footnote 40 Sweeney’s project began with the observation that some advertisements appearing on pages of Google search results for individuals’ names suggested that the individuals had arrest records. The project revealed that the ads suggesting arrest records were more or less likely to appear based on whether a name used in the search was associated with a racial group. That is, advertisements suggesting arrest records appeared to show up more often in Google ads served for searches that included names associated with Black people than in ads served for searches that included names associated with White people. This result was independent of whether the searched names actually had arrest records.Footnote 41 While Sweeney did not have access to the precise mechanism by which the AdSense algorithm learned to serve on the basis of race, as she explains, a machine learning system could achieve this result over time simply by some number of people clicking on ads suggesting arrest records that show up when they use Google to search for Black-identifying names.Footnote 42

But what does this have to do with agency and information? After all, as Sweeney points out, the ads themselves may be well attuned to their audiences, and it might be that search engines have a responsibility to ensure that their targeted advertising does not reflect race simply on the basis of harm prevention. But our argument here is different. It is that people have claims to information about some kinds of algorithmic systems even where their individual stake is relatively small, even where the system is reliable, and even where the system makes no assumptions about responsibility. So while people who are White have relatively little personal stake in the issue of search engine advertising serving ads that suggest arrest records disproportionately to searches using Black-identifying names, they have an interest based on agency nonetheless. Specifically, they have an interest in exercising agency over areas of democratic concern.

For the moment we will call this democratic agency and define it as access to information that is important for persons to perform the legitimating function that is necessary to underwrite democratic authority. We will take up this facet of agency and autonomy in more detail in Chapter 8. The gist of the idea is this. Whether a democratic state and its policies, actions, regulatory regimes, and so forth are justifiable (or legitimate) is in important part a function of the autonomy of its citizens. Exercising the autonomy necessary to serve this legitimating function requires certain kinds of information. Google of course is not a state actor, but it serves an outsized role in modern life, and how that role interacts with basic rights (including treatment of people based on race) is important for people to understand.

4.3 Relation to the GDPR

Having examined moral claims to information about algorithmic systems based on cognitive and practical agency, it will be useful to consider some of the scholarship on legal rights to information regarding algorithmic systems. Specifically, there is considerable scholarly discussion regarding informational rights in the context of the European Union’s General Data Protection Regulation (GDPR).Footnote 43 Much of that discussion concerns whether the GDPR contains a “right to explanation,” and if so, what that right entails. There is, in contrast, much less scholarly attention devoted to what moral claims (if any) underwrite such a right. The claims to cognitive and practical agency that we have established can do that justificatory work. But before we get to that, we want to draw on some of the right to explanation scholarship for some important context and to make a few key distinctions.

The General Data Protection Regulation (GDPR) is the primary data protection and privacy regulation in European Union law. For our purposes, we wish to discuss four specific rights related to decision systems: the right of access (the right to access the information in one’s file), the right to rectification (the right to correct misinformation in one’s file), the right to explanation (the right to have automated decisions made about oneself explained), and the right to object (the right not to be subject to a significant decision based solely on automated processing).

4.3.1 The Right of Access and the Right to Rectification

Article 15 of the GDPR outlines the right of access, which is the (legal) right of data subjects who are citizens of the EU to obtain from data controllers confirmation as to whether or not their personal data are being processed, confirmation that personal data shared with third parties are safeguarded, and a copy of the personal data undergoing processing.Footnote 44 Article 16 outlines the right to rectification, which is “the [legal] right to obtain from the controller without undue delay the rectification of inaccurate personal data concerning him or her.”Footnote 45 These legal rights can be underwritten by the same ideas that support the principles of practical and cognitive agency and the principles of informational control.

Begin with rectification. Where one’s data is being used to make decisions affecting significant facets of one’s life – such that the system restricts one’s agency – the principle of informational control tells us that there is a defeasible claim to correcting that information. Insofar as our data is being used to make decisions about us that will affect us, the right to rectification stands as a law that enjoys justification from this principle.

With these ideas in place, we can also offer a justification for the right of access. To know whether a controller has incorrect information about us or information that we do not want them to have or share, we need to know what information they in fact have about us. And so, if the right to rectification is to have value, we need a right of access. We can further support the right of access by reflecting on the principles of practical and cognitive agency: Often we will need to know what information is being collected in order to improve our prospects or simply to make sense of decisions being made about us.

4.3.2 The Right to Explanation

Consider next the right to have significant automated decisions explained. The Arroyo case brings out the importance of this right. To respond to their predicament, Carmen and Mikhail need to understand it. We begin with a general discussion of the right.

Sandra Wachter, Brent Mittelstadt, and Luciano Floridi introduce two helpful distinctions for thinking about the right to explanation.Footnote 46 The first of these distinctions disambiguates what is being explained. A system-functional explanation explains “the logic, significance, envisaged consequences and general functionality of an automated decision-making system.”Footnote 47 In contrast, an explanation of a specific decision explains “the rationale, reasons, and individual circumstances of a specific automated decision.”Footnote 48 Note that if a system is deterministic, a complete description of system functionality might entail an explanation of a specific decision. So, in at least some cases, the distinction between the two kinds of explanation is not exclusive.Footnote 49

The second distinction disambiguates when the explanation is being given. An ex ante explanation occurs prior to when a decision has been made. An ex post explanation occurs after the decision has been made. Wachter et al. claim that ex ante explanations of specific decisions are not possible; a decision must be made before it is explained. As Andrew Selbst and Julia Powles point out, in the special case of a complete system-level explanation of a deterministic system, decisions are predictable and thus, ex ante explanations of those decisions are at least sometimes possible.Footnote 50

Rather than stake a claim in this dispute, we will take a pragmatic approach. We can say all we need to say about the right to explanation by discussing the three categories that Wachter et al. admit of (i.e., ex ante system functional, ex post system functional, and ex post specific). If a subject has a right to an ex ante explanation of a specific decision, the arguments for such explanations will follow naturally from our arguments for specific explanations; the only issue that the right will turn on is whether such explanations are possible – an issue that we are not taking a stand on here. We think that, morally, the right to explanation could encompass any of the possibilities Wachter et al. outline. So we will understand the right to explanation as the right to explanations about ex post specific decisions, ex ante system function, or ex post system function.

Let us then work through some ideas about what our account says about the right to explanation.

Ex ante system-functional explanations: Subjects of decisions that have not yet been made often have good reason to know how decisions of that sort will be made in the future. The principles of practical agency delineate some of the conditions under which this is so.

One way to see this is to return to Catherine Taylor. She now knows that because of her common name, systems that perform quick, automated searches are prone to making mistakes about her. Based on this, she has an interest in knowing how a given system might produce a report on her. If she knows a system is one that might produce a false report about her, she can save herself – and the purchaser of the report – quite a bit of trouble, either by insisting to ChoicePoint that more careful methods are used or by preempting the erroneous results by providing an independent, high-quality counter-report of her own.

Ex post system-functional explanations: Subjects of decisions that have been made often have good reason to know how decisions of that sort were made. These claims can be grounded in practical or cognitive agency.

Consider Taylor again. If Taylor is denied a job and she learns that an automated background check was involved, she has reason to suspect that the automated check might have erroneously cost her the opportunity. For her, simply knowing the most general contours of how a system works is powerful information. This alone may be enough to allow her to get her application reviewed again, and she could not reasonably endorse a system where she is denied this minimal amount of information. But even if she cannot accomplish this – that is, even if the principle of informed practical agency is not activated because her situation is hopeless – she still has a claim, via the principle of informed cognitive agency, to gain an understanding of her situation.

Specific explanations: Finally, subjects of decisions often have good reason to know how those specific decisions were made. These claims can be grounded in practical agency.

Recall the denial of housing in the Arroyo case. Something is wrong with Mikhail’s report, yet his mother does not (and cannot) know what it is. This leaves her especially vulnerable in defending her son, since she does not know what to defend him against. As the principle of informed practical agency demands, subjects of decisions that have been made should at least know enough about those decisions to respond to them if they have been made in error.

We want to pause briefly to discuss a recent proposal pertaining to how specific explanations might be given, namely via counterfactual explanations, which have been detailed extensively in a recent article by Wachter et al. An example of a counterfactual explanation, applied to the Arroyo case, is as follows:

“You have been denied tenancy because you have one criminal charge in your history. Were you to have had zero charges, you would have been granted tenancy.”

Generalizing a bit, counterfactual explanations are explanations of the form “W occurred because X; Were Y to have been the case (instead of X), Z would have occurred (instead of W),” where W and Z are decisions and X and Y are two “close” states of affairs, identifying a small – perhaps the smallest – change that would have yielded Z as opposed to W.

Counterfactual explanations have several virtues qua specific explanations. For one, they are easy to understand.Footnote 51 They are efficient in communicating the important information users need to know to make sense of and respond to decisions that bear on them. Thus, such explanations are often sufficient for giving subjects what they are informationally owed. Another virtue is that they are relatively easy to compute, and so producing them at scale is not onerous: Algorithms can be written for identifying the smallest change that would have made a difference with respect to the decision.Footnote 52 Further, they communicate needed information without compromising the algorithms that underlie the decisions they explain; they offer explanations, as Wachter et al. put it, “without opening the black box.”Footnote 53
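The computation gestured at above can be sketched in a few lines. The decision function, features, and value ranges below are hypothetical stand-ins, not the actual system in the Arroyo case: the idea is simply to enumerate candidate inputs and keep the one that flips the decision while altering the fewest features.

```python
# Minimal sketch of generating a counterfactual explanation for a
# black-box decision function, in the spirit of Wachter et al.'s proposal.
# The decision rule and features are invented for illustration.
from itertools import product

def decide(applicant):
    """Toy tenancy decision: deny if any criminal charges on record."""
    return "denied" if applicant["charges"] > 0 else "granted"

def counterfactual(applicant, decision_fn, ranges):
    """Search candidate feature settings for the smallest change
    (fewest features altered) that flips the decision."""
    original = decision_fn(applicant)
    keys = list(ranges)
    best = None
    for values in product(*(ranges[k] for k in keys)):
        candidate = dict(applicant, **dict(zip(keys, values)))
        if decision_fn(candidate) != original:
            n_changed = sum(candidate[k] != applicant[k] for k in keys)
            if best is None or n_changed < best[0]:
                best = (n_changed, candidate)
    return best[1] if best else None

applicant = {"charges": 1, "income": 30000}
ranges = {"charges": [0, 1, 2], "income": [30000, 40000]}
cf = counterfactual(applicant, decide, ranges)
# Yields the nearest world in which the decision flips:
print(cf)  # {'charges': 0, 'income': 30000}
```

The returned candidate supplies exactly the material for an explanation of the form quoted above: “Were you to have had zero charges, you would have been granted tenancy.” Note that the search treats `decide` as a black box, which is why such explanations can be offered without disclosing the underlying algorithm.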

Counterfactual explanations can serve as a useful tool for delivering what is demanded by the cognitive and practical agency of data subjects without running roughshod over the interests of their data controllers. Of course, such explanations will not always meet these demands; they will only work in contexts where specific explanations are called for. And even then, they might not always offer everything an agent needs; for instance, one could imagine counterfactual explanations that are too theory laden to be useful or that are only informative against knowledge of myriad background conditions. Nevertheless, this style of explanation can be a very useful tool in meeting agents’ needs. Thus, they serve as a good example of a realistic tool for giving data subjects what they are informationally owed.

Let’s take stock of what our account has to say about the right to an explanation. We take it that the right to explanation is a defeasible right to meaningful ex post, ex ante, system-level, and specific explanations of significant, automated decisions. Using our cases and principles, we have demonstrated how our account can underwrite a claim: As autonomous beings, we need to understand significant events in our lives in order to navigate the world so as to pursue our values; as autonomous beings, we have a duty to support each other’s autonomy; so, if we are in control of information pertaining to significant decisions affecting someone’s life, we often owe it to them to make that information available.

4.3.3 The Right to Object

In addition to rights of access and rights to rectification and explanation, the GDPR outlines the right to object, “the right not to be subject to a [significant] decision based solely on automated processing.”Footnote 54 As stated earlier, our interest is in understanding whether there is a moral right to object. However, examining a version of a legal right can help us make sense of moral claims. There are two key features of the right to object as it is stated in the GDPR.

Note first that the right is vague. Specifically, the “based solely” condition, as well as the notion of significance, admits of vagueness. As Kaminski notes,

One could interpret “based solely” to mean that any human involvement, even rubber-stamping, takes an algorithmic decision out of Article 22’s scope; or one could take a broader reading to cover all algorithmically-based decisions that occur without meaningful human involvement. Similarly, one could take a narrow reading of “[…] significant” effects to leave out, for example, behavioral advertising and price discrimination; or one could take a broader reading and include behavioral inferences and their use.Footnote 55

We will not focus too heavily on issues of vagueness here. However, it is important to note that the limiting condition of the right – as well as some of its content – is vague.

Second, the right to object is ambiguous.Footnote 56 It could be understood broadly: as a broad prohibition on decisions that are based solely on automated processing. The same right could also be understood narrowly: as an individual right that data subjects can summon for the purposes of rejecting a particular algorithmic decision.Footnote 57 Here, we won’t be interested in adjudicating which way to read Article 22 of the GDPR, because we regard both readings as supported by the same considerations that we cite in favor of the right to explanation.

Human oversight of an automated decision system requires that the system be functionally intelligible to at least some humans (perhaps upon acquiring the relevant expertise). So, in a world where the broad reading is observed, each significant automated decision is intelligible to some human overseers. What this means, in turn, is that the reasons for its decisions could be meaningfully explained to data subjects (or at the very least to their surrogates). The significance of this, from our point of view, is that it would help to secure the right to explanation, as it would require systems to be designed so that they are intelligible to humans. Further, in cases where a data subject cannot request an explanation, it serves to assure them that significant automated decisions made about them make sense. Similarly, in a world where the narrow right is observed, systems are designed to be intelligible so that, were their decisions meaningfully checked by a human decision maker, they would make sense. This, of course, means that they are designed so that they do make sense to humans (even if those humans are experts). Moreover, like the broad reading, it also affords data subjects the opportunity to have decisions checked when they themselves cannot check them (perhaps for reasons of trade secrecy). However, the narrow right might sound more plausible than the broad right because it means fewer human decision makers would have to be employed to satisfy it, allowing systems to operate more efficiently.

Now, unlike the previously mentioned rights, the right to object – particularly in its broad formulation – might sound onerous. However, abiding by the rights to access, rectification, and explanation already requires that data controllers provide data subjects meaningful human oversight of decisions made about them, so perhaps the broad right isn’t as implausible as it may first seem. Further, the broad right has the advantage that it makes the exercise of the right to object less costly to those individuals who would otherwise have to explicitly exercise it. We can imagine data subjects worrying that they will face prejudice for exercising the right; for instance, a job applicant might worry that, if she exercised the right, the potential employer would think that she is going to cause trouble.

What does the right to object add, then? Importantly, there are systems whose inferences must be kept secret – either to prevent subjects from gaming them or because the system is simply too complicated to explain. In these circumstances, the right to object plays the important role of ensuring that surrogates of data subjects understand whether high-stakes decisions made about those subjects make sense.

4.4 Polestar Cases

We can finally return to the cases that provide our through line for the book.

4.4.1 Loomis

One of Loomis’s primary complaints in his appeal is that COMPAS is proprietary and hence not transparent. Specifically, he argued that this violated his right to have his sentence based on accurate information. He bases the argument in part on Gardner v. Florida.Footnote 58 In Gardner, a trial court failed to disclose a presentence investigation report that formed part of the basis for a death sentence. The U.S. Supreme Court determined that the failure to disclose the report meant that there was key information underwriting the sentence which the defendant “had no opportunity to deny or explain.” Loomis argued that the same is true of the report in his case. Because the COMPAS assessment is proprietaryFootnote 59 and because there had not been a validation study of COMPAS’s accuracy in the state of Wisconsin (other states had conducted validation studies of the same system), Loomis argued that he was denied the opportunity to refute or explain his results.

The Wisconsin Supreme Court disagreed. It noted that Northpointe’s Practitioner’s Guide to COMPAS Core explained the information used to generate scores, and that most of the information is either static (e.g., criminal history) or in Loomis’s control (e.g., questionnaire responses). Hence, the court reasoned, Loomis had sufficient information and the ability to assess the information forming the basis for the report, despite the algorithm itself being proprietary.Footnote 60 As for Loomis’s arguments that COMPAS was not validated in Wisconsin and that other studies criticize similar assessment tools, the court reasoned that cautionary notice was sufficient. Rather than prohibiting use of COMPAS outright, the court determined that presentence investigation reports using COMPAS should include some warnings about its limitations.

According to the principles of practical agency, Loomis has a defeasible claim to information about COMPAS if (a) information about COMPAS advances his practical agency, (b) COMPAS has restricted his practical agency, (c) COMPAS’s effects bear heavily on significant aspects of Loomis’s life, and (d) information about COMPAS allows Loomis to correct or mitigate the effects of COMPAS. If there is such a claim, it is strengthened (e) if COMPAS purports to be based on factors for which Loomis is responsible and (f) if COMPAS has errors.

It is certainly plausible that COMPAS limits Loomis’s practical agency insofar as it had some role in his sentence. Loomis faced a number of decisions about what to do in response to his sentence. One is whether he should appeal and on what grounds. Another is whether he should try to generate public support for curtailing the use of COMPAS. For Loomis, settling these questions about what to do depends on knowing how COMPAS generated his risk score. And there is much he doesn’t know. He doesn’t know whether the information fed into COMPAS was accurate. He doesn’t know whether, and in what sense, COMPAS is fair. And he doesn’t know whether the algorithm was properly applied to his case. That lack of information curtails his practical agency. The length of his criminal sentence certainly involved a significant facet of his life, and it is at least plausible that greater information would allow him to mitigate COMPAS’s effects. The strength of his claims increases in light of the fact that COMPAS is best understood as being based on factors for which he is responsible, viz., his propensity to reoffend.

So Loomis has a prima facie and defeasible claim to information about COMPAS. But that leaves open just what kind of information he has a claim to, what that claim entails, and whether there are countervailing considerations that supersede it. It would seem that Loomis needs to know whether the data fed into COMPAS were accurate, to see evidence that COMPAS is in fact valid for his case, and, finally, to receive some kind of explanation – perhaps in the form of a counterfactual explanation – that makes clear why he received the score that he did. Such information would advance Loomis’s practical agency, either by giving him the information needed to put together an appeal or by demonstrating to his satisfaction that his COMPAS score was valid, allowing him to focus his efforts elsewhere.
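The form of a counterfactual explanation can be sketched with a toy model. The points system below is invented for illustration; it is not COMPAS’s actual (proprietary) scoring. The point is only what such an explanation looks like: the smallest change to an input that would have changed the outcome.

```python
# Illustrative only: an invented points-based risk model, not COMPAS's
# actual (proprietary) scoring. A counterfactual explanation reports the
# smallest change to an input that would have flipped the outcome.

HIGH_RISK_THRESHOLD = 40

def risk_score(prior_offenses, age):
    """Hypothetical score: 10 points per prior offense, 15 extra points
    for defendants under 25."""
    points = 10 * prior_offenses
    if age < 25:
        points += 15
    return points

def counterfactual_priors(prior_offenses, age):
    """Smallest reduction in prior offenses that would drop the score
    below the high-risk threshold, holding age fixed (None if no
    reduction suffices)."""
    for fewer in range(prior_offenses + 1):
        if risk_score(prior_offenses - fewer, age) < HIGH_RISK_THRESHOLD:
            return fewer
    return None

# A defendant with 5 priors, age 30, scores 50 (high risk); the
# counterfactual explanation: with two fewer priors, the score would
# have been 30, below the threshold.
```

An explanation of this kind tells the decision subject which difference would have made a difference, without disclosing the model’s internals.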

Independent of the concerns based on practical agency, Loomis has a claim to information based on cognitive agency. Both factors in the principle of informed cognitive agency are present. COMPAS purports to be based on factors for which Loomis is responsible, and the stakes are high. Being imprisoned is among the most momentous things that may happen to a person, and understanding the basis of a prison sentence is essential to one’s agency. That extends beyond the factors that matter in determining one’s sentence to include whether the process by which one is sentenced is fair. And as we have argued, agents have a claim to understand important facets of their situations. Hence, Loomis has a claim based on cognitive agency to better understand the grounds for his imprisonment.

While Loomis plausibly has claims to information based on both practical and cognitive agency, those claims entail different things. Practical agency underwrites only information that can be used in advancing Loomis’s case – and hence mostly supports providing information to Loomis’s legal representation – whereas cognitive agency underwrites the provision of certain pieces of information to Loomis himself. It would involve providing him information about the fact that a proprietary algorithm is involved in the system, information about how well the system predicts reoffense, and information about the specific factors that led to his sentence. There is no reason to think that it would advance Loomis’s cognitive agency to provide him with specific information about how COMPAS functions.

Moreover, the court did, in fact, respect Loomis’s cognitive agency. The Wisconsin Supreme Court upheld the circuit court’s decision in substantial part because the circuit court articulated its own reasons for sentencing Loomis as it did. In other words, it provided an account sufficient for Loomis to exercise evaluative control with respect to his reactive attitudes toward the decision and sentence.

4.4.2 Wagner and Houston Schools

The principles of informed practical agency and informed cognitive agency also aid our understanding of the K-12 teacher cases, especially Houston Schools. Recall that Houston Schools uses a VAM called EVAAS, which produces each individual teacher’s score by referencing data about all teachers.Footnote 61 This practice makes EVAAS’s scores highly interdependent. Recall also that Houston Schools was frank in admitting that it would not correct faulty information, because doing so would require a costly reanalysis for the whole school district and could change all teachers’ scores. This was all despite warnings (as we note in Chapters 1 and 3) that value-added models have substantial standard errors.Footnote 62 So EVAAS’s scores are extremely fragile, produced without independent oversight, and cannot be corroborated by teachers (or the district or, recall, an expert who was unable to replicate them).
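The force of the standard-error warning can be illustrated with invented numbers (these are not HISD’s data): when two value-added estimates have large standard errors, their confidence intervals can overlap even when one teacher falls above an employment cutoff and the other below it, so the apparent difference between them may be statistical noise.

```python
# Illustrative arithmetic with invented numbers, not HISD's data:
# large standard errors can make the gap between a "kept" and a
# "fired" teacher statistically indistinguishable.

Z_95 = 1.96  # two-sided 95% confidence multiplier

def confidence_interval(estimate, se, z=Z_95):
    """95% confidence interval for a value-added estimate."""
    return (estimate - z * se, estimate + z * se)

def indistinguishable(est_a, est_b, se, z=Z_95):
    """True if the two estimates' confidence intervals overlap,
    i.e., the difference between them could be noise."""
    return abs(est_a - est_b) < 2 * z * se

# Two teachers straddling a hypothetical firing cutoff of 0.0:
kept, fired, se = 0.10, -0.10, 0.15
# The 0.2-point gap is well inside the margin of error
# (2 * 1.96 * 0.15 is about 0.59), and each teacher's interval
# itself straddles the cutoff.
```

Nothing here depends on the invented figures; the structure of the problem is just that high-stakes cutoffs applied to noisy estimates sort people the noise cannot justify.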

It seems clear enough that information about EVAAS is vital for teachers to exercise practical agency. Certainly, it is relevant to several significant aspects of teachers’ lives. For teachers who were fired or did not have their contracts renewed based on low performance, gaining an understanding of the system advances their practical agency in a couple of ways. It gives them (and their union leaders and lawyers) the basis for appealing either the firings themselves (whether in court or to the public) or the system altogether. It also gives teachers who are seeking employment in other schools some context that could help them convince administrators that their departure from the Houston Independent School District (HISD) was not evidence of poor teaching. That is, affected teachers have a (defeasible) claim to information about EVAAS’s functioning, because it could allow them to correct or mitigate the system’s effects. Their claim is strengthened because EVAAS purports to be based on factors for which the teachers are responsible (viz., their work in the classroom), and yet (as HISD admits) EVAAS has errors. These claims also underwrite teachers’ claims to informational control, specifically their claim to have any inaccuracies in their scores corrected.

The fact that EVAAS affects such important parts of teachers’ lives and purports to be based on factors for which they are responsible also gives them a claim to information based on cognitive agency. As in the COMPAS case, the type of information necessary for teachers to exercise evaluative control – that is, to assess their treatment at the hands of their school system – may be different from the information necessary for them to exercise practical agency. Cognitive agency may only require higher-level information about how EVAAS works, a frank assessment of its flaws, and a candid accounting of Houston Schools’ unwillingness to incur the cost of correcting errors rather than the more detailed information necessary for teachers to correct errors. To put a bookend on the importance of cognitive agency, we will return to an exemplary teacher’s public reaction to the VAM used by DC Schools: “I am baffled how I teach every day with talent, commitment, and vigor to surpass the standards set for me, yet this is not reflected in my final IMPACT score.”Footnote 63 This would seem to be an appeal to exercise evaluative control.

4.5 Conclusion

In Chapter 2, we argued that autonomy ranges beyond the ability to make choices. Properly understood, self-governance includes competence, authenticity, and substantive independence, and it demands acting in accord with others. Chapter 3 examined the requirements for respecting persons’ autonomy related to their material conditions. In the present chapter, we have explained the informational requirements of autonomy. Specifically, we argued that autonomy demands respect for both practical and cognitive agency. We articulated several principles of practical and cognitive agency and argued that those principles could underwrite key provisions in the GDPR. Finally, we explained that those principles entail that the subjects of our polestar cases deserve substantial information regarding the algorithmic systems to which they are subject.

Recall, though, that the organizing thesis of the book is that understanding the moral salience of algorithmic systems requires understanding how such systems relate to autonomy. That involves more than respecting the autonomy of persons who are, at the moment, autonomous. It also involves securing the conditions under which they can actually exercise autonomy. That’s the issue we turn to in the next two chapters.

Footnotes

3 What Can Agents Reasonably Endorse?

1 This argument originated in Rubel, Castro, and Pham, “Algorithms, Agency, and Respect for Persons.”

2 O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy; Turque, “More than 200 D.C. Teachers Fired.”

3 Isenberg and Hock, “Measuring School and Teacher Value Added in DC, 2011–2012 School Year”; see also Walsh and Dotter, “Longitudinal Analysis of the Effectiveness of DCPS Teachers.”

4 For extensive discussion, see Amrein-Beardsley, Rethinking Value-Added Models in Education.

5 Quick, “The Unfair Effects of IMPACT on Teachers with the Toughest Jobs.”

6 Quick.

7 Quick.

8 American Statistical Association, “ASA Statement on Using Value-Added Models for Educational Assessment: Executive Summary,” 2; see also, Morganstein and Wasserstein, “ASA Statement on Value-Added Models.”

9 American Statistical Association, 7; see also, Morganstein and Wasserstein.

10 It is worth reiterating several points from Chapter 2 to emphasize the limits of this Kantian formulation. The capacity to self-govern, the values agents develop, and the ways in which they incorporate those values into their lives are socially situated. See Mackenzie and Stoljar, Relational Autonomy: Feminist Perspectives on Autonomy, Agency, and the Social Self, 4. Developing one’s sense of what is important depends on social conditions that nurture the ability to do so. See Oshana, Personal Autonomy in Society, 90. Social structures may delimit the conceptions of value that are available for persons to draw upon in developing their sense of value. Persons’ abilities to incorporate their values into their important decisions will depend on what opportunities exist in the broader social context. See Raz, The Morality of Freedom; Mackenzie, “Relational Autonomy, Normative Authority and Perfectionism.” The fact that self-governance is socially situated, however, does not undermine the importance of autonomy and agency. Rather, failures to nurture persons’ abilities to develop their agency and substantial constraints on options available for incorporating values into persons’ lives are moral problems in part because of the importance of autonomy. See, for instance, Superson, “Deformed Desires and Informed Desire Tests”; Meyers, “Personal Autonomy and the Paradox of Feminine Socialization.” Hence, even though our conception of autonomy echoes Kantian views, it would be a mistake to conclude that autonomy in this sense assumes that individuals are separable from their social, familial, and relational lives.

11 Korsgaard et al., The Sources of Normativity.

12 Mill, On Liberty, chapter 3.

13 Kant, Groundwork of the Metaphysics of Morals, sec. 4:429.

14 Parfit, On What Matters, 181.

15 Kant, 4:421.

16 Parfit, 19.

17 Parfit, 327.

18 Shiffrin, “Paternalism, Unconscionability Doctrine, and Accommodation.”

19 Feinberg, “Autonomy,” 44–45.

20 Kant, Groundwork of the Metaphysics of Morals, sec. 6:230.

21 Scanlon, What We Owe to Each Other, 153.

22 Parfit, On What Matters, 360.

23 Kumar, “Reasonable Reasons in Contractualist Moral Argument,” 10.

24 Rawls, A Theory of Justice, sec. 5.

25 Kumar, “Defending the Moral Moderate: Contractualism and Common Sense,” 285.

26 Kumar, 286.

27 To be clear, we think that each of these dimensions is relevant in determining whether use of an algorithm is morally problematic. However, we do not think that the dimensions we outline are exhaustive; this list is not meant to be comprehensive. There may be other considerations, such as consideration of desert or other facets of fairness which can play an important role in assessing the appropriateness of the use of an algorithm.

28 Because we are using “reliability” in its colloquial sense, we will not be using the term in the statistical (and more captious) sense of being free from random error; our use of reliability will more closely align with the statistical sense of “validity,” that is, accuracy borne from use of a (statistically) reliable method. We refrain from using the term “validity” to avoid confusion with the philosophical sense of the term, that is, premises entailing the conclusion of an argument. For more on the statistical senses of reliability and validity and an appraisal of value-added models in those terms, see Amrein-Beardsley, Rethinking Value-Added Models in Education, chapter 6.

29 Notice that in this example responsibility and reliability are both relevant. Teachers could reasonably endorse a system in which their jobs depend on factors for which they are not responsible – e.g., population decline. However, firing teachers whose scores suffer because of exogenous factors (lack of air-conditioning) involves criteria that are not teachers’ responsibilities and which are unreliable in making teaching better (though perhaps reliable in achieving better learning outcomes).

30 Strauss, “D.C. Teacher Tells Chancellor Why IMPACT Evaluation Is Unfair.”

31 Strauss.

32 Strauss. There is another autonomy-related issue here. In Chapter 2, we explained the importance of social and relational facets of autonomy. One way to understand the relationship between autonomy and facts about persons’ social circumstances and relationships is that social and relational facts are causally important in fostering persons’ autonomy. Another way is to understand social and relational facts as constitutive of autonomy. See Mackenzie and Stoljar, Relational Autonomy: Feminist Perspectives on Autonomy, Agency, and the Social Self, chapter 1. That is, being a part of supportive and meaningful social groups and relationships is (1) a necessary condition for developing the competences and authenticity necessary for psychological autonomy and (2) an inherent part of being a person incorporating values into their life. Teacher-student and teacher-community relationships are deeply important and constitutive of the lives that many teachers value and have cultivated so as to realize their sense of value. Subjecting those relationships to unreliable, high-stakes processes that measure things for which teachers are not responsible conflicts with that facet of autonomy as well. Note that this is distinct from the reasonable endorsement argument.

33 Quick, “The Unfair Effects of IMPACT on Teachers with the Toughest Jobs.”

34 Corbett-Davies and Goel, “The Measure and Mismeasure of Fairness.”

35 To be clear, we are not arguing that fairness analyses are mistaken. Rather, there are many conceptions of fairness that focus on different values. These may conflict, and many are mutually incompatible. Hence, there is a great deal of work to do in working out fairness issues even once one determines that a system is in some sense fair or unfair. We take this issue up again in Section 3.5.

36 Note that the relationships between system burden and social circumstances need not be causal.

37 Wagner v. Haslam, 112 F. Supp. 3d at 690.

38 Holloway, “Evidence of Grade and Subject-Level Bias in Value-Added Measures.”

39 Amrein-Beardsley, “Evidence of Grade and Subject-Level Bias in Value-Added Measures: Article Published in TCR”; Spears, “Bias Confirmed – Tennessee Education Report.”

40 Houston Fed of Teachers, Local 2415 v. Houston Ind Sch Dist, 251 F. Supp. 3d at 1175.

41 Houston Independent School District, “EVAAS/Value-Added Frequently Asked Questions.”

42 Houston Fed of Teachers, Local 2415 v. Houston Ind Sch Dist, 251 F. Supp. 3d at 1178.

43 Houston Fed of Teachers, Local 2415 v. Houston Ind Sch Dist, 251 F. Supp. 3d at 1177.

44 Ideastream, “Grading the Teachers.”

45 Brennan, Dieterich, and Ehret, “Evaluating the Predictive Validity of the COMPAS Risk and Needs Assessment System.” We should note that how we apply the concept of reliability could itself be a matter of dispute. The study by Northpointe-affiliated researchers considers how well calibrated COMPAS is; that is, how likely COMPAS is to predict individual defendants’ reoffense. However, there are other relevant measures for which COMPAS could be more or less reliable. A study by ProPublica found that prediction failure was different for White and Black defendants such that White defendants labeled lower risk were more likely to reoffend than Black defendants with a similar label, and Black defendants labeled higher risk were less likely to reoffend than White defendants labeled higher risk. See Angwin et al., “Machine Bias,” May 23, 2016. These results call into question COMPAS’s reliability in avoiding false positives and false negatives. We address this issue in more detail in the next section.
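The distinction drawn here can be illustrated with synthetic numbers (invented for illustration, not the ProPublica data): a score can be equally well calibrated for two groups and still yield different false positive rates whenever the groups’ score distributions differ.

```python
# Synthetic illustration with invented numbers, not the ProPublica data:
# a score calibrated for both groups can still produce unequal false
# positive rates when the groups' score distributions differ.

# Each group maps a risk score to the number of people receiving it.
# Calibration holds by construction: a score of 0.7 means 70% of the
# people with that score reoffend, in both groups.
group_a = {0.2: 600, 0.7: 400}   # mostly low-score
group_b = {0.2: 300, 0.7: 700}   # mostly high-score

def false_positive_rate(group, threshold=0.5):
    """Among people who do NOT reoffend, the share labeled high risk
    (score at or above the threshold)."""
    false_positives = sum(n * (1 - s) for s, n in group.items()
                          if s >= threshold)
    non_reoffenders = sum(n * (1 - s) for s, n in group.items())
    return false_positives / non_reoffenders

# Same calibrated scores, different error rates:
# group_a: 120 / 600 = 0.20
# group_b: 210 / 450 is about 0.47
```

The invented distributions stand in for the different base rates the footnote describes; calibration and classification parity come apart precisely when those distributions differ.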

46 Northpointe, Inc., “Practitioner’s Guide to COMPAS Core,” 24.

47 Drawing on factors for which one is not responsible is compatible with a range of theories of punishment. Such factors may help determine a sentence – whether one is even arrested, moral luck, how well punishment deters crime, and so forth. But our view is not that only factors for which one is responsible may contribute to sentencing decisions. Rather, our view is that, as such factors increase, it becomes more difficult for an agent to abide such a system.

48 Other questions pertain to matters for which defendants’ responsibility is less clear: how often one has had barely enough money to get by, whether one’s friends use drugs, how often one has moved in the last year, and whether one has ever been suspended from school.

49 Wisconsin v. Loomis, 881 N.W.2d paragraph 16 (emphasis added).

50 Northpointe describes COMPAS’s scope as follows: “Criminal justice agencies across the nation use COMPAS to inform decisions regarding the placement, supervision and case management of offenders.” Northpointe, Inc., “Practitioner’s Guide to COMPAS Core,” 1.

51 Skeem, Monahan, and Lowenkamp, “Gender, Risk Assessment, and Sanctioning”; DeMichele et al., “The Public Safety Assessment”; Corbett-Davies and Goel, “The Measure and Mismeasure of Fairness.” Note here that there is another potential issue of responsibility and stakes and of what we will call “substantive fairness” in Section 3.5. It is indeed the case that men are much more likely to reoffend and to commit violent offenses than women, though one’s gender is not a factor for which one should be held responsible. Moreover, whether gender is justifiably a difference-maker in determining sentencing (as opposed to, say, job training, drug and alcohol counseling, or supportive intervention) will turn on a normative theory of criminal law.

52 Angwin et al., “Machine Bias,” May 23, 2016.

53 Dieterich, Mendoza, and Brennan, “COMPAS Risk Scales: Demonstrating Accuracy Equity and Predictive Parity.”

54 Corbett-Davies and Goel, “The Measure and Mismeasure of Fairness.”

55 Corbett-Davies and Goel.

56 We borrow the idea of using this kind of chart to relay the difference between calibration and classification parity from Corbett-Davies et al., “Algorithmic Decision Making and the Cost of Fairness.” The image itself is similar to one used in Castro, “Just Machines.”

57 Angwin et al., “Machine Bias,” May 23, 2016.

58 Of course, it may well be unjust to impose criminal penalties for many kinds of drug possession and use in the first place. We will leave that issue aside for this project.

59 Hooker, “Fairness.” See also Castro, “Just Machines.”

60 Hooker. See also Castro, “Just Machines.”

61 Corbett-Davies and Goel, “The Measure and Mismeasure of Fairness.”

62 Corbett-Davies and Goel, 2.

63 Corbett-Davies and Goel, 2.

64 Corbett-Davies and Goel, 2.

65 Corbett-Davies and Goel, 2–3.

66 Binns, “Fairness in Machine Learning: Lessons from Political Philosophy.”

67 Herington, “Measuring Fairness in an Unfair World.”

4 What We Informationally Owe Each Other

1 Frank Pasquale (2016) argues that lack of transparency is one of the defining features and key concerns of technological “black boxes” that exert control over large swathes of contemporary life. Such obscurity can derive from many sources, including technological complexity, legal protections via intellectual property, and deliberate obfuscation. For our purposes the source of obscurity is initially less important than what autonomy demands. The source will become important when evaluating what duties people have to provide information as a matter of respecting others’ autonomy.

2 David Grant, Jeff Behrends, and John Basl argue that understanding what we owe to subjects of automated (or “black boxed”) decision systems should not begin with questions of transparency and opacity. Rather, we should begin with an understanding of the morally relevant features of decision subjects, how decision-makers relate themselves to decision subjects, and a standard of “due consideration” to decision subjects. Grant et al., “What We Owe to Decision Subjects: Beyond Transparency and Explanation in Automated Decision-Making.” We agree. Our account of practical and cognitive agency is a way of spelling out some of those morally salient features and relationships between decision-makers and subjects.

3 Whittlestone et al., “Ethical and Societal Implications of Algorithms, Data, and Artificial Intelligence: A Roadmap for Research,” 12.

4 O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.

5 Yu and Dietrich, “Broken Records: How Errors by Criminal Background Checking Companies Harm Workers and Businesses.”

6 Yu and Dietrich.

7 Yu and Dietrich.

8 Yu and Dietrich, citing Deposition of Teresa Preg at 63–64.

9 Yu and Dietrich.

10 O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.

11 Nelson, “Broken Records Redux: How Errors by Criminal Background Check Companies Continue to Harm Consumers Seeking Jobs and Housing.”

12 Nelson. See also Connecticut Fair Hous. Ctr. v. Corelogic Rental Prop. Sols., LLC, 369 F. Supp. 3d at 367–368.

13 Nelson, “Broken Records Redux: How Errors by Criminal Background Check Companies Continue to Harm Consumers Seeking Jobs and Housing.”

14 Nelson.

15 Nelson.

16 Nelson.

17 Nelson.

18 Nelson.

19 Yu and Dietrich, “Broken Records: How Errors by Criminal Background Checking Companies Harm Workers and Businesses.”

20 Yu and Dietrich.

21 For further discussion of background check algorithms and lack of regulation and oversight, see Kirchner and Goldstein, “Access Denied.”

22 91st United States Congress, An Act to amend the Federal Deposit Insurance Act to require insured banks to maintain certain records, to require that certain transactions in US currency be reported to the Department of the Treasury, and for other purposes; Yu and Dietrich, “Broken Records: How Errors by Criminal Background Checking Companies Harm Workers and Businesses.”

23 Yu and Dietrich, “Broken Records: How Errors by Criminal Background Checking Companies Harm Workers and Businesses.”

24 Yu and Dietrich.

25 Yu and Dietrich.

26 91st United States Congress, An Act to amend the Federal Deposit Insurance Act to require insured banks to maintain certain records, to require that certain transactions in US currency be reported to the Department of the Treasury, and for other purposes.

27 Yu and Dietrich, “Broken Records: How Errors by Criminal Background Checking Companies Harm Workers and Businesses.”

28 Yu and Dietrich.

29 Oshana, Personal Autonomy in Society, vii.

30 As Sven Nyholm puts it: “Agency is a multidimensional concept that refers to the capacities and activities most centrally related to performing actions, making decisions, and taking responsibility for what we do.” Nyholm, Humans and Robots, 31.

31 For a similar division of aspects of our agency and discussion, see Smith, “A Constitutivist Theory of Reasons: Its Promise and Parts.”

32 Hill, Jr., “Autonomy and Benevolent Lies.”

33 Rubel, “Privacy and the USA Patriot Act.”

34 Nyholm, Humans and Robots, 15–18.

35 Solove, “Privacy Self-Management and the Consent Dilemma.”

36 See also the discussion of counterfactual explanations in Section 4.4.2.

37 There is a related question about the baseline against which some action counts as a restriction. A direction-suggesting algorithm (e.g., Google Maps) in most cases increases one’s practical agency by allowing one to find one’s way quickly and easily. In the rare case that such a system sends one on a suboptimal route, we could interpret that as a restriction of practical agency against a baseline of an overall expansion of practical agency. The best understanding of the principles of practical agency, though, is against a baseline of no algorithmic system.

38 To the extent that DJ wishes to steer his course on the basis of his family and social background and reconcile that with his values and beliefs, shielding him may indeed limit his practical agency.

39 There might be plausible rationales for continued secrecy, for example, privacy rights. But those are countervailing considerations to individuals’ autonomy interests – in this case grounded in cognitive agency.

40 Sweeney, “Discrimination in Online Ad Delivery.”

41 Sweeney.

42 Results from algorithmic systems that differ on the basis of race and ethnicity are rampant. Examples include predominantly sexualized images of women and girls returned for searches including “Black,” “Latina,” and “Asian,” but not “White,” searches for high-status positions returning images predominantly of White people (e.g., “CEO”), facial recognition and image enhancement technologies that are more accurate for images of White people than Black people, health risk assessment machine learning tools that underestimate Black patients’ eligibility for care interventions, and more. Garvie and Frankle, “Facial-Recognition Software Might Have a Racial Bias Problem”; Noble, Algorithms of Oppression; Obermeyer et al., “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations.” While organizations often aim to rectify these disparities, those responses are often reactive. Moreover, knowledge of those processes is important to democratic agency and legitimation, the topic of Chapter 8.

43 European Union, Regulation (EU) 2016/679 of the European Parliament and of the Council of April 27, 2016, on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).

44 GDPR, art. 15.

45 GDPR, art. 16.

46 Wachter, Mittelstadt, and Floridi, “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation.”

47 Wachter, Mittelstadt, and Floridi, 11.

48 Wachter, Mittelstadt, and Floridi.

49 Selbst and Powles, “Meaningful Information and the Right to Explanation.”

50 Selbst and Powles.

51 Wachter, Mittelstadt, and Russell, “Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR.”

52 Wachter, Mittelstadt, and Russell, 15–16.

53 Wachter, Mittelstadt, and Russell.

54 GDPR, art. 22.

55 Kaminski, “The Right to Explanation, Explained.”

56 Kaminski; Mendoza and Bygrave, “The Right Not to Be Subject to Automated Decisions Based on Profiling.”

57 The terminology of broadness and narrowness is from Kaminski, “The Right to Explanation, Explained.”

58 Gardner v. Florida, 430 U.S. 349 (1977).

59 Wisconsin v. Loomis, 881 N.W.2d paragraph 51.

60 Wisconsin v. Loomis, 881 N.W.2d paragraphs 54–56.

61 Houston Fed of Teachers, Local 2415 v. Houston Ind Sch Dist, 251 F. Supp. 3d.

62 American Statistical Association, “ASA Statement on Using Value-Added Models for Educational Assessment: Executive Summary,” 7; Morganstein and Wasserstein, “ASA Statement on Value-Added Models.”

63 Strauss, “D.C. Teacher Tells Chancellor Why IMPACT Evaluation Is Unfair.”

Figure 3.1 SIMPLE COMPAS
