Introduction
For nearly two decades, ‘nudges’, subtle tweaks in choice environments that predictably steer people’s behavior without restricting their option sets or changing their economic incentives (Thaler and Sunstein 2008, 6), have been recommended to policy-makers. Inspired by findings in cognitive science about predictable traits in human decision-making – such as a bias toward the present, weakness of the will or an aversion to losses – nudge proponents have devised a variety of techniques that tap into these tendencies in order to induce welfare-promoting behavior. These include setting up defaults to promote pension savings, designing physical environments to reduce traffic accidents or placing healthy food at eye level in cafeterias to promote healthy eating.
The practice of nudging has been objected to on many grounds – that it is manipulative (Grüne-Yanoff 2012; Wilkinson 2013), that it diminishes the control of individuals over their deliberations (Hausman and Welch 2010), objectionably imposes values upon its targets and disrespects them (White 2013), fails to treat people as rational agents (Rozeboom 2020), fails to preserve freedom of choice despite its supposed libertarian credentials (Rebonato 2014), etc. But according to the founders of nudge, Thaler and Sunstein, many such objections are immaterial, ‘a literal nonstarter’, given that every choice environment will inevitably influence choosers in some way (2008, 10–11). Putting nudges to use is permissible, the argument goes, because choice architects (and governments employing them) inevitably find themselves in situations in which they must set up arrangements of one kind or another (ibid.). The inevitability argument (IA) is, arguably, ‘Thaler and Sunstein’s most important argument for nudging’ (Grill 2014, 142), from which they draw the following welfarist lesson: if choice contexts need to be arranged in some way, then it is best to arrange them so that they make agents better off than they would be in the face of alternative arrangements (Thaler and Sunstein 2008, 11).Footnote 1
However, some opponents note (e.g., Hausman and Welch 2010; Grüne-Yanoff 2012), and more than a few proponents admit (e.g., Blumenthal-Barby 2013; Grill 2014; Engelen 2019), that while influence on choice might indeed be inevitable, there remains a significant moral difference between influence from unmodified environments (including environments designed without any thought given to behavioral influence) and influence from environments modified specifically to produce behavioral effects. It is only by influences of the latter kind that citizens can be subject to alien control (Schmidt 2017) and that power can be exerted over them (Hausman and Welch 2010, 133). In short, while influence may be inevitable, governments can still refrain from becoming nudgers, thus avoiding entry into morally suspect relationships with their citizens (Grüne-Yanoff 2012, 639). If this objection stands, then the standard objections against nudging (pertaining to manipulation, lack of respect, imposition of values, etc.) resurface. Proponents have offered little in the way of refuting this objection, despite the supposed importance of IA for nudging.
In this paper, we argue that in at least some cases, one version of the IA persists. While it does not lead straight to Thaler and Sunstein’s welfarist conclusion, it justifies choice architects in interfering with environments that are either unmodified or that were designed without any intention to produce behavioral influence. When choice architects are able to reliably predict the behavioral effects of both the unmodified environment and the available alternative arrangements, they will be permitted, or so we argue, to treat all these options – modified or unmodified, as well as intended or merely foreseen – as on a moral par. In other words, in such cases there is no significant moral difference between arranging an environment with predictable effects and allowing an unmodified environment with predictable effects, or between intending a behavioral effect and merely foreseeing it. The normative position that little or nothing hinges on whether an environment has been purposely altered or not is supported by what we call the evidence-based view.
We proceed as follows. First, we offer some conceptual preliminaries and cast aside versions of IA that have already been rejected in the literature. Second, we offer our own version of IA and ground it in the evidence-based view. Third, we elaborate in more detail why it follows from the availability of evidence and the predictability of contextual effects that choice architects should be morally permitted to modify environments and to treat intended and merely foreseen behavioral effects as on a moral par. Fourth, we confront our view with some pressing objections. The final section concludes.
On what is and isn’t inevitable
The debate on IA has been riddled with misnomers and misconceptions. This is primarily due to a lack of conceptual rigor from its main proponents, who have advanced claims about the ‘inevitability of nudging’ (Thaler and Sunstein 2008), the ‘inevitability of paternalism’ (Thaler and Sunstein 2003; Glaeser 2006) and ‘the inevitability of choice architecture’ (Thaler and Sunstein 2008; Cohen 2013; Sunstein 2014, 2015) almost interchangeably. We start by offering some conceptual preliminaries that distinguish between these claims and set aside those that are clearly false. This clarification will also shed light on how we will be using these terms in the paper.
Four terms ought to be clarified – ‘paternalism’, ‘nudging’, ‘choice architecture/arrangement’ and ‘choice environment/context’. Let’s take ‘paternalism’ and ‘nudging’ together. According to Thaler and Sunstein, an intervention qualifies as a nudge only if it (a) is easy and cheap to avoid (preservation of the option set) and (b) makes individuals better off by their own lights (paternalism) (2008, 6, 10). The first condition ensures that the targeted agents need not invest much effort into resisting the influence. The second condition presupposes means paternalism, according to which it is permissible to interfere with the means by which the targeted agents pursue their ends, but not with the ends that they choose for themselves.Footnote 2 To count as a nudge, then, a behavioral intervention ought to be easily resistible and means-paternalistic.
Consider now the pairing of ‘nudging’ and ‘choice architecture’ (or ‘choice arrangement’). We take nudges to be instances of choice architecture that satisfy the aforementioned qualifications (easy resistibility and means paternalism). Choice architecture, more broadly, is a conscious intervention with predictable behavioral effects, grounded in findings from cognitive science and behavioral economics, but not necessarily constrained by conditions like easy resistibility and means paternalism. ‘Nudging’ is thus subordinate to ‘choice architecture’ – every instance of nudging is an instance of choice architecture, but not vice versa.
Finally, consider the pairing of ‘choice architecture’ (or ‘choice arrangement’) and ‘choice environment’ (or ‘choice context’). Choice architectures are those choice environments that were consciously modified with an awareness of how this would predictably affect behavior. They thus include an element of design. Yet, a choice environment may affect the decision-making of individuals without any conscious intervention whatsoever. The notion simply points to the context dependency of preferences and choices. Once more, the first term is subordinate to the second – every instance of choice architecture is an instance of choice environment, but not vice versa. So, which of these, if any, is inevitable, and does it matter for the permissibility of intervention?
To illustrate this, consider an all-too-familiar example from the nudge literature, the arrangement of cafeteria food items. In Thaler and Sunstein’s original example, Carolyn is an expert on all things behavioral. Imagine that, as director of food services for a large system of schools, she is in a position to instruct the employees of these schools in how food items in cafeterias ought to be displayed. Based on the observation that students tend to choose items that are more visually salient to them, a number of strategies become open to Carolyn – she can arrange items so as to maximize profits, randomize the layout, steer students toward what she conceives to be the best option for them, or toward what they conceive to be the best option for them, among others (Thaler and Sunstein 2008, 2).
To intervene paternalistically seems in no way inevitable. This much is obvious from the example itself, in which only two out of the four available strategies are paternalistic. In this vein, Grüne-Yanoff notes that if ‘the government decides that it has no business in improving people’s welfare through its choice-architecture design, then it does not act paternalistically in this regard’ (2012, 639).Footnote 3
If we can avoid paternalism, then it quickly follows that we can avoid nudging, at least on the narrow definition offered by Thaler and Sunstein that takes all nudging to be paternalistic. But the original definition of nudging might be too narrow. Some of the paradigmatic examples of nudges are not paternalistic in any sense, such as the organ donation default, which clearly benefits someone other than the person being influenced. In fact, it has become commonplace in the literature to attach the ‘nudge’ label to various interventions affecting prosocial and moral behavior (e.g., Guala and Mittone 2015; Nagatsu 2015; Capraro et al. 2019).Footnote 4 Kelly envisions nudges that promote the principles of Rawlsian justice (2013, 223–225), whereas Moles argues in favor of nudges that facilitate the fulfilment of enforceable duties (2015, 659–660). Still, even granting a broader, non-paternalistic variety of nudges would not make nudging inevitable, for choice architects could still fail to preserve option sets and thus run afoul of the easy resistibility requirement. For instance, Carolyn could place desserts in a different location altogether, raising the transaction costs for dessert lovers to the point that the influence is no longer easy to resist (Thaler and Sunstein 2003, 1184).Footnote 5 Nudging is, thus, not inevitable either.
What about the inevitability of the influence of choice environments? Regardless of whether environments are arranged with or without an eye to behavioral influence (or are completely unmodified), influence on the behavior of agents will occur. This much seems true, but trivially so. The inevitability of contextual influence is hardly a point of contention between proponents and opponents of employing nudges and choice architecture. Opponents will admit that individuals cannot escape the effects of contextual influence. They may even grant that influence without a designer can be relevant to considerations of personal autonomy. But there remains a significant moral difference, they would insist, between the inescapable effects of unmodified environments and the effects of environments arranged with an astute awareness of how behavior will be affected. And what of Thaler and Sunstein’s claim that, since contextual influence is inevitable, environments ought to be arranged in welfare-promoting ways? Opponents would likely insist that this is a question-begging conclusion, one that does not follow from the mere fact of contextual inevitability. Hence, the inevitability of contextual influence doesn’t seem to do much for choice architecture and nudge proponents.
Finally, is choice architecture inevitable? On the conception that we explicate in the next section, when choice architects can reliably predict the outcomes of the available choice environments, it will be inevitable for them to assume control over the behavioral effects on exposed individuals. This, we argue, will place unmodified and arranged choice environments on a moral par.
Inevitability and the evidence-based view
Opponents of choice architecture insist that it is at least pro tanto wrong to intervene on choice environments to produce predictable effects. Even if unintended influences and influences by design affect decision-making equally, influences by design contain an added threat to autonomy, since they might make our actions dependent on the wills of others. We now argue that this suggestion is not as straightforward as it may seem. There are some cases in which choice architects will be unable to avoid making the actions of others dependent on their wills.
Let’s demote Carolyn from the position of higher-up official to that of school cafeteria manager. She remains an expert on behavioral influences of all kinds, and successfully predicts not only the effects of the arrangements available to her but also the effects of not intervening, or of randomizing the arrangement. But if she can accurately predict the effects of both her action and her inaction, and chooses inaction so that a particular effect is produced, is there a moral difference between choosing modified and unmodified environments? With such reliable evidence about influences, are Carolyn’s acts and omissions morally distant enough to imply different moral conclusions? We claim that they are not. This gives rise to the evidence-based view, which posits the moral proximity of predictable choice environments, regardless of whether they are modified or not.Footnote 6
The case for inevitability is even stronger in environments that are made entirely ‘from scratch’, with no existing default environment. Imagine that a building is being constructed, or that some new regulation requiring a default is being set up. If choice architects like Carolyn can reliably predict the behavioral effects of the available arrangements, they inevitably have to pick some arrangement knowing what behavioral effects will likely be produced as a result. The case is stronger here because bringing about behavioral effects is constitutive of bringing the arrangement into existence, so no concerns arise about possible differences between action and inaction.
To illustrate how this new kind of inevitability arises, consider the following case:
Disgruntled customer: After reading in a magazine about a cafeteria arrangement that uses visual cues to promote healthy eating, a customer recognizes it in Carolyn’s cafeteria. Disgruntled, he confronts Carolyn and complains that he doesn’t take kindly to being manipulated into food choices through her behavioral schemes. Carolyn responds, however, that she has a fairly good idea how her customers will be steered no matter which arrangement is put into place. She cannot help but pick some arrangement, while knowing what kind of behavior will likely be promoted as a result. While she indeed arranges the cafeteria with behavioral effects in mind, she can hardly be at fault for it, since she had to pick some arrangement while knowing what behavioral effects would thereby be promoted.
Compare Carolyn to Naïve Bill. Naïve Bill has been in Heuristics School for a week, and only has a fairly good idea about the effects of one behavioral arrangement. Imagine now that Bill gets the chance to manage a cafeteria and test his new insight. Unlike in the previous case, the disgruntled customer does seem to have a legitimate complaint against Bill, since Bill could have avoided putting his insight to the test. This means that Bill’s intervention is susceptible to the charges raised by the opponents of nudges mentioned earlier, pertaining to manipulation, value imposition, etc.
The moral proximity of the choice environments available to Carolyn, be they interfered with or merely allowed, is established because, due to the availability of evidence and the predictability of contextual effects, Carolyn cannot in one sense avoid assuming control over the behavioral effects of the choice environment on her patrons. We elaborate this claim further below. For now, we put forward an adjusted version of IA:
The Inevitability Argument: Choice architects who can reliably predict the outcomes of the available arrangements and of the default environment inevitably assume control over the behavioral effects of their designated choice environment on those affected. In such cases, it makes no significant moral difference whether they choose a modified or an unmodified choice environment.
A few words on what we mean by ‘control’ here. There are two senses in which choice architects can be said to assume control. First, it is one thing to have control over a choice environment itself. In this sense, Carolyn and Bill have control over the cafeteria just the same, in virtue of having the power to modify its aspects, as would any other person in charge regardless of their behavioral expertise. Another, very different, sense is having control over the behavioral effects of that choice environment, the kind of control that we seem to be interested in when we discuss nudging and choice architecture more broadly. The first kind of control – that over a choice environment – is necessary for control over behavioral effects, but not sufficient. The latter kind of control has an epistemic requirement – of knowing, or being able to predict, how modifications to the environment will affect those exposed to it. To have control over the behavioral effects on others, we are required to have a sense of how the intervention upon the environment will steer their behavior. In this sense, Bill only has control over behavioral effects when he puts his one insight from Heuristics School to use. Otherwise, he could change the environment without knowing how it would affect behavior, or that it would affect behavior at all. Neither would amount to assuming control in this second sense, that over behavioral effects. This sense, we believe, captures the kind of control that critics of choice architecture usually find disturbing when they raise concerns about, say, manipulation, and it is this kind that we discuss in the remainder of the paper.
Returning to the cafeteria, imagine that the disgruntled customer is unconvinced by Carolyn’s reasonable appeal to the adjusted IA, and gathers like-minded individuals to stage a protest. Carolyn is sacked as a result, and for a short period the cafeteria operates with a skeleton staff, which cluelessly and accidentally puts up an arrangement nearly identical to Carolyn’s. Later, James, a choice architect as knowledgeable as Carolyn, is hired. James sees how the cafeteria has been arranged, and happily leaves it untouched, knowing (by hypothesis) what people are likely to choose by virtue of its effects. But James’s passivity seems no different in moral terms from Carolyn’s activity. The disgruntled customer’s grounds for complaint (or lack thereof) seem to be identical against both managers.Footnote 7
Note that the adjusted IA that we defend isn’t biased in favor of welfarism, pro-social behavior, the prevention of harms or any other positive normative account that favors the implementation of some choice environment. It only establishes that the fact that some environment isn’t modified doesn’t count in its favor. Here we distinguish between the content of the influence and its form. Our argument is that, in these relevant contexts, the permissibility of a choice arrangement depends solely on its content.
Imagine that there are two available school cafeterias, A and B, where the former is not modified by a knowledgeable choice architect and the latter is. B will promote some healthy foods that would otherwise likely be overlooked. Our version of IA simply establishes that, if the choice architect fits our previous description, it will not count in favor of A that it is not modified. Of course, the choice architect must later weigh reasons regarding which normative direction seems best, given the available options. Perhaps the availability of options will favor a means-paternalistic arrangement (Thaler and Sunstein 2008), or one imposing minimal costs (Blumenthal-Barby 2013, 183), or a more ‘natural’ ordering (White 2010, 217). Additionally, the legitimacy of the choice architect’s decision may be constrained by the kinds of reasons that may be offered in favor of some available choice arrangement; perhaps such reasons must be public, so that they can be accessible or acceptable to their targets. But these discussions are separate from a prior one on whether employing choice architecture is itself permissible (Grill 2014, 153). IA in our version would merely deny, for our range of cases, that options are pro tanto wrongfully selected if they are arranged to produce predictable effects. It follows from IA that it is permissible to engage in choice architecture. Thus, on our view, it makes little sense to raise complaints about the utilization of choice architecture itself in the circumstances that we describe (on grounds that it is manipulative, or disrespectful), but perfectly legitimate complaints could still be raised about the normative content that many of these arrangements promote.
Some might suspect that the effects of consciously designed choice arrangements on individuals will contain ‘added force’ (i.e., stronger influence) compared to those of their unmodified predecessors. If choice arrangements are more forceful than unmodified environments, then a pro tanto reason might reemerge to stick to the latter. However, while it is indeed possible for some arrangements to produce stronger effects than some unmodified contexts, this seems entirely contingent. Nudges are often proposed in order to dull the effects of already triggered heuristics, and are supposedly designed to be easily resistible. Imagine that Carolyn’s customers want to follow balanced diets, but in the current setting (one set up by the previous manager with no behavioral aim), they have a hard time resisting their strong temptations for desserts. It would seem that rearranging the cafeteria to promote balanced diets hardly replaces a weaker influence with a stronger one. These kinds of cases, in which a weaker influence substitutes for a stronger one, are not the exception in the nudge literature.
We now turn to further explaining what it is about the availability of evidence and the reliability of prediction that brings about this new version of inevitability for the choice architect.
The importance of being knowledgeable
In the previous section, we argued that the disgruntled cafeteria customer has no justified claim (grounded in manipulation, value imposition, respect, etc.) against knowledgeable choice architects like Carolyn and James, but does have one against a rookie architect like Naïve Bill. The former are absolved, we suggested, because they reliably predict the behavioral effects of available choice environments, making it inevitable for them to assume control over these effects. But what might be the kind of knowledge, or the level of competence, that sets apart Carolyn and James from Bill? Is such a level realistic enough to be relevant for practical considerations? And why does it deflect the charge pressed by the disgruntled customer? In this section, we provide more detail about the kind of evidence that grounds the adjusted IA.
Let’s start with the first question. Responding to the disgruntled customer, Carolyn says that she has a ‘fairly good idea’ about how her customers will be steered no matter which arrangement is put into place. How might we understand Carolyn’s notion of a ‘fairly good idea’? At the very least, Carolyn will have a good track record of predicting which available option will be most promoted by some environmental adjustment, and which option will be most impeded. At best, she will be able to predict changes in her customers’ general preference ordering that result from the adjustment. To reliably predict this, Carolyn will need to have a good grasp not only of how particular environments trigger heuristics, but also of what the predominant preferences and values are in a given community. In short, choice architects ‘must have a sufficient grasp of the scientific material and a good understanding of how people think in particular situations’ (Selinger and Whyte 2010, 469). However, note that for the purposes of this paper, we are not looking to provide a comprehensive conception of choice-architect expertise. We are only interested in the nature and level of competence needed in some relevant choice context for our version of inevitability to occur. Note that the cafeteria might be the only context in which inevitability obtains for Carolyn. Inevitability within that context may simply be the result of evidence being more easily attainable for Carolyn, or of there being only a few arrangement options available.Footnote 8 This, of course, is not to deny that greater competence among choice architects will typically breed more occurrences of inevitability.Footnote 9 As Grill notes, inevitability seems to be ‘strengthened by the fact that our knowledge of behavioral psychology is steadily increasing and spreading’ (2014, 143). Or as Blumenthal-Barby has recently stated:
once behavioral science helps us gain insight into how choice is affected, intentionality is forced, in a sense. It becomes increasingly difficult for us to maintain that we did not know how various factors in the choice architecture would impact […] choice. […] Given that we then have to make a decision about how to set things up, we are forced to engage in nudging or shaping choice one way or the other. (2021, 67)
Here’s how Carolyn’s required level of expertise might be illustrated. Suppose that Carolyn is arranging a cafeteria to promote vegan dishes in a community mostly populated by meat eaters. She judges that there are two prominent spots in which she can display the food, and that these can be ordered by degree of prominence. However, it occurs to Carolyn that placing the vegan dish in the most prominent spot is unlikely to maximize the chance of the dish getting picked, since it increases the chance of ‘reactance’ among meat-eating customers, i.e., the likelihood that they will notice that they are being steered and pull in the opposite direction (Sunstein 2014, 154). Being able to predict this, and knowing that she can do more to promote the dish by avoiding consumer reactance, Carolyn places it in the second most prominent spot.
Still, we noted that Carolyn only has a ‘fairly good idea’ about how various choice environments will steer her customers. We understand this notion as permitting the occasional mistake, the occasional overlooking of relevant factors bearing on decision-making, or the inability to express the prediction in statistical terms. Yet a rough prediction of this kind still seems sufficient for saying that Carolyn cannot avoid considering behavioral effects in picking a choice arrangement. This is what we will take as the lowest threshold for Carolyn to make ‘reliable predictions’.
One more important qualifier remains. Carolyn will not be reliably predicting the behavior of each individual patron in any particular instance of choice. She won’t be able to say how the cafeteria arrangement will influence the disgruntled customer specifically, and what food he’ll personally end up picking as a result. To know this, she would have to be intimately familiar not only with his food preferences but also with the values that may guide him in making food choices. Instead, Carolyn can make fairly accurate predictions about how changes in the arrangement will shift preferences at the collective level. This is similar to how choice contexts are arranged for gamblers. Architects of gambling contexts can manage the gamblers’ environment so that they don’t quit while they’re ahead, and arrange things so that, over a series of plays, gamblers in most cases end up with less money than they started with. But that doesn’t mean architects can reliably predict of any particular person that she will pull the lever on a slot machine or stay at the roulette table.
We noted earlier that the adjusted IA will only apply ‘in some cases’. This is because there is a limited number of cases in which choice architects can be expected to have fairly good ideas about the effects of the available options on those exposed. In such cases, evidence will be more easily attainable, arrangement options will be limited, and the effects of influence more stable across instances. It comes as no surprise that defaults and cafeteria arrangements – Thaler and Sunstein’s go-to examples – best fit this description. Inevitability may also occur for physicians in their interactions with patients, as some nudge proponents point out (Brooks 2013; Cohen 2013).
Let’s turn now to the second question – why does inevitability deflect the charges of the disgruntled customer? In a nutshell, the knowledgeable choice architect is accountable for the effects of choice environments, regardless of whether she actively brought them about or merely allowed them; since she is able to predict them, she can cause and prevent them. She is accountable for them in much the same way as she would be for the secondary effects of environments that she does not intend but is able to foresee. Since omissions and active interferences are thus placed on a moral par, or so we suggest, she shouldn’t be charged for merely having used choice architecture. Given her knowledge, acting and omitting are sufficiently morally close.Footnote 10
We hint here that there may not be a significant normative difference between the expert who reliably foresees an effect and the expert who intends that same effect. Philosophically, this is an altogether different concern from that of the moral significance of acts as opposed to omissions. One might retort that there is an important moral difference between intending a harmful behavioral effect and merely foreseeing it as a side-effect of a choice architecture arranged primarily for, say, aesthetic purposes. The significance of intentions has, indeed, a long pedigree (see Quinn 1993; Tadros 2015; for criticisms of the significance of intentions, see Scanlon 2010). The literature on it is vast, and we cannot do justice to it here. We limit ourselves to surmising that in circumstances of reasonable certainty about side-effects – as with the knowledgeable choice architect who has a fairly good idea how the persons exposed would be affected by the different available arrangements – the moral significance of intending an effect, as opposed to merely foreseeing it, is at least downplayed. Reliable prediction, we suggest, downplays the significance of intention as opposed to mere foreseeability, just as it was shown earlier to downplay the significance of acts as opposed to omissions. While these are separate normative points, they largely overlap in the cases we presented here.
The downplaying of the significance of intentions in cases of reliable prediction can be supported by at least two points. First, the observation that foreseeing may sometimes be morally on a par with intending is at least partly confirmed by Knobe’s experiments (2003), which show that merely foreseen harm is often perceived as (being on a par with) intended harm; in other words, being able to predict harm as a secondary effect of one’s action is often perceived by test subjects as the same as intending that harm.Footnote 11 Second, we should reiterate Blumenthal-Barby’s claim that with added insight, ‘intentionality is forced, in a sense’, and the choice architect is ‘forced to engage in […] shaping choice one way or the other’ (2021, 67); this is to say that being able to reliably foresee how an environment affects choice, and having to introduce some choice environment with this knowledge at hand, is akin to intending. There are two ways to understand this claim. The first suggests a moral closeness between intending and merely foreseeing brought about by new insights in behavioral science, which resembles our own claim. The second understanding points to an epistemic closeness – intending and merely foreseeing are not always clearly distinguishable (see, e.g., Fitzpatrick 2006). If Carolyn ignores the foreseeable harms that might befall patrons with hemochromatosis (a case we describe below), it’s not clear whether she intends them or merely foresees them. Both understandings support the downplaying of the significance of intended as opposed to merely foreseen effects in the cases at hand.
We will remain agnostic on just how much the reliable predictability of foreseen effects downplays the significance of intentions as opposed to mere foreseeability. For our purposes, it seems enough that, in the cases at hand, the distinction does not affect permissibility, and that the burden of proving a difference that affects permissibility falls on those whose intuitions about the effects of reasonable certainty regarding side-effects, and about inevitability, conflict with ours.
We finish this section with some claims from the nudge literature that seem to point in a similar argumentative direction to ours. Saghai argues that if secondary behavioral effects may lead to significant costs for some individuals, the choice architect (or her superior) would be accountable for bringing the arrangement about even if these effects were unintended but merely predictable (2013, 492). Consider such a case – a cafeteria manager has the option of prominently positioning iron-supplemented food that would benefit the majority of patrons, but could be very harmful to those suffering from hemochromatosis, a rare condition in which the accumulation of iron adversely affects vital organ systems (Salvat 2008, 11). But if the cafeteria manager is accountable for allowing unintended yet predictable adverse secondary effects, the point seems to carry over from actively bringing about arrangements to allowing the effects of unmodified environments – the cafeteria manager would be just as accountable for allowing the predictable adverse effects of unmodified choice environments as she would be for the environments she actively brings about.
It might seem odd that while Carolyn is accountable for secondary effects and for the effects of unmodified environments, she is somehow absolved of the disgruntled customer’s charge. This is because she cannot be at fault for inevitably picking one available choice environment with behavioral effects that are predictable to her. She must answer for the ways in which these environments have been arranged, but it makes little sense for her to answer for having arranged them in the first place.Footnote 12
Objections
In this section, we clarify our position on inevitability further by addressing some objections from the literature.
Engaging with choices reflectively
The first objection is often raised in discussions of nudge inevitability. Specifically, Mitchell has argued, and others have followed suit (Gelfand 2016, 605; Holm 2017, 38–39), that contextual influence is inevitable only insofar as ‘individuals remain subject to these irrational influences’ (Mitchell 2005, 1251). But individuals can overcome such influences. For instance, ‘simply asking people to give reasons for their choices can reduce the influence of gain/loss framing effects’ (ibid., 1256).Footnote 13 Indeed, the notion that targeted individuals can rise above heuristic triggers has led some authors to believe that nudging is permissible only insofar as it is in some sense transparent, allowing dissenters to dodge nudges with which they disagree (Schmidt 2017; Ivanković and Engelen 2019). In addition, ‘boost’ proponents have recommended improving people’s decision-making competencies so that they are able to avoid heuristic triggers (Grüne-Yanoff and Hertwig 2016). All these proposals might call into question not only our version of IA, but the inevitability of contextual influence as well.
We do not wish to challenge either the empirical grounding or the moral desirability of these proposals. However, they would not render our considerations of inevitability moot. Imagine that aside from arranging the cafeteria, Carolyn instructs her staff to put up posters at entry points, warning students about the ways in which their food choices can be influenced. She also instructs them to verbally prompt students to think about the reasons for their food choices. Jeff, a high school senior, stops to inspect the posters several days in a row. He takes the time to listen to the cafeteria staff and heeds their advice…for a while. Later, he becomes preoccupied with getting good scores on finals and with his social life. The prompts lose their novelty. Eventually, they fall into the background and fail to stir up Jeff’s reflective capacities. Once again, the cafeteria arrangement becomes significant for his food choices. Now, the objectors might say that Jeff allows the choice architect to take at least partial control over his food choices. The prompts give him a chance to overcome the influence, which he chooses to ignore. But this response would overlook the fact that Jeff may fail to pay attention as a result of his reflection being redirected. Imagine that, instead of becoming preoccupied, he is faced with numerous choice environments in which architects are seeking to stir up his reflective capacities. Surely, he cannot engage with all these environments, since his capacity for reflective engagement is a limited resource; in other words, Jeff has a limited mental bandwidth (Mullainathan and Shafir 2013). While Mitchell might be right about the possibilities of overcoming heuristic triggers, individuals will only be able to do so in a limited number of cases. IA, as we conceive it, will then remain significant beyond an individual’s capacity for reflection, i.e., once he uses up all of his cognitive resources. And while individuals might be prompted more often in some types of choice environments than in others, e.g., because some decisions are more and some less weighty, they will allocate their reflective resources across choice environments as they see fit. Any one choice arrangement may be engaged with reflectively by some, and subtly trigger the heuristics of others. Hence, considerations of inevitability may remain significant for any one of these choice arrangements.
Eliminating choice architecture
Opponents of nudges and choice architecture more broadly might grant us that the adjusted IA stands, but only if we hold onto our stipulation that a knowledgeable person remains in the role of choice architect. But why stick to such a stipulation? There are at least two conceivable ways, the objection goes, in which we can rid ourselves of the choice architect – by keeping behavioral experts from becoming choice architects, or by randomizing layouts. While there are separate practical and moral considerations involved in each of the two methods, they would prevent inevitability from occurring for choice architects, essentially by eliminating choice architecture. If true, then there isn’t a context in which choice architecture is truly inevitable.
In a general sense, the objection holds water. The possibility of taking the choice architect out of the equation somewhat reinforces the original charge that there is a significant moral difference between unmodified environments (including arrangements without any thought given to behavioral influence) and environments modified to produce specific behavioral effects. Triggering heuristics in circumstances of predictable effects is no longer morally unproblematic if we can in fact ‘work our way around’ inevitability. However, in many cases, the elimination of choice architecture is either inconceivable or deeply undesirable.
Consider first the possibility of keeping behavioral experts away from positions in which inevitability would manifest for them. A proponent of this method would suggest that it was right all along to sack Carolyn and wrong to hire James, the two highly knowledgeable experts. To avoid possibly manipulative or value-imposing influence, a principle should be upheld that denies behavioral experts employment that includes the arrangement of choice environments. But this would become very difficult to accomplish as behavioral expertise spreads. As Grill points out, avoiding choice architecture becomes ‘more and more difficult as behavioral insights are disseminated through the population’, and although policy-makers can require the consideration of behavioral effects to be ousted from design decisions, ‘such requirements will be difficult to monitor’ (2014, 143–144).Footnote 14
On a different note, if some individuals with behavioral insights managed to land these jobs regardless, it would be more desirable, morally and practically, to have people as competent as Carolyn as choice architects rather than Naïve Bills. In other words, there are good reasons to suggest that if having some behavioral expertise on board is hardly avoidable, then it’s better to have as much of it as possible. On the one hand, as we’ve established in this paper, Carolyn is more likely to find herself in circumstances of inevitability, absolving her from the charges raised by the disgruntled customer. On the other hand, some argue that individuals who find themselves in the role of choice architect, and who are aware of ubiquitous influence on behavior, must arrange choices responsibly (Hansen and Jespersen 2013, 23), while others suggest this can be done only at a level of competence that can be trusted (Selinger and Whyte 2010, 462). Blumenthal-Barby suggests that arranging the choice environment ‘should be based on data about satisfaction and happiness levels across various outcomes’ (2013, 196).
But even if it were insisted that individuals with behavioral insights could conceivably be kept away from choice architecture positions, thereby overcoming IA, it would still be far from conclusive that all the possible benefits of choice architecture should be entirely forgone to overcome the threats of manipulation or value imposition. Nudges are often proposed in situations in which ‘people’s psychological set-up predictably leads them astray—failing to live up to […] their own professed values and ideals’ (Engelen et al. 2018, 351). Giving up on choice architecture will often entail leaving individuals at the mercy of their own psychological deficiencies or, in some cases, exposing them to the exploitation of their heuristics by profit maximizers.Footnote 15 And even if their heuristics are not strictly used to harm them, individuals would still be missing out on guidance in important areas, including health, wealth and safety.
Alternatively, some authors believe we can avoid inevitability by randomizing the layout. Randomizing is said to offer choice contexts in which no person is ‘under anyone else’s control’ (Wilkinson 2013, 343). The merit of a randomized layout is neutrality, in the sense that choosers are not consciously steered toward any option by a designer.
While randomization seems conceivable in some circumstances, it might be more difficult to imagine in others. If a landscape architect were designing a park, aware of the behavioral effects that the available design options might bring about, it seems hard to envisage what ‘randomizing a park layout’ would entail. Or consider a doctor aware of the various framing effects at work when presenting a diagnosis or therapy options – once the doctor’s help has been sought, it hardly seems imaginable that he can randomize the frame (Cohen 2013, 9). Finally, to return to our favored example, it’s not at all clear how we should envision a randomized cafeteria.
In many other circumstances, randomization may just seem ‘silly’ (Grill 2014, 143) – perhaps overcoming inevitability through randomization would be conceivable (and thus could eliminate choice architecture), but hardly favorable. If an environment were truly randomized, then the choice architect would lack veto control over the randomized layout, and this, at worst, would risk authorizing environments with predictably harmful effects (as the iron-supplemented food arrangement is harmful to people with hemochromatosis). Such harmful arrangements could be disqualified, but this would require at least some degree of interference from the choice architect, thereby ruling out randomization in the strongest sense. A weaker kind of randomization might entail picking one of the remaining arrangements at random or randomly alternating between arrangements. However, if we’ve already ‘dirtied our hands’ with choice architecture to eliminate harmful arrangements, why not also rule out arrangements completely lacking in benefit? But it might be questioned at this point, or even at the previous one, whether such randomization truly eliminates choice architecture.
Hence, it’s not at all obvious that the gains from avoiding inevitability in the ways described would outweigh the very significant costs. The objection does, however, force us to concede that eliminating the choice architect is in some cases conceivable, regardless of how normatively preposterous eliminating choice architecture may seem in those cases. Be that as it may, our adjusted IA remains relevant at least for a number of cases in which elimination is inconceivable.Footnote 16
Conclusion
We have argued in this paper that, in conditions in which choice architects can make reliable predictions about the behavioral effects of the available choice environments, it becomes inevitable for the architect to pick some choice environment that generates some predictable behavioral effect. In such cases, our argument goes, there is no significant moral difference between picking an unmodified and a modified choice environment, or between intending and merely foreseeing behavioral effects. Our version of the IA, grounded in the evidence-based view, is itself neutral regarding normative content – it says nothing about the direction in which choice environments should steer. The argument only shows that, in many cases, because of inevitability, the debate should skip questions of form altogether and turn to questions of content.
Acknowledgements
The paper is in part inspired by sections from Ivanković’s doctoral dissertation The Liberal Politics of Behavioral Enhancement (Central European University, 2019). We would like to thank Tom Douglas, Tom Parr, Kalle Grill, Anamarija Komesarović, Lovro Savić, Aleksandar Simić, Maximilian Kiener, and two anonymous reviewers for their insightful comments and suggestions, as well as the audiences at the ‘3rd Polemo Conference 2021’ (Central European University, June 2021), the ‘Society of Applied Philosophy Annual Conference’ (SAP, July 2021), and the ‘Influenthics seminar’ (Université Catholique de Lille, September 2021).
Funding statement
Ivanković was supported by the project Ethics and Social Challenges (EDI) at the Institute of Philosophy, reviewed by the Ministry of Science and Education of the Republic of Croatia and financed through the National Recovery and Resilience Plan 2021–2026 of the European Union – NextGenerationEU.
Competing interests
The authors declare that they have no competing interests.