
Experiencing default nudges: autonomy, manipulation, and choice-satisfaction as judged by people themselves

Published online by Cambridge University Press:  19 March 2021

Patrik Michaelsen*
Affiliation:
Department of Psychology, University of Gothenburg, Gothenburg, Sweden
Lars-Olof Johansson
Affiliation:
Department of Psychology, University of Gothenburg, Gothenburg, Sweden
Martin Hedesström
Affiliation:
Department of Psychology, University of Gothenburg, Gothenburg, Sweden
*Correspondence to: Department of Psychology, University of Gothenburg, Box 500, 40530 Gothenburg, Sweden. E-mail: [email protected]

Abstract

Criticisms of nudging suggest that nudges infringe on decision makers’ autonomy. Yet, little empirical research has explored whether people who are subjected to nudges agree. In three between-group experiments (N = 2083), we subject participants to contrasting choice architectures and measure experiences of autonomy, choice-satisfaction, perceived threat to freedom of choice, and objection to the choice architecture. Participants who received a prosocial opt-out default nudge made more prosocial choices but did not report lower autonomy or choice satisfaction than participants in opt-in default or active-choice conditions. This was the case even when the presence of the nudge was disclosed, and when monetary choice stakes were introduced. With monetary choice stakes, participants perceived the threat to freedom of choice as slightly higher in the nudge condition than in the other conditions, but objection to the choice architecture did not differ between the conditions. Taken together, our results suggest that default nudges are less manipulative and autonomy-infringing than sometimes feared. We recommend that policymakers include measures of choice experiences when testing out new interventions.

Type
Article
Creative Commons License: CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © The Author(s), 2021. Published by Cambridge University Press

Introduction

Unethical behavior change interventions should not be used for policy. The class of policy interventions known as nudges, interventions that influence behavior by changing cues in choice environments, has received much attention owing to fears that they may fail to live up to ethical standards (for book-length discussions, see Rebonato, 2012; Conly, 2013; White, 2013; Sunstein, 2014, 2015, 2016). The stakes of the accusation are high, as nudges or similar behaviorally informed interventions are already in use by over 200 governmental units and initiatives around the world (OECD, 2019).

In the nudging and ethics debate, most criticisms cluster around two lines. One considers nudging manipulative, infringing on rational decision-making capabilities, and threatening freedom of choice. A second considers nudging paternalistic, overriding people's means or ends for ones preferred by the nudger. Central to both charges is that nudging is claimed to pay insufficient respect to individuals’ autonomy.

A common response from proponents of nudging is that environmental influences on one's decisions are inescapable, so that a nudge is no more manipulative than the absence of a nudge (Thaler & Sunstein, 2008). If we cannot avoid influence, then (arguably) benevolent intervening is preferable to an arbitrary design that nonetheless influences people. Other responses suggest that autonomy can be retained by making the nudge sufficiently transparent and easy to bypass.

These arguments, and more, have been scrutinized at length in the theoretically driven ethics debate that has flourished in the last decade (for recent overviews, see Lades & Delaney, 2020; Schmidt & Engelen, 2020). However, it can be questioned whether a nudge's level of respect for autonomy should always be assessed solely on theoretical grounds. A contrasting, empirically oriented, view is that an individual can be their own best judge of whether autonomy is retained (cf. '… better off as judged by themselves', Thaler & Sunstein, 2008, p. 5, italics added). Or at least, that ethical assessment should include the individual's perspective. If merely a weak form of this 'subjectivist' view is accepted, then the autonomy issue calls for empirical data.

To date, empirical work on people's perceptions of nudging mainly consists of survey studies wherein participants rate descriptions of nudges, without actually experiencing them first-hand. This research finds that the most common nudges receive majority support in most countries studied (Hagman et al., 2015; Sunstein et al., 2017). Nonetheless, nudges targeting automatic cognitive processes, while broadly deemed 'acceptable', are at the same time perceived as threatening to autonomy and freedom of choice (Jung & Mellers, 2016). This includes the common use of opt-out defaults (where a desired course of action is preselected, subject to opt-out; Jachimowicz et al., 2019), such as in applications to organ donation, carbon emission offsets, and retirement savings (Hagman et al., 2015; Yan & Yates, 2019).

The survey approach of having participants read and rate descriptions of nudges is undoubtedly valuable for informing policymakers of people's opinions on nudges. It fails, however, to address how people experience being subjected to a nudge, such as whether or not they feel in control and autonomous when making the decision. Even if autonomy can be measured as a projected belief in a survey (i.e., 'I think I would be in control making that decision'), there is reason to doubt how well that assessment translates into real-world experience (Wilson & Gilbert, 2003; Patil et al., 2014; Francis et al., 2016). For this reason, we suggest that exploring actual experiences of being nudged is a valuable complement to the survey approach. It may even be more ethically informative: when the focus is on how a nudge affects a person, a first-hand view is a much more direct measurement. Very few studies, however, compare experiences of autonomy in a nudge versus non-nudge experimental design. In both cases that we are aware of, we furthermore believe that the conclusions that can be drawn about the ethicality of nudging are restricted by methodological limitations.

First, one experiment by Arvanitis et al. (2020) indicates that opt-out default nudges may be detrimental to people's experienced autonomy. In their study, participants faced a hypothetical choice of health insurance plans. When there were three health plans to choose from, participants default-nudged by having one plan preselected (vs. no default plan) gave significantly lower ratings on one of three autonomy subscales. No difference was found on either of the other two autonomy subscales, however, and the negative effect of the default nudge vanished when participants faced nine options to choose from. Taking the limited sample size of this study into account (35 participants per cell; 139 in total), we suggest that further evidence corroborating the robustness and scope of this finding is needed before ethical and policy conclusions are attempted.

Second, Abhyankar et al. (2014) provided suggestive evidence that an opt-out default (vs. opt-in and no-default) may have little or no effect on experienced autonomy. The study did not, however, focus solely on the influence of the choice format (the nudge), but sought to evaluate a broader choice process. It is, therefore, not possible to isolate the default nudge's effect from other aspects of the choice process, which also included a second, non-defaulted opportunity for participants to state their preference.

Other studies have investigated how increasing the transparency of a nudge affects how it is perceived, or have compared the nudge with other types of interventions. These studies show that the level of transparency of a default nudge may not affect experiences of autonomy or choice-satisfaction (Wachner et al., 2020), nor perceptions of threat to freedom of choice (Bruns et al., 2018). Compared with a mere recommendation of the same course of action, however, a default nudge has been found to be subjectively more freedom-threatening (but also less so than a mandate; Bruns & Perino, 2019).

Finally, at least two empirical studies have inferred, but not measured, reactant behavior in response to opt-out defaults (Hedlin & Sunstein, 2016; Arad & Rubinstein, 2018). It is not clear whether participants experienced their autonomy as affected, however. If participants acted contrarily, showing reactance toward the nudge, their subjective autonomy would presumably have remained intact, as they actively rejected the intervention.

In sum, research on experiences of nudges is scarce, and methodological issues leave important questions unanswered. To the extent that we are correct in identifying autonomy as of central relevance to the ethics of nudging, further research on this issue seems warranted.

Overview of present studies

Our aim is to investigate how opt-out default nudges affect experiences of autonomy, choice-satisfaction, perceived threat to freedom of choice, and objection to the choice format, when the nudge is experienced by people first-hand. Default nudges are among the most effective at influencing behavior (Hummel & Maedche, 2019), which makes them of particular interest in relation to how well autonomy is respected. We present three experiments (total N = 2083) where participants subjected to opt-out default nudges are compared with participants subjected to opt-in or no-default active-choice formats. Studies 2 and 3 further manipulate the transparency of the intervention by disclosing the choice format intervention and its anticipated effect on choice. Study 4 meta-analyzes the experience and perception results, and tests for noninferiority.

Study 1: Experiencing a proenvironmental default nudge

Study 1 provides a first test of how contrasting choice formats affect people's experiences and perceptions when choosing. In a between-groups design, we compare the opt-out default nudge both with the 'business-as-usual' opt-in default and with a choice-requiring format without any default set.

Method

Participants

Participants were recruited through Amazon Mechanical Turk (MTurk) and paid $0.40. We aimed to recruit 100 participants per condition. Expecting 20% loss from attention check failures, we requested 360 responses (362 complete responses received). Three attention/comprehension checks were used. Specifically, we excluded participants who failed to report: (1) how the choice task was formatted (56 failed), (2) what was chosen in the choice task (>2 off from what was recorded; 52 failed), or (3) to ‘select number 2’ on a scale when instructed to (12 failed). Our final sample consisted of 290 participants (M = 36.47 years old, SD = 11.28; 44% women). The exclusions were, however, disproportionate, in that opt-out participants were more likely to incorrectly answer how the choice was formatted. This rendered group sizes uneven: 60 (opt-out), 115 (active choice), and 115 (opt-in). For reference, the lowest powered two-group comparison thus had 80% power to detect an effect of Cohen's d = 0.45.
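The sensitivity figure above can be reproduced with a standard power calculation. The sketch below is illustrative only and assumes the R package pwr, which the article does not name; it uses the two group sizes from the most uneven comparison reported above.

```r
# Smallest detectable effect (80% power, alpha = .05, two-sided) for the
# most uneven two-group comparison in Study 1: opt-out (n = 60) vs. either
# of the other conditions (n = 115). Solving for d reproduces d ~ 0.45.
library(pwr)

pwr.t2n.test(n1 = 60, n2 = 115, sig.level = 0.05, power = 0.80)
```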

Procedure and materials

Participants imagined moving to a new apartment. The apartment would be prefitted with several appliances, and participants chose between having each appliance in an environmentally friendly (green) or standard (nongreen) version (similar to Steffel et al., 2016). The green versions of 10 appliances were displayed in a list (e.g., energy-efficient dishwasher, low-flow faucets). Depending on the experimental condition, the default was either to (1) receive the green version (opt-out condition), (2) not receive the green version (opt-in condition), or (3) no default: participants had to explicitly state Yes or No to receiving the green version (active-choice condition). In the opt-out and opt-in conditions, participants could reject the default and receive the other version by ticking a box next to each appliance. Thus, if 'energy-efficient dishwasher' was manually ticked by a participant in the opt-out condition, the nongreen version was chosen; if manually ticked in the opt-in condition, the green version was chosen. In all conditions, the green appliance version came with a cost ($1–$7), to be added to or subtracted from the monthly rent.

Subsequent to choosing appliances, participants answered questions regarding experienced autonomy, choice-satisfaction, and perceived threat to freedom of choice. Experienced autonomy pertains to the experience of control in, and deliberateness of, the decision made. The 6-item scale was adapted from Cornwell and Krantz (2014; adjusted per recommendations from Felsen et al., 2013). Choice-satisfaction pertains to contentment with the decision made and was measured with a single item. Perceived threat to freedom of choice pertains to whether the choice environment was perceived as trying to exert influence, regardless of whether this was judged to affect the choice made. The 4-item scale was taken from Dillard and Shen (2005). All items used 9-point scales with labels at end points. Items for all scales can be found in the Supplementary Material (pp. 17–18), and the full stimulus material is available at osf.io/69be8.

Presentation order for experienced autonomy and perceived threat to freedom of choice was randomized. Choice-satisfaction was measured before these two scales. As exploratory measures, we also included some individual-difference measures pertaining to control in decision making. For brevity, analyses for these measures are placed in the Supplementary Material.

Results

All analyses were conducted as analysis of variance (ANOVA) comparisons between choice format conditions (opt-out vs. opt-in vs. active choice), unless otherwise stated. All post hoc analyses used the Tukey HSD test. Descriptive results can be found in Table 1, frequency distributions in Figure 1, and additional analyses and visualizations in the Supplementary Material. The internal consistency was high for both experienced autonomy (α = 0.87) and perceived threat to freedom of choice (α = 0.93).
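To make the analysis pipeline concrete, the following sketch shows how the reliability check, the one-way ANOVA, and the Tukey HSD follow-ups could be run in R. It is not the authors' code (which is available at osf.io/69be8); the data frame d1 and its column names are hypothetical stand-ins.

```r
# Hypothetical data frame 'd1': one row per participant, with columns
# 'condition' (opt_out / active_choice / opt_in), 'green_chosen' (0-10),
# and the six autonomy items 'autonomy1' ... 'autonomy6'.
library(psych)  # for Cronbach's alpha

d1$condition <- factor(d1$condition,
                       levels = c("opt_out", "active_choice", "opt_in"))

# Internal consistency of the experienced-autonomy scale
alpha(d1[, paste0("autonomy", 1:6)])

# One-way ANOVA comparing the three choice format conditions
fit <- aov(green_chosen ~ condition, data = d1)
summary(fit)

# Tukey HSD post hoc comparisons
TukeyHSD(fit)
```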

Table 1. Means and standard deviations for dependent variables across studies.

Values in the same row not sharing a subscript are significantly different at p < 0.05.

Figure 1. Frequency distributions for all dependent variables in Study 1, separated by condition.

Choice

There was a significant influence of choice format on how many green appliances participants chose, F(2, 287) = 32.39, p < 0.001, ηp² = 0.184. Post hoc comparisons showed significant differences between the opt-out condition (M = 6.75, SD = 2.96) and both the opt-in (M = 3.38, SD = 2.28; p < 0.001) and the active-choice condition (M = 4.23, SD = 2.82; p < 0.001). The difference between opt-in and active choice was also significant (p = 0.043).

Experienced autonomy

Participants did not significantly differ in experienced autonomy between the opt-out default (M = 7.88, SD = 1.42), opt-in default (M = 7.55, SD = 1.40), and active-choice (M = 7.78, SD = 1.17) conditions, F(2, 287) = 1.51, p = 0.222, ηp² = 0.010.

Choice-satisfaction

There were no significant differences between the opt-out (M = 8.13, SD = 1.13), opt-in (M = 7.72, SD = 1.45), and active-choice conditions (M = 7.78, SD = 1.36), F(2, 287) = 1.95, p = 0.145, ηp² = 0.013.

Perceived threat to freedom of choice

Similarly for perceived threat to freedom of choice, no significant differences were found between the opt-out (M = 2.33, SD = 2.03), opt-in (M = 2.16, SD = 1.82), and active-choice conditions (M = 2.50, SD = 2.00), F(2, 287) = 0.86, p = 0.425, ηp² = 0.006.

Additional analyses

As shown in Figure 1, there were no signs of bimodal distributions in the opt-out (nudge) condition for any of the experience and perception variables. Instead, all distributions were skewed toward favorable evaluations. Correlations between the number of green appliances chosen and the other dependent variables read as follows: autonomy: r = 0.18, p = 0.002; choice-satisfaction: r = 0.16, p = 0.007; and perceived threat to freedom of choice: r = −0.11, p = 0.057. The patterns were highly similar in the opt-out and other conditions (see Supplementary Material for details).

Discussion

Study 1 found that while structuring the choice in an opt-out format had a sizeable influence on choices, participants’ experiences of autonomy, choice-satisfaction, and perceived threat to freedom of choice did not significantly differ from those subjected to an opt-in or active-choice format. Ratings were favorable overall, with high reports of autonomy and satisfaction and low perceptions of threat to freedom of choice. As visualized in Figure 1, this was the case for the whole sample, without notable subgroups reacting aversively.

We suggest that these findings could most plausibly be due to one of three explanations: (1) small or nonexistent true effects (which would be positive from the perspective of nudging), (2) a lack of recognition of the nudge, leading to a weak experimental manipulation, or (3) a lack of engagement with the hypothetical choice task. In the next two experiments, we attempt to shed light on these issues.

Study 2: Do choice experiences deteriorate when intervention transparency is increased?

Study 2 extends Study 1 by increasing the sample size and manipulating the transparency of the intervention. Before choosing, half of the participants are explicitly informed of the choice formatting and how it may influence choice.

Method

Participants

In total, 722 participants who had not taken part in Study 1 were recruited from MTurk and paid $0.45 for participation. We excluded participants who failed to report (1) how the choice task was formatted (87 failed) or (2) how many green-version appliances they chose (>2 off from what was recorded; 66 failed). After exclusions, the final sample consisted of 606 participants (M = 35.8 years old, SD = 11.23; 48.3% women). Experimental instructions were clarified to avoid the previous higher failure rate for opt-out participants, and group sizes ended up more even: 179 (opt-out), 203 (active choice), and 224 (opt-in).

We also included an item assessing comprehension of the choice format disclosure (134 of 355 participants failed, 37.7%). Results are reported both for the full sample and specifically for participants who received the disclosure and passed this check. We report both selections in the main text because we lacked a preregistered plan for dealing with a failure rate of this magnitude. Simply excluding failing participants from all analyses is not desirable, as it would unbalance comparison groups and risk introducing spurious effects driven by selection biases. We also believe that the two selections answer separate and interesting questions: the full-sample analysis informs what may be expected if a nudge disclosure is offered (in many real-world situations, many people may not engage with such information; see Page, 2019), and the selective-sample analysis informs what people who are evidently aware of the nudge think. The first question may be primary for policy, while the latter is more psychologically interesting. The smaller selection can also be seen as a robustness check for the results of the full sample. Descriptives separated for participants undisclosed, disclosed, and disclosed and passing the comprehension check can be found in Appendix A, and the data are available at osf.io/69be8.

Procedure and materials

Study 2 used the same apartment acquisition scenario as Study 1. The design was expanded by also manipulating the transparency of the intervention, resulting in a 3 (choice format: opt-out vs. opt-in vs. active choice) × 2 (transparency: disclosure present vs. absent) between-groups design. Specifically, before choosing appliances, half of the participants in each condition were presented with a text box disclosing (1) that how a choice is formatted may influence people's choices, and (2) in which direction the influence could be expected (low/average/high amount of green appliances chosen). For instance, opt-out condition participants received information that preselecting an option makes the option more likely to be chosen, and that this here would lead to more green appliances being chosen. Wordings for each disclosure can be found in the Supplementary Material (pp. 18–19). After making appliance choices, participants answered the same measures as in Study 1 (Footnote 1).

Results

All main analyses were conducted as 3 (choice format: opt-out vs. opt-in vs. active choice) × 2 (transparency: disclosure present vs. absent) ANOVAs. Tukey HSD was used for all post hoc tests. Internal consistency was high for both experienced autonomy (α = 0.87) and perceived threat to freedom of choice (α = 0.92). Descriptive results can be found in Table 1 and frequency distributions in Figure 2.
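As an illustration of the 3 × 2 factorial analyses, a hedged sketch is given below; the data frame d2 and its column names are hypothetical, and partial eta squared is computed directly from the ANOVA table rather than with any particular package.

```r
# Hypothetical data frame 'd2' with columns 'autonomy' (scale mean),
# 'format' (opt_out / active_choice / opt_in) and 'transparency'
# (disclosed / undisclosed), both coded as factors.
fit <- aov(autonomy ~ format * transparency, data = d2)
summary(fit)

# Partial eta squared per term: SS_effect / (SS_effect + SS_residual)
ss <- summary(fit)[[1]][["Sum Sq"]]
setNames(ss[1:3] / (ss[1:3] + ss[4]),
         c("format", "transparency", "format:transparency"))

# Tukey HSD follow-up on the choice format factor
TukeyHSD(fit, which = "format")
```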

Figure 2. Frequency distributions for all dependent variables in Study 2, separated by condition.

Choice

There was a significant main effect of choice format on the number of green appliances chosen, F(2, 600) = 83.35, p < 0.001, ηp² = 0.217. Post hoc comparisons revealed a significant difference between the opt-out default (M = 7.46, SD = 2.57) and opt-in default (M = 4.14, SD = 2.65; p < 0.001) conditions, and a significant difference between opt-out and active choice (M = 4.72, SD = 2.83; p < 0.001). The difference between opt-in and active choice was not significant (p = 0.067). There was no main effect of the transparency manipulation, F(1, 600) = 0.84, p = 0.361, ηp² = 0.001, or interaction effect, F(2, 600) = 0.93, p = 0.397, ηp² = 0.003.

The results were highly similar for participants who received the choice architecture disclosure and passed the comprehension check. The one-way ANOVA was significant, F(2, 191) = 27.30, p < 0.001, ηp² = 0.222, and post hoc analysis showed that opt-out participants (M = 7.31, SD = 2.78) made significantly more green choices than both opt-in (M = 4.09, SD = 2.48; p < 0.001) and active-choice participants (M = 4.70, SD = 2.71; p < 0.001). The difference between opt-in and active choice was not significant (p = 0.418).

Experienced autonomy

There was a significant main effect of choice format on experienced autonomy, F(2, 600) = 3.43, p = 0.033, ηp² = 0.011. Opt-out participants (M = 7.84, SD = 1.30) experienced higher autonomy than participants subjected to the opt-in default (M = 7.49, SD = 1.46; p = 0.029). There was no significant difference between active choice (M = 7.72, SD = 1.35) and opt-in (p = 0.204) or opt-out (p = 0.646). There was no main effect of the transparency manipulation, F(1, 600) = 0.31, p = 0.578, ηp² = 0.001, or an interaction effect, F(2, 600) = 0.22, p = 0.802, ηp² = 0.001.

For disclosed participants passing the comprehension check, no difference between choice formats was found, F(2, 191) = 1.79, p = 0.171, ηp² = 0.018. However, means and standard deviations were highly similar to those in the full sample: opt-out (M = 7.84, SD = 1.37), opt-in (M = 7.42, SD = 1.46), and active choice (M = 7.79, SD = 1.18), suggesting that the loss of statistical significance may have been a result of reduced power.

Choice-satisfaction

There was a significant main effect of choice format, F(2, 600) = 4.37, p = 0.013, ηp² = 0.014. Opt-out participants (M = 7.97, SD = 1.27) were significantly more satisfied than opt-in participants (M = 7.57, SD = 1.39; p = 0.012). Active-choice participants (M = 7.72, SD = 1.50) were not significantly different from either opt-in (p = 0.588) or opt-out ones (p = 0.148). No main effect of the transparency manipulation, F(1, 600) = 1.89, p = 0.169, ηp² = 0.003, or interaction effect was found, F(2, 600) = 1.98, p = 0.139, ηp² = 0.007.

The significant difference between choice formats persisted when analyzing only those participants who received and comprehended the disclosure, F(2, 191) = 5.40, p = 0.005, ηp² = 0.054. Again, opt-out participants (M = 7.94, SD = 1.31) were significantly more satisfied than opt-in participants (M = 7.19, SD = 1.68; p = 0.009). The same was true for active choice (M = 7.91, SD = 1.27) versus opt-in (p = 0.016). Participants in the opt-out and active-choice conditions did not differ significantly (p = 0.989).

Perceived threat to freedom of choice

There were no significant main effects for either choice format, F(2, 600) = 0.56, p = 0.573, ηp² = 0.002, transparency manipulation, F(1, 600) = 0.428, p = 0.513, ηp² = 0.001, or an interaction, F(2, 600) = 0.08, p = 0.926, ηp² < 0.001. Notably, none of the cell means reached above 2.5 on the 9-point scale: opt-out (M = 2.36, SD = 1.77), opt-in (M = 2.35, SD = 1.75), and active choice (M = 2.27, SD = 1.83).

Participants receiving a disclosure and passing the disclosure comprehension check had perceptions similar to the larger sample, with no significant differences between choice formats, F(2, 191) = 0.04, p = 0.965, ηp² < 0.001, and no group means above 2.5: opt-out (M = 2.35, SD = 1.86), opt-in (M = 2.44, SD = 1.70), and active choice (M = 2.41, SD = 2.00).

Additional analyses

As for Study 1, frequency distributions showed no signs of participants in the opt-out condition reacting aversively (see Figure 2). The number of green appliances chosen correlated significantly with all three other dependent variables, for autonomy: r = 0.21, p < 0.001; choice-satisfaction: r = 0.22, p < 0.001; and perceived threat to freedom of choice: r = −0.12, p = 0.003. All correlations were in the same direction and of roughly equal strength for all choice formats (see Supplementary Material).

Discussion

The results of Study 2 mirrored those of Study 1, in that the nudge had a strong influence on choice without affecting other outcomes negatively. This conclusion held when transparency was increased, yielding highly similar results for participants receiving and acknowledging a disclosure of the intervention's presence and potential effect.

Curiously, the results indicated that opt-out participants experienced themselves as slightly better off with regard to autonomy and choice-satisfaction, compared with opt-in participants. It should, however, be noted that both differences were small, with mean differences of less than half a scale point (Cohen's d of 0.26 for autonomy and 0.30 for satisfaction). To the extent that confidence can be placed in these differences, we speculate that facilitated preference-alignment of choices may be an explanation. People generally prefer to be environmentally friendly (if not too costly), and when the nudge made this behavior easy, it may have promoted feelings of autonomy and satisfaction. Another possibility is that autonomy was heightened in the opt-out condition because this format is less intuitive, thereby making participants more aware of their opportunity to exercise choice.

It seems that in Studies 1 and 2, when participants experienced the default nudge first-hand, they found it less intrusive than expected from findings in the survey-based literature. A caveat remains, however, in that both studies used choices without real consequences for the participants. To explore whether the absence of negative effects may have stemmed from a lack of engagement with the hypothetical choice task, we next introduce a choice task with a monetary payoff.

Study 3: Do choice experiences deteriorate when stakes are increased?

The results of Study 2 suggested that a lack of intervention transparency was not the reason for participants' apparent approval of the nudge. Study 3 proceeds by increasing the stakes of the choice for the participants. The hypothetical apartment scenario of the previous studies is replaced with a choice task wherein people decide between donating a bonus payment to charity and keeping it for themselves.

Method

Participants

We requested 1250 participants from MTurk (1258 completed responses received). The participants were paid $0.50, with a possible bonus of 20¢ depending on a donation choice. Individuals who had taken part in the previous studies were not allowed to participate. We excluded participants who did not correctly report what they had chosen in the choice task (19 answered incorrectly and 52 more had missing values). We also included a disclosure comprehension question (273 of 586 participants failed, 47%). The results are reported as in Study 2, first for the full sample and then for only those participants who received the disclosure and passed the check. The final sample consisted of 1187 participants (age M = 37.9, SD = 12.5; 52.7% female). Group sizes were approximately equal: 401 (opt-out), 392 (active choice), and 394 (opt-in).

Procedure and materials

The advertised purpose of the study was to make comparisons of geometrical shapes. Similarities between six pairs of shapes were rated before the actual experiment. In the experimental task, participants were given a bonus payment of 20¢ and the opportunity to donate the money to charity (specifically, US hurricane relief, high on the agenda at the time of data collection, fall 2017). The design mirrored Study 2: 3 (choice format: opt-out vs. opt-in vs. active choice) × 2 (transparency: disclosure present vs. absent). In the 'opt-out' choice format, the default was set to donate the bonus; in the 'opt-in' format, keeping the bonus was the default. We used disclosures similar to the ones in Study 2, conveying that choice formats may exert an influence on choice, and how the present format could be expected to influence the donation choice (see Supplementary Material, p. 19 for exact wordings). After the donation choice, we measured choice experiences and perceptions. An item assessing objection to the choice format was added to complement the questions on perceived threat to freedom of choice, since whether one perceives that an influence attempt is taking place and whether one deems this objectionable are separate questions. We did not include the individual-difference measures used in the previous experiments.

Results

All analyses were conducted as 3 (choice format: opt-out vs. opt-in vs. active choice) × 2 (transparency: disclosure present vs. absent) ANOVAs, unless otherwise stated. Tukey HSD was used for all post hoc tests. The internal consistency was high for both experienced autonomy (α = 0.86) and perceived threat to freedom of choice (α = 0.91). Descriptive results can be found in Table 1 and frequency distributions in Figure 3.

Figure 3. Frequency distributions for experience and perception measures in Study 3, separated by condition.

Donation choice

We tested the influence of choice format and transparency on donation choice in a three-stage logistic regression (Footnote 2). We used opt-in as the reference group and dummy coded opt-out and active choice. Transparency was coded 1 for disclosure present and 0 for absent. Interaction terms were created by multiplying the choice format and transparency dummy variables. In Stage 1, we entered the choice format dummy variables. Both predictors were significant, showing that opt-out (44.4% donated; Wald(1) = 29.27, OR = 2.29, p < 0.001) and active-choice participants (35.5% donated; Wald(1) = 8.41, OR = 1.57, p = 0.004) were each significantly more likely to donate than opt-in participants (25.9% donated; Nagelkerke R² = 0.034). Stage 2 added the transparency variable. However, this did not significantly improve the fit with the data, χ²(1) = 1.84, p = 0.175, Nagelkerke R² = 0.037. Stage 3 added the interaction terms. This also did not significantly improve the fit with the data compared with the previous stage, χ²(2) = 0.46, p = 0.794, Nagelkerke R² = 0.037.
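The three-stage model described above can be sketched as nested logistic regressions; d3, donated, format, and disclosed are hypothetical variable names, not the authors' code.

```r
# Hypothetical data frame 'd3': 'donated' (1 = donated bonus, 0 = kept it),
# 'format' (opt_in / opt_out / active_choice), 'disclosed' (1 = disclosure
# present, 0 = absent).
d3$format <- relevel(factor(d3$format), ref = "opt_in")  # opt-in as reference

m1 <- glm(donated ~ format, family = binomial, data = d3)   # Stage 1
m2 <- update(m1, . ~ . + disclosed)                         # Stage 2
m3 <- update(m2, . ~ . + format:disclosed)                  # Stage 3

summary(m1)        # Wald tests for the choice format dummies
exp(coef(m1))      # odds ratios, e.g., opt-out vs. opt-in

# Chi-square tests of whether each added block improves model fit
anova(m1, m2, test = "Chisq")
anova(m2, m3, test = "Chisq")
```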

For disclosed participants passing the disclosure comprehension check, the pattern was somewhat different. Participants in the opt-out condition donated to a similar extent (42.6%), but a higher percentage donated in the active-choice (41.9%) and opt-in conditions (35.2%). The differences (vs. opt-in) were not significant in a logistic regression, opt-out: Wald(1) = 1.25, OR = 1.36, p = 0.264; active choice: Wald(1) = 0.93, OR = 1.33, p = 0.334.

Experienced autonomy

There was no significant main effect of choice format on experienced autonomy, F(2, 1181) = 0.03, p = 0.973, ηp² < 0.001; opt-out (M = 7.92, SD = 1.26), opt-in (M = 7.90, SD = 1.28), and active choice (M = 7.92, SD = 1.23). The main effect for the transparency manipulation was significant, however, F(1, 1181) = 4.36, p = 0.037, ηp² = 0.004. Participants who received a disclosure experienced themselves as less autonomous (M = 7.84, SD = 1.24) than participants not receiving a disclosure (M = 7.99, SD = 1.27). As can be seen in Table A2 in Appendix A, this effect was primarily driven by the active-choice condition. The interaction effect was not significant, F(2, 1181) = 1.12, p = 0.327, ηp² = 0.002.

Among only participants receiving and passing the disclosure check, no difference between choice formats was found, F(2, 310) = 2.28, p = 0.104, ηp² = 0.014. Experiences of autonomy were high regardless of the choice format received: opt-out (M = 7.95, SD = 1.05), opt-in (M = 7.95, SD = 1.11), and active choice (M = 7.64, SD = 1.38).

The difference between participants receiving and not receiving a disclosure was not significant when excluding disclosure check failures, t(912) = 1.50, p = 0.134, d = 0.11. The mean difference, however, remained almost identical: disclosed and passing check (M = 7.86, SD = 1.18) and undisclosed (M = 7.99, SD = 1.27).

Choice-satisfaction

There was no significant main effect of choice format on choice-satisfaction, F(2, 1181) = 0.77, p = 0.464, ηp² = 0.001, of the transparency manipulation, F(1, 1184) = 0.64, p = 0.424, ηp² = 0.001, or an interaction between the two, F(2, 1181) = 1.37, p = 0.255, ηp² = 0.002. Regardless of the choice format, the participants reported a high level of satisfaction: opt-out (M = 8.02, SD = 1.50), opt-in (M = 7.89, SD = 1.68), and active choice (M = 7.93, SD = 1.57).

For disclosed participants passing the comprehension check, choice formats did not differ significantly either, F(2, 310) = 1.51, p = 0.222, ηp² = 0.010; opt-out (M = 8.03, SD = 1.48), opt-in (M = 7.91, SD = 1.59), and active choice (M = 7.63, SD = 1.97).

Perceived threat to freedom of choice

There was a significant main effect of both choice format, F(2, 1181) = 5.06, p = 0.006, ηp² = 0.008, and transparency, F(1, 1181) = 8.42, p = 0.004, ηp² = 0.007. Post hoc testing showed that opt-out participants (M = 2.95, SD = 2.18) perceived the choice format as more threatening to freedom of choice than both opt-in (M = 2.54, SD = 1.90; p = 0.013) and active-choice participants (M = 2.55, SD = 2.01; p = 0.018). Participants who received the disclosure reported a higher threat to freedom of choice (M = 2.85, SD = 2.08) than those who did not receive the disclosure (M = 2.51, SD = 1.99). The interaction was not significant, F(2, 1183) = 2.16, p = 0.134, ηp² = 0.003.

For disclosed participants passing the comprehension check, there was no significant difference between choice formats, F(2, 310) = 2.76, p = 0.065, ηp² = 0.018. However, a glance at the means suggests a tendency among opt-out participants to perceive a higher threat: opt-out (M = 3.18, SD = 2.22), opt-in (M = 2.52, SD = 1.83), and active choice (M = 2.94, SD = 2.20).

The difference between participants receiving and not receiving a disclosure persisted when excluding disclosure check failures, t(912) = −2.67, p = 0.008, d = 0.18. Disclosed participants passing the check perceived a higher threat to freedom of choice (M = 2.89, SD = 2.10) than did undisclosed participants (M = 2.51, SD = 1.99).

Objection to choice format

There were no significant main effects of either choice format, F(2, 1181) = 0.53, p = 0.591, ηp² = 0.001, or the transparency manipulation, F(1, 1181) = 2.70, p = 0.100, ηp² = 0.002, or an interaction effect, F(2, 1181) = 1.27, p = 0.281, ηp² = 0.002. Objection ratings were low for all conditions: opt-out (M = 2.64, SD = 2.26), opt-in (M = 2.59, SD = 2.18), and active choice (M = 2.47, SD = 2.26).

For disclosed participants passing the comprehension check, no differences between choice formats existed either, F(2, 310) = 0.18, p = 0.837, ηp² = 0.001; opt-out (M = 2.51, SD = 2.19), opt-in (M = 2.35, SD = 1.82), and active choice (M = 2.46, SD = 2.04).

Additional analyses

As shown in Figure 3, there were no indications of subgroups in the opt-out default condition showing aversion toward the nudge. Participants' donation choice was associated with all experience and perception variables. t-tests showed that donating (vs. not donating) was significantly associated with higher autonomy (d = 0.32), higher choice-satisfaction (d = 0.56), lower perceived threat to freedom of choice (d = 0.35), and lower objection (d = 0.39). This was the case, in the same direction and at roughly equal strength, for all choice formats. No two-way interactions between donation and choice format were significant (all ps > 0.22).

Discussion

The introduction of monetary choice stakes did little to change the general pattern of results. As in the previous experiments, choice format had a sizeable effect on choices, with the opt-out default leading to nearly twice as many donations as the opt-in. Participants' experienced autonomy and satisfaction with their choice did not differ between those subjected to the nudge and those who were not.

However, the measure of perceived threat to freedom of choice did. While ratings remained low, opt-out participants perceived the choice format as trying to influence them to a greater degree than did participants in the other two choice formats. Participants who were informed that an intervention was in place likewise rated the threat to freedom of choice higher than undisclosed participants; this is unsurprising, however, as it is essentially what the disclosure sought to convey. We suggest that the introduction of the monetary payoff most likely accounts for the differences between choice formats occurring here but not in the previous experiments, although, as more factors change between the studies, other explanations cannot be ruled out.

The results for perceived threat to freedom of choice should be interpreted against the background of participants' responses to the more value-laden question about objection to the choice format. Opt-out participants objected a mere 0.05 scale points more than opt-in participants, and all means were in the low third of the scale (i.e., indicating low objection). Taken together, this suggests that participants were able to detect the attempt at influence but, on average, did not mind it happening. One should, however, consider that objection ratings may have been influenced by sympathy for the hurricane relief cause, which could have suppressed negative ratings. Future research may explore this by varying the beneficiaries of the nudge.

Study 4: Meta-analyses and noninferiority tests

Lastly, we meta-analyze the experience and perception results and statistically test whether choice format differences are negligible in size. The individual study results largely do not show significant differences, but one should not infer from this that no differences exist. In a null hypothesis testing framework, the absence of evidence does not translate into evidence of absence. Instead, we positively analyze whether effects are smaller than a specified limit through noninferiority testing (Wellek, 2010).

Method

A noninferiority test is used to assess whether an effect is at least as small as a predetermined level (e.g., Cohen's d = 0.2). Ideally, this level is based on theoretical reasoning grounded in real-world applications. Here, however, we have no theoretical reason for deciding on any specific bound, and there is no clear estimate of a likely effect size from previous research.
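In formal terms (our notation, not the authors'), writing δ for the true standardized difference between the nudged and comparison conditions and Δ for the noninferiority margin, the test contrasts

```latex
% Standard noninferiority formulation (sketch): delta = standardized
% difference (nudge minus comparison), Delta = noninferiority margin.
H_0\colon \delta \le -\Delta
\quad \text{vs.} \quad
H_1\colon \delta > -\Delta,
\qquad
z = \frac{\hat{d} + \Delta}{\mathrm{SE}(\hat{d})}
```

so that rejecting H0 indicates that any disadvantage of the nudged condition is smaller than the margin Δ.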

In the absence of theoretical grounding, we follow guidelines from Lakens (2017) and Lakens et al. (2018) and set our noninferiority bound at the effect size we have 80% power to detect at p < 0.05. For our meta-analyses, this level corresponds to a Cohen's d of 0.102, which means that we can detect differences at a level conventionally considered negligibly small (Cohen, 1988).

We consider active choice the most neutral comparison point for how the opt-out default nudge is experienced and perceived, and for brevity, only report the comparisons between these two conditions. Results versus opt-in are similar or more favorable to the opt-out, and can be found in the Supplementary Material (pp. 14–16).

Results

All analyses below compare the opt-out and active-choice conditions. We meta-analyzed the previous experiments (n = 1350) using random effects models, with the R package metafor (Viechtbauer, 2010). For the noninferiority tests, we used the TOSTER package (Lakens, 2018). Visualizations can be found in the Supplementary Material. R code for the analyses is provided at osf.io/69be8/.
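To make the procedure concrete, the sketch below reproduces the spirit of the autonomy comparison using metafor (as the authors did), but with the one-sided noninferiority test written out by hand rather than via TOSTER. The inputs are the opt-out and active-choice means, SDs, and ns reported in the Results of Studies 1–3 above, so the output should approximate, though not necessarily match exactly, the values reported below.

```r
# Meta-analysis of experienced autonomy: opt-out (group 1) vs. active choice
# (group 2), using the summary statistics reported in Studies 1-3.
library(metafor)

dat <- data.frame(
  study = 1:3,
  m1i = c(7.88, 7.84, 7.92), sd1i = c(1.42, 1.30, 1.26), n1i = c(60, 179, 401),
  m2i = c(7.78, 7.72, 7.92), sd2i = c(1.17, 1.35, 1.23), n2i = c(115, 203, 392)
)

# Standardized mean differences and their sampling variances
dat <- escalc(measure = "SMD", m1i = m1i, sd1i = sd1i, n1i = n1i,
              m2i = m2i, sd2i = sd2i, n2i = n2i, data = dat)

# Random-effects model across the three studies
res <- rma(yi, vi, data = dat, method = "REML")

# One-sided noninferiority test: H0: d <= -0.102 vs. H1: d > -0.102
bound <- -0.102
z <- (coef(res) - bound) / res$se
p_noninferiority <- pnorm(z, lower.tail = FALSE)
round(c(d = unname(coef(res)), z = unname(z), p = unname(p_noninferiority)), 3)
```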

Experienced autonomy

The autonomy experienced when subjected to an opt-out default was not inferior to the autonomy experienced when subjected to an active-choice format. This was indicated by a meta-analytic estimate that was significantly higher than the noninferiority bound of d = −0.102, specifically: d = 0.035, 95% CI [−0.072, 0.143], Z = 2.50, p = 0.006. The estimate was not large enough to reject the traditional null hypothesis of zero difference, Z = 0.641, p = 0.522.

Choice-satisfaction

For choice-satisfaction, the meta-analytic effect was larger than the noninferiority bound of d = −0.102, specifically: d = 0.131, 95% CI [0.007, 0.254], Z = 3.69, p < 0.001. Thus, the satisfaction of the participants in the opt-out condition was not inferior to that of the participants in the active-choice condition. The estimate was large enough to reject the null hypothesis of no difference, Z = 2.072, p = 0.038. Opt-out participants were, thus, actually significantly more satisfied than active-choice participants; however, this was not enough to surpass the level set here for a negligible effect.

Perceived threat to freedom of choice

Here, the relevant test is one of inferiority. We want to know whether we can reject the hypothesis that the opt-out default was perceived as more than trivially threatening to freedom of choice when compared with the active-choice condition. Testing against the inferiority bound of 0.102, we could not reject the hypothesis that the threat to freedom of choice was only trivially higher for opt-out participants, Z = 0.436, p = 0.669. Not surprisingly then, the difference was statistically different from zero, d = 0.128, 95% CI [0.01, 0.246], Z = 2.13, p = 0.033, with participants in the opt-out default condition perceiving a higher threat to freedom of choice than active-choice participants.

Discussion

We found that experienced autonomy and choice-satisfaction were not lower for nudged participants. For choice-satisfaction, the meta-analytic estimate was even significantly higher. Simultaneously, perceived threat to freedom of choice was also significantly higher. This effect was small, but not small enough to be considered trivial by the cutoff level used in this study. Taken together, this suggests that participants were able to recognize that a nudge aiming to change their behavior was in place, but that their experiences of choosing were not meaningfully diminished by this fact.

A reminder is in order that the noninferiority bounds were set by a heuristic based on statistical power. While all effect size estimates were small according to conventional benchmarks (all ds ≤ 0.13), it is fully possible that a theoretically informed interpretation would deem them to be of practical significance.

General discussion

Motivated by the concern that nudging constitutes a threat to autonomy, we investigated the experiences and perceptions of people subjected to default nudges. In three experiments, we found consistently high reports of experienced autonomy and choice-satisfaction, and low levels of perceived threat to freedom of choice. Increasing the transparency of the nudge, or the stakes of the choice, did not change this pattern. As suggested by individual experiments, Study 4 showed that the nudge did not negatively affect autonomy and satisfaction in comparison with the opt-in and active-choice formats. However, driven by Study 3, perceived threat to freedom of choice was found to be higher.

What implications do empirical findings like these have for the question whether nudging respects or disrespects autonomy? For someone considering autonomy a purely objective property, it is likely that nothing will change: people's subjective experiences are simply not decisive for ethicality. A more moderate objectivist might consider the subjective experience a useful indicator, even if not conclusive. However, as argued at the outset, if one takes a subjectivist position and maintains that a person should be allowed to be their own judge, the empirical data of subjective experiences ought to be central.

Here, the data showed that participants' ratings were strongly skewed toward considering themselves in control while being subjected to default nudges (Table 1 and Figures 1–3). At least on the surface, then, concerns of manipulation and autonomy-infringement seem overblown. However, one possible interpretation of the high autonomy ratings, and the lack of differences between choice formats, is that people displayed what may be labeled an 'autonomy bias'. In the absence of strong evidence to the contrary, people seemingly tend to see themselves as the originators of their actions (for similar conclusions, see, e.g., Davison, 1983; Nolan et al., 2008; Bang et al., 2020). If this interpretation is preferred, would this not constitute a problem for the subjectivist position: perhaps people really were meaningfully manipulated but, due to the limits of introspection (autonomy bias), failed to realize that this was the case?

We see two main ways of responding from a subjectivist position. One is to simply deny that an autonomy bias is a problem: matters should be considered as judged by people themselves, and if people judge themselves fine, then they are fine. Another is to argue that the participants were sufficiently aware of, and understood, the situation well enough that their experiences ought to count regardless of a general tendency to overestimate one's autonomy. In particular, the results of Study 3 showed that participants who were nudged in a choice involving real money, while being presented with and comprehending a disclosure of the nudge's presence and expected effect, still considered themselves highly autonomous (and no less so than others). If these participants are deemed autonomous only in an inauthentic way, it can be asked whether the same yardstick is being applied to people subjected to nudges as to people in other everyday situations. Arguably, at least as likely as participants being unable to resist the nudge is that they approved of and adhered to the cause, which would be consistent with autonomous decision-making (Ivanković & Engelen, 2019). The default may then have been taken as advice or a recommendation (McKenzie et al., 2006).

Regardless of one's position on who is the better judge, it can be asked whether the experimental choice tasks were sufficiently engaging for potentially adverse effects of the nudge to show. Two main factors suggest that they were. First, survey studies show that people are wary of default nudges that incur financial costs on them (Hagman et al., 2015; Yan & Yates, 2019), which suggests that the present choice contexts were ones likely to trigger aversion to the nudge. If the default nudge was considered manipulative when experienced first-hand, we would expect to have seen traces of this in at least one of the two choice scenarios: Studies 1 and 2 used tangible financial costs but in a hypothetical setting, and Study 3 used small stakes but in a choice that was consequential for participants. Second, while the choice stakes in Study 3 were modest, it is still likely that they signaled a meaningful level of value to the participants. This is suggested by the bonus payment corresponding to 40% of the study's sign-up payment, and by a pilot study showing that participants are willing to work at a menial task, at a subpar pay rate, for several minutes in order to keep this bonus for themselves (Footnote 3).

Furthermore, there is the question of generalizability. The ways in which a nudge can be designed, and the contexts in which it can be applied, are limited only by the inventiveness of the world's choice architects. We acknowledge that the present findings certainly cannot account fully for this heterogeneity. Rather, this line of research may largely be confined to a piecemeal approach (Wilkinson, 2013). Nevertheless, as defaults that produce financial losses have been rated among the most intrusive nudges in survey research (Hagman et al., 2015; Jung & Mellers, 2016), some confidence may be held that other common nudges would not affect people's choice experiences more negatively. Further studies should explore whether being subjected to other types of nudges, such as norm interventions, produces similar results. Other topics for future research include how experiences and perceptions of being nudged are moderated by the ease of opting out (resistibility), by people's preferences for the behavior they are being nudged toward, and by their attitudes toward the choice architect.

In conclusion, we have argued that the study of people's choice experiences, especially autonomy, is important for assessing the ethicality of nudge interventions. Despite concerns about intrusiveness expressed by people in survey studies (Hagman et al., 2015), we find that common applications of defaults can be consistent with autonomous decision-making, as judged by people themselves. We note, however, that there is a risk of people perceiving the default as trying to influence them, which under some circumstances may lead to reactance. Nudges, moreover, make up a highly heterogeneous class of interventions, and more research is needed before general conclusions can be reached. In the meantime, we recommend that choice architects routinely use measures of choice experiences as a guide when designing new interventions.

Supplementary material

To view supplementary material for this article, please visit https://doi.org/10.1017/bpp.2021.5.

Acknowledgments

We thank Lina Nyström and Juliane Bücker for excellent research assistance. We thank Timothy J. Luke for comments on a previous draft, as well as suggestions for data analysis and help with data visualizations. For helpful comments, we also thank Katarina Nordblom, Niklas Harring, and other participants at a 2018 workshop in Gothenburg, organized by the Centre for Collective Action Research (CeCAR), University of Gothenburg. Parts of this work were presented at the 38th Annual Conference of the Society for Judgment and Decision Making in Vancouver, November 10–13, 2017.

Financial support

This research was funded by grant MAW 2015.0106 from the Marcus and Amalia Wallenberg Foundation, awarded to Martin Hedesström.

Conflict of interest

None declared.

Footnotes

1 An additional item was added to the perceived threat to freedom of choice scale in this experiment only. For consistency with the other experiments and previous literature, we dropped this item from all analyses below. Results are the same whether the item is included or not (see Supplementary Tables S11 and S12).

2 For tabulations with more detail, see Supplementary Tables S14 and S15.

3 Data from a pilot study (N = 450) suggest that participants are willing to expend considerable energy not to miss out on a 20¢ bonus. In the pilot study, participants were faced with a donation choice analogous to the one in Study 3. However, to opt out of donating their bonus, participants needed to complete an unstimulating task consisting of dragging boxes, one by one, from a Donate column over to a Keep column. In the most extreme condition we collected, 44% (21/48) were willing to drag a full 160 boxes from the Donate to the Keep column in order to keep the 20¢ for themselves. The median time spent dragging boxes was approximately 3 minutes. This means that almost half of the participants preferred keeping the bonus strongly enough that they were willing to work well below the conventional pay level for short MTurk studies (about $4/h), for a period long enough to complete a separate study.

References

Abhyankar, P., Summers, B. A., Velikova, G. and Bekker, H. L. (2014), 'Framing options as choice or opportunity: Does the frame influence decisions?', Medical Decision Making, 34(5): 567–582.
Arad, A. and Rubinstein, A. (2018), 'The people's perspective on libertarian-paternalistic policies', The Journal of Law and Economics, 61(2): 311–333.
Arvanitis, A., Kalliris, K. and Kaminiotis, K. (2020), 'Are defaults supportive of autonomy? An examination of nudges under the lens of Self-Determination Theory', The Social Science Journal, 1–11.
Bang, H. M., Shu, S. B. and Weber, E. U. (2020), 'The role of perceived effectiveness on the acceptability of choice architecture', Behavioural Public Policy, 4(1): 50–70.
Bruns, H. and Perino, G. (2019), 'The role of autonomy and reactance for nudging: Experimentally comparing defaults to recommendations and mandates', Preprint. Retrieved from: https://ssrn.com/abstract=3442465.
Bruns, H., Kantorowicz-Reznichenko, E., Klement, K., Luistro Jonsson, M. and Rahali, B. (2018), 'Can nudges be transparent and yet effective?', Journal of Economic Psychology, 65: 41–59.
Cohen, J. (1988), Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.
Conly, S. (2013), Against autonomy: Justifying coercive paternalism. Cambridge: Cambridge University Press.
Cornwell, J. F. M. and Krantz, D. H. (2014), 'Public policy for thee, but not for me: Varying the grammatical person of public policy justifications influences their support', Judgment and Decision Making, 9(5): 433–444.
Davison, W. P. (1983), 'The third-person effect in communication', The Public Opinion Quarterly, 47(1): 1–15.
Dillard, J. P. and Shen, L. (2005), 'On the nature of reactance and its role in persuasive health communication', Communication Monographs, 72(2): 144–168.
Felsen, G., Castelo, N. and Reiner, P. B. (2013), 'Decisional enhancement and autonomy: Public attitudes towards overt and covert nudges', Judgment and Decision Making, 8(3): 202–213.
Francis, K. B., Howard, C., Howard, I. S., Gummerum, M., Ganis, G., Anderson, G. and Terbeck, S. (2016), 'Virtual morality: Transitioning from moral judgment to moral action?', PLoS ONE, 11: 10.
Hagman, W., Andersson, D., Västfjäll, D. and Tinghög, G. (2015), 'Public views on policies involving nudges', Review of Philosophy and Psychology, 6(3): 439–453.
Hedlin, S. and Sunstein, C. R. (2016), 'Does active choosing promote green energy use: Experimental evidence', Ecology Law Quarterly, 43: 107–141.
Hummel, D. and Maedche, A. (2019), 'How effective is nudging? A quantitative review on the effect sizes and limits of empirical nudging studies', Journal of Behavioral and Experimental Economics, 80: 47–58.
Ivanković, V. and Engelen, B. (2019), 'Nudging, transparency, and watchfulness', Social Theory and Practice, 45(1): 43–72.
Jachimowicz, J. M., Duncan, S., Weber, E. U. and Johnson, E. J. (2019), 'When and why defaults influence decisions: A meta-analysis of default effects', Behavioural Public Policy, 3(2): 159–186.
Jung, J. Y. and Mellers, B. A. (2016), 'American attitudes toward nudges', Judgment and Decision Making, 11(1): 62–74.
Lades, L. and Delaney, L. (2020), 'Nudge FORGOOD', Behavioural Public Policy, 1–20.
Lakens, D. (2017), 'Equivalence tests: A practical primer for t tests, correlations, and meta-analyses', Social Psychological and Personality Science, 8(4): 355–362.
Lakens, D. (2018), TOSTER: Two one-sided tests (TOST) equivalence testing (Version 0.3.4). Retrieved from: https://CRAN.R-project.org/package=TOSTER.
Lakens, D., Scheel, A. M. and Isager, P. (2018), 'Equivalence testing for psychological research: A tutorial', Advances in Methods and Practices in Psychological Science, 1(2): 259–269.
McKenzie, C. R., Liersch, M. J. and Finkelstein, S. R. (2006), 'Recommendations implicit in policy defaults', Psychological Science, 17(5): 414–420.
Nolan, J. M., Schultz, P. W., Cialdini, R. B., Goldstein, N. J. and Griskevicius, V. (2008), 'Normative social influence is underdetected', Personality and Social Psychology Bulletin, 34(7): 913–923.
OECD (2019), Delivering better policies through behavioural insights: New approaches. Paris: OECD Publishing.
Page, L. (2019), 'Disclosure for real humans', Behavioural Public Policy, 1–13.
Patil, I., Cogoni, C., Zangrando, N., Chittaro, L. and Silani, G. (2014), 'Affective basis of judgment-behavior discrepancy in virtual experiences of moral dilemmas', Social Neuroscience, 9(1): 94–107.
Rebonato, R. (2012), Taking liberties: A critical assessment of libertarian paternalism. New York, NY: Palgrave Macmillan.
Schmidt, A. T. and Engelen, B. (2020), 'The ethics of nudging: An overview', Philosophy Compass, 15(4): e12658.
Steffel, M., Williams, E. and Pogacar, R. (2016), 'Ethically deployed defaults: Transparency and consumer protection through disclosure and preference articulation', Journal of Marketing Research, 53(5): 865–880.
Sunstein, C. R. (2014), Why nudge? The politics of libertarian paternalism. New Haven, CT: Yale University Press.
Sunstein, C. R. (2015), Choosing not to choose: Understanding the value of choice. Oxford: Oxford University Press.
Sunstein, C. R. (2016), The ethics of influence: Government in the age of behavioral science. Cambridge: Cambridge University Press.
Sunstein, C. R., Reisch, L. and Rauber, J. (2017), 'A worldwide consensus on nudging? Not quite, but almost', Regulation & Governance, 12(1): 3–22.
Thaler, R. H. and Sunstein, C. R. (2008), Nudge: Improving decisions about health, wealth, and happiness. New York, NY: Penguin Books.
Viechtbauer, W. (2010), 'Conducting meta-analyses in R with the metafor package', Journal of Statistical Software, 36: 1–48.
Wachner, J., Adriaanse, M. and De Ridder, D. (2020), 'The influence of nudge transparency on the experience of autonomy', Comprehensive Results in Social Psychology, 1–16.
Wellek, S. (2010), Testing statistical hypotheses of equivalence and noninferiority (2nd ed.). Boca Raton, FL: CRC Press.
White, M. (2013), The manipulation of choice: Ethics and libertarian paternalism. New York, NY: Palgrave Macmillan.
Wilkinson, T. (2013), 'Nudging and manipulation', Political Studies, 61(2): 341–355.
Wilson, T. D. and Gilbert, D. T. (2003), 'Affective forecasting', in Zanna, M. P. (ed.), Advances in experimental social psychology, Vol. 35, San Diego: Academic Press, 345–411.
Yan, H. and Yates, J. F. (2019), 'Improving acceptability of nudges: Learning from attitudes towards opt-in and opt-out policies', Judgment and Decision Making, 14(1): 26–39.