
The impact of experience on the tendency to accept recommended defaults

Published online by Cambridge University Press:  12 February 2024

Yefim Roth*
Affiliation:
University of Haifa, Haifa, Israel
Greta Maayan Waldman
Affiliation:
University of Pennsylvania, Philadelphia, PA, USA
Ido Erev
Affiliation:
Technion—Israel Institute of Technology, Haifa, Israel
Corresponding author: Yefim Roth; Email: [email protected]

Abstract

Two preregistered web studies are presented that explore the impact of experience on the tendency to accept recommended defaults. In each of the 100 trials, participants (n = 180, n = 165) could accept a recommended default option or choose a less attractive prospect. The location of the options (left or right) was randomly determined before each trial. Both studies compared two conditions. Under Condition Dominant, the default option maximized participants’ payoff in all trials. Under Condition Protective, the default option protected the participants from rare losses and maximized expected return but decreased payoff in most trials. The results reveal a tendency to accept the default in Condition Dominant but the opposite tendency in Condition Protective. This pattern was predicted by assuming that in addition to promoting specific actions, the presentation of the default changes the set of feasible strategies, and choice between these strategies reflects reliance on small samples of past experiences.

Type
Empirical Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Society for Judgment and Decision Making and European Association of Decision Making

1. Introduction

It is often possible to promote specific behaviors by presenting them as recommended defaults. For example, framing the socially desirable option as a recommended default increases organ donation (Johnson & Goldstein, 2003) and pension contributions (Rubaltelli & Lotto, 2021). A meta-analysis of 58 studies of the impact of defaults (Jachimowicz et al., 2019) documents a large and significant overall effect (Cohen’s d = 0.68). Yet the analysis also highlights two interesting exceptions: (1) Narula et al. (2014) showed that setting a repeated colonoscopy appointment as the default reduced the show-up rate, and (2) Reiter et al. (2012) evaluated parents’ consent to have their adolescent sons hypothetically receive the human papillomavirus (HPV) vaccine at school and found that describing this option as the default reduced the consent rate.

The original explanations for these exceptions to the positive effect of defaults include the assertion that, in these studies, the default changed the set of strategies considered by the decision-makers. In Narula et al.’s study, the availability of the default could lead decision-makers to choose a commitment-free strategy: “the system selected a date, and I will come if I am free.” In Reiter et al.’s study, rejecting the default could be used to signal objection to school-mandated HPV vaccination. The current paper tries to clarify the impact of recommended defaults by considering three additional properties of these exceptions. First, both exceptions involve protective actions designed to address low-probability risks. Second, both exceptions require action: accepting these defaults implies a requirement to arrive at an appointment. Third, both involve prior experiences with similar painful preventive medical appointments.

Our experimental analysis focuses on situations in which the presentation of the default modifies the choice task and can lead people to behave as if they select between two cognitive strategies (“accept the default” and “change the default”). In these settings, people are likely to base their decision on past experiences with similar defaults. Thus, the decision between accepting and changing the default is an example of a decision from experience. Previous studies of decisions from experience, including a sequence of choice prediction competitions (Erev et al., 2010a, 2010b, 2017; Plonsky et al., 2019), highlight the predictive value of models that assume people behave as if they rely on the outcomes obtained in small samples of similar situations. Reliance on small samples implies a tendency to neglect rare outcomes (for example, the probability that a sample of 5 past cases will include an event that occurred in 10% of the cases is only 1 − 0.9^5 ≈ 0.41), and in the current context, it can reverse the impact of defaults. Under this “reliance on small samples” hypothesis, experience is expected to increase the tendency to reject defaults designed to prevent rare losses (as in the studies conducted by Narula et al. and Reiter et al.).
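The rare-event arithmetic above can be checked directly (a minimal sketch; the function name is ours):

```python
def p_at_least_one(p: float, k: int) -> float:
    """Probability that a random sample of k past cases includes at
    least one occurrence of an event with per-case probability p."""
    return 1 - (1 - p) ** k

# With p = 0.10 and k = 5, as in the text: 1 - 0.9**5
print(round(p_at_least_one(0.10, 5), 2))  # 0.41
```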

Study 1 evaluates the reliance on small samples hypothesis by employing the clicking paradigm (Figure 1, top panel) and focuses on the impact of a recommended default that requires an action. Participants were asked to actively choose between an option marked as a recommended default and an unmarked option. Study 2 extends the analysis and examines whether the requirement to act is a necessary condition for the reversed default effect. It explores the impact of a no-action default by using the radio buttons paradigm (Figure 1, bottom panel). Both studies involve two conditions. In Condition Dominant, following the default is the dominant strategy: it always leads to the best possible payoff. This condition simulates the impact of effective persistent defaults (Goldstein et al., 2008). In Condition Protective, following the default protects against large losses and maximizes expected return but also decreases the probability of success. In 90% of the trials in this condition, the default provides a lower payoff than the alternative option. The stars in Figure 1 present the prediction of a simple quantification of the reliance on small samples hypothesis: the sample-of-5 model (Erev & Haruvy, 2016; Erev & Roth, 2014). This model assumes that after receiving feedback, the decision-maker bases each choice on a random sample (taken with replacement) of five previous trials and selects the strategy with the highest average payoff in the sample. Our preregistered hypothesis for study 1 predicted a linear trend in the direction of the small samples hypothesis predictions. This prediction can be justified as a generalization of the “face-or-cue” model developed by Erev et al. (2022b) to capture the approximately linear learning curve they observed in an experiment that examined binary decisions based on two sets of observations: (1) a prechoice sample of 12 draws taken from each alternative and (2) the outcomes of previous trials. Implicit in this generalization is the assumption that the effect of the nudge is similar to the effect of the prechoice sample.
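A minimal simulation of the sample-of-5 model under the Condition Protective payoff structure can illustrate the predicted reversal. Everything below (the agent count, tie-breaking toward accepting, the 50/50 first-trial guess) is our assumption, not the authors' implementation; since the same DPt is added to both options, only the payoff difference between rejecting and accepting the default matters:

```python
import random

def simulate_sample_of_5(n_trials=100, n_agents=2000, k=5, seed=0):
    """Sketch of the sample-of-5 model in Condition Protective.

    Rejecting the default pays 1 point more than accepting in 90% of
    trials and 10 points less otherwise. Each agent bases each choice
    on a random sample (with replacement) of k previous trials and
    picks the strategy with the higher mean sampled payoff."""
    rng = random.Random(seed)
    accept_rate = [0.0] * n_trials
    for _ in range(n_agents):
        diffs = []  # observed (reject - accept) payoff differences
        for t in range(n_trials):
            if not diffs:
                accept = rng.random() < 0.5  # no experience yet: guess
            else:
                sample = [rng.choice(diffs) for _ in range(k)]
                # rejecting looks better when the sampled mean difference > 0
                accept = sum(sample) / k <= 0
            accept_rate[t] += accept
            # full feedback: both payoffs are shown, so the difference
            # is observed regardless of the choice made
            diffs.append(1 if rng.random() < 0.9 else -10)
    return [r / n_agents for r in accept_rate]

rates = simulate_sample_of_5()
print(f"mean default rate, last 20 trials: {sum(rates[-20:]) / 20:.2f}")
```

A sample of 5 favors accepting only when it happens to contain at least one rare −10 outcome, which is why experienced agents accept the default well below half the time.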

Figure 1 Left panel: experiment instructions, sample choice, and results screens for participants in each condition. Right panel: mean default rate in each of the 100 trials in Condition Dominant and Protective. The stars on the right-hand side present the predicted behavior of experienced agents that base each choice on a sample of only five past experiences.

The results of study 1 reject the linear trend prediction and suggest that the duration of the main impact of a fixed nudge is shorter than the duration of the effect of new samples that vary from trial to trial. To address this finding, we preregistered a new hypothesis (which focuses on the mean choice rates and corresponds to the small samples prediction) before running study 2. Appendix A presents the prediction of a refined quantification: the model PAS, a generalization of the sample-of-5 model proposed by Erev et al. (2023), adapted to the current context after the preregistration of the current predictions.

2. Study 1

In order to clarify the joint impact of recommended defaults and experience, the current study used a simple experimental paradigm (top left column in Figure 1) that limits the information available to participants to the identity of the recommended default and the outcomes of past trials. In each experimental trial, the participants were asked to choose between two options: a blank key and a key marked as “recommended default for most users.” Each choice was followed by the presentation of the payoffs from both keys.

2.1. Method

2.1.1. Participants

The participants were 180 adults recruited through Amazon Mechanical Turk (MTurk) for a base payment of $0.50, as per our preregistration (see footnote 1). Participants (67% male; average age = 38 years) had the opportunity to receive a $1.50 bonus, with a higher number of points accumulated throughout the experiment corresponding to a higher probability of a bonus. The experiment lasted about 10 minutes.

2.1.2. Procedure

At the beginning of the preregistered web experiment (programmed using oTree; Chen et al., 2016), participants were shown an introductory screen with instructions and fields for basic demographic information. Participants read that proceeding to the rest of the experiment implied their agreement to the consent statement provided. Each participant was then randomly assigned to one of two payoff conditions, Dominant (n = 96) and Protective (n = 84), and faced a repeated choice task consisting of 100 trials. The choice required clicking on one of two keys (see choice screens in the top left of Figure 1).

The payoff (in points) from choosing the default in trial t, denoted DPt, was drawn from the range of −10 to 10. Going with the default maximized expected return (the expected return over all 100 trials was 0). The expected return from choosing the nondefault alternative was always lower by 0.1 points, and the exact payoff of this option in each trial depended on the experimental condition. In Condition Dominant, the payoff of the nondefault alternative was DPt − 0.1 with certainty, while in Condition Protective, it was DPt + 1 in 90% of trials and DPt − 10 in the remaining 10%. The location of the default (left or right) was randomly determined before each trial. This random placement of the payoff-maximizing default implies that the expected maximization rate of participants who ignore the default is 50%.
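The payoff rule can be sketched as follows. The uniform draw for DPt is an assumption (the text states only the range and the expected return); note that the protective nondefault payoff has an expected value of DPt + 0.9(1) − 0.1(10) = DPt − 0.1, matching Condition Dominant:

```python
import random

rng = random.Random(42)

def trial_payoffs(condition: str):
    """One trial's payoffs, following the study 1 design.
    Returns (default_payoff, nondefault_payoff)."""
    dp = rng.uniform(-10, 10)  # default's payoff, drawn from -10..10
    if condition == "dominant":
        return dp, dp - 0.1    # nondefault is always 0.1 points worse
    # protective: nondefault is dp + 1 in 90% of trials, dp - 10 otherwise
    return dp, dp + 1 if rng.random() < 0.9 else dp - 10
```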

After the participants completed all 100 trials, they were presented with an attention check question in which they were explicitly told the payoffs that would result from choosing each of five buttons and were asked to choose a button (Roth & Yakobi, 2023; see footnote 2). As noted in footnote 1, only participants who passed this attention check were included in our analysis. In the final screen, participants were shown the total number of points accumulated throughout the experiment. They were also told whether or not they received a bonus and were presented with their final payoff.

2.2. Results and discussion

The top right-hand side of Figure 1 presents the mean default rate as a function of time. It shows a high initial tendency to accept the default (acceptance rate of 73%) and a quick reaction to the feedback that rejects our preregistered hypothesis. While our preregistered hypothesis predicted a linear increase in the default rate in Condition Dominant and a linear decrease in Condition Protective, Figure 1 shows that the difference between the two conditions increased in the first 20 trials and remained relatively constant over the last 80 trials. Moreover, the results reveal a nonmonotonic learning pattern in Condition Protective: the probability of accepting the default decreased over the first 20 trials to about 0.25 (indeed, much of the decrease occurred immediately after the first choice) and then increased to about 0.40. Post hoc analysis reveals that the decrease in the default rate between the first two trials and trials 3–20 is significant (t(83) = 5.00, p < .0001), as is the increase from trials 3–20 to the last 80 trials (t(83) = 2.18, p < .05).

Under the model PAS (Erev et al., Reference Erev, Ert, Plonsky and Roth2023, see Appendix 3), the unpredicted nonmonotonic pattern reflects the joint impact of two classes of past experiences: old experiences that occur before the beginning of the current experimental task and (a small sample of) new experiences with the current task. The high default rate in the first trial reflects reliance on old experiences. The quick subsequent decline suggests a decrease in the reliance on these old experiences even before accumulating enough new experiences. The increase in the longer term suggests that at least some of the participants are sensitive to the accumulation of new experiences.

Figure 2 Average individual default rates in the last 20 trials, as a function of the recency score in the first 80 trials, in study 1 (left) and study 2 (right). The shapes reflect the default rates in the first two trials.

Post hoc (unregistered) analysis reveals that over the 100 trials, participants in Condition Dominant selected the recommended default at a higher rate (.88, SD = .20) than those in Condition Protective (.36, SD = .24). The difference is highly significant (t(178) = 15.95, p < .001, Cohen’s d = 2.38). Both rates are significantly different from .5 (t(95) = 18.78, p < .001, Cohen’s d = 1.92, and t(83) = −5.458, p < .001, Cohen’s d = 0.60, for Conditions Dominant and Protective, respectively).

Notice that the expected maximization rate in our task without any recommendation is .5 (without the recommended default information, the participant could not know the location of the better option, which varied from trial to trial). Thus, the observation that the default rate in Condition Protective is significantly below .5 suggests a backfiring effect of the recommended default, as predicted by the reliance on small samples hypothesis.

Previous research distinguishes between two explanations of the descriptive value of the reliance on small samples hypothesis. The first states that people tend to rely on the most recent past experiences. Spektor and Wulff (2021) show that, in certain settings, the aggregate choice rates can be captured by the reliance on small samples hypothesis even if only a minority of the decision-makers are “myopic” and exhibit a strong recency effect. This account can be quantified with models that assume (1) sequential adjustment of choice propensities and (2) large between-individual differences in adjustment speed. The second explanation implies that people try to rely on the most similar past experiences and feel that only a small portion of their past experiences (not necessarily the most recent ones) are similar to their current task (see Plonsky et al., 2015; footnote 3). To compare these explanations, we used the first 80 choices of each participant in Condition Protective to estimate the participant’s tendency to rely on recent outcomes and examined the relationship between this estimated tendency and the default rate in the last 20 trials. The recency score was computed as the default rate after a trial in which the default was the best choice (it increased the gain by 10 points), minus the default rate after a trial in which the alternative to the default was the best choice (selecting the default decreased the payoff by 1 point). The left-hand panel of Figure 2 presents the results. Each dot summarizes the behavior of one of 83 participants (one of the 84 participants did not observe a gain from selecting the default during the first 80 trials). The results reveal large between-subject variability in the recency scores, large between-subject variability in the default rates, and an insignificant correlation between the two measures (r = −.126, ns).
The lack of a significant correlation suggests that the tendency to reject the default, predicted by the reliance on small samples hypothesis, is not likely to be the product of a strong recency effect in a minority of the participants. This suggestion agrees with the results of Erev, Cohen, and Yakobi’s (2022a) comparison of the two explanations in the context of pure decisions from experience tasks.
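The recency score described above can be sketched as follows (function and argument names are ours; the handling of participants who never observed one of the two outcomes is an assumption):

```python
def recency_score(choices, default_was_best):
    """Recency score over the first 80 trials, as defined in the text.

    choices[t] is True if the default was chosen in trial t;
    default_was_best[t] is True if the default was the best choice in
    trial t. Returns the default rate after default-was-best trials
    minus the default rate after default-was-worse trials."""
    after_best, after_worse = [], []
    for t in range(1, min(len(choices), 80)):
        bucket = after_best if default_was_best[t - 1] else after_worse
        bucket.append(choices[t])
    if not after_best or not after_worse:
        return None  # undefined, e.g., the default never paid off
    return (sum(after_best) / len(after_best)
            - sum(after_worse) / len(after_worse))
```

A participant who always repeats whichever option was best on the previous trial would score 1.0; a participant insensitive to the last outcome would score near 0.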

Figure 2 also shows high correspondence between the default rate in the first two trials and in the last 20 trials (see a similar observation in Roth, 2020). The correlation between the two rates is 0.35 (p < .001). This correspondence is consistent with the assumption (implemented in PAS, the model described in Appendix A) that the initial reaction to the default reflects old experiences with defaults, and the impact of these old experiences diminishes but is not eliminated by new experiences.

The current results suggest that one of the three properties of the exceptions to the positive effect of the default considered above—past experience with painful preventive medicine appointments—is not a necessary condition for the emergence of a reverse default effect. Study 2 was designed to examine if a second property of the known exceptions—the fact that accepting the default requires an action—is a necessary condition.

3. Study 2

Study 2 is a replication of study 1 with a modified manipulation of the recommended default. In study 2, the recommended default was preselected (see Yan & Yates, 2019).

3.1. Method

3.1.1. Participants

The participants were 165 adults (55% male; average age = 41 years) recruited through Amazon Mechanical Turk (MTurk). The payment and inclusion criteria were the same as in study 1; 28 participants failed the attention task and were excluded.

3.1.2. Design and procedure

The design and procedure were as in study 1, with the exception that the default did not require an active choice: the default radio button was preselected in each trial. In addition, the “Next” key was inactive for the first 2 seconds of each trial. This constraint was designed to increase the similarity of the incentive structure in the two studies by reducing the benefit of saving time by selecting the default (Munichor et al., 2006). A total of 81 participants were assigned to Condition Dominant and 84 to Condition Protective.

3.2. Results

The lower panel in Figure 1 presents the default rate in study 2. The initial default rate was 90%. While this rate is higher than the initial rate in study 1 (71%), the reaction to feedback in study 2 exhibits the nonmonotonic pattern documented in study 1. The default rate in Condition Protective fell to 38% by the third trial, continued to decrease for several trials, and slightly increased in the longer term. The decrease from the default rate in the first two trials to the rate in trials 3–20 is significant (t(83) = 10.92, p < .0001), but the increase from trials 3–20 to the last 80 trials is not (t(83) = 0.72, ns). In accordance with our preregistered hypothesis, the default rate over all trials shows a large difference between the two conditions. In Condition Dominant, the participants kept the preselected default at a significantly higher rate (.89, SD = .19) than those in Condition Protective (.33, SD = .24), t(163) = 16.30, p < .0001, Cohen’s d = 2.54. As in study 1, the results reveal a reverse default effect in Condition Protective: the maximization rate was significantly lower (t(83) = 6.37, p < .0001, Cohen’s d = 0.70) than the .5 rate expected without the default.

Figure 2 shows additional similarities between the two studies. The results reveal large between-participant variation in the tendency to exhibit the recency effect and in the default rate, but the correlation between these tendencies is insignificant (r = 0.09, ns). The correlation between the default rates in the first two and last 20 trials is significant (r = .27, p < .02).

4. A boundary condition: The impact of defaults that do not change the strategy space

Recall that the current analysis focuses on an environment in which the presentation of the default modifies the set of feasible strategies. Since the location of the expected value (EV) maximizing option was randomly selected (between the left and the right key) before each trial, participants could not have selected the EV-maximizing option at a rate above chance had the recommended default not been presented. Similarly, without the presentation of the default, participants could not have selected the EV-minimizing strategy at a rate above chance. Under our analysis, this modification of the set of feasible strategies is a necessary condition for the emergence of the reverse default effect. Indeed, without this modification, our analysis (as quantified by the model PAS described in Appendix A) predicts a positive default effect even after 100 trials with immediate feedback. For example, consider a variant of Condition Protective, studied above, in which the location of the counterproductive risky option is fixed on the left side during all 100 trials. PAS predicts that in this “fixed-sides” experiment, setting the safe option as the default will increase its choice rate. The predicted safe rate in trial 100 is .46 when the safe option is the default and only .40 in the absence of the default.

5. General discussion

The starting point of the current analysis is the distinction between two effects of the presentation of defaults: the presentation can increase the set of feasible strategies, and it can change the tendency to select the promoted option. In our experiments, setting the EV-maximizing option as the default allowed the participants to select a strategy that maximizes EV and also to select the strategy that minimizes EV. The results show that predicting the joint impact of the two effects is complicated by the observation that the tendency to accept recommended defaults can change in a nontrivial way as people gain experience. The tendency to select the default was enhanced by experience when this strategy led to the best payoff with high probability but was reversed by experience when this probability was low. Importantly, the reverse default effect emerged despite the fact that following the default maximized expected return and prevented large losses. In addition, our analysis suggests a boundary condition for the reverse default effect: it is expected to emerge only when the presentation of the default allows the use of a counterproductive strategy (one that could not be used in the absence of the default) that leads to the best outcomes with high probability.

In order to clarify the implications of the current analysis, it is constructive to consider the leading explanations of the default effect. Past research (Dinner et al., 2011; Johnson & Goldstein, 2003; McKenzie et al., 2006) has identified three contributors to the default effect. First, selecting the default reduces cognitive effort. Second, defaults imply endorsement, and selecting them minimizes personal liability. And third, defaults serve as a reference point from which choosing an alternative feels like a gain or loss, and according to the loss aversion assumption of prospect theory, losses loom larger than equivalent gains (Kahneman & Tversky, 1979). While the current results do not challenge the significance of these contributors, they shed light on the processes that underlie these tendencies.

The observation that experience can trigger a tendency to avoid recommended defaults suggests that these previous explanations do not reflect stable preferences. Rather, our results suggest that people behave as if they choose between cognitive rules (like “accept the default” and “change the default”), and this hypothetical choice reflects reliance on small samples of similar past experiences. Thus, it is possible that part of the impact of the three known contributors to the default effect is the product of the fact that they determine which past experiences seem most similar to the current choice task. In many natural settings, strategies that minimize cognitive effort, personal liability, and losses are likely to provide the best outcomes. Thus, when facing a new choice task, people tend to select the default option that minimizes these three variables. However, there are interesting exceptions. Condition Protective of the current experiments presents one clear example: going with the default minimized effort and losses, but in 90% of the trials, rejecting the default led to better outcomes. Our analysis suggests that the backfiring of the default nudge in the two preventive medicine studies considered in the introduction (Narula et al., 2014; Reiter et al., 2012) reflects a similar tendency.

In summary, the current analysis demonstrates that the impact of the default nudge can be enhanced, but can also be reversed, by experience. In addition, our results suggest that the high sensitivity of the impact of the default nudge to experience does not prevent useful predictions. In the environment we considered, the effect of experience can be predicted with the easily quantifiable reliance on small samples hypothesis.

6. Materials and methods

All data, analysis code, and materials have been made publicly available via the Open Science Framework and can be accessed at https://osf.io/2hyu6/?view_only=66074042dc3e4a35ae624335dc67f65f. The design and analysis plans were preregistered on AsPredicted, and copies of the preregistrations can be found at https://aspredicted.org/blind.php?x=sm2j23 (study 1) and https://aspredicted.org/3G7_BLG (study 2). Before running study 2, we preregistered and ran another study (“study 1.5,” https://aspredicted.org/blind.php?x=KP5_XK4) that focused on study 2’s no-action default task with a different description of the options: the options were called “Top” and “Bottom.” The results of this study are similar to the results of the reported study 2. We chose to focus on only two of the three studies we ran, as adding study 1.5 would reduce readability (study 1.5 differs from study 1 on two dimensions) without adding new insights.

The Faculty of Social Welfare and Health Sciences approved these experiments (approval number 274/20). The informed consent notified the participants that this is a repeated decision-making experiment that is expected to take about 10 minutes and involves a $0.50 show-up fee and a possible bonus. The participants were informed that they were free to abandon the experiment at any point without any consequences.

Appendix A. The Partially Attentive Sampler (PAS) models

Our implementation of the model PAS (Erev et al., 2023) starts with the assumption that the participants in the current experiments choose between two strategies: accept or change the default. The model assumes that each choice is based on one of two classes of past experiences: old (experiences that occurred before the current experimental session) and new (experiences from the current experimental session). The probability of reliance on old past experiences diminishes with experience. In the current binary choice task, this probability equals $\delta_i^{\frac{t-1}{t}}$, where δi is a free parameter and t is the trial number. The choice rate while relying on old experiences is determined by the description of the task and does not change during the experiment. Thus, it can be estimated by the choice rate in the first trial (t = 1, when the probability of relying on old experiences equals 1). In the current study 1, the initial default rate is 0.73. When the agents rely on new experiences, they sample κi previous trials and select the strategy with the highest average payoff in the sample.
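Taking the exponent (t − 1)/t at face value, the reliance-on-old-experiences term can be sketched as follows (a minimal sketch, not the authors' code):

```python
def p_rely_on_old(delta: float, t: int) -> float:
    """Probability of relying on old (pre-experiment) experiences at
    trial t: delta ** ((t - 1) / t). Equals 1 at t = 1 and falls
    quickly toward delta, so the impact of old experiences diminishes
    with experience but is never eliminated."""
    return delta ** ((t - 1) / t)

# e.g., with delta = 0.3 the probability drops from 1.00 at t = 1
# toward an asymptote of 0.30
for t in (1, 2, 3, 10, 100):
    print(t, round(p_rely_on_old(0.3, t), 2))
```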

The current derivation of the model’s predictions uses the parameters estimated by Erev et al. (2023) to fit 87 experimental tasks. The distribution of the two parameters in the population is presented in Table A1.

Table A1 The estimated distribution of the PAS parameters

Notes: Cells indicate the percentages of cases. When the value of κi is “VL,” all past experiences are equally weighted, as expected when the sample size is very large (→ ∞).

Figure A1 Model PAS default rate predictions.

Note: The zigzag pattern in Condition Protective reflects the behavior of the virtual agents with large κi (see explanation in Shteingart & Loewenstein, 2015).

Figure A1 presents the predictions of this model for the current experiments; it captures the main results. In Condition Dominant, the model predicts a quick increase toward accepting the default. In Condition Protective, the model predicts a nonmonotonic curve: a quick initial decrease in the default rate and a slow increase toward 0.5 in the longer term. Under the model, the nonmonotonic curve reflects the joint impact of two factors: the decrease in the probability of reliance on old experiences, which quickly reduces the default effect in the early trials, and the accumulation of new experiences, which slowly increases the default rate (this increase is particularly clear for participants with high κi parameters).

In a second analysis, we derived the predictions of PAS (Erev et al., 2023) for two variants of Condition Protective. In both variants, the location of the optimal safe option was fixed throughout the 100 trials (so the presentation of the default does not change the set of feasible strategies). In one variant, the virtual agents were “presented” with a recommended default (the choice rate under reliance on old experiences was 0.73), while the second variant considered a control condition with unmarked keys (the choice rate under reliance on old experiences was 0.50). The results show that in this “fixed-location” setting, PAS predicts a positive impact of the presentation of the default: it increases the safe choice rate from 0.40 to 0.46.

Acknowledgments

The authors thank Katherine L. Milkman for her helpful comments. Yefim Roth acknowledges support from the Israel Science Foundation (1857/22). Ido Erev acknowledges support from the Israel Science Foundation (861/22).

Competing interest

The authors declared that there were no conflicts of interest with respect to the authorship or the publication of this article.

Footnotes

1 The preregistration specified approximately 80 participants who passed the attention check in each condition. We ran 269 participants, and 180 of them passed the test. We preregistered “approximately 80” to obtain power of .8 under the assumption of a Cohen’s d of around .5. This assumption was supported in previous studies of decisions from experience.

2 The attention check was considered to be answered correctly if participants selected the button that was associated with the highest payoff.

3 Notice that both explanations predict deviations from maximization (of the aggregate choice rates) that reflect insufficient sensitivity to rare outcomes. This prediction was supported in many studies of repeated feedback-based decisions from experience (see review in Erev & Plonsky, 2023). Yet, studies of one-shot decisions from experience based on free sampling reveal a boundary condition (see Glöckner et al., 2016).

References

Chen, D. L., Schonger, M., & Wickens, C. (2016). oTree—An open-source platform for laboratory, online, and field experiments. Journal of Behavioral and Experimental Finance, 9, 88–97.
Dinner, I., Johnson, E. J., Goldstein, D. G., & Liu, K. (2011). Partitioning default effects: Why people choose not to choose. Journal of Experimental Psychology: Applied, 17(4), 332–341. https://doi.org/10.1037/A0024354
Erev, I., Cohen, D., & Yakobi, O. (2022a). On the descriptive value of the reliance on small-samples assumption. Judgment and Decision Making, 17(5), 1043–1057.
Erev, I., Ert, E., Plonsky, O., Cohen, D., & Cohen, O. (2017). From anomalies to forecasts: Toward a descriptive model of decisions under risk, under ambiguity, and from experience. Psychological Review, 124(4), 369–409.
Erev, I., Ert, E., Plonsky, O., & Roth, Y. (2023). Contradictory deviations from maximization: Environment-specific biases, or reflections of basic properties of human learning? Psychological Review, 130(3), 640–676.
Erev, I., Ert, E., & Roth, A. E. (2010a). A choice prediction competition for market entry games: An introduction. Games, 1(2), 117–136.
Erev, I., Ert, E., Roth, A. E., Haruvy, E., Herzog, S. M., Hau, R., & Lebiere, C. (2010b). A choice prediction competition: Choices from experience and from description. Journal of Behavioral Decision Making, 23(1), 15–47.
Erev, I., & Haruvy, E. (2016). Learning and the economics of small decisions. In The handbook of experimental economics (Vol. 2). Princeton University Press. https://doi.org/10.1515/9781400883172-011
Erev, I., & Plonsky, O. (2023). The J/DM separation paradox and the reliance on the small samples hypothesis. In K. Fiedler, P. Juslin, & J. Denrell (Eds.), Sampling in judgment and decision making. Cambridge University Press.
Erev, I., & Roth, A. E. (2014). Maximization, learning, and economic behavior. Proceedings of the National Academy of Sciences, 111(Supplement 3), 10818–10825.
Erev, I., Yakobi, O., Ashby, N. J., & Chater, N. (2022b). The impact of experience on decisions based on pre-choice samples and the face-or-cue hypothesis. Theory and Decision, 92, 583–598.
Glöckner, A., Hilbig, B. E., Henninger, F., & Fiedler, S. (2016). The reversed description-experience gap: Disentangling sources of presentation format effects in risky choice. Journal of Experimental Psychology: General, 145(4), 486–508.
Goldstein, D. G., Johnson, E. J., Herrmann, A., & Heitmann, M. (2008). Nudge your customers toward better choices. Harvard Business Review, 86(12), 99–105.
Jachimowicz, J. M., Duncan, S., Weber, E. U., & Johnson, E. J. (2019). When and why defaults influence decisions: A meta-analysis of default effects. Behavioural Public Policy, 3(2), 159–186.
Johnson, E. J., & Goldstein, D. (2003). Do defaults save lives? Science, 302(5649), 1338–1339. https://doi.org/10.1126/science.1091721
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–292.
McKenzie, C. R. M., Liersch, M. J., & Finkelstein, S. R. (2006). Recommendations implicit in policy defaults. Psychological Science, 17(5), 414–420. https://doi.org/10.1111/j.1467-9280.2006.01721.x
Munichor, N., Erev, I., & Lotem, A. (2006). Risk attitude in small timesaving decisions. Journal of Experimental Psychology: Applied, 12(3), 129–141.
Narula, T., Ramprasad, C., Ruggs, E. N., & Hebl, M. R. (2014). Increasing colonoscopies? A psychological perspective on opting in versus opting out. Health Psychology, 33(11), 1426–1429.
Plonsky, O., Apel, R., Ert, E., Tennenholtz, M., Bourgin, D., Peterson, J. C., & Erev, I. (2019). Predicting human decisions with behavioral theories and machine learning. arXiv preprint arXiv:1904.06866.
Plonsky, O., Teodorescu, K., & Erev, I. (2015). Reliance on small samples, the wavy recency effect, and similarity-based learning. Psychological Review, 122(4), 621–647.
Reiter, P. L., McRee, A. L., Pepper, J. K., & Brewer, N. T. (2012). Default policies and parents’ consent for school-located HPV vaccination. Journal of Behavioral Medicine, 35(6), 651–657. https://doi.org/10.1007/S10865-012-9397-1/FIGURES/2
Roth, Y. (2020). The decision to check in multialternative choices and limited sensitivity to default. Journal of Behavioral Decision Making, 33(5), 643–656.
Roth, Y., & Yakobi, O. (2023). Attention! Do we really need attention checks? PsyArXiv. https://doi.org/10.31234/osf.io/63qht
Rubaltelli, E., & Lotto, L. (2021). Nudging freelance professionals to increase their retirement pension fund contributions. Judgment and Decision Making, 16(1), 551–565.
Shteingart, H., & Loewenstein, Y. (2015). The effect of sample size and cognitive strategy on probability estimation bias. Decision, 2(2), 107–117.
Spektor, M. S., & Wulff, D. U. (2021). Myopia drives reckless behavior in response to over-taxation. Judgment and Decision Making, 16(1), 114–130.
Yan, H., & Yates, J. F. (2019). Improving acceptability of nudges: Learning from attitudes towards opt-in and opt-out policies. Judgment and Decision Making, 14(1), 26–39.

Figure 1 Left panel: experiment instructions, sample choice, and results screens for participants in each condition. Right panel: mean default rate in each of the 100 trials in Conditions Dominant and Protective. The stars on the right-hand side show the predicted behavior of experienced agents that base each choice on a sample of only five past experiences.

Figure 2 Average individual default rates in the last 20 trials, as a function of the recency score in the first 80 trials, in study 1 (left) and study 2 (right). The shapes reflect the default rates in the first two trials.

Table A1 The estimated distribution of the PAS parameters

Figure A1 Model PAS default rate predictions. Note: The zigzag pattern in Condition Protective reflects the behavior of the virtual agents with large κi (see explanation in Shteingart & Loewenstein, 2015).