
Behavioral economics enhancers

Published online by Cambridge University Press:  11 April 2024

Eldad Yechiam*
Affiliation:
Technion – Israel Institute of Technology, Haifa, Israel

Abstract

Recent meta-analyses suggest that certain drugs act as cognitive enhancers and can increase attentional investment and performance even in healthy adults. The current review examines the potential of behavioral economics enhancers (BEEs) to similarly improve cognitive performance and judgments. Traditionally, behavioral economics theory has been skeptical about whether individuals can overcome judgment biases through variables that increase cognitive effort. We focus mostly on the effects of two BEEs: incentivization and losses. Summarizing results from different meta-analyses, we find a small but robust positive effect size for BEEs, comparable to the effect sizes found in studies of pharmacological cognitive enhancers.

Type
Review Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Society for Judgment and Decision Making and European Association for Decision Making

Introduction

Meta-analyses reveal some robust positive results for the effect of certain attention-enhancing medications, particularly methylphenidate, independently of attention-deficit/hyperactivity disorder (ADHD) symptoms. In a meta-analysis incorporating 16 double-blind studies of healthy adults without ADHD, Marraccini et al. (2016) reported a positive effect of methylphenidate on speeded processing accuracy, with an effect size of 0.28. A somewhat smaller effect size of around 0.20 was reported in Roberts et al.’s (2020) meta-analysis of cognitive performance (see also Ilieva et al.’s 2015 meta-analysis of methylphenidate and amphetamine). Importantly, a similar effect size was reported in a meta-analysis of the effect of methylphenidate on adults with ADHD (e.g., d = 0.22 across cognitive domains in Pievsky and McGrath, 2018). In addition, several studies directly examining the selective effect of methylphenidate found equally improved sustained attention and short-term memory in people with and without ADHD, with no significant interaction between the effects of drug and diagnosis (Agay et al., 2010, 2014; Yechiam and Zeif, 2022).
Those who benefit most from methylphenidate seem to be individuals with low baseline performance (Agay et al., 2014; Finke et al., 2010; Mehta et al., 2000; Zack and Poulos, 2009; see also Mehta and Riedel, 2006 for different results). The current paper examines the possibility that behavioral economic variables might similarly have a small but robust effect on cognitive performance and judgment biases. We refer to such variables as behavioral economics enhancers (BEEs).

Cognitive biases refer to systematic deviations from normative solutions (e.g., Caputo, 2013; Elkinton, 1941; Fischhoff, 1982; Mill, 1863; Tversky and Kahneman, 1974; Yang et al., 2021). The literature on debiasing was born out of attempts to examine the boundaries of biased responses as well as the practical need to alleviate biases (Fischhoff, 1982). Many approaches to debiasing have been proposed; the present paper will only briefly mention those within the behavioral economics area and will not cover the myriad debiasing techniques within the psychological and social sciences (for reviews, see, e.g., Fischhoff, 1982; Larrick, 2004; Lilienfeld et al., 2009).

Three historical developments have shaped the behavioral economics literature on debiasing and cognitive effort. The first was the notion of heuristics, namely the identification of fast and frugal information processing strategies that can lead to biases (Gigerenzer, 2004; Keren and Teigen, 2004; Tversky and Kahneman, 1974). An extreme point of view in this respect is that heuristics are the source of the most robust and prevalent biases (Kahneman, 2011). The second development was the notion that biases can be avoided if task information is provided in a clear fashion (Fischhoff, 1982), which is part of the ‘boost’ approach (Grune-Yanoff and Hertwig, 2016). A prominent example is Gigerenzer’s (1996, 2004; Gigerenzer et al., 2011) influential work, which argued that people cannot deal with complex probabilistic information in an unbiased fashion unless probabilities are presented as frequencies or through other evolutionarily adaptive means. The third development was the emergence of the ‘nudge’ approach (Thaler and Sunstein, 2008), a loosely defined set of debiasing techniques that exploit the very conditions leading to biases in order to divert decisions onto advantageous paths. The nudge approach goes one step beyond the notion of heuristics in arguing not only that biases are driven by automatic associative processes but also that, in practice, it is impossible to increase one’s cognitive resources in order to avoid them. As noted by Thaler and Sunstein (2008), ‘Most of us are busy and we can’t spend all our time thinking and analyzing everything’ (p. 23). Ergo, ‘We’re better off changing our immediate environment than believing that we can do it through the “power of the mind”’ (Submarine Channel, 2018). Though second-generation nudge techniques incorporate so-called educative nudges that act through people’s relevant intellectual and knowledge capacities (Sunstein, 2016), these educative nudges focus on providing participants with information that supports appropriate judgments and decisions, not on increasing their sheer cognitive capacity.

Potentially though, these three developments are at odds with one another. At least in theory, heuristics can be overcome by deliberate effortful processing, while the nudge and boost approaches (implicitly or explicitly) discount the potential of inducing such deliberation. Indeed, behavioral economics scholars have generally been quite skeptical about the possibility that people can somehow be induced to increase their cognitive effort. For example, addressing the possibility that effort might reduce heuristics such as representativeness and anchoring and adjustment, Tversky and Kahneman (1983) indicated that ‘we do not share Dennis Lindley’s optimistic opinion that inside every incoherent person there is a coherent one trying to get out… and we suspect that incoherence is more than skin deep’. Camerer and Hogarth (1999) indicated that ‘There is no replicated study in which a theory of rational choice was rejected at low stakes in favor of a well specified behavioral alternative, and accepted at high stakes’. Larrick (2004) further elaborated that ‘For incentives to improve decision making, decision makers must possess effective strategies… the necessary “cognitive capital” to which they can apply additional effort… Incentives do improve performance in settings such as clerical and memorization tasks, where people possess the cognitive capital required to perform well but lack the intrinsic motivation. Few decision tasks, however, are analogous to simple clerical work or memorization’.

Some scholars have been careful to point out that in theory cognitive performance and judgments could be significantly improved by increased cognitive effort, but that implementing this in practice is difficult (e.g., Arkes, 1991; Larrick, 2004). Importantly, both Arkes (1991) and Larrick (2004) theorized that certain types of decisions are more likely to be assisted by cognitive effort. Arkes’ (1991) taxonomy of judgment and decision biases includes three classes of biases. The first class is psychophysically based biases, which are driven by nonlinear translations of objective features, such as probabilities and time, into subjective attributes. An example is the (per capita) discounting of larger increases compared to smaller increases in one’s outcomes, known as ‘diminishing sensitivity’ (Bernoulli, 1782/1954; Kahneman and Tversky, 1979). The second class is association-based biases, which are caused by fast and frugal problem-solving strategies, namely heuristics, and are triggered by the representations available in short-term memory (Arkes, 1991). This is exemplified by the Cognitive Reflection Test (Frederick, 2005), which was originally proposed as a judgment test capturing the usage of heuristic strategies versus more elaborate and reflective strategies, though some findings suggest that this test also measures numeric ability per se (Sirota et al., 2020; Welsh et al., 2013).

The third class of biases proposed by Arkes (1991) is strategy-based suboptimal behaviors (see also Larrick, 2004), reflecting the choice of strategies that are comprehensive but nevertheless inappropriate for the problem at hand. Importantly, Arkes (1991) and Larrick (2004) maintained that it is biases of the second type (namely, association- or heuristic-based biases) that are more likely to be assisted by cognitive effort. Arkes (1991) suggested that this may occur because cognitive effort leads to a more comprehensive search among possible problem solutions, which can yield more adequate responses, but cautioned that the search process driven by effort may itself be biased, and hence ‘neither the introduction of incentives nor entreaties to perform well will necessarily cause subjects to shift to a new judgment behavior’ (Arkes, 1991, p. 494).

Other scholars have suggested that effort might in theory reduce reliance on heuristics by triggering a less automatic and more deliberate processing mode. Indeed, a more recent taxonomy of biases was proposed by Stanovich et al. (2008) based on dual system theory (Denes-Raj and Epstein, 1994; Frederick, 2005; Kahneman, 2011). Though contentious (see, e.g., Chater, 2018; Keren and Schul, 2009), according to this theory, System 1 operates via heuristic and tacit reasoning, which can be performed rapidly, while System 2 uses more deliberative and slower processes with greater working memory requirements (Keren and Teigen, 2004). According to Stanovich et al. (2008), heuristic-related biases are subcategorized into two types. The first is cognitive miserliness, which is similar to the notion of association-based biases but is limited to cases where System 1 is at work while System 2 is deactivated. The second type is override failure, which involves cases where System 2 is activated but is overridden by System 1. Yet Stanovich and West as well have suggested that economic variables such as incentives often fall short of reducing System 1-related errors (e.g., Stanovich and West, 2004). Thus, both Arkes (1991) and Stanovich and West (2004) argued that cognitive effort can reduce judgment biases in theory but not in practice.

The notion that increased cognitive effort can in fact reduce judgment biases is supported by findings from studies of attention-enhancing drugs. For example, Peled-Avron et al. (2021) found that methylphenidate improved performance in a simple perceptual judgment task. Franke et al. (2017) examined the effect of methylphenidate and modafinil on chess performance. Controlling for game duration (which was longer with modafinil and methylphenidate), both drugs enhanced chess performance, as demonstrated by significantly higher scores. We examined the effect of two attention-enhancing drugs, methylphenidate and mixed amphetamine salts, on performance in the Cognitive Reflection Test (Yechiam and Zeif, 2022). The results indicated that the former drug led to a significant improvement in test scores, with an effect size of d = 0.40, while the latter had a smaller and nonsignificant effect (d = 0.07). By contrast, several studies did not find an effect of methylphenidate on the over/underweighting of rare events or on risk-taking (Agay et al., 2010, 2014; Yechiam and Zeif, 2022), suggesting that its effect is limited to judgment biases resulting from fast and frugal processing.

In line with these findings and with theoretical predictions that cognitive effort could, at least in principle, reduce judgment errors driven by heuristics (e.g., Arkes, 1991; Larrick, 2004; Stanovich et al., 2008), the current review focuses on the effect of BEEs on performance in general and specifically on judgment biases that are considered to be at least partially driven by heuristic processing (see also the more recent taxonomies of Datta and Mullainathan, 2014; Münscher et al., 2016). Our main focus will be on two BEEs: incentives and losses. For these variables there is an extensive literature, and our review will thus take the form of a compilation and synthesis of relevant meta-analyses along with illustrative examples. This is followed by a more tentative discussion of other potential BEEs.

BEE 1. Incentivization

Incentivization is a key term in economics, and not only in behavioral economics, since rational people are assumed to respond to incentives (Mankiw, 2018). However, in the context of cognitive enhancement the term is used differently than in standard economics, in that participants do not know in advance which response or judgment will yield better incentives. In the field of behavioral economics, early findings supported the effect of incentives on performance (Edwards, 1956; Siegel, 1961; Tversky and Edwards, 1966), but these were followed by studies showing no or even negative effects of incentivization (e.g., Arkes et al., 1986; Fischhoff et al., 1977; Tversky and Kahneman, 1983), leading to a debate (see Hertwig and Ortmann, 2001; Gneezy et al., 2011).

On the other hand, in psychology several meta-analyses have established the robustness of the effect of incentives on cognitive performance. In a meta-analysis of 45 studies, Condly et al. (2003) reported an effect size of d = 0.60 for the effect of incentivization on performance in cognitive tasks and 0.88 in motor tasks. Also, there was no significant negative effect of incentivization on self-reported internal motivation. A larger meta-analysis of 146 studies was conducted by Garbers and Konradt (2014), who reported a somewhat smaller effect size of d = 0.34. A third meta-analysis, by Cerasoli et al. (2014), incorporated studies from school, work, and physical domains that either used incentivization or did not. They also found a small positive effect size for incentivization in their moderator analysis (β = 0.29). Importantly, in all three meta-analyses the effect size did not differ between studies using quantity vs. quality indices, suggesting that the effect of incentivization is not relevant only to repetitive or mundane tasks (as espoused, for instance, by Larrick, 2004). Nevertheless, an important question is whether this effect, observed in the psychology literature, also emerges for judgments, particularly those susceptible to heuristic-based biases.

In relevant judgment studies, results have been mixed. Some studies did not find a positive effect of incentivization (Awasthi and Pratt, 1990; Baillon et al., 2022; Enke et al., 2023; Tversky and Kahneman, 1983; Wright and Anderson, 1989), while others demonstrated it in different biases (Charness et al., 2010; Dale et al., 2007; Enke et al., 2023; Epley and Gilovich, 2005; Lefebvre et al., 2011; Simmons et al., 2010; Wright and Aboul-Ezz, 1988). To stray from anecdotes and cherry-picking, we mainly focus on meta-analyses of the literature.

A recent meta-analysis of the effect of incentives on the Cognitive Reflection Test (Frederick, 2005) was conducted by Brañas-Garza et al. (2019). For their dataset, collected from 42,425 individuals examined in 110 studies, Brañas-Garza et al. (2019) reported no significant effect of incentives. However, this report suffers from several methodological challenges. First, Brañas-Garza et al. compared studies that used incentivization to those that did not. While their main analysis controlled for a variety of study-level moderators, this also reduced the sample size (by about 17%) due to missing values and introduced potential multicollinearity. Indeed, the null effect in Brañas-Garza et al.’s (2019) meta-analysis only emerged in their regression analysis including several study-level control variables. Reanalyzing their meta-analysis for the simple effect of incentivization, we found the positive effect of incentivization to be significant though very small (d = 0.14; see Yechiam and Zeif, 2023a). Secondly, as noted in Yechiam and Zeif (2023a), Brañas-Garza et al.’s (2019) meta-analysis included two studies that did not strictly use monetary incentives as part of the incentivized study groups. When these studies are removed, the simple effect of incentivization increases to 0.20, and a similar effect is recorded in regressions controlling for study-related characteristics.

In Yechiam and Zeif (2023a), we also conducted a more focused meta-analysis of eight studies that directly compared an incentivized Cognitive Reflection Test with a control condition with no incentives. Our results showed a small and significant effect of incentivization on test performance, with an effect size of d = 0.21, which is comparable to the corrected effect size in Brañas-Garza et al. (2019). Differences in effect size between studies in this meta-analysis were mostly due to random noise rather than any moderating study-related effects.

In another meta-analysis (Yechiam and Zeif, 2023b), we examined the conjunction fallacy, one of the classical examples of judgment biases. Originally investigated by Tversky and Kahneman (1983), the conjunction fallacy is the tendency to estimate multiple contingencies occurring together as being more likely than one of the individual contingencies. Tversky and Kahneman (1983) presented this fallacy through the problem now famously known as the ‘Linda problem’. Linda is described as an outspoken, single, and very bright 31-year-old woman who is involved with issues of discrimination and social justice. Participants are required to judge whether it is more likely that Linda is a bank teller, a feminist, or a bank teller who is active in the feminist movement. In this problem, the conjunction fallacy is evidenced by preferring the latter option over the two former ones. Tversky and Kahneman (1983) argued that this bias is mainly driven by the representativeness heuristic, a fast-and-frugal strategy for estimating probabilities based on similarity to representative examples (e.g., Linda seems more similar to a feminist bank teller than to any bank teller). Our meta-analysis of 11 conjunction fallacy studies (Yechiam and Zeif, 2023b) showed a small positive effect of incentivization on judgment performance in conjunction fallacy problems (d = 0.19 for all problems; d = 0.24 for the Linda problem). Again, disparities between different study results were mostly due to random noise. In addition, the effect size was stronger when calculated as odds ratios rather than as absolute differences, suggesting a moderating effect of baseline performance (without incentives), similar to that observed for pharmacological cognitive enhancers (Agay et al., 2014).
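The link between odds ratios and baseline performance can be made concrete with a short calculation. The sketch below uses hypothetical accuracy rates (not figures from the meta-analysis): holding the odds ratio of an incentive effect fixed, the implied absolute improvement is smaller when baseline accuracy is low, so a stronger effect on the odds-ratio scale than on the raw-difference scale is consistent with baseline performance moderating the absolute gain.

```python
def apply_odds_ratio(p_baseline, odds_ratio):
    """Accuracy implied by scaling the baseline odds by a fixed odds ratio."""
    odds = p_baseline / (1 - p_baseline)
    new_odds = odds * odds_ratio
    return new_odds / (1 + new_odds)

# Hypothetical baselines: a hard problem (20% correct) vs. an easier one (50%)
for p0 in (0.20, 0.50):
    p1 = apply_odds_ratio(p0, odds_ratio=1.5)
    print(f"baseline {p0:.0%} -> incentivized {p1:.1%} (absolute gain {p1 - p0:.1%})")
```

With the same odds ratio of 1.5, the 20% baseline gains about 7 percentage points while the 50% baseline gains 10, illustrating why the two effect-size metrics can diverge across problems of different difficulty.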

Finally, a meta-analysis of the literature on the anchoring and adjustment heuristic was conducted by Li et al. (2021). This involved 56 product pricing studies, incentivized or not, that evaluated the effect of monetary-amount information presented as an anchor before participants made their pricing decisions. Specifically, the meta-analysis estimated the correlation between the (arbitrary) anchor and the elicited price. Li et al.’s (2021) data indicated that the correlation denoting the degree of anchoring and adjustment dropped from 0.31 with no incentives to 0.24 with probabilistic incentives (for a random item) and 0.16 with full incentives. A reanalysis shows a significant moderating effect of incentives in the direction of lower correlations (B = 0.13; p = .02).

Thus, meta-analyses indicate a small but rather robust effect of incentivization on judgment biases, consistent with the notion that incentives can reduce heuristic-driven biases, at least to some extent. Importantly, the smallness of the effect can explain the inconsistent literature: with a Cohen’s d of 0.2, most small-scale studies will find no significant results (e.g., only about 17% of studies with n = 100 will find the effect to be significant).
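The 17% figure is simply the statistical power of a small study at this effect size. A minimal sketch, assuming a two-sided two-sample t-test at α = .05 with 50 participants per group (n = 100 in total), computes power from the noncentral t distribution in scipy:

```python
import numpy as np
from scipy import stats

def two_sample_power(d, n_per_group, alpha=0.05):
    """Power of a two-sided two-sample t-test to detect effect size d."""
    df = 2 * n_per_group - 2
    # Noncentrality parameter for equal group sizes
    ncp = d * np.sqrt(n_per_group / 2)
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # P(reject H0 under the alternative) = both rejection tails
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

power = two_sample_power(d=0.2, n_per_group=50)
print(f"Power to detect d = 0.2 with n = 100: {power:.2f}")  # roughly 0.17
```

Raising the total sample to several hundred participants per group is what brings power into the conventional 80% range for an effect this small, which is why the meta-analytic aggregation is informative where individual studies are not.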

BEE 2. Losses

Within behavioral economics, the effect of losses has been addressed from two very different perspectives: as a bias (e.g., loss aversion; Kahneman and Tversky, 1979) or as an attention-enhancing variable (e.g., Lejarraga and Hertwig, 2017; Yechiam and Hochman, 2013a). In some settings, when poor performance implies incurring more losses and successful performance avoids or reduces losses, both of these aspects are expected to lead to a performance-boosting effect of losses. However, in other cases, for instance, when losses are administered irrespective of one’s performance, or when the loss is too small to elicit loss aversion, the bias approach predicts that losses should no longer improve performance, whereas the attentional model suggests that the performance-enhancing effect of losses remains robust.

In a recent meta-analysis, Ferraro and Tracy (2022) reported a positive effect of losses (compared to gains) on productivity in economic experiments, with a Cohen’s d of 0.33 for laboratory studies and 0.12 for field studies. This echoes earlier reviews showing a robust effect of negative outcomes on cognitive performance that exceeds the effect of positive incentives (Baumeister et al., 2001; Rozin and Royzman, 2001; Yechiam and Hochman, 2013a). However, this literature cannot disentangle the bias and attention models because it focuses on cases where successful performance reduces losses.

What about judgments? In their meta-analysis, Brañas-Garza et al. (2019) did not have a sufficient number of studies using losses. Indeed, the only study that we are aware of that examined the effect of losses on the Cognitive Reflection Test is an unpublished study by Carpenter and Munro (2023). Interestingly, these authors also found that the effect of losses exceeded the effect of gains on cognitive reflection.

But is this performance advantage of losses a cognitive-boosting effect or merely a bias due to loss aversion? Several experiments have shown that losses have performance-enhancing effects even for small losses for which no loss aversion is demonstrated. For example, Yechiam and Hochman (2013b) and Yechiam et al. (2015) found no loss aversion for losses of 1 token (worth less than a cent) in a repeated experiential decision task. However, the same small loss was found to increase maximization in different choice problems that involved simple quantitative judgments between varying amounts. In addition, losses were found to positively affect performance even in settings where elevated performance did not lead to fewer losses, or even led to more losses (Yechiam et al., 2015, 2019; Yechiam and Hochman, 2013b). For example, Yechiam et al. (2015) examined a decisions-from-experience task with three choice options: an advantageous option and a disadvantageous option that produced either minor losses (in a loss condition) or minor gains (in a gain condition), along with a medium expected value option that did not produce any losses (see Figure 1). Under loss aversion, the loss condition should result in more choices of the medium expected value option, since it eliminates the prospect of incurring losses. However, the results showed a rather different pattern. As indicated in Figure 1, losses led to more selections of the advantageous option and fewer choices of the medium and disadvantageous options. Thus, losses increased the rate of advantageous selections even though advantageous selections actually produced more losses. This establishes sufficient conditions for an attentional, cognitive-enhancing effect of losses. Similarly, in a perceptual judgment task, taxing the participants’ payoffs (by either 10% or 30%) produced a positive effect on judgment performance even though more accurate judgments did not reduce the participants’ taxes (Yechiam and Hochman, 2013b).

Figure 1 The effect of losses on average choice rates in Yechiam et al. (2015). The task in this study involved 150 choices between three options with either a high, medium, or low expected value (EV). The probability of the two outcomes in the high and low EV options was equal (50%). Participants were not provided with the payoff distributions, and each choice resulted in feedback drawn from the selected alternative’s payoff distribution. Error terms denote standard errors.

The cognitive enhancement effect of losses thus seems to be pertinent even for small losses for which there is no loss aversion (Zeif and Yechiam, 2022). Importantly, the behavioral economics literature has often explained the effect of difficult goals on performance as evidence of loss aversion, due to the implied loss frame incorporated by goals (e.g., Allen et al., 2017; Corgnet et al., 2018). However, the sheer attentional effect of losses accounts for this effect of goals on performance and also explains why goals, like losses, improve performance even when the monetary amounts associated with goal failure are small (Corgnet et al., 2018; Gómez-Miñambres et al., 2012; Locke et al., 1968).

Mediators of BEEs

Most of the literature on process variables affected by BEEs has focused on physiological indices and brain processes that are the hallmark of increased attention, yet very few studies have actually examined the mediating effect of these process variables. For example, the presence of significant material consequences for successful performance was found to increase autonomic arousal and prefrontal activation in a multitude of studies (for some examples, see Gendolla and Richter, 2006; Gendolla and Wright, 2005; Richter and Gendolla, 2009; Sharpe, 2004; Wright, 1998; Wright and Kirby, 2001; Xue et al., 2009), yet none of these studies examined the mediating effect of brain activation patterns on performance. Similarly, the literature on losses shows that negative outcomes have a larger effect on arousal measures and prefrontal activation (Gehring and Willoughby, 2002; Hochman and Yechiam, 2011; Löw et al., 2008; Satterthwaite et al., 2007; Tom et al., 2007), yet no study has examined the mediating effects of these brain processes on cognitive performance and judgment biases.

A related question is whether these brain processes are similar to those produced by attention-enhancing medications such as methylphenidate. Research suggests some similarities. For example, both methylphenidate and incentives reduced activation in the default mode network (DMN) when administered to children with ADHD (Liddle et al., 2011). The DMN is a network of brain regions that is activated when individuals invest their attention in off-task activities, namely mind wandering and daydreaming. Similarly, both incentivization and methylphenidate increased activation of the anterior cingulate cortex in children with ADHD, as evidenced by greater error-related negativity and error positivity (Groom et al., 2013).

Other variables can potentially mediate the effect of BEEs. One is the sheer increase in processing time directed at the task at hand (Ayal et al., 2015; Bettman et al., 1990), which, interestingly, is not usually found in studies of pharmacological cognitive enhancers (Marraccini et al., 2016; see an exception in Franke et al., 2017). For example, several studies showed that losses increase deliberation time (Porcelli and Delgado, 2009; Xue et al., 2009; Yechiam and Telpaz, 2013). Other studies found that when participants are encouraged to deliberate, Cognitive Reflection Test performance improves (Patel et al., 2019; Sjastad and Baumeister, 2023; Szollosi et al., 2017) and judgment biases such as the conjunction fallacy (Scherer et al., 2017) and the contrast effect (Finucane et al., 2000) are reduced. Yet so far, no study has investigated whether deliberation time mediates the effect of incentives, and this remains an important challenge.

Moderators of BEEs

As noted above, Larrick (2004) and others suggested that cognitive effort only improves performance in judgment tasks that are relatively monotonous and for which participants possess the strategies needed to perform correctly. This implies an effect of task type, with a positive effect predicted in decision tasks where biases are driven by fast and frugal heuristics, but only in tasks where participants' cognitive effort is not high to begin with, for instance, due to low motivation or interest and/or the monotonous nature of the task (see also McGraw, 1978).

With respect to task type, some studies show that incentivization does not affect biases that are typically argued to be driven by automatic psychophysical transformations (though for most of these biases there are also theoretical explanations based on heuristics). For example, studies of delay discounting indicated that paying participants did not reduce the degree of discounting of future outcomes (Brañas-Garza et al., 2023; Johnson and Bickel, 2002; Lagorio and Madden, 2005; Locey et al., 2011; Matusiewicz et al., 2013). Incentivization also did not reduce the overweighting of small-probability events (Astebro et al., 2015; Barreda-Tarrazona et al., 2011) or the underweighting of high-probability events (Barreda-Tarrazona et al., 2011) in decisions from description. Nor did it reduce the endowment effect (Yechiam et al., 2017), sellers' tendency to price objects higher than potential buyers do (Kahneman et al., 1990). Thus, the effect of incentivization may be restricted to poor judgments driven by association-based biases.

With respect to effort level, however, the above-reviewed findings of a small but robust positive effect for one-shot judgments seem to contradict Larrick's (2004) and McGraw's (1978) view that the effect of incentivization is limited to simple and highly monotonous tasks. Rubinstein (2013) similarly argued that 'Human beings generally have an excellent imagination and starting a question with "Imagine that…" achieves a degree of focus at least equal to that created by a small monetary incentive' (p. 541). Yet as noted above, incentives were found to have an effect even in judgment tasks where participants are asked to imagine certain situations (such as the Cognitive Reflection Test). Nevertheless, an effect of baseline effort level may exist at the individual level.Footnote 9 As reviewed above, this has scarcely been examined, but some support is evident in the meta-analysis of the base rate fallacy, which suggests a larger effect of incentives when baseline performance is poorer (Yechiam and Zeif, 2023b).

Another important moderator was proposed by Hogarth et al. (1991). They suggested that in tasks where participants receive a negative (net) reward for their actions most of the time, incentives (and losses) will have a negative effect. Their findings on the effect of incentives can be explained as a tendency to underweight the small probability that methodically applying effort will yield a positive outcome. To an extent, these studies foreshadowed the underweighting of rare events in decisions from feedback (Barron and Erev, 2003), though the two literatures were not subsequently integrated.

The search for additional BEEs

Though the current paper focused on two relevant BEEs, others could be identified through additional research. The reviewed effect of incentives suggests that increased effort might also emerge from close relationships between the person making the decision and the people or objects affected by it. Though this is consistent with the increased arousal elicited by the presence of close others (Vogel et al., 2017), the supporting evidence is limited. McShane and Gal (2016) found that presenting hypothetical statistical problems as advice to one's close family, compared to advice to an (unknown) medical doctor, reduced judgment biases associated with misinterpretation of p-values. They also found that presenting these problems as advice to a person lowered these biases compared to answering them hypothetically. In a similar vein, Braga et al. (2015) found that the conjunction fallacy is diminished when the Linda problem addresses hypothetical individuals living in one's own country rather than in a different country.

On the other hand, and somewhat paradoxically, abstract construal, namely framing the problem as relating to others rather than oneself, to the past or future rather than the present, and to a place that is physically far away rather than close by (Trope and Liberman, 2010), was found to reduce certain judgment biases. Specifically, abstract construal reduced the rate of responses consistent with the availability heuristic (Braga et al., 2015) and increased utilitarian choices in the trolley problem (Xiao et al., 2015); this was attributed to a reduction in intuitive (or System 1) emotional processing. Still, the debiasing effect of the closeness of others in some settings, such as the conjunction fallacy (Braga et al., 2015), suggests that abstract construal may not be a robust BEE. Further research is required to disentangle the cognitive-enhancing effects of close vs. distant beneficiaries and of abstract vs. concrete phrasing.

General discussion

The main conclusion from this review is that the behavioral economics discipline has cognitive-enhancing 'tools' that are as effective (in terms of effect size) as the common drugs used for ADHD, both having a small-sized effect on cognitive performance and judgment biases. For instance, the effect size of incentivization on performance in the Cognitive Reflection Test in our meta-analysis (Yechiam and Zeif, 2023a) was around d = 0.2. This roughly falls between the effects of Adderall and methylphenidate on the Cognitive Reflection Test recorded by Yechiam and Zeif (2022), and is similar to the effect sizes in Ilieva et al.'s (2015) and Roberts et al.'s (2020) meta-analyses of the effect of methylphenidate on cognitive performance in healthy adults.

Why are these small-sized effects important, given that policymakers need to prioritize limited economic resources and that incentives do require resources? First, the findings suggest that the positive effect of incentivization is achieved even with small financial outcomes, and that the size of the outcome does not strongly moderate the effect (e.g., Yechiam and Zeif, 2023a, 2023b). In addition, the presence of potential losses seems to considerably increase the effect size of incentives on cognitive performance, as evident in the meta-analysis of Carpenter and Munro (2023), and other factors or conditions might further increase it. In particular, the effect of incentives might be stronger for certain segments of the population. For methylphenidate, for instance, there is ample evidence that the cognitive-enhancing effect is stronger for individuals with low baseline performance (e.g., Agay et al., 2014; Mehta et al., 2000; Zack and Poulos, 2009). This has not been extensively examined for behavioral economic variables such as incentivization and losses. Finally, in some settings it might be important to invest communal resources in order to avoid severe judgment and decision errors or to gain a competitive advantage.

The reviewed findings further suggest some parallels between BEEs and pharmacological cognitive enhancers. Both were found to have a small-sized effect on performance in judgment tasks where individuals typically make fast but incorrect choices, but neither was found to overcome biases that seem to be based more strongly on psychophysical transformations, such as risk aversion or the underweighting of rare events in decisions from experience (methylphenidate: Agay et al., 2010, 2014; Yechiam and Zeif, 2022; incentives: Bowman and Turnbull, 2003; Xu et al., 2019). Additionally, both methylphenidate and financial incentives were found to increase autonomic arousal and to affect similar brain networks (as reviewed above; cf. Liddle et al., 2011). There could also be a similarity in the adverse effects of methylphenidate and BEEs. For instance, methylphenidate (Hinshaw et al., 1992), incentivization (Benistant et al., 2022), and losses (Grolleau et al., 2016) were all found to increase task-related cheating behavior, possibly because of greater task effort and attention. Nevertheless, more focused and systematic studies are necessary to clarify the proximity between attention-enhancing drugs and incentives.

To conclude, the notion of BEEs presents quite a different approach to debiasing from the very commonly applied nudge approach. Nudges were originally designed as gentle guides toward the correct decision or judgment and away from poorer ones. BEEs do not guide individuals.Footnote 10 For example, incentivized performers merely know that performing the task well will result in better economic outcomes. Thus, the BEE variable is not the feedback but the reward or cost of making an appropriate or inappropriate decision. In accordance with this notion, in the reviewed meta-analyses of judgment studies, even though participants did not know in advance which particular judgment would yield greater incentives, merely knowing that incentives were present was found to improve performance.

Similarly, losses were found to improve performance even when the same losses were given for correct and incorrect answers, and, paradoxically, even when slightly larger losses were given for the correct answer than for the incorrect one. Thus, even though losses and negative framing are heavily used in the nudge literature, the notion of BEEs suggests that they have an independent positive effect on cognitive effort, which can facilitate task performance.

Funding statement

This paper was not funded.

Competing interest

The author declares none.

Footnotes

1 A somewhat lower effect size of about 0.1 was reported for some other cognitive enhancers in healthy individuals, including an effect of modafinil in non-sleep-deprived individuals (Kredlow et al., 2019; Roberts et al., 2020) and an effect of selective serotonin reuptake inhibitors (SSRIs) on some cognitive domains in nondepressed adults (Prado et al., 2018).

2 For instance, consider the Cognitive Reflection Test item: 'If it takes five machines five minutes to make five widgets, how long would it take 100 machines to make 100 widgets?' Addressing this item may evoke an immediate associative process (e.g., mentally completing the number list: 5, 5, 5, 100, 100, ?). However, the resulting judgment (an answer of 100) is wrong: Since each machine makes one widget in five minutes, 100 machines make 100 widgets in five minutes, and the correct answer is therefore five minutes.

3 Daood et al. (2022) reported a negative effect of methylphenidate on hypothetical delay discounting for healthy adults, but a similar effect was not found in Shiels et al.'s (2009) study of children with ADHD.

4 Indeed, some of the early economic criticism of Tversky and Kahneman's work on heuristics was that they did not sufficiently incentivize participants (Harrison, 1994).

5 An example of the former is the number of (above-criteria) floral arrangements put together in an hour, and an example of the latter would be the prominence of a single bouquet, for instance, as evaluated by an expert or by peers. Condly et al. (2003) noted that quantity indices are easier to define.

6 For example, in one such study participants were incentivized for 'carefully filling in the questionnaire' (Lubian and Untertrifaller, 2014).

7 This effect was smaller and not significant in the authors’ models that sub-grouped incentivized studies into full and probable incentives and also included additional covariates. These covariates led to missing cases (7% of the studies were removed), and several of the additional predictors were correlated with the incentivization variable.

8 Although this is not a judgment problem, one could consider this decision as involving a simple quantitative judgment. Under Yechiam and Hochman's (2013a, 2013b) model, the positive attentional effect of losses should hold as long as there are considerable expected value differences between options.

9 This is also implied by the inverted U-shaped association between initial autonomic arousal and performance, the so-called Yerkes–Dodson law (Yerkes and Dodson, 1908), which was extended to the initial attention level (see Kahneman, 1973).

10 Thus, BEEs are, in theory, implementable even when the designers of the decision architecture do not themselves know the correct decision.

References

Agay, N., Yechiam, E., Carmel, Z., & Levkovitz, Y. (2010). Non-specific effects of methylphenidate (Ritalin) on cognitive ability and decision-making of ADHD and healthy adults. Psychopharmacology, 210, 511–519.
Agay, N., Yechiam, E., Carmel, Z., & Levkovitz, Y. (2014). Methylphenidate enhances cognitive performance in adults with poor baseline capacities regardless of Attention-Deficit/Hyperactivity Disorder diagnosis. Journal of Clinical Psychopharmacology, 34, 261–265.
Allen, E. J., Dechow, P. M., Pope, D. G., & Wu, G. (2017). Reference-dependent preferences: Evidence from marathon runners. Management Science, 63, 1657–1672.
Arkes, H. R. (1991). Costs and benefits of judgment errors: Implications for debiasing. Psychological Bulletin, 110, 486–498.
Arkes, H. R., Dawes, R. M., & Christensen, C. (1986). Factors influencing the use of a decision rule in a probabilistic task. Organizational Behavior and Human Decision Processes, 37, 93–110.
Astebro, T., Mata, T., & Santos-Pinto, S.-P. (2015). Skewness seeking: Risk loving, optimism or overweighting of small probabilities? Theory and Decision, 78, 189–208.
Awasthi, V., & Pratt, J. (1990). The effects of monetary incentives on effort and decision performance: The role of cognitive characteristics. Accounting Review, 65, 797–811.
Ayal, S., Rusou, Z., Zakay, D., & Hochman, G. (2015). Determinants of judgment and decision making quality: The interplay between information processing style and situational factors. Frontiers in Psychology, 6, 1088.
Baillon, A., Bleichrodt, H., & Granic, G. D. (2022). Incentives in surveys. Journal of Economic Psychology, 93, 102552.
Barreda-Tarrazona, I., Jaramillo-Gutierrez, A., Navarro-Martinez, D., & Sabater-Grande, G. (2011). Risk attitude elicitation using a multi-lottery choice task: Real vs. hypothetical incentives. Spanish Journal of Finance and Accounting, 40, 609–624.
Barron, G., & Erev, I. (2003). Small feedback-based decisions and their limited correspondence to description-based decisions. Journal of Behavioral Decision Making, 16, 215–233.
Baumeister, R. F., Bratslavsky, E., Finkenauer, C., & Vohs, K. D. (2001). Bad is stronger than good. Review of General Psychology, 5, 323–370.
Benistant, J., Galeotti, F., & Villeval, M. C. (2022). Competition, information, and the erosion of morals. Journal of Economic Behavior & Organization, 204, 148–163.
Bernoulli, D. (1782/1954). Exposition of a new theory on the measurement of risk. Econometrica, 22, 23–36.
Bettman, J. R., Johnson, E. J., & Payne, J. W. (1990). A componential analysis of cognitive effort in choice. Organizational Behavior and Human Decision Processes, 45, 111–139.
Bowman, C. H., & Turnbull, O. H. (2003). Real versus facsimile reinforcers on the Iowa Gambling Task. Brain and Cognition, 53, 207–210.
Braga, J. N., Ferreira, M. B., & Sherman, S. J. (2015). The effects of construal level on heuristic reasoning: The case of representativeness and availability. Decision, 2, 216–227.
Brañas-Garza, P., Jorrat, D., Espín, A. M., & Sánchez, A. (2023). Paid and hypothetical time preferences are the same: Lab, field and online evidence. Experimental Economics, 26, 412–434.
Brañas-Garza, P., Kujal, P., & Lenkei, B. (2019). Cognitive reflection test: Whom, how, when. Journal of Behavioral and Experimental Economics, 82, 101455.
Camerer, C. F., & Hogarth, R. M. (1999). The effects of financial incentives in experiments: A review and capital-labor-production framework. Journal of Risk and Uncertainty, 19, 7–42.
Caputo, A. (2013). A literature review of cognitive biases in negotiation processes. International Journal of Conflict Management, 24, 374–398.
Carpenter, J., & Munro, D. (2023). Do losses trigger deliberative reasoning? IZA Institute of Labor Economics Working Paper. https://docs.iza.org/dp15292.pdf
Cerasoli, C. P., Nicklin, J. M., & Ford, M. T. (2014). Intrinsic motivation and extrinsic incentives jointly predict performance: A 40-year meta-analysis. Psychological Bulletin, 140, 980–1008.
Charness, G., Karni, E., & Levin, D. (2010). On the conjunction fallacy in probability judgment: New experimental evidence regarding Linda. Games and Economic Behavior, 68, 551–556.
Chater, N. (2018). Is the type 1/type 2 distinction important for behavioral policy? Trends in Cognitive Sciences, 22, 369–371.
Condly, S. J., Clark, R. E., & Stolovitch, H. D. (2003). The effects of incentives on workplace performance: A meta-analytic review of research studies. Performance Improvement Quarterly, 16, 46–63.
Corgnet, B., Gómez-Miñambres, J., & Hernán-González, R. (2018). Goal setting in the principal–agent model: Weak incentives for strong performance. Games and Economic Behavior, 109, 311–326.
Dale, D., Rudski, J., Schwarz, A., & Smith, E. (2007). Innumeracy and incentives: A ratio bias experiment. Judgment and Decision Making, 2, 243–250.
Daood, M., Peled-Avron, L., Ben-Hayun, R., Nevat, M., Aharon-Peretz, J., Tomer, R., & Admon, R. (2022). Fronto-striatal connectivity patterns account for the impact of methylphenidate on choice impulsivity among healthy adults. Neuropharmacology, 216, 109190.
Datta, S., & Mullainathan, S. (2014). Behavioral design: A new approach to development policy. Review of Income and Wealth, 60, 7–35.
Denes-Raj, V., & Epstein, S. (1994). Conflict between intuitive and rational processing: When people behave against their better judgment. Journal of Personality and Social Psychology, 66, 819–829.
Edwards, W. (1956). Reward probability, amount, and information as determiners of sequential two-alternative decisions. Journal of Experimental Psychology, 52, 177–188.
Elkinton, C. M. (1941). A study of student values and inconsistent reasoning. American Economic Review, 31, 557–559.
Enke, B., Gneezy, U., Hall, B., Martin, D., Nelidov, V., Offerman, T., & van de Ven, J. (2023). Cognitive biases: Mistakes or missing stakes? Review of Economics and Statistics, 105, 818–832.
Epley, N., & Gilovich, T. (2005). When effortful thinking influences judgmental anchoring: Differential effects of forewarning and incentives on self-generated and externally provided anchors. Journal of Behavioral Decision Making, 18, 199–212.
Ferraro, P. J., & Tracy, J. D. (2022). A reassessment of the potential for loss-framed incentive contracts to increase productivity: A meta-analysis and a real-effort experiment. Experimental Economics, 25, 1441–1466.
Finke, K., Dodds, C. M., Bublak, P., Regenthal, R., Baumann, F., Manly, T., & Müller, U. (2010). Effects of modafinil and methylphenidate on visual attention capacity: A TVA-based study. Psychopharmacology, 210, 317–329.
Finucane, M. L., Alhakami, A., Slovic, P., & Johnson, S. M. (2000). The affect heuristic in judgments of risks and benefits. Journal of Behavioral Decision Making, 13, 1–17.
Fischhoff, B. (1982). Debiasing. In Kahneman, D., Slovic, P., & Tversky, A. (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 422–444). Cambridge University Press.
Fischhoff, B., Slovic, P., & Lichtenstein, S. (1977). Knowing with certainty: The appropriateness of extreme confidence. Journal of Experimental Psychology: Human Perception and Performance, 3, 552–564.
Franke, A. G., Gransmark, P., Agricola, A., Schühle, K., Rommel, T., Sebastian, A., Balló, H. E., Gorbulev, S., Gerdes, C., Frank, B., Ruckes, C., Tüscher, O., & Lieb, K. (2017). Methylphenidate, modafinil, and caffeine for cognitive enhancement in chess: A double-blind, randomised controlled trial. European Neuropsychopharmacology, 27, 248–260.
Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19, 25–42.
Garbers, Y., & Konradt, U. (2014). The effect of financial incentives on performance: A quantitative review of individual and team-based financial incentives. Journal of Occupational and Organizational Psychology, 87, 102–137.
Gehring, W. J., & Willoughby, A. R. (2002). The medial frontal cortex and the rapid processing of monetary gains and losses. Science, 295, 2279–2282.
Gendolla, G. H. E., & Richter, M. (2006). Cardiovascular reactivity during performance under social observation: The moderating role of task difficulty. International Journal of Psychophysiology, 62, 185–192.
Gendolla, G. H. E., & Wright, R. A. (2005). Motivation in social settings: Studies of effort-related cardiovascular arousal. In Forgas, J. P., Williams, K., & von Hippel, W. (Eds.), Social motivation (pp. 71–90). Cambridge University Press.
Gigerenzer, G. (1996). On narrow norms and vague heuristics: A reply to Kahneman and Tversky (1996). Psychological Review, 103, 592–596.
Gigerenzer, G. (2004). Fast and frugal heuristics: The tools of bounded rationality. In Koehler, D. J., & Harvey, N. (Eds.), Blackwell handbook of judgment and decision making (pp. 37–61). Blackwell Publishing.
Gigerenzer, G., Hertwig, R., & Pachur, T. (Eds.) (2011). Heuristics: The foundations of adaptive behavior. Oxford University Press.
Gneezy, U., Meier, S., & Rey-Biel, P. (2011). When and why incentives (don't) work to modify behavior. Journal of Economic Perspectives, 25, 191–210.
Gómez-Miñambres, J., Corgnet, B., & Hernán-González, R. (2012). Goal setting and monetary incentives: When large stakes are not enough. Chapman University ESI Working Paper 12-24.
Grolleau, G., Kocher, M. G., & Sutan, A. (2016). Cheating and loss aversion: Do people cheat more to avoid a loss? Management Science, 62, 3428–3438.
Groom, M. J., Liddle, E. B., Scerif, G., Liddle, P. F., Batty, M. J., Liotti, M., & Hollis, C. P. (2013). Motivational incentives and methylphenidate enhance electrophysiological correlates of error monitoring in children with attention deficit/hyperactivity disorder. Journal of Child Psychology and Psychiatry, 54, 836–845.
Grune-Yanoff, T., & Hertwig, R. (2016). Nudge versus Boost: How coherent are policy and theory? Minds & Machines, 26, 149–183.
Harrison, G. W. (1994). Expected utility theory and the experimentalists. In Hey, J. D. (Ed.), Experimental economics (pp. 43–76). Springer-Verlag.
Hertwig, R., & Ortmann, A. (2001). Experimental practices in economics: A methodological challenge for psychologists? Behavioral and Brain Sciences, 24, 383–451.
Hinshaw, S. P., Heller, T., & McHale, J. P. (1992). Covert antisocial behavior in boys with attention-deficit hyperactivity disorder: External validation and effects of methylphenidate. Journal of Consulting and Clinical Psychology, 60, 274–281.
Hochman, G., & Yechiam, E. (2011). Loss aversion in the eye and in the heart: The Autonomic Nervous System's responses to losses. Journal of Behavioral Decision Making, 24, 140–156.
Hogarth, R. M., Gibbs, B. J., McKenzie, C. R., & Marquis, M. A. (1991). Learning from feedback: Exactingness and incentives. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17, 734–752.
Ilieva, I. P., Hook, C. J., & Farah, M. J. (2015). Prescription stimulants' effects on healthy inhibitory control, working memory, and episodic memory: A meta-analysis. Journal of Cognitive Neuroscience, 27, 1069–1089.
Johnson, M. W., & Bickel, W. K. (2002). Within-subject comparison of real and hypothetical money rewards in delay discounting. Journal of the Experimental Analysis of Behavior, 77, 129–146.
Kahneman, D. (1973). Attention and effort. Prentice-Hall.
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
Kahneman, D., Knetsch, J. L., & Thaler, R. H. (1990). Experimental tests of the endowment effect and the Coase theorem. Journal of Political Economy, 98, 1325–1348.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263–291.
Keren, G., & Schul, Y. (2009). Two is not always better than one: A critical evaluation of two-system theories. Perspectives on Psychological Science, 4, 533–550.
Keren, G., & Teigen, K. H. (2004). Yet another look at the heuristics and biases approach. In Koehler, D. J., & Harvey, N. (Eds.), Blackwell handbook of judgment and decision making (pp. 89–109). Blackwell Publishing.
Kredlow, M. A., Keshishian, A., Oppenheimer, S., & Otto, M. W. (2019). The efficacy of modafinil as a cognitive enhancer: A systematic review and meta-analysis. Journal of Clinical Psychopharmacology, 39, 455–461.
Lagorio, C. H., & Madden, G. J. (2005). Delay discounting of real and hypothetical rewards III: Steady-state assessments, forced-choice trials, and all real rewards. Behavioral Processes, 31, 173–187.
Larrick, R. P. (2004). Debiasing. In Koehler, D., & Harvey, N. (Eds.), Blackwell handbook of judgment and decision making (pp. 316–337). Blackwell Publishing.
Lefebvre, M., Vieider, F. M., & Villeval, M. C. (2011). The ratio bias phenomenon: Fact or artifact? Theory and Decision, 71, 615–641.
Lejarraga, T., & Hertwig, R. (2017). How the threat of losses makes people explore more than the promise of gains. Psychonomic Bulletin & Review, 24, 708–720.
Li, L., Maniadis, Z., & Sedikides, C. (2021). Anchoring in economics: A meta-analysis of studies on willingness-to-pay and willingness-to-accept. Journal of Behavioral and Experimental Economics, 90, 101629.
Liddle, E. B., Hollis, C., Batty, M. J., Groom, M. J., Totman, J. J., Liotti, M., Scerif, G., & Liddle, P. F. (2011). Task-related default mode network modulation and inhibitory control in ADHD: Effects of motivation and methylphenidate. Journal of Child Psychology and Psychiatry, 52, 761–771.
Lilienfeld, S. O., Ammirati, R., & Landfield, K. (2009). Giving debiasing away: Can psychological research on correcting cognitive errors promote human welfare? Perspectives on Psychological Science, 4, 390–398.
Locey, M. L., Jones, B. A., & Rachlin, H. (2011). Real and hypothetical rewards in self-control and social discounting. Judgment and Decision Making, 6, 522–564.
Locke, E. A., Bryan, J. F., & Kendall, L. M. (1968). Goals and intentions as mediators of the effects of monetary incentives on behavior. Journal of Applied Psychology, 52, 104–121.
Löw, A., Lang, P. J., Smith, J. C., & Bradley, M. M. (2008). Both predator and prey: Emotional arousal in threat and reward. Psychological Science, 19, 865–873.
Lubian, D., & Untertrifaller, A. (2014). Cognitive abilities, stereotypes and gender segregation in the workplace. Economics Bulletin, 32, 1268–1282.
Mankiw, N. G. (2018). Principles of economics (8th ed.). South West College ISE.
Marraccini, M. E., Weyandt, L. L., Rossi, J. S., & Gudmundsdottir, B. G. (2016). Neurocognitive enhancement or impairment? A systematic meta-analysis of prescription stimulant effects on processing speed, decision-making, planning, and cognitive perseveration. Experimental Clinical Psychopharmacology, 24, 269–284.
Matusiewicz, A. K., Carter, E. C., Landes, R. D., & Yi, Y. (2013). Statistical equivalence and test–retest reliability of delay and probability discounting using real and hypothetical rewards. Behavioural Processes, 100, 116–122.
McGraw, K. O. (1978). The detrimental effects of reward on performance: A literature review and a prediction model. In Lepper, M. R., & Greene, D. (Eds.), The hidden costs of reward (pp. 33–60). Erlbaum.
McShane, B. B., & Gal, D. (2016). Blinding us to the obvious? The effect of statistical training on the evaluation of evidence. Management Science, 62, 17071718.CrossRefGoogle Scholar
Mehta, M. A., Owen, A. M., Sahakian, B. J., Mavaddat, N., Pickard, J. D., & Robbins, T. W. (2000). Methylphenidate enhances working memory by modulating discrete frontal and parietal lobe regions in the human brain. Journal of Neuroscience, 20, RC65.CrossRefGoogle ScholarPubMed
Mehta, M. A., & Riedel, W. J. (2006). Dopaminergic enhancement of cognitive function. Current Pharmaceutical Design, 12, 24872500.CrossRefGoogle ScholarPubMed
Mill, J. S. (1863). Utilitarianism. Collins.Google Scholar
Münscher, M., Vetter, T., & Scheuerle, T. (2016). A review and taxonomy of choice architecture techniques. Journal of Behavioral Decision Making, 29, 511524.CrossRefGoogle Scholar
Patel, N., Baker, G., & Scherer, L. D. (2019). Evaluating the cognitive reflection test as a measure of intuition/reflection, numeracy, and insight problem solving, and the implications for understanding real-world judgments and beliefs. Journal of Experimental Psychology: General, 148, 21292153.CrossRefGoogle ScholarPubMed
Peled-Avron, L., Goren, H. G., Brande-Eilat, N., Dorman-Ilan, S., Segev, A., Feffer, K., Gvirtz Problovski, H. Z., Levkovitz, Y., Barnea, Y., Lewis, Y. D., & Tomer, R. (2021). Methylphenidate reduces orienting bias in healthy individuals. Journal of Psychopharmacology, 35, 760767.CrossRefGoogle ScholarPubMed
Pievsky, M. A., & McGrath, R. E. (2018). Neurocognitive effects of methylphenidate in adults with attention-deficit/hyperactivity disorder: A meta-analysis. Neuroscience & Biobehavioral Reviews, 90, 447455.CrossRefGoogle ScholarPubMed
Porcelli, A. J., & Delgado, M. R. (2009). Acute stress modulates risk taking in financial decision making. Psychological Science, 20, 278283.CrossRefGoogle ScholarPubMed
Prado, C. E., Watt, S., & Crowe, S. F. (2018). A meta-analysis of the effects of antidepressants on cognitive functioning in depressed and non-depressed samples. Neuropsychology Review, 28, 3272.CrossRefGoogle ScholarPubMed
Richter, M., & Gendolla, G. H. E. (2009). The heart contracts to reward: Monetary incentives and preejection period. Psychophysiology, 46, 451457.CrossRefGoogle ScholarPubMed
Roberts, C. A., Jones, A., Sumnall, H., Gage, S. H., & Montgomery, C. (2020). How effective are pharmaceuticals for cognitive enhancement in healthy adults? A series of meta-analyses of cognitive performance during acute administration of modafinil, methylphenidate and D-amphetamine. European Neuropsychopharmacology, 38, 4062.CrossRefGoogle ScholarPubMed
Rozin, P., & Royzman, E. B. (2001). Negativity bias, negativity dominance, and contagion. Personality and Social Psychology Review, 5, 269320.CrossRefGoogle Scholar
Rubinstein, A. (2013). Response time and decision making: An experimental study. Judgment and Decision Making, 8, 540551.CrossRefGoogle Scholar
Satterthwaite, T. D., Green, L., Myerson, J., Parker, J., Ramaratnam, M., & Buckner, R. L. (2007). Dissociable but inter-related systems of cognitive control and reward during decision making: Evidence from pupillometry and event-related fMRI. Neuroimage, 37, 10171031.CrossRefGoogle ScholarPubMed
Scherer, L. D., Yates, J. F., Baker, S. G., & Valentine, K. D. (2017). The influence of effortful thought and cognitive proficiencies on the conjunction fallacy: Implications for dual-process theories of reasoning and judgment. Personality and Social Psychology Bulletin, 43, 874887.CrossRefGoogle ScholarPubMed
Sharpe, L. (2004). Patterns of autonomic arousal in imaginal situations of winning and losing in problem gambling. Journal of Gambling Studies, 20, 95104.CrossRefGoogle ScholarPubMed
Shiels, K., Hawk, L. W., Reynolds, B., Mazzullo, R. J., Rhodes, J. D., Pelham, W. E., Waxmonsky, J. G., & Gangloff, B. P. (2009). Effects of methylphenidate on discounting of delayed rewards in attention deficit/hyperactivity disorder. Experimental and Clinical Psychopharmacology, 17, 291301.CrossRefGoogle ScholarPubMed
Siegel, S. (1961). Decision making and learning under varying conditions of reinforcement. Annals of the New York Academy of Sciences, 89, 766783.CrossRefGoogle Scholar
Simmons, J. P., LeBoeuf, R. A., & Nelson, L. D. (2010). The effect of accuracy motivation on anchoring and adjustment: Do People adjust from provided anchors? Journal of Personality and Social Psychology, 99, 917932.CrossRefGoogle ScholarPubMed
Sirota, M., Kostovičová, L., Juanchich, M., Dewberry, C., & Marshall, A. C. (2020). Measuring cognitive reflection without maths: Developing and validating the verbal cognitive reflection test. Journal of Behavioral Decision Making, 34, 322343.CrossRefGoogle Scholar
Sjastad, H., & Baumeister, R. F. (2023). Fast optimism, slow realism? Causal evidence for a two-step model of future thinking. Cognition, 236, 105447.CrossRefGoogle ScholarPubMed
Stanovich, K. E., Toplak, M., & West, R. (2008). The development of rational thought: A taxonomy of heuristics and biases. Advances in Child Development and Behavior, 36, 251286.CrossRefGoogle ScholarPubMed
Stanovich, K. E., & West, R. F. (2004). Evolutionary versus instrumental goals: How evolutionary psychology misconceives human rationality. In Over, D. E. (Ed.), Evolution and the psychology of thinking (pp. 171230). Psychology Press.Google Scholar
Submarine Channel (2018). Bottomless soup bowls - minimovies: Ig Nobel prizes (Ep. 5/6). https://www.youtube.com/watch?v=MD48Qa9eoXc.Google Scholar
Sunstein, C. R. (2016). People prefer System 2 nudges (kind of). Duke Law Journal, 66, 121168.Google Scholar
Szollosi, A., Bago, B., Szaszi, B., & Aczel, B. (2017). Exploring the determinants of confidence in the bat-and-ball problem. Acta Psychologica, 180, 17.CrossRefGoogle ScholarPubMed
Thaler, R. H., & Sunstein, C. R. (2008). Nudge. Yale University Press.Google Scholar
Tom, S. M., Fox, C. R., Trepel, C., & Poldrack, R. A. (2007). The neural basis of loss aversion in decision-making under risk. Science, 315, 515518.CrossRefGoogle ScholarPubMed
Trope, Y., & Liberman, N. (2010). Construal-level theory of psychological distance. Psychological Review, 117, 440463.CrossRefGoogle ScholarPubMed
Tversky, A., & Edwards, W. (1966). Information versus reward in binary choices. Journal of Experimental Psychology, 71, 680683.CrossRefGoogle ScholarPubMed
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 11241131.CrossRefGoogle ScholarPubMed
Tversky, A., & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90, 293315.CrossRefGoogle Scholar
Vogel, N., Ram, N., Conroy, D. E., Pincus, A. L., & Gerstorf, D. (2017). How the social ecology and social situation shape individuals’ affect valence and arousal. Emotion, 17, 509527.CrossRefGoogle ScholarPubMed
Welsh, M. B., Burns, N. R., & Delfabbro, P. H. (2013). The cognitive reflection test: How much more than numerical ability? In Knauff, M., Sebanz, N., Pauen, M., & Wachsmuth, I. (Eds.), Proceedings of the 35th annual meeting of the cognitive science society (pp. 15871592). Cognitive Science Society.Google Scholar
Wright, R. A. (1998). Ability perception and cardiovascular response to behavioral challenge. In Kofta, M., Weary, G., & Sedek, G. (Eds.), Control in action: Cognitive and motivational mechanisms (pp. 197232). Plenum Press.Google Scholar
Wright, R. A., & Kirby, L. D. (2001). Effort determination of cardiovascular response: An integrative analysis with applications in social psychology. In Zanna, M. P. (Ed.), Advances in experimental social psychology (pp. 255307). Academic Press.Google Scholar
Wright, W. F., & Aboul-Ezz, M. E. (1988). Effects of extrinsic incentives on the quality of frequency assessments. Organizational Behavior and Human Decision Processes, 41, 143152.CrossRefGoogle Scholar
Wright, W. F., & Anderson, U. (1989). Effects of situation familiarity and financial incentives on use of the anchoring and adjustment heuristic for probability assessment. Organizational Behavior and Human Decision Processes, 44, 6882.CrossRefGoogle Scholar
Xiao, W., Wu, Q., Yang, Q., Zhou, L., Jiang, Y., Zhang, J., & Peng, J. (2015). Moral hypocrisy on the basis of construal level: To be a utilitarian personal decision maker or to be a moral advisor? PLOS ONE, 10, e0117540.CrossRefGoogle ScholarPubMed
Xu, S., Xiao, Z., & Rao, H. (2019). Hypothetical versus real monetary reward decrease the behavioral and affective effects in the Balloon Analogue Risk Task. Experimental Psychology, 66, 221230.CrossRefGoogle ScholarPubMed
Xue, G., Lu, Z., Levin, I. P., Weller, J. A., Li, X., & Bechara, A. (2009). Functional dissociations of risk and reward processing in the medial prefrontal cortex. Cerebral Cortex, 19, 10191027.CrossRefGoogle ScholarPubMed
Yang, Y., Hsee, C. K., & Li, X. (2021). Prediction biases: An integrative review. Current Directions in Psychological Science, 30, 195201.CrossRefGoogle Scholar
Yechiam, E., Ashby, N. J. S., & Hochman, G. (2019). Are we attracted by losses? Boundary conditions for the approach and avoidance effects of losses. Journal of Experimental Psychology: Learning, Memory, & Cognition, 45, 591605.Google ScholarPubMed
Yechiam, E., Ashby, N. J. S., & Pachur, T. (2017). Who’s biased? A meta-analysis of buyer-seller differences in the pricing of risky prospects. Psychological Bulletin, 143, 543563.CrossRefGoogle Scholar
Yechiam, E., & Hochman, G. (2013a). Losses as modulators of attention: Review and analysis of the unique effects of losses over gains. Psychological Bulletin, 139, 497518.CrossRefGoogle ScholarPubMed
Yechiam, E., & Hochman, G. (2013b). Loss-aversion or loss-attention: The impact of losses on cognitive performance. Cognitive Psychology, 66, 212231.CrossRefGoogle ScholarPubMed
Yechiam, E., Retzer, M., Telpaz, A., & Hochman, G. (2015). Losses as ecological guides: Minor losses lead to maximization and not to avoidance. Cognition, 139, 1017.CrossRefGoogle ScholarPubMed
Yechiam, E., & Telpaz, A. (2013). Losses induce consistency in risk taking even without loss aversion. Journal of Behavioral Decision Making, 26, 3140.CrossRefGoogle Scholar
Yechiam, E., & Zeif, D. (2022). The effect of methylphenidate and mixed amphetamine salts on cognitive reflection: A field study. Psychopharmacology, 239, 455463.CrossRefGoogle ScholarPubMed
Yechiam, E., & Zeif, D. (2023a). Revisiting the effect of incentivization on cognitive reflection: A meta-analysis. Journal of Behavioral Decision Making, 36, e2286.CrossRefGoogle Scholar
Yechiam, E., & Zeif, D. (2023b). The effect of incentivization on the conjunction fallacy in judgments: A meta-analysis. Psychological Research, 87, 23362344.CrossRefGoogle ScholarPubMed
Yerkes, R. M., & Dodson, J. D. (1908). The relation of strength of stimulus to rapidity of habit formation. Journal of Comparative Neurology and Psychology, 18, 459482.CrossRefGoogle Scholar
Zack, M., & Poulos, C. X. (2009). Effects of the atypical stimulant modafinil on a brief gambling episode in pathological gamblers with high vs. low impulsivity. Journal of Psychopharmacology, 23, 660671.CrossRefGoogle ScholarPubMed
Zeif, D., & Yechiam, E. (2022). Loss aversion (simply) does not materialize for smaller losses. Judgment and Decision Making, 17, 10151042.CrossRefGoogle Scholar
Figure 1. The effect of losses on average choice rates in Yechiam et al. (2015). The task in this study involved 150 choices between three options with a high, medium, or low expected value (EV). In the high and low EV options, each of the two possible outcomes occurred with equal probability (50%). Participants were not shown the payoff distributions in advance; each choice produced feedback drawn from the selected option’s payoff distribution. Error bars denote standard errors.
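The decisions-from-feedback procedure described in this caption can be sketched in simulation. The snippet below is an illustrative sketch only: the payoff values, the exploration rate, and the running-mean learner are placeholder assumptions, not the actual payoffs or the model used by Yechiam et al. (2015); it merely shows the trial structure (repeated choice, feedback drawn from the chosen option’s distribution, choice rates as the outcome measure).

```python
import random

def simulate_task(n_trials=150, epsilon=0.1, seed=0):
    """Simulate a decisions-from-feedback task with three options.

    Payoff distributions are hypothetical placeholders:
      high EV:   50% chance of 4, 50% chance of 0  (EV = 2)
      medium EV: 1 for certain                     (EV = 1)
      low EV:    50% chance of 1, 50% chance of -1 (EV = 0)
    The agent tracks each option's running mean payoff and picks the
    best-so-far option with probability 1 - epsilon, otherwise explores.
    Returns the proportion of trials on which each option was chosen.
    """
    rng = random.Random(seed)
    payoffs = {
        "high": lambda: rng.choice([4, 0]),
        "medium": lambda: 1,
        "low": lambda: rng.choice([1, -1]),
    }
    totals = {k: 0.0 for k in payoffs}
    counts = {k: 0 for k in payoffs}
    choices = []
    for _ in range(n_trials):
        if rng.random() < epsilon or not all(counts.values()):
            option = rng.choice(list(payoffs))  # explore / initial sampling
        else:
            option = max(payoffs, key=lambda k: totals[k] / counts[k])
        reward = payoffs[option]()  # feedback drawn from chosen option only
        totals[option] += reward
        counts[option] += 1
        choices.append(option)
    return {k: choices.count(k) / n_trials for k in payoffs}

rates = simulate_task()
```

With enough trials, such a learner tends to concentrate its choices on the high-EV option, which is the kind of maximization rate plotted in the figure.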