
The rationality wars: a personal reflection

Published online by Cambridge University Press:  22 November 2024

Gerd Gigerenzer*
Affiliation:
Max Planck Institute for Human Development, Berlin, Germany

Abstract

During the Cold War, logical rationality – consistency axioms, subjective expected utility maximization, Bayesian probability updating – became the bedrock of economics and other social sciences. In the 1970s, logical rationality came under attack from the heuristics-and-biases program, which interpreted the theory as a universal norm of how individuals should make decisions, although such an interpretation is absent in von Neumann and Morgenstern’s foundational work and was dismissed by Savage. Deviations of people’s judgments from the theory were thought to reveal stable cognitive biases, which were in turn thought to underlie social problems, justifying governmental paternalism. In the 1990s, the ecological rationality program entered the field, based on the work of Simon. It moves beyond the narrow bounds of logical rationality and analyzes how individuals and institutions make decisions under uncertainty and intractability. This broader view has shown that many supposed cognitive biases are marks of intelligence rather than irrationality, and that heuristics are indispensable guides in a world of uncertainty. The passionate debate between the three research programs became known as the rationality wars. I provide a brief account from the ‘frontline’ and show how the parties understood in strikingly different ways what the war entailed.

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press.

The battlefield for the rationality wars was laid out the moment the hegemony of Cold War rationality began to erode. Cold War rationality, in the form of game theory and rational choice theory, had risen to prominence after World War II. The ideal of a rule-following logical rationality embodied the hope that calculative reasoning could tame the immediate threat of a nuclear war, serving as a safeguard against the unpredictable emotions of a Khrushchev or a Kennedy (Erickson et al., 2013). Logical rationality – maximization of subjective expected utility, the consistency axioms, Bayesian probability updating, Nash equilibrium and backward induction – promised an intellectual weaponry to guide the West through the Vietnam War and the Cold War. At the same time, it became the bedrock of much of economics and other social sciences, and one of the crowning achievements of the human intellect.

In the 1970s, logical rationality came under attack from a group of psychologists who argued that human decision-making is fraught with systematic biases and that behavior is often irrational, even predictably so. The attack originated with cognitive psychologists Tversky and Kahneman, whose agenda became known as the heuristics-and-biases program (Tversky and Kahneman, 1974). Yet rather than challenging logical rationality as a norm, these scholars in fact embraced it. When a discrepancy was shown between people’s judgments and logical rationality, they attributed it to a flaw in the human mind, never in the norm, often without mincing words. People’s intuitions were called ‘a multitude of sins’, ‘indefensible’, ‘self-defeating’ and ‘ludicrous’ (Tversky and Kahneman, 1971, pp. 107–110). Social psychologists joined the ranks. People were said to be not only often mistaken about the causes of their own behavior but, in principle, incapable of identifying them, because they cannot access the processes in their minds (Nisbett and Wilson, 1977). The sacking of introspection went hand-in-hand with the claim, reminiscent of Skinner’s Beyond Freedom and Dignity, that behavior is 99% automatic and easily manipulable by priming (Bargh, 1997; Leys, 2024). A general picture emerged of people as inherently liable to irrational biases, lacking awareness and control, and in need of being steered by paternalistic government policies such as nudging (Thaler and Sunstein, 2008). I will refer to this ensemble as the cognitive bias program.

In the 1990s, a third party entered the field, initiated by my research group (Gigerenzer et al., 1999, 2011). Its philosophy is based on the work of Herbert Simon (1990), who argued that to understand behavior one needs to analyze both cognition and its environment (which he likened to the two blades of a pair of scissors), as well as their match. Thus, the nature of rationality is not internal consistency but functionality in the world. Because the maximization of expected utility relies on the mathematics of optimization, it can deal neither with ill-defined situations of uncertainty (e.g., the introduction of a new product) nor with well-defined situations of intractability (e.g., playing chess). The program of ecological rationality, more general than logical rationality, addresses these situations. It models the heuristic processes people use and identifies the environments in which they are successful. As a result, many so-called cognitive biases turn out to be functional in the real world (Gigerenzer, 2018). Simon’s original program of what he named ‘bounded rationality’ emerged alongside the logical rationality approach but was not fully pursued until later. Rather, Simon’s term was taken over by proponents of logical rationality to mean optimization under constraints, and by the cognitive bias program to mean the opposite, irrationality. In response to this dual takeover, I introduced the term ‘ecological rationality’.

The debate between these three views on human nature – as Homo economicus, Homer Simpson or Homo heuristicus – has been christened the ‘rationality wars’ by philosophers Samuels et al. (2002) and Sturm (2012), and the ‘great rationality debate’ by psychologist Stanovich (2011).

Multiple battle lines define the debate, which cuts straight across disciplinary borders. Is logical rationality a universal norm of rationality, applicable everywhere and always, or is its territory bounded? To what extent does human behavior deviate from logical rationality, and does it matter? Should a theory of rational behavior be purely abstract, blind to context and human experience, or should it explicitly reflect the structure of the environment and put some psychological flesh on the bare-bones models?

In this article, I provide a brief account of the rationality wars. I have stood in the front line and witnessed first-hand how fierce debates about rationality can be. Therefore, what you are reading is a personal view of a contestant, not an observer. But I will do my best to represent all parties fairly.

Cold War rationality

During the Vietnam War, the philosopher John Searle visited a friend who was a high official of the Defense Department in the Pentagon. As Searle (2001, p. 6) reported:

I tried to argue him out of the war policy the United States was following, particularly the policy of bombing North Vietnam. He had a Ph.D. in mathematical economics. He went to the blackboard and drew the curves of traditional microeconomic analysis; and then said, ‘Where these two curves intersect, the marginal utility of resisting is equal to the marginal disutility of being bombed. At that point, they have to give up. All we are assuming is that they are rational. All we are assuming is that the enemy is rational!’

The North Vietnamese, however, continued to fight until the US forces retreated in 1973.

The analytic foundations of logical rationality (also known as axiomatic rationality; Rizzo and Whitman, 2020, pp. 52–55) were laid in the 1940s and 1950s, and the theory rose to prominence in the 1960s. The timing may not be entirely accidental. The Berlin crisis of 1961, the Cuban missile crisis of 1962, the crushing of the Prague Spring by the USSR in 1968, the Vietnam War (1955–1975) and the persistent threat of long-range atomic missiles embodied the global climate of nuclear threat.

Beginning with the 1964 Berkeley conference ‘Strategic Interaction and Conflict’, a group of eminent scholars met at the RAND Corporation in Washington and at various conferences to discuss how to save the planet from nuclear war. This group of luminaries included economist Thomas Schelling from Harvard, economist Daniel Ellsberg (who later became famous as the former Defense Department official who passed on the Pentagon Papers), economist Oskar Morgenstern from Princeton University, and sociologist Erving Goffman from the University of California at Berkeley. Their goal was to rescue the earth not only from nuclear annihilation but also from the hazardous human mind. In Schelling’s (1960, p. 292) words, ‘The point is that accidents do not cause war. Decisions cause war’.

This group hoped that a rational calculus could tame the immediate threat that human mischief, arrogance, or lunacy might lead to global nuclear war. Rationality should be formal, independent of personality and context, and calculative. Algorithmic, optimal and impersonal – this was the ideal of Cold War rationality. The calculus would transform our uncertain world into a certain one, making the enemy predictable.

The goal of the RAND Corporation group was grander than the Vietnam War and the Cold War: to discover pure rationality, valid universally and eternally, independent of the problem at hand, and ideally to be used mechanically by a computer (Erickson et al., 2013, p. 177). This grand vision added unexpected new culprits to the old list of hindrances to reason. For centuries, the guilty parties were the passions, sloppy thinking, ignorance and madness. Cold War rationality added to this list two strange new bedfellows that stood in the way of calculating optimal solutions: uncertainty and intractability (Erickson et al., 2013, p. 9). Uncertainty means radical uncertainty, or unknown unknowns, to use the words of former US Secretary of Defense Rumsfeld, where the complete set of future possible states and their consequences cannot be known. Intractability means that the optimal course of action can be computed by neither humans nor computers. It is best known from games such as chess, but many Bayesian computations and their approximations are also intractable (Kwisthout et al., 2011). Situations of uncertainty and intractability rule out optimization and, with it, the usefulness of the theory of maximizing subjective expected utility.

Many Cold War rationalists were aware of the limits of logical rationality. Ellsberg (1961) argued that Savage’s consistency axioms have questionable normative power. Morgenstern pointed out that errors in the antiballistic missile system were inevitable. The Cold War rationalists were their own severest critics, uneasy with logical rationality’s blind spot for physical and psychological factors.

At this point in the debate, the first battle lines emerged – battle lines that are contested to this day.

Battle lines

Universality

Is logical rationality universal or bounded? In an extraordinary coup, the maximization of subjective expected utility was crowned by Milton Friedman and other social scientists as a universal theory of rational decision-making, applicable everywhere and always. Its limits were dismissed as mere ‘anomalies’. Ellsberg’s normative critique was largely forgotten and his widely cited experimental results labeled a ‘paradox’, as was Allais’ normative critique earlier. Yet this coup was not what the founders of subjective expected utility theory had in mind. Savage, who had axiomatized the theory, considered its domain very narrow, not universal. He warned his readers that it would be ‘ridiculous’ to apply subjective expected utility theory to situations of uncertainty where unforeseen events can happen or to intractable problems (Savage, 1954, p. 16). Savage believed that the theory is normative only in small worlds, where all possible future states of the world and all consequences of one’s actions are known for certain. Likewise, a normative interpretation of the choice axioms or the maximization of expected utility is absent in the three editions of von Neumann and Morgenstern’s Theory of Games and Economic Behavior (von Neumann and Morgenstern, 1944, 1947, 1953).

Purpose

Is logical rationality intended to describe how we behave, to prescribe how we should behave, or something else altogether? The hope of Cold War rationality was that the theory described how the warring parties actually thought. If the theory were merely normative, passions and madness might nevertheless lead the world into nuclear war. However, Allais and Ellsberg demonstrated that people’s preferences, including Savage’s own intuitions, systematically deviate from the choice axioms. Friedman (1953) ruled that psychological realism does not matter; the theory states only that people behave as if they maximize their subjective expected utility, not that they actually do so. Following Friedman’s as-if philosophy, many neoclassical economists have shown little interest in the actual process of decision-making. Friedman (1953) declared that a theory should be evaluated ‘only by seeing whether it yields predictions that are good enough for the purpose in hand or that are better than predictions from alternative theories’ (p. 41).

How good, then, are its predictions? The failure of the Federal Reserve’s macroeconomic models to predict the financial crisis of 2008 is a case in point. But back in 1983, economist Heiner had already noted that the list of unambiguous predictions derived from optimization models is at best very short (p. 561). In 2014, a group of economists published a review of the empirical evidence from the previous 50 years on how well behavior is predicted by utility functions, including utility-of-income functions, utility-of-wealth functions, and the value function in prospect theory. They concluded: ‘Their power to predict out-of-sample is in the poor-to-nonexistent range, and we have seen no convincing victories over naïve alternatives’ (D. Friedman et al., 2014, p. 3).

Note the term ‘out-of-sample prediction’. It means that a model is calibrated on one part of the data and tested on the other part. This is standard methodology in machine learning, but not in economics (Artinger et al., 2022). Although highly parameterized as-if models such as cumulative prospect theory excel at data fitting, they do not necessarily predict well. All sorts of behavior are consistent with some utility model yet not predicted by it.
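The distinction between fitting and predicting can be made concrete with a small simulation. This is a minimal sketch; the data, sample sizes and ‘models’ are hypothetical, chosen only to illustrate the logic of out-of-sample testing:

```python
import random

random.seed(3)

# Hypothetical data: noisy observations with no exploitable structure.
data = [random.gauss(0.0, 1.0) for _ in range(1000)]
train, test = data[:500], data[500:]

def mse(predictions, actual):
    """Mean squared error of a list of predictions."""
    return sum((p - a) ** 2 for p, a in zip(predictions, actual)) / len(actual)

# Highly parameterized model: one free parameter per observation, so it
# reproduces the calibration data exactly (perfect data fitting).
in_sample_error = mse(train, train)   # 0.0 by construction

# Out of sample, those fitted parameters carry no information about the
# new observations, so the excellent fit does not translate into prediction.
out_of_sample_error = mse(train, test)

# Naive alternative: predict the calibration mean for every new case.
mean = sum(train) / len(train)
naive_error = mse([mean] * len(test), test)

print(in_sample_error, out_of_sample_error, naive_error)
```

In this toy setting the flexible model fits its calibration sample perfectly yet predicts the held-out sample worse than the naive mean, which is the general pattern behind the quoted conclusion.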

The two battle lines – Is logical rationality universal or narrow? What is its actual purpose? – continue to be clouded in the fog of war. As in real warfare, positions are far from consistent and shift rapidly. To protect logical rationality, its models have rarely been tested in out-of-sample prediction, its bounds are rarely specified, and it continues to be variously presented as normative, descriptive, ‘as-if’, or all of the above.

Ironically, the psychologists who began to challenge logical rationality in the 1970s took it more seriously as a universal and normative theory than did many economists who understood its limits. Unlike Friedman, they also took the theory literally as a description of individual behavior, and proceeded to assault its descriptive validity.

Cognitive biases

One might think that statistical reasoning has always been a core topic in psychological research. Yet not until the beginning of the Cold War did interest in it emerge in studies of human thought (Gigerenzer and Murray, 2015). In 1967, psychologists Peterson and Beach reviewed 110 articles and concluded that the laws of statistics provide ‘a good first approximation for a psychological theory of inference’ (p. 43). Their article was aptly – apart from the gender bias – entitled ‘Man as intuitive statistician’. Similarly, Edwards (1968) concluded that human beings are pretty good Bayesians, albeit conservative ones, and Piaget and Inhelder (1951/1975) concluded that by the age of 12, children’s intuitions approximate the laws of probability. Note that these studies, unlike the later heuristics-and-biases program, tested participants on real random physical devices rather than on hypothetical text problems.

The psychology of irrationality

A few years after Peterson and Beach’s review article, a conflicting claim emerged. Tversky and Kahneman (1974) reviewed four studies of their own and concluded: ‘people do not appear to follow the calculus of chance or the statistical theory of prediction’ (p. 237). This article provided the template for the cognitive bias program, which began to assemble a list of alleged cognitive illusions whose resemblance to visual illusions suggested their inevitability: the base-rate fallacy, the conjunction fallacy, misconceptions of chance, overconfidence and dozens of other mental quirks. Ariely (2010) argued ‘that we are not only irrational but predictably irrational – that our irrationality happens the same way, again and again’ (p. xviii). In their book Nudge, Thaler and Sunstein (2008) jokingly compared humans to the bumbling comic figure Homer Simpson. Homo sapiens now appeared to be a misnomer.

The media, which had previously indicated little interest in the research on the intuitive statistician, propagated the new message. For instance, Newsweek ran a feature article concluding that most people are ‘woefully muddled information processors’ and that the list of cognitive biases of these ‘saps’ and ‘suckers’ is so lengthy as to ‘demoralize Solomon’ (McCormick, 1987).

Similarly, a troop of researchers began to attribute all kinds of human disasters to alleged cognitive biases. Consider the so-called conjunction fallacy – e.g., that most people find it more probable that Linda is a ‘bank teller and a feminist’ than a ‘bank teller’ when given a description implicitly suggesting that she is a feminist but not a bank teller (Tversky and Kahneman, 1983; see Rizzo and Whitman, 2020, pp. 133–141). Philosopher Stich (2012) sounded the alarm: ‘It is disquieting to speculate on how large an impact this inferential failing may have on people’s assessments of the chance of such catastrophes as nuclear reactor failures’ (p. 52). Cognitive scientist Kanwisher (1989) speculated that the conjunction fallacy might underlie flawed arguments in debates on US security policy by ‘overemphasizing the most extreme scenarios at the expense of less flashy but more likely ones’ (p. 655). And paleontologist Gould (1992) concluded: ‘our minds are not built (for whatever reason) to work by the rules of probability’ (p. 469). None of them seemed aware that Inhelder and Piaget (1951/1964, p. 101) had shown that even 12-year-olds understand the conjunction rule – that a subset cannot be larger than the set (Tversky and Kahneman did not cite Inhelder and Piaget’s earlier contradicting results). Thus, the question of why children but not adults understand this rule did not even arise.
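The conjunction rule itself can be demonstrated in a few lines. In the sketch below the population frequencies are hypothetical, chosen only for illustration; the point is that the conjunction picks out a subset of the single event, so its frequency can never exceed it:

```python
import random

random.seed(0)

# Hypothetical population: each person is independently a bank teller with
# probability 0.05 and a feminist with probability 0.6 (illustrative numbers).
N = 100_000
tellers = 0
tellers_and_feminists = 0
for _ in range(N):
    is_teller = random.random() < 0.05
    is_feminist = random.random() < 0.6
    tellers += is_teller
    tellers_and_feminists += is_teller and is_feminist

# 'Teller and feminist' is a subset of 'teller': the conjunction count
# cannot exceed the single-event count, whatever the probabilities are.
print(tellers_and_feminists / N, "<=", tellers / N)
```

Whatever probabilities are plugged in, the first frequency can never come out larger than the second – the set–subset relation that the 12-year-olds in Inhelder and Piaget’s study grasped.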

The politics of irrationality

The irrationality message suited some businesses well. Various addictions and social disasters were attributed to our inner biases, diverting attention from these businesses’ own bad behavior. After the financial crisis of 2008, Deutsche Bank Research published an article ‘Homo economicus – or more like Homer Simpson?’ featuring 17 cognitive biases, suggesting these as causes of the crisis (Schneider, 2010). However, according to the US Department of Justice (2017), ‘Deutsche Bank did not merely mislead investors: it contributed directly to an international financial crisis’. In 2017, Deutsche Bank agreed to pay $7.2 billion for its illegal conduct and irresponsible lending practices.

Soon the irrationality message turned political. Philosopher Trout (2005) declared that the evidence points to ‘a single moral – that the Enlightenment vision is profoundly mistaken’ (p. 379), while philosopher Conley (2013) dismissed John Stuart Mill’s liberalism because Mill ‘failed to adequately reckon with human psychology, as we now know it to be’ (p. 9). The UK government created a Behavioural Insights Team, also known as its ‘Nudge Squad’, and former US President Obama, who identified himself as an admiring reader of Nudge, hired Cass Sunstein as his ‘regulation czar’ (more formally, Head of the White House Office of Information and Regulatory Affairs; Ferguson, 2010). Nudging has become a billion-dollar business around the world. This new paternalism aims to protect people not from imperfections of the markets or from criminals but from the enemy within, their own irrationality. It bases its justification on the alleged stubbornness of cognitive biases.

New battle lines

Replicability

The irrationality message opened up a third battle line, concerning the authenticity of cognitive biases. Are they replicable? It turned out that some were not. When we (Sedlmeier et al., 1998) made the first (and to date sole) attempt to replicate the famous letter-frequency study by Tversky and Kahneman (1973), we found no systematic availability bias. Priming effects were the next casualties of replication studies. Doyen et al. (2012) failed to replicate Bargh et al.’s (1996) iconic experiment in which participants who had been asked to sort words associated with the elderly automatically slowed their walking pace when leaving the laboratory. Despite multiple replication failures, Bargh and like-minded researchers insisted that the biases that had made them famous were real and launched personal attacks on those who could not replicate them, slinging terms such as ‘replication police’, ‘shameless little bullies’, ‘witch hunts’, ‘methodological terrorism’ and even ‘the Stasi’ (Lewis-Kraus, 2023). Kahneman, who had devoted numerous pages of Thinking, Fast and Slow to ‘the marvels of priming’ (2011, p. 52), conceded that he had uncritically endorsed this research and wrote an open letter to Nature urging priming researchers to stop their ‘defiant denial’ of their replication problem (Kahneman, 2012).

Quantity

Samuels et al. (2002), who named the debate ‘rationality wars’, understood in a strikingly different way what the war entailed: not universality, purpose or replicability, but simply quantity. They asked: to what degree do humans actually deviate from logical rationality? For them, this descriptive question demarcated the rationality wars’ sole battle line. In their view, the cognitive bias program sees the glass of rationality as half empty, the ecological rationality program as half full. Where, they asked, is the disagreement? In their reconciliatory endeavors, Samuels et al. missed the very point. The dispute about what counts as good reasoning is firmly normative, not merely descriptive. At issue is the substance of the glass of rationality, not just the level of the liquid.

Welfare costs

The final two battle lines concern the supposed welfare costs of deviations from logical rationality and the efficiency of nudging in reducing these costs. Are violations of logical rationality associated with real-world costs, which would justify the new paternalism? Arkes, Hertwig and I (Arkes et al., 2016) searched the literature for evidence of detrimental material consequences, such as lower earnings, impaired health or shorter lives. For instance, according to the money pump argument, a person who has intransitive preferences will lose money: if you prefer A to B, B to C and C to A, and are willing to pay a small amount each time to trade up to the preferred option, you end up going in a circle and become a money pump. We identified over 100 studies that reported violations of transitivity, and found no evidence that these violations actually lead to money pumps. We also searched more than a thousand articles for violations of the conjunction rule, the independence axiom, preference reversals, framing effects and other supposed cognitive biases, with the same result. The absence of evidence that violations of logical rationality matter is striking.
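The money pump argument can be sketched as a short simulation. The option names, fee and starting wealth below are hypothetical; the cyclic preference A over B, B over C, C over A is the one described above:

```python
# A minimal sketch of the money pump argument (names and amounts hypothetical).
# An agent with intransitive preferences A > B > C > A pays a small fee for
# each trade up to the preferred option and ends up where it started, poorer.

PREFERS = {("A", "B"), ("B", "C"), ("C", "A")}  # cyclic preferences
FEE = 1.0                                        # price paid per upgrade

def trade(holding, offer, wealth):
    """Accept the offered option if it is preferred, paying the fee."""
    if (offer, holding) in PREFERS:
        return offer, wealth - FEE
    return holding, wealth

holding, wealth = "B", 100.0
for offer in ["A", "C", "B", "A", "C", "B"]:  # two full cycles of offers
    holding, wealth = trade(holding, offer, wealth)

print(holding, wealth)  # back at the starting option, six fees poorer
```

After two cycles the agent holds exactly what it began with, minus six fees – the theoretical exploitation that, as noted above, the empirical literature has not found happening in practice.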

Efficiency of nudging

Does nudging actually improve people’s health, wealth and happiness, as frequently claimed? A meta-analysis of 212 studies reported a small-to-medium effect size but noted a publication bias (Mertens et al., 2021). Publication bias means that studies that found zero or negative effects did not enter the meta-analysis. After correcting for the publication bias, independent researchers found no benefit of nudging in any domain (Maier et al., 2022).
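How a publication filter can conjure an effect out of nothing can be illustrated with a minimal simulation. All numbers here are hypothetical; only the selection mechanism matters:

```python
import random
import statistics

random.seed(1)

# Hypothetical literature: 1,000 studies estimate an intervention whose
# true effect is exactly zero; each estimate carries sampling noise.
TRUE_EFFECT = 0.0
estimates = [random.gauss(TRUE_EFFECT, 0.2) for _ in range(1000)]

# Publication filter: only clearly positive estimates reach the journals.
published = [e for e in estimates if e > 0.2]

print(statistics.mean(estimates))   # full record: close to zero
print(statistics.mean(published))   # published record: a spurious effect
```

A meta-analysis restricted to the published list would report a sizeable effect where none exists; correcting for the missing null and negative results is what removed the apparent benefit of nudging.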

One reason why nudging appears to work when it does not is surrogate measures. A popular success story maintains that nudging saves lives by increasing organ donation rates. Countries with opt-out defaults (everyone is a donor unless they opt out) have higher rates of potential donors than countries with opt-in defaults. Because most people tend to follow the default, Thaler and Sunstein (2008, p. 186) attributed the shortage of donations to people’s inertia. The solution seemed to be simply to change the default from opt-in to opt-out. However, increasing the number of potential donors is not the same as increasing actual donations. When 17 OECD countries with opt-out defaults were compared with 18 OECD countries with opt-in defaults, no difference was found in the actual rates of donations (Arshad et al., 2019); a second study, following five countries that had switched to opt-out, found that switching increased the rate of potential donors but, again, not of actual donations (Dallacker et al., 2024). The case of Spain – the country with the highest rate of actual donors – indicates that the problem lies not in the inertia of individual minds but in the structure of the system. Spain originally had an opt-out default, but only after the government introduced targeted structural changes did actual donation rates rise substantially. These changes included sufficient financial incentives for hospitals to provide the expensive infrastructure, a transplantation network that efficiently organizes the process, education programs for the public, and psychologically trained personnel to talk with the families of the deceased, who largely make the final decision independent of the default (Matesanz, 2005). Increasing the number of potential donors is a surrogate for the real task of increasing actual donor rates through structural changes and education.

A second reason why nudging appears to work when it does not has, sadly, been data manipulation and fraud. According to Ariely, when people were asked to sign an honesty pledge at the beginning of a form reporting their annual mileage to their insurer, they were more honest than when signing at the end. After scrutinizing Ariely’s data, Simonsohn et al. (2021) concluded that the data had been deliberately manipulated to produce the effect, and found additional evidence of fraud by Ariely’s coauthor Gino, who had contributed a lab study on the topic of dishonesty (Lewis-Kraus, 2023).

Why were people rational before 1970 and irrational thereafter?

Surprisingly, the question of why people suddenly became irrational was rarely posed. What happened in the 1970s? Lopes (1991) was one of the few to ask, pointing to two factors that fueled the new irrationality message: citation bias and lack of learning opportunities.

Citation bias

Take the review of 110 studies by Peterson and Beach (1967), which concluded that people are fairly good intuitive statisticians. By 2020, this article had been cited 479 times according to Scopus, while the article by Tversky and Kahneman (1974) with the opposite message had been cited over 15,000 times (Lejarraga and Hertwig, 2021). In the same timeframe, Edwards et al.’s (1965) article concluding that people are conservative Bayesians was cited 214 times, while the corresponding article by Kahneman and Tversky (1972), claiming that people are neither conservative Bayesians nor Bayesians at all (p. 450), was cited 5,815 times (Lejarraga and Hertwig, 2021). The citation bias is not limited to these pairs of classical publications.

Consider the framing effect, which has been interpreted as a persistent logical error: ‘in their stubborn appeal, framing effects resemble perceptual illusions more than computational errors’ (Kahneman and Tversky, 1984, p. 343). However, several research groups, including those of McKenzie (e.g., McKenzie and Nelson, 2003; Sher and McKenzie, 2006) and of Kühberger (e.g., Kühberger, 1995; Kühberger and Tanner, 2010), have shown that framing is an intelligent way to convey unspoken messages, and that attention to framing reflects the corresponding ability to read between the lines, consistent with psycholinguistic theories (Grice, 1989). Once again, this conflicting body of research is rarely if ever mentioned in the part of the literature that continues to interpret all framing effects as cognitive biases (e.g., Thaler and Sunstein, 2008; Kahneman, 2011).

The massive disregard of conflicting scientific research spreads to endowment effects, loss aversion, overconfidence, the hot-hand fallacy and many other cognitive biases, and has upheld the fiction that these are stable mental aberrations (Gigerenzer, 2018; Rizzo and Whitman, 2020, p. 408). The fiction extends to speculations on the role of cognitive biases in politics. When Kanwisher (1989) attributed the weaknesses of American security policy to the conjunction fallacy and overconfidence, or McDermott (2002) blamed former US President Reagan for relying on the availability heuristic instead of rationally calculating future Soviet expansion, they remained mute on research critical of the alleged biases.

Citation bias is a form of confirmation bias. It should be a subject of scientific inquiry, not its method.

Quick experiments, no opportunity for learning

The citation bias explains why many researchers are unaware of conflicting research, but not the discrepant results themselves. One reason for the latter is a radical change in the way experiments are conducted. Research demonstrating good statistical intuitions typically uses real random devices, such as urns and balls, and gives participants an opportunity to learn from experience. For instance, in a probability learning experiment Tversky conducted while working on his PhD under the supervision of Edwards, each of 24 participants was seated in front of a box equipped with a random device, studied individually, and given 1,000 trials to learn (Tversky and Edwards, 1966). An individual session lasted approximately 1 hour. After Tversky joined forces with Kahneman, they created a new kind of experiment that replaced random devices with hypothetical text problems, substituted questionnaires that could be distributed anywhere for controlled laboratory experiments, and provided no opportunity to learn. Such an experiment could be conducted in a few minutes (Heukelom, 2012).

The transition from time-consuming probability learning experiments to quick hypothetical questions is not neutral with respect to the resulting conclusions about human rationality. In fact, it is well documented that people's judgments differ systematically depending on whether they can experience a task or merely have it described to them. This discrepancy is known as the description–experience gap (Schulze and Hertwig, 2021).

In the heat of the debate

Early challenges

In 1981, the philosopher Cohen was one of the first to challenge Kahneman and Tversky's norms. Commentators on his Behavioral and Brain Sciences (BBS) article labeled his 'quarrel' overly contentious, unfortunate, and unnecessary, to which Cohen replied that no one who accuses fellow humans of committing fallacies is entitled to complain if similar accusations are raised against them (Cohen, 1983, p. 515). Decision theorist Levi (1983) rejected Kahneman and Tversky's conclusion that people commit the base-rate fallacy in the 'cab problem', pointing out that not ignoring the base rates in this ambiguous text problem would be the fallacy. Unusually for BBS, Kahneman and Tversky were allowed to respond to Levi's comment in the same commentary section; as an editorial note explains, they had asked the editor to see Levi's comment at the proof stage, which meant that Levi could not respond in return.

Lopes (1981) argued that, contrary to the dictum of expected utility theory, decision-making in the short run is not the same as in the long run. For instance, people might hesitate to accept a bet of winning $2,000 or losing $1,000 on the toss of a fair coin, but accept the same bet if it is repeated 100 times. Her objections were rebutted by Tversky and Bar-Hillel (1983), who defended utility theory, but there was no reply from Lopes – a violation of the standard sequence of article, rebuttal, and reply. When I asked Lopes why she had not responded to Tversky and Bar-Hillel's rebuttal, she said that she had in fact submitted a reply, but, oddly, it was never published. By the early 1990s, the war had turned against the critics of the cognitive bias program. Thaler (1991) announced that 'mental illusions should be considered the rule rather than the exception' (p. 4). When Lopes (1991) criticized the rising 'rhetoric of irrationality', few listened. Frustrated, she eventually quit research on decision-making and devoted her great talents to university administration.

A personal recollection

My entry into the debate took place in 1989–1990, while I was a fellow at the Center for Advanced Study in the Behavioral Sciences at Stanford. In February 1990, I was invited to speak to the psychology department at Stanford, the department in which Tversky had been working for decades. I highly respected his earlier work on measurement theory, models of similarity, and models of heuristics such as elimination-by-aspects. But later in his career, the models were replaced by labels, and his emphasis shifted to Kahneman's research of the 1960s on cognitive errors.

Psychologist Lee Ross introduced me: ‘Before Amos came, we believed that people were rational. Amos showed that this is not the case. In recent years, there have been voices claiming that people are not so irrational’. The title of my talk was ‘Beyond heuristics and biases: How to make cognitive illusions disappear’. Because that was a controversial topic at Stanford, I asked the audience to let me talk for one hour and save discussion for afterwards. The talk made three points: one normative, one descriptive and one theoretical.

First, I argued that many so-called errors in probabilistic reasoning are in fact not violations of probability theory. The heuristics-and-biases program relied on a narrow normative view that is shared by some theoretical economists but not by proponents of the frequentist view of probability that dominates today's statistics departments, and not even by all Bayesians. My point is that while everyone is free to apply subjective probabilities to all events, even single cases, that does not justify imposing this view as a norm on others. By this narrow standard of probabilistic reasoning, the most distinguished probabilists and statisticians of the 20th century – figures of the stature of von Mises and Neyman – would themselves be guilty of 'biases'.

Second, we can fix the problematic norms by asking people about frequencies instead of single events. In the Linda problem, the question 'What is more probable: "Linda is a bank teller," or "Linda is a bank teller and is active in the feminist movement"?' is ambiguous, because 'probable' applied to a single event can refer to any nonmathematical meaning of the term, such as what is plausible or whether there is evidence. To resolve this ambiguity, one can ask instead about frequencies: 'There are 100 people like Linda: How many are bank tellers? How many are bank tellers and active in the feminist movement?' This change from single events to frequencies is sufficient to make the conjunction fallacy largely disappear (Fiedler, 1988; see also Hertwig and Gigerenzer, 1999). This result also explains why children in Inhelder and Piaget's studies did not commit the 'fallacy': they were asked about frequencies, not single events. I showed that overconfidence likewise disappears when people are asked 'How many of the last ten questions do you think you got right?' rather than 'How confident are you that you got this question right?' Furthermore, statistics is all about assumptions. In a text problem, these need to be clearly specified by the experimenter, and plausibly so. For instance, in the Tom W. problem (Kahneman and Tversky, 1973), no information was provided on how the personality sketch of a (hypothetical) graduate student Tom W. was selected, whether randomly or not. In the engineer–lawyer problem, where participants got a thumbnail description of a person suggesting he was an engineer, they were told that the description had been randomly sampled from a pool of 30 engineers and 70 lawyers. But in fact it was not; the description was made up.
When we repeated the experiment and let participants draw from a real urn and actually experience random sampling (Gigerenzer et al., 1988), the base-rate fallacy largely disappeared and participants paid attention to base rates.

Finally, I argued that psychological theories of decision-making should end the practice of explaining deviations from questionable norms, as the heuristics-and-biases program was doing. Rather, we should acknowledge the limits of logical rationality and give due consideration to the heuristics that individuals and institutions use, model these precisely, and identify the environments in which they do or do not work. I sketched out a theoretical program based on the two blades of Simon's scissors: the analysis of how heuristics exploit environmental structures (see Gigerenzer, 1991, for a revised version of the talk).

After the talk, a heated discussion broke out. In a remarkable turn, Tversky publicly stated that he had no theory or hypotheses, only empirical generalizations; therefore, his ideas could not be refuted.

Two weeks later, Kahneman contacted me: 'I heard you had a very successful colloquium at Stanford. Would you be willing to give one or two talks at Berkeley?' I ended up giving two talks at Berkeley, one for the psychology department and the other for the cognitive science group. I found it much easier to debate with Kahneman than with Tversky. After each talk, Kahneman had requested a 10-minute response, after which I replied in turn, a scheme repeated at the meeting of the Judgment and Decision Making Society in November 1992. A public exchange of ideas gives the audience an opportunity to hear the opposing sides, arguments and rejoinders alike, at the same time, and enables science to progress. For my part, I have always attempted to separate my personal respect for Kahneman and Tversky and their achievements from our diverging scientific views. That did not always work. As I learned much later from Lewis's (2017) biography, 'Amos didn't merely want to counter Gigerenzer; he wanted to destroy him' (p. 335). The three philosophers who coined the term 'rationality wars' might have been right after all.

In 1996, Kahneman and Tversky replied to my critique with an article in Psychological Review. The original submission was entitled 'On the reality of cognitive illusions: A reply to Gigerenzer'. The published version deleted the subtitle but adopted a more aggressive tone, encouraged by a partisan reviewer whom the editor promptly removed from the review process. (I later invited this reviewer to cross the front lines and spend a week with my research group to observe the debate from the other side. It was a mutually valuable experience.) In their article, Kahneman and Tversky defended their norms against my critique of imposing these on everyone, calling my position 'normative agnosticism'. They continued to insist that the flaw lay not in their narrow norms but in people's minds. My response to their reply was published as Gigerenzer (1996).

I now review three fundamental issues on which progress in the debate can be made: taking uncertainty and intractability seriously, algorithmic models of heuristics, and ecological rationality.

Paths forward

Uncertainty and intractability

In his book Risk, Uncertainty, and Profit (1921), Knight, a founder of the Chicago School of Economics, distinguished between risk and uncertainty. As he put it, in a world of risk, i.e., with perfect foresight, all actions would become mechanical, all humans automata, and insurers could make no profit. Keynes (1937) made a similar distinction: 'The sense in which I am using the term [uncertainty] is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention' (p. 214).

Under the authority of Friedman, the distinction between risk and uncertainty was brushed aside. Friedman (1963) declared: 'Frank Knight drew a sharp distinction between risk, as referring to a known or knowable probability distribution, and uncertainty, as referring to events for which it was not possible to specify numerical probabilities. I've not referred to this distinction because I do not believe it is valid.… We may treat people as if they assigned numerical probabilities to every conceivable event' (p. 282). Friedman not only dismissed Knightian uncertainty but, more importantly, misconstrued it as mere situations of missing probabilities, passing over the two big bounds of logical rationality: intractability and uncertainty.

In their influential book Games and Decisions (1957), Luce and Raiffa similarly used the term 'uncertainty' for 'ambiguity' and thus eliminated all true uncertainty from the domain of decision-making. As in Cold War rationality, their ideal was perfect foresight and automatic decisions without human judgment: 'it is possible to imagine that an executive foresees all possible contingencies and that he describes in detail the action to be taken in each case … [so] that the further operation of the plant can be left in the hands of a clerk or a machine' (p. 6).

The heuristics-and-biases program followed the path trodden by Friedman, and even used the term 'uncertainty' for both ambiguity and situations of risk (Tversky and Kahneman, 1974). This conceptual confusion fueled the illusion of certainty: that all problems presented have exactly one correct answer, that all other answers are errors, and that heuristics are always inferior to logical rationality.

To counter this confusion of terms, I define them in Table 1 using Savage's (1954/1972) concept of a small world, a situation in which an agent has perfect foresight of all possible future states S and all consequences C. A small world with unknown probabilities is a situation of ambiguity; one with known probabilities is a situation of risk (Table 1). Manipulated dice and slot machines with undisclosed bias are examples of ambiguity; lotteries and roulette are examples of situations of risk. Uncertainty, in contrast, refers to large worlds where the state space (S, C) cannot be known. Here, no probability distribution (with probabilities that add up to one) can be meaningfully constructed over events or consequences, not even a subjective one. Uncertainty is part of most important real-world decisions, from administrative budget problems to planning a war. Finally, a well-defined problem is intractable if the optimal course of action cannot be computed even though one exists. Consider a scheduling problem: given n towns connected by roads of known length, find the shortest route that visits each town exactly once, beginning and ending in the same town. With n towns there are (n – 1)!/2 possible routes, which means that for 61 towns or more, the number of routes is larger than the estimated number of atoms in the universe.
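The arithmetic behind this claim is easy to verify; here is a minimal check in Python (the threshold of 61 towns is taken from the text above, and 10^80 is the common order-of-magnitude estimate of the number of atoms in the universe):

```python
import math

def num_routes(n):
    """Distinct round trips through n towns: each cyclic route can
    start at any town and be traversed in either direction, leaving
    (n - 1)!/2 genuinely different routes."""
    return math.factorial(n - 1) // 2

ATOMS_IN_UNIVERSE = 10**80  # common order-of-magnitude estimate

# 61 towns is indeed the threshold at which the number of routes
# first exceeds the estimated number of atoms in the universe.
print(num_routes(60) < ATOMS_IN_UNIVERSE)  # True
print(num_routes(61) > ATOMS_IN_UNIVERSE)  # True
```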

Table 1. Risk, ambiguity, uncertainty, and intractability

Table 1 maps out the true territory of logical rationality: small worlds. Methodological tools such as subjective probabilities, second-order probabilities, and uniform priors help to cover situations of ambiguity. Yet they do not apply in situations of uncertainty, where the full state space is not known.

As Simon noted, there are two ways to deal with uncertainty and intractability. The first is to convert the original problem into a small world, calculate an optimal course of action, and hope that the solution generalizes to the original problem. The second is to face uncertainty and intractability, dispense with the ideal of optimality, and study how individuals and institutions actually make good decisions in the large world. Following Friedman, the majority of neoclassical economists and decision theorists have taken the first route; following Knight and Simon, a minority of social scientists, including my own research group, have taken the second. But how can good decisions be made in large worlds?

Heuristics

The heuristics-and-biases program deserves acclaim for bringing heuristics to the attention of social scientists. Tversky and Kahneman (1974) acknowledged that heuristics are highly economical and usually effective. However, such praise was virtually always followed by warnings that heuristics can lead to severe and systematic errors. As Lopes (1991) noted, for all the care taken in highlighting errors, Tversky and Kahneman did not cite a single instance of a heuristic working well. That is a direct consequence of logical rationality, in which heuristics are always considered second-best.

The three heuristics originally proposed – availability, anchoring and representativeness – were an important first step. It is understandable that in the early 1970s, these were only loosely characterized. However, as I argued 25 years and many experiments later (Gigerenzer, 1996), notions such as representativeness remain vague, undefined, and unspecified with respect both to the antecedent conditions that elicit them and to the cognitive processes underlying them. Kahneman and Tversky (1996, p. 591) defended the vagueness by arguing that Gestalt psychologists also did not fully specify the rules of similarity and good continuation, and that it would be 'unwise' to 'legislate process models as the primary way to advance psychology'. More than another 25 years later, the processes underlying the three concepts remain nebulous.

In my own research, I built on Simon's (and Tversky's) algorithmic models of heuristics. A first innovation was to test models of heuristics on inferences, not only on preferences as in Simon's satisficing and Tversky's elimination-by-aspects. In contrast to preferences, inferences make it possible to actually measure the accuracy of heuristics. This enabled the discovery of less-is-more effects: situations in which simple heuristics predict more accurately than complex linear models, and with less time and effort (Gigerenzer et al., 1999). The first objection to our findings was that they were impossible. Less-is-more effects are indeed not imaginable in the small worlds of logical rationality, but they do occur in large worlds. When independent researchers replicated our results, the next objection was that heuristics may predict more accurately than linear models but will always be inferior to complex machine-learning algorithms. When we subsequently tested heuristics such as take-the-best and fast-and-frugal trees on machine-learning data sets, we found data sets in which heuristics predicted as accurately as or better than random forests and support vector machines, and more efficiently (Brighton and Gigerenzer, 2015; Katsikopoulos et al., 2020; Buckmann, 2024).
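To convey the flavor of such algorithmic models, here is a minimal sketch of take-the-best for paired comparison. This is my own illustrative code, not the implementation used in the cited studies; the cue names and values are made up:

```python
import random

def take_the_best(a, b, cues):
    """Infer which of two objects scores higher on a criterion.

    `a` and `b` map cue names to 1 (positive), 0 (negative) or
    None (unknown); `cues` lists cue names in descending order of
    validity. The heuristic stops at the first cue that
    discriminates and ignores all further cues."""
    for cue in cues:
        va, vb = a.get(cue), b.get(cue)
        if va == 1 and vb in (0, None):
            return 'a'
        if vb == 1 and va in (0, None):
            return 'b'
    return random.choice(['a', 'b'])  # no cue discriminates: guess

# Hypothetical city-size comparison with made-up cue values:
berlin = {'capital': 1, 'exposition_site': 1, 'soccer_team': 1}
bielefeld = {'capital': 0, 'exposition_site': 1, 'soccer_team': 1}
order = ['capital', 'exposition_site', 'soccer_team']
print(take_the_best(berlin, bielefeld, order))  # 'a': decided by the first cue alone
```

Note the one-reason character of the inference: the two cues on which the cities agree are never looked up.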

The discovery of less-is-more effects led to a new question: Can we identify the conditions where a given class of heuristics succeeds or fails?

Ecological rationality

The study of ecological rationality relies on analysis and computer simulation to model the match between heuristics and environmental conditions. At a conference in 1996, when I presented my research group's first finding, it caught the attention of Reinhard Selten, who had been awarded the Nobel Memorial Prize in Economics two years earlier. At the end of my talk, Selten announced: 'This is a real breakthrough'. The condition we had discovered was straightforward: if the beta weights of the variables on which an inference is based decrease exponentially, no linear model can predict better than the take-the-best heuristic (Martignon and Hoffrage, 1999). For Selten, game theory was a mathematical puzzle, not to be confused with a prescription or description of decision-making under uncertainty, as Cold War rationalists had hoped. To illustrate, he proved by backward induction that cooperative, not aggressive, pricing is logically implied in the chain-store problem. Yet in the real world, he added, he would not follow the logical deduction but would price aggressively to deter others from entering the market (Selten, 1978, pp. 132–133). The logical validity of an argument in a well-defined small world does not imply its reasonableness in the large world of business competition.
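The flavor of this condition can be shown with a toy check (a sketch of the idea, not Martignon and Hoffrage's proof): when each weight exceeds the sum of all subsequent weights, deciding by the first discriminating cue reproduces the ordering of the full linear model on every pair of binary cue profiles.

```python
from itertools import product

# Noncompensatory weights: each exceeds the sum of those that follow
# (8 > 4 + 2 + 1), the discrete analogue of exponential decrease.
WEIGHTS = [8, 4, 2, 1]

def linear_score(profile):
    return sum(w * c for w, c in zip(WEIGHTS, profile))

def lexicographic_winner(a, b):
    """Decide by the first cue on which the two profiles differ,
    as take-the-best does; return None for identical profiles."""
    for ca, cb in zip(a, b):
        if ca != cb:
            return a if ca > cb else b
    return None

# Exhaustive check over all pairs of binary cue profiles:
profiles = list(product([0, 1], repeat=4))
for a, b in product(profiles, repeat=2):
    winner = lexicographic_winner(a, b)
    if winner is not None:
        assert winner == (a if linear_score(a) > linear_score(b) else b)
print("lexicographic choice agrees with the linear model on all pairs")
```

With compensatory weights (say, 1, 1, 1, 1), the assertion would fail: later cues could outvote the first.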

Ecological rationality means functionality in the tradition of James, Dewey and Brunswik, not veridicality (Rizzo, 2023). The bias–variance trade-off implies that heuristics can be biased, yet because they incur little error from variance, they can nevertheless lead to more accurate and efficient behavior than complex strategies do (Gigerenzer and Brighton, 2009). The study of ecological rationality fleshes out the blades of Simon's scissors. Simon was pleased to see how his analogy had developed into mathematical analysis. In his blurb for our book Simple Heuristics That Make Us Smart, he wrote that it 'offers a fascinating introduction to this revolution in cognitive science, striking a great blow for sanity in the approach to human rationality' (Simon, 1999). The discovery of less-is-more effects revolutionized the understanding of bounded rationality. What had appeared to be irrational neglect of information turned out to be ecologically rational under precise conditions that we could determine.

Vernon Smith (2003) introduced the term 'ecological rationality' independently in his Nobel lecture, where he acknowledged its similarity to the concept of ecological rationality used in Simple Heuristics. For Smith, ecological rationality emerges from the unconscious brain rather than from the conscious mind – from traditions, heuristics, norms, and other cultural and biological processes. Both Smith's use of the term and ours signal the need to go beyond the cognitive biases program, although the two uses differ as well; ecological rationality in my sense, for instance, includes the analysis of deliberately used heuristics, such as in medical diagnosis and management (Dekker and Remic, 2019; Reb et al., 2024).

In sum, logical rationality is a tool for small worlds, and heuristics are tools for uncertainty and intractability. On many occasions, I have witnessed emotional resistance to this message, sometimes mixed with an implicit recognition of its validity. In 2010, the University of Bonn celebrated Selten’s 80th birthday, to which Selten invited four speakers, three economists (including two Nobel laureates) and myself. After my talk, one of the Nobel laureates approached me, commenting: ‘Very interesting, but you know, I don’t like uncertainty.’

In recent years, however, quite a few economists have grown curious about algorithmic models of heuristics and begun to study them in decision-making under uncertainty. Economists at the Bank of England, for instance, together with researchers from my group, devised fast-and-frugal trees for predicting bank failure after finding that when data are limited and risks are fat-tailed, simple heuristics dominate more complex models for calculating banks' capital requirements (Aikman et al., 2021). Others, including Stiglitz, have tested the predictive accuracy of heuristics in macroeconomic situations characterized by technological change, imperfect information, coordination hurdles and structural breaks: 'Our results suggest that fast and frugal robust heuristics may not be a second-best option but rather "rational" responses in complex and changing macroeconomic environments' (Dosi et al., 2020, p. 1).

Education instead of paternalism

A Nature article entitled 'Risk School' posed the question: can the general public learn to evaluate risks accurately, or do authorities need to steer it toward correct decisions? (Bond, 2009). The author spoke to the 'two opposing camps' of heuristics-and-biases and ecological rationality. Legal scholar Kahan is quoted as saying that 'risk decision-making should be concentrated to an even greater extent in politically insulated expert agencies', Thaler as asserting that 'our ability to de-bias people is quite limited', and Kahneman as stating that 'it takes an enormous amount of practice to change your intuition' (pp. 1189–1192). While no one disputes that poor decision-making occurs, the camps disagree about the nature of rationality and about whether humans can learn and improve. Consider Bayesian reasoning, the original clash between Edwards and Kahneman and Tversky. When given intuitive representations of numerosity, most fourth-graders can already reason the Bayesian way (Gigerenzer et al., 2021). I have taught 1,000 gynecologists in their continuing medical education; about 80% did not know how to interpret basic statistics about a woman's chances of having breast cancer, given that her screening mammogram came back positive. Yet after one hour of being taught the same tools as those learned by the children, almost all of them understood (Gigerenzer et al., 2007). Learning is made easy by intuitive representations, which logical rationality mistakes for framing errors. In the same way, it is important to teach efficient heuristics for situations that involve uncertainty, such as fast-and-frugal trees for allocating patients in emergency units and for making the world of finance a safer one (Katsikopoulos et al., 2020).
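The teaching tool at issue here is the natural-frequency format. A minimal sketch of the underlying arithmetic follows; the screening numbers are illustrative round figures, not those used in the cited studies:

```python
def positive_predictive_value(n, prevalence, sensitivity, false_positive_rate):
    """Bayes' rule via natural frequencies: start from n concrete
    cases and count, instead of multiplying conditional probabilities."""
    sick = round(n * prevalence)
    true_positives = round(sick * sensitivity)
    healthy = n - sick
    false_positives = round(healthy * false_positive_rate)
    return true_positives / (true_positives + false_positives)

# Illustrative numbers: 1% prevalence, 90% sensitivity, 9% false-positive
# rate. Of 1,000 women, 10 have cancer and 9 of them test positive; of
# the 990 without cancer, about 89 also test positive. So only 9 of the
# 98 positive results indicate cancer.
ppv = positive_predictive_value(1000, 0.01, 0.90, 0.09)
print(round(ppv, 2))  # 0.09
```

Counting concrete cases makes the conclusion visible at a glance: under these assumptions, roughly nine out of ten positive mammograms are false alarms.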

Conclusion

The Cold War has come to an end, but the rationality wars continue. They are still fueled by a deep conceptual rift over human nature and human psychology. As a well-known economist explained to me with great confidence, 'Look, either reasoning is rational or it's psychological'. Rationality without psychology – that was the common ideal of Cold War rationality and the cognitive bias program, however diametrically opposed the two may appear. It is remarkable that many psychologists preferred to march under the flag of logical rationality and to fight judgment, intuition, experience, emotion and everything else psychological as the enemy of reasoning.

Ecological rationality is not truly at war with logical rationality. I see ecological rationality as the broader concept. It includes not only the analysis of the conditions under which a class of heuristics works, but also the conditions under which relying on logical rationality, such as Bayes' rule, is likely to succeed. For instance, Bayes' rule requires a stable world so that the past can provide reliable priors. In contrast, applying Bayes' rule in an uncertain world of sudden change can lead to the 'turkey illusion' (Taleb, 2010).

In my opinion, rationality entails more than cold-blooded logic. It requires judgment and experience. Good decisions are contingent on knowing the social and physical environments we live in. As a consequence, there is no single rationality, just as there is no one statistical method of inference that is best for all problems. Rationality is ecological, dependent on the particular problem and context we face. The way forward is to free rationality from the straitjacket of a universal logical rationality, take heuristics seriously, and systematically model decision-making in our world of uncertainty and intractability.

References

Aikman, D., Galesic, M., Gigerenzer, G., Kapadia, S., Katsikopoulos, K. V., Kothiyal, A., Murphy, E. and Neumann, T. (2021), ‘Taking uncertainty seriously: simplicity versus complexity in financial regulation’, Industrial and Corporate Change, 30(2):317345.CrossRefGoogle Scholar
Ariely, D. (2010), Predictable Irrationality, New York: Harper.Google Scholar
Arkes, H. R., Gigerenzer, G. and Hertwig, R. (2016), ‘How bad is incoherence?’, Decision, 3: 2039.CrossRefGoogle Scholar
Arshad, A., Anderson, B. and Sharif, A. (2019), ‘Comparison of organ donation and transplantation rates between opt-out and opt-in systems’, Kidney International, 95(6):14531460.CrossRefGoogle ScholarPubMed
Artinger, F., Gigerenzer, G. and Jacobs, P. (2022), ‘Satisficing: integrating two traditions’, Journal of Economic Literature, 60: 598635.CrossRefGoogle Scholar
Bargh, J. A. (1997), ‘The Automaticity of Everyday Life, in Wyer, R. S. Jr (ed), The Automaticity of Everyday Life: Advances in Social Cognition, Mahwah, NJ: Lawrence Erlbaum Associates.Google Scholar
Bargh, J. A., Chen, M. and Burrows, L. (1996), ‘Automaticity of social behavior: direct effects of trait construct and stereotype activation on action’, Journal of Personality and Social Psychology, 71(2):230244.CrossRefGoogle ScholarPubMed
Bond, M. (2009), ‘Risk school’, Nature, 461: 11891192.CrossRefGoogle ScholarPubMed
Bookstaber, R. (2017), The End of Theory, Princeton, NJ: Princeton University Press.Google Scholar
Brighton, H. and Gigerenzer, G. (2015), ‘The bias bias’, Journal of Business Research, 68: 17721784.CrossRefGoogle Scholar
Buckmann, M. (2024), Rationality of Simple Decision Heuristics. Dissertation, Technical University Berlin.Google Scholar
Cohen, L. J. (1981), ‘Can human irrationality experimentally demonstrated?’, Behavioral and Brain Sciences, 4: 317331.CrossRefGoogle Scholar
Cohen, L. J. (1983), ‘The controversy about irrationality’, Behavioral and Brain Sciences, 6: .CrossRefGoogle Scholar
Conley, S. (2013), Against Autonomy. Justifying Coercive Paternalism, New York: Cambridge University Press.Google Scholar
Dallacker, M., Appelius, L., Brandmaier, A. M., Morais, A. S. and Hertwig, R. (2024), ‘Opt-out defaults hardly increase organ donation’, Public Health, 236: 436440.CrossRefGoogle Scholar
Dekker, E. and Remic, B. (2019), 'Two types of ecological rationality: on how to best combine psychology and economics', Journal of Economic Methodology, 26: 291–306.
U.S. Department of Justice (2017), Deutsche Bank Agrees to Pay $7.2 Billion for Misleading Investors in Its Sale of Residential Mortgage-backed Securities, Office of Public Affairs, 17 January, https://www.justice.gov/opa/pr/deutsche-bank-agrees-pay-72-billion-misleading-investors-its-sale-residential-mortgage-backed
Dhami, S. (2020), The Foundations of Behavioral Economic Analysis Vol. V: Bounded Rationality, New York: Oxford University Press.
Dosi, G., Napoletano, M., Roventini, A., Stiglitz, J. E. and Treibich, T. (2020), 'Rational heuristics? Expectations and behaviors in evolving economies with heterogeneous interacting agents', Economic Inquiry, 58(3): 1487–1516.
Doyen, S., Klein, O., Pichon, C.-L. and Cleeremans, A. (2012), 'Behavioral priming: it's all in the mind, but whose mind?', PLoS ONE, 7(1).
Edwards, W. (1968), 'Conservatism in human information processing', in Kleinmuntz, B. (ed.), Formal Representation of Human Judgment, New York: Wiley.
Edwards, W., Lindman, H. and Phillips, L. D. (1965), 'Emerging Technologies for Making Decisions', in Barron, F., Dement, W. C., Edwards, W., Lindman, H., Phillips, L. D., Olds, J. and Olds, M. (eds), New Directions in Psychology II, New York: Holt, Rinehart and Winston.
Ellsberg, D. (1961), 'Risk, ambiguity, and the Savage axioms', Quarterly Journal of Economics, 75: 643–699.
Erickson, P., Klein, J., Daston, L., Lemov, R., Sturm, T. and Gordin, M. D. (2013), How Reason Almost Lost Its Mind: The Strange Career of Cold War Rationality, Chicago: University of Chicago Press.
Ferguson, A. (2010), 'Nudge nudge, wink wink: behavioral economics – the governing theory of Obama's nanny state', The Weekly Standard, 19 April. Available at: https://www.washingtonexaminer.com/magazine/1679845/nudge-nudge-wink-wink/
Fiedler, K. (1988), 'The dependence of the conjunction fallacy on subtle linguistic factors', Psychological Research, 50: 123–129.
Friedman, D., Isaac, R. M., James, D. and Sunder, S. (2014), Risky Curves: On the Empirical Failure of Expected Utility, London: Routledge.
Friedman, M. (1953), Essays in Positive Economics, Chicago: University of Chicago Press.
Friedman, M. (1963), Price Theory, Reprint, London: Transaction Publishers, 2007.
Gigerenzer, G. (1991), 'How to Make Cognitive Illusions Disappear: Beyond "Heuristics and Biases"', in Stroebe, W. and Hewstone, M. (eds), European Review of Social Psychology, vol. 2, Chichester, UK: Wiley.
Gigerenzer, G. (1996), 'On narrow norms and vague heuristics: a reply to Kahneman and Tversky', Psychological Review, 103: 592–596.
Gigerenzer, G. (2018), 'The bias bias in behavioral economics', Review of Behavioral Economics, 5: 303–336.
Gigerenzer, G. and Brighton, H. (2009), 'Homo heuristicus: why biased minds make better inferences', Topics in Cognitive Science, 1: 107–143.
Gigerenzer, G., Gaissmaier, W., Kurz-Milcke, E., Schwartz, L. M. and Woloshin, S. W. (2007), 'Helping doctors and patients make sense of health statistics', Psychological Science in the Public Interest, 8: 53–96.
Gigerenzer, G., Hell, W. and Blank, H. (1988), 'Presentation and content: the use of base rates as a continuous variable', Journal of Experimental Psychology: Human Perception and Performance, 14: 513–525.
Gigerenzer, G., Hertwig, R. and Pachur, T. (eds) (2011), Heuristics: The Foundations of Adaptive Behavior, New York: Oxford University Press.
Gigerenzer, G., Multmeier, J., Föhring, A. and Wegwarth, O. (2021), 'Do children have Bayesian intuitions?', Journal of Experimental Psychology: General, 150: 1041–1070.
Gigerenzer, G. and Murray, D. J. (2015), Cognition as Intuitive Statistics, Psychology Revivals Series, London: Psychology Press.
Gigerenzer, G. and Selten, R. (eds) (2001), Bounded Rationality: The Adaptive Toolbox, Cambridge, MA: MIT Press.
Gigerenzer, G., Todd, P. M. and the ABC Research Group (1999), Simple Heuristics that Make Us Smart, New York: Oxford University Press.
Gould, S. J. (1992), Bully for Brontosaurus: Further Reflections in Natural History, New York: Penguin Books.
Grice, H. P. (1989), Studies in the Way of Words, Cambridge, MA: Harvard University Press.
Heiner, R. A. (1983), 'The origin of predictable behavior', American Economic Review, 73: 560–594.
Hertwig, R. (2017), 'When to consider boosting: some rules for policy-makers', Behavioural Public Policy, 1: 143–161.
Hertwig, R. and Gigerenzer, G. (1999), 'The "conjunction fallacy" revisited: how intelligent inferences look like reasoning errors', Journal of Behavioral Decision Making, 12: 275–305.
Heukelom, F. (2012), 'Three explanations for the Kahneman-Tversky programme of the 1970s', European Journal of the History of Economic Thought, 19: 797–828.
Inhelder, B. and Piaget, J. (1959), The Early Growth of Logic in the Child, translated by Lunzer, E. A., Reprint, New York: Norton Library, 1964.
Kahneman, D. (2011), Thinking, Fast and Slow, London: Allen Lane.
Kahneman, D. (2012), 'A proposal to deal with questions about priming effects. Open letter', Nature, 26 September. Available at: https://www.nature.com/news/polopoly_fs/7.6716.1349271308!/suppinfoFile/KahnemanLetter.pdf
Kahneman, D. and Tversky, A. (1972), 'Subjective probability: a judgment of representativeness', Cognitive Psychology, 3(3): 430–454.
Kahneman, D. and Tversky, A. (1973), 'On the psychology of prediction', Psychological Review, 80: 237–251.
Kahneman, D. and Tversky, A. (1984), 'Choices, values, and frames', American Psychologist, 39(4): 341–350.
Kahneman, D. and Tversky, A. (1996), 'On the reality of cognitive illusions', Psychological Review, 103: 582–591.
Kanwisher, N. (1989), 'Cognitive heuristics and American security policy', Journal of Conflict Resolution, 33: 652–675.
Katsikopoulos, K., Şimşek, Ö., Buckmann, M. and Gigerenzer, G. (2020), Classification in the Wild, Cambridge, MA: MIT Press.
Keynes, J. M. (1937), 'The general theory of employment', Quarterly Journal of Economics, 51: 209–223.
Knight, F. H. (1921), Risk, Uncertainty, and Profit, Boston: Houghton Mifflin Co.
Kühberger, A. (1995), 'The framing of decisions: a new look at old problems', Organizational Behavior and Human Decision Processes, 62: 230–240.
Kühberger, A. and Tanner, C. (2010), 'Risky choice framing: task versions and a comparison of prospect theory and fuzzy-trace theory', Journal of Behavioral Decision Making, 23(3): 314–329.
Kwisthout, J., Wareham, T. and van Rooij, I. (2011), 'Bayesian intractability is not an ailment that approximations can cure', Cognitive Science, 35: 779–784.
Lejarraga, T. and Hertwig, R. (2021), 'How experimental methods shaped views on human competence and rationality', Psychological Bulletin, 147: 535–564.
Levi, I. (1983), 'Who commits the base rate fallacy?', Behavioral and Brain Sciences, 6.
Lewis, M. (2017), The Undoing Project, New York: Norton.
Lewis-Kraus, G. (2023), 'Big little lies', New Yorker, https://www.newyorker.com/magazine/2023/10/09/they-studied-dishonesty-was-their-work-a-lie [11 November 2024].
Leys, R. (2024), Anatomy of a Train Wreck: The Rise and Fall of Priming Research, Chicago: University of Chicago Press.
Lopes, L. L. (1981), 'Decision making in the short run', Journal of Experimental Psychology: Human Learning and Memory, 7(5): 377–385.
Lopes, L. L. (1991), 'The rhetoric of irrationality', Theory & Psychology, 1: 65–82.
Luce, R. D. and Raiffa, H. (1957), Games and Decisions, New York: Wiley.
Maier, M., Bartoš, F., Stanley, T. D., Shanks, D. R., Harris, A. J. L. and Wagenmakers, E.-J. (2022), 'No evidence for nudging after adjusting for publication bias', PNAS, 119(31).
Martignon, L. and Hoffrage, U. (1999), 'Why Does One-Reason Decision Making Work? A Case Study in Ecological Rationality', in Gigerenzer, G., Todd, P. M. and the ABC Research Group (eds), Simple Heuristics that Make Us Smart (pp. 119–140), New York: Oxford University Press.
Matesanz, R. (2005), 'Factors influencing the adaptation of the Spanish model of organ donation', Transplant International, 16: 736–741.
McCormick, J. (1987), 'The wisdom of Solomon', Newsweek, 17 August, 62–63.
McDermott, R. (2002), 'Arms control and the first Reagan administration: belief systems and policy choices', Journal of Cold War Studies, 4: 29–49.
McKenzie, C. R. M. and Nelson, D. J. (2003), 'What a speaker's choice of frame reveals: reference points, frame selection, and framing effects', Psychonomic Bulletin and Review, 10: 596–602.
Mertens, S., Herberz, M., Hahnel, U. J. J. and Brosch, T. (2021), 'The effectiveness of nudging: a meta-analysis of choice architecture interventions across behavioral domains', PNAS, 119(1).
Nisbett, R. E. and Wilson, T. D. (1977), 'Telling more than we can know: verbal reports on mental processes', Psychological Review, 84: 231–259.
Peterson, C. R. and Beach, L. R. (1967), 'Man as an intuitive statistician', Psychological Bulletin, 68: 29–46.
Piaget, J. and Inhelder, B. (1951), The Origin of the Idea of Chance in Children, New York: Norton, 1975.
Reb, J., Luan, S. and Gigerenzer, G. (2024), Smart Management: How Simple Heuristics Help Leaders Make Good Decisions in an Uncertain World, Cambridge, MA: MIT Press.
Rizzo, M. J. (2023), 'The antipaternalist psychology of William James', Behavioural Public Policy, 1–26.
Rizzo, M. J. and Whitman, G. (2020), Escaping Paternalism, Cambridge, UK: Cambridge University Press.
Samuels, R., Stich, S. and Bishop, M. (2002), 'Ending the Rationality Wars: How to Make Disputes About Human Rationality Disappear', in Elio, R. (ed.), Common Sense, Reasoning and Rationality (pp. 236–268), New York: Oxford University Press.
Savage, L. J. (1954), The Foundations of Statistics, 2nd edn, New York: Wiley, 1972.
Schelling, T. (1960), 'Meteors, mischief, and war', Bulletin of the Atomic Scientists, 16.
Schulze, C. and Hertwig, R. (2021), 'A description-experience gap in statistical intuitions: of smart babies, risk-savvy chimps, intuitive statisticians, and stupid grown-ups', Cognition, 210.
Searle, J. (2001), Rationality in Action, Cambridge, MA: MIT Press.
Sedlmeier, P., Hertwig, R. and Gigerenzer, G. (1998), 'Are judgments of the positional frequencies of letters systematically biased due to availability?', Journal of Experimental Psychology: Learning, Memory, and Cognition, 24: 754–770.
Selten, R. (1978), 'The chain-store paradox', Theory and Decision, 9: 127–159.
Sher, S. and McKenzie, C. R. M. (2006), 'Information leakage from logically equivalent frames', Cognition, 101: 467–494.
Simon, H. A. (1990), 'Invariants of human behavior', Annual Review of Psychology, 41: 1–19.
Simon, H. A. (1999), Blurb for Gigerenzer, G., Todd, P. M. and the ABC Research Group, Simple Heuristics that Make Us Smart, New York: Oxford University Press.
Simonsohn, U., Simmons, J. P. and Nelson, L. D. (2021), Evidence of Fraud in an Influential Field Experiment About Dishonesty. https://datacolada.org/98 [11 November 2024].
Smith, V. L. (2003), 'Constructivist and ecological rationality in economics', American Economic Review, 93: 465–508.
Stanovich, K. E. (2011), Rationality and the Reflective Mind, New York: Oxford University Press.
Stich, S. P. (2012), Knowledge, Rationality, and Morality, 1978–2010, Collected Papers, Vol. 2, New York: Oxford University Press.
Sturm, T. (2012), 'The "rationality wars" in psychology: where they are and where they could go', Inquiry: An Interdisciplinary Journal of Philosophy, 55(1): 66–81.
Taleb, N. N. (2010), The Black Swan, 2nd edn, New York: Random House.
Thaler, R. H. (1991), Quasi-rational Economics, New York: Russell Sage Foundation.
Thaler, R. H. and Sunstein, C. R. (2008), Nudge: Improving Decisions about Health, Wealth, and Happiness, New Haven, CT: Yale University Press.
Trout, J. D. (2005), 'Paternalism and cognitive bias', Law and Philosophy, 24: 393–434.
Tversky, A. and Bar-Hillel, M. (1983), 'Risk: the long and the short', Journal of Experimental Psychology: Learning, Memory, and Cognition, 9(4): 713–717.
Tversky, A. and Edwards, W. (1966), 'Information versus reward in binary choice', Journal of Experimental Psychology, 71: 680–683.
Tversky, A. and Kahneman, D. (1971), 'Belief in the law of small numbers', Psychological Bulletin, 76: 105–110.
Tversky, A. and Kahneman, D. (1973), 'Availability: a heuristic for judging frequency and probability', Cognitive Psychology, 4: 207–232.
Tversky, A. and Kahneman, D. (1974), 'Judgment under uncertainty: heuristics and biases', Science, 185: 1124–1131.
Tversky, A. and Kahneman, D. (1983), 'Extensional versus intuitive reasoning: the conjunction fallacy in probability judgment', Psychological Review, 90: 293–315.
von Neumann, J. and Morgenstern, O. (1944), Theory of Games and Economic Behavior, 2nd edn 1947, 3rd edn 1953, Princeton, NJ: Princeton University Press.
Table 1. Risk, ambiguity, uncertainty, and intractability