1 Introduction
The traditional approach to the study of judgment and decision making (JDM) is to compare a judgment or a decision (which can be considered a judgment about what to do; Baron, 2004) to a standard or “benchmark.” The comparison enables an evaluation of whether a particular judgment is “good” or “bad” relative to the standard. Normative models which provide these standards are valuable because their clear sets of rules or axioms, such as those derived from economics (expected utility theory) and mathematics (probability theory), can be used to test predictions about human behavior. When behavior deviates from the predictions of normative models — i.e., biases are observed — attempts can be made to ascertain why and, often, techniques for overcoming such biases can be prescribed.
This approach, with its focus on deviations from normative models, contrasts the ideal of a homo oeconomicus with the apparent reality of a cognitive miser (or even loser), and it has been enormously influential and useful. However, it has not been without its critics (e.g., Einhorn & Hogarth, 1981; Gigerenzer, 1996; Lopes, 1991). Both metaphors fall short of psychology’s actual goal because they tend “to define human decision making by what it is not” (Medin & Bazerman, 1999, p. 533), and as a consequence JDM research has tended to follow its own path and, some would argue, become disconnected from much of psychology in general and cognitive psychology in particular (e.g., Medin & Bazerman, 1999). This “disconnection” is a great pity given that many of the issues at the core of understanding human judgment and decision making are, necessarily, central to the wider goal of understanding human cognition.
The papers in this special issue of Judgment and Decision Making grew from a symposium held at the Max Planck Institute for Research on Collective Goods which focused on one area of JDM research that we believe has benefited, and will continue to benefit, from its overlap with more mainstream cognitive psychology. The area is multi-attribute judgment, and the symposium explored recent advances in the cognitive modeling approaches that have been brought to bear on the basic question of how we make judgments when faced with multiple pieces of information. In the 30 years since the seminal attempt by John Payne (1976) to make JDM research “more cognitive,” a wide range of approaches from cognitive psychology has been applied to this question. Our aim in this issue is to provide the reader with an up-to-date overview of these approaches and to emphasize the important and influential advances that can be made by taking the interface between JDM and cognitive psychology seriously. Contrasting the different approaches allows for the identification of possible boundary conditions for their appropriateness as valid models of cognitive processing and, we hope, will suggest fruitful avenues for future research.
This paper is structured as follows: we begin with a brief introduction (adapted from Holyoak, 1999) which serves to highlight the things we know about cognition that should be incorporated into models of multi-attribute judgment; we then discuss briefly how these aspects are addressed in some of the models considered by contributors to the special issue. This collection of models, however, cannot simply be viewed as complementary accounts since they sometimes conflict with respect to fundamental assumptions. A sample of these incompatibilities between different metaphors is discussed at the end of the paper.
2 The overarching metaphor: Wo/man as an information processor
Forty years ago Neisser (1967) introduced the idea that an intelligent organism operates in a perception-action cycle: the senses take in information from the environment, the mind/brain performs computations on that information and the outputs of those computations are used to guide subsequent goal-directed actions. A key aspect of this “information processing” metaphor is that biological organisms are capacity limited; there is a limit on how much information can be processed and thus the organism needs to be selective in what it attends to in the environment — i.e., the information taken in via the senses (e.g., Miller, 1956).
The interaction between attention and memory is also fundamental to the information processing metaphor. The notion of working memory (Baddeley & Hitch, 1974) is now widely accepted as a descriptive model of how various forms of information (visual, phonological) are represented in a temporary memory store. In this model a central executive is responsible for allocating attention to various processing tasks such as the controlled thought needed for problem solving, decision making, reasoning and so on. The degree to which a task relies on controlled versus relatively automatic processing is often a function of the involvement of memory. Tasks that have been encountered numerous times in the past become straightforward to execute or solve because relevant actions or solutions can be retrieved from memory, and thus performance is less dependent on active attention. In the traditional “gambling paradigm” of JDM research (Goldstein & Hogarth, 1997), such routine- and experience-based changes in the cognitive processes and their respective “costs” have largely been neglected (Betsch & Haberstroh, 2005; Klein, 1998).
Another vital aspect of information processing is that organisms are endowed with an ability to adaptively alter their behavior — i.e., to learn. Human and non-human animals alike are able to learn contingencies among events and actions, an ability which is fundamental for survival in a changing environment. The understanding of the cause-effect structure of the world gained through this learning process also facilitates causal reasoning and induction, which in turn can lead to the development of categorization — the process by which we organize our knowledge. Categorization is influenced by our causal knowledge and (perceptual) similarity relations between objects (e.g., Murphy & Medin, 1985). As well as being able to organize knowledge, humans also have the ability to think about their own thinking. This regulation of cognition, or metacognition, is directly connected to the adaptivity of our behaviour (e.g., our ability to decide how to decide in different situations; Payne, Bettman, & Johnson, 1993) and may be related to intelligence (Bröder, 2003; Stanovich & West, 2000).
This whistle-stop tour through some of the “facts” about the cognitive system serves to orient our thinking about what needs to be considered when we attempt to build and implement cognitive models. Given that multi-attribute judgment is “simply” another task performed by the system, it is important that our attempts to model how it is done are embedded both theoretically and empirically in what we already know about the operation of that system. Thus some key aspects to consider are: 1) capacity limitation, 2) the distinction between automatic and controlled processing and the role that memory plays in their interaction, 3) the ability to learn, 4) the translation of cause-effect learning to the development of categorization, and 5) the regulation of cognition. In the following sections we examine some of the ways in which these aspects are incorporated into the models and metaphors proposed by the contributors to this issue.
3 Capacity limitation
The capacity limitation stressed by the information processing metaphor is a limitation in cognitive capacity — specifically a limit on the amount of information that an organism can attend to and/or process at any given time. However, focusing solely on these limitations of the mind ignores the crucial role played by the environment in shaping human behaviour (e.g., Simon, 1956; Gigerenzer, Todd et al., 1999). Many theorists argue that, to model cognition adequately, we must understand the connection between the limitations imposed by the mind (e.g., attention span, memory) and those imposed by the environment (e.g., information costs). This view, first proposed by Simon (1956), is captured in the analogy: “Human rational behaviour is shaped by a scissors whose blades are the structure of task environments and the computational capabilities of the actor” (Simon, 1990, p. 7). To understand how scissors cut, we must consider both blades; to understand how we make decisions, we must consider both mind and environment.
One way of incorporating the limitations of the mind and the structure of environments into cognitive models is to propose simplifying heuristics or shortcuts which enable people to “satisfice” or make “good enough” judgments (Christensen-Szalanski, 1978, 1980; Payne et al., 1993; Simon, 1956). Some of the approaches that take this line combine the homo oeconomicus and information processing metaphors to develop frameworks for considering the “cost of thinking.” For example, the classic paper by Shugan (1980) explicitly considered trade-offs between the costs and benefits of thinking in consumer choice using an economic/normative framework (see also Payne et al., 1993). Other approaches explicitly eschew the normative standards prescribed by mathematical and economic models, preferring an “ecological” standard of rationality (Gigerenzer, 2004) against which to compare models of preference.
This approach to the study of multi-attribute judgment has been embraced in the work of the Adaptive Behaviour and Cognition (ABC) group and is well represented in this special issue (Gaissmaier, Schooler, & Mata, 2008; Rieskamp, 2008). The key premise of the ABC approach is that decision makers have access to a “collection of specialised cognitive mechanisms that evolution has built into the mind for specific domains of inference and reasoning” (Gigerenzer & Todd, 1999, p. 30). These mechanisms or heuristics describe how people can capitalize on both their own cognitive limitations (e.g., forgetting) and environmental limitations to act adaptively and make good judgments. This approach is more aligned with the information processing metaphor than with the homo oeconomicus one. In this way the work resonates with the rational analysis pioneered by John Anderson (e.g., Anderson et al., 2004), which seeks “an explanation of an aspect of human behaviour based on the assumption that it is optimized somehow to the structure of the environment” (Anderson, 1991, p. 471). Thus the standard for rationality becomes one that refers to whether a particular strategy works well in an environment, not whether it adheres to a set of formalisms (cf. Hogarth & Karelaia, 2005, 2007), hence replacing the traditional and dominant coherence criterion of rationality with the pragmatists’ correspondence criterion.
In their contribution Gaissmaier et al. (2008) explore the exciting synergies between rational analysis and the adaptive toolbox approach by using the cognitive architecture developed by Anderson — ACT-R — as a framework to implement some of the proposed decision strategies (e.g., the recognition heuristic). ACT-R incorporates behaviorally informed assumptions about cognitive processing and its adaptivity with respect to recurrent environmental structures.
Proposing collections of simplifying heuristics for specific tasks is one way to account for capacity limitations in cognitive models; a very different method is to propose a single decision model in which various alternative courses of action or options are considered until one option surpasses a threshold and is chosen (Newell, 2005). Such a dynamic decision process is known as a sequential sampling process (Busemeyer & Johnson, 2004). The specific mechanisms underlying sequential sampling differ, but in general the models assume that, rather than a predetermined quantity of information being taken, each option is sampled until sufficient evidence has been accumulated to favour one option over the other (e.g., Busemeyer & Townsend, 1993; Dror, Busemeyer, & Basola, 1999; Lee & Cummins, 2004; Nosofsky & Palmeri, 1997; Ratcliff, 1978; Wallsten & Barton, 1982). The importance of the decision to the decision maker is then reflected in the threshold: trivial decisions for which little consideration of options is required have a low threshold; important decisions that require extensive and thoughtful deliberation have a high threshold. In such models capacity limitation, and specifically limitation in the ability to attend to information, is explicitly modeled in the way that a decision maker can attend to only one option (and its possible consequences) at any given moment (Busemeyer & Johnson, 2004).
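To make the evidence-accumulation idea concrete, the following minimal sketch (in Python) accumulates randomly sampled evidence for one of two options until an adjustable threshold is crossed. It is a generic illustration rather than an implementation of any of the specific models cited above; the evidence values and thresholds are purely hypothetical.

```python
import random

def sequential_sampling(evidence_pool, threshold, max_steps=1000):
    """Generic evidence-accumulation sketch: sample one piece of evidence at a
    time (positive values favour option A, negative values favour option B)
    until the running total crosses +threshold or -threshold."""
    total = 0.0
    for step in range(1, max_steps + 1):
        total += random.choice(evidence_pool)  # attend to one piece of evidence
        if total >= threshold:
            return "A", step
        if total <= -threshold:
            return "B", step
    return "undecided", max_steps

# Illustrative runs: a low threshold (a trivial decision) stops after little
# evidence; a high threshold (an important decision) demands more sampling.
pool = [1.0, 1.0, -1.0, 0.5]  # hypothetical evidence values
print(sequential_sampling(pool, threshold=2))
print(sequential_sampling(pool, threshold=10))
```

The single adjustable threshold is the model’s way of trading decision importance against information costs: the same mechanism yields frugal or extensive information search depending only on where the threshold is set.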
Many of these models are applied to situations in which the interest is in preferential choice among valued options (e.g., Busemeyer & Townsend, 1993), but others examine probabilistic inference or judgment tasks that are the focus of much of the work in this special issue (e.g., Lee & Cummins, 2004; Wallsten & Barton, 1982). Hausmann and Läge (2008) review the sequential sampling/evidence accumulation approach to modeling decision making and present their own work in which the evidence threshold is conceived of as a person’s subjective desired level of confidence in the outcome of a prediction. Their work is especially interesting because they develop a simple method of validating the threshold concept empirically and at the same time model (stable?) individual differences under the umbrella of a unified process model.
4 Automatic versus controlled processing
An implicit assumption of arguments about the relevance of capacity limitations for cognitive models of multi-attribute judgment is that much of the information processing takes place in a controlled, serial manner. For example, one of the key models of the adaptive toolbox approach — Take-the-Best — has an explicit rule for sequential search through information, a rule for stopping on the basis of the first cue found (in a pre-defined hierarchy) that discriminates between alternatives, and a simple rule for choosing the favored alternative. These simple rules are said to be psychologically plausible because they adhere to what we know about the serial, explicit nature of conscious thought. However, we also know that a vast amount of neural activity takes place automatically and in parallel (the processes underlying vision, for example); thus the assumption of serial processing may not always be appropriate.
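The three rules of Take-the-Best are simple enough to state in a few lines of code. The sketch below assumes binary cue values and a cue hierarchy already ordered by validity; the city example and its cue values are hypothetical and only for illustration.

```python
def take_the_best(option_a, option_b, cue_hierarchy):
    """Take-the-Best sketch: search cues in order of validity, stop at the
    first cue that discriminates, and choose the option that cue favours;
    guess if no cue discriminates."""
    for cue in cue_hierarchy:             # search rule: highest validity first
        a, b = option_a[cue], option_b[cue]
        if a != b:                        # stopping rule: first discriminating cue
            return "A" if a > b else "B"  # decision rule: follow that cue
    return "guess"

# Hypothetical example: which of two cities is larger?
cues = ["capital", "soccer_team", "university"]   # assumed validity order
city_a = {"capital": 1, "soccer_team": 0, "university": 1}
city_b = {"capital": 0, "soccer_team": 1, "university": 1}
print(take_the_best(city_a, city_b, cues))        # "A": decided on the first cue
```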
Glöckner and Betsch (2008) examine the intriguing possibility that the contribution of automatic processes to decision making has been underestimated. Or rather: reliance on the homo oeconomicus metaphor and its descendants may have overemphasized controlled serial processes. Glöckner and Betsch propose that when individuals encounter a decision situation, salient and associated information is activated in memory and a mental representation is formed that combines given and memory-stored information. The mental representation is conceptualized as a temporarily active network. Once the network is activated, automatic processes operate on the connections in the network to maximize consistency between the pieces of information it contains. This consistency-maximizing process results in a representation of the decision situation in which one option usually dominates, and this option is then chosen (cf. Holyoak & Simon, 1999). A feature of the model is that controlled processing can be used at the activation and consistency-maximizing phases to facilitate the formation of the consistent representation. Thus Glöckner and Betsch propose a model of multi-attribute judgment that incorporates the interaction between memory, automatic and controlled processing, and they review data that validate their account. Karlsson, Juslin, and Olsson (2008) also explore the possibility that multiple memory systems which employ different amounts of controlled and automatic processing might contribute to multi-attribute judgment.
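The settling of such a network can be illustrated with a toy consistency-maximization sketch: activation spreads over weighted links until the pattern stabilizes and one option node dominates. This is only a schematic illustration of the general idea, not the specific algorithm of Glöckner and Betsch; the network structure and the weights below are invented for the example.

```python
import numpy as np

def settle_network(weights, n_iterations=50, decay=0.9):
    """Iteratively update node activations from the weighted input of the
    other nodes until the pattern settles; node 0 is a constantly active
    source that feeds activation into the rest of the network."""
    activation = np.zeros(weights.shape[0])
    activation[0] = 1.0
    for _ in range(n_iterations):
        net_input = weights @ activation          # input from connected nodes
        activation = np.clip(decay * activation + net_input, -1.0, 1.0)
        activation[0] = 1.0                       # keep the source clamped
    return activation

# Nodes: 0 = source, 1-2 = cues, 3-4 = options A and B (hypothetical weights).
w = np.zeros((5, 5))
w[0, 1] = w[1, 0] = 0.10    # the source supports cue 1 strongly
w[0, 2] = w[2, 0] = 0.05    # ...and cue 2 more weakly
w[1, 3] = w[3, 1] = 0.20    # cue 1 speaks for option A
w[2, 4] = w[4, 2] = 0.20    # cue 2 speaks for option B
w[3, 4] = w[4, 3] = -0.20   # the options inhibit each other
act = settle_network(w)
print("Option A" if act[3] > act[4] else "Option B")   # A dominates here
```

The crucial point is that the “choice” falls out of the settling process itself: no step-by-step comparison of cues is carried out, yet the more strongly supported option ends up dominating the representation.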
5 Learning
Learning and decision making are inextricably linked. Judgments and decisions do not emerge out of thin air; they are informed by our prior experience, and each decision yields some information (did it work out well or badly?) that we can add to our stock of experience for future benefit (Newell, Lagnado, & Shanks, 2007). Although in many real world situations feedback about a particular decision might be delayed or degraded (by noise in the environment; Tversky & Kahneman, 1986), it is still the case that over time we can learn to adaptively alter our behavior to improve our decision making. Given what appears to be a clear and important connection between learning and decision making, it is perhaps surprising that a large portion of JDM research has studied situations in which all the information required for a decision or judgment is provided in descriptions (e.g., gambles, scenarios, and statements) for which no learning is required (see Busemeyer & Johnson, 2004, for a similar point). There are exceptions to this focus, not least the probability learning experiments of the 1960s (e.g., Tversky & Edwards, 1966 — variants of which are once again in vogue in the JDM literature — see Barron & Erev, 2003; Erev & Barron, 2005; Newell & Rakow, 2007) and the multiple-cue probability learning studies of the 1970s and 80s (e.g., Brehmer, 1980).
Much of the work on multi-attribute judgment, and indeed most of the papers in this special issue, take their lead from these tasks in which learning from experience is an essential component of the judgment process. For example, many of the tasks employ variants of a situation in which participants face repeated judgments about which of two or more objects (cities, companies, horses, insects) is highest on a criterion of interest (size, profitability, race-winning ability, toxicity). Typically each object is described by cues (e.g., in the case of companies: share price, employee turnover, etc.) that are probabilistically related to the criterion. The mechanism underlying the learning of these cue-criterion relations is often not an explicit component of the models but is assumed to occur through some process of co-variation assessment, frequency counting or basic associative learning (see Newell, Lagnado, & Shanks, 2007, for an extensive treatment of these issues). The models of judgment tend to be more interested in how the products of this low-level learning process are implemented in choosing between alternatives or in predicting criterion values.
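For concreteness, the kind of low-level frequency counting mentioned above can be sketched as a simple tally of how often a cue discriminates between paired objects and points to the one higher on the criterion (the usual definition of cue validity). The company pairs and cue values below are hypothetical.

```python
def cue_validity(experienced_pairs, cue):
    """Frequency-counting sketch: validity is the proportion of experienced
    pairs in which the cue discriminated between the two objects and pointed
    to the one that was higher on the criterion."""
    correct = discriminations = 0
    for (cues_a, criterion_a), (cues_b, criterion_b) in experienced_pairs:
        if cues_a[cue] != cues_b[cue]:              # the cue discriminates
            discriminations += 1
            favours_a = cues_a[cue] > cues_b[cue]
            if favours_a == (criterion_a > criterion_b):
                correct += 1                        # ...and points the right way
    return correct / discriminations if discriminations else 0.5

# Hypothetical experience with pairs of companies: ({cues}, profitability).
pairs = [
    (({"share_price": 1}, 90), ({"share_price": 0}, 40)),
    (({"share_price": 1}, 30), ({"share_price": 0}, 70)),
    (({"share_price": 1}, 85), ({"share_price": 0}, 20)),
]
print(cue_validity(pairs, "share_price"))   # 2 of 3 discriminations correct
```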
For example, the learning might be conceptualized as influencing the weights of connections in a network that produces a dominant option (Glöckner & Betsch, 2008); or the cue-criterion knowledge might be thought of as the inputs to a linear summation model which produces a predicted criterion value (Karlsson, Juslin, & Olsson, 2008); another interpretation is that cue knowledge about different objects is what is sampled sequentially in an evidence-accumulation threshold model (Hausmann & Läge, 2008); on yet another interpretation the knowledge can be thought of as chunks of information retrieved from memory and used in the execution of production rules (Gaissmaier, Schooler, & Mata, 2008). The papers in this issue explore these varied and intriguing conceptualizations and illustrate the value in understanding how the process and products of learning influence multi-attribute judgment. Rieskamp’s paper is also concerned with learning but at a higher level of abstraction — that of learning how to decide or when to select a particular strategy.
6 Categorization
Categorization is a fundamental ability which allows us to organize our knowledge, react appropriately and make useful predictions about the properties of “things” we encounter in the world (Bruner, Goodnow, & Austin, 1956; Medin, Ross, & Markman, 2001). Juslin, Olsson, and Olsson (2003) made the insightful observation that categorization and multi-attribute judgment research often ask the same basic questions (e.g., How do you judge if a person is a friend or a foe?) and often use tasks with the same underlying structures; but very different formal models of performance have dominated in the two areas of research.
The driving metaphor in early multi-attribute judgment research was that of the lens model (Brunswik, 1952; Hammond, 1955), which emphasized the controlled integration of cue-criterion values that had been abstracted via training or experience in the relevant environment. The processes could generally be captured in multiple linear regression models (e.g., Brehmer, 1994; Cooksey, 1996) which assume the weighting and adding of single pieces of evidence. In contrast, categorization research has been influenced enormously by models which emphasize exemplar memory — the reliance on specific instances of events/objects retrieved from memory (Juslin et al., 2003, suggest a doctor making diagnoses on the basis of retrieved instances of similar patients as an illustrative example). This view dispenses with abstracted cue-criterion relations and emphasizes an organism’s ability to remember stimulus configurations. Several mathematical models of exemplar processing have been proposed (e.g., Medin & Schaffer, 1978; Nosofsky, 1984) and more recently some have been applied to JDM phenomena (e.g., Dougherty, Gettys, & Ogden, 1999; Juslin & Persson, 2002).
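The contrast between the two formal traditions is easy to see in code. In the cue-abstraction tradition a judgment is a weighted sum of cue values; in the exemplar tradition it is a similarity-weighted average of the criterion values of remembered instances, as in context-model style accounts. The sketch below shows the exemplar side; the attribute patterns, criterion values, and the similarity parameter s are purely illustrative.

```python
def exemplar_judgment(probe, exemplars, s=0.4):
    """Exemplar-based judgment sketch: the predicted criterion value is a
    similarity-weighted average of the criterion values of stored exemplars.
    Similarity is the product of per-attribute matches, with each mismatch
    attenuating similarity by the factor s (0 < s < 1)."""
    def similarity(a, b):
        sim = 1.0
        for x, y in zip(a, b):
            sim *= 1.0 if x == y else s
        return sim

    weights = [similarity(probe, attributes) for attributes, _ in exemplars]
    criteria = [criterion for _, criterion in exemplars]
    return sum(w * c for w, c in zip(weights, criteria)) / sum(weights)

# Hypothetical memory of (attribute pattern, criterion) pairs, e.g. toxicity.
memory = [((1, 1, 0), 80.0), ((1, 0, 0), 60.0), ((0, 0, 1), 20.0)]
print(exemplar_judgment((1, 1, 1), memory))   # judgment pulled toward similar exemplars
```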
Karlsson, Juslin & Olsson (2008) present an overview of their stimulating work exploring the possibility that decision makers have both exemplar processes and controlled cue abstraction processes at their disposal when making multi-attribute judgments. The key issue they examine is whether there is an automatic shift between these systems as a function of judgment error, or whether such shifts are mediated by explicit intervention.
7 Metacognition
The question of explicit intervention raises the perennial favourite issue in cognitive science — what about the homunculus? Who or what structure decides how to decide? Can we describe meta-rules or criteria which select or determine the actual information processing (e.g., strategy or evidence threshold or similarity function) that is used in a specific decision situation? Unfortunately, cognitive models tend to become less specific, and process descriptions more anthropomorphic, when higher-order processes like these are concerned.
Consequently, the most influential framework for strategy selection in decision making has been Beach and Mitchell’s (1978) contingency model, which relies heavily on economic cost/benefit analysis and sees the selection of a strategy as a “compromise between the press for more decision accuracy ... and the decision maker’s resistance to the expenditure of his or her personal resources” (Beach & Mitchell, 1978, p. 447). Although they do not state this explicitly, this selection process appears to be an effortful and attention-demanding activity, and research following this tradition has investigated it in this way (Christensen-Szalanski, 1978, 1980; Chu & Spires, 2003). To be adaptive, the decision maker must weigh the potential cognitive costs of more or less demanding strategies against their expected accuracy or payoff. Since costs and accuracy are typically thought to conflict, a compromise is necessary. Payne, Bettman, and Johnson (1993) added valuable techniques for measuring cognitive effort in terms of the number of processing steps assumed to be needed (assuming they are performed sequentially), and they relaxed the assumption of effortful selection processes. However, they did so at the expense of specificity since they allow for the selection to be “sometimes a conscious choice and sometimes a learned contingency” (p. 14), and the selection will “also be a function of the recency and frequency of prior use” (p. 71). In contrast to the assumption of effortful selection processes, learning models like the SSL theory discussed by Rieskamp (2008; see also Rieskamp & Otto, 2006) try to solve the problem of strategy selection by assuming a simple reinforcement process that requires no particular amount of cognitive capacity or reasoning ability.
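The core of such a reinforcement account can be conveyed in a short simulation: each strategy carries an expectancy, a strategy is selected with probability proportional to its expectancy, and the payoff it earns is added to that expectancy. This is a deliberately simplified sketch of the general reinforcement idea rather than the full SSL theory; the two strategies, their initial expectancies, and the payoff probabilities are invented for illustration.

```python
import random

def choose_strategy(expectancies):
    """Select a strategy with probability proportional to its expectancy."""
    total = sum(expectancies.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for name, value in expectancies.items():
        cumulative += value
        if r <= cumulative:
            return name
    return name  # fall-back for floating-point edge cases

def reinforce(expectancies, strategy, payoff):
    """Add the obtained payoff to the chosen strategy's expectancy."""
    expectancies[strategy] += payoff

# Hypothetical environment in which the compensatory strategy pays off more
# often; over trials its expectancy, and hence its selection probability, grows.
expectancies = {"take_the_best": 5.0, "weighted_additive": 5.0}
for trial in range(200):
    chosen = choose_strategy(expectancies)
    success_rate = 0.8 if chosen == "weighted_additive" else 0.4
    payoff = 1.0 if random.random() < success_rate else 0.0
    reinforce(expectancies, chosen, payoff)
print(expectancies)
```

Note that no cost-benefit deliberation enters the selection step; adaptivity emerges solely from the feedback-driven expectancies.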
Although the question of whether strategy selection is effortful appears to be straightforward to evaluate empirically, answering it is probably less simple. Under some circumstances, the adaptive choice of a strategy consumes cognitive capacity, as reflected in intelligence measures (Bröder, 2003) or attention (Bröder & Schiffer, 2003), whereas a reinforcement learning model captures the learning process well in other situations (Rieskamp, 2006; Rieskamp & Otto, 2006). Bröder and Schiffer (2006) observed both quick adaptation to new environments and slow adaptation to changing environments, suggesting different principles of strategy selection at different points in time; an issue examined by Rieskamp (2008).
8 Interim Summary
Multi-attribute judgment requires many cognitive processes which have been modeled in other areas of cognitive psychology. Examining these models is therefore worthwhile because of their potential to explain decision processes and to incorporate decision making into mainstream cognitive psychology. Although it is tempting to use established models from cognitive psychology for the different processes described above and integrate them in a complementary way, a caveat is necessary: some of the metaphors and models rest on at least partly incompatible assumptions. Hence, any fully-fledged theoretical account of multi-attribute decision making will have to specify boundary conditions for the respective applicability of each model.
In the final section of the paper, we will not try to provide a full treatment of all assumptions and their mutual inconsistencies, but rather we illustrate the problem by contrasting a metaphor that has influenced our own research — the adaptive toolbox metaphor — with three other prominent metaphors that we have already briefly discussed — evidence accumulation models, exemplar-based models and network models. The juxtaposition of these metaphors highlights central topics that need to be considered carefully when cognitive models of decision making are developed. Figure 1 is a schematic diagram of the relation between the metaphors we consider and the cognitive processes hypothesized to underlie them.
9 Adaptive toolbox versus evidence accumulation models
The adaptive toolbox metaphor (Gigerenzer, Todd, et al., 1999) in particular, and contingency models in general, assume that we possess a multitude of distinguishable strategies or heuristics which involve qualitatively different processing steps. Via either explicit cost-benefit analyses or some as yet unknown selection mechanism, we choose amongst these strategies in a largely adaptive manner. The strategy we choose determines the amount of information we search for and the sequence in which we search for it. Hence, our decisions are sometimes frugal (involving few pieces of information) and sometimes more opulent (integrating more information). Another metaphor, however, is envisaged by evidence accumulation models, which assume only one single process for deciding (sum the pro and con evidence for all options and choose when a threshold is surpassed). Since the threshold is conceived of as adjustable (depending on task demands, decision importance, time pressure, etc.), evidence accumulation models can mimic the use of apparently different heuristics which use different amounts of information (Newell, 2005). But despite involving contradictory metaphors for the decision process, contingency models and accumulation models are very hard to distinguish empirically. Whether this matter can be resolved by inventing clever testing methods remains to be investigated (see Bergert & Nosofsky, 2007; Lee & Cummins, 2004; Newell, Collins, & Lee, 2007, for recent attempts). If the issue cannot be addressed empirically, the question may have to be resolved by determining which of the metaphors is more fruitful and fits better into the nomological network of other theories of cognition and decision making.
10 Adaptive toolbox and evidence accumulation versus exemplar-based models
There is a fundamental difference between the processes described in strategies or heuristics and those in exemplar-based models of decision making. Heuristics and also evidence accumulation models assume a piecemeal processing of cue-criterion relations which can be conceptualized as sequential or parallel in nature (see Bröder & Gaissmaier, 2007). In evidence accumulation models like Decision Field Theory, consulting cues is clearly thought to be a sequential process (Busemeyer & Townsend, 1993; Diederich, 1997, 2003; Roe, Busemeyer, & Townsend, 2001); it is assumed that cues or attributes independently support one option, and their respective contributions to a decision are weighted and integrated. An internal knowledge base must therefore contain knowledge (or intuitions) about single cue-criterion functions that are actively integrated at the time of judgment. Usually, this integration process is conceived of as cognitively costly (Beach & Mitchell, 1978; Payne et al., 1993).
Exemplar-based decision models, on the other hand, dispense with the idea of stored cue-criterion relations and instead assume that past instances of options are stored as attribute constellations. Options in a judgment or decision task act as memory probes for retrieving typical (modal or average) criterion values that are associated with their attribute constellations. The retrieved value is generated by aggregating across the most similar attribute sets in memory. This model is “lazy” in two respects. First, no cue-criterion relations have to be learned to establish a knowledge base; this is a realistic feature given that in real environments it is often not clear which feature will have to be predicted as a criterion in the future. Second, the model dispenses with costly information integration at the time of a decision since it only compares the probes with stored exemplars. Hence, there are some advantages of exemplar-based reasoning in terms of cognitive costs (cf. Juslin & Persson, 2002). Despite these attractive qualities, there is evidence that cue abstraction may in fact be the default mode in judgment tasks and that it is only supplemented by exemplar-based reasoning in environments where cue-criterion relations are hard to extract. One goal of further research is to delineate the conditions under which either of the decision modes is activated (see Juslin et al., 2003).
11 Adaptive toolbox versus network models
Contingency models in general (Beach & Mitchell, 1978; Payne et al., 1993) and the Adaptive Toolbox metaphor in particular are very explicit about cognitive costs. Compensatory strategies are viewed as “rational demons” (Gigerenzer & Todd, 1999) that have a high demand for cognitive resources and time. Also, processing is believed to be sequential: “In the kind of inference task we are concerned with, cues have to be searched for, and the mind operates sequentially, step by step and cue by cue” (Martignon & Hoffrage, 1999, p. 137). The usual rhetoric is that “If … both summing without weighting and weighting without summing can be as accurate as weighting and summing, why should humans not use these simpler heuristics?” (Brandstätter, Gigerenzer, & Hertwig, 2006, p. 410). These arguments appear plausible because of their introspective appeal. However, critics argue that in other areas of cognition (e.g., perception, language planning), numerous automatic and parallel processes take place without consuming cognitive effort at all. These processes involve the automatic integration of numerous pieces of information (Chater, Oaksford, Nakisa, & Redington, 2003). Simple one-layer neural networks can perform multiple regression analyses, integrating many predictors to estimate a criterion (e.g., Stone, 1986). Hence, complex processes do not necessarily imply the consumption of conscious resources or much processing time; viewed from this perspective, “simple” heuristics are probably not much simpler, subjectively, than complex ones.
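Stone’s point can be illustrated with a short sketch: a single linear unit trained with the delta rule converges on (approximately) the least-squares regression weights, integrating all predictors in parallel and without any explicit step-by-step cue consultation. The simulated cue-criterion environment, learning rate, and number of training passes below are arbitrary choices made for the example.

```python
import numpy as np

def delta_rule_regression(X, y, learning_rate=0.01, epochs=500):
    """One-layer network sketch: a single linear output unit trained with the
    delta rule; its weights approach the multiple regression solution."""
    weights = np.zeros(X.shape[1])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(X, y):
            prediction = weights @ x + bias
            error = target - prediction
            weights += learning_rate * error * x   # adjust every connection at once
            bias += learning_rate * error
    return weights, bias

# Hypothetical cue-criterion environment: criterion = 2*cue1 + 1*cue2 + noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=0.1, size=200)
print(delta_rule_regression(X, y))   # weights come out close to [2.0, 1.0]
```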
In principle, both views should be easy to distinguish experimentally by restraining or enhancing cognitive resources and by observing the effects on strategy selection. The evidence on this question is mixed, however: Bröder (2003) and Bröder and Schiffer (2003) did not find evidence that strategies involving weighting and summing were more costly than simple lexicographic strategies. Quite to the contrary, higher cognitive capacity (intelligence, non-demanding second task) was associated with simpler strategies when these were appropriate in the environment. This suggests that the selection rather than the execution of strategies is associated with costly processing. On the other hand, Bröder and Gaissmaier (2007) found evidence for sequential processing in cases where cues had to be retrieved from memory to form a judgment. Thus, the parallel/automatic versus sequential/effortful distinction is probably not an absolute one in multi-attribute decisions. Rather, the task again will be to investigate the boundaries of the respective cognitive models as a function of task characteristics.
12 Conclusions
Our hope for this special issue is that highlighting areas of overlap between cognitive modeling and multi-attribute judgment will stimulate further cross-fertilization and inspire research examining the boundary conditions of various models. We believe this will strengthen JDM research in general and help to reconnect it with mainstream cognitive psychology.