
Consumers can make decisions in as little as a third of a second

Published online by Cambridge University Press:  01 January 2023

Milica Milosavljevic
Affiliation:
Computation and Neural Systems, California Institute of Technology
Christof Koch
Affiliation:
Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea

Abstract

We make hundreds of decisions every day, many of them extremely quickly and without much explicit deliberation. This motivates two important open questions: What is the minimum time required to make choices with above chance accuracy? What is the impact of additional decision-making time on choice accuracy? We investigated these questions in four experiments in which subjects made binary food choices using saccadic or manual responses, under either “speed” or “accuracy” instructions. Subjects were able to make above chance decisions in as little as 313 ms, and choose their preferred food item in over 70% of trials at average speeds of 404 ms. Further, slowing down their responses by either asking them explicitly to be confident about their choices, or to respond with hand movements, generated about a 10% increase in accuracy. Together, these results suggest that consumers can make accurate every-day choices, akin to those made in a grocery store, at significantly faster speeds than previously reported.

Type
Research Article
Creative Commons
The authors license this article under the terms of the Creative Commons Attribution 3.0 License.
Copyright
Copyright © The Authors [2011] This is an Open Access article, distributed under the terms of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

1 Introduction

We make hundreds of decisions every day, many of them extremely quickly and without much explicit deliberation. Consider, as an example, a trip to the local grocery store. While unfamiliar and high-stakes purchases often involve careful comparisons, casual observation suggests that many others are made at speeds that seem inconsistent with careful deliberation. This motivates an important open question in the psychology and neurobiology of decision-making, as well as in the domain of consumer research: What are the fastest speeds at which the human brain is capable of identifying the most valuable options?

Previous work on the computational and neurobiological basis of decision-making provides some clues. Multiple studies have shown that the Drift-Diffusion-Model and its variants, which provide a computational description of how choices are made, are able to provide good quantitative descriptions of how accuracy and response times vary with the underlying parameters of the choice problem (Link & Heath, 1975; Ratcliff, 1978; Usher & McClelland, 2001; Busemeyer & Johnson, 2004; Gold & Shadlen, 2007; Bogacz, 2007; Ratcliff & McKoon, 2008; Krajbich, Armel, & Rangel, 2010; for a companion paper see Milosavljevic, Malmaud, Huth, Koch, & Rangel, 2010). All of these models predict a speed-accuracy tradeoff, which has been observed in the data. However, this literature has not experimentally measured the fastest speeds at which value-based decisions can be made (Smith & Ratcliff, 2004; Gold & Shadlen, 2007; Ratcliff & McKoon, 2008; Bogacz, Hu, Holmes, & Cohen, 2010).

An examination of previous decision-making studies also suggests that decisions can be made quickly, but it provides few clues about how fast decisions can be made when speed is the goal, perhaps due to external constraints. For example, in Krajbich, Armel, and Rangel (2010), hungry subjects made real choices between pairs of food stimuli displayed on a computer screen with reaction times (RT) that ranged from 1.7 seconds for the easiest choices to 2.7 seconds for the most difficult ones. Reaction times in the 800 ms range have been reported in various other choice studies (Knutson, Rick, Wimmer, Prelec, & Loewenstein, 2007; Wunderlich, Rangel, & O’Doherty, 2010; Litt, Plassmann, Shiv, & Rangel, 2010). Studies examining more complex choices, those involving six to twelve options, have reported reaction times in the 6–18 seconds range (Pieters & Warlop, 1999; Chandon, Hutchinson, Bradlow, & Young, 2009).

However, work on the psychophysics of perceptual judgments suggests that the brain may be able to carry out the decision computations much faster. Thorpe, Fize, and Marlot (1996) showed that subjects could categorize natural scenes according to whether or not they contain an animal using a go/no-go task (median RT of 445 ms on “go” trials and differential ERP activity in 150 ms). VanRullen and Thorpe (2001a, 2001b) found that subjects could distinguish between animals and vehicles in a go/no-go task at similarly fast speeds (mean RT of 364 ms for animals, 376 ms for vehicles; minimum RT of 225 ms for animals, 245 ms for vehicles as measured by earliest above-chance responses; differential ERP activity detected in 150 ms for both tasks). Further, Kirchner and Thorpe (2006) used a novel saccadic choice paradigm to show that a pair of natural scenes flashed in the left and right hemifields could be compared for the presence of one or more animals with a median RT of 228 ms, and a minimum RT of 120 ms. Using a similar paradigm, Bannerman, Milders, de Gelder, and Sahraie (2009) showed that subjects could distinguish a fearful facial expression or body posture from a neutral one in less than 350 ms (mean RT). Similarly, forced-choice saccades to identify human faces can be performed above chance and initiated with a mean reaction time of 154 ms and a minimum RT of 100 ms (Crouzet, Kirchner, & Thorpe, 2010). Finally, we recently showed that individuals can make magnitude comparisons between two single-digit numbers with high accuracy in 306 ms on average and are able to perform above chance in as little as 230 ms (Milosavljevic, Madsen, Koch, & Rangel, 2011).

Note that, as in the case of simple choice, in all of these experiments subjects had to recognize the stimuli and their location, analyze the stimuli to make a discrimination judgment, and then indicate the outcome of the judgment through a motor response. Thus, given the similar computational demands, it is natural to hypothesize that the brain should be able to make accurate simple choices at much faster speeds than those that have been reported in the literature.

We conducted four experiments designed to address two basic questions: What is the minimum computation time required to make choices with above chance accuracy? What is the impact of additional computation time on choice accuracy? An important difficulty in answering these questions is that reaction time measures from standard choice paradigms overestimate the amount of time that it takes to make a decision (i.e., to compute and compare values), since they also include the time required to perceive the stimuli and to deploy the choice. Here, we use a paradigm from vision psychophysics (Kirchner & Thorpe, 2006), which was developed in the studies described above, and which allows us to minimize the aforementioned measurement problems.

Determining the minimum computation times at which consumers can make choices above chance level, as well as the impact of additional time on choice accuracy, is important for two reasons. First, since a significant fraction of decisions seems to be made at these speeds, it provides an insight into the general quality of human decision-making. The increased popularity of new technologies in which decisions are made with the click of a mouse further increases the importance of the question. Second, it provides insights into the nature of the computational and neurobiological process that might be at work in making fast decisions, such as the relative importance of “bottom-up” (“feed-forward”) and “top-down” (“feedback”) processes.

Figure 1: Typical trial in Experiment 1

2 Experiment 1: Fast saccadic consumer choices

Experiment 1 was designed to investigate how fast consumers can make real value-based choices between pairs of stimuli, i.e., pairs of food items, and to measure the accuracy of the resulting choices.

2.1 Methods

Subjects. Twelve Caltech students with normal or corrected-to-normal vision participated in the eye-tracking study after providing informed consent.

Experimental Task. Subjects were instructed not to eat for three hours before the experiment. The experiment was divided into three phases.

First, there was a liking-rating task. Subjects were shown images of 50 different food items. All of the foods were highly familiar snack food items, such as candy bars and chips (e.g., Snickers and Doritos), sold at local convenience stores, and which pre-testing had shown were familiar to our subject population. The items were centered on the screen one at a time, and subjects were asked: “How much would you like to eat this item at the end of the experiment?” They reported their preferences on a 5-point scale, ranging from −2 (not at all; don’t like the item at all) to +2 (extremely; one of my favorite snacks). The items were neutral to appetitive, with average liking ratings ranging from −.83 ± .37 (SD) to 1.42 ± .36 (SD) across 12 subjects. These liking ratings served as our measure of the value that the subjects place on each food item. Thus, on each trial, the correct choice involved picking the item with the higher subjective value.

Second, subjects completed 750 trials of a 2-alternative forced choice (2-AFC) task. Figure 1 depicts the structure of the trials. Each trial began with an enforced 800 ms fixation on a central fixation cross, followed by a 200 ms blank screen (Fischer & Weber, 1993). Two different food items were then shown simultaneously for 20 ms, centered in the left and right hemifields. The pairs of food items were chosen randomly from the set of 50 items, with the constraint that the absolute subjective rating difference (d = rating_best − rating_worst) be larger than zero. That is, no two equally-rated food items were ever paired on the same trial. Following the presentation of the two choice options, two faint dots were displayed, one on the left and one on the right side of the screen, to indicate the corresponding choice options. Subjects were asked to respond as quickly as possible by making an eye-movement toward the side where their preferred food item appeared. They were also told that one of the trials would be randomly chosen at the end of the experiment and that they would be asked to eat the item that they chose on that trial. Their response (left or right food item) was recorded with an eye-tracker (see below) and the next trial began immediately.

Finally, at the end of the study one of the choice trials was randomly selected. Subjects were asked to stay in the lab for 15 minutes and the only thing that they were allowed to eat was whatever they chose in that randomly selected trial.

Eye-tracking. The task took place in a dimly lit room and subjects’ heads were positioned in a forehead and chin rest. Eye-position data were acquired from the right eye at 1000 Hz using an Eyelink 1000 eye-tracker (SR Research, Osgoode, Canada). The distance between the computer screen and the subject was 80 cm, giving a total visual angle of 28° × 21°. The images were presented on a computer monitor using the Matlab Psychophysics toolbox and the Eyelink toolbox extensions (Brainard, 1997; Cornelissen, Peters, & Palmer, 2002). Left or right choices were registered when an eye-movement was initiated from the center of the screen towards a food item and crossed a threshold of 2.2° (78 pixels) toward the left or the right side of the screen. Saccadic reaction time, here representing decision-making time, was defined as the time difference between the onset of the images and the subject’s saccade initiation.
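The threshold rule just described can be sketched as follows. This is an illustrative reconstruction, not the authors' analysis code: the screen center, the sample data, and the function name are assumptions, and it scores the threshold-crossing time rather than a separately detected saccade-initiation time.

```python
import numpy as np

THRESHOLD_PX = 78       # 2.2 degrees at the reported 80 cm viewing distance
CENTER_X = 512          # assumed horizontal screen centre, in pixels
SAMPLE_RATE_HZ = 1000   # Eyelink 1000 sampling rate (1 sample per ms)

def detect_choice(gaze_x, onset_sample):
    """Return (choice, rt_ms) from one trial's horizontal gaze trace.

    gaze_x       : 1-D array of horizontal gaze positions in pixels
    onset_sample : index of the sample at which the images appeared
    """
    offsets = np.asarray(gaze_x[onset_sample:], dtype=float) - CENTER_X
    crossed = np.flatnonzero(np.abs(offsets) > THRESHOLD_PX)
    if crossed.size == 0:
        return None, None                      # no saccade detected
    first = crossed[0]
    choice = "right" if offsets[first] > 0 else "left"
    rt_ms = first * 1000.0 / SAMPLE_RATE_HZ    # sample index -> milliseconds
    return choice, rt_ms
```

For example, a trace that stays at screen center for 300 ms and then drifts rightward yields a "right" choice with an RT just above 300 ms.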

The experiment was designed to obtain an accurate estimate of the minimum time required to make a choice. Several features are worth highlighting. First, the total measured reaction times include both the time that it takes to make a decision (by computing and comparing the values associated with the two stimuli), as well as the time required to recognize and process the stimuli and to initiate the associated motor response. Thus, this experiment overestimates the actual time that it takes to make the decision by an amount equal to the time that it takes to process the stimuli visually and to initiate a saccade once the decision has been made. Second, in order to minimize the bias in our response time measure introduced by the motor response, we asked subjects to indicate their choice by executing an eye-movement as rapidly as possible to the location of their preferred stimulus. We used saccades as our response modality because they are the fastest responses that humans can make (Saslow, 1967; Fischer & Ramsperger, 1984). Third, we introduced a 200 ms blank screen between the fixation and the stimulus display because this has been shown to speed up saccades, which further reduces the upward bias on our estimate of how fast choices can be made (Fischer & Weber, 1993). Fourth, the stimuli were displayed on the screen for only 20 ms, following Kirchner and Thorpe’s (2006) paradigm, which was designed to determine the minimum reaction time for their perceptual decision-making task.

We also highlight a limitation of the study. Since subjects provide only one liking rating for each food item, it is very likely that we measure the underlying preferences with noise. Note that this stands in contrast with standard psychophysics methods in which the characteristics of the stimuli that affect behavior (e.g., orientation or contrast) are controlled by the experimenter, and thus measured without noise. It is important to keep this source of noise in mind when interpreting the accuracy results described below. In particular, it implies that some of the choices that we label as errors (in the sense of not choosing the higher rated option) could in fact be correct given the underlying true preferences (which we measured with error). Note, however, that as long as the measurement error in the liking ratings is additive and constant for all items, our accuracy variable is measured with the same amount of noise in all trial bins, which implies that our results on how accuracy changes with variables such as choice difficulty are noisier than ideal, but not biased.

Computation of Minimum Reaction Times (MRT). Minimum reaction times (MRT) were computed using a method taken from the statistical quality control literature (Chandra, 2001; Roberts, 1959) and recently introduced in the domain of perceptual decision making (Milosavljevic, Madsen, Koch, & Rangel, 2011). The method has also been used to filter “fast guesses” from perceptual data when fitting the Ratcliff Drift Diffusion Model (Vandekerckhove & Tuerlinckx, 2007; Ratcliff & McKoon, 2008). The basic idea is as follows. Observations are ordered from low to high response times. Let X_i denote the accuracy of the ith ordered response (1 = correct, 0 = incorrect). An exponentially weighted moving average (EWMA) measure of accuracy is then computed using the recursive formula

EWMA_i = λX_i + (1 − λ)EWMA_{i−1},

where λ is a parameter indicating how much weight to give to past (ordered) observations in the moving average. Note that when λ = 1, the EWMA statistic is based only on the most recent observation. In contrast, as λ approaches 0, previous observations are given increasing weight relative to the latest observation. EWMA_0 was set to 0.5 (i.e., chance performance).

Intuitively, the EWMA measure provides an estimate of how accuracy changes with increasing reaction times. This measure can then be compared against the null hypothesis of chance performance on all trials. This null hypothesis generates a confidence interval for the EWMA after i observations given by

µ ± Nσ √( [λ/(2 − λ)] [1 − (1 − λ)^{2i}] ).

The first term in this expression is the mean EWMA statistic under the null hypothesis, which in our experiment is given by µ = E(X_i) = 0.5. The second term provides an expression for the width of the confidence interval: N is the number of standard deviations included in the confidence interval, and σ = Std(X_i) = 0.5 is the standard deviation of each observation X_i under the null hypothesis. Note that under the null hypothesis performance in every trial depends on the independent flip of a fair coin.

The MRT can then be defined as the smallest ordered reaction time at which the EWMA measure permanently exceeds this confidence interval. For our analyses we chose conservative parameters to reduce the possibility of false positives: λ = 0.01 and N = 3. The EWMA analysis was run separately for each subject and condition.
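Putting the pieces together, the EWMA procedure above can be sketched as follows. This is a minimal reconstruction from the formulas in the text, not the authors' code; the function name and the simulated test data are illustrative assumptions.

```python
import numpy as np

def minimum_reaction_time(rts, correct, lam=0.01, n_sd=3, ewma0=0.5):
    """Estimate the MRT via the EWMA control-chart rule described above.

    rts     : reaction times (ms), one per trial
    correct : matching 0/1 accuracy flags
    Returns the smallest ordered RT after which the EWMA accuracy
    permanently exceeds the upper confidence limit, or None.
    """
    order = np.argsort(rts)
    rts = np.asarray(rts, dtype=float)[order]
    x = np.asarray(correct, dtype=float)[order]

    mu, sigma = 0.5, 0.5            # mean and SD of X_i under the chance null
    ewma = ewma0
    above = np.zeros(len(x), dtype=bool)
    for i, xi in enumerate(x, start=1):
        ewma = lam * xi + (1 - lam) * ewma          # EWMA_i update
        # upper confidence limit after i observations
        ucl = mu + n_sd * sigma * np.sqrt(
            lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
        above[i - 1] = ewma > ucl

    # "permanently exceeds": first index after the last dip below the limit
    not_above = np.flatnonzero(~above)
    if not_above.size == 0:
        return rts[0]
    last = not_above[-1]
    return rts[last + 1] if last + 1 < len(rts) else None
```

With λ = 0.01 the statistic moves slowly, so on simulated data whose accuracy jumps from chance to perfect at some RT, the estimated MRT lands slightly after the true transition point, which is the intended conservative behavior.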

This test is meant to improve on the previous measures of minimum reaction times introduced by Kirchner and Thorpe (2006) and extensively used in the literature (e.g., Crouzet, Kirchner, & Thorpe, 2010). Their typical analysis sorts reaction times in increasing order, divides them into discrete bins (e.g., 10 ms bins centered at 10 ms, 20 ms, etc.), and then tests the average accuracy of observations in each bin against chance. This procedure has raised some controversy because the resulting MRT measures are highly sensitive to the width and placement of the bins.
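For contrast, a bin-based test of the kind just described might look roughly like this. The bin placement, the significance threshold, and the use of a per-bin binomial test are illustrative assumptions rather than the published procedure; varying bin_ms makes the bin-sensitivity concern easy to reproduce.

```python
import numpy as np
from scipy.stats import binomtest

def binned_mrt(rts, correct, bin_ms=10, alpha=0.05):
    """Return the first bin centre whose accuracy is significantly
    above chance (one-sided binomial test), or None."""
    rts = np.asarray(rts, dtype=float)
    correct = np.asarray(correct)
    centers = np.arange(bin_ms, rts.max() + bin_ms, bin_ms)
    for c in centers:
        in_bin = (rts >= c - bin_ms / 2) & (rts < c + bin_ms / 2)
        n = int(in_bin.sum())
        if n == 0:
            continue                        # skip empty bins
        k = int(correct[in_bin].sum())
        if binomtest(k, n, 0.5, alternative="greater").pvalue < alpha:
            return c
    return None
```

Because each bin is tested independently, the returned estimate shifts when the bins are widened or re-centered, which is exactly the instability the EWMA method avoids.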

Figure 2: Reaction time distributions for Experiment 1, for correct trials = thick line, and error trials = thin line. Vertical line shows the mean MRT = 313 ms.

Table 1: Individual statistics for Experiment 1

Figure 3: (A) Percentage of correct choices as a function of liking rating differences between the two food items (1=most difficult choices, 4=easiest choices). (B) Mean reaction time at each value distance. Error bars denote SEM.

2.2 Results

Trials with responses faster than 80 ms were excluded, since this is the approximate minimum time necessary to make a purposeful eye-movement (302 out of 9751 trials). Trials that took much longer than the task requires were also excluded (4 out of 9751). These trials probably reflect errors made by the subject or the eye-tracker, for example, a blink that caused a subject to miss the 20 ms presentation of the food items.

Figure 2 depicts the group reaction time curves for correct and error trials, as well as the mean estimate of the MRT. The mean response time was 407 ± 1 ms (SEM) for correct trials, and 396 ± 3 ms for error trials. Furthermore, the reaction time distributions show that a small percentage of saccades took place between 80 and 200 ms (1.0% for correct trials, and .7% for error trials). These types of saccades are known as “express saccades” (Fischer & Ramsperger, 1984).

Table 1 and Figure 3A show that, despite the speed of the choices (mean response time across subjects was 404 ± 21 ms), all subjects selected their preferred option well above chance: mean accuracy was 73.3 ± 1.6% (binomial test of above chance performance, p < 0.0001), ranging from 65.2 ± 1.8% for most difficult choices to 84.6 ± 3.0% for easiest choices. Figure 3B describes reaction times as a function of difficulty, where absolute distance in liking ratings of the two food items of d=1 represents difficult choices, and d=4 represents easy choices. For difficult choices (d=1) the mean RT was 411 ms, while for easy choices (d=4) the mean RT was 386 ms (paired t-test: t=3.82, df=11, p=.0028). Table 1 also reports the MRT for each individual. The mean MRT across subjects was 313 ± 17 ms.

3 Experiment 2: Pure perceptual detection control task

One limitation of Experiment 1 is that the estimate of the minimum speed at which simple choices can be made includes the time necessary to perceive the items and initiate the eye-movement through which the choices are indicated. This is an important limitation because a key goal of the paper is to improve existing estimates of the minimum amount of time at which values can be computed and compared in the brain.

To address this limitation we carried out a pure perceptual detection control task in which subjects were asked to detect the side of the screen on which a picture of a food item was displayed and make an eye-movement towards that location. Note that this task has perceptual and motor demands similar, though not identical, to those in Experiment 1, since only one stimulus is shown on the screen. However, in the new task subjects do not need to compute and compare the values of the stimuli. As a result, subtracting the reaction times in Experiment 2 from those in Experiment 1 provides an alternative and potentially better estimate of the amount of time that it takes to compute and compare values.

3.1 Methods

Subjects. Four subjects from Experiment 1 participated in this experiment, which was conducted several weeks after the previous one. Table 2 shows which subjects participated in each experiment, as well as the order in which they did so.

Task. Subjects completed 300 trials of a simple fixation task, shown in Figure 4. Each trial began with an enforced 800 ms fixation on a central fixation cross, followed by a 200 ms blank screen. Afterwards a single food stimulus was displayed in either the left or the right hemifield for 20 ms, followed by two faint dots displayed on either side of the screen to indicate the corresponding saccade landing positions. Subjects were asked to make an eye-movement as quickly as possible toward the side where the food item appeared. All omitted details are as in the previous experiment.

We acknowledge several potential concerns with this control experiment, which should be taken into account when interpreting the results described here. First, since unfortunately we thought of this task only after carrying out the other experiments described here, all of the subjects performed Experiment 1 before Experiment 2. This raises the concern that familiarity with the stimulus set, which was identical in both tasks, could have led to faster responses in Experiment 2. Note that this would imply that the task underestimates the time devoted to stimulus recognition and motor deployment in normal choice, and thus the estimates reported below would overestimate the time that it takes to compute and compare values. Second, the control task is not perfect, since the perceptual demands in Experiment 2 are significantly simpler than those in Experiment 1 because subjects can recognize the location of the stimulus by simply perceiving a flicker. Note, however, that this limitation also results in an overestimate of the amount of time that it takes to compute and compare values, which means that it does not affect our overall conclusion that the brain can make simple consumer choices at extremely fast speeds. Given these concerns, the results in this section should be considered exploratory, and as a result the rest of the paper emphasizes the MRT estimates from the other experiments.

Table 2: Order of participation for each subject and experiment

Figure 4: Typical trial in Experiment 2.

Figure 5: Reaction time distribution for Experiment 1 (blue line) vs. Experiment 2 (green line). Thick line = correct trials. Thin line = error trials. Vertical lines = the estimate of the mean Minimum Reaction Time across subjects: Experiment 1 = 313 ms; Experiment 2 = 114 ms.

3.2 Results

Table 3: Individual statistics for Experiment 2 and Experiment 1

Due to extremely fast or slow response times, 30 out of 1200 trials were omitted from further analysis. Figure 5 compares the reaction time distributions for Experiments 1 and 2. Table 3 reports the mean and minimum RT (MRT) for each subject, which in every case were significantly faster in Experiment 2 than in Experiment 1 (highest p < .00001 for paired t-tests). The mean reaction time across subjects was 183 ± 21 ms, significantly faster than the mean response for Experiment 1 (paired t-test, t=5.59, df=14, p=.0001). Table 3 also shows accuracy for each subject (mean = 99.8%) and the MRT (mean = 114 ms; paired t-test vs. Experiment 1, t=6.66, df=14, p < .00001).

We can now subtract the RT estimates of the two experiments for the four subjects that participated in both tasks, in order to get a better estimate of the amount of time that it takes to compute and compare values. The mean difference in RT between a simple saccadic response and a more demanding value-based saccadic choice across four subjects was 235 ms, and the mean difference in the MRT was 231 ms.

4 Experiments 3 and 4: The impact of additional computation time on choice accuracy

The next two experiments were designed to investigate the impact of additional computation time on choice accuracy. In Experiment 3, reaction times were slowed down by asking subjects to respond only once they were confident of which option they preferred. In Experiment 4, reaction times were slowed down by asking subjects to indicate their choice with a button press instead of a saccade, which naturally slows down reaction times.

4.1 Method

Subjects. Nine subjects participated in Experiment 3. Five subjects participated in Experiment 4. See Table 2 for details.

Task. The procedure for Experiment 3 was identical to that of Experiment 1, except that subjects were asked to maintain the central fixation on the screen until they were confident of which food item they preferred, and to only then indicate their choice by making an eye-movement to the location of their preferred item. The procedure for Experiment 4 was identical to that of Experiment 1, except that now responses were indicated by a button press. All omitted details are as in Experiment 1.

Figure 6: Reaction time distribution for Experiment 3 (red line) vs. Experiment 4 (gray line). Thick lines = correct trials. Thin lines = error trials. Vertical lines = the mean Minimum Reaction Time across subjects: Experiment 3 = 365 ms; Experiment 4 = 418 ms.

4.2 Results

Table 4: Individual statistics for Experiment 3

Table 5: Individual statistics for Experiment 4

Figure 7: Experiments 3 (dashed line) & 4 (solid line). (A) Percentage of correct trials as a function of liking rating differences between the two food items (1=most difficult choices, 4=easiest choices). (B) Mean reaction times for each value distance. Error bars denote SEM.

Figure 8: (A) Comparison of Minimum RTs and Mean RTs in all experiments: Experiment1 = Speed; Experiment 2 = Control; Experiment 3 = Conf; Experiment 4 = Manual. (B) Mean Accuracy in all experiments. Error bars represent SEM.

In Experiment 3, due to extremely fast or slow response times, 14 out of 6750 trials were omitted from further analysis. Figure 6 and Table 4 show the reaction time distributions and accuracy for Experiment 3 (eye-tracking; N=9). The mean accuracy was 83.0% (binomial test of above chance performance, p < 0.0001) and the mean response time was 572 ± 44 ms. Accuracy increased and reaction time decreased with decreasing choice difficulty (paired t-test for difference in accuracy between d=1=74.2% and d=4=95.0% is significant at p < .0001; paired t-test for reaction times between d=1=606 ms and d=4=510 ms is significant at p=.014).

In Experiment 4, due to extremely slow response times, 2 out of 3742 trials were omitted from further analysis. Figure 6 and Table 5 depict the reaction time distributions and accuracy for Experiment 4 (manual responses, N=5). The mean accuracy was 84.9% (binomial test of above chance performance, p < 0.0001) and the mean response time was 632 ± 60 ms. Accuracy increased and reaction time decreased with decreasing choice difficulty (paired t-test for accuracy between d=1=74.2% and d=4=93.8% is significant at p=.004; paired t-test for reaction times between d=1=685 ms and d=4=559 ms is marginally significant at p=.07).

Figure 7 compares accuracy and reaction times as a function of value distance for Experiments 3 and 4. None of the differences (in RT or accuracy, at any value distance) between the two experiments were statistically significant (for all t-tests, p>.3).

In order to facilitate comparisons across experiments, Figure 8 shows the comparison of the Minimum Reaction Times (MRT), accuracy, and mean reaction times for all four experiments.

5 Discussion

The experiments described here have generated three main findings. First, using a measure of Minimum Reaction Times (MRT) we found that subjects can compute and compare values significantly above chance in as little as 313 ms. Second, we found that, at average reaction times of 404 ms subjects were able to compute and compare values with accuracies as high as 73%, and that at most 235 ms of this time was devoted to the computation and comparison of values (the rest was spent on preparing and initiating the response). Third, we have found that slowing down subjects by either asking them explicitly to be confident about their choices, or by asking them to indicate choices using hand movements, increases mean reaction times by about 150 ms (comparison between the same 8 subjects who completed Experiments 1 and 3 shows a difference of 150 ms; the difference between the same 4 subjects who completed Experiments 1 and 4 was 123 ms), while generating only small increases in choice accuracies: 9.6% and 7.6%, respectively.

A comparison of these estimates with the time that it takes to carry out other simple cognitive computations is informative. For example, humans can make a purely visual binary discrimination to select the natural scene containing one or more animals in about 140–160 ms (Kirchner & Thorpe, 2006), can distinguish a fearful facial expression or body posture from a neutral one in less than 350 ms (Bannerman, Milders, de Gelder, & Sahraie, 2009), and can make judgments after brief exposures (<1 s) to complex stimuli that are predictive of the choices made when they have sufficient time to deliberate (Ambady & Rosenthal, 1992; Willis & Todorov, 2006). Since the motor processing demands of our task, as well as the details of the experimental design, are similar to these perceptual decision-making tasks (esp. Kirchner & Thorpe, 2006), it is interesting to compare the psychometrics of the two tasks, both of which involve indicating one’s choice by making an eye-movement to one of two images. A comparison of the average reaction times suggests that value-based decisions take approximately 264 ms longer than purely perceptual choices (404 ms in our eye-tracking speed condition versus 140 ms in the perceptual discrimination task of Kirchner & Thorpe, 2006). A similar estimate is provided by a comparison of Experiment 1 and our control task in Experiment 2 (235 ms).

A caveat to the finding that additional processing time had a small impact on choice accuracy is that our manipulations increased computation times by only about 168 ms (the difference between Experiments 3 and 1). It is conceivable that longer deliberation times might have generated even higher choice accuracies. However, given the well-known stochastic nature of choice, it might be that choice accuracies are already near ceiling at around 600 ms. Further careful testing will be necessary to settle this question.

Footnotes

Financial support from the Moore Foundation (AR), NSF (AR), the Mathers Foundation, the Gimbel Fund (CK), and the Tamagawa Global Center of Excellence (MM) is gratefully acknowledged. Address: Caltech, 1200 E. California Blvd., HSS, MC 228–77, Pasadena, CA 91125.

References

Ambady, N., & Rosenthal, R. (1992). Thin slices of expressive behavior as predictors of interpersonal consequences: A meta-analysis. Psychological Bulletin, 111, 256–274.
Bannerman, R. L., Milders, M., de Gelder, B., & Sahraie, A. (2009). Orienting to threat: Faster localization of fearful facial expressions and body postures revealed by saccadic eye movements. Proceedings of the Royal Society B, 276(1662), 1635–1641.
Bogacz, R. (2007). Optimal decision-making theories: Linking neurobiology with behaviour. Trends in Cognitive Sciences, 11, 118–125.
Bogacz, R., Hu, P. T., Holmes, P. J., & Cohen, J. D. (2010). Do humans produce the speed-accuracy tradeoff that maximizes reward rate? Quarterly Journal of Experimental Psychology, 63, 863–891.
Brainard, D. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436.
Busemeyer, J. R., & Johnson, J. G. (2004). Computational models of decision making. Cambridge, MA: Blackwell.
Chandon, P., Hutchinson, J. W., Bradlow, E. T., & Young, S. H. (2009). Does in-store marketing work? Effects of the number and position of shelf facings on brand attention and evaluation at the point of purchase. Journal of Marketing, 73, 1–17.
Chandra, M. (2001). Statistical quality control. New York: CRC Press.
Cornelissen, F. W., Peters, E. M., & Palmer, J. (2002). The Eyelink Toolbox: Eye tracking with MATLAB and the Psychophysics Toolbox. Behavior Research Methods, Instruments, & Computers, 34(4), 613–617.
Crouzet, S., Kirchner, H., & Thorpe, S. (2010). Fast saccades toward faces: Face detection in just 100 ms. Journal of Vision, 10(4):16, 1–17.
Fischer, B., & Ramsperger, E. (1984). Human express saccades: Extremely short reaction times of goal directed eye movements. Experimental Brain Research, 57, 191–195.
Fischer, B., & Weber, H. (1993). Express saccades and visual attention. Behavioral and Brain Sciences, 16, 553–610.
Gold, J. I., & Shadlen, M. N. (2007). The neural basis of decision making. Annual Review of Neuroscience, 30, 535–574.
Kirchner, H., & Thorpe, S. J. (2006). Ultra-rapid object detection with saccadic eye movements: Visual processing speed revisited. Vision Research, 46, 1762–1776.
Knutson, B., Rick, S., Wimmer, G. E., Prelec, D., & Loewenstein, G. (2007). Neural predictors of purchases. Neuron, 53, 147–156.
Krajbich, I., Armel, C., & Rangel, A. (2010). Visual fixations and the computation and comparison of value in goal-directed choice. Nature Neuroscience, 13, 1292–1298.
Link, S. W., & Heath, R. A. (1975). A sequential theory of psychological discrimination. Psychometrika, 40, 77–105.
Litt, A., Plassmann, H., Shiv, B., & Rangel, A. (2010). Dissociating valuation and saliency signals during decision-making. Cerebral Cortex, 2, 95–102.
Milosavljevic, M., Madsen, E., Koch, C., & Rangel, A. (2011). Fast saccades towards numbers: Simple number comparisons can be made in as little as 230 ms. Journal of Vision, 11(4), article 4.
Milosavljevic, M., Malmaud, J., Huth, A., Koch, C., & Rangel, A. (2010). The Drift Diffusion Model can account for the accuracy and reaction time of value-based choices under high and low time pressure. Judgment and Decision Making, 5, 437–449.
Pieters, R., & Warlop, L. (1999). Visual attention during brand choice: The impact of time pressure and task motivation. International Journal of Research in Marketing, 16, 1–17.
Ratcliff, R. (1978). A theory of memory retrieval. Psychological Review, 85, 59–108.
Ratcliff, R., & McKoon, G. (2008). The diffusion decision model: Theory and data for two-choice decision tasks. Neural Computation, 20, 873–922.
Roberts, S. (1959). Control chart tests based on geometric moving averages. Technometrics, 1, 239–250.
Saslow, M. G. (1967). Effects of components of displacement-step stimuli upon latency for saccadic eye movement. Journal of the Optical Society of America, 57, 1024–1029.
Smith, P. L., & Ratcliff, R. (2004). Psychology and neurobiology of simple decisions. Trends in Neurosciences, 27, 161–168.
Thorpe, S., Fize, D., & Marlot, C. (1996). Speed of processing in the human visual system. Nature, 381, 520–522.
Usher, M., & McClelland, J. (2001). The time course of perceptual choice: The leaky, competing accumulator model. Psychological Review, 108, 550–592.
Vandekerckhove, J., & Tuerlinckx, F. (2007). Fitting the Ratcliff diffusion model to experimental data. Psychonomic Bulletin & Review, 14, 1011–1026.
VanRullen, R., & Thorpe, S. J. (2001a). Is it a bird? Is it a plane? Ultra-rapid visual categorisation of natural and artifactual objects. Perception, 30, 655–668.
VanRullen, R., & Thorpe, S. J. (2001b). The time course of visual processing: From early perception to decision making. Journal of Cognitive Neuroscience, 13, 454–461.
Willis, J., & Todorov, A. (2006). First impressions: Making up your mind after a 100-ms exposure to a face. Psychological Science, 17, 592–598.
Wunderlich, K., Rangel, A., & O’Doherty, J. P. (2010). Economic choices can be made using only stimulus values. Proceedings of the National Academy of Sciences, 107, 15005–15010.
Figure 1: Typical trial in Experiment 1.

Figure 2: Reaction time distributions for Experiment 1 (thick line = correct trials; thin line = error trials). Vertical line shows the mean MRT = 313 ms.

Table 1: Individual statistics for Experiment 1.

Figure 3: (A) Percentage of correct choices as a function of liking rating differences between the two food items (1 = most difficult choices, 4 = easiest choices). (B) Mean reaction time at each value distance. Error bars denote SEM.

Table 2: Order of participation for each subject and experiment.

Figure 4: Typical trial in Experiment 2.

Figure 5: Reaction time distributions for Experiment 1 (blue line) vs. Experiment 2 (green line). Thick line = correct trials; thin line = error trials. Vertical lines = the estimate of the mean Minimum Reaction Time across subjects: Experiment 1 = 313 ms; Experiment 2 = 114 ms.

Table 3: Individual statistics for Experiment 2 and Experiment 1.

Figure 6: Reaction time distributions for Experiment 3 (red line) vs. Experiment 4 (gray line). Thick lines = correct trials; thin lines = error trials. Vertical lines = the mean Minimum Reaction Time across subjects: Experiment 3 = 365 ms; Experiment 4 = 418 ms.

Table 4: Individual statistics for Experiment 3.

Table 5: Individual statistics for Experiment 4.

Figure 7: Experiments 3 (dashed line) and 4 (solid line). (A) Percentage of correct trials as a function of liking rating differences between the two food items (1 = most difficult choices, 4 = easiest choices). (B) Mean reaction times for each value distance. Error bars denote SEM.

Figure 8: (A) Comparison of Minimum RTs and Mean RTs in all experiments: Experiment 1 = Speed; Experiment 2 = Control; Experiment 3 = Conf; Experiment 4 = Manual. (B) Mean accuracy in all experiments. Error bars represent SEM.