
New designs for research in delay discounting

Published online by Cambridge University Press:  01 January 2023

John R. Doyle*
Affiliation:
Cardiff Business School, Aberconway Building, Colum Drive, Cardiff University, Cardiff, UK. CF10 3EU
Catherine H. Chen
Affiliation:
Department of Accounting and Finance, Middlesex University
Krishna Savani
Affiliation:
Graduate School of Business, Columbia University

Abstract

The two most influential models in delay discounting research have been the exponential (E) and hyperbolic (H) models. We develop a new methodology to design binary choice questions such that exponential and hyperbolic discount rates can be purposefully manipulated to make their rate parameters orthogonal (Pearson’s R = 0), negatively correlated (R = –1), positively correlated (R = +1), or to hold one rate constant while allowing the other to vary. Then we extend the method to similarly contrast different versions of the hyperboloid model. The arithmetic discounting model (A), which is based on differences between present and future rewards rather than their ratios, may easily be made orthogonal to any other pair of models. Our procedure makes it possible to design choice stimuli that precisely vary the relationship between different discount rates. However, the additional control over the correlation between different discount rate parameters may require the researcher to either restrict the range that those rate parameters can take, or to expand the range of times the participant must wait for future rewards.

Type
Research Article
Creative Commons
The authors license this article under the terms of the Creative Commons Attribution 3.0 License.
Copyright
Copyright © The Authors 2011. This is an Open Access article, distributed under the terms of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

1 Introduction

With few exceptions, people prefer to receive a reward now rather than at a later date; people also prefer larger rewards to smaller rewards. But when a choice is offered between a smaller reward now and a larger reward later, these two "rules" of behavior conflict, and people must find some way of resolving the conflict to make their choice. Firms, too, must choose between outcomes occurring now or at different times in the future. Classical economics, accountancy, and finance all agree on a single normative method by which this should be done, which is to value all future gains in terms of their present value. If a sum of money P will grow to a future amount F following a risk-free process of continuously compounded interest, then P is said to be the present value of that future payment, and F is said to be discounted to P. As continuous compounding is an exponential model of growth, the normative model is known as exponential discounting.

However, people consistently depart from this normative model in two distinct ways. First, people are not exponential discounters: they discount more heavily than the exponential model in the short term, and/or less heavily in the long term. An important implication of this "decreasing impatience" is that people make inconsistent preference reversals simply due to the passage of time. Researchers have suggested that the form of people's behavioral discounting is better modeled by hyperbolic discounting (Mazur, 1987), whose mathematics reflects the process of simple interest (Rachlin, 2006). The second departure is that people are far more likely to choose present payoffs over future payoffs than they should. Seen through the lens of the exponential model, this "impulsivity" in favor of the present reward P implies that people demand a wildly high rate of interest before they will choose a future reward F, often an order of magnitude greater than any bank would offer. Therefore, the form of people's discounting is non-exponential, and the size of their discounting is unreasonable.

It has also been found that the magnitudes of P and F, not just their ratio, may affect people's choices (Green, Myerson, & McFadden, 1997; Kirby, Petry, & Bickel, 1999). This finding is implied by Killeen's (2009) additive utility model of discounting, and by arithmetic discounting (Doyle & Chen, 2010), a special case of Killeen's model in which the underlying behavioral model rests on the analogy of the excess wages required for waiting (for F), rather than on analogies of simple or compound interest.

Using simulation studies, Navarro, Pitt, and Myung (2004) have shown how psychological models may be difficult to distinguish from each other, so that deciding which model better captures people's decision making remains equivocal despite the scrutiny of many empirical studies. Exponential and hyperbolic models of delay discounting could be taken as one such intrinsically confusable pair: after all, how distinguishable can models of simple and compound interest really be? In Section 3, our analysis of two frequently used research designs seems to support this view, in that the rate parameters for three different delay discounting models are all highly inter-correlated.

Interestingly, Navarro et al. suggest that a good way to improve the separation between models is to improve the experimental design, which they achieve in their simulations of information integration models by noting that the models make different predictions if trials are added in which only visual or only auditory stimuli are presented. A similarly purposeful approach is evident in Glöckner and Betsch (2008), who search for decision tasks that are "diagnostic" between competing models of risky choice. In the same spirit, the present article develops tools by which the separation between different models of delay discounting may be improved, to the point that designs can be constructed in which exponential and hyperbolic discounting make opposite predictions (see Sections 4 and 5). Navarro, Myung, Pitt, and Kim (2002) see part of their research agenda as encouraging researchers to explore the landscape of their "favorite computational model of cognition". Similarly, our tools help reveal features of the landscape of delay discounting, showing not only what is possible, but also some of the compromises that researchers must make in separating models of delay discounting.

2 Three discounting models

The mathematics of the three simple discounting models that we examine first is given in Table 1: exponential (E), hyperbolic (H), and arithmetic (A). Each model is written from three different points of focus: (1) the underlying financial model, describing how P grows to F over time T; (2) the inverse process, describing how F is discounted to give P; and (3) the model-specific rate parameter implied by a choice between receiving P now and receiving F at time T. For the exponential model, the rate parameter r is the rate of continuously compounded interest that would take an investment of $P up to $F in time T; for the hyperbolic model, the rate parameter h is the rate of simple interest required to do the same; for the arithmetic model, the rate parameter d is the rate of pay over the period T that would yield an income equal to the difference between F and P. In plain words, the decision maker (DM) will choose F only if the rate parameter exceeds an internally held criterion: a satisfactory rate of compound interest (r0 if E is assumed) or of simple interest (h0 if H is assumed) to compensate for the wait for F, or likewise an adequate compensatory rate of pay (d0 if A is assumed). Naïve DMs might be just as likely to think in terms of rate parameters as in terms of discount factors in forming their choices between F and P. Indeed, Rubinstein (2003) has pointed out that discounting to net present values is a very recent invention in accounting, and therefore not at all obvious to the naïve DM. Finally, note that the rate parameters r and h are monotonically related.[1]

Table 1: Formulae for exponential, hyperbolic and arithmetic models of delay discounting.
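The body of Table 1 is not reproduced in this version of the text; its nine formulas can be recovered from the definitions above, Footnote 1, and the later uses of E1–E3, H1–H3, and A1–A3. The following is our reconstruction under those assumptions, not the original typesetting:

```latex
% Reconstruction of Table 1, inferred from the surrounding text and
% Footnote 1: growth of P to F, discounting of F to P, and the implied
% rate parameter for a choice between P now and F at time T.
\begin{array}{llll}
 & \text{Growth} & \text{Discounting} & \text{Rate parameter} \\
\text{Exponential (E)} & F = P\,e^{rT}\ \text{(E1)} & P = F\,e^{-rT}\ \text{(E2)} & r = \ln(F/P)/T\ \text{(E3)} \\
\text{Hyperbolic (H)}  & F = P(1+hT)\ \text{(H1)}   & P = F/(1+hT)\ \text{(H2)}  & h = (F/P-1)/T\ \text{(H3)} \\
\text{Arithmetic (A)}  & F = P + dT\ \text{(A1)}    & P = F - dT\ \text{(A2)}    & d = (F-P)/T\ \text{(A3)}
\end{array}
```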

3 Current practice

If people are not exponential discounters, what kind of discounters are they? To answer this and related questions, it would certainly help if the predictions of the different models could be tested independently, such as by independently manipulating the r, h, and d rates implied by a pair of present and future payoffs. If {P, F, T} is a choice problem from which we calculate {r, h, d}, one approach is to construct a set of {P, F, T} and hope that the derived {r, h, d} have the right properties. Our approach is to reverse this process. We first design a set of {r, h, d} that have exactly the desired correlational properties. We then determine what values of {P, F, T} are consistent with {r, h, d}. For instance, in the worked example shown in Section 4, we make d orthogonal to r and h, which themselves are designed to be negatively correlated: R(r,h) = –1. At the other extreme, in Appendix B, r and h are designed to be perfectly positively correlated: R(r,h) = +1.

To appreciate how our approach differs from current practice, we describe two commonly used designs: that of Rachlin, Raineri, and Cross (1991), and that of Kirby et al. (1999). In Rachlin et al., a single F = $1000 is used, combined with seven levels of T (1 month, 6 months, 1, 5, 10, 25, 50 years). At each T, participants have to choose between the fixed F = $1000 and successively smaller Ps (in $: 1000, 990, 980, 960, 940, 920, 900, 850, 800, 750, 700, 650, 600, 550, 500, 450, 400, 350, 300, 250, 200, 150, 100, 80, 60, 40, 20, 10, 5, or 1). The procedure is then repeated in ascending order. In a sense, this procedure lies somewhere between binary choice and guided matching; but treating each question as a separate choice, we can see that the 7 levels of time are crossed with the 30 levels of money, and thus time and money are orthogonal. In theory, the maximum number of questions asked is 2 x 7 x 30 (= 420), though if someone states that they prefer P when P = $850, but F when P = $800, there is no need to go on asking about values of P less than $800. Nonetheless, given that intertemporal choice has been used extensively in measuring impulsivity among many kinds of addicts, the questionnaire places a considerable burden on the participant, who may be poorly disposed to sitting still for that length of time.
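The inter-correlations that Table 2 (below) reports for this design can be checked directly from the stimuli just listed. A minimal sketch, assuming Python with NumPy is available (the exact figures may differ slightly depending on how the trivial P = $1000 items are treated):

```python
# Minimal sketch: rebuild Rachlin et al.'s grid (F = $1000; 7 delays
# crossed with 30 levels of P) and inspect the correlations among the
# implied rate parameters r, h, and d.
import numpy as np

F = 1000.0
T_years = np.array([1/12, 0.5, 1, 5, 10, 25, 50])        # the seven delays
P_levels = np.array([1000, 990, 980, 960, 940, 920, 900, 850, 800, 750,
                     700, 650, 600, 550, 500, 450, 400, 350, 300, 250,
                     200, 150, 100, 80, 60, 40, 20, 10, 5, 1], dtype=float)

T, P = np.meshgrid(T_years, P_levels)   # full crossing of time and money
T, P = T.ravel(), P.ravel()

r = np.log(F / P) / T                   # E3
h = (F / P - 1.0) / T                   # H3
d = (F - P) / T                         # A3
print(np.corrcoef([r, h, d]))           # far from orthogonal, echoing Table 2
```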

The second design we examine is due to Kirby et al. (1999), who developed an instrument in which 27 questions were asked (see also Kirby & Marakovic, 1996, for a similar 21-question instrument). The design of these questions is apparent from the scatterplot in Figure 1: they are sampled at equal intervals of log(r) and approximately equal intervals of log(d). We have plotted logged rate parameters because the scatterplot of raw d against raw r follows a classic heteroskedastic fan shape, with half the (r, d) points forming an indistinct and uninformative smudge near the origin.

Figure 1: Scatterplot of stimuli used in Kirby et al. (1999). Rate parameters d and r are for the arithmetic and exponential models, respectively.

Despite the fact that Rachlin et al.'s design orthogonalizes[2] money and time, Table 2 shows that it does not thereby orthogonalize the discounting models; nor does Kirby et al.'s. Of course, it was not those researchers' intention to orthogonalize r and h, and their designs were tailored to their own research purposes. However, if another researcher did want to design a study in which it was important to control or manipulate the degree of correlation between r and h[3] (e.g., for testing whether training in economics or finance makes people's choices more consistent with r and less with h), the literature offers few clues about how to do so, or about what hidden costs there may be. The temptation would then be to reach for a ready-made questionnaire from the literature, such as the two described. By contrast, the approach described in this paper shows how the researcher may design binary choice questionnaires that control the degree of correlation between r, h, and d. It requires only a knowledge of spreadsheets and basic algebra (logarithms and powers) to implement.

Table 2: Above, the correlation matrix of rate parameters is for the stimuli in Rachlin et al.’s (1991) design. Below, the correlation matrix is for the stimuli in Kirby et al.’s (1999) design, using logged rate parameters because of extreme heteroskedasticity in the raw versions of r, h, and d.

4 More flexible designs

4.1 Finding T and F/P

Suppose we wish to design a tiny four-item questionnaire in which the values of h and r are perfectly negatively correlated, as in the left-hand columns of Table 3, and d is orthogonal to both r and h.[4] We will use the notation /–1, 0, 0/ to denote the values that the design requires for the three correlations R(r,h), R(r,d), and R(h,d), respectively. It is crucial to note that, although the paper uses this design in its examples, it is not this particular design that is important. A researcher might wish to implement a design where r and h were perfectly positively correlated instead, or had some idiosyncratic relationship with each other. What is important is that we can design at all. Nonetheless, designing negatively correlated r and h may not only have its uses; tackling such an extreme problem also raises issues that might otherwise have been neglected. The method shows how to find values of {P, F, T} that will implement whatever (r, h) combinations have been chosen.

Table 3: Starting from a desired set of rate parameters for exponential and hyperbolic models (r, h), corresponding T values are found, and hence F/P. Then Ps are chosen, which with the known F/P ratios determine the Fs and hence the ds. An extended version of this table is in the Appendix A.

First, using relationships E3 and H3 in Table 1, we derive:

(1)  e^{rT} = 1 + hT

We then select a set of {r, h} pairs with the desired property, in this case perfect negative correlation, and for each pair in turn we solve for T.

To illustrate with choice 1, r = .0050 and h = .0055 (note that, for T > 0, r < h is a mathematical requirement); to find T, we must solve:

(2)  e^{.0050 T} = 1 + .0055 T

No closed-form solutions exist for such equations. However, for those with programming skills, a program can easily be written to find an iterative solution. For those without, a few lines of Excel provide an excellent alternative. Let:

(3)  ε = (e^{rT} − (1 + hT))²

If a trial value for T is in cell A1, and ε is in A2, then A2 will contain the calculation:

(4)  =(EXP(0.0050*A1)-(1+0.0055*A1))^2

We can substitute in values of T by trial and error, with the aim of driving ε as close to zero as possible. Alternatively, we can let Excel's Solver do it for us, by minimizing the contents of cell A2 while changing the contents of cell A1. Since T=0 is always a solution of ε = 0, it may be necessary to choose initial values of T slightly larger than one anticipates the solution to be, in order to avoid the Solver finding the T=0 solution. Starting the Solver near the solution may also help; this is easily achieved by graphing ε for different values of T substituted into (4), as in Figure 2. Clearly, the solution of ε = 0 is when T is a bit below 40 (there is also an uninteresting solution at T=0).

Figure 2: Finding an approximate solution to ε = 0 in equation (4).

Once we know T for choice 1 (the Solver tells us it is 37.54), we can determine F/P (= 1 + hT). For choice 1, it is just: 1 + (.0055)(37.54) = 1.206.
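For readers who prefer code to a spreadsheet, the same root-finding step can be scripted. The following is a minimal sketch, assuming Python with SciPy is available (the paper itself uses Excel's Solver); solve_T is our own illustrative helper, not a function from the paper.

```python
# Minimal sketch of the Solver step: find the nonzero T solving
# e^{rT} = 1 + hT, then F/P = 1 + hT. The values r = .0050 and
# h = .0055 are choice 1 from Table 3.
import math
from scipy.optimize import brentq

def solve_T(r, h, lo=1e-6, hi=1e4):
    """Nonzero root of eps(T) = exp(r*T) - (1 + h*T); requires r < h."""
    eps = lambda T: math.exp(r * T) - (1.0 + h * T)
    # eps < 0 just above the trivial root T = 0 (because r < h), and
    # eps > 0 for large T, so [lo, hi] brackets the root we want.
    return brentq(eps, lo, hi)

T = solve_T(0.0050, 0.0055)
print(round(T, 2), round(1 + 0.0055 * T, 3))   # ~37.54 and ~1.206
```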

4.2 Deriving Ps and Fs

4.2.1 Designing for d

The next step is to give P a specific value, which then fixes F (because F/P is known) and consequently fixes the arithmetic discounting parameter d too. In Table 3, the Ts and F/P ratios have first been calculated, then Ps have been generated to ensure that d has zero correlation with r, and hence with h. Given that the expected value of this correlation under random values of P is zero, this is not too difficult, and can be done either by trial and error or using the Excel Solver (see the example in Section 5.2). Indeed, a variety of correlational patterns between r, h, and d may be constructed using the Solver. Using the notation first mentioned in Section 4.1, the following four values of P implement the correlational design /–1, +.5, –.5/: 88.57, 193.86, 87.39, 18.79; and the following four Ps give /–1, –.5, +.5/: 11.71, 227.80, 146.71, 129.70. It is also possible to hold d constant (= d_fixed). Let α = F/P, which we have calculated for a particular choice, so that F = αP. Then, substituting for F in equation A3 (Table 1), we have d_fixed = (αP – P)/T; hence P = d_fixed T/(α – 1) and F = αP. Arbitrarily choosing d_fixed = 0.5, the following values of P are calculated for choices 1–4: 91.12, 83.29, 76.93, 71.42. Similarly, it is possible to derive Ps and Fs consistent with a constant value of (F – P).
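This step, too, can be scripted rather than Solved. Below is a minimal sketch in Python, assuming NumPy and SciPy; the r, T, and F/P values are approximate reconstructions of the Table 3 design, recovered from the worked example in this section and the questions in Section 4.2.2, not quoted from the table itself. The optimizer simply searches for positive Ps that drive |R(d, r)| to zero.

```python
# Minimal sketch: choose Ps so that d = (F - P)/T is uncorrelated with r.
# The r, T, and F/P arrays are approximate reconstructions of Table 3.
import numpy as np
from scipy.optimize import minimize

r     = np.array([0.0050, 0.0045, 0.0040, 0.0035])  # designed, decreasing
T     = np.array([37.54, 122.26, 225.86, 358.98])   # solved as in Section 4.1
ratio = np.array([1.206, 1.734, 2.468, 3.513])      # F/P = 1 + hT

def abs_corr_d_r(x):
    P = np.exp(x)                       # work in log-space so P stays positive
    d = (ratio * P - P) / T             # A3: d = (F - P)/T with F = (F/P)*P
    return abs(np.corrcoef(d, r)[0, 1])

rng = np.random.default_rng(1)
x0 = np.log(rng.uniform(10.0, 200.0, size=4))       # random starting Ps
res = minimize(abs_corr_d_r, x0, method="Nelder-Mead")
P = np.exp(res.x)
print(round(res.fun, 6), np.round(P, 2))  # ~0: d orthogonal to r (and hence h)
```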

4.2.2 Units of measurement

The final step is to give all Ps and Fs some particular unit of measurement, and all Ts a particular unit of time. These could be dollars, cents, euros, thousands of dollars, or even fractions of dollars; days, weeks, fractions of months, etc. It is here that the researcher must exercise judgment about factors that lie beyond the mathematical method itself: for instance, how likely people are to choose P versus F (discussed further in Section 5.3), or how practical the Ps, Fs, and Ts are, if real rewards at real delays are to motivate participants. Apart from the effects of rounding, the resulting four choice questions (rounded to the nearest dollar) still maintain the correlational structure /–1, 0, 0/:

  • Would you prefer $217 now, or $261 in 80 days’ time?

  • Would you prefer $727 now, or $1260 in 262 days’ time?

  • Would you prefer $405 now, or $1000 in 483 days’ time?

  • Would you prefer $252 now, or $886 in 768 days’ time?

Looking at the r and h values in Table 3, and the questions we derive from them: if somebody uses hyperbolic discounting, then the probability of their choosing F should increase across choices 1 through 4 (because h increases across those choices); if instead somebody uses exponential discounting, the probability of their choosing F should decrease across choices 1 through 4 (because r decreases).[5] Note also that the set of four questions we actually constructed (those listed above) is just one of an indefinitely large set of alternative possibilities. The scalings we used were 1 money unit = $4.01 for Ps and Fs, and 1 time unit = 2.14 days for T.[6]

It is apparent that the positive collinearity between r and h that one meets in binary choice questionnaires is not a necessity. Beyond the mathematical requirement that r < h for T > 0, r and h can be constructed to have a variety of relationships with each other over a set of choice questions: from positively correlated, to orthogonal, to perfectly negatively correlated, as in this example. Any of the rate parameters r, h, or d may be held constant while the others vary. It is as simple as: (i) writing down the desired (r, h) pairs; (ii) for each pair, Excel-Solving to find T; (iii) calculating F/P for each pair; and (iv) choosing a P for each choice and calculating the corresponding F (or vice versa).

Finally, given a practical set of Ps and Fs, one guideline is to choose units of time for T that would lead to approximately equal numbers of P and F choices. For instance, if T is quoted in milliseconds, then presumably everybody will be willing to wait a few (hundred) milliseconds to receive the larger F; but if T is quoted in decades, then presumably everybody will choose P instead (375 years is a long time to wait for anything). Somewhere between these extremes there are units of T (be they days, weeks, fractions of days or years) that will lead to P receiving about half the choices and F the other half. However, unless an adequately large range of r and h is sampled, researchers may still find that some participants choose just one of P or F for a large majority of the choice pairs. In that case the even split of P and F choices would be achieved only at the aggregate level, and never at the individual level. We return to this issue in Section 5.3.

Changing units of time effectively changes the rate parameters r and h, which are expressed as proportions (of increase) per unit of time (e.g., 5% per annum). So it is not surprising that decreasing the unit of time (e.g., from days to hours) can shift people's choices from P to F, because doing so effectively increases the r and h in that choice 24-fold. More subtle are the effects of changing units of money, because of the "magnitude effect": it has been found that people's required rate of interest, or subjective discount rate, decreases with increasing size of rewards (Green et al., 1997). Changing the units of measurement in the four questions above from dollars to cents should induce a shift towards P; changing the units from dollars to thousands of dollars should induce a shift towards F, and it seems intuitively plausible that this would happen. Therefore, someone's r0 or h0 is not fixed. It follows that an equal split between P and F can be achieved not just by manipulating the questionnaire interest rates via scaling T, but also by scaling P and F, without changing the questionnaire interest rates at all. Clearly, even when the mathematics of the modeling is complete, the researcher still has matters of art to confront.

5 Extensions and limitations

5.1 Bounding time

These basic design principles may be extended in a number of different ways. Rachlin et al. (1991) used a time range of 1 month to 50 years (a max/min ratio of 600), while Green, Fry, and Myerson (1994) used a time range of 1 week to 25 years (a max/min ratio of 1300). However, researchers may wish to examine choice within a more restricted time range, particularly if real rewards are to be delivered within a credible time frame. Let us examine, for instance, T in [7, 30], with a max/min ratio of just 4.3 (these values can be interpreted, for example, as delays ranging from 7 days to 30 days). To find out what the feasible region looks like, we can plug the value T=7 into equation (1) and sketch the curve of h against r (see Figure 3). Below this line all (r, h) points are infeasible, because they give rise to solutions of equation (1) in which T < 7. Similarly, we sketch the curve for T=30, above which (r, h) points are infeasible, implying as they do that T > 30. Inside the trumpet-like shape are the (r, h) points that are consistent with the time constraints. One is then free to sample from the within-trumpet region according to the desired criterion. For instance, points sampled uniformly from within the circle will be orthogonal; points within the ellipse will be negatively correlated. Points on a vertical line would hold r constant while varying h, and vice versa for points on a horizontal line.

Figure 3: Feasible and infeasible regions of (h, r) space, given choices are limited to T in [7, 30].
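The boundary curves are easy to compute directly: rearranging equation (1), as in Footnote 1, gives h as a function of r for fixed T. A minimal sketch, assuming NumPy; the feasible helper is our own illustrative function, not part of the paper.

```python
# Minimal sketch: the T = 7 and T = 30 curves h(r; T) = (exp(rT) - 1)/T
# bound the feasible (r, h) region of Figure 3 (Footnote 1 gives this form).
import numpy as np

r = np.linspace(0.001, 0.05, 200)
h_at_T7  = (np.exp(r * 7.0)  - 1.0) / 7.0    # below this curve: T < 7
h_at_T30 = (np.exp(r * 30.0) - 1.0) / 30.0   # above this curve: T > 30

def feasible(r_pt, h_pt, t_min=7.0, t_max=30.0):
    """True if (r, h) implies a solution T of equation (1) in [t_min, t_max]."""
    lo = (np.exp(r_pt * t_min) - 1.0) / t_min
    hi = (np.exp(r_pt * t_max) - 1.0) / t_max
    return lo <= h_pt <= hi

print(feasible(0.01, 0.0115))   # e.g., check one candidate design point
```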

It is also possible to work in terms of log(r) and log(h), rather than r and h themselves. The logged versions may be constructed to be orthogonal, or negatively correlated, and so on, as follows. First, design them to have the required relationship by writing them down. Then calculate the corresponding r and h values by exponentiating. Once they are in this form, proceed as in Table 3 and Section 4. Other transformations of r and h can be treated in the same way.

5.2 Hyperboloid models—a worked example

The same method works for other models, not just E and H. Here is an example, worked through in detail using most of what has been developed so far in this paper. A general hyperboloid model with subjective time and money, as examined in Doyle and Chen (2010), is:

(5)  (F/P)^m = 1 + h T^τ

It nests the hyperbolic model when m = τ = 1; it nests Myerson and Green’s (1995) hyperboloid model when τ = 1; and it nests Rachlin’s (2006) version of the hyperboloid when m = 1. The following form will also be useful:

(6)  F/P = (1 + h T^τ)^{1/m}

Suppose we wish to generate negatively correlated rate parameters h1 and h2 for two such models, where hyperboloid model H1 has (m, τ) = (.2, .4), and model H2 has (m, τ) = (.9, .7). Suppose also that we wish to confine our design to a max/min ratio for T of 100.

The complete method involves three distinct phases. First, using the method outlined in Section 5.1, we can generate a plot of (h1, h2) for T = 1, and a second plot of (h1, h2) for T = 100, and hence determine the feasible region from which (h1, h2) pairs may be sampled or designed. The second phase is to determine Ts and F/Ps for each of these designed (h1, h2) pairs, just as in Section 4.1. The third phase is to determine actual Ps and hence Fs, to meet some additional criterion such as orthogonalizing d, as in Section 4.2. Each of these phases is broken down into simple steps.

5.2.1 Defining the feasible region for (h1, h2) and designing (h1, h2) pairs

Step 1. Select a range for h1. We use [.017, .023] in increments of .001.

Step 2. Using T=1, calculate the F/P ratio at each level of h1, using equation (6): F/P = (1 + h1 T^.4)^{1/.2}.

Step 3. Use equation (5) to determine h2 for each F/P calculated in Step 2.

Step 4. Plot this set of (h1, h2) pairs as the constraint for T=1, as in Figure 4.

Figure 4: Feasible region (between the constraint curves) for hyperboloid design, and (h1, h2) points chosen (filled circles).

Step 5. Repeat Steps 2 and 3 for each level of h1, but using T=100.

Step 6. Plot this new set of (h1, h2) pairs as the constraint for T=100.[7]

Step 7. Choose (h1, h2) pairs from within the feasible region to suit the design requirement. Here we select h1 and h2 so that they have a correlation of –1, as shown by the filled circles; these values can be obtained by interpolation, or by reading them off graph paper.
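Steps 1–6 can likewise be scripted. A minimal sketch, assuming NumPy and using the reconstructed forms of equations (5) and (6); printing the first point recovers the (h1, h2) = (.017, .0788) end-point quoted in Step 9 below.

```python
# Minimal sketch of Steps 1-6: trace the T = 1 and T = 100 constraint curves
# in (h1, h2) space for hyperboloids with (m, tau) = (.2, .4) and (.9, .7).
import numpy as np

m1, tau1 = 0.2, 0.4
m2, tau2 = 0.9, 0.7

def h2_given_h1(h1, T):
    ratio = (1.0 + h1 * T**tau1) ** (1.0 / m1)  # eq. (6): F/P under model H1
    return (ratio**m2 - 1.0) / T**tau2          # eq. (5): the h2 matching it

h1 = np.arange(0.017, 0.0231, 0.001)            # Step 1's range and increment
curve_T1   = h2_given_h1(h1, 1.0)               # Step 4: constraint for T = 1
curve_T100 = h2_given_h1(h1, 100.0)             # Step 6: constraint for T = 100
print(round(curve_T1[0], 4))                    # 0.0788, as quoted in Step 9
```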

5.2.2 Finding T and F/P

Step 8. Having generated (h1, h2) pairs that are negatively correlated, we then equate the (F/P)s arising from each model for each given (h1, h2) pair. To illustrate with choice 1 in Table 4, we want to solve for T such that:

(7)  (1 + h1 T^.4)^{1/.2} = (1 + h2 T^.7)^{1/.9}, with h1 = .017 and h2 = .0788

Table 4: Designing rate parameters h1 and h2 with R(h1, h2) = -1 for two versions of the hyperboloid, where (m, τ ) = (.2, .4) and (.9, .7). The first set of Ps and Fs were constructed so that d is orthogonal to h1 and h2. The second set ensure that F-P is orthogonal to h1 and h2.

This is the optimization problem analogous to that described in equation (2) for E and H. In Excel, once again presuming that T is in cell A1, cell A2 will contain the calculation:

(8)  =((1+0.017*A1^0.4)^(1/0.2)-(1+0.0788*A1^0.7)^(1/0.9))^2

This is the equation for ε, and is the exact analog of equation (4).

Step 9. Once again we use Excel's Solver to "minimize the contents of cell A2 by changing the contents of cell A1". In so doing, it will have solved equation (7) for T. The solution is T = 1.00. This should be no surprise, because we already know that (.017, .0788), being an end-point of our chord across the feasibility region in Figure 4, lies on the T=1 constraint curve.

Step 10. Substituting in either side of (7) gives F/P = 1.088, as shown in Table 4.

Step 11. Repeat Steps 8, 9, and 10 for choice 2 (this time T turns out to be 1.749), and so on for choices 3–7.
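The root-finding in Steps 8–9 can be done outside Excel in the same way as in Section 4.1. A minimal sketch, assuming SciPy; solve_T is again our own illustrative helper, and, as with the Solver, the bracket must straddle the root one wants (T = 0 is always a trivial solution, and a second crossing can exist at very large T).

```python
# Minimal sketch of Steps 8-9: solve equation (7),
# (1 + h1*T^0.4)^(1/0.2) = (1 + h2*T^0.7)^(1/0.9), for the nonzero T.
from scipy.optimize import brentq

def solve_T(h1, h2, lo=1e-6, hi=100.0):
    """Bracketed root of the F/P-matching condition; choose [lo, hi] so it
    straddles the root of interest."""
    eps = lambda T: ((1 + h1 * T**0.4) ** (1 / 0.2)
                     - (1 + h2 * T**0.7) ** (1 / 0.9))
    return brentq(eps, lo, hi)

T = solve_T(0.017, 0.0788)                               # choice 1 in Table 4
print(round(T, 2), round((1 + 0.017 * T**0.4) ** 5, 3))  # ~1.0 and ~1.088
```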

5.2.3 Choosing Ps

To finish the process, we use the Excel Solver to generate Ps that give a correlation of zero between d and h1, and thus between d and h2. To ring the changes, we also generate an entirely different set of Ps that make (F–P) orthogonal to both h1 and h2. The following steps assume we are orthogonalizing for d.

Step 12. Start with a randomly chosen set of Ps, perhaps using Excel's RAND() function, and calculate the Fs, ds, and (F–P)s as Excel formulae that depend on the P values. An alternative is to choose the Fs at random and calculate the Ps from the known F/P ratios.[8]

Step 13. Suppose the absolute value of the correlation between d and h1 is in cell k10, and the seven Ps are in cells f1:f7; then, in Solver jargon: set target cell k10 equal to Min, by changing cells f1:f7. Note that it may be necessary to add constraints to ensure that P > 0.

Step 14. For presentational purposes, within each set of Ps we have scaled the Ps so that min(P) = 1, with the Fs scaled accordingly. To give appropriate units of measurement to money, the researcher may rescale the Ps and Fs, which then rescales both d and (F–P) by that scaling factor, though not h1 or h2 (in equation 5, and assuming a scaling factor of k, kF/kP = F/P). T was already constructed so that min(T) = 1, but the researcher may also rescale T; doing so rescales d by the reciprocal of that scaling factor, leaves (F–P) unchanged, and rescales h1 and h2 by the reciprocal of the scaling factor raised to the power of the respective τ (in equation 5, h varies as 1/T^τ). As already stated, the rationale for choosing units of measurement lies beyond the model itself.

5.3 Limitations—the range of r and h

By adjusting the units of T, it is possible to obtain {P, F, T} combinations such that about half of all choices are to accept P and half are to accept F. However, this holds only in the aggregate, and not necessarily for any particular individual. One extreme case would be if half the participants chose all P, and half chose all F; the other extreme would be if every participant chose half P and half F. In the former case we could not tell whether any given person was an exponential or a hyperbolic discounter, because they are all off the scale. In general, the narrower the range of r and h, the more likely people are to be off the scale. The range of r and h that researchers use should extend beyond the range of the internal r0 and h0 of the group of people in the study.[9] Suppose also that the researcher has limits on the permissible T, either explicit or implicit (e.g., T = 375 years is not acceptable); then, referring to Figure 5, the following issues emerge.

Figure 5: Trade-offs implicit in three experimental designs (models E and H assumed).

Presume that the median r0 and h0 of a group of people are located at the intersection of ellipse A and ellipse B. Then, choosing r and h from ellipse B represents the kind of design that the method outlined in this paper generates. Through empirical study it is possible to find the right level of r and h such that 50% of choices will be of P; hence ellipse B can be correctly centered to coincide with the median of individuals' internal criteria r0, h0. However, in B the range of r and h is small, meaning that the design may suffer from any given person choosing either all P or all F. The sample of r and h represented by ellipse A is also centered at the correct location, but has a larger range of r and h than B, which means it is less likely to suffer from the all-P / all-F problem; however, its drawback is that r and h are positively correlated. Finally, ellipse C has the same negative correlation between r and h as B, but a much larger range than B; however, it is not centered over the distribution of people's criterion r0 and h0. Each of A, B, and C meets only two of the three design criteria: (i) r and h are correctly centered; (ii) r and h span a sufficient range to match the range of individual r0 and h0; (iii) r and h are not positively correlated.

To extend the range of r and h in B we could stretch apart the sides of the trumpet, but this can be achieved only by increasing the permissible range of T. Alternatively, one can sample from within one or more additional ellipses that are parallel to B, but lie up-range or down-range, a possibility suggested by the black ellipse. If someone chooses all F for problems sampled from B because his or her internal criterion rate parameter(s) lie below those present in B, the problems found in the black ellipse may be able to span his or her criterion. However, the more of these parallel regions one constructs, the greater the overall correlation between r and h (though it remains negative within each sub-sample); also, the formerly slender questionnaire may become quite bloated. If a computer controls the experiment, it may be possible first to locate someone's r0, h0 approximately, using a few problems strata-sampled from the gray ellipse A, and then present a tailored form of B that spans that person's r0, h0. Finally, the researcher who wishes to preserve the range of (r, h) values shown in A may still be able to reduce the positive correlation between them by making A's ellipse fatter.[10]

Although these arguments have been presented for E and H, similar arguments and design issues apply to hyperboloid and other model comparisons. As we have seen, this paper provides a map for designing binary choice problems, but it is not a cure-all. The researcher still has to do the hard yards, making the sometimes difficult judgments about what, and how much, to trade off.

Researchers interested in using the above procedure might be concerned about obtaining {P, F, T} combinations such that the amounts and time periods are feasible to implement with real payoffs and real delays. Instead of generating specific values of P and F, the above procedure produces an F/P ratio, which can easily be scaled according to the monetary units that the researcher can feasibly implement. Further, while the above procedure does produce specific values of T, the values generated can be scaled according to feasibility. For example, if researchers generated Ts ranging from 1 to 100 for a particular set of choice options (as in Table 4), one unit of T could be equated with one day. If an extreme value of T is generated that is impossible to scale to a reasonable level, then researchers can adjust the combination of discount rates that produced the extreme value to make it more reasonable. Therefore, we believe that in most cases it should be possible to generate feasible sets of {P, F, T} by making minimal alterations to the range of discount rates selected.

6 Conclusions

In sum, we present a method of generating choice stimuli that precisely manipulates the relationship between different discount rate parameters. Our methodological innovation permits researchers to explore novel questions that would otherwise be difficult to address. For example, researchers can now test whether certain types of experience or training (e.g., an education in economics or accounting) make people more likely to use exponential discounting and less likely to use hyperbolic or arithmetic discounting in their intertemporal choices; which discount rate correlates best with the quality of people's real-life financial decisions (e.g., amount of credit card debt); and whether addiction or neurological impairment has more detrimental effects on, say, hyperbolic discounting than on arithmetic discounting. The ability to precisely manipulate different discount rates, however, comes with certain limitations that researchers need to be mindful of.

Appendix A

The following table is an extended version of Table 3. Two sets of ten P and F have been generated. For each set, the mean P is 100, and the ds are orthogonal to r and h. Furthermore, the two sets of ds have been generated orthogonal to each other. The ratio of max/min T is 240, which lies within the ratios and maxima that have been used before in the literature: for instance, Green et al. (1994) used Ts in the range 1 week through 25 years (ratio = 1300); Rachlin et al. (1991) used Ts in the range 1 month through 50 years (ratio = 600). If T is measured in units of days, then 9028 days is just short of 25 years. If T is measured in minutes then 9028 minutes is about six and a quarter days.

Appendix B

Design pattern /+1, 0, 0/, in which r and h are perfectly positively correlated, but d is orthogonal to both. As in Appendix A, mean P = 100. The max/min T ratio is 260, and 2512.862 days is nearly 7 years. Such a design might be used to investigate the magnitude effect by varying money units: e.g., 1 unit = $0.1; 1 unit = $1; 1 unit = $5, and so on. Note also that the max/min T ratio for the last 10 choices, which correspond to the range of r used in Appendix A, is 67. Note that the need for a wide range of T can occur in designs for which R(r, h) = +1, as well as in designs for which R(r, h) = –1.

Footnotes

We thank two anonymous reviewers for their insightful comments.

1 Eliminating F/P in E3 and H3, we get h = (e^{rT} − 1)/T. Therefore, as r increases or decreases, so does h. Alternatively, r = log(1 + hT)/T. Therefore, as h increases or decreases, so does r.

2 When the correlation between variables x and y is zero they are said to be orthogonal. Hence to orthogonalize x and y is to ensure that the correlation between them is zero. In statistical inference, orthogonality between competing explanatory variables is generally held to be a desirable property.

3 The correlations R(d, h) between d and h, R(d, r) between d and r, and R(r, h) between r and h are computed as follows. Taking each question in turn, calculate the rate parameters d, h, and r from the {P, F, T} triplet using equations A3, H3, and E3 in Table 1. Then correlate these rate parameters over the questions.

4 Such a design might be used in a logit / probit analysis to estimate individual binary choice of P or F, with r and d as the independent measures—though obviously expanded beyond just four questions. With r and h making diametrically opposite predictions and d orthogonal to both, it is potentially a highly efficient design. However, we also need to consider the issue of range truncation in r and h that might work against statistical efficiency (see Section 5.3). An extended version of this design is in Appendix A.

5 From this we can count the number of: (a) hyperbolic discounters; (b) exponential discounters; (c) people who are off the scale, in that they always choose P or always choose F; and (d) people who are not off the scale, but whose choices are not compatible with H or E. Suppose we find (a) > (b); we cannot necessarily claim that more people discount hyperbolically than exponentially, because if there are many people in category (c), they might all be exponential discounters (who are off the r-scale). Similar arguments apply if (a) < (b). An unequivocal interpretation is possible only if (c) contains few people.

6 These scalings are arbitrary so that other scalings could be used, for instance: 1 money unit = $25.12, 1 time unit = 7.04 weeks, and so on.

7 Note that the fact that the constraint for T=1 is steeper than the constraint for T=100, whereas it is the other way around in Figure 3, is of no consequence. Had h2 been plotted on the x-axis, and h1 on the y-axis, Figures 3 and 4 would have appeared topologically similar in this respect.

8 Even more generally, we may generate a required linear combination of P and F according to a criterion, say αP + γF = X, where X is fixed or a random number, as required. We use the known ratio F/P = λ to solve these simultaneous equations for P and F. The default used in our calculations has been α = 1, γ = 0. The alternative of choosing F then calculating P, as mentioned in the text, is α = 0, γ = 1. Any α and γ are possible, to meet a particular purpose. The solution is: P = X/(α + γλ); F = λX/(α + γλ).

9 To give an idea of rates that have been used in past research, [min, median, max] triplets for d ($ per day) were [.0233, .2436, 2], [.0233, .04, 3.928], and [.0053, .0674, 7.714] for the three studies of Li (2008), Kirby and Marakovic (1996), and Kirby, Petry, and Bickel (1999), respectively. Triplets for h (% per day) for the same three studies, in the same order, were [.0684, .08167, 13.33], [.0684, .08571, 13.33], and [.0158, .05961, 25]. Similarly, triplets for r (% per day) were [.0674, .75, 8.473], [.0673, .6936, 8.47], and [.0156, .5136, 14.45]. Listing these ranges does not thereby imply that we endorse their use in other research; they are given for background information.

10 Of course, these considerations matter only if the researcher actually wishes to reduce collinearity between r and h. As stated earlier, the researcher may even want positive collinearity (see Appendix B for an example). Even so, the method still shows how that goal can be achieved.

References

Doyle, J. R., & Chen, C. H. (2010). Time is money: Arithmetic discounting outperforms hyperbolic and exponential discounting. http://ssrn.com/abstract=1609594.
Glöckner, A., & Betsch, T. (2008). Do people make decisions under risk based on ignorance? An empirical test of the priority heuristic against cumulative prospect theory. Organizational Behavior and Human Decision Processes, 107, 75–95.
Green, L., Fry, A. F., & Myerson, J. (1994). Discounting of delayed rewards: A life-span comparison. Psychological Science, 5(1), 33–36.
Green, L., Myerson, J., & McFadden, E. (1997). Rate of temporal discounting decreases with amount of reward. Memory and Cognition, 25, 715–723.
Killeen, P. R. (2009). An additive-utility model of delay discounting. Psychological Review, 116(3), 602–619.
Kirby, K. N., & Marakovic, N. N. (1996). Delay-discounting probabilistic rewards: Rates decrease as amounts increase. Psychonomic Bulletin & Review, 3(1), 100–104.
Kirby, K. N., Petry, N. M., & Bickel, W. K. (1999). Heroin addicts have higher discount rates for delayed rewards than non-drug-using controls. Journal of Experimental Psychology: General, 128, 78–87.
Li, X. (2008). The effects of appetitive stimuli on out-of-domain consumption impatience. Journal of Consumer Research, 34, 649–656.
Mazur, J. E. (1987). An adjusting procedure for studying delayed reinforcement. In M. L. Commons, J. E. Mazur, J. A. Nevin, & H. Rachlin (Eds.), Quantitative analysis of behavior, Vol. 5. Mahwah, NJ: Erlbaum.
Myerson, J., & Green, L. (1995). Discounting of delayed rewards: Models of individual choice. Journal of the Experimental Analysis of Behavior, 64, 263–276.
Navarro, D. J., Myung, I. J., Pitt, M. A., & Kim, W. (2002). Global model analysis by landscaping. Proceedings of the 25th Annual Conference of the Cognitive Science Society.
Navarro, D. J., Pitt, M. A., & Myung, I. J. (2004). Assessing the distinguishability of models and the informativeness of data. Cognitive Psychology, 49, 47–84.
Rachlin, H. (2006). Notes on discounting. Journal of the Experimental Analysis of Behavior, 85, 425–435.
Rachlin, H., Raineri, A., & Cross, D. (1991). Subjective probability and delay. Journal of the Experimental Analysis of Behavior, 55, 233–244.
Rubinstein, M. (2003). Great moments in financial economics: I. Present value. Journal of Investment Management, 1(1), 7–13.