1 Introduction
In making buying decisions, consumers are often confronted with a true price that is the sum of two components. Consider, for example, online purchases where, in addition to the stated price for an item, the consumer must pay shipping costs. The true price is the sum of the stated price and the shipping costs. A second example is a rebate after purchase: to obtain the true price the consumer must subtract the rebated amount from the stated price. In such situations consumer behavior may be influenced by how the two components are bifurcated. In a field experiment, bidders in an online auction were offered identical products, but one treatment had a low starting price and higher shipping costs (Hossain & Morgan, 2006). The researchers found that the auction characterized by the low starting price and higher shipping costs attracted more bidders and generated higher revenues.
Our research examines decision making when confronted with a monetary decision that is bifurcated into separate components. We conducted a series of experimental first-price, sealed-bid auctions in which the item being auctioned was a fixed amount of money. To investigate the effect of bifurcation we divided the money to be auctioned into a "monetary prize" and a "winner's bonus". We varied the sizes of the two components, holding the total amount auctioned constant, and found differences in bid distributions. We hypothesize that the differences in bid distributions are the result of how each auction was framed (Kahneman & Tversky, 1984), i.e., the specific bifurcation into monetary prize and winner's bonus. The monetary prize was hypothesized to serve as an anchor (Tversky & Kahneman, 1974) for some bidders.
Previous research has examined differences in bidding patterns in auctions that have been attributed to framing and anchoring effects. Turocy et al. (2007) considered previous findings that violated auction theory, namely the failure of revenue equivalence between first-price sealed-bid and Dutch experimental auctions. They attributed the difference to how the auctions were framed. They constructed a clock-based sealed-bid auction mechanism that shared design features of both the sealed-bid and Dutch auctions and found that its revenue fell between the revenues obtained from the sealed-bid and Dutch auctions, consistent with their hypothesis. Several researchers have found evidence of an anchoring effect. In one experiment, identical products were offered on an online auction site for a period of one week, one at a low starting price and one at a high starting price (Ariely & Simonson, 2003). Bidders for the product with the low starting price bid lower, which the researchers attributed to an anchoring effect. In an online auction for identical jewelry items, researchers found that people bid more for a product with a higher "buy now" price than for an identical item with a lower "buy now" price (Dodonova & Khoroshilov, 2004). Beggs & Graddy (2009) found strong support for anchoring effects in two large datasets measuring sale prices at art auctions held in London and New York over a number of years. Anchoring effects were also detected in a recent study on art prices over a period of more than 100 years (Graddy et al., 2014).
While we believe that framing with anchoring offers the most plausible theoretical underpinning for explaining the effects of bifurcation, we cannot rule out other psychological mechanisms. First, there is the principle of loss aversion (Kahneman & Tversky, 1979). By viewing the amount of the monetary prize as the reference point for decision making, a participant might regard any bid in excess of that amount as a loss. Hence, we would observe reluctance on the part of the participant to enter a bid in excess of the monetary prize. This perception could be reinforced by the term "winner's bonus". The word "bonus" could conjure up a feeling of entitlement, something akin to a pseudo-endowment effect (see, for example, Ariely & Simonson, 2003; Heyman et al., 2004; Wolf et al., 2008).
A second possible explanation is mental accounting (Thaler, 1980, 1985). Participants might create separate mental accounts for the monetary prize and the winner's bonus. Such participants would be unlikely to use money from the winner's bonus account to bid, because they perceive that only money in the "monetary prize" account should be used to bid in the auction. Consequently, loss aversion and/or mental accounting might underlie the behavior we observed.
1.1 Modeling bidding behavior in a money auction
An experimental money auction can be characterized as a common value auction, i.e., all bidders have a common valuation of the item being auctioned. However, there is a key distinction between an experimental money auction and the standard common value auction, in which the value of the item is equally uncertain to all bidders. In an experimental money auction the value of the item being auctioned, money itself, is known with precision and is transparent to all. A similar approach was used in a recent paper applying the Becker-DeGroot-Marschak (BDM) method (Cason & Plott, 2014).
The literature on experimental money auctions is limited. Shubik (1971) conducted a simple game, the results of which are reported in what has become a classic article in the area of non-cooperative behavior. He proposed auctioning a dollar. The winner of the auction would pay the amount of the bid and receive the monetary prize, a dollar. However, the second-highest bidder would also pay the amount of his or her bid, but receive nothing. Shubik demonstrated that such a game design would lead to escalation, with both the highest and second-highest bidders paying well over a dollar. This experiment has been replicated with similar results (Murnighan, 2002).
If we pose the rhetorical question, "How much am I willing to pay for an auctioned amount of $60?", the intuitive answer is "Up to, but not more than, $60." Obviously, one would like to pay as little as possible to maximize the monetary payoff, but awareness of competitors who may think strategically will affect one's own bidding strategy. A useful starting point for modeling bidding behavior is traditional game theory and the identification of Nash equilibria. Assuming a monetary amount being auctioned of $60 and bids in one-cent increments, the discrete strategy space for any bidder is given by:

S = {$0.00, $0.01, $0.02, …, $59.99, $60.00}.
To determine the Nash equilibria, we need to know how the winning bid is determined if more than one bidder submits the winning (highest) bid, and we need to know the number of bidders in the auction. If there is more than one winning bid and all bidders submitting that bid receive the payoff, the auction can be treated as a coordination game. There will be a Nash equilibrium at each possible bid amount, with all auction participants bidding that amount. However, Van Huyck et al. (1990) demonstrated that in coordination games with many players (>7) there would be coordination failure. Players will not select the payoff-dominant equilibrium, but rather converge to the most inefficient one. Given the discrete strategy space and an auctioned amount of $60, the most inefficient equilibrium with a positive payoff would be any bid pattern in which participants submit bids of $59.99. A bid of less than $59.99 would be a losing bid; a bid of $60 would be a winning bid, but one that results in a net payoff of zero. If we assume that some bidders are strategically sophisticated and believe that their competitors possess the same level of sophistication, we would expect to see some bids clustered around this Nash equilibrium. However, even some sophisticated bidders may feel that earning $.01 is not worth their effort, or may think in terms of $1 increments, so we would not necessarily expect to see all their bids at exactly $59.99.
An additional consideration is the "top-dog" effect (Shogren & Hayes, 1997), the idea that people gain utility from having the winning bid in an auction. This effect is likely to manifest itself in an experimental money auction. Some bidders will realize that the game (auction) has a monetary payoff that approaches zero. Consequently, non-pecuniary payoffs will be considered, in this case the utility that one receives from having "figured out" the game by being designated a winner. If we further assume that the utility gained from winning is not a function of how many other bidders share that distinction, we can identify Nash equilibria in which some bidders enter bids of $59.99 (when their utility of winning is worth less than one cent) while others enter bids of $60 (when their utility of winning is worth more than one cent). These Nash equilibria give us an initial benchmark for modeling bidding behavior in the experiment, assuming that bidders have some strategic sophistication.
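To make the deviation logic behind these benchmarks concrete, the following minimal sketch (ours, not part of the experiment; written in Python with purely illustrative numbers) computes a bidder's payoff against a common competing bid, treating ties as wins and letting win_utility stand in for the top-dog effect:

```python
# Illustrative sketch (not from the original study) of the deviation logic
# behind the $59.99 and $60.00 benchmark bids. Ties are treated as wins,
# as in the experiment, and win_utility stands in for the top-dog effect.

PRIZE = 60.00  # total amount auctioned

def payoff(own_bid, others_bid, win_utility=0.0):
    """Monetary payoff plus any utility of winning; losing bids earn nothing."""
    if own_bid < others_bid:
        return 0.0
    return (PRIZE - own_bid) + win_utility

for others_bid in (59.99, 60.00):
    for win_utility in (0.00, 0.05):
        match = payoff(others_bid, others_bid, win_utility)
        undercut = payoff(others_bid - 0.01, others_bid, win_utility)
        print(f"others bid ${others_bid:.2f}, win utility {win_utility:.2f}: "
              f"match -> {match:.2f}, undercut by a cent -> {undercut:.2f}")
```

When everyone else bids $59.99, matching earns the one-cent payoff and undercutting earns nothing; when everyone else bids $60, matching is strictly better than undercutting only if winning itself carries positive utility.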
Behavioral game theory offers a complementary way of modeling bidding behavior, asking whether it is reasonable to assume that all bidders have a level of sophistication that would lead to bids with a mean approximating Nash equilibrium levels, i.e., about $59-$60. If we assume that players (participants) vary in their level of sophistication, i.e., individuals are boundedly rational (Simon, 1982), different models of strategic behavior emerge. Two approaches based on bounded rationality are cognitive hierarchy and level-k reasoning models (e.g., Stahl, 1993; Stahl & Wilson, 1994; Ho et al., 1998; Costa-Gomes & Crawford, 2006). Level-k reasoning and cognitive hierarchy models assume that the players in a game do not possess the same level of cognitive strategic ability, but rather are distributed over a number of levels. At the lowest level players have little or no cognitive strategic ability and their decisions may be essentially random. If we similarly assume that bidders in the experiment do not possess the same level of strategic ability, there is no reason to expect a distribution of bids clustering around the Nash equilibria ($59-$60).
A third consideration for modeling bidding behavior is the effect of framing the experimental auction in such a way that some bidders anchor their bids on the monetary prize. There are two possible reasons for this. First, if the monetary prize is large relative to the winner’s bonus, it is more likely to serve as the anchor. Second, the label “bonus” might be viewed, subconsciously or otherwise, as something extra by some bidders and they might decide that it should not be considered in formulating their bids.
If the monetary prize were viewed as the anchor, some bidders would submit what we are calling a “pseudo-Nash equilibrium” bid. This is a bid consistent with what would be a Nash equilibrium if the monetary prize were the maximum possible payoff. For example, in a money auction with a $50 monetary prize and a $10 winner’s bonus ($60 maximum potential payoff) a Nash equilibrium bid would be $59.99 or $60, but the corresponding pseudo-Nash equilibrium bid would be $49.99 or $50, based only on the monetary prize.
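Stated compactly, in our own shorthand (MP for the monetary prize and WB for the winner's bonus, symbols not used elsewhere in the paper), the two benchmark bid sets are:

```latex
% NE and P-NE benchmark bids, with MP = monetary prize and WB = winner's bonus;
% the one-cent variants reflect the penny bid grid.
\begin{aligned}
\text{NE bid}   &\in \{\, MP + WB - \$0.01,\; MP + WB \,\}\\
\text{P-NE bid} &\in \{\, MP - \$0.01,\; MP \,\}
\end{aligned}
```

With MP = $50 and WB = $10 this reproduces the $59.99/$60 and $49.99/$50 figures above.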
2 Experimental design
Students enrolled in Principles of Microeconomics classes at Rochester Institute of Technology during Spring Semester 2014 were invited to visit a website to participate in four experimental auctions and answer questions about their understanding of the instructions, their strategy, and their background. They were told that, if they submitted bids for all four experimental auctions and all other required information, they would receive a $12 participation fee, which would be theirs to keep and was not part of the experiment. In all, 94 students visited the website and submitted the required bids and information.
The first two experimental auctions, a baseline treatment and one of the experimental treatments, were presented in random sequence. For the baseline treatment (BT) participants entered a single bid for a $60 monetary prize. We told participants that bidding would not involve any out-of-pocket cost to them; they would use their expected winnings to pay for their bid and could not enter a bid greater than $60. If bidders did not have the highest bid, they would not win the monetary prize. The winning bidder would receive a payoff equal to the difference between her bid and the $60 monetary prize. Bids of more than $60 would be considered invalid and not accepted by the website. Participants were told that there were at least 29 other bidders in the auction. It was explicitly stated that, if there were ties for the winning bid, each winning bidder would receive the winning payoff.
For the second experimental auction ($10WB), participants were given the same instructions with the following exceptions. For this treatment the monetary prize was $50. Participants were told that they would receive a $10 “winner’s bonus” as part of the payoff if they had the winning bid. All other bidders would not receive the winner’s bonus. Participants were told that, if they had the winning bid, their payoff would be the difference between $60 ($50 monetary prize + $10 winner’s bonus) and their bid. They were similarly told that their bid could not exceed $60 ($50+$10) or it would be considered invalid and not accepted. Thus, the auctions for $10WB and BT were identical in terms of potential payoff. The instructions made it clear that the participants could use part or all of their winner’s bonus in formulating their bid.
After participants had entered their bids, we asked five questions about their understanding of the instructions and their bidding strategy. We asked open-ended questions regarding understanding of the maximum bid they could have submitted and the minimum number of other bidders in the auction. We asked a multiple choice question regarding whether participants understood that only the highest bidders (including ties) would win something or whether they thought everyone would win something. Two other questions concerned strategy: whether a participant’s bid would have been higher, lower or the same if there had been a) two other bidders and b) 10 other bidders.
The third and fourth auctions involved two other experimental treatments, one with a $55 monetary prize and a $5 winner's bonus ($5WB) and the other with a $45 monetary prize and a $15 winner's bonus ($15WB). These two experimental auctions were presented in random sequence as the third and fourth auctions. The instructions for these two auctions were identical to the first two with one difference. We told participants that there was no limit on the amount they could bid, but that they had to be careful in formulating their bid, because, depending upon the amount of the winning bid, they could be required to spend some of their own money. The questions that followed the entry of the bid were identical to those for the first two auctions with one difference. Rather than ask the participants to specify the maximum bid that could have been entered (since there was none), we required them to answer a multiple-choice question where the possible answers were none, $60, and $45.
Thus, the maximum payoff that a participant could win was $60 in all four treatments. The difference among treatments was that a portion of potential winnings would include a part or all of a $5, $10, or $15 winner’s bonus depending upon the particular auction.
It should be noted that there was an explicit limit on the size of the bid ($60) in BT and $10WB, but no limit on bids for $5WB and $15WB. The purpose of this design element was to test whether the participants understood two important aspects of the instructions for the experimental auctions. First, it was crucial that participants understood that they could bid up to $60 at no cost to themselves. Otherwise, they might falsely believe that they were limited to bidding only the amount of the monetary prize, which would erroneously bias the results in favor of our hypothesis. We thus excluded from analysis participants who did not indicate that they were permitted to submit a bid equal to the maximum potential monetary payoff, i.e., $60; we identified them as "confused".
Second, it was equally crucial that, when there was no explicit limit on the bid, participants understood that any winning bid in excess of $60 would result in an out-of-pocket expense. We determined whether participants understood this aspect of the instructions by asking them whether they knew there was no limit on their bid and then observing the extent to which they submitted bids in excess of $60. We then contacted the participants who submitted bids in excess of $60. Participants who either did not indicate that they understood there was no limit on their bid, or who bid in excess of $60 and subsequently indicated that they did not understand the implications of their bid, were identified as "confused" and excluded from the statistical analysis.
We needed both sets of instructions, administered in two stages, to identify our "non-confused" participants: those who knew they could bid up to $60 but understood that any bid above $60 would require an out-of-pocket expense. The participant subset used for the analysis contained only those participants who had demonstrated that they understood the instructions for each of the four experimental auctions and had not unwittingly bid an amount in excess of $60 in the third and fourth treatments.
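As a rough illustration of the two-stage screening just described, the following sketch applies the exclusion rules to a bid-level dataset; the column names are hypothetical, since the underlying survey fields are not reproduced here.

```python
import pandas as pd

# Sketch of the screening logic described above. All column names
# (max_bid_answer_bt_10wb, max_bid_answer_5wb_15wb, confirmed_overbid_intent,
# bid_5wb, bid_15wb) are hypothetical placeholders.
def non_confused(df: pd.DataFrame) -> pd.DataFrame:
    # Stage 1: understood the $60 cap in BT and $10WB.
    understood_cap = df["max_bid_answer_bt_10wb"].isin(["$60", "$59.99"])
    # Stage 2: understood that $5WB and $15WB had no explicit limit.
    understood_no_cap = df["max_bid_answer_5wb_15wb"].eq("none")
    # Bids above $60 are retained only if the participant confirmed
    # understanding the out-of-pocket implication.
    overbid = (df[["bid_5wb", "bid_15wb"]] > 60).any(axis=1)
    ok_overbid = ~overbid | df["confirmed_overbid_intent"]
    return df[understood_cap & understood_no_cap & ok_overbid]
```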
Following the bids and accompanying questions for each of the four experimental auctions, we asked participants for information regarding their background, including whether they were male or female, whether they had ever taken a college course in economics before the one they were currently enrolled in, and whether they had any previous experience with auctions (including online). We then told participants how to collect the $12 participation fee.
3 Results
We analyzed the data we obtained from the 94 participants to determine those who fully understood the instructions for the experiments. We first eliminated participants who did not fully understand the instructions for the first and second experimental auctions (BT and $10WB). We identified 34 participants who did not correctly answer "$60" or "$59.99" when asked the maximum allowable bid for the BT and $10WB auctions. Of the remaining 60, we eliminated four more participants who did not fully understand the instructions for the third and fourth experimental auctions ($5WB and $15WB). These participants did not correctly answer "none" when asked the maximum allowable bid for $5WB and $15WB. Finally, we contacted the three participants who submitted bids in excess of $60 to determine whether they realized the implications of their bids, i.e., that if they had the highest bid they would be required to pay more than the maximum payoff. Two indicated that they had not understood the implications, but the third (who had bid $65 in $5WB) said he was fully aware of the implications of his bid and indicated that he did so to maximize his chances of winning the auction (the top-dog effect). He reasoned that his participation fee ($12) would cover the excess of his bid above $60 and he would not incur an out-of-pocket expense. We eliminated the two confused participants. This left us with a subset of 54 participants who fully understood the instructions for all four treatments and the implications of their bids in the third and fourth auctions.
Because such a large number of participants were identified as confused (40, or 42.5% of the total), we performed additional analysis (described in Appendix A) to verify that we had correctly delineated the confused participants from those who fully understood the instructions. This analysis confirmed that we had done so. Table 1 gives the characteristics of the 54 retained participants.
The mean bids for males and females were virtually identical, as were the mean bids of participants with and without previous auction experience. Participants who had previously taken a college economics course had mean bids that were higher by over $9. We discuss this phenomenon later in this paper. Table 2 shows the statistics by treatment group.
The mean bids for the three winner's bonus experimental treatments are consistent with our hypothesis of anchoring effects. If the bids for $5WB reflect anchoring relative to a $55 reference point, while bids for $10WB and $15WB reflect anchoring relative to reference points of $50 and $45, respectively, then the mean bids should be ordered as follows:

mean bid for $5WB > mean bid for $10WB > mean bid for $15WB.

That pattern was evident in the descriptive statistics. However, we would also expect the mean bid for BT to be greater than the mean bids for all the other experimental treatment groups, and this was not the case.
To measure the consistency of bidding patterns with the Nash equilibria (NE), we defined NE bids as bids in the interval

$59 ≤ bid ≤ $60.

The rationale, as discussed in Section 1.1, is that some bidders who are aware of the NE (intuitively or otherwise) might think in terms of bidding in $1 increments; they would bid $59 instead of $59.99. Thus, we characterized any bid between and including $59 and $60 as an NE bid. Table 3 gives the percentage of NE bids for each treatment group.
(Table 3 notes: n = 54 for each treatment, 216 total bids; Fisher's exact test p-value = 0.053.)
There are two interesting patterns exhibited in this table. First, several bidders from each treatment group submitted bids that were consistent with the Nash equilibrium (NE). Any anchoring effects in the three winner’s bonus treatment groups evidently did not affect all bidders. Second, the vast majority of bidders in all treatments submitted bids that were not consistent with the Nash equilibria. Only 58 of 216 bids (26.9%) were what we have classified as NE bids. This suggests that bidders exhibited various levels of cognitive strategic sophistication.
As noted earlier, we observed a large difference in mean bids (≈ $9) between participants who had previously taken a college economics course and those who had not. We asked whether this difference was in part due to a difference in the percentage of NE bids submitted by each group: if those who had taken a previous college economics course submitted a higher percentage of NE bids, this would account in part for the difference in mean bids. Our analysis is given in Table 4. Participants who had previously taken a college economics course had a greater percentage of NE bids than those who had not: 37.5% vs. 23.1%. A Fisher's exact test p-value provided weak support for the hypothesis that the proportions of NE bids differed between the two groups. The reasons for this difference are unclear. Those who took a previous college economics course could have acquired knowledge that made them more likely to recognize the NE. Alternatively, those participants could have already possessed superior strategic ability, thus self-selecting into a course aligned with their interests and aptitudes.
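The NE classification and the Table 4 comparison reduce to a simple rule plus a 2×2 Fisher's exact test; a minimal sketch follows, with placeholder counts rather than the actual table entries.

```python
from scipy.stats import fisher_exact

def is_ne_bid(bid: float) -> bool:
    """NE bids were operationally defined as $59 <= bid <= $60."""
    return 59.0 <= bid <= 60.0

# 2x2 table of [NE bids, non-NE bids] by previous-economics-course status.
# The counts below are placeholders, not the values behind Table 4.
table = [[15, 25],    # took a previous college economics course
         [30, 100]]   # did not
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"Fisher's exact test two-tailed p-value: {p_value:.3f}")
```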
Appendix B reports an analysis of the data on other influences on bidding behavior, including how a bidder’s behavior would change if there were fewer other bidders in the auction and whether the order of treatment had any effect.
3.1 Evidence of framing and anchoring effects
As indicated in the previous section, a comparison of the mean bids for $5WB, $10WB, and $15WB indicates a pattern consistent with the existence of anchoring effects. However, given that a sizeable percentage of bids fall in the range we have defined as NE bids, i.e., greater than or equal to $59 and less than or equal to $60, it is likely that each of the winner's bonus treatments has a non-normal bid distribution. We therefore used the Mann-Whitney test for pairwise differences in the distributions of bids among $5WB, $10WB, and $15WB, as shown in Table 5. We found statistically significant differences for $5WB vs. $15WB and $5WB vs. $10WB. The difference in the bid distributions for $10WB vs. $15WB was weakly significant. When combined with the relationships among the mean bids of each winner's bonus treatment, i.e., that the mean bids decline monotonically from $5WB to $15WB, these results provide evidence of anchoring effects. Further evidence was provided through application of the Jonckheere-Terpstra test, with the ordered alternative hypothesis that bids decline across the treatments:

bids for $5WB ≥ bids for $10WB ≥ bids for $15WB, with at least one strict inequality.

We obtained a p-value of .033, consistent with the results of the pairwise comparisons.
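A sketch of the pairwise comparisons (using SciPy's Mann-Whitney implementation, with placeholder bid lists standing in for the experimental data) is given below; the Jonckheere-Terpstra test is not available in SciPy and would require a separate implementation or an R package.

```python
from itertools import combinations
from scipy.stats import mannwhitneyu

# Placeholder bid lists; in the experiment each treatment had 54 bids.
bids = {
    "$5WB":  [55.0, 54.99, 59.0, 40.0, 60.0],
    "$10WB": [50.0, 49.99, 59.0, 35.0, 60.0],
    "$15WB": [45.0, 44.99, 59.0, 30.0, 60.0],
}
for a, b in combinations(bids, 2):
    u_stat, p = mannwhitneyu(bids[a], bids[b], alternative="two-sided")
    print(f"{a} vs {b}: U = {u_stat:.1f}, two-tailed p = {p:.3f}")
```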
We examined the bid patterns further to see whether there was additional evidence of anchoring. As discussed previously, we hypothesized that, because of how we framed each experimental auction, some bidders in the winner's bonus treatment groups would view the "monetary prize" as the anchor for their bids. In those instances bidders would submit what we are calling pseudo-Nash equilibrium (P-NE) bids, i.e., the monetary prize (MP) or the monetary prize minus one cent (MP - $.01). Consistent with our operational definition of NE bids (a bid between and including $1 less than the maximum potential payoff and the maximum potential payoff), we defined a P-NE bid as lying in the interval between and including the monetary prize minus one dollar and the monetary prize (MP ≥ P-NE bid ≥ MP - $1). The P-NE intervals for each WB treatment group were defined as follows:
P-NE interval for $15WB: $45 ≥ P-NE bids ≥ $44
P-NE interval for $10WB: $50 ≥ P-NE bids ≥ $49
P-NE interval for $5WB: $55 ≥ P-NE bids ≥ $54
We hypothesized that we should observe two bid patterns if there are anchoring effects resulting from the framing of the auctions. First, we should see a disproportionate number of bids in the respective P-NE interval for each winner's bonus treatment group, as compared with the same interval for the pooled data from the other three groups, because that interval has no special significance for the other treatment groups.
Second, we would expect to observe a smaller percentage of bids in the range above the pseudo-Nash equilibria but below the true Nash equilibria for the particular winner's bonus treatment, because bidders influenced by anchoring will avoid bidding any portion of the winner's bonus. We compared bid patterns for each winner's bonus treatment with the pooled data from the other three treatments in the subset. Table 6 shows the results of this analysis for $15WB.
These results provide strong evidence of anchoring. For $15WB 24.1% of bids were in the P-NE interval as compared with only 5.6% of bids for the other three treatments. We used Fisher’s Exact Test to test the hypothesis that there was a statistically significant difference between the proportion of P-NE bids ($44-$45) in $15WB vs. the same interval for the other treatments. We obtained a two-tailed p-value of .002.
Further evidence of an anchoring effect for $15WB can be seen by comparing the percentage of bids above the P-NE interval but below the NE interval (greater than $45 but less than $59) for $15WB with the other treatments. For $15WB 18.5% of total bids were found in that interval as compared with 32.1% in the other treatments. The difference was weakly significant at the .10 level.
Evidence in support of anchoring is seen in the data for $10WB as well. The percentage of total bids in the $49 to $50 P-NE interval for $10WB is 22.2% as compared with only 9.9% for the other three treatment groups, a significant difference (Fisher’s Exact Test p-value of 0.033, two-tailed). Furthermore, in the interval above the P-NE but below the NE (greater than $50 but less than $59) the percentage of bids was only 3.7% for $10WB as compared to 18.5% for the other three treatments. The two-tailed p-value was 0.007.
The results for $5WB given in Table 6 are not as strong as those for the other two winner's bonus treatment groups. We do see evidence of anchoring based on analysis of bids in the P-NE interval: 20.3% of total bids for $5WB were in the P-NE interval compared with only 4.3% for the pooled other three treatments, yielding a p-value of 0.001. The analysis of bids in the interval greater than $55 but less than $59 did not provide evidence of anchoring; the percentage of bids for $5WB in that interval was actually greater than the percentage for the other treatments.
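Because the monetary prize differs across treatments, the P-NE window shifts with it; a small sketch of the interval classification underlying Table 6 (our code, not reproduced from the study materials) is:

```python
# P-NE and "between" interval classification used in the Table 6 comparisons.
MONETARY_PRIZE = {"$5WB": 55.0, "$10WB": 50.0, "$15WB": 45.0}
NE_LOWER = 59.0  # NE bids lie in [59, 60]

def in_pne_interval(bid: float, treatment: str) -> bool:
    """True if the bid lies in [MP - $1, MP] for the given treatment."""
    mp = MONETARY_PRIZE[treatment]
    return mp - 1.0 <= bid <= mp

def in_between_interval(bid: float, treatment: str) -> bool:
    """True if the bid lies above the P-NE interval but below the NE interval."""
    return MONETARY_PRIZE[treatment] < bid < NE_LOWER
```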
To analyze further the factors affecting bidding behavior, we used regression analysis with random effects to explain the amount of the bid on the basis of the experimental treatment in which the bid was submitted. We created three dummy variables: $10WB (=1 if the bid was made in the $10WB treatment; =0 otherwise), $15WB (=1 if the bid was made in the $15WB treatment; =0 otherwise), and BT (=1 if the bid was made in the BT treatment; =0 otherwise). A second regression included an additional explanatory variable, PreviousEcon (=1 if the participant had taken a previous college economics course; =0 otherwise). The results are given in Table 7.
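One way to implement such a model is a mixed linear model with a random intercept for each participant (statsmodels); the sketch below assumes a bid-level data frame with illustrative column names and is not necessarily the exact specification used for Table 7.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Sketch of the random-effects regressions in Table 7: a random intercept
# per participant, treatment dummies as regressors, and (in the second model)
# PreviousEcon. Column names (bid, WB10, WB15, BT, PreviousEcon, participant)
# are illustrative.
def fit_models(df: pd.DataFrame):
    m1 = smf.mixedlm("bid ~ WB10 + WB15 + BT",
                     data=df, groups=df["participant"]).fit()
    m2 = smf.mixedlm("bid ~ WB10 + WB15 + BT + PreviousEcon",
                     data=df, groups=df["participant"]).fit()
    return m1, m2
```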
In the regression with only the treatment dummies as regressors, the signs and magnitudes of the coefficients for $10WB and $15WB are consistent with the anchoring hypothesis. Since the intercept term gives the estimated bid for $5WB, we would expect the coefficients for $10WB and $15WB to both be negative, with the absolute value of the coefficient for $15WB being greater. Consistent with our analysis of the bid distributions in the various treatments, the sign of BT is negative, while our a priori expectation was that it should be positive. The coefficients for $15WB and the intercept are statistically significant. The overall explanatory power of the regression is quite low (R² = .029).
When we added the PreviousEcon variable the explanatory power of the regression almost doubled (R² = .057) and the F-statistic is significant. PreviousEcon is also significant. The coefficient of PreviousEcon indicates that, controlling for treatment, participants who had taken a previous college economics course bid approximately $9 higher than those who had not. This finding is consistent with the analysis discussed earlier in the paper.
4 Discussion
The results of the experiment make a unique contribution to an already extensive literature on framing and anchoring by employing a little-used experimental design, a money auction. The literature on framing demonstrates how equivalent descriptions of the same payoff lead to different choices (see, for example, Tversky & Kahneman, 1981). In our experimental design, each of the four treatments presented participants with an identical maximum potential payoff, i.e., $60. The difference was how the payoffs were framed; in three of the treatments the maximum potential payoff was bifurcated into a monetary prize and a winner's bonus, with the monetary prize set at different amounts and the winner's bonus varying inversely. Our main finding is that, even when auctioning a commodity whose value is perfectly transparent, the way in which the auction is framed can yield different bid distributions, which we ascribe to anchoring effects.
Researchers have discovered strong anchoring effects in experimental auctions involving commodities other than money. We hypothesized that the amount of the monetary prize in each treatment would serve as an anchor and affect bidding strategy. Since the item nominally being auctioned (monetary prize) and the winner’s bonus are both denominated in dollars, the behavior we observed would violate fungibility. We found strong evidence of anchoring in our experimental money auctions. What is particularly interesting is that these anchoring effects were present within participants. As hypothesized we found statistically significant differences among the bid distributions in the three winner’s bonus treatments consistent with the existence of anchoring. The mean bid in each of the three treatments was directly related to the size of the monetary prize.
We developed the concept of pseudo-Nash equilibria (P-NE) to analyze the bidding patterns we expected, assuming an anchoring effect. We found differences in the frequencies of P-NE bids for each winner’s bonus treatment compared with the same interval for the other treatments. We further hypothesized that, due to anchoring, we should expect to see relatively fewer bids above the P-NE interval, but below the NE interval for the particular winner’s bonus treatment as compared with the pooled other treatments. We found differences consistent with this hypothesis in two of the three winner’s bonus treatments. Finally, we obtained results consistent with anchoring in our regression analysis explaining bidding behavior.
Our investigation also revealed patterns consistent with the underlying assumption of bounded rationality. Only 26.9% of the bids submitted in the 54-participant subset were what we characterized as Nash equilibrium bids. Interestingly, the one participant characteristic that seemed to make a difference was having taken a college economics course. Those participants submitted a higher percentage of NE bids (37.5% vs. 23.1%), and their mean bid was approximately $9 higher than that of participants who had not taken a previous college economics course.
We conducted ex post statistical tests to determine the validity of our procedure for separating participants into confused and non-confused subsets. The results of the tests, including a comparison of the bid distributions and regression analysis pooling both confused and non-confused participants, validated our procedure for segmenting the total participant pool into non-confused and confused subsets.
As mentioned in the introduction, we recognize that there are other possible explanations for the behavior we observed, including mental accounting and loss aversion. For example, it is possible that some participants developed a sense of ownership of the winner's bonus that they did not (but hoped to) possess, akin to a pseudo-endowment effect (Ariely & Simonson, 2003). A direction for future research would be examination of behavior in a standard auction format for a physical good where the high bidder also receives a winner's bonus. In any event, we believe that the results provide useful information for marketers and retailers in their attempts to develop revenue-maximizing pricing strategies where price can be bifurcated into components, e.g., stated price plus shipping costs or rebates, and for customers trying to avoid being fooled by such efforts.
Appendix A
As indicated previously, we found that many subjects did not understand the instructions. Of the 94 subjects who completed the experiment, 40 indicated that they did not understand the instructions for $10WB and BT or for $5WB and $15WB. The vast majority (34 of 40) did not understand the instructions for $10WB and BT. Of the remaining six, who did indicate understanding of the $10WB and BT instructions, four did not indicate "none" as the limit on bids for $5WB and $15WB, and the other two had bid more than $60 in $5WB. All but one of the 34 subjects who did not understand the instructions for $10WB or BT incorrectly indicated that the highest permissible bid was $50 (the one exception indicated $10 for BT and $51 for $10WB).
We wanted to determine ex post whether our verification questions correctly delineated between confused and non-confused participants. We hypothesized that, if we had not correctly separated the confused from the non-confused, there should be no difference in the bid distributions for the two subsets (i.e., all were truly confused). Because virtually all those subjects (38 of 40) who indicated they did not understand the instructions thought they could bid less than they actually could, their distribution of bids should be lower than for those who understood the instructions. The 54 subjects who indicated they understood the instructions (non-confused) entered 216 bids with the mean bid equaling $45.55. The 40 subjects who indicated they did not understand the instructions (confused) entered 160 bids with the mean bid equaling $35.76. A Wilcoxon rank sum test with continuity correction comparing bid distributions yielded a p-value < .0001 for the difference. Furthermore, the magnitude of the difference in means (≈ $10) is consistent with our a priori expectations given that virtually all the confused subjects thought the limit on bids was $50 instead of $60.
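The comparison of the two bid distributions corresponds to SciPy's Mann-Whitney U test (equivalent to the Wilcoxon rank-sum test) with the continuity correction enabled; the sketch below uses placeholder bid lists rather than the actual data.

```python
from scipy.stats import mannwhitneyu

# Placeholder bid lists standing in for the 216 non-confused and 160 confused bids.
non_confused_bids = [59.99, 55.0, 60.0, 45.0, 50.0]
confused_bids = [49.99, 40.0, 50.0, 30.0, 35.0]

# use_continuity applies the continuity correction under the normal approximation.
u_stat, p = mannwhitneyu(non_confused_bids, confused_bids,
                         use_continuity=True, alternative="two-sided")
print(f"U = {u_stat:.1f}, two-tailed p = {p:.4f}")
```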
Finally, we pooled all 94 subjects (376 observations) and ran regressions with random effects as before, but this time we added a dummy variable, “Good,” where 1=bid from non-confused subject; 0=bid from confused subject. The results for regressions without and with PreviousEcon as an explanatory variable are given below.
The p-value for the coefficient for Good was < .001 in regressions both with and without PreviousEcon as an explanatory variable. The values of the coefficients were 9.6 and 10.0, respectively. These results are entirely consistent with the Wilcoxon test results. Thus, we are satisfied that our verification questions correctly delineated between confused and non-confused subjects.
Appendix B
In addition to finding strong support for the hypothesis that framing and anchoring influence bidding strategy, we were able to discern several other patterns. The responses of the participants as to how their bids would change depending upon whether there were a) two other bidders or b) ten other bidders in the auction are given in Table B1.
The results are, for the most part, consistent with intuitive expectations. Those participants who would raise their bids had the lowest mean bid. Those who would lower their bids had the highest mean bids, the one exception being the case with only two other bidders, where those who would not change their bid had the highest mean bid.
We hypothesized that some bidders would believe that if there were fewer bidders in the auction they would be able to submit lower bids and have an equally good chance of being the highest bidder. Analysis of that question is given in Table B2.
A larger percentage of bidders would submit lower bids if there were only two other bidders as opposed to ten other bidders. The percentage of bidders who would submit lower bids in the case of two other bidders was over twice as great as the percentage in the case of ten other bidders (Fisher's exact test two-tailed p-value of .0001).
We also analyzed the effect of the sequencing of the bids for the first and second auctions (BT and $10WB). The sequencing was assigned randomly. Of the 54 participants, 29 submitted bids for BT first and $10WB second; the other 25 submitted bids for $10WB first and BT second. Irrespective of the treatment, the mean of the second bid in the sequence ($46.46) was higher than the mean of the first bid in the sequence ($44.91). This difference did not reach statistical significance (Mann-Whitney-Wilcoxon p-value = .095).