1. Introduction
For decades, social scientists have studied how people negotiate with each other using the ultimatum game (Camerer, 2003; Güth et al., 1982; Loewenstein et al., 1989). In this game, two players split a sum of money. One party (the proposer) makes an offer as to how this money should be split, and the other party (the responder) either accepts or rejects it. If the offer is accepted, the money is split as proposed. If it is rejected, neither player receives anything.
To date, research has predominantly focused on the quantitative split of the resources at stake, such as amounts of money with adults (Thaler, 1988), chocolates with children (Murnighan & Saxon, 1998), and even raisins with chimpanzees (Milinski, 2013). The conclusion from this research is that offers of half (50%) of the stake are typically accepted (e.g., 0% rejection; Sanfey et al., 2003). Conversely, people begin to reject unequal offers of 49% or below, and most people reject offers below approximately 30% (Calvillo & Burgeno, 2015; Camerer & Thaler, 1995; Cameron, 1999). By contrast, how people negotiate over resources that vary in quality in the ultimatum game has received much less attention.
In the current investigation, we seek to answer two questions: whether people reject ultimatum game offers that are quantitatively equal (half of the total stake) but qualitatively unequal, and, if so, why they do. While many resources can vary in quality, we chose to study qualitative splits of cash since cash is frequently used in ultimatum game studies (e.g., Oosterbeek et al., 2004; Nelissen et al., 2009; Thaler, 1988). Previous work has focused on quantitative splits of cash, but cash can also vary in quality such that some forms of it are more desirable than others. One such qualitative difference is that larger denominations (e.g., 2€ coins) are preferred to smaller denominations (e.g., 200 × 1¢ coins = 2€; Mishra et al., 2006; Raghubir & Srivastava, 2009). Therefore, we manipulated the quality of a monetary offer by varying the types of denominations that participants received while holding constant the financial value of the offer. For example, if the total stake was 8€ (i.e., 400 × 1¢ coins + 2 × 2€ coins), participants would receive a quantitatively equal offer (4€; 50% of the stake) that was either qualitatively inferior (400 × 1¢ coins) or qualitatively equal (200 × 1¢ coins and 1 × 2€ coin) to what was kept by the proposer.
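To make the manipulation concrete, the following minimal sketch (ours, not part of the study materials) expresses the 8€ stake in cents and checks that each offer type is worth exactly half of it, whatever its coin composition; the condition labels are ours.

```python
# Illustrative only: the 8 EUR stake expressed in cents, and a check that every
# offer equals exactly half of its financial value (4 EUR).
stake_cents = 400 * 1 + 2 * 200          # 400 x 1-cent coins + 2 x 2-EUR coins = 800 cents

offers_cents = {
    "inferior (400 x 1c)":          400 * 1,            # proposer keeps the 2-EUR coins
    "equal (200 x 1c + 1 x 2 EUR)": 200 * 1 + 1 * 200,   # both parties get the same mix
    "superior (2 x 2 EUR)":         2 * 200,             # proposer keeps the 1-cent coins
}

for label, cents in offers_cents.items():
    assert cents == stake_cents // 2     # each offer is 50% of the stake's financial value
    print(f"{label}: {cents / 100:.2f} EUR")
```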
Across three incentive-compatible ultimatum game studies, we show that people reject qualitatively inferior (but quantitatively equal) offers and provide evidence for the mechanisms behind this rejection. Based on prior literature, there are at least three possible reasons why people may reject qualitatively inferior offers. The first, badness, is that people dislike the quality of the resources in the offer such that they do not want to receive it. That is, a large number of coins may simply be undesirable regardless of whether the offer constitutes half the stake. The second reason, mere inequality, is that people compare their outcome to that of their partner and reject any offer that gives them the lesser of the two possible allocations, simply due to a dislike for disadvantageous inequality (i.e., inequity aversion; Fehr & Schmidt, 1999). The third reason, fairness, is that people perceive an inferior offer as unfair (Camerer & Thaler, 1995; Kagel et al., 1996) and therefore reject it to punish the proposer (‘altruistic/costly punishment’; Fehr & Gächter, 2002; Henrich et al., 2006; Srivastava et al., 2009).
Consistent with the fairness and mere inequality explanations but inconsistent with the badness explanation, Studies 1 and 2 found that responders were more likely to reject a qualitatively inferior offer (i.e., the proposer kept better coins) than a qualitatively equal offer (i.e., both parties received the same small coins). In Study 3, a qualitatively inferior offer was more likely to be rejected when it came from a human (vs. a computer), providing support for the fairness account. Mediation analyses using a self-report measure of fairness in Study 3 provided additional support for this account. However, the sizable rejection rate (10%) of computer-made offers suggests that mere inequality matters to rejection too. All results are summarized in Table 1.
a Results are descriptively the same when including all responses (see Supplementary Material). Sensitivity power analyses using G*Power (Faul et al., 2007) suggest that all studies possessed sufficient power to detect a minimum effect size of w = .27, which is lower than the effect sizes observed (see Supplementary Material).
b In all studies, the rejection rate of the inferior condition was always significantly higher than ‘0’ (zs > 4.08, ps < .001; compared to a simulated condition with the same number of participants who all accepted the offer).
c Results are also robust to binary logistic regressions using PMLE (Heinze & Schemper, 2002) to account for rejection rates of 0% (i.e., “separation”; Firth, 1993).
d This is the largest p-value produced by comparing the inferior offer to each of the other conditions in the study.
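As a rough illustration of the sensitivity analysis described in note a, the sketch below (ours) reproduces the calculation with statsmodels rather than G*Power. The alpha level (.05), power (.80), and the mapping of a three-condition design onto a chi-square test with 2 degrees of freedom are our assumptions; the sample size corresponds to the smallest post-exclusion study (N = 130).

```python
# A sketch of the sensitivity power analysis in note (a), done in Python instead of
# G*Power. Alpha = .05, power = .80, and n_bins = 3 (i.e., df = 2) are assumptions.
from statsmodels.stats.power import GofChisquarePower

w = GofChisquarePower().solve_power(effect_size=None, nobs=130, alpha=0.05,
                                    power=0.80, n_bins=3)
print(f"Minimum detectable effect size: w = {w:.2f}")   # approximately .27
```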
2. General method for all studies
To test whether and why individuals might reject qualitatively inferior offers, we adapted the ultimatum game. Specifically, participants received pre-programmed offers that varied on our dimension of interest: whether the qualitative split of coins was inferior or equal. To hold quantity constant, participants were always offered half of the financial value at stake.
In all three studies, participants were first told that they were playing the ultimatum game with another participant in the experiment. This other participant would be anonymous and randomly assigned. The monetary stake was presented as an image in the survey (and was physically present in the laboratory room in Study 1). Participants then read the rules of the ultimatum game and were asked three questions that assessed their understanding of outcomes when offers are accepted or rejected. To ensure that participants had the same, accurate information, feedback was provided as to the correct response. To bolster the cover story that participants would be completing a negotiation with others in the study, we first asked participants to make an offer that was ostensibly presented to another participant. Participants indicated their offer using sliders starting at 0 and moving up in increments of 1 coin (e.g., in the 8€ stake of Study 1, there were 400 steps for 1¢ coins and 2 steps for 2€ coins).
Afterward, participants were shown a loading screen featuring a graphic ‘throbber’ animation that indicated they were being assigned to receive another participant’s offer. This was to bolster the cover story that the offer was coming from a participant and to reduce suspicion that the offer was pre-programmed. After 5 seconds, participants received their predetermined offer as per their randomly assigned condition. How these offers varied in the qualitative distribution between the ostensible proposer and the participant is described in detail in each study. The binary dependent variable in all studies was whether the offer was accepted or rejected.
After the dependent variable, participants completed an attention check which assessed their recollection and understanding of the size of the total stake. Consistent with past research (Bago et al., 2021), we decided a priori to exclude participants who failed the attention check. Detailed information on exclusions is reported in each study. The results of each study are descriptively the same when including all participants in analyses (see Supplementary Material).
All experiments were incentive compatible: 10 decisions from each experiment were executed. If participants declined the offer, they did not receive any money. If participants accepted the offer, they received their portion of the stake. Laboratory participants (Studies 1 and 2) were given the choice between the cash (i.e., the coins they accepted, if they accepted them) and an equivalent gift card. Online participants (Study 3) received their money as a digital bonus to protect their privacy. All original survey materials and data are publicly available (OSF: https://osf.io/epd83/?view_only=e76c5acc92da4f62bdb4cea6ad2d7b33).
3. Study 1
Study 1 was designed to test whether participants would reject a qualitatively inferior offer that was half of the financial stake. To do this, we compared the rejection rate of a qualitatively inferior offer to qualitatively equal and qualitatively superior offers, always holding constant the financial value of these offers (i.e., half of the stake).
Two hundred and two students (the maximum available at that time in the lab; 193 after exclusions) at a large European University completed this study in exchange for course credit and the chance to receive a share of 8€. Participants completed the study in a laboratory room. Before completing the study on a computer in the room, participants were led past a table that held a physical cash stake of 8€ consisting of 400 × 1¢ coins and 2 × 2€ coins. After making their offer, participants were randomly assigned to receive one of three pre-determined offers (see Figure 1): (1) 400 × 1¢ coins (inferior offer; the proposer would keep the 2€ coins); (2) 1 × 2€ coin and 200 × 1¢ coins (equal offer); or (3) 2 × 2€ coins (superior offer). We expected higher rejection rates of the inferior offer compared to the other two offers, but no difference between the rejection rates of the equal and superior offers.
A chi-square analysis indicated at least one significant difference in rejection rates between conditions (Wald χ2 = 28.18, p < .001). Consistent with expectations, z-tests of proportions indicated that participants were more likely to reject the inferior offer (16 participants; 25%) than the equal offer (2 participants; 3%; z = 3.56, p < .001) or the superior offer (0%; z = 4.31, p < .001). Rejection of the equal and superior offers did not differ significantly (z = 1.46, p = .144). Thus, despite being offered half of the money at stake, participants were more likely to reject an offer when an ostensible proposer attempted to keep better coins for themselves than when both parties would get the same coins or when the participant would receive the better coins. This pattern of results does not support the ‘badness’ explanation, which predicts that participants would be more likely to reject the equal offer (since it contained undesirable coins) than the superior offer (which did not contain undesirable coins). Instead, the results are consistent with the possibility that participants rejected the qualitatively inferior offer because the ostensible bargaining partner would keep better coins.
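The pattern above can be re-checked with standard tools, as in the sketch below (ours). Because per-condition sample sizes are not reported in this section, the cell counts are assumptions chosen to match the reported percentages and the total N of 193, and the omnibus test shown is a Pearson chi-square rather than the Wald statistic reported above.

```python
# Illustrative re-analysis of the Study 1 rejection pattern under assumed cell counts.
import numpy as np
from scipy.stats import chi2_contingency
from statsmodels.stats.proportion import proportions_ztest

rejected = np.array([16, 2, 0])        # inferior, equal, superior (reported counts)
n = np.array([64, 65, 64])             # assumed condition sizes summing to N = 193

# Omnibus test on the 3 x 2 table of rejected vs. accepted frequencies.
table = np.column_stack([rejected, n - rejected])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")               # close to the reported 28.18

# Pairwise z-test of proportions, e.g., inferior vs. equal.
z, p_pair = proportions_ztest(count=[16, 2], nobs=[64, 65])
print(f"inferior vs. equal: z = {z:.2f}, p = {p_pair:.4f}")   # close to the reported z = 3.56
```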
4. Study 2
Study 2 aimed to build on Study 1 by evaluating the possible role of fairness in the rejection of qualitatively inferior offers. The fairness account proposes that people reject offers when they infer that the proposer is being ‘rude’ or showing poor manners (Camerer & Thaler, 1995) by making an unequal offer with bad intentions (Blount, 1995; Kagel et al., 1996; Rabin, 1993). Therefore, if participants perceive the qualitatively inferior offer to be unfair (rather than merely unequal), we expect them to perceive the proposer as aggressive or offensive. This perception, in turn, should statistically explain the effect of the type of offer on rejection rate.
One hundred and ninety-two students (the maximum available; 130 after exclusions) at a large Australian University completed the study online in exchange for course credit and the chance to receive their share of $20. Participants were presented with one of three pre-determined offers of $10 from different stakes of $20 (see Figure 2): (1) an inferior offer in which the participant would receive 5¢ coins while the proposer would receive $2 coins; (2) an equal offer where both parties would receive $2 coins; or (3) an equal offer where both parties would receive 5¢ coins. The two equal conditions were included to further evaluate the badness account. The fairness and mere inequality explanations predict a higher rejection rate in the inferior condition than in both equal conditions, whereas the badness explanation predicts higher rejection rates of the inferior and equal (200 × 5¢) offers than of the equal (5 × $2) offer. After the decision to accept or reject the offer, participants indicated how aggressive or offensive they found the proposer to be (“Did you perceive the proposer as offensive or aggressive?”; 1 = Not at all to 7 = Very much).
A chi-square analysis revealed at least one significant difference in rejection rates between conditions (Wald χ2 = 26.89, p < .001). Specifically, participants rejected the inferior offer (32%; 14 people) at a greater rate than the equal ($2) offer of larger coins (0%; z = 3.77, p < .001) and the equal (5¢) offer of smaller coins (1 participant; 2%; z = 3.89, p < .001). The difference between the equal conditions (5¢ vs. $2) was not significant (z = 0.87, p = .384). Thus, conceptually replicating the results of Study 1, participants rejected offers of small coins only when the proposer kept better coins for themselves. This pattern of evidence is consistent with the fairness and mere inequality explanations but not the badness explanation.
Next, we examined participants’ perceptions of the (ostensible) proposer to assess the role of fairness. An ANOVA revealed an effect of experimental condition on perceptions of aggressiveness/offensiveness (F(2, 127) = 20.55, p < .001, partial η2 = .245). Specifically, participants perceived the (ostensible) proposer to be more aggressive/offensive when they received the inferior offer (M = 3.39, SD = 2.09) than when they received the equal ($2) offer (M = 1.46, SD = 0.99; 95% CI for difference = [1.08, 2.77], p < .001) or the equal (5¢) offer (M = 1.59, SD = 1.34; 95% CI for difference = [1.01, 2.58], p < .001). As with the rejection rates, there was no significant difference in perceived aggressiveness/offensiveness between the two equal offer conditions (p = .999).
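As a quick arithmetic check (ours, not part of the original analysis), the partial eta squared above can be recovered directly from the reported F statistic and its degrees of freedom.

```python
# Partial eta squared from the reported one-way ANOVA: eta_p^2 = F*df1 / (F*df1 + df2).
F, df1, df2 = 20.55, 2, 127
eta_p_sq = (F * df1) / (F * df1 + df2)
print(f"partial eta^2 = {eta_p_sq:.3f}")   # ~0.245, matching the reported value
```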
To test whether perceptions of the ostensible proposer mediated the effect of experimental condition on rejection rate, we used Hayes’ (2017) PROCESS macro (Model 4). As we found no differences between the two equal (5¢ vs. $2) conditions in rejection rate or in perceptions of aggressiveness/offensiveness, and for the sake of simplicity, we collapsed them into one ‘Equal ($2/5¢)’ condition (see Supplementary Material for analyses with all three conditions; results are descriptively the same). Accordingly, the independent variable was the type of offer (inferior vs. equal ($2/5¢)), the dependent variable was whether participants rejected the offer, and the putative mediator was the perceived aggressiveness/offensiveness of the proposer. As summarized in Figure 3, and as theorized, participants in the inferior offer condition perceived the proposer as more offensive/aggressive than participants in the equal ($2/5¢) condition. Heightened perceptions of offensiveness/aggressiveness partly explained the increased likelihood of rejecting the inferior (vs. equal) offers.
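For readers who want to see the logic of this analysis in code, below is a minimal sketch (ours, not the authors’ PROCESS output) of simple mediation with a binary outcome: an OLS model for the mediator path, a logistic model for rejection, and a percentile bootstrap of the indirect effect. The variable names and the simulated placeholder data are assumptions for illustration only; the actual data are available on the OSF page.

```python
# A conceptual sketch of PROCESS Model 4 with a binary outcome, using simulated
# placeholder data (not the study data) so the example runs end to end.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 90                                                   # illustrative sample size
offer_inferior = np.repeat([1, 0], n // 2)               # 1 = inferior, 0 = equal offer
aggression = 1.5 + 1.9 * offer_inferior + rng.normal(0, 1.2, size=n)   # mediator
p_reject = 1 / (1 + np.exp(-(-3.0 + 0.8 * aggression)))
rejected = rng.binomial(1, p_reject)                     # binary rejection decision
df = pd.DataFrame({"offer_inferior": offer_inferior,
                   "aggression": aggression,
                   "rejected": rejected})

def indirect_effect(data):
    # Path a: condition -> mediator (OLS); path b: mediator -> rejection (logistic),
    # controlling for condition; indirect effect = a * b.
    a = smf.ols("aggression ~ offer_inferior", data=data).fit().params["offer_inferior"]
    b = smf.logit("rejected ~ aggression + offer_inferior",
                  data=data).fit(disp=0).params["aggression"]
    return a * b

boot = [indirect_effect(df.iloc[rng.integers(0, n, size=n)]) for _ in range(1000)]
print("indirect effect:", round(indirect_effect(df), 2),
      "95% bootstrap CI:", np.round(np.percentile(boot, [2.5, 97.5]), 2))
```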
Consistent with the fairness account, negative inferences about the proposer drove part of the rejection of inferior offers; if the effect were driven solely by mere inequality, this would not be the case. However, there are two caveats: the evidence for the fairness mechanism is correlational, and negative inferences about the proposer remain a relatively indirect and imperfect measure of fairness perceptions. We address these issues in Study 3.
5. Study 3
The key to distinguishing the fairness and mere inequality accounts is that perceptions of unfairness drive individuals to costly punishment: rejecting the offer at a cost to themselves to encourage future positive behavior from the proposer (Fehr & Gächter, 2002). The prerequisite for such costly punishment is the perception that the unfair offer comes from a source that can be punished for this behavior rather than from a neutral party that allocates randomly (Blount, 1995). Therefore, to adjudicate between the fairness and mere inequality explanations for the rejection of qualitatively inferior offers, we varied whether the offer came from a human or a computer (Blount, 1995; Sanfey et al., 2003). In the human condition, participants were told that the offer to split the stake was made by the other participant, whereas in the computer condition, the offer was randomly generated by a computer (for the two humans). In both cases, participants are left with a qualitatively inferior share, and thus the mere inequality account does not predict a difference in rejection rates. However, the fairness account predicts that participants should be more likely to reject the offer from a human than from the computer because the human can be punished for making an unfair offer, while a computer cannot (Blount, 1995). Moreover, we measured perceived fairness at the end of the study and tested whether it mediated the effect of experimental condition on rejection. This study was preregistered (AsPredicted: https://aspredicted.org/9gq9d.pdf).
Nine hundred participants were requested from the online platform Prolific. Nine hundred and nine workers residing in the Netherlands, Belgium, and France completed the study online in exchange for £0.65 and the chance to receive a share of 16€ (908 after exclusions).
Participants were presented with a stake of 16€, consisting of 4 × 2€ coins and 800 × 1¢ coins, to be split with another participant. To bolster our cover story that they could receive a share of the cash stake, participants were told that they would be couriered the cash payment. In reality, payment was made as a digital bonus to preserve participant anonymity.
All responders were offered half of the stake in the form of the inferior denominations: 800 × 1¢ coins (8€). What varied was whether this offer was presented as coming from a human proposer (the other participant) or as the result of a random allocation decided by a computer (exact text provided in the Supplementary Material and on OSF).
After participants made their decision to accept or reject the offer, we measured perceptions of fairness (“How fair was the allocation of the money from the Proposer [computer-generated roulette wheel]?”; 1 = extremely unfair to 7 = extremely fair; Clark et al., 2017). We predicted that participants who believed they received a qualitatively inferior offer from a fellow participant (vs. a computer) would be more likely to reject the offer because they would find it more unfair.
A chi-square analysis indicated that participants were more likely to reject the inferior offer from a human than a computer (Wald χ2 = 10.41, p = .001). Specifically, more participants (17%; 78 participants) rejected the offer when it was presented as coming from a human than when it was presented as from a computer (10%; 45 participants). Supporting the role of perceived fairness, a bootstrapped and bias-corrected (10,000 samples) t-test revealed that participants found the offer to be less fair from a human (M = 4.98, SD = 1.87) than from a computer (M = 5.31, SD = 1.85; 95% CI of difference = [.082, .580], p = .007, d = .18).
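As with Study 1, the pattern can be illustrated with standard two-sample tests. In the sketch below (ours), the per-condition sample sizes are assumed to be roughly equal halves of the post-exclusion sample (about 454 each), and the effect size for fairness is recovered from the reported means and standard deviations.

```python
# Illustrative checks of the Study 3 results under assumed condition sizes (~454 each).
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Rejection of the inferior offer: human proposer (78 of ~454) vs. computer (45 of ~454).
z, p = proportions_ztest(count=[78, 45], nobs=[454, 454])
print(f"human vs. computer: z = {z:.2f}, p = {p:.4f}")   # z^2 is close to the reported Wald chi2

# Cohen's d for perceived fairness, from the reported means and SDs.
m1, sd1, m2, sd2, n1, n2 = 4.98, 1.87, 5.31, 1.85, 454, 454
pooled_sd = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
print(f"d = {abs(m1 - m2) / pooled_sd:.2f}")             # ~0.18, matching the reported value
```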
Next, we evaluated whether perceptions of fairness could explain the effect of experimental condition (the source of the offer) on rejection likelihood using Hayes’ (2017) PROCESS macro (Model 4). The independent variable was the source of the offer (human vs. computer), the dependent variable was whether participants rejected the offer, and the putative mediator was the perceived fairness of the offer. As predicted, the same offer was perceived as less fair coming from a human than from a computer, and this in turn partly explained the increased likelihood of rejecting the offer (see Figure 4).
These results are consistent with both the fairness and mere inequality explanations. In support of the fairness account, participants were more likely to reject an unfair offer from a person, whom they could punish, than from a computer, which they could not punish. Furthermore, this greater rejection rate of the human (vs. computer) offer was partially mediated by fairness perceptions. Nevertheless, 10% of participants rejected an inferior offer from a computer, which is higher than the 0–3% (based on equal offer rejection rates in Studies 1 and 2) one might expect in an equal offer condition (tested against the 3% rejection rate from Study 1 as a conservative estimate: z = 4.17, p < .001). Thus, fairness and mere inequality both appear to play a role.
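The comparison just described follows the same simulated-condition approach as note b of Table 1; the sketch below (ours, with assumed condition sizes) shows how such a z value can be reproduced.

```python
# Reproducing the comparison of the 10% computer-condition rejection rate against a
# simulated same-size condition with Study 1's ~3% equal-offer rejection rate.
from statsmodels.stats.proportion import proportions_ztest

n = 454                                  # assumed computer-condition size
observed = 45                            # reported rejections of the computer offer (~10%)
simulated = round(0.031 * n)             # simulated comparison group at ~3% rejection
z, p = proportions_ztest(count=[observed, simulated], nobs=[n, n])
print(f"z = {z:.2f}, p = {p:.5f}")       # close to the reported z = 4.17
```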
6. General discussion
Past research has shown that people rarely reject offers that give them half of a financial stake (i.e., rejection rates around 0%; Sanfey et al., 2003). In three incentive-compatible ultimatum game studies, we found elevated rejection rates for financially fair offers when such offers were qualitatively inferior (17% to 32%).
Perhaps more importantly, we evaluated three potential mechanisms for the rejection of quantitatively equal but qualitatively inferior offers: badness, mere inequality, and fairness. Taken together, our results were most consistent with the fairness account. Specifically, participants in Study 2 reported that the proposer was more offensive/aggressive when they received an inferior (vs. equal) offer, and those perceptions partially explained the effect of the type of offer on rejection rate. Likewise, participants in Study 3 perceived an inferior offer from a human (vs. a computer) as less fair, which partially explained their increased rejection of the offer. Nevertheless, mere inequality does seem to play a role, as participants rejected inferior offers from the computer at higher rates than expected (Study 3), though more research is needed to confirm this observation. None of the findings were consistent with the badness account.
Complementing previous research on contextual features of fairness (Kahneman et al., 1986), we expand the field’s understanding of fairness by identifying and testing resource quality as a core feature of negotiation. For researchers and practitioners who seek predictive accuracy and efficient outcomes, understanding that both quantity and quality drive fairness is a boon for effective resource allocation. Indeed, negotiators and allocators may face setbacks if they fail to consider the quality of the resources they allocate. For example, divorce negotiations often follow a legislated 50:50 financial split of marital assets (Landers, 2021) but can still fail due to the challenge of allocating familial items that possess qualitative differences (Kristof, 2001). For instance, an offer of $500,000 in financial assets could be rejected when the other party wishes to keep the $500,000 family home, not just because the quality of the financial assets is less desirable, but because the offer is seen as unfair treatment.
While qualitative inequality may be prevalent, it need not be a pitfall, as the current work suggests a potential solution to costly negotiation breakdowns. In Study 3, responders perceived the same qualitatively inferior offer to be fairer when it came from a computer rather than a human. This may seem surprising given ample research suggesting that people are averse to decisions made by algorithms as compared to humans (Dawes, 1979; Dietvorst et al., 2015; Lee, 2018; Longoni et al., 2019). Yet, growing evidence suggests that certain task characteristics can engender trust in and appreciation for decisions made by computer algorithms (Bonezzi & Ostinelli, 2021; Logg et al., 2019). For example, people trust algorithms more than humans when the task needs to be objective and efficient (Lee, 2018). Future research could therefore continue to examine the intersection of intelligent systems, fairness, and qualitative resource allocation to improve negotiation outcomes.
Finally, one open question is whether superior quality might compensate for inferior quantity, or vice versa. To continue the above example, would divorcees still prefer the qualitatively superior $500,000 family home if the alternative were $600,000 in financial assets (rather than $500,000)? While we kept quantity constant (50%) so as to isolate the effect of quality, future research could vary both quality and quantity to examine potential compensatory effects (e.g., are people more likely to accept an offer of less than half of the stake if its quality is superior to what the proposer keeps?). Real-world negotiations are likely to vary in both quality and quantity at the same time, so the study of how people make such trade-offs may be a compelling avenue for future research. For now, the empirical evidence presented in this manuscript suggests that quality, not just quantity, is an important determinant of fairness.
Supplementary material
The supplementary material for this article can be found at https://doi.org/10.1017/jdm.2023.20.
Data availability statement
All original survey materials and data are publicly available (OSF: https://osf.io/epd83/?view_only=e76c5acc92da4f62bdb4cea6ad2d7b33).
Acknowledgments
We thank Hanieh Naeimi for her assistance with Study 3, and we also thank Kathleen Vohs and Marcel Zeelenberg for their friendly reviews of this manuscript.
Funding statement
This research benefited from the support of an Australian Government Research Training Program Scholarship (J.Z.) and was supported by funding from a Social Sciences and Humanities Research Council of Canada (SSHRC) grant awarded to N.L.M. (Grant No. 430-2020-00829). These funders had no role in the conduct, design, analysis, interpretation, or reporting of this research.