
The Effect of Biased Peacekeepers on Building Trust

Published online by Cambridge University Press: 08 June 2023

Jared Oestman
Affiliation: University of Nevada, Las Vegas, NV, USA

Rick K. Wilson*
Affiliation: Rice University, Houston, TX, USA

*Corresponding author: Rick K. Wilson; Email: [email protected]

Abstract

Do unbiased third-party peacekeepers build trust between groups in the aftermath of conflict? Theoretically, we argue that unbiased peacekeepers are the most effective at promoting trust. To isolate the causal effect of bias on trust, we use an iterated trust game in a laboratory setting. Groups that previously engaged in conflict are placed in a setting in which they choose whether to trust and whether to reciprocate trust. Our findings suggest that biased monitors impede trust while unbiased monitors promote cooperative exchanges over time. The findings contribute to the peacekeeping literature by highlighting impartiality as an important condition under which peacekeepers build trust post-conflict.

Research Article

© The Author(s), 2023. Published by Cambridge University Press on behalf of The Experimental Research Section of the American Political Science Association

Our motivation stems from the simple observation that third-party peacekeepers are commonly used in the aftermath of civil war. Peacekeepers are put in place not only to provide security guarantees but also to promote trust between ex-combatants. As Nomikos (2022) argues, peacekeepers act to increase cooperation between ex-combatants, especially when levels of trust are low. In effect, peacekeepers fill the void left by a weakened state in post-conflict settings. Building trust involves getting both sides to understand the benefits of cooperating with one another. But what happens when peacekeepers have a stake in ensuring that one side of the underlying conflict prevails (Benson and Kathman 2014; Rhoads 2016)? When a third party acts in a biased manner, efforts to promote trust between ex-combatants will be undermined. While this is commonly assumed, the causal link between impartiality in third-party peacekeeping and post-conflict cooperation has, to our knowledge, not been directly tested.

We use a laboratory setting to isolate the effects of bias among third-party peacekeepers on trust. We implement an iterated trust game between strangers previously in conflict, with three separate treatments: biased third-party monitoring, unbiased third-party monitoring, and no third-party monitoring. In line with our expectations, we find that biased third-party monitors impede trust relative to unbiased third-party monitors. The findings from the lab suggest that the type of third-party monitor affects trust between actors previously in conflict. Overall, our findings contribute to a growing line of research that draws attention to the different roles peacekeepers fill in promoting cooperation in post-conflict settings.

Post-conflict cooperation and impartiality in peacekeeping

Groups previously involved in conflict face uncertainty about the intentions of their opponents and often lack important information about their actions. After a break in fighting, ex-combatants remain mobilized because of uncertainty about whether their counterparts will abide by the terms of a settlement (Posen 1993). For example, if there is a small skirmish, rival parties may question whether the fight arose from a deliberate attempt to renew fighting or was accidental. Third-party peacekeepers who honestly report accidents may play a role in de-escalating conflict and building trust between ex-combatants.

Studies focusing on peacekeeping in post-civil war environments show that exposure and positive interactions with peacekeepers result in higher levels of trust and a greater willingness to cooperate with peace efforts (Blair 2019, 2021; Gordon and Young 2017; Mironova and Whitt 2017; Nomikos 2022; Smidt 2020). We conjecture that impartiality enables peacekeepers to promote trust.

Our central claim is that success in third-party monitoring requires that peacekeepers be unbiased in their interactions with all sides. We hypothesize that peacekeeper bias undermines trust between opponents, whereas impartiality helps to promote it. Our hypothesis closely aligns with the formal model of Kydd (2006). At the core of Kydd's model is mistrust among disputing parties. Following a peace agreement, each of the contending actors is uncertain about whether their opponent will abide by or renege on the agreement. This creates an incentive for actors to return to violence if they perceive that their counterparts are violating the agreement. Third parties can aid actors in overcoming mistrust by monitoring each side's actions and providing assurances that each is committed to the peace. The key to doing this effectively is for the third party to remain unbiased. While Kydd's concern is with third-party mediators, the logic extends to the context of peacekeeping (see footnote 1).

We expect that peacekeepers are more effective at building trust in their role as monitors when they remain unbiased in their actions and provide honest reports of each side's activities. By comparison, biased third-party monitors are likely to be cast as unreliable and, as a consequence, do nothing to build trust (see footnote 2). This leads to our main hypothesis:

H: Unbiased peacekeepers enable trust whereas biased peacekeepers reinforce distrust between contending actors.

Testing the relationship between bias and trust using observational data is difficult. Consequently, we turn to a laboratory environment in which we create conflict between groups and manipulate bias. Using a standard "trust" game, we measure the level of trust and test competing predictions concerning the directional effects of bias. We have four pre-registered empirical predictions, noted below (see Supporting Information, Section A, and footnote 3). We note that the trust game has often been used to test levels of cooperation between competing groups. Gilligan, Pasquale, and Samii (2014) carry out a lab-in-the-field experiment measuring trusting and trustworthy behavior in Nepalese villages differentially affected by violence. While they do not match ex-combatants, Bauer, Fiala, and Levely (2018) do so in a lab-in-the-field experiment from Northern Uganda. Their results are mixed, with younger combatants being more trusting. Others have used the trust game to focus on existing political divisions (Carlin and Love 2013, 2018; Carlin, Love, and Young 2020; Iyengar and Westwood 2015; Westwood and Peterson 2020) or ethnic divisions (Carlin, Love, and Young 2020; Carlin et al. 2022; Cassar, Grosjean, and Whitt 2013). The trust game has become a workhorse for studying aspects of inter-group conflict in political science.

Experimental design

We use a multi-stage experimental design (see the timeline in Supporting Information Figure B.1). First, we construct two groups whose membership depends on an unrelated task. Second, we put those groups into competition with one another and create hostility between them. Third, we use an iterated trust game (Anderhub, Engelmann, and Guth 2002; Berg, Dickhaut, and McCabe 1995; Engle-Warnick and Slonim 2004). Fourth, we introduce the possibility of losses in the transfer of money in the trust game (an accident). Finally, we introduce a computerized monitor who reports to the trustor and trustee.

A total of 144 subjects participated in the computerized experiment, and all subjects were drawn from the subject pool for the Behavioral Research Laboratory at Rice University (see footnote 4). Average earnings were $17.53. All sessions involved eight subjects, with one, two, or three sessions being run simultaneously. Subjects were randomly assigned to a session and a computer terminal by choosing a card from a shuffled deck. Randomization was at the session level and was implemented by the computer program, so both the experimenter and the subjects were blind to treatment.

A key element of the discussion above involves groups in conflict. There are a number of approaches to artificially building groups and getting the groups to dislike one another (see, e.g., Halevy, Bornstein, and Sagiv 2008; Halevy, Weisel, and Bornstein 2012; Abbink and Harris 2019). In our experimental design, we build two groups using the minimal group paradigm (Tajfel and Turner 1978). After consenting to the study, subjects are seated at individual computer terminals and given a dot estimation task. A screen is flashed for four seconds, and subjects are taken to a new page where they are asked how many dots they saw. Once everyone completes this task, subjects are sorted by response time and assigned to a "Yellow" or "Green" group based on the speed with which they responded. Subjects are told they are in a group that is similar in terms of the time taken to respond, not accuracy. No one is told whether they are in the fast or slow group (see footnote 5). A sketch of this assignment procedure appears below.
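To make the procedure concrete, here is a minimal sketch of the speed-based sort, assuming a simple split of each eight-subject session into a faster and a slower half; all names are our own illustration rather than the authors' software.

```python
# Illustrative sketch of the minimal-group assignment (hypothetical names,
# not the authors' code): subjects are sorted by response time on the dot
# task and split into a faster half and a slower half.

def assign_groups(response_times_ms):
    """Split a session into 'Yellow' and 'Green' by response speed."""
    order = sorted(range(len(response_times_ms)),
                   key=lambda i: response_times_ms[i])
    half = len(order) // 2
    groups = {}
    for rank, subject in enumerate(order):
        # Labels are arbitrary; subjects are never told which half is faster.
        groups[subject] = "Yellow" if rank < half else "Green"
    return groups

# Example: one eight-subject session.
times = [598, 641, 612, 575, 660, 590, 630, 605]
print(assign_groups(times))
```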

Following assignment, the groups engage in a version of the contest game (Abbink et al. 2010), in which both compete for a large prize (a different approach, in which individuals destroy the resources of others, is detailed in Scacco and Warren 2018). Subjects are given a private endowment of 10 ECUs (Experimental Currency Units) that they can keep for themselves or use to harm the other group. Both groups are given 24 computerized lottery tickets, and one is drawn. If the Yellow group's ticket is drawn, then all Yellow group members share the prize of 80 ECUs, and those in the Green group get nothing. To generate inter-group hostility, subjects can spend any amount of their endowment to destroy the other group's lottery tickets: for every ECU spent, one lottery ticket belonging to the other group is destroyed. The parameters are set such that if every subject in a group spends 6 ECUs, all of the other group's lottery tickets are destroyed. Of course, the other group's members can do the same (if all lottery tickets are destroyed, neither group gets the large prize). Subjects are only told how many of their own group's lottery tickets were destroyed and whether they won or lost the big prize. They are not told how many of the other group's lottery tickets were destroyed, in order to reduce hostility toward group members who free-ride. This constitutes a public goods problem: subjects are individually better off free riding on the efforts of their group members (keeping their own endowment) while sharing in the big prize, if it is won. This game is played three times, with subject endowments and the big prize recreated each time. The equilibrium of the game is never to spend any resources destroying the other group's tickets. Subjects, however, do spend, and this builds hostility between the groups (see Whitt, Wilson, and Mironova 2021).
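The incentives in the contest game can be summarized in a short simulation. The sketch below is our own illustration of one round under the stated parameters (four subjects per group, 10-ECU endowments, 24 tickets per group, an 80-ECU prize); it is not the authors' code.

```python
import random

# One round of the contest game as described above (illustrative names).
ENDOWMENT, TICKETS, PRIZE = 10, 24, 80

def play_round(yellow_spending, green_spending):
    """Each ECU spent destroys one of the other group's lottery tickets."""
    yellow_left = max(0, TICKETS - sum(green_spending))
    green_left = max(0, TICKETS - sum(yellow_spending))
    pool = ["Yellow"] * yellow_left + ["Green"] * green_left
    winner = random.choice(pool) if pool else None  # no tickets -> no prize
    payoffs = {}
    for group, spending in (("Yellow", yellow_spending),
                            ("Green", green_spending)):
        share = PRIZE / len(spending) if winner == group else 0
        payoffs[group] = [ENDOWMENT - s + share for s in spending]
    return winner, payoffs

# If every Yellow subject spends 6 ECUs, all 24 Green tickets are destroyed.
print(play_round([6, 6, 6, 6], [0, 0, 0, 0]))
```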

To gauge whether trust increases or declines over time, we use an iterated version of the trust game. In the trust game, both players are given the same endowment, and the first mover (trustor) decides how much of the endowment, if any, to send to the second mover (trustee). The amount sent is tripled and given to the second mover, who then decides how much, if anything, to send back to the first mover. The amount sent is a measure of trust, and the proportion returned is a measure of trustworthiness. Either the Yellow or Green group is randomly chosen to be the first mover. Subjects keep their role throughout and make four decisions with four different counterparts (with eight subjects per session, each first mover is matched with four different second movers; the algorithm uses a zipper design to ensure stranger matching, sketched below). Keeping subjects in the same role means they only gain experience in that role, which should reduce noise in the data. First movers knew what they sent and what they received in each decision. Second movers knew what they received and what they returned in each decision. Before being told what is returned, the first mover is asked to predict what will be returned. Likewise, the second mover is asked to predict how much is sent just before learning what is sent. We use these non-incentivized predictions as measures of beliefs about counterparts' actions. Much of the design involves a standard trust game commonly used by researchers in both the laboratory and the field (Carlin and Love 2013, 2018; Carlin et al. 2022; Gilligan, Pasquale, and Samii 2014; Johnson and Mislin 2011; Westwood and Peterson 2020; Wilson and Eckel 2011).
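As a concrete illustration, the sketch below implements the baseline trust-game payoffs (before any accident) and one way a zipper matching can be constructed; the function names and the exact rotation rule are our assumptions, not the authors' implementation.

```python
# Baseline trust-game payoffs and a simple "zipper" (round-robin) matching.
MULTIPLIER = 3

def trust_payoffs(endowment, sent, returned):
    """First mover sends `sent`; it is tripled; second mover returns `returned`."""
    first = endowment - sent + returned
    second = endowment + MULTIPLIER * sent - returned
    return first, second

def zipper_pairs(n_pairs=4, n_rounds=4):
    """Rotate matches so first mover i meets second mover (i + r) mod n:
    every first mover faces a different counterpart each round."""
    return [[(i, (i + r) % n_pairs) for i in range(n_pairs)]
            for r in range(n_rounds)]

print(trust_payoffs(endowment=10, sent=5, returned=7))  # -> (12, 18)
for r, pairs in enumerate(zipper_pairs()):
    print(f"round {r + 1}: {pairs}")
```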

The treatments add uncertainty about what is returned by the second mover. First and second movers are told there is a 50% chance that some of what is returned by the second mover is lost (through an accident) and a 50% chance that everything returned in fact arrives. If ECUs are lost, subjects are told that between 25% and 100% of what is returned is lost. In the algorithm, if ECUs are lost, there is a one-third chance each that 25%, 50%, or 100% of the returned ECUs are lost (see footnote 6). Uncertainty about what is returned is key to this element of the study: the second mover can hide behind it and return little or nothing to the first mover. This is like the hidden action design used by Charness and Dufwenberg (2006).
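The loss mechanism can be written down directly. This is a sketch under the stated probabilities; the function name is ours.

```python
import random

# The "accident" draw as described: a 50% chance that part of the returned
# amount is lost and, conditional on a loss, an equal chance that 25%, 50%,
# or 100% of it disappears.

def apply_accident(returned):
    if random.random() < 0.5:
        loss_share = random.choice([0.25, 0.50, 1.00])
    else:
        loss_share = 0.0
    received = returned * (1 - loss_share)
    return received, loss_share > 0

print(apply_accident(12))  # e.g. (6.0, True) if half is lost
```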

In the No Monitor condition, everything happens as described above: first movers are only told what is returned and are not informed about whether anything is lost along the way. In the Unbiased Monitor condition, subjects are told that a monitor reports whether ECUs are lost. This monitor is an automaton (part of the computer program) and always reports truthfully. In this sense, the first mover can impute motives to the second mover if what is returned is below expectations and both players know nothing was lost. The monitor only reports whether ECUs are lost and does not reveal how many. In the Biased Monitor condition, subjects are told that the monitor (also an automaton) sends a false report 50% of the time. Importantly, the nature of that report always advantages the second mover, obfuscating that subject's actions. The instructions are given in the Supporting Information, Section E.
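The two automated monitors can be sketched as follows. The unbiased monitor's behavior follows directly from the text; for the biased monitor we assume, as one reading of "always advantages the second mover," that every false report claims ECUs were lost.

```python
import random

def unbiased_monitor(loss_occurred):
    """Always reports the true state; never reveals how much was lost."""
    return "ECUs were lost" if loss_occurred else "nothing was lost"

def biased_monitor(loss_occurred):
    """Half the time reports truthfully; otherwise issues the report that
    shields the second mover ("ECUs were lost"). This is our reading of
    the design, not a confirmed specification."""
    if random.random() < 0.5:
        return unbiased_monitor(loss_occurred)
    return "ECUs were lost"

print(unbiased_monitor(False), "|", biased_monitor(False))
```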

Predictions

We manipulate the bias shown by a third-party monitor and assume that any bias favors the trustee. The first two predictions provide a clear ordering across different types of monitors. The first prediction (P1) holds that Unbiased Monitors will build trust. We build on the fact that Unbiased Monitors fully inform both sides about accidents. Since both sides know that all actions are reported, there is no room to hide behind lapses in reporting. This leads us to expect trust to emerge under an Unbiased Monitor and not with a Biased Monitor or No Monitor.

P1. An Unbiased Monitor promotes higher levels of trust between counterparts than a Biased Monitor or No Monitor.

We predict there to be no difference between a Biased Monitor and No Monitor. In both instances, trustees are able to hide behind imperfect information. In the No Monitor case, this is obvious since no information is provided about whether there was an accident. In the Biased Monitor case, we assume that any information provided is perceived as unreliable by trustors.

P2. Levels of trust will not differ in the presence of a Biased Monitor or No Monitor.

The flip side of trust is trustworthiness. Trust can emerge if the trustee reinforces trusting behavior. We predict that an Unbiased Monitor encourages reciprocity because the trustee can only hide behind uncertainty. In the case of a Biased Monitor, the trustee can hide behind both uncertainty and the fact that the monitor misreports half the time. This leads to P3.

P3. An Unbiased Monitor will promote higher levels of reciprocity than a Biased Monitor.

Our fourth prediction is, at first blush, less obvious. If a Biased Monitor always misreported transgressions by the trustee, the trustee could hide behind the monitor's complete bias. But if the Biased Monitor sometimes tells the truth, the trustee's ability to hide behind the monitor is reduced, and uncertainty in reporting will lead the trustee to occasionally reciprocate. This is not the case in the No Monitor condition, where the trustee can hide behind the fact that there is no reporting at all. Perversely, even a bad monitor is marginally better than no monitor.

P4. A Biased Monitor will promote marginally higher levels of reciprocity than No Monitor.

Findings

Conflict

One of our primary concerns is whether the groups engage in hostile behavior. We measure this by whether subjects spent ECUs to destroy the lottery tickets belonging to the out-group. Indeed, they did. As Fig. 1 shows, between 28% and 52% of tickets were destroyed, on average. Fewer tickets were destroyed in the first period than in the last. Groups that destroyed more tickets than their rivals won the lottery 79.6% of the time. Groups destroyed all of the other group's lottery tickets 38.9% of the time.

Figure 1. Results from the Contest Game. Points represent the average tickets destroyed, and the bars represent two standard errors around the mean.

Trust

We next turn to the trust component of the study. Our predictions are clear. An Unbiased Monitor will lead to the highest levels of trust (Prediction 1). A Biased Monitor and No Monitor will yield similar, low levels of trust (Prediction 2).

Panel A of Fig. 2 presents hinge plots along with the distribution of trust decisions. The figure indicates that there is more trust in settings with an Unbiased Monitor than with a Biased Monitor. Panel B provides the means broken out by period. The figure appears to confirm our first two predictions. However, under a one-tailed t-test, we cannot reject the null hypothesis of no difference between the Unbiased and Biased Monitors (t = 1.21, df = 190, p = 0.11). Likewise, using a conventional cutoff, a non-parametric test cannot reject the null (Kruskal-Wallis χ² = 2.72, p = 0.10). However, a Kolmogorov-Smirnov test of equality between two distributions shows that these distributions are different (D = 0.198, p = 0.02). As expected under Prediction 2, there is no difference in what is sent under the Biased Monitor and No Monitor conditions (t = −0.21, df = 190, p = 0.58); similarly, a non-parametric test cannot reject the null (Kruskal-Wallis χ² = 0.07, p = 0.80), and a Kolmogorov-Smirnov test cannot reject equality of the distributions (D = 0.117, p = 0.14). Additional discussion of these differences in the distributions is given in the Supporting Information (Section C.1). When beliefs about the actions of the second mover are included, we find separation between the Biased and Unbiased Monitor treatments (see Figure C2.1 in the Supporting Information).
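For readers working with the replication data, the three tests reported here are standard and can be reproduced along the following lines; the arrays below are placeholders of our own, not the study's data.

```python
import numpy as np
from scipy import stats

# Placeholder vectors standing in for first-mover transfers by treatment
# (the reported df = 190 is consistent with 96 observations per cell).
rng = np.random.default_rng(0)
unbiased_sent = rng.integers(0, 11, size=96)
biased_sent = rng.integers(0, 11, size=96)

# One-tailed t-test (Unbiased > Biased), Kruskal-Wallis, two-sample KS.
t = stats.ttest_ind(unbiased_sent, biased_sent, alternative="greater")
kw = stats.kruskal(unbiased_sent, biased_sent)
ks = stats.ks_2samp(unbiased_sent, biased_sent)
print(t.pvalue, kw.pvalue, ks.pvalue)
```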

Figure 2. Distributions of trust decisions by treatment.

Trustworthiness

Trust is only part of the equation. With no monitor, or with a biased monitor favoring the second mover, a reciprocator can get away with returning nothing and let the first mover imagine that ECUs were lost in the exchange. We have two clear predictions. Prediction 3 holds that an Unbiased Monitor should lead to higher levels of reciprocity than a Biased Monitor. Prediction 4 states that even a Biased Monitor should lead to higher levels of reciprocity than No Monitor.

Panel A of Fig. 3 is a hinge plot including the jittered distribution of what is returned. Because what is returned depends on what is sent, we express each point as a percentage of the tripled amount sent. Second movers could also send some of their own endowment back, and this happened in a small number of instances. Instances in which the first mover sent nothing are omitted. Panel B presents the average percentage returned by period. It is useful to note that a 33.3% return is the break-even point: any percentage below this and the trustor gets back less than was sent; anything above it and trust pays. We do not report what the first mover actually received, due to losses in the return.
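The break-even arithmetic is worth spelling out. With endowment e, an amount sent s, and an amount returned r, the first mover comes out ahead only if the return covers the transfer, which pins down the 33.3% figure:

```latex
% Break-even return share in the trust game: the first mover keeps
% e - s of the endowment and receives r back from the tripled transfer 3s.
\[
e - s + r \ge e
\quad\Longleftrightarrow\quad
r \ge s
\quad\Longleftrightarrow\quad
\frac{r}{3s} \ge \frac{1}{3} \approx 33.3\%.
\]
```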

Figure 3. Percentage returned (Reciprocity) by second mover.

From Panel A of Fig. 3, it appears that the ordering we hypothesized holds. However, there is no significant difference between the Unbiased and Biased Monitor conditions (Prediction 3). Under a one-tailed t-test, we cannot reject the null hypothesis of no difference between the two types of monitors (t = 1.55, df = 148, p = 0.06). Similarly, we cannot reject the null hypothesis using a non-parametric test (Kruskal-Wallis χ² = 3.39, p = 0.06). However, there is a statistically significant difference between the Biased Monitor and No Monitor conditions, consistent with Prediction 4 (t = −2.79, df = 144, p = 0.006; Kruskal-Wallis χ² = 6.43, p = 0.01). Overall, the ordering of our hypotheses appears to be correct: second movers return less when they can hide behind biased or absent information. Additional analysis of the distributions is given in the Supporting Information, Section C.3.

Discussion

The findings from the laboratory are instructive, but not fully convincing. On the positive side, trust decisions change in the predicted direction, and trustees are responsive to the treatments in the predicted directions. Trustees take advantage of being able to hide behind biased or absent information. They also appear to learn, over time, that they can do so without retribution.

The problem is that the results are weak. This is not because the study is underpowered; even with weak stimuli, we see steady directional shifts in line with our predictions. In the lab, we had the attention of students for less than one hour. Their groups were assigned on a basis that had nothing to do with the tasks at hand. Their level of out-group anger was relatively low. They had not engaged in a long-term war of attrition. The trust environment into which they were placed was very sterile: they were paired with anonymous players from the other group, their interactions were quick and never repeated with the same counterpart, and the information provided by a computerized monitor about losses was very limited. At the end of the day, subjects walked out of the lab with earnings.

If these subjects had far worse experiences with the other group and if they interacted over a longer period of time, we suspect that the trends we observe would have continued. The fact that subjects responded to such weak stimuli is impressive and gives us confidence that the mechanism of unbiased reporting is important for building trust.

The laboratory environment used here is very unlike the situation in which current or ex-combatants find themselves. For those groups, the level of animosity and distrust is extremely high, and building trust is likely to take a long time. The same is true for peacekeepers, who may need time to build a reputation for impartiality in each new context they enter. Other real-world factors may also shape prior combatants' perceptions of peacekeeper impartiality and their willingness to cooperate: culture, identity, social networks, and institutions may influence the extent to which actors perceive peacekeepers as biased and thereby moderate actors' willingness to cooperate. Our study admittedly does not account for such factors. As well, post-conflict environments often experience breaches in the peace, with at least one of the conflicting parties returning to violence.

We recognize that peacekeepers may face greater difficulty convincing belligerents to re-commit to cooperation. While our design holds constant the history of inter-group cooperation and conflict, we expect that unbiased peacekeepers are more effective in accomplishing this objective. Future research could further consider the role of impartiality among peacekeepers in promoting peace following ceasefire violations. Overall, we have reason to expect that the weak effects we observe would only be stronger in natural settings.

Conclusion

An important way in which peacekeepers might promote cooperation post-conflict is by monitoring the behaviors of ex-combatants and providing updates on each side’s commitment to the peace. In essence, they can act as referees, noting when there has been a transgression and alerting both sides about the seriousness of an infraction. Peacekeeper impartiality in monitoring seems critical. We provide plausible evidence indicating that the role of peacekeepers as unbiased monitors can be important for building trust between groups in conflict.

Supplementary material

To view the supplementary material for this article, please visit https://doi.org/10.1017/XPS.2023.12

Data availability statement

All data and code are available at the Open Science Framework, https://doi.org/10.17605/OSF.IO/Q5JBK, and at Wilson (2023), https://doi.org/10.7910/DVN/P2SPSD.

Competing interest

The authors certify that there are no conflicts of interest in carrying out this research or in the results reported in the manuscript. Both authors have filed COI statements at their respective universities.

Ethics statement

The research was approved by the IRB at Rice University (IRB-FY2019-68) for the project titled "Study2018e Bilateral Bargaining JO." Copies of the IRB approval are available at the Open Science Framework, https://doi.org/10.17605/OSF.IO/Q5JBK, and at Dataverse, https://doi.org/10.7910/DVN/P2SPSD.

Footnotes

This article has earned badges for transparent research practices: Open Data and Open Materials. For details see the Data Availability Statement.

1 Kydd also shows, in an iterative extension of his model, that third parties can overcome only limited predispositions toward bias by building a reputation as an honest interlocutor. Thus, in both versions of the model, the key to building trust is a commitment to honesty. However, this is only possible when the mediator is not extremely biased.

2 This is in line with what should be expected under a "babbling" equilibrium described by Kydd (2006), in which "the players disregard the mediator and act on their prior beliefs."

3 These predictions were pre-registered at https://doi.org/10.17605/OSF.IO/Q5JBK. The anonymized pre-registration is in the Supporting Information, Section A. Replication data are also available at Wilson (2023).

4 Drawing on results from standard trust games, we identified the sample size needed to obtain 0.9 power to detect a difference between means of 5 and 3.25 with a standard deviation of 2 and a sampling ratio of 1, using a power calculator provided by powerandsamplesize.com.
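For reference, the same target can be recovered with a standard power routine. This is our sketch, assuming a conventional two-sided alpha of 0.05 (not stated in the footnote), rather than the calculator the authors cite.

```python
from statsmodels.stats.power import TTestIndPower

# Cohen's d implied by the footnote: (5 - 3.25) / 2 = 0.875.
d = (5 - 3.25) / 2
n = TTestIndPower().solve_power(effect_size=d, power=0.9, alpha=0.05, ratio=1)
print(round(n, 1))  # roughly 28-29 subjects per cell under these assumptions
```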

5 The means by group were 598 milliseconds and 641 milliseconds. The difference between the groups is quite small.

6 Irrespective of the probabilities, the equilibrium for the game is the same across treatments. Under backward induction, the first mover should never send anything. However, behaviorally this rarely happens.

References

Abbink, Klaus, Brandts, Jordi, Herrmann, Benedikt, and Orzen, Henrik. 2010. Intergroup Conflict and Intra-Group Punishment in an Experimental Contest Game. American Economic Review 100(1): 420–47.
Abbink, Klaus and Harris, Donna. 2019. In-Group Favouritism and Out-Group Discrimination in Naturally Occurring Groups. PloS One 14(9): e0221616.
Anderhub, Vital, Engelmann, Dirk, and Guth, Werner. 2002. An Experimental Study of the Repeated Trust Game with Incomplete Information. Journal of Economic Behavior and Organization 48(2): 197–216.
Bauer, Michal, Fiala, Nathan, and Levely, Ian. 2018. Trusting Former Rebels: An Experimental Approach to Understanding Reintegration after Civil War. The Economic Journal 128(613): 1786–819.
Benson, Michelle, and Kathman, Jacob D. 2014. United Nations Bias and Force Commitments in Civil Conflicts. The Journal of Politics 76(2): 350–63.
Berg, Joyce E., Dickhaut, John W., and McCabe, Kevin. 1995. Trust, Reciprocity, and Social History. Games and Economic Behavior 10(1): 122–42.
Blair, Robert A. 2019. International Intervention and the Rule of Law after Civil War: Evidence from Liberia. International Organization 73(2): 365–98.
Blair, Robert A. 2021. UN Peacekeeping and the Rule of Law. American Political Science Review 115(1): 51–68.
Carlin, Ryan E., González, Roberto, Love, Gregory J., Miranda, Daniel Andres, and Navia, Patricio D. 2022. Ethnicity or Policy? The Conditioning of Intergroup Trust in the Context of Ethnic Conflict. Political Psychology 43(2): 201–20.
Carlin, Ryan E., and Love, Gregory J. 2013. The Politics of Interpersonal Trust and Reciprocity: An Experimental Approach. Political Behavior 35(1): 43–63.
Carlin, Ryan E., and Love, Gregory J. 2018. Political Competition, Partisanship and Interpersonal Trust in Electoral Democracies. British Journal of Political Science 48(1): 115–39.
Carlin, Ryan E., Love, Gregory J., and Young, Daniel J. 2020. Political Competition, Partisanship, and Interpersonal Trust under Party Dominance: Evidence from Post-Apartheid South Africa. Journal of Experimental Political Science 7(2): 101–11.
Cassar, Alessandra, Grosjean, Pauline, and Whitt, Sam. 2013. Legacies of Violence: Trust and Market Development. Journal of Economic Growth 18(3): 285–318.
Charness, Gary and Dufwenberg, Martin. 2006. Promises and Partnership. Econometrica 74(6): 1579–601.
Engle-Warnick, Jim and Slonim, Robert L. 2004. The Evolution of Strategies in a Repeated Trust Game. Journal of Economic Behavior & Organization 55(4): 553–73.
Gilligan, Michael J., Pasquale, Benjamin J., and Samii, Cyrus. 2014. Civil War and Social Cohesion: Lab-in-the-Field Evidence from Nepal. American Journal of Political Science 58(3): 604–19.
Gordon, Grant M. and Young, Lauren E. 2017. Cooperation, Information, and Keeping the Peace: Civilian Engagement with Peacekeepers in Haiti. Journal of Peace Research 54(1): 64–79.
Halevy, Nir, Bornstein, Gary, and Sagiv, Lilach. 2008. "In-Group Love" and "Out-Group Hate" as Motives for Individual Participation in Intergroup Conflict: A New Game Paradigm. Psychological Science 19(4): 405–11.
Halevy, Nir, Weisel, Ori, and Bornstein, Gary. 2012. "In-Group Love" and "Out-Group Hate" in Repeated Interaction between Groups. Journal of Behavioral Decision Making 25(2): 188–95.
Iyengar, Shanto and Westwood, Sean J. 2015. Fear and Loathing across Party Lines: New Evidence on Group Polarization. American Journal of Political Science 59(3): 690–707.
Johnson, Noel D. and Mislin, Alexandra A. 2011. Trust Games: A Meta-Analysis. Journal of Economic Psychology 32(5): 865–89.
Kydd, Andrew H. 2006. When Can Mediators Build Trust? American Political Science Review 100(3): 449–62.
Mironova, Vera and Whitt, Sam. 2017. International Peacekeeping and Positive Peace: Evidence from Kosovo. Journal of Conflict Resolution 61(10): 2074–104.
Nomikos, William G. 2022. Peacekeeping and the Enforcement of Intergroup Cooperation: Evidence from Mali. The Journal of Politics 84(1): 194–208.
Posen, Barry R. 1993. The Security Dilemma and Ethnic Conflict. Survival 35(1): 27–47.
Rhoads, Emily Paddon. 2016. Taking Sides in Peacekeeping: Impartiality and the Future of the United Nations. Oxford: Oxford University Press.
Scacco, Alexandra and Warren, Shana S. 2018. Can Social Contact Reduce Prejudice and Discrimination? Evidence from a Field Experiment in Nigeria. American Political Science Review 112(3): 654–77.
Smidt, Hannah M. 2020. United Nations Peacekeeping Locally: Enabling Conflict Resolution, Reducing Communal Violence. Journal of Conflict Resolution 64(2–3): 344–72.
Tajfel, Henri, and Turner, John. 1978. Social Categorization and Social Discrimination in the Minimal Group Paradigm. In Differentiation between Social Groups: Studies in the Social Psychology of Intergroup Relations, ed. Tajfel, H. London: Academic Press.
Westwood, Sean J., and Peterson, Erik. 2020. The Inseparability of Race and Partisanship in the United States. Political Behavior 44: 1–23.
Whitt, Sam, Wilson, Rick K., and Mironova, Vera. 2021. Inter-Group Contact and Out-Group Altruism after Violence. Journal of Economic Psychology 86: 102420.
Wilson, Rick. 2023. Replication Data for: The Effect of Biased Peacekeepers on Building Trust. Harvard Dataverse. https://doi.org/10.7910/DVN/P2SPSD
Wilson, Rick K., and Eckel, Catherine C. 2011. Trust and Social Exchange. In The Handbook of Experimental Political Science, eds. Druckman, James N., Green, Donald P., Kuklinski, James H., and Lupia, Arthur. Boston, MA: Cambridge University Press.