
Differential efficacy of survey incentives across contexts: experimental evidence from Australia, India, and the United States

Published online by Cambridge University Press: 04 October 2024

Katharine Conn
Affiliation:
Teachers College, Columbia University, New York, NY, USA
Cecilia Hyunjung Mo
Affiliation:
Department of Political Science, Goldman School of Public Policy, University of California, Berkeley, Berkeley, CA, USA
Bhumi Purohit*
Affiliation:
McCourt School of Public Policy, Georgetown University, Washington, DC, USA
Corresponding author: Bhumi Purohit; Email: [email protected]

Abstract

Scholars often use monetary incentives to boost participation rates in online surveys. This technique follows existing literature from western countries, which suggests egoistic incentives effectively boost survey participation. Positing that incentives’ effectiveness varies by country context, we tested this proposition through an experiment in Australia, India, and the USA. We compared three types of monetary lotteries to narrative and altruistic appeals. We find that egoistic rewards are most effective in the USA and, to some extent, in Australia. In India, respondents are just as responsive to altruistic incentives as to egoistic incentives. Results from an adapted dictator game corroborate these patterns. Our results caution scholars against exporting survey participation incentives to areas where they have not been tested.

Type
Research Note
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial licence (http://creativecommons.org/licenses/by-nc/4.0), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original article is properly cited. The written permission of Cambridge University Press must be obtained prior to any commercial use.
Copyright
Copyright © The Author(s), 2024. Published by Cambridge University Press on behalf of EPS Academic Ltd

As Internet access expands, scholars are increasingly employing web surveys to reach populations that were previously only reachable through field contact or mail. However, web-based surveys remain prone to low response rates (Göritz, 2006). Researchers have found that incentives typically boost survey participation (ibid.), and incentives are typically designed in accordance with three theories of why individuals respond to surveys: (1) egoistic reasons (e.g., respondents are motivated by monetary incentives that advance their self-interest); (2) altruistic reasons (e.g., respondents are motivated by the promise of enabling a social good); and (3) survey-related reasons (e.g., respondents are motivated by their interest in the survey topic itself). Meta-analyses of incentive studies by Church (1993), Edwards et al. (2002), Singer and Bossarte (2006), Dillman (2011), and Singer and Ye (2013) all show that, on average, egoistic incentives are most effective at boosting response rates.

These findings, however, come with a major limitation. The preponderance of studies on survey incentives has been conducted in the USA, Canada, or Western Europe (Meuleman et al., 2018). Findings from these studies may not be applicable in countries with different cultural, social, political, or economic contexts. We help overcome this lacuna in the survey methodology literature by implementing an original online experiment on a comparable sample of individuals in three countries: Australia, India, and the USA. We examined which incentive strategies work best to elicit online survey participation for a like-minded set of individuals, namely recent college graduates interested in joining a national service organization. While the results from these three countries are not nationally representative, holding the sub-population type constant allows us to better assess how the relative effectiveness of these incentives might differ by country context.

The study focuses on the effectiveness of five incentive strategies to increase online survey participation: a control condition with just a narrative appeal;Footnote 1 a narrative appeal coupled with a 5–10 USD donation to a charity of the respondent's choice (an altruistic incentive); and three egoistic incentives. In a replication of the USA study, we also considered a 20 USD donation condition, as it is plausible that 5 USD in India may have been perceived as more attractive than even 10 USD in the USA and Australia.Footnote 2 Our findings indicate that the effectiveness of an incentive is highly dependent on the country context. While egoistic incentives consistently outperform the altruistic incentive and the narrative appeal in the USA (and to some extent in Australia), they are no more effective than the narrative appeal and the altruistic incentive in India.

To provide corroborating insights into the relative effectiveness of egoistic versus altruistic rewards, we conducted an adapted dictator game among all survey respondents. At the start of the survey, respondents were told that if they completed the survey, they would be entered into a 100 USD lottery. They were additionally told that were they to win, they could keep all the prize money for themselves or contribute any or all of the award to one or more charities, and asked to share their preferred award allocations. In line with the findings of our incentive experiment, we find that respondents in India are more inclined to donate their potential monetary prize to charity than a similar population of individuals in the USA. Australian respondents fell somewhere in between American and Indian respondents in their propensity to donate lottery winnings to charity.

The results from these two research activities suggest that the strength of different incentive strategies varies across countries, even among similar groups of respondents. Our findings caution scholars conducting research outside of western countries against simply adopting the recommended incentives from existing survey methods research. In particular, egoistic incentives should be adequately tested for their effectiveness in other country and population settings.

1. Hypotheses

Evidence to date, largely from western settings, shows that altruistic incentives are less effective than egoistic appeals.Footnote 3 However, even egoistic incentives can take multiple forms and therefore vary in effectiveness. We thus tested the relative effectiveness of three different lotteries: a small monetary amount with many prizes, a large monetary amount with a few prizes, and a combination of these small and large lotteries.

Existing literature offers mixed evidence on the effectiveness of a larger number of lottery awards consisting of smaller monetary amounts versus a smaller number of lottery awards of higher amounts. While Deutskens et al. (2004) find the former to be most effective, Gajic et al. (2012) find the latter to be more successful. In yet another configuration, Khan and Kupor (2016) examine the effect of bundling smaller lottery prizes with a single large lottery prize, all with an equal likelihood of winning. They find that such bundling leads individuals to perceive the larger prize to be less valuable than if the larger prize is offered on its own, a concept termed “value atrophy.” We thus included this “mixed” lottery as a treatment. Given that the existing literature is inconclusive, we are not able to make any hypotheses on the relative effectiveness of different lotteries in our study, across or within countries.

Similar to egoistic incentives, altruistic incentives vary in nature. We examined the effectiveness of a donation to charity compared to a narrative appeal and egoistic appeals. While comparable altruistic incentives have not been tested across countries, evidence from dictator games suggests that individuals from high-income countries are more likely to give nothing to another party compared to individuals from the “developing” world (Engel, 2011). Such differences, though, may change depending on the value of the currency. For example, larger monetary amounts have been shown to lessen giving in India, as players have more to gain (Raihani et al., 2013). Thus, in the context of our study, extant research does not provide clear predictions. It is unclear whether those in India (where the same monetary amount may be worth more than in the USA or Australia) are more likely to give away some portion of their prize.Footnote 4

2. Methods

2.1 Research design

To examine the effectiveness of various incentive strategies, we randomly assigned each survey respondent to receive one of five incentives in India, and one of six incentives in Australia and the USA.Footnote 5 The survey was conducted among college-educated individuals who applied to join a similar not-for-profit national service organization in all three country settings: Teach For America (2007–2015 application cycles), Teach For Australia (2011–2016 application cycles), and Teach For India (2009–2014 application cycles). All three organizations are part of the Teach For All network, and they each employ a similar service model and mission: to address education inequality in the country in which they work. The population consists of those who applied and made it to the final round of the selection process, which translates to 120,417 individuals in the USA, 14,336 individuals in India, and 1470 individuals in Australia.Footnote 6 By focusing on a similar population across countries, we address a key concern: that differences in respondent samples, rather than country contexts, may drive the varying effectiveness of survey incentives.

The incentives offered in each country included: (1) a control condition, consisting of a short narrative appeal, (2) an altruistic appeal with a 5 USD donation to charity (of the respondent's choice), (3) an egoistic appeal with entry into a big lottery with two 1000 USD prizes (henceforth the big lottery condition), (4) an egoistic appeal with entry into a small lottery of twenty 100 USD prizes (small lottery condition), and (5) an egoistic appeal with entry into a mixed lottery with both two 1000 USD prizes and twenty 100 USD prizes (mixed lottery condition).Footnote 7 In Australia and the USA, we added another condition, a 10 USD charity donation, out of concern that a 5 USD charity incentive is more valuable in India than in the USA or Australia.Footnote 8 Appendix A provides the language used for each incentive. In a February 2024 replication in the USA, we further increased the charity condition incentive to 20 USD.Footnote 9 In the replication study, we also verified that the incentive manipulation worked as intended, with nearly all study participants correctly identifying the incentive that was offered to them when asked (see Tables F12–F14).
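The equal-probability assignment across arms can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' implementation: the arm labels, respondent IDs, and use of Python's `random` module are our own assumptions, and the actual randomization was handled by the survey platform.

```python
import random

# Hypothetical condition labels mirroring the design described above:
# five arms in India, six in the USA and Australia (the extra 10 USD
# charity arm). These names are ours, not the study's.
INDIA_ARMS = ["control", "charity_5", "big_lottery", "small_lottery", "mixed_lottery"]
US_AUS_ARMS = INDIA_ARMS + ["charity_10"]

def assign_conditions(respondent_ids, arms, seed=0):
    """Assign each respondent to one arm with equal probability."""
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    return {rid: rng.choice(arms) for rid in respondent_ids}

assignments = assign_conditions(range(1000), US_AUS_ARMS)
```

With equal assignment probabilities, each arm receives roughly one-sixth of respondents in expectation, which is the property the paper later exploits to approximate ITT rates in India.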

The way the incentive was administered differed slightly in India versus the other countries. Each individual in the India survey panel received an invitation e-mail with a narrative appeal as seen in Appendix A1. If individuals accepted the e-mail invitation, they were taken to a landing page with a consent form that randomized incentives across the five treatments described above (full text available in Appendices A2–A6). The online survey in India was kept open for two weeks between December 24, 2014 and January 6, 2015. Within that period, a total of 1780 individuals opened the invitation e-mail and saw the incentive (12.20 percent of the panel), and 643 completed the survey (4.41 percent of the panel; 36.1 percent of those who saw the e-mail invitation). We limit our analysis sample to the 1780 individuals who were exposed to one of the five treatments.

For the USA and Australia studies, the incentives were noted in the e-mail invitation itself as shown in Appendix A7. The USA incentives experiment ran between October 1, 2015 and October 15, 2015, yielding a 9.78 percent response rate and a 7.46 percent completion rate. The Australia study ran from January 9, 2018 to February 17, 2018 and yielded a 16.19 percent response rate and a 13.61 percent completion rate.Footnote 10 The surveys were implemented with random assignment across all respondents.Footnote 11

As an additional validation of our main results, we conducted an adapted dictator game (Gilens, 2011) at the start of the survey. After consenting to participate in the study, respondents were immediately told that if they completed the survey, they would automatically be entered into an additional lottery in which ten winners would be awarded a 100 USD cash prize in all three countries.Footnote 12 We then asked individuals to play a variant of the dictator game, wherein the respondent was asked to allocate the 100 USD prize among themselves and ten charities in the event that they won the lottery (see Appendix A figures and Table A1 for the full text). The lottery did not include information on the probability of winning, as it depended on the number of individuals who completed the survey, which neither the respondents nor the researchers knew.

2.2 Measures

To assess the effect of incentives on response rates, we defined the dependent variable in two ways. First, we created a variable for whether the respondent started the survey. Second, we created a variable for whether the respondent completed the survey. The definitions of each of these outcome measures are as follows:

  1. Completion rate: the number of respondents who completed the survey divided by the number of respondents who were exposed to the incentive condition (AAPOR RR1 response rate).

  2. Response rate: the number of respondents who proceeded past the consent form (the first page) of the survey divided by the number of respondents who were exposed to the incentive condition (AAPOR RR2 response rate).
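The two definitions above are simple ratios over the same denominator. A minimal Python sketch, using the India figures reported in Section 2.1 (1,780 individuals exposed to an incentive, 643 completions); the count of those proceeding past the consent form below is a hypothetical placeholder, not the study's number:

```python
def completion_rate(n_completed, n_exposed):
    """AAPOR RR1-style rate: completions over respondents exposed to the incentive."""
    return n_completed / n_exposed

def response_rate(n_past_consent, n_exposed):
    """AAPOR RR2-style rate: respondents past the consent form over those exposed."""
    return n_past_consent / n_exposed

rr1 = completion_rate(643, 1780)  # ~0.361, matching the 36.1 percent reported for India
rr2 = response_rate(800, 1780)    # 800 is an illustrative placeholder
```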

We calculated the completion and response rates slightly differently in India because individuals there were informed about the incentive in the consent form of the survey, not in an e-mail asking subjects to click on a link to the survey (as was done in the Australia and USA surveys). Given this difference, completion and response rates in Australia and the USA reflect intent-to-treat (ITT) rates as defined above, akin to the AAPOR RR1 and AAPOR RR2 rates. Rates in India, calculated using the same method, would instead yield treatment-on-the-treated (ToT) estimates. However, we take advantage of our survey platform's equal distribution of the treatment to estimate the ITT rates in India, which enables comparability across country contexts.Footnote 13

3. Results

3.1 Incentive study

Overall, survey response rates (RR2) were highest in Australia (14.00 percent), followed by India (12.20 percent) and the USA (9.78 percent) (see Figure 1).Footnote 14 Assessing the relative effectiveness of the incentives, we see distinct patterns by country. In India, the charitable incentive and the control condition (narrative appeal only) are just as effective as the lottery treatments, whereas in the USA the lottery incentives are the most effective in improving response and completion rates. In a replication of the USA study in 2024, we also find that the lottery conditions yield the highest response and completion rates (see Table F5, and Figures F1 and F2 for response and completion rates by incentive), which helps assuage any concerns that our findings are sensitive to the timing of surveys. In Australia, the lottery incentives generated higher response rates on average than the charitable incentives and the control condition, though these differences are not statistically significant at standard levels. We report the approximate ITT means and t-test results for India in Appendix C, Tables C1 and C2. We also report the linear probability models and marginal effects models for the ITT in Appendix D, Tables D1–D8 for Australia and Tables D13–D20 for the USA. The results below are from the ITT analysis across the three countries.

Figure 1. Response rates, all countries.

Note: This graph shows response rates across Australia, India, and the USA for each experimental condition. The whiskers represent the confidence interval for the mean response rates. We did not implement the 10 USD charity treatment in India. India's rates refer to the approximate ITT, as detailed in Section 2.2.

3.1.1 India

Overall, we find that the charity and control conditions perform just as well as the lottery conditions in India. Specifically, while the charity treatment yielded the highest response rate in India (14.23 percent), it did not statistically outperform any of the three lotteries (see Figure 1 and rows 5–7 in Appendix Table C1). The control condition also yielded a relatively high response rate (13.99 percent), though this rate is not statistically significantly different at the 90 percent level from the response rates of the lottery-based or charity-based incentives (see Appendix Table C1, rows 1–4). In sum, the charity and control conditions perform just as well as the lottery conditions in eliciting survey response, making the zero-cost narrative appeal the most cost-effective strategy. Moreover, we do not see evidence of value atrophy: the mixed lottery did not underperform the big lottery or small lottery conditions. When we examine completion rates, we find a similar pattern (see Figure 2 and Appendix Table C2).

Figure 2. Completion rates, all countries.

Note: This graph shows completion rates across Australia, India, and the USA for each treatment condition and the control. The whiskers represent the confidence interval for the mean completion rates. We did not implement the 10 USD charity treatment in India. India's rates refer to the approximate ITT, as detailed in Section 2.2.

3.1.2 United States

In the USA, the lottery incentives led to the highest response rates (see Figure 1). Specifically, the mixed lottery yielded a 12.89 percent response rate—2.4 pp higher than both the big and small lotteries (p < 0.001 for both, see Appendix Table D13, rows 4 and 5 in column 6). This finding is at odds with the value atrophy hypothesis.

Our findings for survey completion rates are similar: the mixed lottery condition has the highest completion rate at 10.11 percent, followed by the small lottery condition at 8.0 percent and the big lottery condition at 7.74 percent (see Figure 2). The 5 USD charity condition has a slightly lower completion rate at 7.04 percent, followed by the 10 USD charity condition at 6.71 percent. In line with existing literature, the control condition yielded the lowest completion rate at 5.02 percent. All differences between the best performing incentive—the mixed lottery condition—and each of the other incentive conditions in the USA are statistically significant at p < 0.001 (see Appendix Table D15, column 6).

We replicated the 2015 USA study in 2024 and, reassuringly, find similar results (see Appendix F); the mixed lottery condition resulted in the highest response (9.19 percent) and completion (7.94 percent) rates (see Tables F3–F11 and Figures F1 and F2). These results held even though we increased the value of one of the charity conditions from 10 USD to 20 USD: each lottery condition still achieved a higher response and completion rate than the 20 USD charity condition at statistically significant levels. Moreover, the 20 USD charity condition and the 5 USD charity condition led to comparable completion rates.

3.1.3 Australia

In Australia, the small and big lottery conditions are the most effective, yielding 17.96 percent response rates for each condition, though differences in response rates between the lottery conditions and other incentives are not statistically meaningful (see Figure 1). In terms of completion rates in Australia, the small lottery, big lottery, and 10 USD charity condition yielded the highest rates of 15.10, 14.29, and 14.29 percent, respectively, followed by the mixed lottery condition (12.65 percent) and then the control condition (11.84 percent; see Figure 2 and Appendix Table D3). In Australia, the charity conditions—with completion rates of 13.47 percent for the 5 USD charity and 14.29 percent for the 10 USD charity—fell in between the trends in India and the USA. While the charity conditions performed nominally better than the mixed lottery incentives and the control, they did not perform better than the big and small lotteries on their own. However, no treatment in Australia yielded a statistically significant difference in completion rate over any other treatment or the control (see Appendix Table D3).

3.2 Dictator game

The findings from our adapted dictator game are summarized in Table 1 and visualized in Figure 3. In India, 81.35 percent of respondents donated some amount to charity, with an average donation of 62.04 USD. Comparatively, 45.13 percent of the individuals in the USA donated some amount to charity, with an average donation of 30.18 USD. This difference is statistically significant (p < 0.01). Altruism levels of the Australian respondents fell in between those of India and the USA; 61.88 percent of Australian respondents donated some amount of money, with an average donation of 50.12 USD, which is significantly different from both other countries (p < 0.01).Footnote 15
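The cross-country contrast in the share of respondents who donated anything can be checked with a standard two-proportion z-test. The sketch below uses only the percentages reported above; the per-country sample sizes are hypothetical placeholders, so the resulting statistic is illustrative rather than the paper's own test.

```python
import math

def two_prop_ztest(p1, n1, p2, n2):
    """Two-sided z-test for H0: p1 == p2, using a pooled proportion."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Normal CDF via the error function: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Shares who donated anything, from the text: India 81.35%, USA 45.13%.
# The Ns (600 per country) are illustrative, not the study's sample sizes.
z, p = two_prop_ztest(0.8135, 600, 0.4513, 600)
```

Even at these modest placeholder sample sizes, a gap of roughly 36 percentage points produces a very large z statistic, consistent with the significance the paper reports.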

Table 1. Average amount donated to charity (USD)

Figure 3. Dictator game results.

Note: This figure reports the amount given to a charity by country context in the adapted dictator game.

4. Conclusion

We conducted one of the first comparative studies of survey incentives and found that even among a similar pro-social population, responses to different incentives varied by country context.Footnote 16 While a wide variety of research suggests that egoistic incentives outperform altruistic incentives, these findings seem to hold only in the USA and, to a lesser extent, in Australia. Our study finds that while monetary lotteries elicit higher participation rates than altruistic appeals in the USA and to some extent in Australia, they are only as effective as the charity appeal in India—at least among pro-social groups. In line with these findings, individuals in India are much more likely than those in the USA to donate money—either partly or wholly—in an adapted dictator game. Similarly, Australian respondents are much more likely to act charitably than American respondents, but not as charitably as Indian respondents.

Additionally, we see variation in the effectiveness of monetary incentives by country context. While the mixed lottery was found to be the most effective among respondents in the USA, which is at odds with the value atrophy hypothesis (Khan and Kupor, 2016), that was not the case in Australia. As Khan and Kupor's original value atrophy hypothesis was formed using a very different survey population (largely male) and a different set of small prizes, additional tests of the hypothesis are necessary to better understand the settings in which it holds.

Finally, while we are unable to unearth the mechanisms through which the Indian group acts more charitably or why Australians responded differently than Americans to different monetary incentives, it is evident that survey incentives from one context cannot be wholly exported to another—even among similar populations. Given the low participation rates of most web-based surveys and the high cost of certain incentives, it would be prudent for future researchers to test various incentives prior to running web-based surveys in country settings in which studies of survey incentives have not taken place.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/psrm.2024.53.

Replication material for this article is available at https://doi.org/10.7910/DVN/CT9EJU.

Footnotes

1 This appeal highlights that the potential study participant has valuable insights to share as someone who has applied to service programs (see Appendix A).

2 The pre-analysis plan can be found here.

3 A few studies have found cases in which altruistic appeals can be effective (Conn et al., 2019; Safarpour et al., 2022).

4 The purchasing power parity (PPP) for India in 2014 was 18.387 (Source: OECD).

5 This study builds on Conn et al. (2019), who examine survey incentives among Teach For India applicants in India.

6 The differing sample sizes by country reflect the different number of cohorts being studied, the different number of applicants to the program, and different processes for gaining applicant consent to participate in third-party studies. See Appendix E for additional details.

7 The number of total participants in the lottery was not included in the invitation because the number of people that would either open this e-mail or take the survey was unknown.

8 Given PPP rates, 5 USD in India in 2014 was equivalent to 18.387 USD in the USA (Source: OECD).

9 This reflects a higher PPP rate during the design of the replication (see Appendix F).

10 The different time frames reflect the timing of studies that were negotiated with the relevant Teach For All office. Differences in time frames may have made certain incentives potentially more or less attractive based on the state of the world. To account for this possibility, we replicate the survey in the USA in 2024 among TFA applicants from 2007 to 2023 (n = 175,762). Reassuringly, we find similar results in 2024 as we did in 2015 (see Appendix Figures F1 and F2, and Tables F3–F11 for completion and response rate results, and F1 and F2 for balance tests).

11 Descriptive statistics and balance tests are given in Appendix B. We are restricted by the demographic variables collected and shared by each of the three Teach For All organizations. Some variables in Table B6 show imbalance across treatment groups for the USA, but this is largely a result of the large sample size (n = 120,417). Nevertheless, we consider specifications that control for any imbalanced variables in Appendix D; reassuringly, the results do not substantively change.

12 This was a real lottery, and described as such. Further, to ensure that respondents did not think the lottery offered to them as part of the dictator game was in lieu of the incentive offered to them at the start of the study, we noted that this lottery was “in addition to the prize noted in the survey experiment” (see Appendix A8 for more details).

13 See Appendix C for more details on ITT calculations. For transparency, we also provide the ToT rates, calculated using AAPOR RR1 and AAPOR RR2, for India in Appendix D, Tables D9–D12.

14 Inferences regarding overall response rates should not be drawn by country contexts, as in Australia, our population was restricted to those who had just given explicit permission to Teach For Australia to share their information with authorized third parties. Moreover, the survey in Australia was open longer than in the USA and India.

15 For amount donated disaggregated by the incentive treatment they received at the start of the survey, see Appendix Table G1.

16 While unlikely, different types of individuals may apply to the same type of national service program in different countries. As such, further comparative studies of survey incentives would be useful.

References

Church, AH (1993) Estimating the effect of incentives on mail survey response rates: a meta-analysis. Public Opinion Quarterly 57, 62–79.
Conn, K, Mo, CH and Sellers, L (2019) When less is more in boosting response rates: experimental evidence from a web survey in India. Social Science Quarterly 100, 1445–1458.
Deutskens, E, De Ruyter, K, Wetzels, M and Oosterveld, P (2004) Response rate and response quality of internet-based surveys: an experimental study. Marketing Letters 15, 21–36.
Dillman, DA (2011) Mail and Internet Surveys: The Tailored Design Method—2007 Update with New Internet, Visual, and Mixed-Mode Guide, 2nd Edn. New York: John Wiley & Sons.
Edwards, P, Roberts, I, Clarke, M, DiGuiseppi, C, Pratap, S, Wentz, R and Kwan, I (2002) Increasing response rates to postal questionnaires: systematic review. BMJ 324, 1183.
Engel, C (2011) Dictator games: a meta study. Experimental Economics 14, 583–610.
Gajic, A, Cameron, D and Hurley, J (2012) The cost-effectiveness of cash versus lottery incentives for a web-based, stated-preference community survey. The European Journal of Health Economics 13, 789–799.
Gilens, M (2011) The benevolent baker: altruism and political preference formation. In Conference in Honor of Paul Sniderman. Palo Alto, CA: Stanford University.
Göritz, AS (2006) Incentives in web studies: methodological issues and a review. International Journal of Internet Science 1, 58–70.
Khan, U and Kupor, D (2016) Risk (mis)-perception: when greater risk reduces risk valuation. Journal of Consumer Research 43, 769–786.
Meuleman, B, Langer, A and Blom, AG (2018) Can incentive effects in web surveys be generalized to non-western countries? Conditional and unconditional cash incentives in a web survey of Ghanaian university students. Social Science Computer Review 36, 231–250.
Raihani, NJ, Mace, R and Lamba, S (2013) The effect of $1, $5 and $10 stakes in an online dictator game. PLoS ONE 8, e73131.
Safarpour, A, Bush, SS and Hadden, J (2022) Participation incentives in a survey of international non-profit professionals. Research & Politics 9, 20531680221125723.
Singer, E and Bossarte, RM (2006) Incentives for survey participation: when are they “coercive?” American Journal of Preventive Medicine 31, 411–418.
Singer, E and Ye, C (2013) The use and effects of incentives in surveys. The Annals of the American Academy of Political and Social Science 645, 112–141.