
The grapevine effect in sensitive data collection: examining response patterns in support for violent extremism

Published online by Cambridge University Press:  25 September 2020

John McCauley*
Affiliation:
Government and Politics, University of Maryland, College Park, USA
Steven Finkel
Affiliation:
Political Science, University of Pittsburgh, Pittsburgh, USA
Michael Neureiter
Affiliation:
Political Science, University of Pittsburgh, Pittsburgh, USA
Christopher Belasco
Affiliation:
Graduate School of Public and International Affairs, University of Pittsburgh, Pittsburgh, USA
*Corresponding author. Email: [email protected]

Abstract

This study presents a pattern overlooked in previous research on measuring sensitive political outcomes: over the course of data collection, responses tend to shift in the direction of support for the local incumbent power. We suggest that, whereas earlier responses are largely devoid of this social desirability bias, word of the research spreads across enumeration areas, and individuals interviewed later in the process alter their responses out of fear of retribution for inappropriate answers. We document the pattern using original data from two surveys on support for violent extremism conducted in three different countries in the Sahel region of Africa. We rule out a host of alternative explanations and further confirm that the pattern can arise not just with overt survey measures but even with covert, experimental ones. We then demonstrate the same pattern using out-of-sample data from a separate well-known study. The findings offer a cautionary note to both conventional and experimental approaches to measuring sensitive attitudes.

Type: Original Article

Copyright © The Author(s), 2020. Published by Cambridge University Press on behalf of the European Political Science Association


Supplementary material (PDF): McCauley et al., Online Appendix, 199.3 KB

Supplementary material (Link): McCauley et al. Dataset