
Placebo statements in list experiments: Evidence from a face-to-face survey in Singapore

Published online by Cambridge University Press:  15 May 2020

Guillem Riambau*
Affiliation:
Institute of Political Economy Research Group (IPERG)—Universitat de Barcelona, Spain
Kai Ostwald
Affiliation:
School of Public Policy & Global Affairs and Department of Political Science, University of British Columbia, Canada
*Corresponding author. Email: [email protected]

Abstract

List experiments are a widely used survey technique for estimating the prevalence of socially sensitive attitudes or behaviors. Their design, however, makes them vulnerable to bias: because treatment group respondents see more items (J + 1) than control group respondents (J), the treatment group mean may be mechanically inflated simply because respondents face a longer list. The few previous studies that directly examine this possibility do not reach definitive conclusions. We find clear evidence of inflation in an original dataset, though only among respondents with low educational attainment. Reanalyzing available data from previous studies, we find similar heterogeneous patterns. This evidence of heterogeneous effects has implications for the interpretation of previous research using list experiments, especially in developing-world contexts. We recommend a simple solution: adding a necessarily false placebo statement to the control group's list equalizes list lengths, thereby protecting against mechanical inflation without imposing costs or altering interpretations.
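The basic design described above can be illustrated with a short simulation. This is a hypothetical sketch, not the authors' data or code: the parameters (true prevalence of 0.30, J = 4 innocuous items, each endorsed with probability 0.5) are invented for illustration, and the simulation models only the idealized estimator, not the mechanical inflation the paper documents.

```python
import random

random.seed(42)

def simulate_list_experiment(n=100_000, p_sensitive=0.30, j=4, p_item=0.5):
    """Simulate a basic list experiment (hypothetical parameters).

    Control respondents report how many of J innocuous statements apply
    to them; treatment respondents see the same J statements plus the
    sensitive one. The difference in group means estimates the
    prevalence of the sensitive attitude or behavior.
    """
    control, treatment = [], []
    for _ in range(n):
        # number of innocuous statements the respondent endorses
        base = sum(random.random() < p_item for _ in range(j))
        if random.random() < 0.5:  # random assignment to treatment
            treatment.append(base + (random.random() < p_sensitive))
        else:
            # Under the placebo design recommended in the abstract, the
            # control list would also contain a necessarily false
            # statement: it never adds to the count, so the estimator is
            # unchanged, but both groups then face J + 1 items.
            control.append(base)
    return sum(treatment) / len(treatment) - sum(control) / len(control)

print(simulate_list_experiment())  # close to the true prevalence, 0.30
```

Because the placebo statement is false by construction, it contributes nothing to the control count; the difference-in-means estimator is untouched, which is why the design change carries no cost.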

Type
Research Note
Copyright
Copyright © The European Political Science Association 2020


Footnotes

Research carried out under NUS-IRB S-18-343 from the National University of Singapore. We would like to thank Edmund Malesky, Steven Oliver, Tom Pepinsky, and Risa Toha for insightful comments and suggestions. All errors are ours. Supplementary materials can be found at http://guillemriambau.com/

Supplementary material

Riambau and Ostwald supplementary material (PDF, 329.6 KB)
Riambau and Ostwald Dataset (link)