
Accounting for Noncompliance in Survey Experiments

Published online by Cambridge University Press: 17 April 2019

Jeffrey J. Harden
Affiliation:
Department of Political Science, University of Notre Dame, 2055 Jenkins Nanovic Halls, Notre Dame, IN 46556, USA, e-mail: [email protected]
Anand E. Sokhey*
Affiliation:
Department of Political Science, University of Colorado Boulder, 333 UCB, Boulder, CO 80309, USA, e-mail: [email protected]; Twitter: @AESokhey
Katherine L. Runge
Affiliation:
Department of Political Science, University of Colorado Boulder, 333 UCB, Boulder, CO 80309, USA, e-mail: [email protected]
*Corresponding author. Email: [email protected]

Abstract

Political scientists commonly use survey experiments, often conducted online, to study the attitudes of the mass public. In these experiments, compensation is usually small and researcher control is limited, which introduces the potential for low respondent effort and attention. This lack of engagement may result in noncompliance with experimental protocols, threatening causal inferences. However, in reviewing the literature, we find that despite the discipline's general familiarity with experimental noncompliance, researchers rarely consider it when analyzing survey experiments. This oversight is important because it may unknowingly prevent researchers from estimating their causal quantities of greatest substantive interest. We urge scholars to address this particular manifestation of an otherwise familiar problem and suggest two strategies for formally measuring noncompliance in survey experiments: recording vignette screen time latency and repurposing manipulation checks. We demonstrate and discuss the substantive consequences of these recommendations by revisiting several published survey experiments.
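As a concrete illustration, not drawn from the article itself: the following minimal Python sketch uses simulated data and an assumed 5-second minimum reading time to show how vignette screen-time latency could flag noncompliers, and how the complier average causal effect (CACE) could then be recovered from the intention-to-treat (ITT) effect via the instrumental-variables logic of Angrist, Imbens, and Rubin (1996). All variable names, the latency cutoff, and the simulated effect size are illustrative assumptions.

import numpy as np

# Hypothetical survey-experiment data (all values simulated for illustration).
rng = np.random.default_rng(0)
n = 1000
assigned = rng.integers(0, 2, n)      # random assignment to the treatment vignette
latency = rng.exponential(8, n)       # seconds spent on the vignette screen

# Flag compliers among the treated: respondents whose screen time meets an
# assumed minimum reading time. The 5-second cutoff is an assumption, not a
# value taken from the article.
complied = (latency >= 5.0) & (assigned == 1)

# Simulated outcome: the treatment only moves respondents who actually read it.
outcome = 0.3 * complied + rng.normal(0, 1, n)

# Intention-to-treat effect: difference in mean outcomes by assignment.
itt = outcome[assigned == 1].mean() - outcome[assigned == 0].mean()

# With one-sided noncompliance (controls cannot receive the vignette), the
# complier average causal effect is the Wald/IV ratio:
#   CACE = ITT / Pr(compliance | assigned to treatment)
# per Angrist, Imbens, and Rubin (1996).
compliance_rate = complied[assigned == 1].mean()
cace = itt / compliance_rate

print(f"ITT: {itt:.3f}, compliance rate: {compliance_rate:.3f}, CACE: {cace:.3f}")

Because control respondents cannot take the treatment here, the Wald ratio reduces to the familiar one-sided-noncompliance estimator: the ITT is attenuated toward zero by noncompliers, and dividing by the compliance rate rescales it to the effect among respondents who actually engaged with the vignette.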

Type: Short Report
Copyright: © The Experimental Research Section of the American Political Science Association 2019


Footnotes

The data, code, and any additional materials required to replicate all analyses in this article are available at the Journal of Experimental Political Science Dataverse within the Harvard Dataverse Network, at: https://doi.org/10.7910/DVN/MNK26U (Harden, Sokhey, and Runge 2018). The authors have no conflicts of interest.

References

Angrist, Joshua D., Imbens, Guido W., and Rubin, Donald B. 1996. Identification of Causal Effects Using Instrumental Variables. Journal of the American Statistical Association 91(434): 444–55.
Berinsky, Adam J., Margolis, Michele F., and Sances, Michael W. 2014. Separating the Shirkers from the Workers? Making Sure Respondents Pay Attention on Self-Administered Surveys. American Journal of Political Science 58(3): 739–53.
Harden, Jeffrey J. 2016. Multidimensional Democracy: A Supply and Demand Theory of Representation in American Legislatures. New York: Cambridge University Press.
Harden, Jeffrey J., Sokhey, Anand E., and Runge, Katherine L. 2018. Replication Data for: Accounting for Noncompliance in Survey Experiments. Harvard Dataverse Network, V1. doi: 10.7910/DVN/MNK26U.
Monogan, James E. 2013. A Case for Registering Studies of Political Outcomes: An Application in the 2010 House Elections. Political Analysis 21(1): 21–37.
Rayner, Keith. 1998. Eye Movements in Reading and Information Processing: 20 Years of Research. Psychological Bulletin 124(3): 372–422.
Supplementary material

Harden et al. Dataset (link)
Harden et al. supplementary material (PDF, 174.6 KB)