Accounting for Noncompliance in Survey Experiments
Published online by Cambridge University Press: 17 April 2019
Abstract
Political scientists commonly use survey experiments, often conducted online, to study the attitudes of the mass public. In these experiments, compensation is usually small and researcher control is limited, which introduces the potential for low respondent effort and attention. This lack of engagement may result in noncompliance with experimental protocols, threatening causal inferences. However, in reviewing the literature, we find that despite the discipline’s general familiarity with experimental noncompliance, researchers rarely consider it when analyzing survey experiments. This oversight is important because it may unknowingly prevent researchers from estimating their causal quantities of greatest substantive interest. We urge scholars to address this particular manifestation of an otherwise familiar problem and suggest two strategies for formally measuring noncompliance in survey experiments: recording vignette screen time latency and repurposing manipulation checks. We demonstrate and discuss the substantive consequences of these recommendations by revisiting several published survey experiments.
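To make the first strategy concrete, the sketch below flags likely noncompliers by vignette screen-time latency and contrasts the intent-to-treat (ITT) estimate with a complier average causal effect (CACE) computed via the standard Wald/IV ratio under one-sided noncompliance. This is a generic illustration, not the authors' replication code: the column names, the 3-second latency threshold, and the simulated data are all assumptions.

```python
# Illustrative sketch (not the authors' procedure): use vignette screen-time
# latency to flag noncompliers, then compare the ITT estimate with a CACE
# estimate under one-sided noncompliance. All names and values are assumed.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "assigned": rng.integers(0, 2, n),               # random assignment to the treatment vignette
    "latency": rng.exponential(scale=8.0, size=n),   # seconds spent on the vignette screen
})

# Assumption: respondents who spent under 3 seconds on the vignette did not read it,
# so treated respondents below that threshold are coded as noncompliers.
df["complied"] = (df["latency"] >= 3.0) & (df["assigned"] == 1)

# Simulated outcome: only compliers actually receive the treatment effect (0.5).
df["outcome"] = 0.5 * df["complied"] + rng.normal(size=n)

itt = (df.loc[df["assigned"] == 1, "outcome"].mean()
       - df.loc[df["assigned"] == 0, "outcome"].mean())
compliance_rate = df.loc[df["assigned"] == 1, "complied"].mean()
cace = itt / compliance_rate  # Wald/IV estimator with one-sided noncompliance

print(f"ITT = {itt:.3f}, compliance rate = {compliance_rate:.2f}, CACE = {cace:.3f}")
```

With meaningful noncompliance, the ITT estimate is attenuated relative to the CACE, which is one way the causal quantity of greatest substantive interest can be missed if noncompliance is never measured.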
- Type: Short Report
- Copyright: © The Experimental Research Section of the American Political Science Association 2019
Footnotes
The data, code, and any additional materials required to replicate all analyses in this article are available at the Journal of Experimental Political Science Dataverse within the Harvard Dataverse Network, at: https://doi.org/10.7910/DVN/MNK26U (Harden, Sokhey, and Runge 2018). The authors have no conflicts of interest.