
Priming Bias Versus Post-Treatment Bias in Experimental Designs

Published online by Cambridge University Press: 21 March 2025

Matthew Blackwell*
Affiliation:
Associate Professor, Department of Government, Institute for Quantitative Social Science, Harvard University, Cambridge, MA, USA
Jacob R. Brown
Affiliation:
Assistant Professor, Department of Political Science, Boston University, Boston, MA, USA
Sophie Hill
Affiliation:
PhD Student, Department of Government, Harvard University, Cambridge, MA, USA
Kosuke Imai
Affiliation:
Professor, Department of Government and Department of Statistics, Institute for Quantitative Social Science, Harvard University, Cambridge, MA, USA
Teppei Yamamoto
Affiliation:
Professor, Faculty of Political Science and Economics, Waseda University, Tokyo, Japan
*Corresponding author: Matthew Blackwell; Email: [email protected]

Abstract

Conditioning on variables affected by treatment can induce post-treatment bias when estimating causal effects. Although this suggests that researchers should measure potential moderators before administering the treatment in an experiment, doing so may also bias causal effect estimation if the covariate measurement primes respondents to react differently to the treatment. This paper formally analyzes the trade-off between post-treatment and priming biases in three experimental designs that vary when moderators are measured: pre-treatment, post-treatment, or a randomized choice between the two. We derive nonparametric bounds for interactions between the treatment and the moderator under each design and show how to use substantive assumptions to narrow these bounds. These bounds allow researchers to assess the sensitivity of their empirical findings to priming and post-treatment bias. We then apply the proposed methodology to a survey experiment on electoral messaging.
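To illustrate the trade-off described in the abstract, the toy simulation below contrasts a naive interaction estimate under post-treatment versus pre-treatment moderator measurement. It is a hypothetical sketch, not the paper's bounding method: the variable names (z, m_latent, y), the 20% measurement distortion under the post-treatment design, the 0.3 priming shift under the pre-treatment design, and the simple difference-in-means estimator are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical data-generating process (all coefficients are illustrative).
m_latent = rng.binomial(1, 0.5, n)        # true moderator (e.g., an attitude)
z = rng.binomial(1, 0.5, n)               # randomized treatment
y = 0.5 * z + 1.0 * z * m_latent + rng.normal(0, 1, n)  # true interaction = 1.0

# Post-treatment measurement: the treatment perturbs the measured moderator,
# flipping it for roughly 20% of treated respondents (post-treatment bias).
m_post = np.where((z == 1) & (rng.random(n) < 0.2), 1 - m_latent, m_latent)

# Pre-treatment measurement: the moderator is measured correctly, but asking
# about it first primes treated respondents and shifts their outcomes (priming bias).
m_pre = m_latent
y_primed = y + 0.3 * z * m_latent

def interaction(y, z, m):
    """Naive interaction estimate: difference of difference-in-means across moderator strata."""
    cate1 = y[(z == 1) & (m == 1)].mean() - y[(z == 0) & (m == 1)].mean()
    cate0 = y[(z == 1) & (m == 0)].mean() - y[(z == 0) & (m == 0)].mean()
    return cate1 - cate0

print("true interaction:            1.0")
print("post-treatment measurement:  %.2f" % interaction(y, z, m_post))
print("pre-treatment, with priming: %.2f" % interaction(y_primed, z, m_pre))
```

Under this stylized data-generating process, the post-treatment design attenuates the estimated interaction because the treatment corrupts the measured moderator, while the pre-treatment design recovers the primed rather than the unprimed interaction. The nonparametric bounds proposed in the paper are intended to quantify how large such distortions could be without committing to a particular model.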

Type
Article
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of The Society for Political Methodology


Footnotes

Edited by: Daniel J. Hopkins and Brandon M. Stewart

Supplementary material

Blackwell et al. supplementary material (File, 296.3 KB)
Blackwell et al. Dataset (Link)