
Bias Amplification and Bias Unmasking

Published online by Cambridge University Press:  04 January 2017

Joel A. Middleton*
Affiliation:
Department of Political Science, University of California, Berkeley, Barrows Hall, Berkeley, CA 94720, USA
Marc A. Scott
Affiliation:
Department of Humanities and Social Sciences in the Professions, 246 Greene St., New York University, Steinhardt, NY 10003, USA, e-mail: [email protected]
Ronli Diakow
Affiliation:
New York City Department of Education, 131 Livingston Street, Brooklyn, NY 11201, USA, e-mail: [email protected]
Jennifer L. Hill
Affiliation:
Department of Humanities and Social Sciences in the Professions, 246 Greene St., New York University, Steinhardt, NY 10003, USA, e-mail: [email protected]
* e-mail: [email protected] (corresponding author)

Abstract


In the analysis of causal effects in non-experimental studies, conditioning on observable covariates is one way to try to reduce bias from unobserved confounders. However, a developing literature has shown that conditioning on certain covariates can instead increase bias, and the mechanisms underlying this phenomenon have not been fully explored. We add to the literature on bias-increasing covariates by first introducing a way to decompose omitted variable bias into three constituent parts: bias due to an unobserved confounder, bias due to excluding observed covariates, and bias due to amplification. This decomposition leads to two important findings. First, although instruments have been the primary focus of the bias amplification literature to date, we show that the popular approach of adding group fixed effects can amplify bias as well. This matters because many practitioners treat fixed effects as a convenient way to account for any and all group-level confounding and assume they are at worst harmless. Second, we introduce the concept of bias unmasking and show how it can be even more insidious than bias amplification in some cases. After deriving these results analytically, we use constructed observational placebo studies to illustrate bias amplification and bias unmasking with real data. Finally, we propose a way to add bias decomposition information to graphical displays for sensitivity analysis, to help practitioners think through the potential for bias amplification and bias unmasking in actual applications.
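To make the amplification mechanism concrete, the classic instrument case can be sketched in a few lines of simulation. This is an editor's illustration under an assumed data-generating process, not code or notation from the article: Z is an instrument affecting only the treatment T, U is an unobserved confounder of T and Y, and conditioning on Z enlarges the share of U in the remaining variation of T, so the confounder bias grows.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
tau = 1.0  # true treatment effect (assumed for this illustration)

# Assumed data-generating process:
# Z affects only T (an instrument); U confounds T and Y and is unobserved.
Z = rng.normal(size=n)
U = rng.normal(size=n)
T = Z + U + rng.normal(size=n)        # treatment
Y = tau * T + U + rng.normal(size=n)  # outcome

def ols_coef(y, covariates):
    """OLS with an intercept; return the coefficient on the first covariate."""
    X = np.column_stack([np.ones(len(y))] + list(covariates))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

naive = ols_coef(Y, [T])       # omit U (and Z): confounded estimate
with_z = ols_coef(Y, [T, Z])   # "control for" the instrument Z

print(f"bias without Z: {naive - tau:+.3f}")
print(f"bias with    Z: {with_z - tau:+.3f}")  # larger in magnitude
```

With these variances, partialling out Z removes confounder-free variation from T, so the bias rises from roughly Cov(T, U)/Var(T) = 1/3 toward Var(U)/(Var(U) + Var(noise)) = 1/2: conditioning on the instrument amplifies, rather than reduces, the unobserved-confounder bias.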

Type
Articles
Copyright
Copyright © The Author 2016. Published by Oxford University Press on behalf of the Society for Political Methodology 

Footnotes

Edited by Prof. R. Michael Alvarez

Authors’ note: For replication files, see Middleton (2016). Supplementary Materials for this article are available on the Political Analysis Web site.

References

Angrist, J. D., Imbens, G., and Rubin, D. 1996. Identification of causal effects using instrumental variables. Journal of the American Statistical Association 91(434):444–55.
Angrist, J. D., and Pischke, J. 2009. Mostly harmless econometrics. Princeton, NJ: Princeton University Press.
Austin, P., Grootendorst, P., and Anderson, G. 2007. A comparison of the ability of different propensity score models to balance measured variables between treated and untreated subjects: a Monte Carlo study. Statistics in Medicine 26:734–53.
Bhattacharya, J., and Vogt, W. 2007. Do instrumental variables belong in propensity scores? NBER Working Paper 343, National Bureau of Economic Research, MA.
Breen, R., Karlson, K., and Holm, A. 2013. Total, direct, and indirect effects in logit and probit models. Sociological Methods and Research 42(2):164–91.
Brookhart, M., Sturmer, T., Glynn, R., Rassen, J., and Schneeweiss, S. 2010. Confounding control in healthcare database research. Medical Care 48:S114–20.
Carnegie, N. B., Hill, J., and Harada, M. 2014a. Assessing sensitivity to unmeasured confounding using simulated potential confounders. Unpublished manuscript.
Carnegie, N. B., Hill, J., and Harada, M. 2014b. Package: TreatSens. http://www.R-project.org.
Clarke, K. A. 2005. The phantom menace. Conflict Management and Peace Science 22:341–52.
Clarke, K. A. 2009. Return of the phantom menace. Conflict Management and Peace Science 26:46–66.
Cole, S. R., Platt, R. W., Schisterman, E. F., Chu, H., Westreich, D., Richardson, D., and Poole, C. 2010. Illustrating bias due to conditioning on a collider. International Journal of Epidemiology 39(2):417–20.
D'Agostino, R. Jr. 1998. Propensity score methods for bias reduction in the comparison of a treatment to a non-randomized control group. Statistics in Medicine 17:314–16.
Ding, P., and Miratrix, L. 2014. To adjust or not to adjust? Sensitivity analysis of M-bias and butterfly-bias. Journal of Causal Inference 2:217.
Dunning, T., and Nilekani, J. 2013. Ethnic quotas and political mobilization: caste, parties, and distribution in Indian village councils. American Political Science Review 107:35–56.
Freedman, D. A. 2008. Randomization does not justify logistic regression. Statistical Science 23(2):237–49.
Frisell, T., Oberg, S., Kuja-Halkola, R., and Sjolander, A. 2012. Sibling comparison designs: bias from non-shared confounders and measurement error. Epidemiology 23(5):713–20.
Greene, W. H. 2000. Econometric analysis. 4th ed. Upper Saddle River, NJ: Prentice Hall.
Greenland, S. 2002. Quantifying biases in causal models: classical confounding vs. collider-stratification bias. Epidemiology 14:300–306.
Heckman, J., and Robb, R. 1985. Alternative methods for estimating the impact of interventions. In Longitudinal analysis of labor market data, eds. Heckman, J. J. and Singer, B. Cambridge, UK: Cambridge University Press.
Heckman, J., and Robb, R. 1986. Alternative methods for solving the problem of selection bias in evaluating the impact of treatments on outcomes. In Drawing inferences from self-selected samples, ed. Wainer, H. New Jersey: Lawrence Erlbaum Associates.
Hill, J. 2007. Discussion of research using propensity-score matching: comments on "A critical appraisal of propensity-score matching in the medical literature between 1996 and 2003" by Peter Austin. Statistics in Medicine 27(12):2055–61.
Imbens, G. W. 2003. Sensitivity to exogeneity assumptions in program evaluation. American Economic Review 93(2):126–32.
Lechner, M. 2001. Identification and estimation of causal effects of multiple treatments under the conditional independence assumption. In Econometric evaluations of active labor market policies in Europe, eds. Lechner, M. and Pfeiffer, F. Heidelberg: Physica.
Liu, W., Brookhart, M. A., Schneeweiss, S., Mi, X., and Setoguchi, S. 2012. Implications of M-bias in epidemiologic studies: a simulation study. American Journal of Epidemiology 176:938–48.
Middleton, J. A. 2016. Replication data for: bias amplification and bias unmasking. Harvard Dataverse. http://dx.doi.org/10.7910/DVN/UO5WQ4.
Myers, J. A., Rassen, J. A., Gagne, J. J., Huybrechts, K. F., Schneeweiss, S., Rothman, K. J., Joffe, M. M., and Glynn, R. J. 2011. Effects of adjusting for instrumental variables on bias and precision of effect estimates. American Journal of Epidemiology 174(11):1213–22.
Pearl, J. 2000. Causality. New York, NY: Cambridge University Press.
Pearl, J. 2009. Myth, confusion, and science in causal analysis. Technical report.
Pearl, J. 2010. On a class of bias-amplifying variables that endanger effect estimates. Proceedings of UAI, pp. 417–24.
Pearl, J. 2011. Invited commentary: understanding bias amplification. American Journal of Epidemiology 174(11):1223–27.
Rosenbaum, P. R. 2002. Observational studies. New York, NY: Springer.
Rosenbaum, P. R., and Rubin, D. B. 1983. Assessing sensitivity to an unobserved binary covariate in an observational study with binary outcome. Journal of the Royal Statistical Society, Series B (Methodological) 45:212–18.
Rubin, D. B. 1974. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology 66:688–701.
Rubin, D. B. 1978. Bayesian inference for causal effects: the role of randomization. The Annals of Statistics 6(1):34–58.
Rubin, D. B. 2002. Using propensity scores to help design observational studies: application to the tobacco litigation. Health Services and Outcomes Research Methodology 2:169–88.
Schisterman, E. F., Cole, S. R., and Platt, R. W. 2009. Overadjustment bias and unnecessary adjustment in epidemiologic studies. Epidemiology 20:488–95.
Shaw, D. R., Green, D. P., Gimpel, J. G., and Gerber, A. S. 2012. Do robotic calls from credible sources influence voter turnout or vote choice? Evidence from a randomized field experiment. Journal of Political Marketing 11(4):231–45.
Sjölander, A. 2009. Propensity scores and M-structures. Statistics in Medicine 28:1416–20.
Sobel, M. E. 2006. What do randomized studies of housing mobility demonstrate? Causal inference in the face of interference. Journal of the American Statistical Association 101:1398–407.
Steiner, P. M., and Kim, Y. 2016. The mechanisms of omitted variable bias: bias amplification and cancellation of offsetting biases. Unpublished manuscript.
VanderWeele, T. J. 2015. Explanation in causal inference: methods for mediation and interaction. New York, NY: Oxford University Press.
VanderWeele, T. J., and Arah, O. A. 2011. Unmeasured confounding for general outcomes, treatments, and confounders: bias formulas for sensitivity analysis. Epidemiology 22(1):42–52.
Wooldridge, J. 2009. Should instrumental variables be used as matching variables? Unpublished manuscript.
Wyss, R., Lunt, M., Brookhart, M. A., Glynn, R. J., and Stürmer, T. 2014. Reducing bias amplification in the presence of unmeasured confounding through out-of-sample estimation strategies for the disease risk score. Journal of Causal Inference 2(2):131–46.
Supplementary material

Middleton et al. supplementary material (PDF, 113.7 KB)
Middleton et al. supplementary material: Appendix (File, 12.5 KB)