
How Much Is Minnesota Like Wisconsin? Assumptions and Counterfactuals in Causal Inference with Observational Data

Published online by Cambridge University Press:  04 January 2017

Luke Keele*
Affiliation:
Department of Political Science, Penn State University, 211 Pond Lab, University Park, PA 16802
William Minozzi
Affiliation:
Department of Political Science, Ohio State University, 2189 Derby Hall, Columbus, OH 43210 e-mail: [email protected]
e-mail: [email protected] (corresponding author)

Abstract

Political scientists are often interested in estimating causal effects. Identification of causal estimates with observational data invariably requires strong untestable assumptions. Here, we outline a number of the assumptions used in the extant empirical literature. We argue that these assumptions require careful evaluation within the context of specific applications. To that end, we present an empirical case study on the effect of Election Day Registration (EDR) on turnout. We show how different identification assumptions lead to different answers, and that many of the standard assumptions used are implausible. Specifically, we show that EDR likely had negligible effects in the states of Minnesota and Wisconsin. We conclude with an argument for stronger research designs.

Type: Research Article
Copyright © The Author 2013. Published by Oxford University Press on behalf of the Society for Political Methodology


Footnotes

Authors' note: We thank Michael Hanmer and Devin Caughey for generously sharing code and data. For comments and suggestions, we thank Mike Alvarez, Curt Signorino, Shigeo Hirano, Robert Erikson, Mike Ting, Walter Mebane, Michael Hanmer, Betsy Sinclair, Jonathan Nagler, Don Green, Rocío Titiunik, and seminar participants at Columbia University, the University of Michigan, and the University of Rochester. A previous version of this article was presented at the 2010 Annual Meeting of the Society for Political Methodology, Iowa City, IA, and APSA 2010. Replication files and information can be found in Keele (2012).

References

Abadie, Alberto. 2005. Semiparametric difference-in-differences estimators. Review of Economic Studies 72(1): 1–19.
Angrist, Joshua D., and Pischke, Jörn-Steffen. 2009. Mostly harmless econometrics. Princeton, NJ: Princeton University Press.
Angrist, Joshua D., and Pischke, Jörn-Steffen. 2010. The credibility revolution in empirical economics: How better research design is taking the con out of econometrics. Journal of Economic Perspectives 24(2): 3–30.
Angrist, Joshua D., Imbens, Guido W., and Rubin, Donald B. 1996. Identification of causal effects using instrumental variables. Journal of the American Statistical Association 91(434): 444–55.
Ansolabehere, Stephen, and Konisky, David M. 2006. The introduction of voter registration and its effect on turnout. Political Analysis 14(1): 83–100.
Barabas, Jason. 2004. How deliberation affects policy opinions. American Political Science Review 98(4): 687–702.
Barnow, B. S., Cain, G. G., and Goldberger, A. S. 1980. Issues in the analysis of selectivity bias. In Evaluation studies, eds. Stromsdorfer, E. and Farkas, G., Vol. 5. San Francisco: Sage.
Brians, Craig Leonard, and Grofman, Bernard. 1999. When registration barriers fall, who votes? An empirical test of a rational choice model. Public Choice 99: 161–76.
Brians, Craig Leonard, and Grofman, Bernard. 2001. Election day registration's effect on U.S. voter turnout. Social Science Quarterly 82: 170–83.
Burden, Barry C., and Neiheisel, Jacob R. 2011a. The impact of election day registration on voter turnout and election outcomes. American Politics Research 20(4): 636–64.
Burden, Barry C., and Neiheisel, Jacob R. 2011b. Election administration and the pure effect of voter registration on turnout. Political Research Quarterly. doi: 10.1177/1065912911430671.
Caughey, Devin, and Sekhon, Jasjeet S. 2011. Elections and the regression discontinuity design: Lessons from close U.S. House races, 1942–2008. Political Analysis 19(4): 385–408.
Cook, T. D., and Shadish, W. R. 1994. Social experiments: Some developments over the past fifteen years. Annual Review of Psychology 45: 545–80.
Cook, T. D., Shadish, W. R., and Wong, Vivian C. 2008. Three conditions under which experiments and observational studies produce comparable causal estimates: New findings from within-study comparisons. Journal of Policy Analysis and Management 27(4): 724–50.
Donald, Stephen G., and Lang, Kevin. 2007. Inference with differences-in-differences and other panel data. Review of Economics and Statistics 89(2): 221–33.
Fisher, Ronald A. 1935. The design of experiments. London: Oliver and Boyd.
Green, Donald P., and Gerber, Alan S. 2002. Reclaiming the experimental tradition in political science. In Political science: The state of the discipline, eds. Katznelson, Ira and Milner, Helen V., 805–32. New York: W. W. Norton.
Hahn, Jinyong, Todd, Petra, and van der Klaauw, Wilbert. 2001. Identification and estimation of treatment effects with a regression-discontinuity design. Econometrica 69(1): 201–9.
Hanmer, Michael J. 2007. An alternative approach to estimating who is most likely to respond to changes in registration laws. Political Behavior 29(1): 1–30.
Hanmer, Michael J. 2009. Discount voting. New York: Cambridge University Press.
Highton, Benjamin, and Wolfinger, Raymond E. 1998. Estimating the effects of the National Voter Registration Act of 1993. Political Behavior 20(1): 79–104.
Holland, Paul W. 1986. Statistics and causal inference. Journal of the American Statistical Association 81: 945–60.
Imbens, Guido W. 2003. Sensitivity to exogeneity assumptions in program evaluation. American Economic Review Papers and Proceedings 93(2): 126–32.
Imbens, Guido W. 2010. Better LATE than nothing: Some comments on Deaton (2009) and Heckman and Urzua (2009). Journal of Economic Literature 48(2): 399–423.
Imbens, Guido W., and Angrist, Joshua D. 1994. Identification and estimation of local average treatment effects. Econometrica 62(2): 467–76.
Imbens, Guido W., and Kalyanaraman, Karthik. 2012. Optimal bandwidth choice for the regression discontinuity estimator. Review of Economic Studies 79(3): 933–59.
Imbens, Guido W., and Lemieux, Thomas. 2008. Regression discontinuity designs: A guide to practice. Journal of Econometrics 142(2): 615–35.
Keele, Luke J. 2012. Replication data for: How much is Minnesota like Wisconsin? Assumptions and counterfactuals in causal inference with observational data. http://hdl.handle.net/1902.1/19190, IQSS Dataverse Network [Distributor] V1 [Version]. (accessed December 17, 2012).
Knack, Stephen. 2001. Election-day registration: The second wave. American Politics Research 29: 65–78.
Lee, David S. 2008. Randomized experiments from non-random selection in U.S. House elections. Journal of Econometrics 142(2): 675–97.
Lee, David S., and Lemieux, Thomas. 2010. Regression discontinuity designs in economics. Journal of Economic Literature 48(2): 281–355.
Manski, Charles F. 1990. Nonparametric bounds on treatment effects. American Economic Review Papers and Proceedings 80(2): 319–23.
Manski, Charles F. 1995. Identification problems in the social sciences. Cambridge, MA: Harvard University Press.
Manski, Charles F. 1997. Monotone treatment response. Econometrica 65(5): 1311–34.
Manski, Charles F. 2007. Identification for prediction and decision. Cambridge, MA: Harvard University Press.
McCrary, Justin. 2008. Manipulation of the running variable in the regression discontinuity design: A density test. Journal of Econometrics 142(2): 698–714.
Mill, John Stuart. 1867. A system of logic: The principles of evidence and the methods of scientific investigation. New York: Harper & Brothers.
Mitchell, Glenn E., and Wlezien, Christopher. 1995. Voter registration and election laws in the United States, 1972–1992. Inter-University Consortium for Political and Social Research 6496.
Rhine, S. L. 1995. Registration reform and turnout change in American states. American Politics Quarterly 23: 409–27.
Rosenbaum, Paul R. 2002a. Attributing effects to treatment in matched observational studies. Journal of the American Statistical Association 97(457): 1–10.
Rosenbaum, Paul R. 2002b. Observational studies. 2nd ed. New York: Springer.
Rosenbaum, Paul R. 2005a. Heterogeneity and causality: Unit heterogeneity and design sensitivity in observational studies. American Statistician 59(2): 147–52.
Rosenbaum, Paul R. 2005b. Observational study. In Encyclopedia of statistics in behavioral science, eds. Everitt, Brian S., and Howell, David C., Vol. 3, 1451–62. Hoboken, NJ: John Wiley and Sons.
Rosenbaum, Paul R. 2010. Design of observational studies. New York: Springer.
Rosenbaum, Paul R., and Silber, Jeffrey H. 2009. Amplification of sensitivity analysis in matched observational studies. Journal of the American Statistical Association 104(488): 1398–405.
Rubin, Donald B. 1974. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology 66: 688–701.
Rubin, Donald B. 1978. Bayesian inference for causal effects: The role of randomization. Annals of Statistics 6: 34–58.
Rusk, Jerrold G. 2001. A statistical history of the American electorate. Washington, DC: Congressional Quarterly Press.
Sekhon, Jasjeet S. 2011. Multivariate and propensity score matching software with automated balance optimization: The matching package for R. Journal of Statistical Software 42(7): 1–52.
Sekhon, Jasjeet S., and Diamond, Alexis. Forthcoming. Genetic matching for estimating causal effects: A general multivariate matching method for achieving balance in observational studies. Review of Economics and Statistics.
Sekhon, Jasjeet S., and Titiunik, Rocío. 2012. When natural experiments are neither natural nor experiments. American Political Science Review 106(1): 35–57.
Smolka, Richard G. 1977. Election day registration: The Minnesota and Wisconsin experience in 1976. Washington, DC: American Enterprise Institute for Public Policy Research.
Sovey, Allison J., and Green, Donald P. 2011. Instrumental variables estimation in political science: A readers' guide. American Journal of Political Science 55(1): 188–200.
Teixeira, Ruy A. 1992. The disappearing American voter. Washington, DC: Brookings.
Timpone, Richard J. 1998. Structure, behavior, and voter turnout in the United States. American Political Science Review 92(1): 145–58.
Wolfinger, Raymond E., and Rosenstone, Steven J. 1980. Who votes? New Haven, CT: Yale University Press.