Scalable Protocols Offer Efficient Design for Field Experiments

Published online by Cambridge University Press: 04 January 2017

David W. Nickerson
Department of Political Science, University of Notre Dame, 217 O'Shaughnessy Hall, Notre Dame, IN 46556. e-mail: [email protected]

Abstract

Experiments conducted in the field allay concerns over external validity but are subject to the pitfalls of fieldwork. This article proves that scalable protocols conserve statistical efficiency in the face of problems implementing the treatment regime. Three designs are considered: randomly ordering the application of the treatment; matching subjects into groups prior to assignment; and placebo-controlled experiments. Three examples taken from voter mobilization field experiments demonstrate the utility of the design principles discussed.
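To make the first two designs concrete, the following is a minimal sketch in Python. The subject records, covariate name, and function names are illustrative assumptions, not the article's own code. The idea it demonstrates: if the treatment list is randomly ordered before fieldwork begins, the treated prefix is a random sample of all subjects however far the effort gets; and if subjects are matched into pairs before assignment, any pair whose treatment goes undelivered can be dropped without unbalancing treatment and control.

import random

def random_order_protocol(subjects, seed=1):
    # Shuffle the full list before fieldwork begins. Treatment is
    # applied in this order, so wherever the effort stops, the
    # treated prefix is still a random sample of all subjects.
    rng = random.Random(seed)
    order = list(subjects)
    rng.shuffle(order)
    return order

def matched_pair_assignment(subjects, key, seed=1):
    # Sort on a pre-treatment covariate, pair adjacent subjects, and
    # flip a coin within each pair. An odd final subject is left
    # unpaired and excluded. If a pair's treatment is never delivered,
    # the whole pair can be dropped while preserving balance.
    rng = random.Random(seed)
    ordered = sorted(subjects, key=lambda s: s[key])
    treatment, control = [], []
    for i in range(0, len(ordered) - 1, 2):
        a, b = ordered[i], ordered[i + 1]
        if rng.random() < 0.5:
            a, b = b, a
        treatment.append(a)
        control.append(b)
    return treatment, control

# Illustrative data: hypothetical voters scored by past turnout.
voters = [{"id": i, "past_turnout": p}
          for i, p in enumerate([0.2, 0.9, 0.4, 0.7, 0.5, 0.3])]
walk_list = random_order_protocol(voters)  # canvass in this order
treat, ctrl = matched_pair_assignment(voters, "past_turnout")

The seeded random generator is a deliberate choice in this sketch: it makes the assignment reproducible, so the realized randomization can be recorded and audited after the fieldwork is done.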

Type: Research Article

Copyright © The Author 2005. Published by Oxford University Press on behalf of the Society for Political Methodology.
