
8 - Quasi-Experimental Designs

from Part II - Basic Design Considerations to Know, No Matter What Your Research Is About

Published online by Cambridge University Press: 12 December 2024

Harry T. Reis, University of Rochester, New York
Tessa West, New York University
Charles M. Judd, University of Colorado Boulder

Summary

A quasi-experiment is a study that attempts to mimic the objectives and structure of a traditional (randomized) experiment. Quasi-experiments differ from randomized experiments in one crucial respect: condition assignment is not randomized. This chapter reviews conceptual, methodological, and practical issues that arise in the design, implementation, and interpretation of quasi-experiments. It begins by highlighting similarities and differences among quasi-experiments, randomized experiments, and nonexperimental studies. Next, it provides a framework for discussing the relative strengths and weaknesses of the different study types. The chapter then examines traditional threats to causal inference in each type of study and reviews the most common quasi-experimental designs, showing how each attempts to yield accurate estimates of the causal impact of independent variables. The chapter concludes with a discussion of how quasi-experiments might be integrated with other study types to produce richer insights.
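The assignment-mechanism distinction the summary draws can be made concrete with a brief simulation. The sketch below is not from the chapter; the effect size, the cutoff, and all variable names are illustrative assumptions. It contrasts a randomized experiment with a cutoff-based quasi-experiment (as in the regression discontinuity design): under randomization a simple difference in group means recovers the true treatment effect, whereas under cutoff-based assignment the same comparison is confounded by the assignment variable.

```python
# Minimal sketch, not from the chapter: why randomized and cutoff-based
# assignment lead to different naive estimates. The effect size, cutoff,
# and variable names are illustrative assumptions.
import random

random.seed(42)
N = 10_000
TRUE_EFFECT = 2.0  # assumed causal effect of treatment on the outcome

# Each unit has a pretest score that also influences the outcome, so it
# acts as a confounder whenever assignment depends on it.
pretest = [random.gauss(50, 10) for _ in range(N)]

def outcome(pre, treated):
    return 0.5 * pre + TRUE_EFFECT * treated + random.gauss(0, 1)

def mean_difference(assign):
    """Naive estimate: difference in mean outcomes, treated minus control."""
    groups = [(p, assign(p)) for p in pretest]  # assign each unit once
    treated = [outcome(p, 1) for p, a in groups if a]
    control = [outcome(p, 0) for p, a in groups if not a]
    return sum(treated) / len(treated) - sum(control) / len(control)

# Randomized experiment: assignment is independent of the pretest, so the
# naive mean difference recovers TRUE_EFFECT (about 2.0 here).
print("randomized:  ", round(mean_difference(lambda p: random.random() < 0.5), 2))

# Quasi-experiment: units above the cutoff receive treatment. Treated units
# now also have higher pretests, so the naive difference is badly inflated
# (about 10 here) even though the true effect is unchanged.
print("cutoff-based:", round(mean_difference(lambda p: p > 50), 2))
```

In the cutoff-based case, accurate estimation requires modeling the relation between the assignment variable and the outcome rather than comparing raw group means, which is precisely what the quasi-experimental designs reviewed in the chapter aim to do.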

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2024


