
How can experiments play a greater role in public policy? Twelve proposals from an economic model of scaling

Published online by Cambridge University Press: 24 July 2020

OMAR AL-UBAYDLI
Affiliation:
Bahrain Center for Strategic, International and Energy Studies, Manama, Bahrain; Department of Economics and the Mercatus Center, George Mason University, Fairfax, VA, USA; College of Industrial Management, King Fahad University of Petroleum and Minerals, Dhahran, Saudi Arabia
MIN SOK LEE*
Affiliation:
Kenneth C. Griffin Department of Economics, University of Chicago, Chicago, IL, USA
JOHN A. LIST
Affiliation:
Kenneth C. Griffin Department of Economics, University of Chicago, Chicago, IL, USA; The Australian National University, Canberra, Australia; NBER, Cambridge, MA, USA
CLAIRE L. MACKEVICIUS
Affiliation:
School of Education and Social Policy, Northwestern University, Evanston, IL, USA
DANA SUSKIND
Affiliation:
Professor of Surgery and Pediatrics, University of Chicago, Chicago, IL, USA; Co-Director, TMW Center for Early Learning + Public Health, University of Chicago, Chicago, IL, USA
*
*Correspondence to: Kenneth C. Griffin Department of Economics, University of Chicago, 1126 E. 59th Street, Chicago, IL 60637, USA. E-mail: [email protected]

Abstract

Policymakers are increasingly turning to insights gained from the experimental method as a means to inform large-scale public policies. Critics view this increased usage as premature, pointing out that many experimentally tested programs fail to deliver on their promise at scale. Under this view, the experimental approach drives too much public policy. Yet, if policymakers could be more confident that the original research findings would be delivered at scale, even the staunchest critics would carve out a larger role for experiments to inform policy. Leveraging the economic framework of Al-Ubaydli et al. (2019), we put forward 12 simple proposals, spanning researchers, policymakers, funders and stakeholders, which together tackle the most vexing scalability threats. The framework highlights that only after we deepen our understanding of the scale-up problem will we be on solid ground to argue that scientific experiments should hold a more prominent place in the policymaker's quiver.

Type
Articles
Copyright
Copyright © The Author(s) 2020. Published by Cambridge University Press

References

Achilles, C. M. (1993), The Lasting Benefits Study (LBS) in Grades 4 and 5 (1990–1991): A Legacy from Tennessee's Four-Year (K-3) Class-Size Study (1985–1989), Project STAR. Paper No. 7.
Akram, A. A., Chowdhury, S. and Mobarak, A. M. (2017), Effects of Emigration on Rural Labor Markets (Working Paper No. 23929). https://doi.org/10.3386/w23929
Aldashev, G., Kirchsteiger, G. and Sebald, A. (2017), ‘Assignment procedure biases in randomised policy experiments’, The Economic Journal, 127(602): 873–895.
Allcott, H. (2015), ‘Site selection bias in program evaluation’, The Quarterly Journal of Economics, 130(3): 1117–1165.
Al-Ubaydli, O. and List, J. A. (2013), ‘On the Generalizability of Experimental Results in Economics’, in Frechette, G. and Schotter, A. (eds), Methods of Modern Experimental Economics, Oxford University Press.
Al-Ubaydli, O., List, J. A., LoRe, D. and Suskind, D. (2017), ‘Scaling for economists: lessons from the non-adherence problem in the medical literature’, Journal of Economic Perspectives, 31(4): 125–144. https://doi.org/10.1257/jep.31.4.125
Al-Ubaydli, O., List, J. A. and Suskind, D. (2019), The Science of Using Science: Towards an Understanding of the Threats to Scaling Experiments (Working Paper No. 25848).
Al-Ubaydli, O., List, J. A. and Suskind, D. L. (2017), ‘What can we learn from experiments? Understanding the threats to the scalability of experimental results’, American Economic Review, 107(5): 282–286. https://doi.org/10.1257/aer.p20171115
Andrews, I. and Kasy, M. (2017), Identification of and Correction for Publication Bias (Working Paper No. 23298).
Angrist, J. D., Dynarski, S. M., Kane, T. J., Pathak, P. A. and Walters, C. R. (2012), ‘Who benefits from KIPP?’, Journal of Policy Analysis and Management, 31(4): 837–860.
Ashraf, N., Bandiera, O. and Lee, S. S. (2018), Losing Prosociality in the Quest for Talent? Sorting, Selection, and Productivity in the Delivery of Public Services (Working Paper).
August, G. J., Bloomquist, M. L., Lee, S. S., Realmuto, G. M. and Hektner, J. M. (2006), ‘Can evidence-based prevention programs be sustained in community practice settings? The Early Risers’ advanced-stage effectiveness trial’, Prevention Science, 7(2): 151–165.
Banerjee, A., Banerji, R., Berry, J., Duflo, E., Kannan, H., Mukerji, S. and Walton, M. (2017), ‘From proof of concept to scalable policies: challenges and solutions, with an application’, Journal of Economic Perspectives, 31(4): 73–102.
Banerjee, A., Barnhardt, S. and Duflo, E. (2015a), Movies, Margins and Marketing: Encouraging the Adoption of Iron-Fortified Salt (Working Paper No. 21616).
Banerjee, A., Karlan, D. and Zinman, J. (2015b), ‘Six randomized evaluations of microcredit: introduction and further steps’, American Economic Journal: Applied Economics, 7(1): 1–21.
Bauer, M. S., Damschroder, L., Hagedorn, H., Smith, J. and Kilbourne, A. M. (2015), ‘An introduction to implementation science for the non-specialist’, BMC Psychology, 3, 32.
Bell, S. H. and Stuart, E. A. (2016), ‘On the “where” of social experiments: the nature and extent of the generalizability problem’, New Directions for Evaluation, 2016(152): 47–59.
Bettis, R. A. (2012), ‘The search for asterisks: compromised statistical tests and flawed theories’, Strategic Management Journal, 33(1): 108–113.
Bold, T., Kimenyi, M., Mwabu, G., Ng'ang’a, A. and Sandefur, J. (2013), ‘Scaling up what works: experimental evidence on external validity in Kenyan education’, Center for Global Development Working Paper, (321).
Buera, F. J., Kaboski, J. P. and Shin, Y. (2012), The Macroeconomics of Microfinance (Working Paper No. 17905).
Butera, L. and List, J. A. (2017), An Economic Approach to Alleviate the Crises of Confidence in Science: With an Application to the Public Goods Game (Working Paper No. w23335). National Bureau of Economic Research.
Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J. and Munafò, M. R. (2013), ‘Power failure: why small sample size undermines the reliability of neuroscience’, Nature Reviews Neuroscience, 14(5): 365–376.
Camerer, C. F., Dreber, A., Forsell, E., Ho, T.-H., Huber, J., Johannesson, M., Kirchler, M., Almenberg, J., Altmejd, A., Chan, T., Heikensten, E., Holzmeister, F., Imai, T., Isaksson, S., Nave, G., Pfeiffer, T., Razen, M. and Wu, H. (2016), ‘Evaluating replicability of laboratory experiments in economics’, Science, 351(6280): 1433–1436.
Chambers, D. A., Glasgow, R. E. and Stange, K. (2013), ‘The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change’, Implementation Science, 8(117).
Chen, Y. and Yang, D. Y. (2019), ‘The impact of media censorship: 1984 or brave new world?’, American Economic Review, 109(6): 2294–2332.
Cheng, S., McDonald, E. J., Cheung, M. C., Arciero, V. S., Qureshi, M., Jiang, D., … Chan, K. K. W. (2017), ‘Do the American Society of Clinical Oncology Value Framework and the European Society of Medical Oncology Magnitude of Clinical Benefit Scale measure the same construct of clinical benefit?’, Journal of Clinical Oncology, 35(24): 2764–2771.
Christensen, G. and Miguel, E. (2018), ‘Transparency, reproducibility, and the credibility of economics research’, Journal of Economic Literature, 56(3): 920–980.
Cook, T. and Campbell, D. (1979), Quasi-Experimentation: Design and Analysis Issues for Field Settings, Boston, MA: Houghton Mifflin.
Cooper, C. L., Hind, D., Duncan, R., Walters, S., Lartey, A., Lee, E. and Bradburn, M. (2015), ‘A rapid review indicated higher recruitment rates in treatment trials than in prevention trials’, Journal of Clinical Epidemiology, 68(3): 347–354.
Crépon, B., Duflo, E., Gurgand, M., Rathelot, R. and Zamora, P. (2013), ‘Do labor market policies have displacement effects? Evidence from a clustered randomized experiment’, The Quarterly Journal of Economics, 128(2): 531–580.
Crosse, S., Williams, B., Hagen, C. A., Harmon, M., Ristow, L., DiGaetano, R., … Derzon, J. H. (2011), Prevalence and Implementation Fidelity of Research-Based Prevention Programs in Public Schools. Final Report. Office of Planning, Evaluation and Policy Development, US Department of Education.
Czibor, E., Jimenez-Gomez, D. and List, J. A. (2019), The Dozen Things Experimental Economists Should Do (More Of) (SSRN Scholarly Paper No. ID 3313734).
Davies, P. (2012), ‘The state of evidence-based policy evaluation and its role in policy formation’, National Institute Economic Review, 219(1): R41–R52.
Davis, J. M. V., Guryan, J., Hallberg, K. and Ludwig, J. (2017), The Economics of Scale-Up (Working Paper No. 23925).
Deaton, A. and Cartwright, N. (2018), ‘Understanding and misunderstanding randomized controlled trials’, Social Science & Medicine, 210, 2–21.
Deke, J. and Finucane, M. (2019), Moving Beyond Statistical Significance: The BASIE (BAyeSian Interpretation of Estimates) Framework for Interpreting Findings from Impact Evaluations (OPRE Report 2019-35). Washington, DC: Office of Planning, Research, and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services.
Duflo, E., Dupas, P. and Kremer, M. (2017), The Impact of Free Secondary Education: Experimental Evidence from Ghana. Massachusetts Institute of Technology Working Paper, Cambridge, MA.
Ferracci, M., Jolivet, G. and van den Berg, G. J. (2013), ‘Evidence of treatment spillovers within markets’, The Review of Economics and Statistics, 96(5): 812–823.
Fiorina, M. P. and Plott, C. R. (1978), ‘Committee decisions under majority rule: an experimental study’, American Political Science Review, 72, 575–598.
Freedman, S., Friedlander, D., Lin, W. and Schweder, A. (1996), The GAIN Evaluation: Five-Year Impacts on Employment, Earnings and AFDC Receipt, New York: MDRC.
Freedman, S., Knab, J. T., Gennetian, L. A. and Navarro, D. (2000), The Los Angeles Jobs-First GAIN Evaluation: Final Report on a Work First Program in a Major Urban Center.
Friedlander, D., Hoetz, G., Long, D. and Quint, J. (1985), Maryland: Final Report on the Employment Initiatives Evaluation, New York, NY: MDRC.
Fryer, R. G., Levitt, S. D. and List, J. A. (2015), Parental Incentives and Early Childhood Achievement: A Field Experiment in Chicago Heights (Working Paper No. w21477). National Bureau of Economic Research.
Gelman, A. and Carlin, J. (2014), ‘Beyond power calculations: assessing type S (sign) and type M (magnitude) errors’, Perspectives on Psychological Science, 9(6): 641–651.
Gilraine, M., Macartney, H. and McMillan, R. (2018), Education Reform in General Equilibrium: Evidence from California's Class Size Reduction (Working Paper No. 24191).
Glennerster, R. (2017), ‘Chapter 5 – The Practicalities of Running Randomized Evaluations: Partnerships, Measurement, Ethics, and Transparency’, in Banerjee, A. V. and Duflo, E. (eds), Handbook of Economic Field Experiments, Volume 1, 175–243.
Gottfredson, D. C., Cook, T. D., Gardner, F. E. M., Gorman-Smith, D., Howe, G. W., Sandler, I. N. and Zafft, K. M. (2015), ‘Standards of evidence for efficacy, effectiveness, and scale-up research in prevention science: next generation’, Prevention Science, 16(7): 893–926.
Greenland, S., Senn, S. J., Rothman, K. J., Carlin, J. B., Poole, C., Goodman, S. N. and Altman, D. G. (2016), ‘Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations’, European Journal of Epidemiology, 31(4): 337–350.
Hamarsland, G. D. (2012), Cost-Benefit Analysis in Norway. Retrieved from https://www.ntnu.edu/documents/1261865083/1263461278/6_4_Hamarsland.pdf
Harrison, G. W. and List, J. A. (2004), ‘Field experiments’, Journal of Economic Literature, 42(4): 1009–1055.
Heckman, J. J. (2010), ‘Building bridges between structural and program evaluation approaches to evaluating policy’, Journal of Economic Literature, 48(2): 356–398.
Heckman, J. J., Ichimura, H., Smith, J. and Todd, P. (1998a), ‘Characterizing Selection Bias Using Experimental Data’, Econometrica, 66(5): 1017–1098.
Heckman, J. J., Lalonde, R. J. and Smith, J. A. (1999), ‘Chapter 31 – The Economics and Econometrics of Active Labor Market Programs’, in Ashenfelter, O. C. and Card, D. (eds), Handbook of Labor Economics, Volume 3, 1865–2097.
Heckman, J. J., Lochner, L. and Taber, C. (1998b), ‘Explaining rising wage inequality: explorations with a dynamic general equilibrium model of labor earnings with heterogeneous agents’, Review of Economic Dynamics, 1(1): 1–58.
Hippel, P. and Wagner, C. (2018), Does a Successful Randomized Experiment Lead to Successful Policy? Project Challenge and What Happened in Tennessee After Project STAR (SSRN Scholarly Paper No. ID 3153503).
Hitchcock, J., Dimino, J., Kurki, A., Wilkins, C. and Gersten, R. (2011), The Impact of Collaborative Strategic Reading on the Reading Comprehension of Grade 5 Students in Linguistically Diverse Schools. Final Report. NCEE 2011-4001. National Center for Education Evaluation and Regional Assistance.
Horner, R. H., Kinkaid, D., Sugai, G., Lewis, T., Eber, L., Barrett, S., Dickey, C. R., Richter, M., Sullivan, E., Boezio, C., Algozzine, B., Reynolds, H. and Johnson, N. (2014), ‘Scaling up school-wide positive behavioral interventions and supports: experiences of seven states with documented success’, Journal of Positive Behavior Interventions, 16(4): 197–208.
Horsfall, S. and Santa, C. (1985), Project CRISS: Validation Report for the Joint Review and Dissemination Panel. Unpublished manuscript.
Horton, J. J., Rand, D. G. and Zeckhauser, R. J. (2011), ‘The online laboratory: conducting experiments in a real labor market’, Experimental Economics, 14(3): 399–425.
Ioannidis, J. P. A. (2005), ‘Contradicted and initially stronger effects in highly cited clinical research’, JAMA, 294(2): 218–228.
Jennions, M. D. and Moller, A. P. (2001), ‘Relationships fade with time: a meta-analysis of temporal trends in publication in ecology and evolution’, Proceedings of the Royal Society of London, 269(1486): 43–48.
Jepsen, C. and Rivkin, S. (2009), ‘Class size reduction and student achievement: the potential tradeoff between teacher quality and class size’, Journal of Human Resources, 44(1): 223–250.
Karlan, D. and List, J. A. (2007), ‘Does price matter in charitable giving? Evidence from a large-scale natural field experiment’, American Economic Review, 97(5): 1774–1793.
Kerwin, J. and Thornton, R. L. (2018), Making the Grade: The Sensitivity of Education Program Effectiveness to Input Choices and Outcome Measures (SSRN Scholarly Paper No. ID 3002723). Retrieved from Social Science Research Network website.
Kilbourne, A. M., Neumann, M. S., Pincus, H. A., Bauer, M. S. and Stall, R. (2007), ‘Implementing evidence-based interventions in health care: application of the replicating effective programs framework’, Implementation Science, 2, 42.
Kline, P. and Walters, C. R. (2016), ‘Evaluating public programs with close substitutes: the case of Head Start’, Quarterly Journal of Economics, 131(4): 1795–1848.
Knechtel, V., Coen, T., Caronongan, P., Fung, N. and Goble, L. (2017), Pre-Kindergarten Impacts over Time: An Analysis of KIPP Charter Schools, Washington, DC: Mathematica Policy Research.
Komro, K. A., Flay, B. R., Biglan, A. and Wagenaar, A. C. (2016), ‘Research design issues for evaluating complex multicomponent interventions in neighborhoods and communities’, Translational Behavioral Medicine, 6(1): 153–159.
Kushman, J., Hanita, M. and Raphael, J. (2011), An Experimental Study of the Project CRISS Reading Program on Grade 9 Reading Achievement in Rural High Schools. Final Report. NCEE 2011-4007. National Center for Education Evaluation and Regional Assistance.
Levitt, S. D. and List, J. A. (2007), ‘What do laboratory experiments measuring social preferences reveal about the real world?’, Journal of Economic Perspectives, 21(2): 153–174.
Lin, W. and Green, D. P. (2016), ‘Standard operating procedures: a safety net for pre-analysis plans’, PS: Political Science & Politics, 49(3): 495–500.
Lipsey, M. W. (1999), ‘Can rehabilitative programs reduce the recidivism of juvenile offenders? An inquiry into the effectiveness of practical programs’, Virginia Journal of Social Policy & the Law, 6(3): 611–642.
List, J. A. (2004), ‘Neoclassical theory versus prospect theory: evidence from the marketplace’, Econometrica, 72(2): 615–625.
List, J. A. (2006), ‘The behavioralist meets the market: measuring social preferences and reputation effects in actual transactions’, Journal of Political Economy, 114(1): 1–37.
List, J. A. (2007a), ‘Field experiments: a bridge between lab and naturally occurring data’, The B.E. Journal of Economic Analysis & Policy, 6(2): 1–47.
List, J. A. (2007b), ‘On the interpretation of giving in dictator games’, Journal of Political Economy, 115(3): 482–494.
List, J. A. (2011a), ‘The market for charitable giving’, Journal of Economic Perspectives, 25(2): 157–180.
List, J. A. (2011b), ‘Why economists should conduct field experiments and 14 tips for pulling one off’, Journal of Economic Perspectives, 25(3): 3–16.
List, J. A., Momeni, F. and Zenou, Y. (2019), Are Measures of Early Education Programs Too Pessimistic? Evidence from a Large-Scale Field Experiment. Working Paper.
List, J. A., Shaikh, A. M. and Xu, Y. (2016), ‘Multiple hypothesis testing in experimental economics’, Experimental Economics.
Maniadis, Z., Tufano, F. and List, J. A. (2014), ‘One swallow doesn't make a summer: new evidence on anchoring effects’, American Economic Review, 104(1): 277–290.
Miguel, E. and Kremer, M. (2004), ‘Worms: identifying impacts on education and health in the presence of treatment externalities’, Econometrica, 72(1): 159–217.
Muralidharan, K. and Niehaus, P. (2017), ‘Experimentation at scale’, Journal of Economic Perspectives, 31(4): 103–124.
Muralidharan, K. and Sundararaman, V. (2015), ‘The aggregate effect of school choice: evidence from a two-stage experiment in India’, The Quarterly Journal of Economics, 130(3): 1011–1066.
Nosek, B. A., Spies, J. R. and Motyl, M. (2012), ‘Scientific utopia II. Restructuring incentives and practices to promote truth over publishability’, Perspectives on Psychological Science, 7(6): 615–631.
Obama, B. (2013), The Budget Message of the President. Retrieved from https://www.govinfo.gov/content/pkg/BUDGET-2014-BUD/pdf/BUDGET-2014-BUD.pdf
Ogutu, S. O., Fongar, A., Gödecke, T., Jäckering, L., Mwololo, H., Njuguna, M., … Qaim, M. (2018), ‘How to make farming and agricultural extension more nutrition-sensitive: evidence from a randomised controlled trial in Kenya’, European Review of Agricultural Economics, 1–24.
Raudenbush, S. W. and Bloom, H. S. (2015), ‘Learning about and from a distribution of program impacts using multisite trials’, American Journal of Evaluation, 36(4): 475–499.
Riccio, J. (1994), GAIN: Benefits, Costs, and Three-Year Impacts of a Welfare-to-Work Program. California's Greater Avenues for Independence Program.
Rudd, K. (2008), Address to Heads of Agencies and Members of Senior Executive Service, Great Hall, Parliament House, Canberra. Retrieved from https://pmtranscripts.pmc.gov.au/release/transcript-15893
Schumacher, J., Milby, J., Raczynski, J., Engle, M., Caldwell, E. and Carr, J. (1994), ‘Demoralization and Threats to Validity in Birmingham's Homeless Project’, in Conrad, K. (ed), Critically Evaluating the Role of Experiments, Volume 1, San Francisco, CA: Jossey-Bass, 41–44.
Smith, V. (1962), ‘An Experimental Study of Competitive Market Behavior’, Economics Faculty Articles and Research.
Stanley, T. D., Doucouliagos, H., Giles, M., Heckemeyer, J. H., Johnston, R. J., Laroche, P., … Rosenberger, R. S. (2013), ‘Meta-analysis of economics research reporting guidelines’, Journal of Economic Surveys, 27(2): 390–394.
Stuart, E. A., Ackerman, B. and Westreich, D. (2018), ‘Generalizability of randomized trial results to target populations: design and analysis possibilities’, Research on Social Work Practice, 28(5): 532–537.
Stuart, E. A., Bell, S. H., Ebnesajjad, C., Olsen, R. B. and Orr, L. L. (2017), ‘Characteristics of school districts that participate in rigorous national educational evaluations’, Journal of Research on Educational Effectiveness, 10(1): 168–206.
Supplee, L. H. and Meyer, A. L. (2015), ‘The intersection between prevention science and evidence-based policy: how the SPR evidence standards support human services prevention programs’, Prevention Science, 16(7): 938–942.
Supplee, L. H., Kelly, B. C., MacKinnon, D. M. and Barofsky, M. Y. (2013), ‘Introduction to the special issue: subgroup analysis in prevention and intervention research’, Prevention Science, 14(2): 107–110.
Supplee, L. and Metz, A. (2015), Opportunities and Challenges in Evidence-Based Social Policy (No. V27, 4).
Tuttle, C. C., Gill, B., Gleason, P., Knechtel, V., Nichols-Barrer, I. and Resch, A. (2013), KIPP Middle Schools: Impacts on Achievement and Other Outcomes. Final Report. Mathematica Policy Research, Inc.
Tuttle, C. C., Gleason, P., Knechtel, V., Nichols-Barrer, I., Booker, K., Chojnacki, G., … Goble, L. (2015), Understanding the Effect of KIPP as It Scales: Volume I, Impacts on Achievement and Other Outcomes. Final Report of KIPP's “Investing in Innovation Grant Evaluation”. Mathematica Policy Research, Inc.
Vivalt, E. (2016), How Much Can We Generalize from Impact Evaluations? Working Paper.
Voelkl, B., Vogt, L., Sena, E. S. and Würbel, H. (2018), ‘Reproducibility of preclinical animal research improves with heterogeneity of study samples’, PLoS Biology, 16(2): e2003693.
Wacholder, S., Chanock, S., Garcia-Closas, M., El Ghormli, L. and Rothman, N. (2004), ‘Assessing the probability that a positive report is false: an approach for molecular epidemiology studies’, JNCI: Journal of the National Cancer Institute, 96(6): 434–442.
Walsh, E. and Sheridan, A. (2016), ‘Factors affecting patient participation in clinical trials in Ireland: a narrative review’, Contemporary Clinical Trials Communications, 3, 23–31.
Weiss, M. J., Bloom, H. S., Verbitsky-Savitz, N., Gupta, H., Vigil, A. E. and Cullinan, D. N. (2017), ‘How much do the effects of education and training programs vary across sites? Evidence from past multisite randomized trials’, Journal of Research on Educational Effectiveness, 10(4): 843–876. https://doi.org/10.1080/19345747.2017.1300719
Young, N. S., Ioannidis, J. P. A. and Al-Ubaydli, O. (2008), ‘Why current publication practices may distort science’, PLoS Medicine, 5(10): e201.
Zullig, L. L., Peterson, E. D. and Bosworth, H. B. (2013), ‘Ingredients of successful interventions to improve medication adherence’, JAMA, 310(24): 2611–2612.