
Conceptualizing and evaluating replication across domains of behavioral research

Published online by Cambridge University Press: 27 July 2018

Jennifer L. Tackett
Affiliation: Psychology Department, Northwestern University, Evanston, IL 60208. [email protected] https://www.jltackett.com/
Blakeley B. McShane
Affiliation: Kellogg School of Management, Northwestern University, Evanston, IL 60208. [email protected] https://www.blakemcshane.com/

Abstract

We discuss the authors' conceptualization of replication, in particular the false dichotomy of direct versus conceptual replication intrinsic to it, and suggest a broader one that better generalizes to other domains of psychological research. We also discuss their approach to the evaluation of replication results and suggest moving beyond their dichotomous statistical paradigms and employing hierarchical/meta-analytic statistical models.
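For readers unfamiliar with hierarchical/meta-analytic statistical models, the sketch below illustrates one simple instance: a random-effects meta-analysis fit with the DerSimonian-Laird estimator of between-study heterogeneity. It is a minimal illustration only; the study estimates, variances, and variable names are hypothetical assumptions, not results or methods taken from the commentary itself.

    # Minimal sketch of a random-effects meta-analysis (hypothetical data),
    # one common hierarchical/meta-analytic model, using the
    # DerSimonian-Laird estimator of between-study heterogeneity.
    import numpy as np

    # Hypothetical replication results: study-level effect estimates and
    # their sampling variances (illustrative values only).
    y = np.array([0.30, 0.12, 0.45, 0.05, 0.22])
    v = np.array([0.02, 0.03, 0.04, 0.01, 0.02])

    # Fixed-effect weights and pooled estimate, used to compute Cochran's Q.
    w = 1.0 / v
    theta_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - theta_fe) ** 2)

    # DerSimonian-Laird estimate of the between-study variance tau^2.
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / c)

    # Random-effects weights, pooled effect estimate, and standard error.
    w_re = 1.0 / (v + tau2)
    theta_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))

    print(f"tau^2 = {tau2:.3f}, pooled effect = {theta_re:.3f} (SE = {se_re:.3f})")

Unlike a dichotomous significant/non-significant verdict on each replication, this approach summarizes the full set of studies by an overall effect and an explicit estimate of between-study heterogeneity.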

Type: Open Peer Commentary

Copyright © Cambridge University Press 2018

