
The replicability revolution

Published online by Cambridge University Press:  27 July 2018

Ulrich Schimmack*
Affiliation:
Department of Psychology, University of Toronto, Mississauga, ON L5L 1C6, Canada. [email protected]; https://replicationindex.wordpress.com/

Abstract

Psychology is in the middle of a replicability revolution. High-profile replication studies have produced a large number of replication failures. The main reason why replication studies in psychology often fail is that original studies were selected for significance. If all studies were reported, original studies would fail to produce significant results as often as replication studies. Replications would be less contentious if original results were not selected for significance.
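The abstract's central claim can be illustrated with a small simulation (an illustrative sketch, not part of the commentary itself; the effect-size parameter `mu` and sample counts are assumptions chosen for demonstration). If only significant originals are published, every published original is significant by construction, yet independent replications of those same studies succeed only at the design's true power.

```python
# Illustrative sketch (not from the commentary): selection for significance
# guarantees 100% significant originals, while independent replications of
# the selected studies succeed only at the study's true statistical power.
import random


def simulate_selection(mu=1.0, n_studies=100_000, alpha_z=1.96, seed=42):
    """mu is the assumed noncentrality of the test statistic.

    Returns (overall significance rate across all studies,
             significance rate among replications of selected originals).
    """
    rng = random.Random(seed)
    selected = 0
    rep_hits = 0
    for _ in range(n_studies):
        z_orig = rng.gauss(mu, 1.0)
        if abs(z_orig) > alpha_z:           # publication filter: selection for significance
            selected += 1
            z_rep = rng.gauss(mu, 1.0)      # independent replication, same true effect
            if abs(z_rep) > alpha_z:
                rep_hits += 1
    return selected / n_studies, rep_hits / selected


if __name__ == "__main__":
    power, rep_rate = simulate_selection()
    print(f"true power ~ {power:.2f}; replication success rate ~ {rep_rate:.2f}")
```

Because the replication draws are independent of the selection step, the replication success rate simply recovers the true power, which is far below the 100% significance rate of the selected originals. This matches the abstract's point: if all studies were reported, originals would fail at the same rate replications do.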

Type: Open Peer Commentary

Copyright © Cambridge University Press 2018

