
Editor's Corner

Published online by Cambridge University Press: 07 May 2024

Debra L. Martin, Department of Anthropology, University of Nevada, Las Vegas, NV, USA

Copyright © The Author(s), 2024. Published by Cambridge University Press on behalf of the Society for American Archaeology

Introducing Our Associate Editor for Reproducibility, Dr. Alan Farahani, Editorial Board Member

This year we are introducing a new option for American Antiquity (AAQ) authors of accepted articles: to have their quantitative or data-rich research officially (and computationally) reproduced and to have that successful reproduction publicly recognized. The proposal to have a board member take on the responsibilities of this initiative was discussed extensively by the entire AAQ Editorial Board and was unanimously approved. Dr. Alan Farahani (founder, Sci-Scope Solutions), who has extensive experience in computational methods and data analysis in archaeology and anthropology, has agreed to serve as the first Associate Editor for Reproducibility (AER). He will hold this position for at least a year; we see it as a rotating position that will tap others with the expertise and commitment to data transparency.

The initiative is entirely optional and is available only for AAQ papers that are accepted for publication. After acceptance, authors can voluntarily submit their free- and open-source code, and the datasets it depends on, for any number of specified figures or analyses. The AER will then attempt to reproduce those specific figures and analyses within a set number of days. Analyses that do reproduce will be acknowledged via a “reproducibility statement.” This statement will name the individuals responsible for creating the analyses and figures and will follow this format: “The Associate Editor for Reproducibility downloaded all materials relevant to Figures 4 and 6, and the analyses presented in Table 2, and was able to reproduce the results as presented by the authors.”
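To make the submission concrete, below is a minimal sketch of the kind of self-contained script an author might include: one file that reads an archived dataset and regenerates a single figure with no manual steps in between. The file names, column names, and figure number are hypothetical, and the journal does not prescribe any particular language or structure; this is simply an illustration, written here in Python.

```python
# Hypothetical example of a self-contained, re-runnable analysis script.
# File and column names are illustrative only and are not part of the
# AAQ submission requirements.
import pandas as pd
import matplotlib.pyplot as plt


def make_figure_4(data_path="data/sherd_counts.csv",
                  out_path="figures/figure_4.png"):
    """Regenerate the (hypothetical) Figure 4 from the archived dataset."""
    df = pd.read_csv(data_path)
    # Example analysis step: total sherd counts per excavation phase.
    totals = df.groupby("phase")["count"].sum().sort_index()
    ax = totals.plot(kind="bar")
    ax.set_xlabel("Excavation phase")
    ax.set_ylabel("Sherd count")
    plt.tight_layout()
    plt.savefig(out_path, dpi=300)


if __name__ == "__main__":
    make_figure_4()
```

In a package organized this way, the author can state that running the single script from the top of the archived directory regenerates the figure exactly as published, which is what the AER would attempt to verify.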

These data and analyses will then be published alongside the article as supplemental information, unless authors have already made them available through an accessible repository. Analyses whose data cannot be shared for ethical or community-based reasons are still eligible for review; only the final step, data publication, is omitted.

If the results do not reproduce, the AER will work with the author(s) over a set timeframe to identify why the analyses or figures could not be reproduced and to address any issues before publication. No paper will be rejected because of a failure to reproduce unless significant problems with the data are identified.

What Are the Benefits of Instituting an Associate Editor for Reproducibility?

One of the primary reasons for this new position is to facilitate approaches that foster transparency and integrity in quantitative research, approaches that journals across the social (Lindsay 2023), natural (Powers and Hampton 2018), and physical (Van de Lindt and Narasimhan 2024) sciences are now embracing. Many researchers, in disciplines ranging from social psychology to cancer biology, have noted a “crisis of reproducibility” (Baker 2016): many highly cited research results have failed to reproduce, where reproducibility, following the 2019 National Academies consensus study, means “obtaining consistent computational results using the same input data, computational steps, methods, code, and conditions of analysis” (National Academies of Sciences, Engineering, and Medicine 2019:1). Such failures may stem from several factors: mundane errors, specific choices made during data analysis (Oza 2023), or even intentional data fabrication (Simonsohn et al. 2021). Regardless of the causes, the inability of other researchers to reproduce results, whether because of inaccessible data (Stodden et al. 2018) or other reasons, undermines peer and stakeholder trust in quantitative and empirical archaeological research. Transparency in our analyses reaffirms our collective commitment to making the research enterprise a community effort that depends on the cumulative aggregation of knowledge shared by our peers.
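Under that definition, a reproducibility check is ultimately a comparison: re-running the same code on the same data should return the published values. One way such a comparison might be recorded is sketched below; the statistic names, values, and tolerance are all hypothetical and are not part of the AAQ procedure.

```python
# Sketch of comparing re-run outputs to published values within a small
# numerical tolerance. All names and numbers here are illustrative only.
import math

published = {"mean_rim_diameter_cm": 14.2, "chi_square": 9.81, "p_value": 0.007}
rerun = {"mean_rim_diameter_cm": 14.2, "chi_square": 9.809, "p_value": 0.007}


def check_reproduction(published, rerun, rel_tol=1e-3):
    """Compare each re-run statistic to its published value within a relative tolerance."""
    return {name: math.isclose(value, rerun[name], rel_tol=rel_tol)
            for name, value in published.items()}


for name, ok in check_reproduction(published, rerun).items():
    print(f"{name}: {'reproduced' if ok else 'not reproduced'}")
```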

We emphasize again that the reproducibility review is entirely optional. But its benefits are many! In lengthy multiauthored publications, any author or authors who have taken the time to make their data and analyses reproducible will be acknowledged. The reproducibility statement is, in effect, not only public recognition of the hard work a team has done to make its results reproducible but also a demonstration to our research community that accessible routines and data matter for the archaeological (and scientific) process, and a commitment to transparent empirical research conducted with integrity.

This initiative is part of the AAQ mission to be more inclusive (see Editor's Corner, vol. 89, no. 1). It acknowledges that there are many kinds of archaeologies and archaeologists, including those who value sharing their data and creating reproducible analyses. This aligns with a greater emphasis on making archaeological data FAIR: findable, accessible, interoperable, and reusable (Nicholson et al. 2023). It will enable our research community to have the tools and resources to address large-scale questions that are dependent on a variety of datasets. Finally, it furthers the call for “open science” in archaeology (Marwick et al. 2017) by celebrating, via acknowledgment, all those individuals (Marwick 2022) and institutions (Begley et al. 2015) committed to it.

References Cited

Baker, Monya. 2016. 1,500 Scientists Lift the Lid on Reproducibility. Nature 533(7604):452–454. https://doi.org/10.1038/533452a.
Begley, C. Glenn, Buchan, Alastair M., and Dirnagl, Ulrich. 2015. Robust Research: Institutions Must Do Their Part for Reproducibility. Nature 525:25–27. https://doi.org/10.1038/525025a.
Lindsay, D. Stephen. 2023. A Plea to Psychology Professional Societies that Publish Journals: Assess Computational Reproducibility. Meta-Psychology 7. https://doi.org/10.15626/MP.2023.4020.
Marwick, Ben. 2022. CRAN Task View: Archaeological Science. GitHub. https://github.com/benmarwick/ctv-archaeology?tab=readme-ov-file, accessed April 24, 2024.
Marwick, Ben, d'Alpoim Guedes, Jade, Barton, C. Michael, Bates, Lynsey A., Baxter, Michael, Bevan, Andrew, Bollwerk, Elizabeth A., et al. 2017. Open Science in Archaeology. SAA Archaeological Record 17(4):8–14.
National Academies of Sciences, Engineering, and Medicine. 2019. Reproducibility and Replicability in Science. National Academies Press, Washington, DC. https://doi.org/10.17226/25303.
Nicholson, Christopher, Kansa, Sarah, Gupta, Neha, and Fernandez, Rachel. 2023. Will It Ever Be FAIR? Making Archaeological Data Findable, Accessible, Interoperable, and Reusable. Advances in Archaeological Practice 11(1):63–75. https://doi.org/10.1017/aap.2022.40.
Oza, Anil. 2023. Reproducibility Trial: 246 Biologists Get Different Results from Same Data Sets. Nature 622:677–678. https://doi.org/10.1038/d41586-023-03177-1.
Powers, Stephen M., and Hampton, Stephanie E. 2018. Open Science, Reproducibility, and Transparency in Ecology. Ecological Applications 29(1):e01822. https://doi.org/10.1002/eap.1822.
Simonsohn, Uri, Simmons, Joe, and Nelson, Leif. 2021. [98] Evidence of Fraud in an Influential Field Experiment About Dishonesty. DataColada (blog), August 17. https://datacolada.org/98, accessed April 24, 2024.
Stodden, Victoria, Seiler, Jennifer, and Ma, Zhaokun. 2018. An Empirical Analysis of Journal Policy Effectiveness for Computational Reproducibility. PNAS 115(11):2584–2589. https://doi.org/10.1073/pnas.1708290115.
Van de Lindt, John W., and Narasimhan, Sriram. 2024. Reproducibility in Journal of Structural Engineering Papers. Journal of Structural Engineering 150(4). https://doi.org/10.1061/JSENDH.STENG-13725.