Research transparency

Open Practice Badges

We recognise best practices in open research by awarding Open Practice Badges to authors who openly share the data and materials underpinning their research, or who have preregistered their research plans.

Badges are awarded by author declaration. You will be asked during the submission process to confirm whether or not you have met the criteria for each badge.

Open Science Badges are an initiative of the Center for Open Science – a non-profit aiming to increase the openness, integrity, and reproducibility of scientific research – and have been adopted by a number of leading journals in different disciplines.

AAP is following the disclosure model in its award of the Open Data and Open Materials badges: authors affirm that they meet the badge criteria through the submission system and their use of the Data Availability Statement. AAP, as the awarding journal, makes a cursory evaluation of the data and materials. This includes checking that the provided link leads to the data or materials in an open repository, that they look appropriate, and that they relate to the article. AAP does not perform a full peer review of the data or materials. The onus is on authors to follow the criteria for each badge, and they are accountable to the community for the accuracy of their statements.

In applying for the Open Data badge authors are disclosing and confirming that:

  • They have provided the URL, DOI, or other permanent path for accessing the data in a public, open access repository in the Data Availability Statement.
  • There is sufficient information for an independent researcher to reproduce the reported results.

In applying for the Open Materials badge authors are disclosing and confirming that:

  • They have provided the URL, DOI, or other permanent path for accessing the materials in a public, open access repository in the Data Availability Statement.
  • There is sufficient information for an independent researcher to reproduce the reported methodology.


Reproducibility Review

Advances in Archaeological Practice (henceforth Advances) is pleased to offer authors of accepted articles the opportunity to have their quantitative or data-rich research officially (and computationally) reproduced, and to have its successful reproduction publicly recognized via a “reproducibility statement” on their manuscript. Participation in this initiative is entirely voluntary.

The following guide describes what the “reproducibility review” process entails for Advances and what steps authors must take to prepare their reproducibility “artifacts” for review. It is highly recommended that authors who wish to take part in reproducibility review format their code, data, and resources according to the steps outlined below before submission.

What is reproducibility?

The definition of reproducibility employed by Advances follows the 2019 National Research Council Consensus Study, where reproducibility means “obtaining consistent computational results using the same input data, computational steps, methods, code, and conditions of analysis.” In the case of published archaeological research, reproducibility allows other researchers to re-create published results once given the code / tools and data needed to do so.

Replicability, on the other hand, is defined as “obtaining consistent results across studies aimed at answering the same scientific question, each of which has obtained its own data.” In practical terms for archaeological research, “replicability” involves, for example, re-analysis of the same objects (lithics, faunal remains, etc.) and / or re-running particular kinds of chemical or physical analyses by different individuals or laboratories.

The reproducibility approach at Advances is focused on results reproducibility, while study replicability is currently outside the scope of review.

Why reproducibility?

One of the aims of this initiative is to facilitate transparency and integrity in quantitative archaeological research (Marwick 2017; Marwick et al. 2020). Incorporating results reproducibility into publication is being embraced by journals across the social, natural, and physical sciences, including the Journal of Archaeological Science and the Journal of Field Archaeology.

Researchers across several quantitative disciplines have found that some high-profile published research results could not be re-created even after the original data and analysis code were provided (the “crisis of reproducibility”). This lack of reproducibility has been attributed to mundane errors, undisclosed choices made during data analysis, or even intentional data fabrication. Regardless of the causes, the inability of researchers to reproduce published results, whether due to inaccessible data or analytical routines, not only undermines peer and stakeholder trust in quantitative and / or empirical archaeological research but also impedes archaeologists from building upon previously existing knowledge.

Transparency in empirical analyses reaffirms our collective commitment to making the archaeological research enterprise a community effort dependent on the cumulative aggregation of shareable knowledge. Because successful reproducibility efforts at Advances will be acknowledged via a statement at the end of the manuscript, the statement also offers public recognition of the hard work that members of archaeological teams expend to make their results available. It also demonstrates to our research community the importance of accessible analytical routines and data for the archaeological (and scientific) process, as well as a commitment to quantitative research conducted with integrity. This aligns with the goal of making archaeological data FAIR (findable, accessible, interoperable, and reusable), so that our research community has the tools and resources to address large-scale questions that depend on a variety of datasets. Finally, it furthers the call for “open science” in archaeology by recognizing all of those individuals and institutions who are committed to it.

Process overview

Upon conditional article acceptance, authors may voluntarily choose to have code they have written in service of any number of figures or analyses reviewed by an Associate Editor of Reproducibility (AER).

The AER will attempt to reproduce these figures / analyses over a maximum of two working days. These working days are not necessarily consecutive, nor do they constitute a ‘turn-around’ time for a reproducibility review; instead they represent the total time an AER will commit to reproducing submitted analyses. It therefore behooves authors to organize their data and code in such a way as to facilitate readability and reproducibility (see best practices below).
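
One simple way to make a submission easy to review within this time budget is to keep the shared repository's layout predictable and to verify, before sending the link, that everything the AER will need is actually present. The sketch below is one way to run such a check; the file and folder names are illustrative assumptions only, not a layout required by Advances.

```python
# Minimal pre-submission check: confirm the artifacts the AER will need are
# present in the shared project. All paths below are hypothetical examples;
# substitute the files your own project actually uses.
from pathlib import Path

EXPECTED = [
    "README.md",                  # what the project is and how to run the analyses
    "data/raw_measurements.csv",  # raw data in an open format
    "analysis/figures.qmd",       # notebook or script that rebuilds each figure
    "data_dictionary.csv",        # definitions of short-hand variable names
]

def check_layout(root: str = ".") -> bool:
    """Print which expected artifacts are present; return True if all exist."""
    root_path = Path(root)
    all_present = True
    for rel in EXPECTED:
        present = (root_path / rel).exists()
        all_present = all_present and present
        status = "ok" if present else "MISSING"
        print(f"{status:7s} {rel}")
    return all_present

if __name__ == "__main__":
    check_layout()
```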

If the analyses / figures / etc. do not reproduce, the authors will be given time to make the necessary changes for successful reproduction with the support of the AER. However, there are circumstances under which reproduction may be unfeasible, either because the technical demands of the reproduction effort lie beyond what is accessible to an AER (e.g., high-power computing) or because of time limitations imposed by the nature of the research itself (e.g., complex ensemble models that require significant computation time). In these cases the authors will be notified and the manuscript will proceed without a reproducibility statement.

Articles that do successfully meet these requirements will be acknowledged via a reproducibility statement at the end of the manuscript. Note that by opting into this process, you consent to the AER creating and attaching such a reproducibility statement to your manuscript.

Example: “The code and materials for Figures 4 and 6, and the analyses presented in Table 2, prepared by O.E.B. and U.L.-G., were downloaded and reproduced successfully by the Associate Editor of Reproducibility, Lauren Olamina.”

Eligibility

All submitted manuscripts are eligible for “reproducibility review”. While there are specific requirements for formatting code and data, any paper that supplies them will be considered upon request. As with other journals, results reproducibility is envisaged as a continuum, rather than a specific point in the research process.

Some archaeological projects may use open source tools to create figures alone, while others may instead utilize high-throughput CPU computing power to generate geospatial models. Others may use machine learning methods drawing on seed-dependent stochastic processes. It is acknowledged that not every study will be able to be reproduced from raw data to results. Therefore, authors must indicate where along the continuum their reproducibility efforts lie.
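
For studies that depend on seed-dependent stochastic methods, reproducibility usually hinges on fixing and reporting the random seed. The sketch below illustrates the idea, using scikit-learn's k-means and simulated data as stand-ins (both assumptions for illustration) for whatever stochastic step a study actually involves.

```python
# Sketch of pinning a random seed so a stochastic analysis returns the same
# results on the AER's machine as on the authors'. The k-means step and the
# simulated measurements are placeholders for a study's real stochastic method.
import numpy as np
from sklearn.cluster import KMeans

SEED = 42  # report this value alongside the shared code

# Hypothetical measurements standing in for real archaeological data.
rng = np.random.default_rng(SEED)
measurements = rng.normal(size=(200, 3))

# Passing random_state makes the k-means initialization deterministic, so the
# cluster assignments (and any figure built from them) reproduce exactly.
model = KMeans(n_clusters=4, n_init=10, random_state=SEED).fit(measurements)
print(model.labels_[:10])
```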

All authors are encouraged to submit their reproducible research for review, although it must be emphasized that not all steps of the analytical process may be re-created by the AER owing to technological and time constraints. As such, the AER will work with authors to determine which components of their submitted reproducibility ‘artifacts’ fall under the remit of review.

Data sharing

It is strongly recommended that authors who wish to engage in reproducibility review utilize an established and secure online scientific data repository to exchange both code and data. These repositories generate stable, persistent identifiers (such as a DOI) upon project creation that can nonetheless be kept private until the time of publication.

Examples include:

  1. Zenodo
  2. Open Science Framework (OSF)
  3. Figshare
  4. Dataverse
  5. tDAR
  6. Archaeology Data Service
  7. Institutional resources (e.g., university computing services or library)
  8. Specialized repositories, such as for isotope, aDNA, proteomics data, etc.

For a detailed post on where to share your data, please visit the following link.

Unfortunately, GitHub does not allow a private repository to be shared except by adding an individual to the project as a collaborator. If your analyses and data are stored on GitHub, it may be necessary to add the AER as a collaborator.

In the event that GitHub is used, it is asked that a persistent DOI be generated for the repository using Zenodo. Instructions are here.

Step-by-step

  1. After conditional acceptance, the author or authors will submit their request for reproducibility review by contacting the handling editor. The subject line should read: “reproducibility review for Advances MS#”.

  2. In the body of that message or as a separate document, authors must identify which specific figures and / or analyses will be reproduced by the AER. These figures / analyses can be present in the main text of the manuscript, any number of supplements, or a combination thereof.

Example: “The authors submit for reproduction Figures 1, 4, and 6, as well as the hierarchical cluster analysis presented in subsection 4 of the ‘Results’.”

  3. Authors will also include the names of those individuals involved in the code write-up and / or data preparation.

Please indicate if the data and analyses cannot be shared for ethical or other cultural-sensitivity reasons. Such manuscripts are still eligible for review.

  4. Authors will then send their code and data to the AER via an e-mailed link using the repository options outlined above. All of the relevant data and code needed for reproducibility (along with the specification document) must be available.

Note: All shared data and analyses follow standard confidentiality agreements for journal submissions.

  5. The AER will attempt to reproduce the figures / analyses over a sum total of two working days. In that time, the AER will assemble a brief document with an overview of the reproducibility attempt. That overview document is then sent to the Editor for consultation.

  6. After consultation with the Editor(s), this overview document is sent by the AER to the relevant manuscript authors. The document will contain basic details about the code / data; whether the AER was able to reproduce the selected figures / analyses and, if not, why; and further suggestions for reproducibility.

Inability to reproduce:

In the event the figures / analyses cannot be reproduced, the authors can choose to work with the support of the AER, if desired, to identify and rectify any issues.

  7. Upon successful reproduction, a reproducibility statement along the lines described above will be added to the manuscript.

Requirements

  • The language in which the code is written must be free and open source (FOSS) to be eligible for review. For example, Python, R, and Julia are acceptable, but SPSS, JMP, MATLAB, and other proprietary software are not.
  • The raw data must be in an open-source format and included with the reproducibility submission. Comma-separated values (CSV) files are preferred, but other kinds of open formats are acceptable (e.g., .db, .obj, etc.).
  • The use of a literate format (Quarto, R Markdown, etc.) or a Jupyter Notebook for code sharing is highly recommended. For more complex projects, it may also be necessary to supply a “data dictionary” that provides a full description of short-hand computational variables (e.g., “cal_cred_int” = “Credibility intervals for calibrated radiocarbon dates using IntCal20”) and of the relationships among variables if they are part of a relational database (see this Open Context PDF for some best practices); a minimal sketch follows this list.
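
As a concrete illustration, a data dictionary can itself be a small CSV file shared alongside the data. In the sketch below, the “cal_cred_int” entry comes from the example above, while the other variable names, units, and relationships are hypothetical placeholders.

```python
# Sketch of writing a data dictionary as a plain CSV, so short-hand column
# names in the shared dataset are documented in an open format. Only
# "cal_cred_int" comes from the journal's example above; the other entries
# are hypothetical placeholders.
import csv

data_dictionary = [
    {"variable": "cal_cred_int",
     "description": "Credibility intervals for calibrated radiocarbon dates using IntCal20",
     "units": "cal BP",
     "related_to": "sample_id"},
    {"variable": "sample_id",
     "description": "Unique identifier linking each date to its excavation context",
     "units": "",
     "related_to": "contexts.context_id"},
]

with open("data_dictionary.csv", "w", newline="", encoding="utf-8") as fh:
    writer = csv.DictWriter(fh, fieldnames=["variable", "description", "units", "related_to"])
    writer.writeheader()
    writer.writerows(data_dictionary)
```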

Resources

The following is an incomplete but growing list of resources.

References

Farahani, Alan. 2024. “Reproducibility and Archaeological Practice in the Journal of Field Archaeology." Journal of Field Archaeology 49 (6): 391–94.
Grüning, Björn, John Chilton, Johannes Köster, Ryan Dale, Nicola Soranzo, Marius van den Beek, Jeremy Goecks, Rolf Backofen, Anton Nekrutenko, and James Taylor. 2018. “Practical Computational Reproducibility in the Life Sciences." Cell Systems 6 (6): 631–35.
Marwick, Ben. 2017. “Computational Reproducibility in Archaeological Research: Basic Principles and a Case Study of Their Implementation." Journal of Archaeological Method and Theory 24 (2): 424–50.
Marwick, Ben, Li-Ying Wang, Ryan Robinson, and Hope Loiselle. 2020. “How to Use Replication Assignments for Teaching Integrity in Empirical Archaeology." Advances in Archaeological Practice 8 (1): 78–86.
Piccolo, Stephen R, and Michael B Frampton. 2016. “Tools and Techniques for Computational Reproducibility." GigaScience 5 (1): s13742–016.
Sandve, Geir Kjetil, Anton Nekrutenko, James Taylor, and Eivind Hovig. 2013. “Ten Simple Rules for Reproducible Computational Research." PLoS Computational Biology 9 (10): e1003285.
Stodden, Victoria, Marcia McNutt, David H Bailey, Ewa Deelman, Yolanda Gil, Brooks Hanson, Michael A Heroux, John PA Ioannidis, and Michela Taufer. 2016. “Enhancing Reproducibility for Computational Methods.” Science 354 (6317): 1240–41.