
Open Science and Epistemic Diversity: Friends or Foes?

Published online by Cambridge University Press: 25 May 2022

Sabina Leonelli*
Affiliation:
University of Exeter, Exeter, United Kingdom; Wissenschaftskolleg zu Berlin, Berlin, Germany

Abstract

I argue that Open Science as currently conceptualized and implemented does not take sufficient account of epistemic diversity within research. I use three case studies to exemplify how Open Science threatens to privilege some forms of inquiry over others, thus exacerbating divides within and across systems of practice, and overlooking important sources and forms of epistemic diversity. Building on insights from pluralist philosophy, I then identify four aspects of diverse research practices that should serve as reference points for debates around Open Science: (1) specificity to local conditions, (2) entrenchment within repertoires, (3) permeability to newcomers, and (4) demarcation strategies.

Type
Symposia Paper
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of the Philosophy of Science Association

“The empirical question is how belief, commitment, or theory and hypothesis acceptance are stabilised in the face of openness of inquiry. The normative question is how they are stabilised in a nonarbitrary way that has probative value.” Longino (2003, 205)

1. Introduction

The potential of Open Science (OS) to enhance research quality, integrity, and societal impact has been widely discussed within academic and policy circles over the last two decades, and has been underscored by the rapid development of COVID-19 treatments and vaccines—an extraordinary scientific achievement that was arguably only possible through the immediate sharing of results globally. The effectiveness of disseminating results promptly, sometimes even before having them formally published—thereby speeding up research—has been extolled by scientific and popular media alike, most evidently in relation to the prompt dissemination of genetic sequencing data from various strains of the SARS-CoV-2 virus (an exemplary instance of “Open Data”), and the decision by publishing companies to temporarily release all coronavirus-related papers without charges (“Open Access”).[1] As the United Nations joins the chorus of OS supporters with its 2021 recommendation to implement OS worldwide, the OS movement looks well-positioned to determine the future of post-pandemic research and related policies.

What this may mean on the ground is less clear, given the many competing views on what OS practices may involve (Fecher and Friesike 2014; Levin et al. 2016). In what follows, I focus on OS within academic research and particularly the vision of OS proposed by the European Commission (2015), which defined it as “a new approach to the scientific process based on cooperative work and new ways of diffusing knowledge by using digital technologies and new collaborative tools.” In this view, OS constitutes a positive development for the scientific landscape, and its implementation automatically improves the outputs of research as well as researchers’ working conditions. OS is presented as tightly linked to the digital transformation of research, and thus as a novel phenomenon that is dependent on the availability of information and communication technologies.[2] OS is also presented as a global phenomenon focused on providing unlimited access to any research element, at any point of an investigation, to anybody with an interest in research, no matter where they are based and whether or not they are professional researchers. OS is thereby taken to facilitate equity and inclusivity in research production and consumption, by making previously inaccessible resources available to anybody interested in participating in research. An underpinning belief for this vision is that principles such as collaboration, transparency, reproducibility, and openness are constitutive of good scientific practice, regardless of the specific conditions under which research takes place (Burgelman et al. 2019). Despite vast efforts to implement OS thus defined, it remains unclear how this vision relates to the widespread diversity in epistemic practices and cultures characterizing research, as richly documented by historians, sociologists, and philosophers of science (e.g., Knorr-Cetina 1999; Kellert et al. 2006). Most OS policies support the adoption of common metrics, standards, and platforms, seen to facilitate communication among researchers and foster the effective sharing of data, models, software, ideas, materials, methods, and whatever else happens to be produced in the course of an investigation (United Nations 2021). It is sometimes acknowledged that “one size does not fit all” (Open Science Policy Platform 2018), and there have been substantive efforts to develop “intelligent” approaches to openness and understand what OS involves within different research contexts (e.g., Boulton et al. 2012 and long-standing work by the Research Data Alliance, CODATA, and related organizations). Nevertheless, national and international policies tend to implement OS guidelines, tools, and principles in a top-down manner and across domains, with some attention paid to disciplinary cultures but no fine-grained consideration of the diverse capacities, motivations, and methods characterizing different epistemic communities.

By contrast, this paper argues that understanding the epistemic roles played by different forms of diversity within research, and particularly differences that go beyond the well-recognized and institutionalized boundaries between disciplines, is crucial to the implementation and conceptualization of Open Science—and that philosophy of science can contribute foundational insights toward developing such a framework. Building on consideration of three empirical examples, the first part of this paper illustrates how an overly standardized and generalized conceptualization of OS and its implementation threatens to privilege some forms of inquiry over others, thus exacerbating divides within and across systems of research practice. I highlight the complex interrelations among multiple sources of diversity of relevance to OS, and the difficulties in adequately addressing such interrelations through universal policy frameworks or appeals to disciplinary differences. Taking inspiration from philosophical literature on scientific pluralism, the second part of the paper proposes an alternative foundation for debates around OS, which focuses on the following characteristics of research: (1) specificity to local conditions and targets; (2) degree of entrenchment within existing repertoires; (3) degree of permeability to epistemically relevant newcomers; and (4) demarcation strategies to determine whether results can be reliably regarded as scientific contributions, and who is involved in such decisions. I conclude that unless OS advocates embrace a more sophisticated understanding of epistemic diversity grounded in these insights, OS policies risk acting as a reactionary force that reinforces conservatism and inequity in research.

2. Openness in Scientific Practice

I start by briefly considering three scientific debates that exemplify both advantages and challenges of promoting openness across a wide range of research practices and locations.

Case 1. Open versus equitable data access

The first example concerns the Global Initiative on Sharing All Influenza Data (GISAID), a platform for transnational data sharing that rose to international prominence during the COVID-19 pandemic. GISAID was launched in 2008 to share data of relevance to those studying influenza and was swiftly modified in 2020 to include data on SARS-CoV-2. Unlike other initiatives in molecular biology, which have placed a premium on sharing genetic sequences without constraints (Maxson Jones et al. 2018; Strasser 2019), GISAID requires users to sign an agreement, which mandates crediting the original data producers and constrains linkage with other datasets. These requirements stem from the recognition that some researchers—often working in low-resourced environments and/or less visible research locations—are reluctant to share data for fear of better-equipped researchers building on such work without due acknowledgment. Such fears are well-justified. Re-using data available online requires reliable connectivity and computing resources, as well as the adoption of standards typically built to match the theoretical perspectives of laboratories in the Global North. Hence, researchers based in low-resourced environments cannot easily take advantage of Open Data, no matter how innovative and rigorous their work may be, and remain reluctant to contribute their own data to online collections (Bezuidenhout et al. 2017).

GISAID was built on the recognition of entrenched differences in power, resources, and visibility among research groups, and its data governance actively tries to counter postcolonial asymmetries between researchers based in the Global South and the Global North. Having a formal agreement and credit structure in place has proved effective in fostering trust and information exchanges among groups that differ in their geo-political locations, funding levels, material resources, and social characteristics. And yet, due to the limitations on data mining and linkage stemming from the user agreement, GISAID has come under a barrage of attacks for “not being open enough” and posing “barriers that restrain effective data sharing.”[3]

The debate over GISAID and its governance structure exemplifies how efforts to abide by the principle of openness, particularly when it comes to Open Data, can clash with responsible research measures geared towards protecting researchers whose work is unrecognized and/or discriminated against. In this case as in many others, providing trustworthy and explicitly non-exploitative conditions for data sharing helps to widen participation in data sharing, which in turn expands the evidence base for subsequent discoveries (Krige and Leonelli 2021). It can also prevent the circulation of low-quality data (Leonelli 2018a), the expansion of digital divides (Bezuidenhout et al. 2017), and socially harmful research (Elliott and Resnik 2019). In choosing to sidestep these issues, researchers calling for “fully open” data are discounting the significance of socio-cultural factors (such as the geo-political location and characteristics of researchers), institutional issues (such as power dynamics among research sites and expectations around intellectual property), and infrastructural resources (such as the availability of funding and dependable connectivity) for data re-use. Paradoxically, this may result in excluding some researchers from data-sharing initiatives, which reduces the diversity and range of data available online, thereby constraining the evidence base for evaluating existing knowledge claims and developing new ones.

Case 2. Free versus proprietary research software

Already in 2011, Harding noted that “gone are the days when it could appear uncontroversial to assume that Western sciences are or have ever been autonomous from society, value-free and maximally objective, or that their standard for rationality is universally valid” (2011). This diagnosis rings ever truer given the rise of China and India as scientific superpowers and the vertiginous growth of the scientific workforce in other non-Western countries. And yet, regimes of assessment, credit, and quality control set up by rich Western institutions continue to dominate academic rankings and evaluation. It should therefore come as no surprise that the understanding and implementation of OS is largely grounded in ideas forged within privileged research locations in the Global North. Another case in point concerns the extent to which research quality is assumed to depend on access to specific technologies, and the repercussions that such assumptions have on research evaluation.

As an example, consider the importance assigned by OS to open-source research software, which can be accessed and modified freely, posing no barriers to its adoption—contrary to the expensive subscriptions required by proprietary software. This requirement may seem uncontroversial until one considers how researchers in low-resourced environments select and use software. The Global Young Academy carried out a survey among researchers in Bangladesh and Tanzania, which demonstrated a preference for using expensive proprietary software (Vermeir et al. 2018). This preference persisted even in cases where equivalent open-source alternatives were available and obtaining funds to pay for proprietary tools was difficult, if not impossible. The reason for this preference was the perceived stigma attached to using open software. Participants in the study reported that using open software was perceived by editors and referees of international journals as a mark of low-quality research, particularly when coming from research locations with little international reputation. Using well-recognized proprietary software such as MATLAB and Mathematica, by contrast, was seen to align with international expectations around appropriate methodology, thus facilitating publication in Anglo-American journals. Similar arguments have been made around quality assessment for datasets, which is often seen to depend on the technology used to produce the data—with the latest models of genome sequencers, for instance, taken to generate better data than earlier and now cheaper models (Leonelli 2018a).

In light of such findings, OS tools look effective only within specific types of well-resourced research environments, to the exclusion of others. The preference for specific technologies turns out to depend on factors other than the suitability of those tools to the scientific tasks at hand. Such factors may be infrastructural, such as the availability (or lack thereof) of appropriate training and support for adopting a given technology; institutional, including the structure of scientific publishing and the power exercised by referees and editors; and socio-cultural, like the reputational hierarchies characterizing each field and the assumption that rich labs should act as role models for other research sites. These factors affect the type of research being conducted, with researchers refusing to explore potentially useful tools due to the perceived stigma attached to their use. They also inform collaborative strategies, as researchers who lack access to resources perceived as essential for international publishing often choose to partner with richer institutions that may provide such access. While open-source software is recognized as valuable in theory, its use in practice clashes with existing—and sometimes conflicting—assumptions about what counts as reliable science, and who gets to decide.

Case 3. Reproducibility in action

To further underscore the methodological and conceptual implications of researchers’ adoption (or lack thereof) of OS practices, and the key role played by social and infrastructural factors in shaping that process, I now briefly consider the raging debate around the principle of reproducibility. Reproducibility, understood as the ability to replicate a given piece of research in ways that yield consistent results, thereby corroborating their validity, is often presented as a pillar of OS, whose key function is to help decide which research results are credible and which are not (Burgelman et al. 2019). Reproducibility therefore acts as a criterion to distinguish science from pseudoscience, with noncompliant research being viewed as unreliable at best and arbitrary at worst. Highly controlled and standardized experiments with pre-specified goals, such as randomized clinical trials or gene knockout experiments on model organisms, constitute well-recognized instances of reproducible research and are held up as a model for good research practice more generally. Notably, the data produced through such controlled settings are also among the easiest to share and re-use, given that they are often obtained in digital form and accompanied by consistent metadata (Leonelli 2018b).[4] Where does this leave research settings where controls and standards are not as well-developed, and where in fact the nature of the phenomena being studied demands few if any such controls and standards?

Philosophers have noted how reproducibility takes different forms and meanings depending on the specific cluster of methods, settings, data types, targets, conceptual assumptions, and goals that turns out to be of relevance to each scientific project (Radder 1996; Romero 2019; Güttinger 2020). Even within the same discipline, there can be dramatic differences in the significance ascribed to replicating a computer simulation, where control over research settings is so high, given their artificial nature, that both procedures and results are expected to be fully reproducible; field-based observations, where there is little control over research settings, and what is reproduced are observational skills rather than the results themselves; or data obtained by exploratory experiments, where conditions vary during and across experiments, generating variations that are often the starting point for new investigations (Leonelli 2018b; Feest 2019). Taking highly controlled experiments as models for best practice in reproducible research, against which other forms of research are evaluated and results are demarcated as more or less credible, can therefore be damaging. Just like attempts to enforce openness regardless of context, efforts to support reproducibility as a demarcation strategy across all scientific domains sidestep the variability in the methods developed to suit specific goals, concepts, and target objects—as well as the broader conditions for research work.

3. Sources of Epistemic Diversity

Even in their simplified form, these examples point to the variety of elements involved in the implementation of OS principles and tools, and the different ways in which such elements can be clustered and aligned within specific situations of scientific inquiry. Each example highlights respects in which research situations may differ from one another, which are relevant to the implementation and conceptualization of OS, as listed in Figure 1.

Figure 1. Sources of epistemic diversity of relevance to Open Science.

The relevance of these components to knowledge production, and the variability among them, have been amply discussed by scholars of science including philosophers, historians, and sociologists. I here wish to underscore two aspects. One is the profound significance that different alignments of infrastructural, methodological, conceptual, and institutional sources of diversity can have for the implementation and understanding of OS. This is worth emphasizing since some of these factors are not usually regarded by philosophers as having epistemic import, and it is sometimes assumed that conceptual or methodological issues can be evaluated in isolation from social or material elements such as institutional settings and infrastructure. And yet, the examples above highlight the role played by infrastructural, institutional, and socio-cultural factors in determining not only the conditions of possibility for research, but also the criteria used to evaluate its procedures and results. In that respect, the elements in Figure 1 function as sources of epistemic diversity, defined as the condition or fact of being different or varied, which affects the development and/or understanding of knowledge.[5]

Another aspect is the capillary, situated nature of epistemic diversity, as evidenced by the variation encountered even within the same disciplinary spaces. Appealing to disciplines to mark differences in methods and traditions does not capture the high variability displayed by research performed at different locations, with different perceptions of what constitutes best practice and who is responsible for adjudicating it. Most importantly, it does not capture the diverse ways in which researchers may pick and align specific elements to confront a given situation of inquiry. This is especially notable given the global increase in interdisciplinary and transdisciplinary efforts aiming to confront systemic challenges like climate change, pandemics, and population growth: Within such projects, understanding OS involves a finer-grained analysis of differences among research groups than that offered by the broad categories of “discipline,” “domain,” or even “field.” To this aim, I use Chang’s idea of a system of practice, which denotes any “coherent set of epistemic activities performed with a view to achieve certain aims” (2013, 16).

4. Systems of Practice: What Matters for OS?

Philosophical debates on scientific pluralism have provided a wealth of insights on what characterizes “the condition or fact of being different or varied” among systems of practice and what makes such differences epistemically salient (“affecting the development and/or understanding of knowledge”). I now discuss four such insights that are particularly helpful for conceptualizing and implementing OS in ways that valorize, rather than undermine, epistemic diversity.

First, as evident from Chang’s definition, systems of practice differ in their specificity to local conditions, targets, and goals. At their best and following iterative refinement over time, research strategies and tools are exquisitely tailored to suit the characteristics of the phenomena under investigation: hence methods, theories, and models differ depending on their suitability to target objects (Mitchell 2003) and the availability of materials exemplifying those targets (Chapman and Wylie 2016). Moreover, research agendas and strategies are adaptive (MacLeod and Nersessian 2013) and responsive to the inherent instability of targets (Feest 2017). Rather than assuming stable targets, generalizable approaches, and standard methods as desirable for good research practice, OS needs to value the system-specific and dynamic features of local research settings and results.

Over time, some systems of practice acquire a reputation for being more reliable, easier to mobilize, and/or more productive than others (whether due to their effectiveness in achieving goals, the robustness of their methods, their suitability to existing policies and institutional settings, and/or other factors). Such systems of practice, including the relevant clustering of values, beliefs, institutions, methods, and goals associated with the study of given phenomena, thereby become entrenched as “gold standards” for research concerning those phenomena. In some cases, this results in a system of practice becoming institutionalized as a research field in and of itself. In others, systems of practice become what Ankeny and Leonelli (2016) call repertoires: ways of doing science that do not align with disciplinary boundaries, yet exercise a strong influence as blueprints that can be easily adopted and are implicitly recognized as effective and reliable. The use of randomized controlled trials, and related understandings of reproducibility, is a case in point (Cartwright 2007); model organism research, and its associated ethos of data sharing, is another (Ankeny and Leonelli 2016).

In principle, good research can be construed as involving the freedom to consider which system of practice may be best suited to investigating given questions and targets (whether such a system already exists or needs to be developed). In practice, there are strong incentives to redeploy existing repertoires, not least because such mature systems of practice tend to have a standardized structure (including well-developed OS infrastructures) and require less work than the creation of a new system. The extent to which standards for making or evaluating research are embedded in a wider repertoire is highly relevant to OS. Indeed, the second insight I want to underscore is that systems of practice differ in the degree to which they are entrenched within existing repertoires, and thus the degree to which researchers are free to select and develop systems of practice that are specific to their target objects.

The specificity and entrenchment of systems of practice, when considered together, present a problem for OS. The standardization and redeployment of existing resources, including data and software, is a central aim of OS; but it is also a key avenue by which systems of practice lose specificity and epistemic diversity. Indeed, researchers working with a system that is highly entrenched within existing repertoires may not value—or even consider—elements that are not already part of that repertoire. In other words, systems of practice differ in their permeability to epistemically relevant newcomers (whether these be ideas, methods, people, technologies, or research sites), thereby challenging the appeal to openness and inclusivity that underpins OS policies.

Pluralist philosophers have long pointed to the danger that excessive conservatism and exclusionary logics present to the robustness of knowledge claims. For instance, Longino highlighted the conditions under which results are publicly scrutinized as critical to the quality of that scrutiny: “not only must potentially dissenting voices not be discounted; they must be cultivated” (2003, 132). Exclusions based on social conventions embedded in specific institutions, such as the above-mentioned perception of open software as less likely to be favorably reviewed, are particularly damaging since they lack a scientific rationale and yet have powerful epistemic implications. OS practices need to explicitly and actively challenge the dominance of long-standing repertoires and encourage inclusivity, even where this complicates attempts to develop common standards and infrastructures.

This brings me to my fourth and final point, which concerns the demarcation strategies used within any one system of practice to determine whether results can be reliably regarded as scientific contributions, and who should be involved in such decisions. Whether such demarcation strategies are implicitly assumed or explicitly discussed, their development and adoption by researchers are arguably an unavoidable part of creating and maintaining a system of practice in the first place. By setting criteria for what constitutes proper science and what does not, and which forms of expertise are deemed to be relevant, demarcation strategies provide the glue that brings and keeps epistemic activities together—what makes systems of practice coherent, in Chang’s terminology. This was famously recognized by both Popper and Kuhn, though Kuhn’s paradigms failed to capture the fine-grained nature of such decisions and Popper dismissed the normative relevance of factors other than the conceptual and methodological. What this teaches OS is to exercise caution with indiscriminate appeals to general principles or procedures, and to ensure that debates around OS implementation within any one system of practice include explicit and regular consideration of existing demarcation strategies.

5. Conclusion: Ways Forward for a Pluralistic OS

I have argued that a key challenge for OS is to productively manage the clash between the different interpretations and operationalizations of openness that emerge from diverse systems of practice. Far from constituting an obstacle to the implementation of OS, consideration of epistemic diversity can help realize the aspirations of the OS movement, while at the same time providing an opportunity to avoid capture by dominant repertoires and defy inequitable and overly conservative approaches to research (Mirowski 2018). Conceptually, this means moving away from defining OS through appeals to general principles and context-independent ideals of best practice, focusing instead on a procedural approach to OS as the ensemble of practices that facilitate critical scrutiny and re-use of research components and results, including the demarcation criteria used by researchers to adjudicate who constitutes a relevant beneficiary of and/or contributor to research (Levin and Leonelli 2017). Methodologically, this involves improving researchers’ capacity to choose and develop systems of practice (and related demarcation strategies) that are well-suited to their targets and goals—which in turn requires transdisciplinary exchanges that help identify appropriate and responsible modes of OS implementation, as well as provisions for situations where digital tools are not immediately available or even relevant. Institutionally, OS needs to challenge existing systems for academic reward and publishing, and foster venues and tools to critically evaluate whether and how dominant repertoires serve existing and future epistemic goals. This is a fraught requirement, especially in relation to commercially sensitive research, where scrutiny can be highly restricted and results cannot be independently verified. Last but not least, the transition to OS provides philosophers with an opportunity to further unpack the situatedness of knowledge production and the ecological understandings of scientific reasoning long advocated by feminist epistemologists (Code 2006). The push to implement OS demands a systematic account of the implications of epistemic diversity in its various forms, as well as the role played by all components of research—including overlooked ones such as data, samples, and software—toward probing the credibility of scientific results.

Acknowledgments

This research received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreements No. 335925 and No. 101001145). This paper reflects only the author’s view, and the commission/agency is not responsible for any use that may be made of the information it contains. I am grateful for the fantastic research support and impeccable hospitality of the Wissenschaftskolleg zu Berlin, without which I would not have managed to complete this manuscript in time.

Footnotes

1. I examined these developments in Leonelli (2021) and Krige and Leonelli (2021).

2. The emphasis on the novelty of OS runs counter to historical evidence of long-standing collaborative and sharing practices (e.g., Strasser 2019).

3. Open Letter (2021), subsequently reported in Nature.

4. Whether and how data are actually shared, given existing property regimes, depends on the specific case.

5. Definition adapted from the Cambridge English Dictionary.

References

Ankeny, Rachel A., and Leonelli, Sabina. 2016. “Repertoires: A Post-Kuhnian Perspective on Scientific Change and Collaborative Research.” Studies in History and Philosophy of Science Part A 60:18–28. https://doi.org/10.1016/j.shpsa.2016.08.003
Bezuidenhout, Louise, Leonelli, Sabina, Kelly, Ann, and Rappert, Brian. 2017. “Beyond the Digital Divide: Towards a Situated Approach to Open Data.” Science and Public Policy 44 (4):464–75.
Boulton, Geoffrey, et al. 2012. Science as an Open Enterprise. Report 02/12. London: The Royal Society Science Policy Centre.
Burgelman, Jean-Claude, et al. 2019. “Open Science, Open Data, and Open Scholarship: European Policies to Make Science Fit for the Twenty-First Century.” Frontiers in Big Data 2:43. https://doi.org/10.3389/fdata.2019.00043
Cartwright, Nancy. 2007. “Are RCTs the Gold Standard?” BioSocieties 2 (1):11–20.
Chang, Hasok. 2013. Is Water H2O? Evidence, Realism and Pluralism. Dordrecht: Springer.
Chapman, Robert, and Wylie, Alison. 2016. Evidential Reasoning in Archaeology. New York: Bloomsbury.
Code, Lorraine. 2006. Ecological Thinking: The Politics of Epistemic Location. New York: Oxford University Press.
Elliott, Kevin, and Resnik, David B. 2019. “Making Open Science Work for Science and Society.” Environmental Health Perspectives 127:075002.
European Commission. 2015. Open Innovation, Open Science, Open to the World. Luxembourg: Publications Office of the European Union.
Fecher, Benedikt, and Friesike, Sascha. 2014. “Open Science: One Term, Five Schools of Thought.” In Opening Science: The Evolving Guide on How the Internet is Changing Research, Collaboration and Scholarly Publishing, edited by Bartling, Sönke and Friesike, Sascha, 17–47. New York: Springer.
Feest, Uljana. 2017. “Phenomena and Objects of Research in the Cognitive and Behavioral Sciences.” Philosophy of Science 84 (5):1165–76.
Feest, Uljana. 2019. “Why Replication Is Overrated.” Philosophy of Science 86 (5):895–905.
Güttinger, Stephan. 2020. “The Limits of Replicability.” European Journal for Philosophy of Science 10:10. https://doi.org/10.1007/s13194-019-0269-1
Kellert, Stephen H., Longino, Helen E., and Waters, C. Kenneth, eds. 2006. Scientific Pluralism. Minneapolis, MN: University of Minnesota Press.
Knorr-Cetina, Karin. 1999. Epistemic Cultures: How the Sciences Make Knowledge. Cambridge, MA: Harvard University Press.
Krige, John, and Leonelli, Sabina. 2021. “Mobilizing the Translational History of Knowledge Flows: COVID-19 and the Politics of Knowledge at the Borders.” History and Technology 37 (1):125–46.
Leonelli, Sabina. 2018a. “Global Data Quality Assessment and the Situated Nature of ‘Best’ Research Practices in Biology.” Data Science 16 (32):1–11.
Leonelli, Sabina. 2018b. “Re-Thinking Reproducibility as a Criterion for Research Quality.” Research in the History of Economic Thought and Methodology 36B:129–46. https://doi.org/10.1108/S0743-41542018000036B009
Leonelli, Sabina. 2021. “Data Science in Times of Pan(dem)ic.” Harvard Data Science Review 3 (1).
Levin, Nadine, and Leonelli, Sabina. 2017. “How Does One ‘Open’ Science? Questions of Value in Biological Research.” Science, Technology, & Human Values 42 (2):280–305.
Levin, Nadine, Leonelli, Sabina, Weckowska, Dana, Castle, David, and Dupré, John. 2016. “How Do Scientists Understand Openness? Exploring the Relationship between Open Science Policies and Research Practice.” Bulletin of Science, Technology & Society 36 (2):128–41.
Longino, Helen E. 2003. The Fate of Knowledge. Princeton, NJ: Princeton University Press.
MacLeod, Miles, and Nersessian, Nancy. 2013. “The Creative Industry of Integrative Systems Biology.” Mind & Society 12:35–48.
Maxson Jones, Katherine, Ankeny, Rachel A., and Cook-Deegan, Robert. 2018. “The Bermuda Triangle: The Pragmatics, Policies, and Principles for Data Sharing in the History of the Human Genome Project.” Journal of the History of Biology 51 (4):693–805.
Mirowski, Philip. 2018. “The Future of (Open) Science.” Social Studies of Science 48 (2):171–203.
Mitchell, Sandra. 2003. Biological Complexity and Integrative Pluralism. New York: Cambridge University Press.
Open Letter. 2021. Support Data Sharing for COVID-19. https://www.covid19dataportal.org/support-data-sharing-covid19
Open Science Policy Platform. 2018. OSPP-REC: Recommendations of the Open Science Policy Platform.
Radder, Hans. 1996. In and About the World: Philosophical Studies of Science and Technology. Albany, NY: State University of New York Press.
Romero, Felipe. 2019. “Philosophy of Science and the Replicability Crisis.” Philosophy Compass 14:e12633.
Strasser, Bruno. 2019. Collecting Experiments: Making Big Data Biology. Chicago, IL: University of Chicago Press.
United Nations. 2021. UNESCO Recommendation on Open Science. Paris: UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000379949.locale=en
Vermeir, Koen, et al. 2018. Global Access to Research Software: The Forgotten Pillar of Open Science Implementation. A Global Young Academy Report. Halle: Global Young Academy. https://globalyoungacademy.net/activities/global-access-to-research-software/