
Structuring Institutions for Responsible and Accountable Science

Published online by Cambridge University Press:  04 October 2023

Heather Douglas*
Affiliation:
Michigan State University, East Lansing, Michigan, USA

Abstract

Oversight institutions that hold science and scientists accountable for responsible science have thus far not focused on the general societal impact of science. Responsibility for societal impact is now a pervasive aspect of scientific practice, but accountability remains elusive. I argue here that we should proceed cautiously, and that only clear and precise floors for responsibility should have accountability mechanisms. For the remainder of societal responsibilities in science, we should institutionalize assistive ethical mechanisms, which help scientists meet their responsibilities and share rather than offload ethical labor.

Type
Symposia Paper
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided that no alterations are made and the original article is properly cited. The written permission of Cambridge University Press must be obtained prior to any commercial use and/or adaptation of the article.
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of the Philosophy of Science Association

1. Introduction

Societal responsibility, or responsibility for the impact of one’s work on society, is now a pervasive feature of scientific practice (AAAS 2017; International Science Council 2021). Unlike in the twentieth century, when scientific freedom was defined in part by freedom from societal responsibilities, particularly for those pursuing basic research, the twenty-first century has seen a major shift in the understanding of responsibility in science (Douglas 2021). Whether because scientists have the same general responsibilities we all have or because professional scientific organizations have increasingly emphasized societal responsibilities as part of the role of scientist, there is no responsibility-free space in science anymore. Unfortunately, the institutional structures for fostering, supporting, and enforcing responsible practice in science were crafted in the twentieth century and remain tied to those earlier views of responsibility.

Here I describe this mismatch and offer guidance on how to fix it. I will develop important considerations for any new institutions centered on responsibility in science, namely that we need to recognize that (1) accountability and responsibility are not the same thing and (2) many responsibilities should not have accountability mechanisms tied to them. I will start with the difference between responsibility and accountability, and then present reasons why some responsibilities should not have accountability mechanisms (i.e., mechanisms for compliance and enforcement). Then I will describe how we might support and foster those responsibilities that should elude accountability, and, thus, how we might nurture a more responsible culture in science generally. We need such a culture shift because of the recently changed terms of responsibility; many scientists were trained under a different culture. Finally, I will conclude with a brief assessment of what this means for the division and sharing of ethical labor in science.

2. Responsibility and accountability

Although responsibility and accountability are sometimes used interchangeably, we should recognize that they are not the same thing (Bivins 2006; Leonelli 2016). Responsibility concerns moral duties and obligations, and includes both floors (minimum standards) and ideals (which we reach for but rarely attain). In the pursuit of science, scientists have moral responsibilities to science (e.g., responsibilities regarding data and inference), to colleagues and students (e.g., mentoring and sharing responsibilities), and to the broader society, with floors and ideals for each of these bases for responsibility (Douglas 2014). Responsibility centers on what one intends, but also includes what is not intended yet is nevertheless foreseeable (Douglas 2003). Responsibility is a pervasive aspect of human practice, and science is no exception.

Accountability, however, concerns that for which one is asked to give an account or for which one can be held to account (Bivins 2006). In science, accountability is usually built around responsibility floors (rather than ideals). One is held to account (and often asked to give an account) when one has not met one’s minimum responsibilities in a particular area. Accountability can also occur, however, when one is not properly morally responsible but is in a position, or role, where one is still held to account for failures. This happens, for example, when leaders do not know about failures occurring within their organization, but, when those failures come to light, the leaders are among those held accountable, even if we can find no moral failing on their part.

Because of the pervasive view coming out of World War II that scientists pursuing basic science did not have societal responsibilities (Bridgman 1947; Douglas 2021), institutional structures for the support and enforcement of responsible science were built piecemeal and in response to egregious moral failures. It was the revelation of grievous abuse of human subjects that built the Institutional Review Board (IRB) structures, abuse of animal subjects that built the Institutional Animal Care and Use Committee (IACUC) structures, and clear cases of fraud that built the Research Integrity Office (RIO) structures (Horner and Minifie 2011).

Through the lens of accountability and responsibility, we can see that the so-called responsible conduct of research (RCR) institutional infrastructure is mostly an accountability infrastructure. These are structures to keep scientists from falling below a minimum floor of acceptable practice, or to sanction scientists when they do. The combination of a baseline belief (incorrect but pervasive) that societal responsibilities are not intrinsic to science and the fact that RCR is constructed around compliance explains why RCR feels to many scientists like an external imposition that they must deal with and that impedes their scientific work (Pennock and O’Rourke 2017). Such structures do not inculcate a full sense of responsibility for the practices of science or for the societal impact of one’s work.

That does not mean that such structures are not important. Accountability for clear responsibility floors can be essential to maintaining minimum standards of practice. But it does mean that we should not turn to such accountability-focused institutions for meeting the full range of responsibilities in science. One key lacuna concerns responsibility ideals. Because responsibilities also include consideration of ideals, accountability mechanisms geared to floors cannot grapple with or encourage the meeting of the full range of responsibilities in science. It would be a mistake to ratchet up the minimum floors, as the space between floors and ideals is needed for decision making in the face of practical demands and trade-offs. So there are important responsibilities for which we should not have accountability mechanisms—namely reaching for responsibility ideals. We can incentivize such responsibilities with prizes and recognition, but those are not usually considered accountability mechanisms (no one is called to account for not winning such prizes).

Responsibility ideals, however, are not the only responsibilities for which we should not have accountability mechanisms. I argue here that some responsibility floors should also not have accountability mechanisms. In particular, responsibility floors that cannot be specified precisely in advance should not have accountability mechanisms. Accountability mechanisms built around imprecise responsibility floors would open science to serious risks of harmful politicization. Because some responsibility floors cannot be specified precisely in advance, we should not have accountability mechanisms for all responsibility floors.

In the next section, I will examine a recent debate over how to handle the societal impact of research through IRBs to show the problematic limits of accountability mechanisms as enforcement for all responsibility floors in science. This is not an unusual case—similar concerns are cropping up elsewhere in science (e.g., around IACUCs and journal practices). The reasons why we should not have accountability mechanisms for all responsibility floors further emphasize the need for alternative institutional structures to foster fully responsible science, such as assistive ethical mechanisms. I will discuss assistive ethical mechanisms in section 4.

3. Should IRBs assess the societal impact of research?

With the rise in recognition of the societal responsibilities of scientists, there have been increased calls for accountability and oversight mechanisms for the societal impact of research (e.g., Kourany 2010). Because there are already oversight institutions for some areas of research, such as IRBs for human subject research, these institutions are often thought of as an obvious place to turn for such oversight. I will argue in this section that this is a mistake, that such institutions should not attempt to provide accountability for broader societal impact, and that, indeed, broader societal impact is generally not a responsibility for which we should have accountability mechanisms. Different kinds of institutional mechanisms—mechanisms that are not focused on accountability—are needed.

Where did IRBs come from and what do they do? Although IRBs as a local institutional oversight mechanism for human subject research predate the 1974 National Research Act, it was this act that “conferred full authority” for such oversight on IRBs (MacKay 1995, 67). The National Research Act was passed after a series of damning revelations about biomedical human subject research, including the 1963 scandal of physicians experimenting on ailing patients at the Jewish Chronic Disease Hospital, the 1966 Beecher report, and the 1972 revelations of the forty-year Tuskegee Syphilis Study, in addition to concerns from the social sciences over the 1960s Milgram obedience experiments and the 1971 Zimbardo prison experiment (Horner and Minifie 2011). The 1974 act created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. This commission produced the 1979 Belmont Report (MacKay 1995).

The Belmont Report (1979) provided ethical principles around which human subject protection should be built: respect for persons, beneficence, and justice. These principles were developed into practice guidelines regarding informed consent, the assessment of risks and benefits, and the selection of subjects. The recommendations in the Belmont Report were adopted and strengthened in regulations for IRB functioning. The central US regulation for IRBs is known as the Common Rule (1998/2018). First promulgated in 1991, the Common Rule codified key aspects of how IRBs currently function, including guidelines for IRB composition and attention to informed consent, justice considerations, and risk-benefit considerations.

Included in the Common Rule (45 CFR 46) (both in the 1991 rule and in 2018 revisions) is a preclusion statement regarding what IRBs should not consider:

In evaluating risks and benefits, the IRB should consider only those risks and benefits that may result from the research (as distinguished from risks and benefits of therapies subjects would receive even if not participating in the research). The IRB should not consider possible long-range effects of applying knowledge gained in the research (e.g., the possible effects of the research on public policy) as among those research risks that fall within the purview of its responsibility. (45 CFR 46, Subpart A, §46.111 (a)(2))

For the argument here, the second sentence is particularly important. This aspect of the Common Rule prevents the long-term benefits of research from being weighed against the risks posed to research subjects in IRB review. And this is exactly as it should be, as long-term potential societal benefits do not justify imposing extra risks on research subjects. For the subjects enrolling in the study, the risks and benefits to them must be properly balanced, without consideration of whatever benefits society may gain. It is precisely this kind of reasoning (the imposition of risk on the few to benefit the many) that often justified ethically egregious studies prior to the increased regulation of human subject research. Although recent commentators like Doerr and Meeder (2022, 36) have noted a conflict between the Common Rule and the Belmont Report, the Common Rule properly precludes IRBs from considering broader societal impacts.

So, IRBs should not consider societal benefits in their risk/benefit calculations. What of societal harms? This question is particularly pressing and difficult in an age of big data medical research, as Doerr and Meeder (2022) note, where group harms become more likely and raise concerns well beyond the individual privacy and deidentification issues central to such research.

Yet despite the importance of societal impacts and the risk of harm to particular groups (who are not part of the study), authorizing IRBs to include consideration of societal risks in their gate-keeping practices would be a mistake. This is because (1) there is as yet no clear responsibility floor for the societal impact of research, (2) there are reasons to think that such a clear floor will elude us, and (3) there are reasons to think that a clear floor is essential for accountability, especially if we are to avoid improper politicization of science. Let’s consider each point in turn.

The responsibility floor of “do no harm” will not work for science generally.¹ Scientific findings often cause harm to industries when those findings motivate new regulations or provide better alternatives than existing enterprises offer. Science can also cause harm by discovering risks that drive changes in zoning laws or housing prices (e.g., flood or fire risks), or by discovering new diseases that then require new regulations that are harmful to some (e.g., quarantining those infected). Because science can motivate change, and change often comes with harm to some, this cannot be the correct responsibility floor.

Here is a more plausible responsibility floor for societal impact: Scientists should not make the world worse. While this is a good general sensibility, it is vague in practice. What counts as making the world worse? How are distributions of impacts versus the intensity of impacts to be assessed? While there are clear cases where research in particular areas would make the world worse (such as novel biological weapons), there are also many cases where it is simply unclear whether (scientifically successful) research would make the world worse or not. Even in cases in which societal impacts can be circumscribed (bounded temporally and spatially) and described accurately, how to weigh those impacts, to assess whether they would make the world worse or not, would often be highly contentious.

Think of the debates over GMOs and food production. In a particular case, would introducing a GMO seed make the world worse or not? Suppose we are considering a more drought-resistant wheat seed. Whether the research would make the world worse would depend on a range of factors, such as (1) where the wheat would be grown, (2) whether lateral gene transfer would cause problems for native flora and fauna, (3) the way in which intellectual property regimes would be enforced and reinforced, and (4) the impact of these decisions on the particular culture(s) and economies where the wheat seed was deployed. Aside from the complexity of both decision and prediction in these areas, there is the broader difficulty of assessing what would count as a “worse world.”

This brings us to the second point: once we attempt to assess broader societal impact, the lack of a clearly delimited population to be affected (unlike the particular human subjects of a study) and the open-ended timelines for assessment make providing a clear responsibility floor elusive. The societal impacts of successful research projects will extend well into the future, to cultures and environments yet to come into existence. What constitutes making the world worse (or some other standard for moral evaluation) in such an unbounded assessment will always be open to contestation, in part because of its unbounded nature and in part because of the difficult ethical questions at the heart of such an assessment (what is an appropriate distribution of goods and harms, which populations and cultures deserve special consideration for the impacts, how to weigh goods and harms against each other, etc.).

This leads to the third and final point. Douglas (2023) has argued that the central way that science gets politicized (particularly in democracies), in which political power improperly interferes with scientific inquiry, is when norms acceptable in democratic politics are conflated with norms for inquiry and the norms of politics are imported into inquiry. One such norm for good inquiry concerns the necessity of clear boundaries for what is and what is not minimally acceptable practice within inquiry, that is, for what is forbidden and what is not. The imposition of such constraints on inquiry is not itself politicization; rather, politicization occurs when the constraints are not clearly marked and delineated, that is, when there is vagueness about where the boundaries of acceptable inquiry lie.

The murkiness of the minimum floor for the overall societal impact of research means it is not sufficiently clear to impose accountability mechanisms for this floor. The ethical and predictive complexity of not making the world worse (a plausible floor) cautions us that generating an accountability mechanism (like an IRB gate-keeping approval or a post hoc accountability mechanism for fielding complaints about science that might have made the world worse) would be a key way to politicize scientific inquiry, to provide an avenue for improper and damaging exercise of political power in science. Imagine an institutional committee tasked with providing accountability for researchers whose work might have made the world worse, and the difficulties of adjudicating complaints about scientists and their work on this basis. Given the blurriness around the floor, such a committee would be a key way in which political power could (and most likely would) punish scientists for unwelcome results, regardless of whether their work made the world worse in any clear way.

We could address this problem with accountability structures by attempting to generate more precise floors. In some cases, I think this is exactly the right thing to do. But those floors will often be field specific, and tied to accountability mechanisms in particular journals and conferences (i.e., the institutional structures that are central to a particular field of inquiry). For example, a recent Nature Human Behaviour editorial (2022) attempted to grapple with the harms of research on people beyond research subjects (i.e., societal impact). When the editorial calls for researchers to avoid harm (1029), I think it makes a mistake for the reasons previously articulated (this is not the right responsibility floor). Nor is the aim of “maximization of benefits and minimization of potential harms” the correct aim in science, as it ignores the difficult issues of the distribution of harms and benefits (among other issues) (1030). However, one of the proposed guidelines is a clear and precise floor that can be used with accountability mechanisms: a refusal to publish “[c]ontent that is premised upon the assumption of inherent biological, social or cultural superiority or inferiority of one human group over another” (ibid.). Note that this is a ban on research that presupposes such human hierarchies, not research that might discover some differences. Here is a clear and precise floor to which researchers can be alerted and for which researchers can be held accountable, in this case through the journal’s enforcement of this standard.

In short, IRBs (or similar institutional mechanisms) should not be used to provide accountability for the responsibility floor concerning the general societal impact of research. There are not currently sufficiently precise floors, and we should be skeptical there can be, given the complexity of societal impact of science. Imprecise responsibility floors should, in general, not have accountability mechanisms attached to them because of the pervasive threat of politicization such mechanisms pose.

Clear and precise responsibility floors are crucial for the good functioning of gatekeeping and post hoc accountability mechanisms. That data fabrication and fraud are clear floors is one reason RIOs function well. The debate over whether questionable research practices (QRPs) should also be included should attend to whether QRPs can be defined precisely and be clearly understood to fall below a particular floor. Ongoing concerns about the effectiveness of other accountability institutions (e.g., IACUCs) rest in part on the spongy nature of the responsibility floors they are supposed to enforce (Hansen 2013; Nobis 2019). For IACUCs, the goals of refining, reducing, and replacing animal use as best one can are turned into precise procedures for the care and housing of animals, but those procedures avoid the difficult moral decisions about whether a particular animal study is necessary or worth the suffering inflicted, and thus avoid the questions of replacement, reduction, and refinement themselves. Concerns about therapeutic misconception and informed consent standards raise similar worries about other responsibility floors in human subject research (Appelbaum et al. 1987).

Lacking precise floors makes the assessment of accountability difficult and opens up accountability mechanisms as possible avenues of politicization, where broader political concerns enter into scientific inquiry in unexpected ways. Where responsibility floors cannot be made precise, we need an additional way to generate responsible science, one that is not centered on accountability.

4. Fostering responsible science: Assistive ethical mechanisms

Over the past decades, a different set of mechanisms for fostering responsible science has evolved. Instead of focusing on holding scientists accountable, these mechanisms help scientists pursue more responsible forms of research. They can thus assist scientists both with staying above murky responsibility floors and with reaching for responsibility ideals.

Such assistive ethical mechanisms include (1) research ethics training that moves beyond mere compliance to address the underlying responsibilities of scientists, including their inherent responsibilities to society; (2) research ethics consultations (RECs) that provide ethical advice to researchers, often confidentially, when complex ethical issues arise in the course of research; and (3) embedded humanists (e.g., the STIR model developed by Erik Fisher) who work with scientists in labs to assist with raising and reflecting on ethical issues in research (Chen 2021; Bourgeois 2021; de Melo-Martín et al. 2007; McCormick et al. 2013; Fisher and Schuurbiers 2013). These are not the only possibilities, but they provide a window on the range of options available.

Further, assistive ethical mechanisms can be developed and deployed beyond the academic institution. They could be utilized by conferences and journals, offering assistance to scientists whose work raises ethical concerns. Such assistance can help make the work (and the world) better, rather than just imposing an accountability cost on scientists who fail to meet minimum floors of practice. For example, scientists submitting their research to a journal may not be able to assess ahead of time whether they have framed the communication of the work so as to avoid harming vulnerable groups. An assistive mechanism to help scientists avoid such harms would be crucial for making science generally more socially responsible, while avoiding accountability problems.

Assistive ethical mechanisms could also be used in funding institutions. When a scientist submits a proposal to a funding agency, there is much they might not have thought about regarding the societal impact of their research (e.g., the way intellectual property concentrates benefits among the most wealthy, or the way the least well-off often struggle to take advantage of new findings). A funding agency’s assistive responsibility system could raise such issues and could help reframe the research at the start to improve its societal impact.

In short, assistive ethical mechanisms, if employed more broadly, could become a normalized part of scientific practice and aid the scientific community in embracing the broader societal responsibilities that have recently been recognized as inherent to scientific research. It is imperative that such assistive mechanisms not morph into accountability mechanisms, as the assistive mechanisms are needed for vague responsibility floors and for responsibility ideals, and accountability mechanisms for such complex terrain raise substantial risks of politicization.

With institutional mechanisms including both accountability (compliance) mechanisms and assistive ethical mechanisms, the division of ethical labor takes on a range of forms. Accountability mechanisms tend to offload the ethical labor from scientists: others set the standards, and scientists must simply comply. With ethical assistance, the labor is distributed rather than offloaded. Scientists can bring ethical assistance into their practice (e.g., through the embedded humanist model) and share the labor of ethical decision making, or they can consult with a REC for help with ethically thorny problems. With these assistive ethical mechanisms, scientists make decisions with the advice and help of others. This shares, rather than divides, ethical labor. For aiming at responsibility ideals, this seems the best possible system.

5. Conclusion

Accountability structures (compliance institutions) are important for science, but not sufficient for responsible science. For responsibility floors that are vague or open to multiple interpretations and for responsibility ideals, accountability mechanisms are an inapt institutional structure. Accountability structures need to be built around clear and precise floors. Not all responsibility floors can be clear and precise, and, further, accountability mechanisms do not help to promote reaching for responsibility ideals. Thus, there is much to responsible science that cannot fall under the purview of accountability or compliance.

We need ways to foster responsible science that do not depend on accountability mechanisms, for both imprecise floors and for ideals. To foster responsible science fully, we need both responsible science cultures within science (i.e., RCR training that is much broader than compliance) and assistive ethical mechanisms to aid scientists in the complex task of doing responsible science.

The combination of compliance mechanisms (pursuing accountability for clear responsibility floors) and assistive ethical mechanisms (aiding in meeting vague responsibility floors and in pursuing responsibility ideals) makes for a complex division of and sharing of ethical labor in science. Assistive ethical mechanisms aid scientists in doing responsible science, but the authority to make decisions about how to pursue and promulgate science is still left with scientists. Accountability compliance mechanisms require scientists to deal honestly with such systems, but the ethical labor of making judgments about ethical standards is offloaded to the compliance mechanism. With both kinds of institutions in place, ethical labor becomes more distributed overall, but never fully leaves the scientist.

Ultimately, we want scientists not just to avoid making the world worse but also to make it better. We need to develop the institutions to help them do that. This means moving toward a better culture of responsibility in science—a culture beyond compliance—and finding ways to assist scientists in making more responsible decisions in their research practice. When responsibility floors become clear, we can build accountability mechanisms for them, but we should not build such mechanisms before having that clarity. We should also continue to revisit the floors for which we hold scientists accountable, to ensure they remain appropriate and do not themselves become an impediment to responsible research (e.g., the demand for signed informed consent forms in Indigenous research collaborations, where signing documents undermines, rather than builds, trust). Finally, accountability mechanisms should not be merely ways for institutions to cover themselves from legal liability or to deflect public concern. Such mechanisms should always be geared, first and foremost, toward promoting responsible research in the fullest sense.

Acknowledgments

Thanks to Kevin Elliott for organizing the session where this paper was first given, to all those who participated in the session for a great discussion, and to Ted Richards for his incisive editorial help.

Footnotes

1 This is despite the fact that some scientific societies, such as the American Anthropological Association, claim that this is the moral requirement; see https://www.americananthro.org/LearnAndTeach/Content.aspx?ItemNumber=22869&navItemNumber=652

References

Appelbaum, Paul S., Roth, Loren H., Lidz, Charles W., Benson, Paul, and Winslade, William. 1987. “False Hopes and Best Data: Consent to Research and the Therapeutic Misconception.” The Hastings Center Report 17 (2):20–24.
Bivins, Thomas H. 2006. “Responsibility and Accountability.” In Ethics in Public Relations: Responsible Advocacy, edited by Fitzpatrick, Kathy and Bronstein, Carolyn, 19–38. Belmont, CA: Sage Publications.
Bourgeois, Mark. 2021. “Virtue Ethics and Social Responsibilities of Researchers.” In Science, Technology, and Virtues: Contemporary Perspectives, edited by Ratti, Emanuele and Stapleford, Thomas A., 245–68. New York: Oxford University Press.
Bridgman, Percy W. 1947. “Scientists and Social Responsibility.” The Scientific Monthly 65 (2):148–54.
Chen, Jiin-Yu. 2021. “Integrating Virtue Ethics into Responsible Conduct of Research Programs: Challenges and Opportunities.” In Science, Technology, and Virtues: Contemporary Perspectives, edited by Ratti, Emanuele and Stapleford, Thomas A., 225–44. New York: Oxford University Press.
de Melo-Martín, Inmaculada, Palmer, Larry I., and Fins, Joseph J. 2007. “Developing a Research Ethics Consultation Service to Foster Responsive and Responsible Clinical Research.” Academic Medicine 82 (9):900–4. https://doi.org/10.1097/ACM.0b013e318132f0ee
Doerr, Megan, and Meeder, Sara. 2022. “Big Health Data Research and Group Harm: The Scope of IRB Review.” Ethics & Human Research 44 (4):34–38. https://doi.org/10.1002/eahr.500130
Douglas, Heather. 2003. “The Moral Responsibilities of Scientists (Tensions between Autonomy and Responsibility).” American Philosophical Quarterly 40 (1):59–68.
Douglas, Heather. 2014. “The Moral Terrain of Science.” Erkenntnis 79 (5):961–79. https://doi.org/10.1007/s10670-013-9538-0
Douglas, Heather. 2021. “Scientific Freedom and Social Responsibility.” In Science, Freedom, and Democracy, edited by Hartl, Peter and Tuboly, Adam Tamas, 68–87. New York: Routledge.
Douglas, Heather. 2023. “Differentiating Scientific Inquiry and Politics.” Philosophy 98 (2):123–46. https://doi.org/10.1017/S0031819122000432
Fisher, Erik, and Schuurbiers, Daan. 2013. “Socio-technical Integration Research: Collaborative Inquiry at the Midstream of Research and Development.” In Early Engagement and New Technologies: Opening Up the Laboratory, Philosophy of Engineering and Technology, Vol. 16, edited by Doorn, Neelke, Schuurbiers, Daan, Poel, Ibo van de, and Gorman, Michael E., 97–110. Dordrecht: Springer. https://doi.org/10.1007/978-94-007-7844-3_5
Hansen, Lawrence A. 2013. “Institution Animal Care and Use Committees Need Greater Ethical Diversity.” Journal of Medical Ethics 39 (3):188–90. https://doi.org/10.1136/medethics-2012-100982
Horner, Jennifer, and Minifie, Fred D. 2011. “Research Ethics I: Responsible Conduct of Research (RCR)—Historical and Contemporary Issues Pertaining to Human and Animal Experimentation.” Journal of Speech, Language, and Hearing Research 54 (1):S303–29. https://doi.org/10.1044/1092-4388(2010/09-0265)
International Science Council. 2021. “A Contemporary Perspective on the Free and Responsible Practice of Science in the 21st Century: Discussion Paper of the Committee for Freedom and Responsibility in Science.” https://stories.council.science/science-freedom-responsibility/
Kourany, Janet A. 2010. Philosophy of Science After Feminism. New York: Oxford University Press.
Leonelli, Sabina. 2016. “Locating Ethics in Data Science: Responsibility and Accountability in Global and Distributed Knowledge Production Systems.” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 374 (2083):20160122. https://doi.org/10.1098/rsta.2016.0122
MacKay, Charles R. 1995. “The Evolution of the Institutional Review Board: A Brief Overview of Its History.” Clinical Research and Regulatory Affairs 12 (2):65–94. https://doi.org/10.3109/10601339509079579
McCormick, Jennifer B., Sharp, Richard R., Ottenberg, Abigale L., Reider, Carson R., Taylor, Holly A., and Wilfond, Benjamin S. 2013. “The Establishment of Research Ethics Consultation Services (RECS): An Emerging Research Resource.” Clinical and Translational Science 6 (1):40–44. https://doi.org/10.1111/cts.12008
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. 1979. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. https://www.hhs.gov/ohrp/sites/default/files/the-belmont-report-508c_FINAL.pdf
Nobis, Nathan. 2019. “Why IACUCs Need Ethicists.” ILAR Journal 60 (3):324–33.
Pennock, Robert T., and O’Rourke, Michael. 2017. “Developing a Scientific Virtue-Based Approach to Science Ethics Training.” Science and Engineering Ethics 23:243–62.
“Science Must Respect the Dignity and Rights of All Humans.” 2022. Nature Human Behaviour 6:1029–31. https://doi.org/10.1038/s41562-022-01443-2
“The Common Rule.” 1998/2018. US Department of Health and Human Services Federal Code 45 C.F.R 46, Subpart A. https://www.hhs.gov/ohrp/regulations-and-policy/regulations/annotated-2018-requirements/index.html#46.111