
To Hedge or Not to Hedge: Scientific Claims and Public Justification

Published online by Cambridge University Press:  07 May 2024

Zina B. Ward*
Affiliation:
Department of Philosophy, Florida State University, Tallahassee, FL, USA
Kathleen A. Creel
Affiliation:
Department of Philosophy and Religion and Khoury College of Computer Sciences, Northeastern University, Boston, MA, USA
Corresponding author: Zina B. Ward; Email: [email protected]

Abstract

Scientific hedges are communicative devices used to qualify and weaken scientific claims. Gregor Betz has argued—unconvincingly, we think—that hedging can rescue the value-free ideal for science. Nevertheless, Betz is onto something when he suggests there are political principles that recommend scientists hedge public-facing claims. In this article, we recast this suggestion using the notion of public justification. We formulate and reject a Rawlsian argument that locates the justification for hedging in its ability to forge consensus. On our alternative proposal, hedging is often justified because it renders scientific claims as publicly accessible reasons.

Type
Article
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of the Philosophy of Science Association

1. Introduction

In its 2021 “Science Brief” on COVID-19 vaccines, the Centers for Disease Control and Prevention (CDC) treads carefully. Drawing on almost 200 publications and preprints, the brief summarizes scientific evidence about COVID-19 vaccination available through August 24, 2021. While the CDC is unequivocal about the “considerable protection” that COVID-19 vaccines provide against severe disease and death, it qualifies and tempers many of its subsidiary claims. For instance, the brief contains numerous statements about what “may” be the case: Vaccination may protect against asymptomatic infection in adults; infection in a vaccinated person may boost immunity; and a particular vaccine may be less effective than others. There is talk of the “potential” benefit of a booster shot for immunocompromised people and claims about Delta variant infections in vaccinated people “potentially” having reduced transmissibility compared to infections in unvaccinated people. The brief’s tables also all include 95 percent confidence intervals for estimates of vaccine effectiveness.

Gregor Betz (2013, 2017) has drawn philosophical attention to these devices for communicating uncertainty, which he calls “hedges.” Hedging, Betz argues, can save the value-free ideal, which holds roughly that choices in the internal stages of science should be insulated from nonepistemic values (Douglas 2009). For reasons to be discussed below, Betz’s attempted revitalization of the value-free ideal falls short. And yet we think Betz is onto something when he claims that hedging is an appropriate approach to value management in policy-relevant science. In this article we build on Betz’s brief suggestion that hedging is recommended by “democratic principles,” using prominent accounts of public justification to characterize its value.

We begin in section 2 by critically examining Betz’s defense of the value-free ideal. We then ask: What can political philosophy tell us about the value of hedging? In section 3, we introduce the framework of public reason liberalism. Section 4 formulates a Rawlsian argument in defense of hedging whose starting point is CSPR: the idea that all and only scientific claims that are the object of consensus can be appealed to in public reason. We argue against CSPR and show that the limited scope of Rawls’s ideal of public reason precludes a Rawlsian defense of hedging. In section 5, we replace CSPR with a principle that holds that a scientific claim is admissible in public justification only if it is justified by shared standards of evaluation. Section 6 argues that this principle, Accessible Public Reason (APR), supports hedging in cases in which unhedged scientific claims depend on evaluative standards that are not widely shared. Finally, section 7 shows that this defense of hedging has broad scope and relevance beyond debates about values in science.

2. Hedging and the Value-Free Ideal

Betz (2013, 2017) surveys the different types of hedges used by scientists. Some hedging strategies use “epistemic qualification and conditionalization” to make hedged hypotheses weaker than their unhedged counterparts (Betz 2013, 214). Others involve presenting multiple interpretations of ambiguous data or carrying out inference for multiple models rather than selecting one. Scientists sometimes hedge by making use of different epistemic modalities, as when the CDC claimed that COVID-19 infection in a vaccinated person may increase immunity or framed their findings in terms of what is likely, unlikely, possible, or plausible. Other hedges involve conditionalizing on debatable assumptions (e.g., “Assuming that ABC, we find that XYZ”). In quantitative hedging, researchers report a range of numerical estimates of an important quantity, assign a probabilistic degree of confidence to a hypothesis, or report the results of statistical tests performed with a variety of significance levels. The CDC’s reporting of confidence intervals for vaccine effectiveness is an example of quantitative hedging. As Betz (2013) shows and John (2015b) highlights, hedging is not merely a theoretical possibility: Many of these strategies are visible in the work of the Intergovernmental Panel on Climate Change.
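
To make quantitative hedging concrete, here is a minimal sketch, using invented numbers rather than any real trial data, of how the same simulated results can back either an unhedged point estimate of vaccine effectiveness or a hedged interval claim. The example is ours, not Betz’s or the CDC’s.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trial outcomes (1 = infected); all numbers are invented for illustration.
vaccinated = rng.binomial(1, 0.02, size=5000)
unvaccinated = rng.binomial(1, 0.10, size=5000)

# Unhedged point claim: a single effectiveness number.
effectiveness = 1 - vaccinated.mean() / unvaccinated.mean()

# Quantitative hedge: a bootstrap 95 percent confidence interval reported alongside
# (or instead of) the point estimate.
boot = []
for _ in range(2000):
    v = rng.choice(vaccinated, size=vaccinated.size, replace=True).mean()
    u = rng.choice(unvaccinated, size=unvaccinated.size, replace=True).mean()
    boot.append(1 - v / u)
low, high = np.percentile(boot, [2.5, 97.5])

print(f"Unhedged point estimate: effectiveness = {effectiveness:.0%}")
print(f"Hedged interval claim: effectiveness is roughly {low:.0%} to {high:.0%} (95% CI)")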

Betz appeals to hedging to defend the value-free ideal for science against a version of the argument from inductive risk (Rudner 1953; Douglas 2009). The argument from inductive risk holds that uncertainty necessitates the use of nonepistemic values in justifying the acceptance and rejection of scientific hypotheses (footnote 1). When deciding whether to accept a scientific claim, a researcher must decide if the evidence is strong enough to warrant acceptance. This requires weighing the consequences of mistakenly accepting a false claim against the consequences of mistakenly rejecting a true claim (footnote 2). Proponents of the argument from inductive risk argue that justifying a particular trade-off between these consequences requires nonepistemic values. They conclude that science is necessarily value-laden: In the face of uncertainty, scientists must justify their acceptance of claims by appeal to nonepistemic values.

According to Betz, this argument applies only to unhedged hypotheses. We can rescue the value-free ideal, he suggests, by using hedging to make the uncertainties underlying scientific claims fully explicit. Consider a hypothesis that is subject to scientific uncertainty, such as, “Existing COVID-19 vaccines reduce the risk of asymptomatic infection.” To assert this claim or its negation outright, researchers would have to make a value-laden assessment of the cost of false positives versus false negatives. Their justification of the claim would need to reference value judgments about how bad it would be to mistakenly assert that COVID-19 vaccines fight asymptomatic infection when they do not, or to fail to assert the claim when they do. Betz argues that instead of making such judgments, researchers should (and often do) use one or more hedging devices to make scientific claims “beyond reasonable doubt” (2013, 215). For instance, they might conditionalize: “Assuming that asymptomatic and symptomatic infection respond similarly to COVID-19 vaccines, COVID-19 vaccines reduce the risk of asymptomatic infection.” The explicit assumption in the antecedent of this hedged claim makes the claim more certain than its unhedged counterpart. Betz argues that conditionalization and other hedging strategies can be used to make even contentious scientific claims “virtually certain”—as certain as “benchmark statements” like “coal burns” or “Africa is larger than Australia,” which are taken for granted in all decision contexts (2013, 218, 215). With uncertainty reduced to the point of practical irrelevance, the need to make nonepistemic value judgments evaporates. Hedging allows scientists to provide value-free hypotheses to policy makers, vindicating the value-free ideal.

Betz claims that the value-free ideal is supported by “democratic principles” that prescribe a division of labor between democratically legitimated decision makers, who are responsible for the “normative assumptions of policy justification,” and scientists, who supply only the “descriptive assumptions” (2017, 99). In a democratic society, experts should tell society how to achieve its goals, not set those goals themselves. Because hedging scientific claims enables the realization of the value-free ideal, the democratic principles that bolster the ideal also justify hedging. Hedging is valuable, on Betz’s picture, because it ensures the democratically essential division of labor between experts and democratic decision makers.

Despite the initial appeal of this picture, a strict division of labor is not feasible. Betz assumes that scientific hypotheses can be hedged to the point of virtual certainty without diminishing their policy relevance. But consider again a benchmark statement like “coal burns.” We are extremely certain that coal burns. How many universal generalizations can be made with the same degree of confidence about climate change, novel vaccines, or child development? We wager very few. If policy about the environment or public health or education were made only on the basis of statements as certain as “coal burns,” policy makers would have little to go on. To achieve virtual certainty, an estimated range of a significant quantity might need to be expanded to the point of nonactionability (e.g., “Average surface temperature will change between –5°C and 15°C over the next century”). It is also very difficult to make policy with statements that concern mere possibility. Public health officials would not find virology very helpful if it only generated claims about the possible modes of transmission of a circulating virus. In short, there may be very few hypotheses that are hedged to the point of virtual certainty and yet still useful to policy makers.

We are not the first to raise this worry about Betz’s defense of the value-free ideal. Steel (2016) points out that “even well confirmed scientific theories fall far short of the certainty of truisms such as ‘coal burns’” (703). If scientific advice must be just as well confirmed as such truisms, “scientists will have precious little informative advice to give” (707; see also John 2015b; Magnus 2018; Frisch 2020). Pamuk (2021) argues that there is a trade-off between the usefulness and (value) neutrality of expert advice: The more “neutral” a scientific claim, the less useful it is likely to be for policy makers. This is in part because ridding a claim of value influence requires adding complexity. Betz’s ideal, Pamuk observes, threatens to make expert advice so complex that it becomes incomprehensible to nonexperts. This criticism suggests that hedging cannot be justified by its ability to realize the value-free ideal. The strict division of labor between democratic decision makers and scientists that Betz suggests would render science advice useless.

Others have raised additional problems for Betz. Nyrup (2022) claims that values are woven through countless scientific choices, and many of these values are not transparent to scientists. Given this opacity, it is implausible to think that scientists can hedge in ways that fully compensate for the values needed to justify a scientific result. Hicks (2018) points out that whether a claim is “beyond reasonable doubt” depends on a value judgment about what is reasonable. Finally, Steel (2016) argues that hedging doesn’t eliminate uncertainty. Even if scientists strive only to communicate the state of scientific understanding of a topic, there may still be considerable uncertainty about what that state is. Moreover, he points out that varying the epistemic modality of a claim is not guaranteed to provide certainty: There can be uncertainty even about what is possible. Hence, hedging scientific claims to the point of virtual certainty not only undermines the usefulness of science to policy making; it may not be achievable at all.

Betz’s defense of the value-free ideal and his accompanying justification of hedging are therefore not successful. Hedging does not obviate the need for nonepistemic values in justifying the acceptance or rejection of scientific hypotheses. Still, we share Betz’s sense that there are good reasons in many cases for scientists to hedge their claims. Having shown that hedging’s importance does not lie in its preservation of value-freedom, we ask in the remainder of the article: What, then, does justify the hedging of scientific claims?

3. Public Reason Liberalism

Schroeder (2020) distinguishes two approaches to questions about values in science, one grounded in ethics and the other in political philosophy. An ethically oriented philosopher interested in what justifies hedging might pose the question: What moral duties do scientists have when communicating their results? Our preferred approach to hedging is instead political. Hedging is of particular interest when it is applied to public-facing scientific claims (footnote 3). There are distinctively political reasons to hedge scientific findings that play a role in the public sphere.

A central question in political theory concerns when the exercise of coercive state power is legitimate. Thinkers who endorse a principle of public justification argue that coercion is acceptable only when its exercise is justifiable to all members of the public. To treat people as equals, political power must not be exercised in a way that privileges some people’s conceptions of the right or the good over others. Rather, there must be a rationale for coercive policies that is available to everyone who is subject to them. Public reason liberalism combines a principle of public justification with the traditional liberal commitment to individual liberty. There are many ways of understanding the requirement of public justification, making public reason liberalism a big tent. In Vallier’s (2011) estimation, public reason liberals include John Rawls, Jürgen Habermas, Thomas Scanlon, and Gerald Gaus. Public reason liberalism is a natural place to look for a political defense of hedging: Because scientific claims are often invoked to justify coercive state policies, public reason liberalism seems prima facie to require that public-facing scientific claims not depend in problematic ways on idiosyncratic conceptions of the right or the good. Perhaps hedging is a strategy for ensuring that science-supported policies are justifiable to all.

One way of developing such an argument takes on board Rawls’s ideal of public reason. Rawls holds that decisions about the fundamental political structure of society should be justifiable by appeal to “values that…others can reasonably be expected to endorse” (2005, 226). Citizens and lawmakers may form opinions on the basis of their own “comprehensive doctrines”—that is, their personal moral, religious, and metaphysical commitments. But they ought to appeal only to public reasons, namely “reasons that all other members of the justificatory constituency could accept as valid,” when discussing and voting on fundamental political matters (Quong 2018).

Rawls’s ideal of public reason confers on individual citizens a “duty of civility” (2005, 217). Recognizing the diversity of comprehensive doctrines held by their fellow citizens, citizens “should be ready to explain the basis of their actions to one another in terms each could reasonably expect that others might endorse as consistent with their freedom and equality” (218). For example, while not all citizens believe that human beings were created equal in the eyes of a divinity, all citizens can be reasonably expected to believe that a just society involves a minimal standard of equality. Although a citizen may believe in divinely sanctioned equality, she also has a duty of civility to justify her claims about fundamental matters by referring to a more neutral conception of equality that she can expect others to share.

A distinctive feature of Rawls’s conception of public reason is its restricted scope: He holds that the ideal applies only to questions about “constitutional essentials and basic matters of justice” (1997, 235). These questions include “who has the right to vote, or what religions are to be tolerated, or who is to be assured fair equality of opportunity, or to hold property” (2005, 214). Political discussion about such fundamental matters is to be conducted by appeal to public reasons. But it is “neither attainable nor desirable” for all political debates to be held to such a high standard (2001, 91; cf. Quong 2010). Reliance on comprehensive doctrine is acceptable (and indeed unavoidable) when discussing nonfundamental policy questions. Rawls also claims that only discussions in the “public political forum,” and hence only people engaged in particular activities, are subject to the duty of civility (1997, 767). These activities include running for office, voting on basic political questions, writing judicial opinions, and serving as a public official.

4. Consensus Science as Public Reason

It is widely acknowledged that Rawls’s account of the place of science in public reason is underdeveloped (Jønch-Clausen and Kappel 2016; Bellolio 2018; Pamuk 2021). Rawls allows those engaged in public justification to appeal to generally accepted beliefs, common sense, and “the methods and conclusions of science when these are not controversial” (2005, 224–25). There is debate about what he means here: Does noncontroversiality require complete consensus or just widespread agreement (Jønch-Clausen and Kappel 2016)? Must a scientific conclusion be noncontroversial among scientists or among the general public (Galston 1995)? A number of Rawlsians endorse a principle that, partly following Bellolio (2018), we call CSPR: Among the claims of science, all and only those claims that are the object of consensus in the scientific community can be appealed to in public reason. According to CSPR, scientific consensus is both necessary and sufficient for use of a scientific claim in public justification. Uncontroversial science is allowed; controversial science is not.

As a normative political principle or an exegesis of Rawls, CSPR is fairly popular among Rawlsians thinking about expertise (Torcello 2011; Bellolio 2018; Kappel 2021). It is therefore a natural starting point for a Rawlsian justification of scientific hedging. For those who prize scientific consensus, hedging has a straightforward appeal: There are some matters about which scientists disagree. In such cases, consensus can be forged by offering more tentative conclusions—by hedging. It is plausible to think that it is good to expand the number of scientific claims admissible in public justification. A larger stock of public reasons facilitates productive political debate and might even promote agreement on policy matters. For the proponent of CSPR, then, hedging public-facing scientific claims is often justified because, by building greater scientific consensus, it expands the space of public reason.

This argument resonates with the current Rawlsian literature as well as work by philosophers of science on the importance of consensus (Oreskes 2004; Miller 2013; Stegenga and Menon 2023) (footnote 4). Nevertheless, we think it doesn’t capture the political value of hedging for two reasons. First, scientific consensus is indeed a good heuristic for identifying claims eligible for inclusion in public justification, but it is only a symptom of what matters: Consensus neither guarantees nor is required for the inclusion of a claim in public justification. And second, because of the restricted scope of Rawls’s ideal of public reason, even if CSPR were a reasonable principle, a committed Rawlsian could provide only a limited defense of scientific hedging.

The first objection targets CSPR, which is subject to counterexamples in both directions. Consider its “necessity claim”: the idea that only scientific claims about which there is consensus can feature in public justification. Dahlquist and Kugelberg (2021) argue that this requirement is too demanding. They point out that for much of the COVID-19 pandemic, there was not scientific consensus about the efficacy of nonpharmaceutical interventions (NPIs) such as mask wearing, school closures, and environmental disinfection. Nevertheless, they think it was appropriate for the state to enact NPIs early in the pandemic. An effective governmental response required swift action in conditions of uncertainty. This shows that consensus is not necessary to legitimize policy making on the basis of a scientific claim.

Additional counterexamples come from cases of “manufactured doubt” (Michaels 2008; Oreskes and Conway 2010), such as when researchers were paid by the tobacco industry to muddy the waters about smoking’s relationship to lung cancer. Industry interference stymied the formation of scientific consensus about whether smoking causes cancer. Nevertheless, it was (eventually) appropriate for public health policy makers to assume that it does. In cases of urgency or manufactured doubt, there are scientific claims that do not achieve scientific consensus and yet are admissible into public justification.

One can also object to CSPR’s “sufficiency claim”: the idea that all scientific claims about which there is consensus can enter into public justification. A number of authors have rejected this claim (Galston 1995; Jønch-Clausen and Kappel 2016; Reid 2019; Bellolio 2019; Kappel 2021). Jønch-Clausen and Kappel (2016) consider what happens when the general public rejects a scientific consensus. Sometimes such distrust is well founded: “[E]ven in scientific communities a broad consensus can at times come about due to factors other than those harbored in the burdens of judgment: political influence, prejudice, systemic bias or unwarranted orthodoxy, influence of industrial partners etc.” (130; Holman and Elliott 2018). These are cases of “manufactured consensus”—the mirror of “manufactured doubt”—in which scientists come to agreement prematurely (McIlroy-Young et al. 2021). In such cases, it is inappropriate to take scientific claims for granted in public justification (footnote 5).

These arguments suggest that CSPR is an untenable political principle. Consensus about a scientific claim is a useful heuristic for its admissibility into public justification but is not itself the source of political legitimacy.

The second problem with a Rawlsian justification of hedging stems from Rawls’s claim that the ideal of public reason applies only to discussion of fundamental political matters and to people engaged in activities such as running for office and writing judicial opinions. Scientists qua scientists are not subject to his ideal of public reason. As Wenar (2021) explains, on Rawls’s account, “[C]itizens are not bound by any duties of public reason when they…worship in church, perform on stage, pursue scientific research, send letters to the editor, or talk politics around the dinner table” (20; our italics). Rawls singles out scientific societies and universities as private domains governed by “nonpublic reasons” (2005, 220). Thus, scientists do not have duties of civility, at least in their capacity as scientists. But hedging scientific claims is usually carried out by scientists, not candidates for public office, judges, or legislators. Because Rawls’s ideal of public reason does not apply to scientists, it cannot require them to hedge.

Even if a Rawlsian were to claim that scientists do have distinctive duties of civility that recommend hedging, a further obstacle remains: As mentioned above, Rawls holds that his ideal of public reason only applies to “constitutional essentials and basic matters of justice” (1997, 235). Discussions of nonfundamental policy questions are exempt from the requirements of public reason. Although scientific claims might occasionally be relevant to fundamental political questions, they more often bear on nonfundamental policy issues. Science is useful, for example, when we are setting emissions standards for vehicles, determining whether proposed construction projects will be ecologically disruptive, and establishing vaccination standards for children. If hedging is to be justified by appeal to Rawls’s ideal of public reason, its justification only extends to those cases in which science is brought to bear on fundamental political questions. The ideal turns out to be silent on the value of hedging in the vast majority of policy discussions, which concern more mundane matters.

These considerations show that drawing inspiration from Rawls in an attempt to defend scientific hedging fails: We need a justification of hedging that does not rely on CSPR and is not constrained by Rawls’s narrow conception of the scope of public justification.

5. Accessibility, Science, and Public Justification

If consensus is not the proper criterion for determining whether a claim is admissible in public justification, then what is? Our answer to this question will serve as the basis for an alternative argument for scientific hedging. This section will lay out our conception of public justification and the next will apply it to hedging, characterizing hedging as a contribution to public justification.

We first depart from Rawls in holding that the principle of public justification has a broad scope: It applies not only to fundamental matters but to nonfundamental policy issues as well (Greenawalt 1994; Schwartzman 2004; Quong 2010; Torcello 2011; McKinnon 2012). The considerations that motivate public reason liberalism—commitments to treating people as equals, to avoiding privileging any idiosyncratic conception of the right or the good, to achieving political reconciliation in the face of reasonable pluralism—do not apply exclusively to discussion of fundamental matters. We therefore agree with what Quong (2004) calls “the broad view,” which holds that “the ideal of public reason ought to be applied, whenever possible, to all political decisions where citizens exercise coercive power over one another” (234).

As the “whenever possible” caveat suggests, this broad view takes public justification to be a regulative ideal that may not always be satisfiable. When we debate matters of policy, we should strive to justify our proposals in terms that others would accept, but there is no guarantee that public reasons will be able to fully resolve such debates. It is sometimes seen as a fatal weakness of public reason that it is “incomplete” in this sense (i.e., inconclusive or indeterminate; Gaus 1996; Quong 2004), but a number of authors have argued that public justification need not be complete to be a valuable ideal (Quong 2010; Boettcher 2020). As Schwartzman (2004) explains, “[O]ur working assumption should be that most issues can be decided within the limits of public reason” (193). It would be “premature” to declare a policy issue irresolvable through public justification, partly because such a declaration would be self-fulfilling: Abandoning the search for public reasons is a surefire way to not find them (206; see also Quong 2004). The ideal of public justification does not require that public reasons can always settle a dispute, only the “methodological assumption…that the available reasons rarely run out,” and thus a commitment to conducting political discussion in publicly accessible terms (Gaus 1996, 225).

In recent years public reason liberals have explored what exactly it takes for a reason to be public. Vallier (2011, 2016, 2022) distinguishes three ways of understanding the publicity of reasons: shareability, accessibility, and intelligibility. Intelligibility is the least demanding, as it holds that for a reason to be public, it must be understandable to everyone (under suitably idealized conditions). More demanding is accessibility: A reason is accessible only if it is justified by common standards of evaluation. On this conception, not all members of the public need to endorse the reason for it to be public, but they do need to endorse the evaluative standards that justify it (under suitably idealized conditions). An evaluative standard is common when it “enjoys intersubjective recognition among people and is independent of any particular comprehensive doctrines” (Wong 2022, 238). The most demanding conception of publicity, shareability, holds that a reason is public only if both the reason and the evaluative standards by which it is justified are widely shared (under suitably idealized conditions). Note that all these conceptions of publicity require moderate idealization. Public reason liberals are not interested in whether standards and/or reasons are actually shared or intelligible, but whether they would be if citizens all had a baseline level of information and rational capacities (Vallier 2011, 371).

In our view, public justification should be held to a standard of accessibility on which a reason is public if and only if it is justified according to common evaluative standards (Badano and Bonotti 2020; Tyndal 2019; Wong 2022) (footnote 6). Badano and Bonotti (2020) show that accessibility is an attractive middle ground between intelligibility and shareability. Intelligibility is too permissive, enabling reasons rooted in (say) religious doctrines to count as public because they are intelligible to nonbelievers (cf. Vallier 2011). Shareability is too restrictive because even when citizens agree about the dimensions of evaluation on which a policy or law should be judged, they may interpret or weigh those dimensions differently. (Rawls’s [2005] discussion of the “burdens of judgment” explores these sources of disagreement.) Accessibility offers a path between these extremes, requiring evaluative standards—but not the particular reasons to which they give rise—to be shared (footnote 7). Evaluative standards are “norms” on the basis of which one can evaluate reasons. They include both normative principles for action and descriptive beliefs. They also include “epistemic rules for the collection of factual evidence and for drawing inferences” (Badano and Bonotti 2020, 39). Wong (2022) explains that shared evaluative standards “enable people to scrutinize, dispute, or affirm the reasons offered by others” (238). Political deliberation would be hamstrung without common yardsticks of evaluation.

Incorporating the notion of accessibility into public reason liberalism suggests an alternative principle to replace CSPR. According to what we call APR, a scientific claim is admissible in public justification if and only if it is justified by standards of evaluation that are shared (under suitably idealized conditions). Badano and Bonotti (2020) suggest that scientific claims are indeed accessible in this sense. They claim that the standards of evaluation operative in science are Kuhn’s (1977) theoretical virtues, which include accuracy, consistency, scope, simplicity, and fruitfulness. They argue that these standards are, in fact, widely shared: “[M]ost citizens in contemporary societies, including most religious citizens, do acknowledge the soundness and validity of scientific inquiry as applied to empirical issues” (Badano and Bonotti 2020, 52).

We agree that scientific claims can be accessible but wish to highlight a few complexities. First, scientific reasons are not a monolith. Some of the claims made by scientists are justified by reference to common evaluative standards but others are not. APR claims that scientific claims are admissible in public justification only when they are grounded in shared standards, not that all scientific claims are so grounded. Of the set of claims that are not already grounded in shared standards, we will contend that hedging can ground some but not all. Second, we doubt that evaluative standards in science are best described at the high level of theoretical virtues. Kuhn’s criteria of theory choice are highly general. To understand the warrant for scientific claims, one must appeal to mid-level principles, beliefs, and rules that govern the evaluation of those claims. These include claims about the relative strength of different kinds of evidence and principles of experimental design. For example, a mid-level principle with broad scope holds that inferences about a population should be drawn by examining a random sample of that population when possible.

Badano and Bonotti (2020) make the subtle point that preserving the distinction between shareability and accessibility requires evaluative standards that are not characterized at too low a level of abstraction (39–40). If evaluative standards are so fine-grained that they fully determine belief, then the distinction between shareable and accessible reasons collapses. And yet there is considerable room between stratospheric Kuhnian virtues and low-level principles that dictate choice. Evaluative standards, we suggest, are substantive norms that occupy this middle terrain but still require interpretation and balancing.

Comparing APR with CSPR, we see that accessibility is a better criterion for the inclusion of a scientific claim in public justification than consensus. First, APR casts a wider net, assuaging Dahlquist and Kugelberg’s (2021) concerns about policy making during a crisis. A scientific claim about the effectiveness of, say, mask wearing might be justifiable by reference to shared standards of evaluation regarding data collection and statistical inference. The claim is accessible and therefore public even if some experts do not endorse it (i.e., it fails to satisfy CSPR). Expert disagreement can be attributed to the burdens of judgment: Although scientists agree about criteria of evaluation, they sometimes interpret and trade off those criteria differently. Badano and Bonotti note that such disagreement is normal, as “different experts generally make different judgments in interpreting and weighing evidence” (2020, 61). APR is preferable to CSPR because it is unreasonable to require cutting-edge scientific claims to achieve consensus before being permitted in political discussion. All we can ask is that such claims are grounded in shared evaluative standards.

There are also scientific claims that satisfy CSPR but not APR. Consider a case of manufactured consensus, such as industry-funded biomedical research that reaches premature agreement about the efficacy of a lucrative new drug. CSPR focuses on the first-order scientific consensus, wrongly admitting the claim that the drug is efficacious into public justification. APR, however, bars the claim from public justification precisely because it was reached on the basis of nonshared evaluative standards that prioritized industry profit over consumer protection.

It has been argued that there is a fundamental incompatibility between public reason and the use of scientific expertise in policy making (McKinnon 2012; Jønch-Clausen and Kappel 2016; Kogelmann and Stich 2021). The scientific claims that justify government policies are often too complex for ordinary citizens to understand. This is a problem if there is a “manageability requirement” on public justification, such that only reasons that “members of the general public can reasonably be expected to manage” are permissible (Badano 2022). Kogelmann and Stich (2021), assuming something like a manageability requirement, argue that “the set of scientific reasons that citizens may appeal to in public debate is astonishingly small” (162).

This objection can be blunted by recognizing the idealization built into accessibility. Theorists of public justification hold that what matters to the legitimate exercise of power is not actual endorsement by members of the public, but counterfactual or rationally required endorsement (Vallier 2022). Badano and Bonotti (2020) argue that whether a scientific claim is accessible depends on whether it is justified by standards of evaluation that would be shared by any citizen who channeled her “time, energy, and cognitive capacities” toward the study of the relevant science (54). The idealized scenario is one in which the citizen becomes knowledgeable enough about the methods to be able to assess whether research justifies a particular conclusion. We need not suppose that any citizen could “become fully-fledged experts, capable of advancing the discipline,” but only that they could in principle gain a “passive understanding of science’s evaluative standards” (57). It is therefore mistaken to think that public justification is subject to a manageability requirement if that is taken to concern actual (rather than counterfactual or idealized) manageability.

6. An Accessibility-Based Argument for Hedging

The APR criterion states that scientific claims that are justified by shared standards of evaluation are suitable for use in public justification. Standards of evaluation are often a locus of scientific disagreement: Not all evaluative standards used in science are widely shared. Our suggestion is that the value of hedging lies in its ability to forge scientific claims that are justified by shared standards. Like the advocate of the Rawlsian argument, we hold that it is beneficial for scientific claims to be available to public justification because such claims can fuel discussion and promote policy agreement. But unlike the proponent of CSPR, we think that what is required to expand the stock of publicly available scientific reasons is not reaching consensus about those very claims, but rather formulating claims that are defensible by appeal to shared standards of evaluation. We argue that hedging can often reduce the dependence of scientific claims on evaluative standards that are not shared. Hedging is thus recommended for its ability to generate publicly accessible scientific reasons.

To illustrate this argument, imagine that a team of researchers is using a rodent model of Parkinson’s disease to assess the potential of a new drug to treat Parkinson’s. Finding encouraging results, the researchers must decide whether to accept the claim, “The drug slows the progression of Parkinson’s disease.” Various standards of evaluation can be invoked to determine whether the evidence is strong enough to warrant acceptance of this claim. Some researchers may endorse an evaluative standard on which claims about a drug’s efficacy in treating disease must be supported by knowledge of its mechanism of action. These researchers might be wary of hype surrounding new treatments, and especially concerned not to give patients false hope. Such considerations might also motivate them to adopt an evaluative standard requiring strict separation between human diseases and animal models of those diseases, such that it is inappropriate to draw conclusions about the former on the basis of the latter. Still another potential standard of evaluation concerns when a drug’s effect is large enough to warrant talk of “slowing the progression” of a disease. Some researchers may insist that a treatment must extend an animal’s lifespan by, say, 10 percent to justify claims about its efficacy. According to these researchers’ evaluative standard, a drug can have a statistically significant but practically meaningless impact on disease progression.

Other researchers might reject these standards of evaluation. After all, many drugs have been successful in treating disease despite our ignorance of how they work, and biomedical research has shown that inferences from (some) animal models to human diseases are reliable. These researchers might also argue that any statistically significant effect on an animal’s lifespan counts as “slowing the progression” of the disease. We take it that both perspectives on each of these evaluative standards are reasonable. We may imagine that the evidence is such that, given their differing standards, researchers cannot agree whether to accept that “the drug slows the progression of Parkinson’s disease.” There are no shared standards of evaluation by appeal to which the claim can be justified. As a result, it is not accessible and, by APR, not admissible in public justification.

Our suggestion is that hedging can sometimes produce a claim that is justifiable by reference to standards that all of the researchers accept. Consider the hedged conclusion, “The drug slows the progression of disease in a rodent model of Parkinson’s by 4–6 weeks.” This claim is defensible by reference to a less contentious set of evaluative standards. It is justified, for example, by the principle that randomized controlled trials of drugs in rodents license conclusions about the efficacy of those drugs in rodents. It is also presumably justified by evaluative standards governing statistical inference (e.g., about the proper use of t-tests to compare treatment and control groups), which allow the researchers to give a numerical (but uncertain) estimate of the drug’s effect. Indeed, one can imagine that the hedged claim could be justified entirely by appeal to evaluative standards that the researchers share (or would share, under idealized conditions). In that case the hedged claim, but not its unhedged counterpart, would count as accessible and therefore be admissible in public justification, under APR. Or consider another hedged variant: “Assuming that human patients respond similarly to rats, the drug slows the progression of Parkinson’s disease.” By conditionalizing, this claim leaves open whether the antecedent is satisfied. (This strategy is similar to what Havstad and Brown [2017] call “deferral.”) The claim can therefore be justified without appealing to a nonshared evaluative standard about the validity of animal-to-human inference.
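
For readers who want the statistical step made explicit, here is a minimal sketch of how the hedged, quantitative version of the claim might be backed by widely shared standards of statistical inference. The data, sample sizes, and effect sizes are invented for illustration; this is our gloss on the running example, not an analysis from any actual study.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical weeks until a defined disease-progression endpoint in the rodent model;
# all numbers are invented for illustration.
control = rng.normal(loc=20.0, scale=3.0, size=40)
treated = rng.normal(loc=25.0, scale=3.0, size=40)

# Shared evaluative standard: a two-sample (Welch) t-test comparing treatment and control.
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)

# Hedged, quantitative conclusion: an interval estimate of the delay in progression,
# rather than the bare claim that the drug slows the progression of Parkinson's disease.
diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / treated.size + control.var(ddof=1) / control.size)
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

print(f"Welch's t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"Estimated delay in progression: {diff:.1f} weeks (95% CI {ci_low:.1f} to {ci_high:.1f})")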

Quantifying uncertainty in climate science provides another illustration of the role of hedging in fulfilling APR. Climate scientists build complex predictive models that rely on limited and heterogeneous data to predict consequential outcomes such as future sea-level rise, ice sheet melt, and temperature. Different modeling teams make different assumptions, most of them reasonable and defensible, about the appropriate parameter values for each aspect of the model, and they also choose different overall modeling approaches. For example, a long-running disagreement over globally averaged surface temperature during the last six thousand years, nicknamed the “Holocene Temperature Conundrum,” exists because some modelers believe that global mean surface temperature is best estimated using only physical evidence that can provide “proxies” for sea surface temperature, while others prefer a more complex “transient” modeling approach that corrects for known biases in proxy data by incorporating information from physics-based models of ocean and atmospheric processes (Thompson et al. 2022). Because of these different choices, models can produce different predictions (Winsberg 2012, 116–17). Importantly, each choice corresponds to a different implied standard of evaluation, e.g., the principle that modeling estimates of important quantities should depend on physical evidence alone, or that proxies should not be used without correcting for bias. Choosing only one of the resulting predictions as a basis for public policy would seem arbitrary. But merely averaging all available predictions would elide the uncertainty that their spread represents, as well as the substantial variation in standards of evaluation that produced the models.

Instead, as APR might predict, the climate science community devotes tremendous technical and diplomatic resources to producing hedged claims that appropriately quantify the uncertainty represented by model differences. Betz (2013) rightly draws attention to the hedging strategies deployed by the Intergovernmental Panel on Climate Change. But these strategies, rather than ensuring value freedom, represent an attempt to produce claims that are acceptable according to shared standards of evaluation—and therefore are suitable for public justification.

Our argument, then, is this: Some unhedged scientific claims cannot be justified by shared standards of evaluation and therefore are not accessible. Hedging can make at least some of those nonaccessible claims justifiable according to shared standards, rendering them accessible and hence public. Hedging is therefore beneficial because it expands the number of scientific claims available for public justification. Note that this proposal does not require the same degree of hedging as Betz’s ideal. One must hedge extensively to make policy-relevant scientific claims “virtually certain.” Hedges need not go as far if the aim is to make a claim justifiable by appeal to shared standards.

Our argument is modest. It offers only a pro tanto reason in favor of hedging public-facing scientific claims. We recognize that there are difficult cases—situations in which the unhedged version of a policy-relevant scientific claim requires a nonshared evaluative standard for its justification, but hedging would make the claim so complex as to be incomprehensible to nonexperts. There are no perfect options in these situations. Hedging the claim would satisfy the ideal of public justification while preventing important information from reaching the public. Leaving the claim unhedged would furnish policy making with valuable input but violate standards of public justification. We provide no guidance about how to handle such dilemmas. We claim only that avoiding nonshared evaluative standards gives scientists reason to hedge, not that it outweighs all countervailing considerations (footnote 8). Our argument is also compatible with there being other political and nonpolitical reasons to hedge. For instance, consider Pamuk’s (2021) claim that there is “inequality in opportunities for political influence” when some citizens exert disproportionate influence on policy through value-laden scientific choices (50). Perhaps hedging reduces the disproportionate power of a small number of citizens to shape political decisions through science. Scientific hedging would then be additionally justified by its promotion of political equality.

A potential worry about our argument is that it merely passes the buck. Scientists avoid appeal to nonshared evaluative standards by offering hedged claims to the public. But eventually those claims have to be translated into action: Officials have to decide which waste disposal procedures to adopt, whether to move forward with trials of a new drug, or how to regulate agriculture in the name of food safety. If nonpublic reasons are eventually required to resolve such policy questions, hedging has done little to preserve the ideal of public justification. In response, we note that if policy making sometimes seems to require nonpublic reasons, it is a problem for all public reason liberals. Our suggestion to hedge might exacerbate, but does not alone generate, an apparent need for officials to make nonpublic judgments. There are, however, approaches in the literature that seek to dispel this worry about public reason liberalism. Schwartzman (2004) canvasses five political decision-making strategies that “forestall or make unnecessary the need to go beyond the limits of public reason,” even when public justification proves indeterminate on a particular issue (209). These include postponement, randomization, and democratic procedures. For instance, when there is an intractable disagreement about policy, “rather than impose their nonpublic reasons on others, citizens can choose to submit their disputes to various forms of procedural adjudication,” including straightforward majority rule (211). Thus, even if public justification proves incapable of resolving a policy dispute, one need not revert to nonpublic reasons to justify a course of action. Hedging scientific claims contributes to the preservation of public justification by ensuring such disputes are resolved openly in the political arena rather than illicitly in science (footnote 9).

7. Conclusion

This article has considered three possible answers to the question: What is the politically significant difference between hedged claims and their unhedged counterparts? Betz’s answer was that fully hedged claims are “virtually certain,” whereas many unhedged claims are not. We rejected this idea, as well as Betz’s use of hedging to defend the value-free ideal. Another possible answer, suggested by the Rawlsian literature, is that hedged claims achieve scientific consensus. However, consensus is neither necessary nor sufficient for a scientific claim to enter into public justification, and Rawls’s conception of public reason is too narrow to justify hedging. This led us to a third and final proposal: The politically significant feature of hedged scientific claims is that they are less reliant on nonshared standards for their justification than their unhedged counterparts. By rendering scientific claims accessible, hedging expands the stock of claims available for public justification.

Our argument situates science in relation to public justification and contributes to a growing literature on how to manage nonepistemic values in scientific communication (John 2015a; Franco 2017). Although the value-free ideal has been criticized, there is widespread recognition of a tension between expertise and democratic governance (Douglas 2009; Schroeder 2021; Lusk 2021; Pamuk 2021): The use of nonepistemic values in science threatens to give scientific experts illegitimate power in a democratic society that uses scientific findings to make policy. Many authors have responded to this tension by endorsing public involvement in science (Douglas 2005; Brown 2009; Alexandrova 2018). Engaging the public in science, however, is resource- and time-intensive, participants are often nonrepresentative, and many scientific issues are too complex to be understood by nonexperts. With these shortcomings in mind, we suggest that hedging can complement democratization as a strategy for responsible value management. The conflict between democracy and expertise runs deep, but hedging is one important tool at scientists’ disposal for minimizing the tension.

Our liberal defense of hedging has a broad scope, with implications beyond the values-in-science debate (footnote 10). Because standards of evaluation can be both epistemic and nonepistemic, epistemic and nonepistemic factors alike are potential barriers to the accessibility of scientific claims. Our argument captures the political value of hedging for all public-facing claims that are not justified by shared standards, whether epistemic or not. In this respect, our account is more comprehensive than Betz’s, whose focus is on ridding science of nonepistemic values. Our unified treatment of epistemic and nonepistemic disagreements is especially appealing given the well-known difficulty of drawing a sharp distinction between the epistemic and nonepistemic (Rooney 1992; Longino 1996).

The account also highlights the value of hedging beyond the official science-policy interface. Betz’s focus on the IPCC exemplifies a general tendency among philosophers of science to pay most attention to formal science advising. But scientific claims enter the public sphere, and thereby have the potential to ground coercive state policy, through many routes both formal and informal. Our argument for hedging applies to all public-facing scientific claims, not just claims made by individual science advisors or expert committees. If the ideal of public justification bars scientific claims based on nonshared standards from shaping policy, then scientists have reason to hedge whenever their work enters the public sphere.

Acknowledgments

Our thanks to audiences at the 2022 workshop on Values in Science and Political Philosophy at Claremont McKenna College and the McCoy Family Center for Ethics in Society works-in-progress group at Stanford University for their helpful feedback. Thanks also to Liam Kofi Bright, Roger Creel, Mikkel Gerken, Henrik Kugelberg, Arnon Levy, Rune Nyrup, Zeynep Pamuk, and Drew Schroeder for valuable comments and conversation. We are especially grateful to two anonymous reviewers whose constructive and detailed feedback helped make the article much better.

Footnotes

1 We focus here on the version of the argument from inductive risk that concerns the justification of scientific choices (Ward 2021). We think this is the most plausible construal of the argument, and Betz also seems to be thinking about values in their justificatory role. He defines the value-free ideal as the claim that “the justification of scientific findings should not be based on non-epistemic (e.g. moral or political) values” (2013, 207). We also follow the convention of using “values” as shorthand for “nonepistemic values.”

2 Steel (2016) and others have pointed out that a third option is suspension of judgment. This doesn’t undermine the argument from inductive risk, however, because there are still different inductive risks associated with each of scientists’ three options.

3 By “public-facing” claims, we mean the subset of “public scientific testimony” that Gerken (2022) labels “scientific expert testimony.” This is narrower than what Dang and Bright (2021) call “public avowals,” which include claims made for scientific audiences.

4 Stegenga and Menon (2023) sketch a consensus-based argument for hedging that is slightly different from the one articulated in this section, as it aims to establish that there is epistemic rather than political reason to hedge. On their view, scientific knowledge requires consensus about both a claim and the validity of the “epistemic toolkit” by which it was arrived at. They briefly suggest that hedging helps scientists achieve such strong consensus, a constitutive aim of science. We are skeptical of this consensus-based argument, as their view of what is required for scientific knowledge is implausibly strong.

5 Such cases are not counterexamples to all articulations of CSPR. Kappel (2021) formulates CSPR as follows: “Some policy-relevant factual proposition P is part of public reason if and only if there is consensus about P among scientific experts in the relevant well-functioning scientific institutions” (619; our italics). Cases of manufactured consensus or doubt are arguably subversions of well-functioning science.

6 Unlike Vallier (2011, 372), we construe accessibility as both necessary and sufficient for publicity.

7 Badano and Bonotti (2020) argue that Rawls too endorses an accessibility conception of publicity. We are skeptical of this reading, particularly because it seems to conflict with CSPR (as Badano and Bonotti recognize; p. 63), but exegesis of Rawls is not our aim here.

8 One might worry that Betz could make a similar move to escape our earlier criticisms. If his defense of hedging provides only a pro tanto reason in its favor, then he can acknowledge that hedging to the point of virtual certainty would sometimes come at the cost of usefulness to policy makers. In those cases, usefulness should trump preservation of the value-free ideal. This rejoinder falls flat, however, once one recognizes that such cases are not the exception but the norm. Moreover, any justification of hedging (pro tanto or otherwise) that focuses on its ability to achieve certainty misdiagnoses its political significance. Thanks to an anonymous reviewer for pressing this point.

9 Note that we are not claiming that Schwartzman’s strategies are applicable in science. We are rather suggesting that it is reasonable for scientists to decline to make nonpublic judgments, thereby passing the buck to policy makers, because the latter have additional tools at their disposal for making political decisions without relying on nonpublic reasons.

10 Thanks to a reviewer for suggesting this framing.

References

Alexandrova, Anna. 2018. “Can the Science of Well-Being Be Objective?” The British Journal for the Philosophy of Science 69 (2):421–45. https://doi.org/10.1093/bjps/axw027.
Badano, Gabriele. 2022. “Are Numbers Really as Bad as They Seem? A Political Philosophy Perspective.” In Limits of the Numerical, edited by Christopher Newfield, Anna Alexandrova, and Stephen John, 161–78. Chicago: University of Chicago Press. https://doi.org/10.7208/chicago/9780226817163-008.
Badano, Gabriele, and Bonotti, Matteo. 2020. “Rescuing Public Reason Liberalism’s Accessibility Requirement.” Law and Philosophy 39 (1):35–65. https://doi.org/10.1007/s10982-019-09360-8.
Bellolio Badiola, Cristóbal. 2018. “Science as Public Reason: A Restatement.” Res Publica 24 (4):415–32. https://doi.org/10.1007/s11158-018-09410-3.
Bellolio, Cristóbal. 2019. “The Quinean Assumption. The Case for Science as Public Reason.” Social Epistemology 33 (3):205–17. https://doi.org/10.1080/02691728.2019.1599462.
Betz, Gregor. 2013. “In Defence of the Value Free Ideal.” European Journal for Philosophy of Science 3 (2):207–20. https://doi.org/10.1007/s13194-012-0062-x.
Betz, Gregor. 2017. “Why the Argument from Inductive Risk Doesn’t Justify Incorporating Non-Epistemic Values in Scientific Reasoning.” In Current Controversies in Values in Science, edited by Kevin Elliott and Daniel Steel, 94–110. New York: Routledge. https://doi.org/10.4324/9781315639420-7.
Boettcher, James. 2020. “Just Wide Enough: Reidy on Public Reason.” In John Rawls: Debating the Major Questions, edited by Jon Mandle and Sarah Roberts-Cady, 35–50. New York: Oxford University Press. https://doi.org/10.1093/oso/9780190859213.003.0004.
Brown, Mark B. 2009. Science in Democracy: Expertise, Institutions, and Representation. Cambridge, MA: MIT Press.
Centers for Disease Control and Prevention. 2021. “Science Brief: COVID-19 Vaccines and Vaccination.” https://www.cdc.gov/coronavirus/2019-ncov/science/science-briefs/fully-vaccinated-people.html.
Dahlquist, Marcus, and Kugelberg, Henrik D. 2021. “Public Justification and Expert Disagreement over Non-Pharmaceutical Interventions for the COVID-19 Pandemic.” Journal of Medical Ethics 49 (1):9–13. https://doi.org/10.1136/medethics-2021-107671.
Dang, Haixin, and Kofi Bright, Liam. 2021. “Scientific Conclusions Need Not Be Accurate, Justified, or Believed by Their Authors.” Synthese 199:8187–8203. https://doi.org/10.1007/s11229-021-03158-9.
Douglas, Heather. 2005. “Inserting the Public into Science.” In Democratization of Expertise? Exploring Novel Forms of Scientific Advice in Political Decision-Making, edited by Sabine Maasen and Peter Weingart, 153–69. Sociology of the Sciences Yearbook. Dordrecht: Springer Netherlands. https://doi.org/10.1007/1-4020-3754-6_9.
Douglas, Heather. 2009. Science, Policy, and the Value-Free Ideal. Pittsburgh: University of Pittsburgh Press. https://doi.org/10.2307/j.ctt6wrc78.
Franco, Paul L. 2017. “Assertion, Nonepistemic Values, and Scientific Practice.” Philosophy of Science 84 (1):160–80. https://doi.org/10.1086/688939.
Frisch, Mathias. 2020. “Uncertainties, Values, and Climate Targets.” Philosophy of Science 87 (5):979–90. https://doi.org/10.1086/710538.
Galston, William A. 1995. “Two Concepts of Liberalism.” Ethics 105 (3):516–34. https://doi.org/10.1086/293725.
Gaus, Gerald F. 1996. Justificatory Liberalism: An Essay on Epistemology and Political Theory. New York: Oxford University Press. https://doi.org/10.1093/oso/9780195094398.001.0001.
Gerken, Mikkel. 2022. Scientific Testimony: Its Roles in Science and Society. Oxford: Oxford University Press. https://doi.org/10.1093/oso/9780198857273.001.0001.
Greenawalt, Kent. 1994. “On Public Reason.” Chicago-Kent Law Review 69 (3):669–90.
Havstad, Joyce C., and Brown, Matthew J. 2017. “Inductive Risk, Deferred Decisions, and Climate Science Advising.” In Exploring Inductive Risk, edited by Kevin C. Elliott and Ted Richards, 101–23. New York: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780190467715.003.0006.
Hicks, Daniel J. 2018. “Inductive Risk and Regulatory Toxicology: A Comment on de Melo-Martín and Intemann.” Philosophy of Science 85 (1):64–74. https://doi.org/10.1086/694771.
Holman, Bennett, and Elliott, Kevin C. 2018. “The Promise and Perils of Industry-Funded Science.” Philosophy Compass 13 (11):1–14. https://doi.org/10.1111/phc3.12544.
John, Stephen. 2015a. “Inductive Risk and the Contexts of Communication.” Synthese 192 (1):79–96. https://doi.org/10.1007/s11229-014-0554-7.
John, Stephen. 2015b. “The Example of the IPCC Does Not Vindicate the Value Free Ideal: A Reply to Gregor Betz.” European Journal for Philosophy of Science 5 (1):1–13. https://doi.org/10.1007/s13194-014-0095-4.
Jønch-Clausen, Karin, and Kappel, Klemens. 2016. “Scientific Facts and Methods in Public Reason.” Res Publica 22 (2):117–33. https://doi.org/10.1007/s11158-015-9290-1.
Kappel, Klemens. 2021. “Science as Public Reason and the Controversiality Objection.” Res Publica 27 (4):619–39. https://doi.org/10.1007/s11158-021-09503-6.
Kogelmann, Brian, and Stich, Stephen G. W. 2021. “When Public Reason Falls Silent: Liberal Democratic Justification Versus the Administrative State.” In Oxford Studies in Political Philosophy Volume 7, edited by David Sobel, Peter Vallentyne, and Steven Wall, 161–93. Oxford: Oxford University Press. https://doi.org/10.1093/oso/9780192897480.003.0006.
Kuhn, Thomas S. 1977. “Objectivity, Value Judgment, and Theory Choice.” In The Essential Tension: Selected Studies in Scientific Tradition and Change, rev. ed., 320–39. Chicago: University of Chicago Press. https://doi.org/10.7208/chicago/9780226217239.001.0001.
Longino, Helen E. 1996. “Cognitive and Non-Cognitive Values in Science: Rethinking the Dichotomy.” In Feminism, Science, and the Philosophy of Science, edited by Lynn Hankinson Nelson and Jack Nelson, 39–58. Synthese Library 256. Dordrecht: Kluwer Academic Publishers. https://doi.org/10.1007/978-94-009-1742-2_3.
Lusk, Greg. 2021. “Does Democracy Require Value-Neutral Science? Analyzing the Legitimacy of Scientific Information in the Political Sphere.” Studies in History and Philosophy of Science Part A 90 (December):102–10. https://doi.org/10.1016/j.shpsa.2021.08.009.
Magnus, P. D. 2018. “Science, Values, and the Priority of Evidence.” Logos and Episteme 9 (4):413–31. https://doi.org/10.5840/logos-episteme20189433.
McIlroy-Young, Bronwyn, Öberg, Gunilla, and Leopold, Annegaaike. 2021. “The Manufacturing of Consensus: A Struggle for Epistemic Authority in Chemical Risk Evaluation.” Environmental Science & Policy 122 (August):25–34. https://doi.org/10.1016/j.envsci.2021.04.003.
McKinnon, Catriona. 2012. Climate Change and Future Justice: Precaution, Compensation and Triage. New York: Routledge. https://doi.org/10.4324/9780203802205.
Michaels, David. 2008. Doubt Is Their Product: How Industry’s Assault on Science Threatens Your Health. New York: Oxford University Press.
Miller, Boaz. 2013. “When Is Consensus Knowledge Based? Distinguishing Shared Knowledge from Mere Agreement.” Synthese 190 (7):1293–1316. https://doi.org/10.1007/s11229-012-0225-5.
Nyrup, Rune. 2022. “The Limits of Value Transparency in Machine Learning.” Philosophy of Science 89 (5):1054–64. https://doi.org/10.1017/psa.2022.61.
Oreskes, Naomi. 2004. “The Scientific Consensus on Climate Change.” Science 306 (5702):1686. https://doi.org/10.1126/science.1103618.
Oreskes, Naomi, and Conway, Erik M. 2010. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. New York: Bloomsbury Press.
Pamuk, Zeynep. 2021. Politics and Expertise: How to Use Science in a Democratic Society. Princeton, NJ: Princeton University Press.
Quong, Jonathan. 2004. “The Scope of Public Reason.” Political Studies 52 (2):233–50. https://doi.org/10.1111/j.1467-9248.2004.00477.x.
Quong, Jonathan. 2010. Liberalism without Perfection. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199594870.001.0001.
Quong, Jonathan. 2018. “Public Reason.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Spring 2018. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/spr2018/entries/public-reason/.
Rawls, John. 1997. “The Idea of Public Reason Revisited.” The University of Chicago Law Review 64 (3):765–807. https://doi.org/10.2307/1600311.
Rawls, John. 2001. Justice as Fairness: A Restatement. Edited by Erin Kelly. Cambridge, MA: Harvard University Press. https://doi.org/10.2307/j.ctv31xf5v0.
Rawls, John. 2005. Political Liberalism. 2nd ed. New York: Columbia University Press.
Reid, Andrew. 2019. “What Facts Should Be Treated as ‘Fixed’ in Public Justification?” Social Epistemology 33 (6):491–502. https://doi.org/10.1080/02691728.2019.1637965.
Rooney, Phyllis. 1992. “On Values in Science: Is the Epistemic/Non-Epistemic Distinction Useful?” PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1992 (January):13–22. https://doi.org/10.2307/192740.
Rudner, Richard. 1953. “The Scientist Qua Scientist Makes Value Judgments.” Philosophy of Science 20 (1):1–6. https://doi.org/10.2307/185617.
Schroeder, S. Andrew. 2020. “Thinking about Values in Science: Ethical Versus Political Approaches.” Canadian Journal of Philosophy, October, 1–10. https://doi.org/10.1017/can.2020.41.
Schroeder, S. Andrew. 2021. “Democratic Values: A Better Foundation for Public Trust in Science.” The British Journal for the Philosophy of Science 72 (2):545–62. https://doi.org/10.1093/bjps/axz023.
Schwartzman, Micah. 2004. “The Completeness of Public Reason.” Politics, Philosophy & Economics 3 (2):191–220. https://doi.org/10.1177/1470594X04042963.
Steel, Daniel. 2016. “Climate Change and Second-Order Uncertainty: Defending a Generalized, Normative, and Structural Argument from Inductive Risk.” Perspectives on Science 24 (6):696–721. https://doi.org/10.1162/POSC_a_00229.
Stegenga, Jacob, and Menon, Tarun. 2023. “The Difference-to-Inference Model for Values in Science.” Res Philosophica 100 (4):423–47. https://doi.org/10.5840/resphilosophica2023928102.
Thompson, Alexander J., Zhu, Jiang, Poulsen, Christopher J., Tierney, Jessica E., and Skinner, Christopher B. 2022. “Northern Hemisphere Vegetation Change Drives a Holocene Thermal Maximum.” Science Advances 8 (15):eabj6535. https://doi.org/10.1126/sciadv.abj6535.
Torcello, Lawrence. 2011. “The Ethics of Inquiry, Scientific Belief, and Public Discourse.” Public Affairs Quarterly 25 (3):197–215.
Tyndal, Jason. 2019. “Public Reason, Non-Public Reasons, and the Accessibility Requirement.” Canadian Journal of Philosophy 49 (8):1062–82. https://doi.org/10.1080/00455091.2019.1584935.
Vallier, Kevin. 2011. “Convergence and Consensus in Public Reason.” Public Affairs Quarterly 25 (4):261–79.
Vallier, Kevin. 2016. Liberal Politics and Public Faith: Beyond Separation. New York and London: Routledge.
Vallier, Kevin. 2022. “Public Justification.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta and Uri Nodelman, Winter 2022. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/win2022/entries/justification-public/.
Ward, Zina B. 2021. “On Value-Laden Science.” Studies in History and Philosophy of Science Part A 85 (February):54–62. https://doi.org/10.1016/j.shpsa.2020.09.006.
Wenar, Leif. 2021. “John Rawls.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Summer 2021. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/sum2021/entries/rawls/.
Winsberg, Eric. 2012. “Values and Uncertainties in the Predictions of Global Climate Models.” Kennedy Institute of Ethics Journal 22 (2):111–37. https://doi.org/10.1353/ken.2012.0008.
Wong, Baldwin. 2022. “Accessibility, Pluralism, and Honesty: A Defense of the Accessibility Requirement in Public Justification.” Critical Review of International Social and Political Philosophy 25 (2):235–59. https://doi.org/10.1080/13698230.2019.1658480.