1 Introduction
Scientific realists maintain that our best scientific theories are true or approximately true. Footnote 1 They often rely on what has become known as the No-Miracles argument in defending this claim. The argument, usually credited to Putnam (1975, 73), holds that “realism is the only philosophy that doesn’t make the success of science a miracle.” Footnote 2 It is generally agreed that science has been highly successful. The only way of adequately accounting for this success, the realist contends, is through an appeal to the approximate truth of our best scientific theories.
Colin Howson (2000), however, has disputed the validity of the No-Miracles argument, charging that it is fallacious when understood as an argument concerning conditional probabilities. Footnote 3 Taking the titular “miracle” to be an event that is highly improbable, the No-Miracles argument is plausibly formulated as follows: Footnote 4
- NM-1: If a particular scientific theory is not approximately true, its success would be a chance event that is highly unlikely (i.e., a miracle).
- NM-2: If a particular scientific theory is approximately true, its success would be highly likely.
- NM-3: If a particular scientific theory is successful, it is then likely that it is approximately true (following from NM-1 and NM-2).
- NM-4: Our best scientific theories have been successful.
- NM-5: Likely, our best scientific theories are approximately true (following from NM-3 and NM-4). Footnote 5
The first three steps of the argument are naturally interpreted as claims concerning conditional probabilities. For a particular scientific theory, take T to stand for the proposition that it is approximately true, take ¬T to stand for the proposition that it is not approximately true, and take S to stand for the proposition that it is successful. The initial steps of the argument then become as follows:
- NM*-1: P(S|¬T) is very low.
- NM*-2: P(S|T) is very high.
- NM*-3: Therefore, P(T|S) is high. Footnote 6
But, as Howson points out, this subargument commits the base-rate fallacy. Footnote 7
To help illustrate, consider a familiar example involving a disease for which there is a highly reliable test. If a person has the disease and is tested, the test returns a positive result 95 percent of the time and a false negative result 5 percent of the time. If a person doesn’t have the disease and is tested, the test similarly returns a negative result 95 percent of the time and a false positive result 5 percent of the time. The base-rate fallacy is then commonly demonstrated by asking the question, What is the probability that a randomly selected individual has the disease, given that the person has tested positive? In psychological studies, the most frequent response is 95 percent. But this answer is correct only under the assumption that exactly 50 percent of the general population has the disease. Footnote 8 If the rate of the disease in the population is actually much lower, the chance that a person who tests positive has the disease will be lower as well (e.g., less than 2 percent if the base rate of the disease is 0.1 percent). Footnote 9 Without further information concerning the base rate of the disease in the general population, no response to the question can correctly be given.
Howson contends that this same mistake in reasoning is made by proponents of the No-Miracles argument. If the prior probability that a given theory is approximately true is sufficiently low (e.g., P(T) = 0.001), the first two premises of the argument can be true (e.g., P(S|¬T) = 0.05 and P(S|T) = 0.95) and the argument’s subconclusion false (e.g., P(T|S) ≈ 0.019). Footnote 10 Without some further premise concerning the prior probability that a theory is approximately true (i.e., the “base rate” of approximately true theories), the conclusion of the No-Miracles argument is probabilistically unsupported.
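A worked application of Bayes’s theorem shows how Howson’s figures fit together; the sketch below simply plugs in the illustrative values just given:

$$P(T \mid S) = \frac{P(S \mid T)\,P(T)}{P(S \mid T)\,P(T) + P(S \mid \neg T)\,P(\neg T)} = \frac{0.95 \times 0.001}{0.95 \times 0.001 + 0.05 \times 0.999} \approx 0.019.$$

The same arithmetic, with having the disease in place of T and a positive test result in place of S, yields the less than 2 percent figure cited in the diagnostic example above.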
Stathis Psillos (2009) has provided arguably the most prominent defense of the No-Miracles argument in the face of Howson’s charge. Psillos, however, acknowledges that the supplemental premise Howson takes as vital to the argument’s success is not one that the scientific realist can easily provide. Measuring the rate of approximately true theories in the general population of scientific theories is, according to Psillos, infeasible due to a lack of precision concerning the argument’s central individuating concepts (i.e., theory, success, and approximate truth). Appealing to the principle of indifference in supplying the missing premise is considered but quickly abandoned by Psillos—presumably because of the controversial nature of such an objective Bayesian approach. Footnote 11 Howson’s own suggestion that the prior probability be supplied based on one’s subjective credence concerning the approximate truth of a given theory is also rejected by Psillos as contrary to the argument’s objective intent.
Psillos’s admission that he cannot convincingly provide the argument’s purported missing premise is taken by Howson as tantamount to an acknowledgment of defeat. In his brief rejoinder to Psillos, Howson (2013, 211) declares the argument “dead in the water” on probabilistic grounds. What Howson fails to address, however, are Psillos’s more involved suggestions as to how the argument could proceed in the absence of such a premise. As Psillos (2009, 68) puts it, “reasoning is much more complex” than Howson is willing to admit. By appealing to reasoning that appears acceptable in other contexts, he contends that the No-Miracles argument can be convincingly saved as an argument for scientific realism.
In this article, I look more closely at Psillos’s defense of the No-Miracles argument. In particular, I focus on his contention that there are cases in which seemingly relevant base rates can safely be ignored in arguments involving conditional probabilities and that the No-Miracles argument provides one such instance. Psillos’s claim is worth examining in its own right, but also, as I will discuss, elements of his multipronged approach to rescuing the argument presage more recent realist attempts. I conclude that Psillos fails to provide an adequate defense of the No-Miracles argument. Finally, I consider whether the scientific realist might be better served by reformulating the No-Miracles argument as an inference to the best explanation (IBE). Although this approach may allow the realist to bypass Howson’s objection, it would limit both the argument’s audience and the ability of the realist to effectively counter the challenge to scientific realism presented by the Pessimistic Induction.
2 Psillos’s three cases
In defending the No-Miracles argument, Psillos considers three types of cases in which he contends that base-rate information can properly be ignored. In the subsections that follow, I present each of these cases in turn and demonstrate how each fails in providing a convincing rebuttal to Howson’s objection. Psillos’s particular focus is on defending the No-Miracles argument under the assumption that the base rate of approximately true theories is low. Given the first two premises of the argument (i.e., P(S|¬T) is very low and P(S|T) is very high), this is the only scenario in which P(T|S) comes out to be low and thus the only scenario in which the argument’s realist conclusion is at all threatened.
In presenting these cases, Psillos makes extensive use of an example devised by experimental psychologists Tversky and Kahneman (1982):
CAB. There is one eyewitness to a hit-and-run accident involving a taxicab late at night, and that witness reports that the cab she saw was blue. In the city where the accident occurred, 85 percent of the cabs belong to the Green Cab Company, and 15 percent of the cabs belong to the Blue Cab Company. To assess the reliability of the eyewitness, a test is performed in which she is asked to identify blue and green cabs under visual conditions closely resembling those present at the time of the accident. In this test environment, the eyewitness makes correct identifications 80 percent of the time and is mistaken 20 percent of the time.
In Tversky and Kahneman’s study, test subjects were asked to estimate the probability that the witnessed accident involved a blue cab. Seeming to ignore the given base rates, Tversky and Kahneman found that the answer the test subjects typically gave was 80 percent. Using Bayes’s theorem in combination with the information provided in the example, the probabilistically correct answer can be shown to be approximately 41 percent. Footnote 12 Despite the eyewitness’s report that the cab involved in the accident was blue, it is actually more likely that it was green.
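The 41 percent figure follows directly from Bayes’s theorem. Writing B for the proposition that the cab was blue and W for the witness’s report that it was blue (notation introduced here only for the calculation):

$$P(B \mid W) = \frac{P(W \mid B)\,P(B)}{P(W \mid B)\,P(B) + P(W \mid \neg B)\,P(\neg B)} = \frac{0.80 \times 0.15}{0.80 \times 0.15 + 0.20 \times 0.85} = \frac{0.12}{0.29} \approx 0.41.$$

Equivalently, the probability that the cab was in fact green despite the “blue” report is roughly 59 percent, a figure that recurs in the courtroom variant of the example considered in section 2.3.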
2.1 Case 1: Relevant background information
In presenting the first type of case in which base-rate information may safely be ignored, Psillos begins by noting that the witness in the CAB example is likely to have known that there are more green cabs than blue cabs in the city where the accident occurred. This knowledge, according to Psillos, may predispose the witness to identify cabs as green when the perceptual experience is at all ambiguous. Such a disposition is unlikely to show up in the test environment used to estimate rates of error, because in that environment, the witness may reasonably assume that a roughly equal number of green and blue cabs will be presented. Rates of error would then differ in real-world situations when compared to those measured in the test environment. It would actually be a mistake, according to Psillos, for the test subjects in Tversky and Kahneman’s psychological experiment to blindly reason in a probabilistic manner when confronted with the CAB example rather than taking this background information about dispositions into account. “There is a sense in which the subjects commit a fallacy (since they are asked to reason probabilistically but fail to take account of the base-rates), but there is another sense in which they reason correctly because the salient features of the case history can get them closer to the truth” (Psillos 2009, 63).
Psillos takes this observation concerning the CAB example to be relevant to a proper assessment of the No-Miracles argument. Granting that the base rate of approximately true theories is likely to be low, Psillos (2009, 63) suggests that “one might well be predisposed to say that a theory T is false, given its success.” Scientists (who are meant to be analogous to the eyewitness just considered) may be predisposed, based on suspicions that the base rate of approximately true theories is low, to think that any theory under consideration is not likely to be approximately true. Taking this background information into account, “when, then, the eyewitnesses (the scientists, in this case) say that a specific theory T is approximately true (despite that this is unlikely, given the base-rates), they should be trusted—at the expense of the base-rates” (Psillos 2009, 63). As in the CAB example, our knowledge of relevant background information may lead us to properly ignore base rates in considering the No-Miracles argument rather than reaching a conclusion based on the blind application of probabilistic reasoning.
2.1.1 Response to Psillos
The general point that Psillos appears to be making in considering this first case is that a correct conclusion will only be reached using Bayes’s theorem if the probabilities that enter into the calculation are the appropriate ones. In considering the CAB example, Psillos is particularly worried about issues in estimating rates of error for the eyewitness in circumstances similar to those involved in the actual accident. If, owing to changes in relevant dispositions, the false positive and false negative rates are actually lower than given in the original example, the low base rate of blue cabs in the city can be overcome such that the eyewitness’s testimony should actually be trusted. Footnote 13
While Psillos is right to point out this potential issue, it is unclear how it helps his defense of the No-Miracles argument. In drawing the relevant analogy, he seems to be claiming that scientists believe that most scientific theories are not approximately true. For this reason, scientists are not likely to accept a theory as genuinely successful if its status remains at all unclear. But, in effect, this is simply the claim that, owing to the mind-set of most scientists, the rate of false positives is very low in the assessment of the approximate truth of scientific theories. Provided this rate is not zero, however, even a very low rate of false positives (e.g., P(S|¬T) = 0.01) can be overcome by a sufficiently low prior (e.g., P(T) = 0.001) such that the approximate truth of a successful theory is still unlikely (e.g., P(T|S) ≈ 0.09, given the further assumption that P(S|T) = 1). Footnote 14 As Psillos himself concedes, there is no clear way to produce an estimate for the base rate of approximately true theories in the general population. Without such an estimate, there is no way to know if even a very cautious group of scientists will be able to overcome it.
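The arithmetic behind these illustrative values is worth displaying, since it makes clear how quickly a small prior swamps even a very low false positive rate:

$$P(T \mid S) = \frac{1 \times 0.001}{1 \times 0.001 + 0.01 \times 0.999} \approx 0.09.$$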
There does seem to be a second possible interpretation of Psillos’s first case. Psillos (2009, 63) writes that “if we take the base-rates into account, we may get at the correct probability of a theory’s (chosen at random) being approximately true.” Scientists, however, do not just randomly select theories for empirical testing; rather, theories are only subject to testing if they have first been identified as potential candidates by scientists employing what can be broadly characterized as the scientific method. According to this line of thinking, the base rate appropriate to the No-Miracles argument should be the rate of approximately true theories in the limited population of theories selected by scientists, not the rate of approximately true theories in the population of theories in general. In fact, several recent attempts at rescuing the No-Miracles argument take this exact position (Menke 2014; Henderson 2017; Dawid and Hartmann 2018). Henderson (2017, 1295), for instance, writes that “the base rate fallacy allegation relies on an assumption of random sampling of individuals from the population which cannot be made in the case of the no miracles argument.”
The problem with this approach is that we are once again confronted with a base rate that is unknown. Because the very point of the No-Miracles argument is to establish which scientific theories are likely to be approximately true, we are in a poor position to directly assess this new base rate in a noncircular manner. Providing this base rate indirectly, however, seems at least possible. If the rate of success of theories selected using the scientific method were provided (i.e., P(S)), P(T) could be calculated using the law of total probability and already assumed values for P(S|T) and P(S|¬T). Footnote 15 In fact, as Dawid and Hartmann (2018) have recently shown, P(T|S) is guaranteed to be greater than 0.5—the lowest plausible threshold for the success of the No-Miracles argument—if P(S) is more than twice P(S|¬T).
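One way to see the Dawid and Hartmann result is the following sketch, reconstructed here from the law of total probability (their own presentation may differ in detail). Since P(S) = P(S|T)P(T) + P(S|¬T)P(¬T), it follows that P(S|T)P(T) ≥ P(S) − P(S|¬T), and hence

$$P(T \mid S) = \frac{P(S \mid T)\,P(T)}{P(S)} \geq 1 - \frac{P(S \mid \neg T)}{P(S)},$$

which exceeds 0.5 whenever P(S) is more than twice P(S|¬T). The same identity also shows how P(T) could be recovered indirectly, as P(T) = [P(S) − P(S|¬T)] / [P(S|T) − P(S|¬T)], given an estimate of P(S) and the already assumed likelihoods.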
Given the premise that “our best scientific theories have been successful” (i.e., NM-4), it may seem that the No-Miracles argument already presupposes that the rate of success among theories selected by scientists is high. Footnote 16 This, however, is not the case. Because success is generally a prerequisite for a theory to be considered among our best, the very high rate of past success for theories we currently take to be our best can provide little evidence as to their approximate truth. What is needed in support of the No-Miracles argument is the rate of success of all theories selected by scientists for further testing, not the rate only among those theories that did in fact succeed. Notice as well that the meaning of P(S|¬T) has changed in moving to this new reference class. While even the antirealist may be willing to concede that an arbitrarily selected theory that is not approximately true is highly unlikely to succeed, this is far less obvious when considering the limited set of theories produced using the scientific method. There seems to be good reason to think—owing to the elimination of clearly inferior theories, for instance—that P(S|¬T) will at least be higher than in the original case. That the rate of success of theories selected using the scientific method is twice this higher value of P(S|¬T) is far from obvious.
Perhaps estimates for the rates relevant to the No-Miracles argument could be established through historical analysis. Dawid and Hartmann (2018) suggest such a possibility, although they make no claims as to the prospects for its success. Footnote 17 This is a proposal to which I return in the conclusion of this article.
2.2 Case 2: Explanatory considerations
Psillos also uses the CAB example to illustrate a second type of case in which he thinks base-rate information can properly be ignored. Recall that in Tversky and Kahneman’s original study, test subjects ignored the fact that 85 percent of the cabs in the city were green and 15 percent of the cabs in the city were blue when considering whether to trust the eyewitness’s account. As Psillos points out, however, a slight modification to the study produced very different results. In CAUSAL CAB, a second example devised by Tversky and Kahneman (1982, 157), the CAB example is altered to specify that there are an equal number of green and blue cabs in the city, and a new detail is added that 85 percent of the cab-related accidents in the city involve green cabs, whereas 15 percent of the cab-related accidents involve blue cabs. Footnote 18 When presented with the CAUSAL CAB example, test subjects started to factor in the relevant base rates in determining whether a green or blue cab was involved in the witnessed accident.
Psillos (2009) contends that the reason test subjects started incorporating this information into their assessments was not an increased desire to get the probabilities right—they presumably wanted to get the probabilities right when considering the original CAB example as well—but rather that they thought the base-rate information “was causally relevant to the issue at hand” (64). And, as Psillos puts it, “causally relevant information has a better chance to lead to true beliefs” (64). Because the CAUSAL CAB example strongly implies that a given green cab is more likely to cause an accident than a given blue cab, test subjects saw the high base rate of accidents involving green cabs as relevant to the question of the color of the cab responsible for the witnessed accident.
Psillos takes Tversky and Kahneman’s experimental result to provide insight into how we ought to reason in the case of the No-Miracles argument. Taking reasoning about causation essentially to be reasoning about explanation, Psillos suggests that we only ought to use base-rate information in deciding whether a successful theory should be considered approximately true if it is explanatorily relevant to such a determination. The issue then becomes distinguishing explanatorily relevant base rates, such as the ones seen in CAUSAL CAB, from base rates that are merely “incidental” in nature.
Psillos (2009) again focuses on the situation where the base rate of approximately true theories is low. He writes that “if falsity did explain success, then, clearly, the small base-rate for truth would undermine belief in a connection between success and approximate truth” (64). If both a theory not being approximately true and a theory being approximately true provided explanations for success, then the low base rate of approximately true theories would be explanatorily relevant, favoring the former over the latter. However, according to Psillos, this is not the case, and “falsity does not explain success” (64). Because a theory being approximately true is the only explanation available for a theory’s success, the low base rate of approximately true theories is explanatorily irrelevant to determining what should be believed concerning a successful theory. The low base rate of approximately true theories can then safely be ignored.
Psillos (2009) provides two examples to help illustrate this point. In the first example, the likelihood of a successful theory being approximately true is high (e.g., P(T|S) ≈ 0.957) despite the base rate of approximately true theories being low (e.g., P(T) = 0.1, hence P(¬T) = 0.9). Footnote 19 Because the only explanation for a theory’s success is that it is approximately true, the high base rate of not approximately true theories should in no way undermine our belief that a successful theory is approximately true. Psillos contrasts this first example with a second. Here the base rate of approximately true theories is even lower than in the previous example (e.g., P(T) = 0.001, hence P(¬T) = 0.999), and the likelihood of a successful theory being approximately true comes out to be low as well (e.g., P(T|S) ≈ 0.165). Footnote 20 Concerning this second example, Psillos writes that “despite the low base-rate [i.e. P(T)], a certain successful theory may be deemed approximately true. Its posterior probability [i.e. P(T|S)] may be low, but this will be attributed to the rareness of truth and not to any fault of the individual theory” (64). Approximately true theories may be uncommon, but only if a theory is approximately true can its success actually be explained. The low base rate of approximately true theories should, then, not factor into our reasoning, and a successful theory should be taken to be approximately true despite the calculated value of P(T|S) being low.
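For concreteness, the quoted posteriors can be reproduced with likelihoods of P(S|T) = 0.99 and P(S|¬T) = 0.005; these inputs are reconstructed here purely for illustration and may not be exactly those given in the footnotes:

$$\frac{0.99 \times 0.1}{0.99 \times 0.1 + 0.005 \times 0.9} \approx 0.957, \qquad \frac{0.99 \times 0.001}{0.99 \times 0.001 + 0.005 \times 0.999} \approx 0.165.$$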
Because successful theories should be considered approximately true regardless of whether the probability of a successful theory being approximately true is calculated to be high or low, the base rate of approximately true theories is then irrelevant to the conclusion of the No-Miracles argument.
2.2.1 Response to Psillos
Psillos’s argument in considering this second case is problematic for at least two reasons. First, he seems to be confusing the results of a psychological experiment meant to show how humans actually reason with normative claims about how we ought to reason. As Tversky and Kahneman (1982, 156) point out with regard to a related pair of examples, “from a normative standpoint … the causal and the incidental base rates in these examples should have roughly comparable effects.” The fact that test subjects treated explanatory and nonexplanatory base rates differently does not provide convincing evidence for thinking that they ought to be treated differently.
The second problem with Psillos’s view is that it has consequences that are scientifically unacceptable. Consider the disease example discussed earlier, with both the rate of false positives and the rate of false negatives specified to be 5 percent. Now add the further detail that the base rate of the disease in the general population is only 0.1 percent. It seems clear that having the disease explains a positive test result, while not having the disease does not. Footnote 21 Given that the base rate of the disease in the general population is low, Psillos’s method of evaluation would seem to yield the verdict that the base rate involved in this example is not explanatorily relevant. This is analogous to Psillos’s contention that the low base rate of approximately true theories is explanatorily irrelevant because it does not reflect the fact that only approximate truth can explain a theory’s success. Consistent with Psillos’s treatment of the base rates involved in the No-Miracles argument, the low base rate of disease in the general population should then be ignored. But this means that an individual who tests positive should be taken to have the disease despite there being a less than 2 percent chance that this is actually the case. This is surely an assessment that no reputable epidemiologist would accept. If, as realists generally contend, scientific reasoning provides our best guide to truth, we should reject this kind of reasoning when it comes to the No-Miracles argument as well.
Although the specific suggestion Psillos makes for incorporating explanation into the No-Miracles argument does not seem to work, perhaps the second case Psillos considers is meant to gesture more generally at an interpretation of the No-Miracles argument as an IBE. To paraphrase Lipton (2003), the contention would be that we should look to the “loveliest” explanation of the success of science, not just the likeliest. I explore this possibility more fully in this article’s conclusion.
2.3 Case 3: Justice and fairness
Psillos (2009) presents one final type of case in which he thinks it may be proper to ignore base rates. He again returns to the original CAB example, but now adds a wrinkle where this same scenario plays out multiple times. After each accident, the involved victim files a lawsuit against each of the two cab companies that could potentially be involved, with the only evidence provided in each case being the statistical information specified in the example and the testimony of the one eyewitness. If the courts reached their judgments based purely on an application of Bayes’s theorem, the Green Cab Company would be found financially liable in every single one of the cases—despite actually being responsible only 59 percent of the time. According to Psillos, this would be a mistake, and “fairness and justice seem to give us some reason to ignore the base-rates” (65).
Returning to the No-Miracles argument, Psillos (2009) sees the community of scientists as serving a role analogous to the courts in the modified CAB example just considered. Psillos contends that “if scientists acted as the imagined judges above, they would be unfair and unjust to their own theories” (65). If the base rate of approximately true theories were low and exclusively probabilistic reasoning were used, then all successful theories would be deemed not approximately true. According to Psillos, this would be unjust and unfair. This provides at least some reason for taking successful theories to be approximately true, regardless of the actual base rates involved.
2.3.1 Response to Psillos
In evaluating Psillos’s argument, it is worth briefly examining the source of the intuition that the Green Cab Company would be unjustly treated if it were found guilty in every case brought against it. In the Anglo-American tradition, the keystone of criminal justice is that a defendant is “innocent until proven guilty.” Footnote 22 This corresponds to a systematic attempt to lower the rate of false convictions at the expense of allowing some guilty defendants to escape unpunished. Footnote 23 In civil cases, such as the ones involved in Psillos’s example, the closely related “presumed nonculpable” standard is in effect.
What Psillos fails to notice, however, is that this standard should apply in cases brought against the Blue Cab Company as well. If base-rate information were ignored and the eyewitness were simply trusted, it would seem that the Blue Cab Company would be found guilty in each case brought against it, despite being responsible less than half the time. This would certainly be as much of an injustice as the one perpetrated against the Green Cab Company. Considerations of justice and fairness would then indicate that one absolutely should take base-rate information into account in cases involving the Blue Cab Company, with not-guilty verdicts handed down in each instance.
When court cases involving the Blue Cab Company are also considered, it doesn’t appear that the modified CAB example to which Psillos is appealing in support of the No-Miracles argument actually helps his position. Psillos takes blue cabs to correspond to approximately true theories and green cabs to correspond to not approximately true theories. If guilty verdicts should not be handed down in court cases involving either—something that justice and fairness seem to demand—this would correspond to our best scientific theories being judged neither approximately true nor not approximately true. In effect, this is the admission that the No-Miracles argument, when factors of justice and fairness are considered, does not provide a positive argument for scientific realism.
Psillos may well reject this response and claim that this is not the analogy he wants to draw. Instead, he may contend that misjudging an approximately true theory to be not approximately true is an injustice that must be avoided, whereas misjudging a not approximately true theory to be approximately true is no injustice at all. Returning to the CAB example, this is in effect the claim that the Green Cab Company should be presumed nonculpable, while no such assumption, or maybe even the reverse assumption, should be applied in cases involving the Blue Cab Company. This difference in the treatment of the two companies is clearly unjust and unfair. Psillos would need to provide an explanation for why this type of asymmetric treatment is not similarly inappropriate when considering the No-Miracles argument.
In fact, it is highly questionable whether concerns over justice and fairness should impact the No-Miracles argument at all. The type of justice that Psillos seems concerned with in the modified CAB example is compensatory justice. The injustice that Psillos wants to avoid is the harm that would be done to the company owners if they were forced to pay compensation for an accident in which the company was not actually involved. But, in the case of scientific theories, who exactly is harmed? It can’t be the theories themselves, because they aren’t the types of things that are capable of suffering harm. The scientists involved in formulating the theories in the first place, or perhaps those who endorse them as approximately true, seem to be the only individuals who could be damaged by an incorrect judgment as to a theory’s approximate truth. Psillos’s argument would then be that we should judge our current best scientific theories to be approximately true to prevent reputational harm to scientists, or perhaps harm to their self-esteem. But if this is right, then this harm should presumably always be of concern in epistemic matters involving science. For instance, considerations of justice and fairness should factor against the publication of results that disprove an existing scientific theory, because scientists who currently endorse that theory could be harmed. Also, we should be reluctant to propose new theories that might displace old theories, because the reputation and self-worth of the scientists who came up with the current theories may be negatively impacted. But these types of considerations are epistemically inappropriate and downright unscientific. If realists are as concerned with truth as Psillos claims, there would be no reason for considerations of this type to factor into what should be purely epistemic evaluations.
3 Conclusion
Psillos is then unsuccessful in his defense of the No-Miracles argument. His failure highlights the difficulties faced by scientific realists wanting to directly engage with Howson’s probabilistic formulation. Without some further premise concerning the value of P(T), the argument provided is probabilistically invalid and should be rejected. This, however, should not be taken as providing a positive argument for scientific antirealism. Although the inability to establish that P(T) is not very low has been emphasized, no argument in support of the claim that P(T) is very low has been provided.
It is worth considering whether the realist might be better served by attempting to sidestep Howson’s rendering of the No-Miracles argument altogether. Taking the “miracle” referenced in the argument to be an event that is unexplained rather than one that is unlikely, the No-Miracles argument could instead be formulated as an IBE. The conclusion that our best scientific theories are approximately true would be reached based on approximate truth providing the best explanation for their success. This is an interpretation that Howson does not even consider and one that Psillos explicitly endorses elsewhere. Footnote 24
There are two general worries with this approach. First, it is far from clear how explanations should be judged and ranked. Most proponents of this style of argument rely on explanatory virtues that are poorly defined and generally appeal to the actual practice of science in defending these virtues as indicators of truth. However, this appeal to science seems to rely on the assumption that science actually produces explanations that are true—the very issue the No-Miracles argument is meant to settle. Footnote 25 Second, if the mechanism for picking the best explanation is made explicit, there are two possibilities. If the resulting No-Miracles argument does not respect the rules for Bayesian updating, it will be subject to a diachronic Dutch book argument. Footnote 26 Alternatively, if the resulting No-Miracles argument does fit within the Bayesian framework, explanatory considerations would have to be used to justify why P(T) should not be taken to be too low. Footnote 27 It is unclear how this could be done in a way that will be generally accepted.
Given these issues, many philosophers—antirealists chief among them—will dismiss the suggestion that a probabilistically unsupported conclusion should be accepted based on explanatory considerations alone. Attempts to bypass Howson’s formulation of the argument will then result in a version of the No-Miracles argument that is unconvincing to the scientific antirealist. There is also a price proponents of an IBE-based No-Miracles argument will have to pay when it comes to overcoming the challenge to scientific realism presented by the Pessimistic Induction.
The Pessimistic Induction, generally credited to Laudan (1981), is rooted in the claim that many scientific theories that have historically been considered successful are no longer taken to be among our best. Footnote 28 Because these past theories frequently posit entities—phlogiston, for instance—that scientific realists and antirealists alike no longer take to exist, there is good reason to regard many of these theories as not approximately true. Footnote 29 Laudan goes so far as to claim that “for every highly successful theory in the past of science which we now believe to be a genuinely referring theory, one could find half a dozen once successful theories which we now regard as substantially non-referring” (35). The success of these now-rejected theories is just as in need of explanation as the success of the scientific theories we now take to be among our best. The problem for the realist is that any viable explanation provided would seem to involve success being an unreliable indicator of a theory’s approximate truth, rather than a reliable one. As Laudan puts it, “realists have no explanation whatever for the fact that many theories which are not approximately true and whose ‘theoretical’ terms seemingly do not refer are nonetheless often successful” (47).
Contrary to Laudan’s contention, an account of the historical success of not approximately true theories could be offered on behalf of the scientific realist. Laudan’s claim that we should regard most historically successful scientific theories as not approximately true reduces essentially to the claim that P(T|S) has historically been low. However, as Lewis (2001) points out, a low value of P(T|S) does not necessarily indicate that success is an unreliable indicator of a theory’s approximate truth. Footnote 30 Reaching such a conclusion would once again involve committing the base-rate fallacy. The earlier discussed disease example illustrates this point. If the base rate of the disease in the general population is sufficiently low (e.g., 0.1 percent), the probability that an individual who tests positive has the disease can be low (e.g., less than 2 percent) despite the test for the disease being highly reliable (e.g., a false positive rate of 5 percent and a false negative rate of 5 percent). Analogously, if the base rate of approximately true theories in the relevant population of scientific theories is sufficiently low, the probability that a successful theory is approximately true can be low (i.e., P(T|S) is low), despite success being a reliable indicator of a theory’s approximate truth—in that a theory that is approximately true is likely to be successful and a theory that is not approximately true is likely not to be successful.
Laudan’s contention that P(T|S) has been low historically is then perfectly compatible with success being a reliable indicator of a theory’s approximate truth. It is also compatible with scientific realism. According to the convergent realist, science should not necessarily be regarded as having been historically successful in selecting theories that are approximately true. Owing to general improvements in the methodologies employed by science, however, science has been getting progressively better at this task. Although the base rate of approximately true theories in the population of theories selected by science may well be very low historically, this rate is higher for theories selected more recently. The success of our current best scientific theories may then indicate that they are likely to be approximately true.
Consider a slight modification to the disease example. Assume that, having carefully examined the age and genetic background of individuals who went on to exhibit the full range of symptoms characteristic of a disease, epidemiologists were to establish that the relatively rare disease for which a test was designed disproportionately targeted children of Scandinavian descent. While it may be the case that a randomly selected individual from the general population who tests positive is unlikely to have the disease, it could still be likely that a randomly selected Scandinavian child who tests positive has the disease. By limiting testing only to Scandinavian children, the low base rate of the disease in the general population could effectively be overcome. Likewise, by only considering the success of theories selected using the methodologies employed by science more recently, the low base rate of approximately true theories in the general population of theories selected by science historically could be overcome.
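A quick calculation, using an illustrative base rate for the targeted subpopulation (no figure is given in the example itself), makes the point concrete. If the disease occurred in, say, 10 percent of Scandinavian children, then with the same 5 percent error rates,

$$P(\text{disease} \mid \text{positive test}) = \frac{0.95 \times 0.10}{0.95 \times 0.10 + 0.05 \times 0.90} \approx 0.68,$$

as compared with less than 2 percent for a randomly selected member of the general population.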
Although this account of the success of theories we no longer regard as among our best seems to provide an effective response to the Pessimistic Induction on behalf of the scientific realist, it is not a response that realists who reject the probabilistic No-Miracles argument in favor of an IBE-based version should accept. The response provided appeals to the low base rate of approximately true theories in explaining how, despite success being a reliable indicator of approximate truth, theories that are not approximately true could be successful. This, however, seems to be the exact explanation of success proponents of an IBE-based No-Miracles argument would have to reject to reach the conclusion that the approximate truth of our current best scientific theories provides the best explanation for their success. If there is to be no appeal to “miracles” in explaining the success of our current best scientific theories, it would seem this same standard should apply when it comes to explaining the historical success of scientific theories we should no longer take to be approximately true. A further cost of rejecting the probabilistic formulation of the No-Miracles argument is that the very strategy Howson uses to show why the No-Miracles argument’s realist conclusion should be rejected seems no longer to be available as an effective response to the Pessimistic Induction.
The scientific realist is, then, faced with a dilemma. Viewed probabilistically, both the Pessimistic Induction and the No-Miracles argument commit the base-rate fallacy and should be rejected. Recast in explanatory terms, the No-Miracles argument appears more tenable—at least for those who accept IBE as a method of inference—but so, too, does the Pessimistic Induction. Neither approach provides a clear win for the scientific realist.
There is a final approach to settling the scientific realism/antirealism debate worth considering. The key premise on which the Pessimistic Induction is based is the claim that the rate of not approximately true theories among theories that have been successful (i.e., P(¬T|S)) has historically been high. If, through historical analysis, this rate could be convincingly shown actually to be much lower than Laudan (1981) contends, the Pessimistic Induction would, even under the explanatory reading, no longer serve as a credible threat to scientific realism. Footnote 31 This suggestion is similar to one presented earlier in this article. If, through historical analysis, the rates relevant to the probabilistic No-Miracles argument could similarly be established, the charge that the No-Miracles argument commits the base-rate fallacy would be avoided, and the argument could potentially go through. Footnote 32
The problems associated with these suggestions are the same. As Psillos (2009) points out, the central individuating concepts on which both the Pessimistic Induction and the No-Miracles argument rely—theory, success, and approximate truth—are vaguely defined, and the prospects for reaching consensus when evaluating all but the most clear-cut cases appear dim. As an example, consider the nineteenth-century wave theory of light. Menke (2014) has recently argued that because this theory exhibited multiple instances of success while no contemporary rival theory exhibited any, the base rate of approximately true theories among optical theories in the nineteenth century should be taken as high enough for the No-Miracles argument to go through in this domain. Laudan (1981), however, takes this same theory to be not approximately true because it posits an optical ether—an entity Laudan contends realists and antirealists alike should no longer take to exist. This same theory is then used by some to bolster the Pessimistic Induction and by others to provide a clear case in which the No-Miracles argument goes through. It is unclear how to adjudicate this dispute in a way that does not involve simply begging the question. An additional worry in producing estimates for the rates relevant to the two arguments is that the historical records on which such estimates would have to be based are far from complete. Even more concerning, gaps in these records are certainly not random, and any estimates made using the data available are likely to be biased in various ways. There is no clear way to ascertain the degree of these biases or to accurately correct for them.
To be certain, there are historical rates of success and approximate truth that would allow the No-Miracles argument to go through and the Pessimistic Induction to fail. Likewise, there are rates that could be used to provide a convincing argument for scientific antirealism. Producing estimates for these rates may in fact be the best option available in attempting to settle the scientific realism/antirealism debate. What has yet to be shown is how to produce these estimates in a way that most will find convincing.
Acknowledgments
I am particularly grateful to Thomas Barrett for his extensive comments on various drafts of this article. The help of Jeff Barrett, Craig Callender, and Kyle Stanford in situating the Howson–Psillos debate in the literature more broadly is also deeply appreciated. Finally, I thank Jon Charry, Kevin Falvey, Dan Korman, Alex LeBrun, Aaron Zimmerman, and two anonymous referees for their valuable comments and suggestions.