
A Defense of Explanationism against Recent Objections

Published online by Cambridge University Press:  04 September 2023

Tomas Bogardus (Pepperdine University, Malibu, CA 90263-3999, USA)
Will Perrin (Georgetown University, Washington, DC 20057, USA)

Corresponding author: Tomas Bogardus; Email: [email protected]

Abstract

In the recent literature on the nature of knowledge, a rivalry has emerged between modalism and explanationism. According to modalism, knowledge requires that our beliefs track the truth across some appropriate set of possible worlds. Modalists tend to focus on two modal conditions: sensitivity and safety. According to explanationism, knowledge requires only that beliefs bear the right sort of explanatory relation to the truth. In slogan form: knowledge is believing something because it's true. In this paper, we aim to vindicate explanationism from some recent objections offered by Gualtiero Piccinini, Dario Mortini, and Kenneth Boyce and Andrew Moon. Together, these authors present five purported counterexamples to the sufficiency of the explanationist analysis for knowledge. In addition, Mortini devises a clever argument that explanationism entails the violation of a plausible closure principle on knowledge. We will argue that explanationism is innocent of all these charges against it, and we hope that the strength of the defense we offer of explanationism is evidence in its favor, and a reason to investigate explanationism further as the long-elusive truth about the nature of knowledge.

This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

1. Introduction

There's an ongoing rivalry in epistemology between two opposing views of knowledge, modalism and explanationism. According to modalism, knowledge requires that our beliefs track the truth across some appropriate set of possible worlds.[1] Modalists have focused on two modal conditions. Sensitivity concerns whether your beliefs would have tracked the truth had it been different.[2] A rough test for sensitivity asks, "If your belief had been false, would you still have held it?" Safety, on the other hand, concerns whether our beliefs track the truth at "nearby" (i.e. "similar") worlds.[3] A rough test for safety asks, "If you were to form your belief in this way, would you believe truly?"
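
These two conditions are often displayed with counterfactual conditionals. The following notation is our addition, in the spirit of Nozick's and Sosa's formulations rather than a quotation from any author discussed here; B(p) abbreviates "you believe that p," and □→ is the counterfactual conditional:

Sensitivity: ¬p □→ ¬B(p) (if p had been false, you would not have believed p)

Safety: B(p) □→ p (if you had believed p, via the same method, p would have been true)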

According to explanationism, by contrast, knowledge requires only that beliefs bear the right sort of explanatory relation to the truth. In slogan form: knowledge is believing something because it's true.[4] Less roughly, explanationism says knowledge requires only that truth play a crucial role in the explanation of your belief.[5] A crucial role may be understood in terms of difference-making (cf. Strevens' (2011) "kairetic test" for difference-making in scientific explanation).[6] One motivation for explanationism is the observation that debunking arguments across philosophy – against moral realism, religious belief, color realism, mathematical Platonism, dualist intuitions in the philosophy of mind, and so on – have a certain commonality.[7] To undermine some belief, philosophers often begin something like this: "You just believe that because…," and then they continue by citing an explanation that does not feature the truth of this belief. Our evident faith in the power of such considerations to undermine a belief suggests that we take knowledge to require that a belief be held because it's true, and not for some other reason independent of truth. Explanationism agrees. Consider an ordinary case of visual perception, for example, the belief that there's a computer in front of you, formed on the basis of visual perception under normal conditions. Your believing this is straightforwardly explained in part by the fact that there is a computer in front of you: you believe there's a computer in front of you because that's how it seems to you, and that's how it seems to you because there is a computer in front of you. It's true that there's more we could (and perhaps even should) add to the explanation – facts about the lighting, distance, functioning of your visual system, your visual experience, etc. – but the truth of your belief would remain a crucial part of this explanation. So, explanationism tells us this is a case of knowledge, as it should.[8]

Though modalism has dominated over the last few decades, every kingdom has a grave. In this paper, we would like to respond to recent objections to explanationism from Gualtiero Piccinini, Dario Mortini, and Kenneth Boyce and Andrew Moon.

2. Piccinini's objections

Gualtiero Piccinini (2022: 411) offers three objections to explanationism. First, he says, "Explanationism is a step in the right direction but it's inadequate because a belief being explained by its truthmaker is not sufficient for knowledge. One reason is that other important factors, besides its truthmaker, are needed to explain a belief that amounts to knowledge (or better, as I say, other factors must be in place to ground a belief in its truthmaker)." What are these other factors? Piccinini (ibid., 411) concludes his objections by saying explanationism is "inadequate, and the FGB account explains why." The "FGB account" is his preferred account of knowledge, an account on which, as he puts it (ibid., 403), "knowledge is factually grounded belief." So, as things look to Piccinini, a subject may believe that p because it's true, and yet not know that p, since the belief may lack something required for knowledge: being factually grounded. And what is factual grounding? Piccinini says (ibid., 407) that this grounding is "a relation that involves a causal connection between a belief and its truthmaker…" And later (ibid., 410) he says, "a belief being factually grounded entails that there is a causal connection (in Goldman's sense) between the belief and the facts."

So Piccinini's view is that believing something because it's true, all by itself, is not sufficient for knowledge. And part of his motivation for thinking so is that it seems to him that only when the belief is also factually grounded will it suffice for knowledge. Our first reply to Piccinini aims to undercut this seeming necessity of factually grounded belief for knowledge. By undercutting this, we hope to demotivate Piccinini's objections to some degree. We can see that knowledge does not require factually grounded belief by considering cases of knowledge in which there is no causal connection between the belief and the facts, between the belief and the truthmaker. You may agree that this happens in cases of knowledge of causally inert abstracta. Or, depending on your view of rational intuition, you may think causation is not involved in our direct, unmediated knowledge of important propositions of mathematics, logic, morality, and the like. Finally, you may worry about our knowledge of the future, given a causal condition on knowledge, unless you're happy to countenance backwards causation. So, we conclude that at the very least, it's not so clear that knowledge requires factually grounded belief. Since this was a substantial part of the motivation for Piccinini's objections to explanationism, we conclude that those objections are not as compelling as they may first appear.

Citing our previous paper (Bogardus and Perrin 2020) among others, Piccinini offers a further objection, in the form of a concrete counterexample meant to show that believing something because it's true is not sufficient for knowledge. He says (2022: 411), "Another reason is that a truthmaker may explain a belief it makes true even though that belief fails to be knowledge. Consider the belief that something is a barn formed by looking at the one real barn in Fake Barn County. The truthmaker explains the belief and yet the belief fails to be knowledge."

However, there is a conspicuous response to this very sort of objection in that previous paper of ours (2020: 193), so it's surprising that Piccinini did not discuss it. In that paper, we acknowledged that, among epistemologists, intuitions about Fake Barn Country are not unanimous. So, it would be a virtue of an analysis of knowledge if it could help explain why. In the good case, when you're looking at a real barn, you believe there's a barn before you because it looks like there's a barn before you, and it looks like there's a barn before you because there is a barn before you. Explanationism rules that this is knowledge, as it should. But, we said,

…as the fake barns proliferate, the second link in the explanatory chain becomes less plausible. Even if your eyes happen to fall upon a real barn in a forest of fakes, we might begin to think it false that it looks like there's a barn before you because there is a barn before you. As the barn façades proliferate, a rival explanation looms into view: that it looks like there's a barn before you because you're in a region full of structures that look like barns. In other words, it becomes plausible to say that you believe there's a barn before you because it looks like there's a barn before you, and it looks like there's a barn before you because you're in Fake Barn Country.

And we continued (ibid.), in a parenthetical remark:

In Fake Barn Country, it's common to be appeared to barn-ly, and more common the more barn facades there are. In that case, we can – and perhaps we should – explain your belief by citing your presence among a multitude of things that look like barns. Given all those nearby structures, it seems that particular structure you happened to see – and the fact that it happened to be a real barn rather than a façade – plays no crucial role in the explanation of your belief.[9] Compare: While driving, a contractor's truck drops a large number of sharp objects – nails and screws – all over the road. The car following behind the truck gets a flat tire because it ran through this mess. Meditate for a moment on the suitability of this explanation. Now, while it may be true that one particular sharp object – a nail, let's say – punctured the tire, it's unnecessary to cite that particular object, or the fact that it was a nail rather than a screw, in order to explain the puncture, given all the other sharp objects nearby that nail, poised to puncture the tire in its place. All that figures crucially into the explanation of the punctured tire is the prevalence of these sharp objects. Perhaps the same goes with Fake Barn Country. What figures crucially into the explanation of your belief that there's a barn is the prevalence of objects that look like barns.

And, in that case, there would be no knowledge. It may well be that epistemologists who judge that you don't know that there's a barn before you in Fake Barn Country are those to whom this rival explanation seems plausible, and those to whom this rival explanation does not seem plausible judge otherwise. So, it looks as though explanationism does better than give us a plausible verdict; it offers us a diagnosis and explanation of the differing intuitions that epistemologists have about this case. And no matter which group of epistemologists is right, explanationism gives the correct verdict. Piccinini offers no response at all to this.

Finally, Piccinini presents a third objection, in the form of another purported counterexample to explanationism's claim that believing something because it's true is sufficient for knowledge. It goes like this (2022: 411):

Or consider a situation in which agent A shares their knowledge that p (e.g., “the rat is on the vat”) with agent B, agent B mishears and whispers q to C (e.g., “the cat is on the mat”), but agent C mishears in the opposite direction and comes to truly believe that p. By the light of explanationism, p explains C's true belief that p, but C does not know that p. The reason is that the explanatory chain explains C's true belief while failing to ground it in its truthmaker.

We believe this case will receive a treatment directly analogous to the case of the epistemically serendipitous lesion, which we also discussed in our previous paper (2020: 191–2). It's unlikely, but possible, that agents B and C mishear things in such a way that they would always fortuitously reverse each other's mistakes. But, if that is the case, then receiving testimony in this way would in fact be a reliable guide to the truth. It would be a bit like a process of photography whereby a scene is transferred onto film with reversed tones as a negative, and then the negative image has its tones reversed again when it is developed, resulting in a faithful representation of the scene. One can know, on the basis of the final, developed film, what the original scene was like, despite this double-reversal of tones involved in the photographic process. If something like that is the case with agents B and C – which, admittedly, would be rather bizarre, even by the standards of philosophical thought experiments – then while the truth of agent C's belief figures into the explanation of why he holds it, explanationism gives the right result: this is knowledge.

Alternatively, agents B and C mishear things quite at random, and there's far from any guarantee that their mistakes will reverse each other in a way that will preserve truth. But, if that's what's happening, then agent C believes that the rat is on the vat not because of the particular content of what he heard from agent B, but because he heard something or other, and scrambled it. And agent B said “the cat is on the mat” because that's what he heard, but that's what he heard not because agent A said “the rat is on the vat,” but rather because agent A said something or other, which agent B scrambled. While agent A did say something because he wished to report that a rat is on the vat, the particular content of his belief is not a “difference-maker” in this chain of explanation. It doesn't really matter what exactly he wished to report, only that he initiated this chain of events by reporting something or other. If the scrambling on the part of agents B and C is random, as we're now considering, then it's just a fluke that agent C ended up believing something that matched the original input from agent A. It was just as likely that agent C would have ended up with that belief had agent A said something else entirely. So, agent C's belief that the rat is on the vat is not held because it's true. It's held because agent A initiated a process of random scrambling, which by the sheerest coincidence happened to result in this belief on the part of agent C.

There are also possibilities in between. Perhaps agents B and C do not scramble what they hear completely at random, and perhaps they don't reliably reverse each other's errors so as to invariably produce a faithful report in the end. Perhaps they corrupt what they hear only to some degree, and in the particular case Piccinini describes, the process merely happened to preserve the original message faithfully. If so, we believe the thing to say is this: the closer the corruption process of agents B and C is to random, the clearer it is that there's no knowledge here, according to explanationism. And that accords with our intuitions. The closer the corruption process of agents B and C is to the reliable double-reversal process, the clearer it is that there is indeed knowledge here, according to explanationism. And that too accords with our intuitions. There will be a sliding scale in between these two extremes. But, anywhere you go on that scale, explanationism delivers verdicts that accord with our intuitions. So, we conclude that Piccinini's third objection is like the first two: it fails to cast doubt on explanationism.[10]

3. Mortini's objections

We turn now to a recent paper from Dario Mortini (forthcoming), who presents two objections to explanationism. First, Mortini says (ibid., 5), "the explanationist analysis… falls prey to Gettier-style cases." This is the case Mortini has in mind:

DEFECTIVE STOPPED CLOCK

Russell takes a competent reading from a clock that he knows to be reliable and has no reason to think is currently not working. Based on this reading, he forms the belief that it's 8:22 pm. What is more, it is 8:22 pm and the clock correctly reads 8:22 pm. There is, however, a twist to the story: in virtue of an undetected manufacturing defect, the clock is designed to stop at exactly 8:22 pm, which is also when Russell happens to look at it. It's 8:22 pm, the clock stops at 8:22 pm, and Russell truly believes that it's 8:22 pm.

Mortini continues, saying, “Russell may have a justified true belief that it's 8:22 pm, but he doesn't know it. The reason why Russell fails to know is the uncontroversial assumption that one can't know the time from a stopped clock regardless of when exactly the clock happens to stop.” And the problem, according to Mortini, is that explanationism entails that Russell does know it's 8:22 pm: “Russell believes that it's 8.22 pm because the clock reads 8:22 pm. Unlike the previous version of this Gettier case, the clock reads 8:22 pm because it is 8:22 pm. Nevertheless, the clock also stops because it is 8:22 pm, and one hardly comes to know the time by consulting a stopped clock.”

We agree that one cannot know the time on the basis of checking a clock that has stopped. Once a clock has stopped, it no longer reads as it does because that reading is true. It reads as it does because that reading was true, when it stopped. But the case Mortini describes is subtly – but importantly – different. In this case, Russell checks the clock at the very moment the clock is stopping. And, we say, this is the last possible moment at which Russell could gain knowledge from that clock, because this is the last possible moment at which the clock reads as it does because that reading is true. One fraction of a second later, and the clock would become a clock that is no longer stopping, but which in fact has stopped. And, from then on, it wouldn't read as it does because that reading is true. So, while we agree with Mortini that “one hardly comes to know the time by consulting a stopped clock,” we think this is true only with respect to clocks that have stopped, not with respect to clocks that are stopping. And Russell's clock is merely stopping; it hasn't yet stopped.

Mortini has a second objection (ibid., 8–9), which is considerably more complex. Mortini alleges that, in Fake Barn Country, explanationism entails that the subject – call him “Barney” – can know that this is a barn. And that's because, as Mortini says, “Barney truly believes that that very object on the hill is a barn because that very object looks like a barn and it's true that that very object is a barn.” This sort of case is “structurally identical to ordinary cases of knowledgeable perceptual beliefs,” he says (ibid., 8), and therefore “a no-knowledge verdict in the de re version of FAKE BARN would open the door to a significant (and hence disturbing) sceptical threat.”

But, Mortini continues, explanationism entails that Barney cannot know that there's a barn on the hill. And that's because, according to Mortini, “Barney truly believes that there's a barn on the hill not because it's true that there's a barn on the hill, but because many other objects in that portion of the environment look like barns.” In other words, Mortini accepts the diagnosis of the no-knowledge intuition in Fake Barn Country that we offered on behalf of explanationism in our 2020 paper, and which we quoted at length above.

So, according to Mortini, things are a bit awkward for Barney according to explanationism: he can know that this is a barn on the hill, but not that there is a barn on the hill. So, Mortini thinks, explanationism is forced to present Barney as a counterexample to a plausible closure principle, namely:

If one knows P and competently deduces Q from P, thereby coming to believe Q while retaining one's knowledge that P, then one comes to know that Q.
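
In schematic form (our rendering; Mortini states the principle only in prose), with K(·) for knowledge and CD(P, Q) for "one competently deduces Q from P, thereby coming to believe Q while retaining one's knowledge that P":

[K(P) ∧ CD(P, Q)] ⟹ K(Q)

The competent-deduction conjunct CD(P, Q) is doing real work in the antecedent, and we will return to it below.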

Here, in the end, is the deadly sin of explanationism, according to Mortini: “Barney knows (de re) that that very object on the hill is a barn, he competently deduces that since that object is a barn then there's a barn on the hill but, according to Bogardus and Perrin, he fails to know (de dicto) that there's a barn on the hill.”

We have two responses that we believe will absolve explanationism. First, as we said above in response to Piccinini, explanationism is somewhat ambiguous about its verdict in Fake Barn Country cases. Whether the subject knows the relevant barn proposition depends on what exactly the explanation of his belief is, and that itself is unclear. Can I know the relevant barn proposition in Fake Barn Country, while looking at a real barn? On the one hand, what's to stop me? I'm standing before a real barn at a good distance in good light, and so on. Such a case does seem "structurally identical to ordinary cases of knowledgeable perceptual beliefs," as Mortini says. On the other hand, there are those nearby fakes, and as we proliferate the fakes and bring them closer to the real barn I happen to see, they do seem to pose an obstacle to my knowing the relevant barn proposition.[11] And explanationism has a nice answer why that is: it starts looking as though I believe the relevant barn proposition not because it's true, but because I'm in Fake Barn Country, surrounded by fake barns (see above).

All this to say, we think this is the case with respect to both the de re and also the de dicto barn beliefs that Mortini considers. Even with respect to the belief that this very object is a barn, it becomes less clear that Barney can know even this proposition, at least once we start threatening his epistemic position with nearby barn façades. Those nearby barn façades make the structure of the case importantly different from “ordinary cases of knowledgeable perceptual beliefs.” Indeed, that's precisely the point of Fake Barn Country cases. So, we believe explanationism may well entail that Barney does not know, of that particular barn, that it is a barn. He may not know that de re proposition that Mortini highlights. But, as we say above, we think this is a virtue, since the intuitions of epistemologists seem to be divided on such questions in Fake Barn Country. Surely, we think, it's a virtue of any theory of knowledge that it does not provide clear verdicts on unclear cases.

But even if explanationism does entail that Barney knows that this is a barn on the hill without knowing that there is a barn on the hill, we deny that this entails the falsity of the plausible closure principle Mortini mentions. And that's because, as Mortini describes it, the closure principle requires that Barney competently deduce that there's a barn on the hill from the proposition that this is a barn on the hill. And if he were to competently deduce that there's a barn on the hill from his knowledge that this is a barn on the hill, then, as we explained previously (2020: 190), Barney can indeed come to know that there's a barn on the hill, via competent deduction. At most, Mortini has shown that, according to explanationism, Barney cannot know that there's a barn on the hill via perception. But if he deduces it from his knowledge that this is a barn on the hill, explanationism may well give this belief its imprimatur as knowledge. And, in that case, we'd have here no violation of Mortini's closure principle.[12]

So, in summary, Mortini's objection requires that, according to explanationism, Barney knows that p (this is a barn on the hill), yet he doesn't know that q (there's a barn on the hill), and this violates a plausible closure principle. We don't think it's clear that explanationism entails that Barney knows that p (or that he doesn't know that q), and we believe that this is a virtue of the theory. And, even if explanationism had these entailments, this doesn't entail the falsity of Mortini's closure principle, due to the competent deduction requirement in that principle.

Before we move on, though, let us quickly raise one concern for the view that Mortini proposes as superior to explanationism. Mortini (ibid., 10) endorses this version of the safety condition:

In most or all close possible worlds in which S believes that p via the same method of belief formation M that S uses in the actual world and S occupies the same environment E that S occupies in the actual world, p is true.
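
In schematic form (our gloss; the notation is not Mortini's own), where M is S's actual method of belief formation, E is S's actual environment, and w ranges over close possible worlds:

for most (or all) close w: [S believes p via M in w ∧ S occupies E in w] ⟹ p is true in w

Both M and E are held fixed across the worlds at which the condition is evaluated; only the remaining circumstances are allowed to vary.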

The addition of "the same environment" is meant to answer proposed counterexamples to the safety condition on knowledge, such as ATOMIC CLOCK (cf. Bogardus 2014). In that case, a very reliable atomic clock is imperiled by a nearby radioactive isotope, which is due to decay at any moment, and will stop the clock when it decays. The worry for the safety condition on knowledge is that, so long as the isotope hasn't decayed yet, and the clock remains in perfect working order, it seems that a subject can know the time by checking the clock, despite the fact that, given the nearby threat, the clock is no longer safe. (It could easily fail.)

Mortini's solution (ibid., 11) goes like this:

In ATOMIC CLOCK, the error-possibility requires a change in the environment: the isotope has to decay. If we keep both method and environment fixed and we focus on the worlds where the subject reads from the atomic clock and the isotope doesn't decay, then the subject continues forming true beliefs. The safety condition is satisfied: accordingly, the knowledge verdict is aptly captured.[13]

Here's a concern about the notion that these nearby error possibilities involve changes of environment, and should therefore be ignored. Suppose you suddenly develop the power to teleport around the world, but you can't yet control this power, so that you find yourself rapidly cycling through environments. First you teleport to a forest, then to the Louvre, then to a bus station bathroom, and so on, very quickly. To pass the time, you decide to try your best to discern which environment you're in as you snap your fingers. Try as you might, you're very bad at this, since the environments are changing so rapidly. Yet you persist. “Waffle House!” you shout, as the environments cycle before your eyes. You sincerely believe you were right, and that you were in a Waffle House a moment ago.

Suppose that, as luck would have it, you were right, by chance. You were indeed in a Waffle House. Was that knowledge? We think the answer is clearly "no." And we presume that the safety theorist should like to explain your failure to know by a failure of the safety condition: clearly, your whole approach to the question of where you are is unsafe, unreliable, and so on. Easily might you have believed falsely in this situation, as Sosa (1999a: 142) would put it. But, if Mortini is right that we can ignore nearby error possibilities where the environment is different, we get the surprising result that your belief was formed safely. And that's because: in most or all close possible worlds in which you believe that you're in a Waffle House via the same method of belief formation (namely, vision) that you use in the actual world, and you occupy the same environment that you occupy in the actual world (namely, a Waffle House), your belief that you're in a Waffle House is true. Notice that it's the very same-environment provision that Mortini added to the safety condition to try to avoid ATOMIC CLOCK that gets the view into trouble here.[14] We conclude, then, that more work needs to be done to rescue the safety condition – and modalism more generally – from counterexamples that have been presented against it.

4. Boyce and Moon's objection

Kenneth Boyce and Andrew Moon (forthcoming) propose the following counterexample to explanationism:

HOLOPROJECTOR

Micha sees what appears to be a vase sitting on a pedestal. As it happens, the pedestal is really a holographic projector, and there is no vase on top of it. Rather, what Micha is seeing is merely a realistic holographic projection. Micha, who is ignorant of these facts, comes to believe there is a vase in front of him. As it turns out, hidden in a hollow compartment within the pedestal, out of sight, is a vase. The setup is such that the pedestal projects a realistic holographic image of whatever is in that compartment onto its surface, and this explains why Micha sees the image before him.[15]

Boyce and Moon say that, given the set-up, the fact that Micha believes there is a vase in front of him is explained (at least in part) by the fact there is one. So, they say, explanationism entails that Micha knows there is a vase before him. But, since he clearly does not know that, Boyce and Moon conclude that explanationism is false.

In response, we deny that explanationism entails that Micha knows there's a vase before him. True enough, Micha believes there's a vase in front of him because he sees that hologram of a vase on top of the pedestal. And that hologram is there because the device projects there an image of whatever is in a compartment within the pedestal. But it is not crucial to that explanation that the compartment is within the pedestal, in front of Micha. Plausibly, the location of the "input" into this holographic projector is not a difference-maker, as Michael Strevens (2011) might say.[16] With regard to Micha's belief, it doesn't really matter where that compartment is located relative to Micha, and therefore where the vase is located relative to Micha. Perhaps, given the set-up, it's crucial to the explanation of Micha's belief that the vase be within the compartment, but it's not crucial to the explanation of Micha's belief that the compartment be located in front of him.[17] That is, it's incidental that the device was constructed so that the object to be projected is adjacent to the projected image itself. And, in that case, the fact that the vase is before Micha does not figure crucially into the explanation of Micha's belief, in which case explanationism does not entail that Micha knows that there's a vase in front of him.

We might imagine instead Micha forming a more general belief, like that there exists a vase. We believe this would sidestep the concern raised in the previous paragraph, and plausibly explanationism would entail that Micha knows that. But that seems like the right result to us; we think it's less clear that he doesn't know there's a vase, and in fact it looks like he does know this, in the case described. Micha seems to be in a position like that of someone getting information from viewing a television, which is reliably projecting images from a camera somewhere. If some unsuspecting person mistakes a television for a window, and the television is reliably transmitting images of, e.g., Joe Biden, and the person infers (or just simply believes) on this basis that a man exists, well, we're inclined to say that this person knows that a man exists on this basis. And Micha seems to be in a position like that, in this modified case. So, again, there is no worry for explanationism here.

5. Conclusion

We have vindicated explanationism from several novel objections recently proposed in the literature. Of course, this doesn't settle the matter in favor of explanationism in its ongoing rivalry with modalism, or with other theories of the nature of knowledge. But the richness of the resources available to defend explanationism from critics is suggestive of a deeper merit in explanationism. Though the long and complex battles among rival theories of knowledge have exhausted many epistemologists, explanationism promises to be the truth of the matter, and calls us once more unto the breach.

Footnotes

[2] A more precise statement of the sensitivity condition would specify the method being used. For examples of sensitivity theorists, see Fred Dretske (1970), Robert Nozick (1981), and Kelly Becker and Tim Black (2012).

[3] For examples of safety theorists, see Duncan Pritchard (2005), Ernest Sosa (1999a), John Hawthorne (2004), and Mark Sainsbury (1997).

[4] For examples of broadly explanationist projects, see Alan Goldman (1984), L.S. Carrier (1993), Steven Rieber (1998), Ram Neta (2002), Marc Alspector-Kelly (2006), Carrie Jenkins (2006), Dan Korman and Dustin Locke (2020, 2023), and David Faraci (2019).

[5] As Alan Goldman put it, the truth of your belief "enters prominently" into the best explanation for its being held.

[6] With regard to scientific explanation, Strevens' proposal is this: Start with a deductive argument, with premises correctly representing some set of influences (potential explanantia), and the conclusion correctly representing the explanandum. Make this argument as abstract as possible while preserving the validity of the inference from premises to conclusion; strip away, as it were, unnecessary information in the premises. When further abstraction would compromise the validity of the inference, stop the process. What's left in the premises are difference-makers, factors that play a "crucial role" in the scientific explanation. To adapt the kairetic test to explanationism, we propose beginning with a set of potential explanantia that explain the relevant belief's being held, and then proceeding with the abstraction process until further abstraction would make the explanation fail. The remaining explanantia are the difference-makers. On explanationism, the truth of the relevant belief must be among these difference-makers for the belief to count as knowledge.
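
The abstraction procedure can be displayed schematically (our illustration; the notation is ours, not Strevens'). Let B stand for the explanandum "S believes that p":

1. Start with a set of explanantia E₀ such that E₀ entails B.
2. Abstract: replace Eᵢ with a strictly weaker Eᵢ₊₁ (delete a premise, or replace one with something less specific), provided Eᵢ₊₁ still entails B.
3. Stop when every further weakening breaks the entailment; call the resulting set E*.
4. The members of E* are the difference-makers. On explanationism, S knows that p only if the fact that p is among them.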

[7] See Dan Korman (2019) for a nice overview of this style of argument across the landscape of philosophy.

[8] There's much more to be said about this and other more difficult cases. See Bogardus and Perrin (2020).

[9] Goldman (1988: 44) agrees: "In such cases … the proper explanation for the belief appeals to the broader context of the perceiver's being in the vicinity of all these look-alike objects, any of which would produce the belief in question."

[10] Or, as agent B might put it: the abjection flails to catch trout on hexagonalism.

[11] If you hold in your hand a clutch of counterfeit diamonds along with one real diamond, it's awfully tempting to judge that you could not know, of the real diamond, that it is a diamond, even if you happen to be looking directly at it, in good light.

[12] In other words, either Barney competently deduces the de dicto belief from the de re belief, or he doesn't. If he does, then explanationism rules that he knows the de dicto belief via deduction, and therefore there's no counterexample to Mortini's closure principle here. If, on the other hand, Barney does not competently deduce the de dicto belief from the de re belief, then again this cannot be a counterexample to Mortini's closure principle, as that principle applies only in cases of competent deduction.

[13] This is similar to a solution offered by Fernando Broncano-Berrocal (2014), though Broncano-Berrocal offers his only with respect to an individuation of methods. It is also similar to a solution offered by Benoit Gaultier (2014), who indeed proposed the solution in terms of environments. One question arises: in ATOMIC CLOCK, it doesn't seem essential that what imperils the clock is in the environment. If we move the isotope into the subject's skull, but poised to upset his method in roughly the same way as in the original case, is the environment now free of complications? If not, what exactly is an environment? But if so, then this minor modification will circumvent Mortini's proposed solution, and still make trouble for his revised safety condition. The subject can know the time on the basis of the clock, despite the fact that only his internal, skull-bound faculties are imperiled by a radioactive isotope, and not his external environment. So, his belief would evidently be formed safely, even on Mortini's revised notion of safety, despite not amounting to knowledge. And in that way the case would still serve as a counterexample to the alleged safety condition on knowledge.

[14] Mortini's proposal also seems to get the wrong result in Fake Barn Country cases. About Fake Barn Country, Mortini says (ibid., 13), "Barney fails to know that the very object he is looking at is a barn because in close possible worlds he could have easily formed the false belief that there's a barn on the hill." But, if Mortini is right that we should hold fixed the environment when we consider the safety condition, then we evaluate the safety condition only with respect to environments in which Barney looks at a real barn. After all, if Mortini is right that a decaying isotope and a stopped clock amount to a different environment in the ATOMIC CLOCK case, why wouldn't a different location and a barn façade amount to a different environment for Barney? But, if they do, then Barney's belief meets the safety condition after all, contrary to what Mortini says while counting up the virtues of his proposed safety condition.

[15] Boyce and Moon credit Bob Beddor for suggesting this sort of case to them as a counterexample to explanationism. They say a case like this was originally raised by Lehrer and Paxson (1969: 234) against Unger's non-accidentality theory of knowledge, and later adapted to count against Nozick's (1981: 190) theory.

[16] Cf. our previous paper (Bogardus and Perrin 2020: 179), where we propose adapting Strevens' "kairetic test" for difference-making in scientific explanation to the more general purposes of explanationism.

[17] Just as, if one is within ten yards of a nuclear explosion, the orientation of one's body relative to the explosion is not crucial to the explanation of one's death. Whether the explosion was to one's left, one's right, before one, or behind one is not a difference-maker in this explanation. Similarly, the color of the bullet that penetrated the victim's heart would not be crucial to the explanation of his death, even if, contrary to fact, it were not technologically feasible to make a bullet with any other color. All that matters is that the bullet could have been a different color, and, if it had been, the victim would have died all the same. Similarly, even supposing it were technologically required for the scanning compartment to be located within the pedestal, the location of that scanning compartment (and, therefore, the vase within it) would not be crucial to the explanation of Micha's belief. And that's because, setting aside the contingent constraints of technology, the scanning compartment (and the vase within it) could have been located elsewhere, and, if it had been, Micha would have believed all the same that there was a vase before him. We are grateful to an anonymous reviewer for encouraging us to consider the possible variation of the case.

References

Alspector-Kelly, M. (2006). 'Knowledge Externalism.' Pacific Philosophical Quarterly 87, 289–300.
Becker, K. and Black, T. (2012). The Sensitivity Principle in Epistemology. Cambridge: Cambridge University Press.
Black, T. (2008). 'Defending a Sensitive neo-Moorean Invariantism.' In Hendricks, V.F. and Pritchard, D. (eds), New Waves in Epistemology, pp. 8–27. Houndmills: Palgrave Macmillan.
Black, T. and Murphy, P. (2007). 'In Defense of Sensitivity.' Synthese 154(1), 53–71.
Bogardus, T. (2014). 'Knowledge under Threat.' Philosophy and Phenomenological Research 88(2), 289–313.
Bogardus, T. and Perrin, W. (2020). 'Knowledge is Believing Something Because It's True.' Episteme 19, 178–96.
Boyce, K. and Moon, A. (forthcoming). 'An Explanationist Defense of Proper Functionalism.' In Oliveira, L.R.G. (ed.), Externalism About Knowledge. New York, NY: Oxford University Press.
Broncano-Berrocal, F. (2014). 'Is Safety in Danger?' Philosophia 42(1), 63–81.
Carrier, L.S. (1993). 'The Roots of Knowledge.' Pacific Philosophical Quarterly 74, 81–95.
Clarke-Doane, J. (2012). 'Morality and Mathematics: The Evolutionary Challenge.' Ethics 122(2), 313–40.
Clarke-Doane, J. (2014). 'Moral Epistemology: The Mathematics Analogy.' Noûs 48(2), 238–55.
Clarke-Doane, J. (2015). 'Justification and Explanation in Mathematics and Morality.' In Shafer-Landau, R. (ed.), Oxford Studies in Metaethics, Vol. 10, pp. 80–103. Oxford: Oxford University Press.
Clarke-Doane, J. (2016). 'What Is the Benacerraf Problem?' In Pataut, F. (ed.), New Perspectives on the Philosophy of Paul Benacerraf: Truth, Objects, Infinity, pp. 17–43. New York, NY: Springer.
DeRose, K. (1995). 'Solving the Skeptical Problem.' Philosophical Review 104(1), 1–52.
Dretske, F. (1970). 'Epistemic Operators.' Journal of Philosophy 67, 1007–23.
Faraci, D. (2019). 'Groundwork for an Explanationist Account of Epistemic Coincidence.' Philosophers' Imprint 19(4), 1–26.
Gaultier, B. (2014). 'Achievements, Safety and Environmental Epistemic Luck.' Dialectica 68(4), 477–97.
Goldman, A. (1984). 'An Explanatory Analysis of Knowledge.' American Philosophical Quarterly 21, 101–8.
Goldman, A. (1988). Empirical Knowledge. Berkeley, CA: University of California Press.
Hawthorne, J. (2004). Knowledge and Lotteries. Oxford: Oxford University Press.
Ichikawa, J.J. (2011). 'Quantifiers, Knowledge, and Counterfactuals.' Philosophy and Phenomenological Research 82, 287–313.
Jenkins, C. (2006). 'Knowledge and Explanation.' Canadian Journal of Philosophy 36(2), 137–64.
Korman, D. (2019). 'Debunking Arguments.' Philosophy Compass 14(12), 1–17.
Korman, D. and Locke, D. (2020). 'Against Minimalist Responses to Moral Debunking Arguments.' In Shafer-Landau, R. (ed.), Oxford Studies in Metaethics, Vol. 15, pp. 309–32. New York, NY: Oxford University Press.
Korman, D. and Locke, D. (2023). 'An Explanationist Account of Genealogical Defeat.' Philosophy and Phenomenological Research 106(1), 176–95.
Lehrer, K. and Paxson, T. Jr. (1969). 'Knowledge: Undefeated Justified True Belief.' The Journal of Philosophy 66, 225–37.
Luper-Foy, S. (1984). 'The Epistemic Predicament: Knowledge, Nozickian Tracking, and Scepticism.' Australasian Journal of Philosophy 62(1), 26–49.
Mortini, D. (forthcoming). 'The Explanationist and the Modalist.' Episteme.
Neta, R. (2002). 'S Knows that P.' Noûs 36, 663–81.
Nozick, R. (1981). Philosophical Explanations. Cambridge, MA: Harvard University Press.
Piccinini, G. (2022). 'Knowledge as Factually Grounded Belief.' American Philosophical Quarterly 59(4), 403–17.
Pritchard, D. (2005). Epistemic Luck. Oxford: Oxford University Press.
Pritchard, D. (2007). 'Anti-Luck Epistemology.' Synthese 158(3), 277–98.
Pritchard, D. (2009). 'Safety-Based Epistemology: Whither Now?' Journal of Philosophical Research 34, 33–45.
Rieber, S. (1998). 'Skepticism and Contrastive Explanation.' Noûs 32, 189–204.
Roush, S. (2005). Tracking Truth: Knowledge, Evidence and Science. Oxford: Oxford University Press.
Sainsbury, R.M. (1997). 'Easy Possibilities.' Philosophy and Phenomenological Research 57, 907–19.
Sosa, E. (1999a). 'How to Defeat Opposition to Moore.' In Tomberlin, J. (ed.), Philosophical Perspectives 13: Epistemology, pp. 141–54. Oxford: Blackwell.
Sosa, E. (1999b). 'How Must Knowledge be Modally Related to What is Known?' Philosophical Topics 26(1/2), 373–84.
Sosa, E. (2007). A Virtue Epistemology: Apt Belief and Reflective Knowledge, Vol. I. Oxford: Oxford University Press.
Sosa, E. (2009). Reflective Knowledge: Apt Belief and Reflective Knowledge, Vol. II. Oxford: Oxford University Press.
Strevens, M. (2011). Depth. Cambridge, MA: Harvard University Press.
Williamson, T. (2002). Knowledge and Its Limits. Oxford: Oxford University Press.