
Précis of The Limitations of the Open Mind and Replies to Nathan Ballantyne and Miriam Schleifer McCormick

Published online by Cambridge University Press:  18 November 2024

Jeremy Fantl*
Affiliation:
Department of Philosophy, University of Calgary, AB, Canada

Abstract

In this article, I summarize the main takeaways from The Limitations of the Open Mind and reply to concerns raised by Miriam Schleifer McCormick and Nathan Ballantyne. In reply to McCormick, I emphasize potential difficulties involved in helping people change their minds while representing yourself as taking an “objective stance” toward them. In reply to Ballantyne, I clarify my reasons for thinking that open-mindedness is a matter of being willing to change your mind and that amateurs can in some ways and in some situations be more immune to misleading arguments than experts can.


Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2024. Published by Cambridge University Press on behalf of the Canadian Philosophical Association / Publié par Cambridge University Press au nom de l’Association canadienne de philosophie

1. The Upshot of the Book

The primary takeaway I intended in The Limitations of the Open Mind is that sometimes you shouldn't be open-minded toward arguments for various propositions you disagree with because you know that you're in the right about those propositions. For example, there can be occasions in which, because you know that Lee Harvey Oswald acted alone in assassinating JFK, you shouldn't be open-minded toward arguments that he acted under orders in assassinating JFK.

What I mean by saying that you shouldn't be open-minded toward those arguments is that you shouldn't be disposed or willing to reduce your confidence that Oswald acted alone in response to those arguments if, after some time grappling with them, all their steps seem compelling, you can't figure out what's wrong with them, and you can't expose a flaw. (The argument for this account of open-mindedness is the topic of Chapter 1.) Because you shouldn't be open-minded in this sense toward these arguments, a fortiori, you shouldn't engage open-mindedly with those arguments in this sense. (This argument is the topic of Chapter 6.)
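As a schema (this is my gloss; the book puts the condition in prose), the necessary condition on open-mindedness at work here is:

\[
\text{OpenMinded}(A) \;\rightarrow\; \text{Willing}\big(\text{reduce confidence in your position} \;\big|\; \text{grappled}(A) \wedge \text{compelling}(A) \wedge \neg\,\text{flawExposed}(A)\big)
\]

where the condition is that you've spent some time grappling with $A$, all of $A$'s steps seem compelling to you, and you can't expose a flaw in $A$. Correspondingly, being positively unwilling to reduce confidence under that condition suffices for closed-mindedness toward $A$.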

There might be occasions in which, even if you shouldn't engage open-mindedly with such an argument, you should engage closed-mindedly with it, if doing so would achieve other goods (like convincing the person you're arguing with, convincing an audience, satisfying some other intellectual curiosity, or coming to understand your conversational partner better than you do). But, in some of the situations in which you shouldn't engage open-mindedly, none of these other goods might be achievable (at least, by some people), and so (at least, for those people), not only shouldn't they engage open-mindedly with arguments against what they believe, they shouldn't engage closed-mindedly either. (This is covered in Chapter 7.)

I don't mean this conclusion to apply only to uncontroversially true beliefs — like the belief that things move or that something is a heap — that can admit of trick arguments to the contrary. I mean this conclusion to apply to at least some interestingly controversial beliefs: beliefs that a fair number of folks disagree about and which they would continue to disagree about even if they shared evidence bearing on the belief. So, I mean this conclusion to apply to scientific claims about global warming and vaccinations, historical claims about who shot JFK, religious claims about the existence of God, and so on.

2. The Primary Argument

The argument for the primary takeaway has three main steps. First is a claim about knowledge: that you can sometimes know that such controversial claims are true, even though they are controversial, and even though you're confronted by an argument in which you cannot find any holes and whose steps, even after you've spent some time with the argument, all seem compelling (in the same way all the steps in Zeno's argument that there's no motion can seem compelling). (This is the main claim of Chapter 2.)

The second step is a claim about what knowledge justifies you in doing: namely, when you know something, you should do whatever that thing is a decisive reason for doing. I don't defend this premise too much in the book; it's the product of the literature on pragmatic encroachment and knowledge-action links. But, to illustrate, if you know that the glass contains petrol, you should do what that fact is a decisive reason for doing, which is, in standard situations, refraining from drinking its contents.

The third step is a premise about what the fact that some argument is misleading is, in standard situations, a decisive reason to do. I say that the fact that some argument is misleading is, in standard situations, a decisive reason not to respond to that argument by increasing confidence in its conclusion. In standard situations, we don't want our confidence levels to be responsive to misleading arguments. We want them to be responsive to non-misleading arguments. If you ask me why I'm disregarding some argument and I tell you that it's misleading, you might ask me why I think it's misleading, but you won't say, “Well, yeah, of course it's misleading. But why are you disregarding it?”

These final two premises, taken together, entail this principle: if you know that some argument is misleading, you should disregard it. Those familiar with Gilbert Harman and Saul Kripke's famous dogmatism paradox might recognize this as largely similar to a crucial step in the argument for the unpalatable dogmatist conclusion: in Harman's words, “I should disregard evidence that I know is misleading” (Harman, 1973, p. 148). In the book, I call this “the linking premise” (Fantl, 2018, p. 130), but I prefer Maria Lasonen-Aarnio's (2014, p. 419) term: Entitlement.

Because I endorse Entitlement, I therefore endorse the unpalatable dogmatist conclusion. Sometimes, because you know that the conclusion of an argument is false, you thereby know that the argument is misleading. Therefore, by Entitlement, sometimes, because you know that the conclusion of an argument is false, you should disregard that argument. That's the primary takeaway of the book.
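Put schematically (my reconstruction here; the book states these steps in prose), with $Kp$ for “you know that $p$,” $M(A)$ for “argument $A$ is misleading,” and $c_A$ for $A$'s conclusion:

\begin{align*}
&(1)\quad Kp \rightarrow \text{you should do whatever } p \text{ is a decisive reason to do.}\\
&(2)\quad M(A) \text{ is, in standard situations, a decisive reason to disregard } A.\\
&(3)\quad \text{Hence (Entitlement): } K(M(A)) \rightarrow \text{you should disregard } A.\\
&(4)\quad \text{Sometimes } K(\neg c_A), \text{ and thereby } K(M(A)), \text{ even when } A \text{ seems flawless.}\\
&(5)\quad \text{Hence, sometimes you should disregard an argument even when it seems flawless to you.}
\end{align*}

Step (4) is underwritten by the Chapter 2 claim that knowledge can survive confrontation with an apparently flawless argument.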

3. Replies to Miriam Schleifer McCormick

Miriam McCormick raises two questions about the book, though she primarily emphasizes the first. I'll open with some quick remarks on her second question before devoting the bulk of my discussion to the first. Her second question is about how the final chapter of the book relates to the first parts of the book. In the final chapter, I argue for the impermissibility, in certain situations, of inviting problematic speakers to campuses. In the first parts, I argue for the possibility of retaining knowledge in the face of compelling counterarguments and the permissibility of closed-mindedness in the face of such arguments. As McCormick notes, “it seems that whether one adopts an open or closed mind to the arguments of these potential speakers has no effect on whether they should be invited” (McCormick, 2024, Section 4).

I take the final chapter to be an exploration of one consequence of the possibility of retaining knowledge in the face of compelling counterarguments. As I argue in the chapter, it's when you have certain sorts of controversial knowledge — say, that the speaker's speech is both false and demeaning — that you shouldn't invite them to campus. Therefore, the conclusions only have consequences for our invitations if there are some such cases. According to the first part of the book, this condition is satisfied.

More importantly, one general lesson I take the book to have is that you can't defend certain kinds of engagement with others while remaining neutral on whether you know that what they're saying is false. Knowing certain things has consequences for what kind of engagement is permissible. If you know that an argument is misleading, you shouldn't engage open-mindedly with it. And, as I argue in the last chapter, if you know that a speaker's words are false and demeaning, you shouldn't engage with the speaker by inviting them to campus. Therefore, if you defend inviting them to campus, you can't consistently allow that you know that their words are false and demeaning.

I emphasize in the book that the principles I argue for shouldn't be taken as complete and substantive advice. Again, you should be closed-minded toward an argument if you know that the argument is misleading. That advice is only complete if you can figure out whether you know that the argument is misleading. The book doesn't offer a whole lot of advice about that (that, of course, is the subject matter of epistemology generally). Still, the book has some practical lessons even given the difficulties in figuring out when you have knowledge. One is that certain kinds of objections to a closed-minded attitude are unsuccessful. You can't legitimately object to my closed-minded attitude while granting my knowledge that the argument is misleading. Likewise, if the conclusions of the final chapter are right, you can't argue for the permissibility of inviting a problematic speaker while granting knowledge that what the speaker is saying is false and demeaning. Opponents of no-platforming commonly insist that their invitations to problematic speakers don't say anything about their own attitudes toward those speakers. But, if the conclusions of the final chapter are right, to consistently claim that it is permissible to invite some speaker, you will have to deny knowing that what they're saying is false and demeaning; the permissibility of the invitation entails that you don't know that what they're saying is false and demeaning.

I turn now to McCormick's first concern — the concern to which she devotes the most space. McCormick thinks that my worries about the difficulty of getting people to change their minds while you remain sufficiently non-deceptive are overstated. I agree with much of what McCormick has to say about this. That is, I think it probably is possible both to be sufficiently non-deceptive and to be effective in getting people to change their minds, and a lot of the way to do this may involve engaging on an emotional level. There may also be special relationships between you and your conversational partner that make it easier to get them to change their mind without problematic deception; if you're close friends or they've come to you for advice (“Am I just totally going off the rails here?”), then I think your task might be made easier. My concern in Chapter 7 was mainly to highlight two pitfalls of closed-minded engagement that stand on either side of the narrow path to effectiveness, especially, but not exclusively, when you're engaging with people with whom you lack certain sorts of special relationships — if you're talking to a person you've just met in a bar, or at a political rally, or online.

One pitfall is manipulative deceptiveness. The other pitfall is alienating condescension or arrogance. I do think that there is a path between them, but I also think that the path is fairly treacherous and it takes a certain kind of empathetic charisma to tread it. McCormick's suggestion is to treat problematic beliefs the way we might treat problematic emotions. We can imagine different kinds of cases. In one kind of case, the person we're talking to doesn't want to have the particular emotion — maybe it's anxiety or fear or self-doubt or anger. Here, I think, McCormick's suggestion that we can refer to the causes of the emotion can be particularly effective. I know that when I wake up in the wee hours of the morning stressing about something, it can be oddly effective to tell myself that people — and I in particular — often have bizarre night thoughts that seem far less pressing in the light of day: “I'm just stressing about this because it's 2:00 in the morning.”

But this can only go so far. Sometimes being told that I'm only having a certain emotional reaction because I'm hungry or I'm nervous about something else is particularly galling. Sometimes being told that you're only angry about something because of some causal factor is patronizing or condescending. Certainly as a teenager, being told by my parents that I only felt or thought a certain way because I was so young was singularly ineffective. So, in many of the cases of interest — where the person in question doesn't particularly want your help in getting rid of a belief — making it clear that you're taking a partially objective stance toward them is not going to be helpful.

So, now the hope is to take this kind of objective stance toward your conversational partner while hiding it sufficiently to be effective, but not being problematically deceptive in doing so. Perhaps this is just a confession about my own limitations, but I find it extremely difficult to pull off. And perhaps that's where the main difference is between McCormick's view and my own: the difficulty of treading this path. Here's why I, personally, find it difficult. Imagine the conversation you would have with McCormick's hypothetical shut-in friend who has turned to QAnon (though if the person is someone with whom you have a certain sort of friendship, the conversation may be easier to have). Here's the conversation that, I would think, you don't want to have:

Friend: Hey, sheep, the government and the mainstream media are being secretly overrun by a bunch of Satan-worshipping cannibal pedophiles. Don't believe me? Check out all these websites!

You: I'm not really convinced by this. I think you just believe this stuff because you've been out of work, stuck at home for months, and feel alienated.

Friend: Well, F.U.!

You need a different response after “Check out all these websites!” But what? Here's one possibility:

You: Look, I need to tell you that hearing this stuff from you makes me really worried. I care about you and I don't want to see you falling into this rabbit hole. Can we just go for a walk outside?

Here's another:

You: You know that guy we used to make fun of back at university? The guy who said all the bizarre stuff about 9/11? You're sounding like that guy.

Here's another:

You: One thing I always valued about you is your ability to spot bullshit. What's going on here?

These are all non-deceptive (unless that last one is deceptive). I also think that they will often be ineffective. Your friend didn't come to you for counselling. They came to you because they discovered this amazing truth and they're worried about you: that you've been duped by CNN and MSNBC. They came to you because, though you think they sound like that 9/11 conspiracy theorist from university, they think you sound like that other guy who unreflectively bought into the claim that tobacco companies had no clue that smoking tobacco cigarettes causes cancer. They came to you because they've always valued your openness to new ideas.

What they want from you is point-by-point engagement. Anything less is going to sound to them like a copout. Point-by-point engagement is, of course, not McCormick's suggestion; it is not a way of taking the objective stance, which, on her view, is what we need to do. But it's still possible to take them up on their demand for point-by-point engagement. You can do it by representing yourself as closed-minded or as open-minded. The latter is misleading and will require some crucial moments of deception where you nod and say, “Oh yeah, this is promising and interesting. Let's see if that works out!” The former, I think, will be ineffective, singularly so, if you can't actually figure out what's wrong with the evidence. This is not to say that there aren't some people who, through sheer force of charisma, can pull this kind of thing off: they say, in just the right excited tone, “Look, you know I think this is going nowhere, but let's do this!” I've never found myself able to manage the trick. And, I should say, in certain contexts, charisma won't be enough, because tone of voice and mannerism are crucial here, and in some contexts, like the internet, those useful tools will be unavailable.

Am I right that there has to be a crucial deceptive moment? There does have to be a moment when you fail to reveal your true reactions. But, as McCormick points out, you can fail to be deceptive while still not revealing all of the information about your motives. I think that's right. For example, if I ask you for directions to a restaurant but don't reveal my motive for going to the restaurant (namely, not to eat there but because I have a particularly vivid memory of the restaurant's sign from when I was a child and I want to revisit it), then I don't think I've been problematically deceptive. But one difference between this kind of case and engagement with your QAnon friend is that the missing information isn't likely to make a difference to you. You won't be less likely to give directions to someone who is seeking to relive a nostalgic moment than to someone looking to get a good meal. But, in the QAnon case, your friend will be less willing to change their mind as a result of engagement with someone who they know is engaging with them just for the purpose of changing their mind, or with someone who they know has no willingness to learn (the truth of the matter) from them. Nor does it seem that they are irrational in doing so; it's not irrational to be less willing to engage seriously with people who aren't taking you seriously. What is more, you know this about them, and you are hiding these details of your motive from them precisely because you know that revealing them will make your engagement less effective. This is less like a realtor making a house smell like cinnamon (Baron, 2014, p. 114) and more like a realtor purposely omitting the information that the sellers are avowed Nazis. In both cases — where you omit the information that you have no willingness to learn, and where the realtor omits the information that the sellers are avowed Nazis — the target might rationally have modified their behaviour had they had such information. It's rational not to want to buy a house from avowed Nazis, and it's rational not to want to engage with people who have no interest in learning the truth from you. But the target has been denied that information expressly to prevent them from so modifying their behaviour.

What is it actually like to engage with a person while hiding information about your motives that you know would make them become defensive? Would your facial expressions be able to mask the real attitude you have when they say, “You know, Joe Biden's not even real. … That's why he's wearing a mask all the time, because the fake face that he's wearing, the mouth doesn't move correctly when he talks” (Reed, 2021)? Would it not feel much less disingenuous to say, “Now, you know what I think about this, right? But I'll talk it through with you if there's any chance that you'll change your mind”? Again, I do think there are ways to do this effectively. But it takes a certain kind of charisma to do it; not everyone can pull it off, and not with just anyone.

4. Replies to Nathan Ballantyne

Nathan Ballantyne raises three main concerns for the arguments and conclusions in the book. I'll address them in turn.

First, Ballantyne proposes a modification of my necessary condition on open-mindedness. Suppose you aren't willing, prior to hearing an argument, to reduce your confidence conditional on spending time with the argument, finding the steps compelling, and being unable to expose a flaw. But suppose you're not unwilling, either. You might just not have an intention or disposition yet. You might be willing to continue thinking about the argument without an accompanying willingness to reduce your confidence if you can't figure out what's wrong with it. In this case, aren't you open-minded, seemingly contrary to my necessary condition on open-mindedness? If asked, “So, what will you do if you find all of the steps in the argument compelling and you can't expose a flaw? Will you reduce your confidence or not?” your answer isn't “no” (as it is for the closed-minded). But it's not quite “yes.” I gather it's something like, “Well, I'll have to see. It might depend on the details. But don't worry. I'll keep thinking about it.” This doesn't strike me, certainly, as closed-minded. It strikes me as open-minded.

But, if you say this, it also strikes me that you are in an important sense willing to reduce your confidence in those conditions. Compare it to a situation in which you are asked about robbing a bank. Suppose, when you get into the bank, you see that the guard is unarmed and the vault is open. Would you rob it? If you say, “no,” then you're not open to robbing the bank. If you say “yes,” then you are. What if you say, “Well, I'd have to see. It might depend on the details. How many customers are around? Does it look like there's a lot of money in there? If so, yeah, maybe. But maybe not. I'd have to see. I'm going to keep thinking about it.” This sounds to me like a person who's willing to rob a bank.

The difficulty with the proposal that open-mindedness toward an argument is simply the willingness to keep thinking about the argument is that there are lots of ways to keep thinking about an argument. My life's work might be the study of Zeno's arguments that there is no motion. I might continue thinking about the arguments — where they go wrong, what the best formulations of the premises are, etc. But I might have zero concern that the arguments will turn out to be sound or the conclusion true. In this case, I don't think I'm open-minded toward the arguments. It is absurd for me to say, “Look, I'm completely open-minded toward Zeno's arguments. Of course, even if at the end of my life's work every step still seems strong to me and I can't figure out any problem with the arguments, I'm not going to be at all convinced by them. But yes, I'm open-minded toward the arguments.” Whether a willingness to continue thinking about an argument counts as being open-minded depends on whether the willingness to continue thinking about it comes along with a willingness to be moved by it.

And note that, whether this counts as being open-minded or not, it will still be sufficient for closed-mindedness that you are positively unwilling to reduce confidence in your own position. Because the arguments in the later chapters (if they work) show that you should be closed-minded in this sense, I think Ballantyne's suggestion is consistent with much of what I would want to say in the rest of the book in any case.

Second, Ballantyne has concerns about my contention that amateurs who are confronted with apparently flawless arguments should reduce their confidence less than experts. One of his concerns is that the amateur should reason like this: “Look, if an expert were confronted by the same argument and found it apparently flawless, they should reduce confidence. So, I should do what an expert would do in my situation: reduce confidence.” But it's not entirely clear whether that's the right thing for the amateur to say. Here's another thing they could say: “I'm an amateur. This argument looks pretty worrisome to me, but then … it would, wouldn't it, since I don't really understand it. I know it's misleading, though, since I still know that its conclusion is false. So, I presume that an expert would be able to figure out where it goes wrong.”

Which way should the amateur reason? It depends, I think, on whether the amateur really does continue to have knowledge after confrontation with the argument. Of course, if the amateur knows that the expert was unable to expose a flaw, the amateur may themselves lose knowledge that the argument is misleading. In this case, the amateur gets more evidence than that provided by the argument itself. But, in cases in which the only counterevidence that the amateur has is the argument itself, then I think the amateur can continue to have knowledge that the argument is misleading after confrontation with the argument. After all, there is no principled reason that the amateur can't have a great deal of positive epistemic support for their own position.

Ballantyne's concern is that any “felt obviousness” an amateur has when confronted by a question is unreliable because it is “not conditioned or trained by a representative sample of relevant evidence or facts” (Ballantyne, 2024, Section 3). But, first, felt obviousness at least sometimes is sufficient for knowledge even about domains in which we lack expertise. You don't need to be a moral philosopher — an expert in ethics — to know on the basis of felt obviousness that certain acts are horribly morally wrong (see, for example, the story I cite from Eleonore Stump on p. 77). And, second, felt obviousness is not the only source of justification available to an amateur. As I note in the book:

Laypeople have extremely strong support for many propositions about which they lack the relevant expertise. You have enough support, for instance, to know that the earth revolves around the sun, that the green color in plants is the result of chlorophyll, and that dead animals in rivers increase the likelihood of disease transmission downstream. (Fantl, 2018, p. 36)

The question is whether that support — support that can be arbitrarily, knowledge-level high — is always defeated by the countervailing force of an argument that seems good but which the amateur knows they are in no position to reliably evaluate. I guess I still don't see a reason that it would have to be, given that it is already established — as Ballantyne agrees — that the mere presence of apparently flawless counterarguments doesn't automatically destroy knowledge.

This connects to the final point: what is the book doing? Is it a book of regulative epistemology? Is it a guidebook? I think the answer is no — at least, not a complete guidebook. (This harkens back to some of the comments in my response to McCormick, having to do with the connection between the final chapter and the first part of the book.) The reason it's not a complete book of advice is that the principles it yields tell you not to reduce your confidence when you know you're right. But the book doesn't say very much at all about when you know you're right. That's something epistemologists have to figure out separately. (Of course, sometimes we can tell whether we know things without doing epistemology, so sometimes the advice given in the book will, in conjunction with our prior knowledge of what we know, be complete.) The book is primarily about what doesn't always defeat knowledge. I don't think this is a problem for the principles in the book. Take the claim that if you know you're right, you should disregard opposing arguments. Yes, but what if you incorrectly think you know you're right?! Then you might wrongly use this claim to disregard opposing arguments.

True. The same goes for all true conditionals: if the yolk is firm, the hard-boiled egg is done. If you don't have the flu, you can't infect others with it. If the dish doesn't contain peanuts, you can feed it to someone with a peanut allergy. If the gun is empty, it's safe to shoot. All of these are true. But all of them can be misapplied if you wrongly think that the antecedent is true. If you wrongly think the yolk is firm, you'll remove the egg too soon. If you incorrectly think you don't have the flu, you might engage in risky behaviour that ends up killing someone. If you wrongly think the gun is empty, you can misapply this principle and shoot someone. All of these, when conjoined with claims that the antecedents are true, count as complete advice. All of them are true parts of the advisory story, even though they can be misapplied when you wrongly think the antecedents are true.

Likewise if you wrongly think that you know that some counterargument is misleading: that incorrect belief might lead you to misapply this principle and disregard arguments that you shouldn't. But that doesn't undercut the truth or importance of the principle. (Again, an incorrect belief that a dish doesn't contain peanuts might lead you to misapply this principle: if a dish doesn't contain peanuts, you can safely give it to someone who is allergic to peanuts. Nevertheless, the principle is true and important.) What the principle does do is tell us what we should be thinking about (and providing grant money for): what it takes to know that something is true and how we can identify when we're in that state. This, of course, is the subject matter of epistemology (which is why, getting back to something like Ballantyne's first, “softball” question, I don't take the work to be a departure from traditional epistemological questions; I take it to be an explanation of why those traditional questions are important).¹ Sadly, I don't have an answer to those questions, though if I'm right in the book, one thing that's not needed in order to know something is that you've looked at or figured out what's wrong with all arguments to the contrary. But, for the time being, we can learn one more substantive lesson.

The charge that, unless you can engage with my argument and figure out what's wrong with it, you should be less confident in your own position does not, by itself, work. That accusation, by itself, is not a sufficient complaint against you. If I don't like the fact that you're not properly taking my argument into account, what I have to do is show that you don't know what you claim to know in the first place. To do that, I have to do more than say that you haven't engaged with my argument. For, if what I've said in the book is right, you can perfectly well know that you're right even if you haven't engaged with my argument, or if you have engaged with it and couldn't figure out where it goes wrong. Or, to quote country artist Kacey Musgraves, “Just 'cause you can't beat 'em don't mean you should join 'em.”

Acknowledgements

Let me first thank Nathan Ballantyne and Miriam Schleifer McCormick for their willingness to devote their time to discussion of my book and for their thoughtful, persuasive, and trenchant concerns. It's been a privilege to have them as correspondents. Thanks are due as well to the anonymous referees for Dialogue, whose suggestions were well taken and improved the commentary.

Competing interests

The author declares none.

Footnotes

1 For more on this argument and its consequences, see Fantl (2023).

References

Ballantyne, N. (2024). Let me think about it more: On The limitations of the open mind. Dialogue: Canadian Philosophical Review, 63(2), 301–308.
Baron, M. (2014). The mens rea and moral status of manipulation. In Coons, C., & Weber, M. (Eds.), Manipulation: Theory and practice (pp. 98–120). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199338207.003.0005
Fantl, J. (2018). The limitations of the open mind. Oxford University Press. https://academic.oup.com/book/26297
Fantl, J. (2023). Guidance and mainstream epistemology. Philosophical Studies, 180(7), 2191–2210. https://doi.org/10.1007/s11098-023-01970-2
Harman, G. (1973). Thought. Princeton University Press. https://press.princeton.edu/books/hardcover/9780691645117/thought
Lasonen-Aarnio, M. (2014). The dogmatism puzzle. Australasian Journal of Philosophy, 92(3), 417–432. https://doi.org/10.1080/00048402.2013.834949
McCormick, M. S. (2024). Comments on Jeremy Fantl's The limitations of the open mind. Dialogue: Canadian Philosophical Review, 63(2), 293–300.
Reed, B. (2021, March 1). QAnon supporters think Biden is a robot who wears a face mask to cover up his malfunctioning mouth: CNN. Rawstory. https://www.rawstory.com/qanon-2650840117/