
Strong Belief is Ordinary

Published online by Cambridge University Press:  28 November 2022

Roger Clarke*
Affiliation:
Queen's University Belfast, Belfast, UK

Abstract

In an influential recent paper, Hawthorne, Rothschild, and Spectre (“HRS”) argue that belief is weak. More precisely: they argue that the referent of believe in ordinary language is much weaker than epistemologists usually suppose; that one needs very little evidence to be entitled to believe a proposition in this sense; and that the referent of believe in ordinary language just is the ordinary concept of belief. I argue here to the contrary. HRS identify two alleged tests of weakness – the neg-raising and weak upper bounds tests, as I call them – which they claim believe and think pass. But I identify several other expressions in ordinary English for attributing belief, all of which fail both tests. Therefore, even if HRS are correct that believe and think refer to a weak attitude, it does not follow that the ordinary concept of belief is weak. I conclude by raising some problems for the accounts of belief as guessing due to Kevin Dorst, Matt Mandelkern, and Ben Holguín, which build on HRS's arguments.

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2022. Published by Cambridge University Press

In an influential recent paper,Footnote 1 John Hawthorne, Daniel Rothschild, and Levi Spectre (hereafter “HRS”) argue that belief is weak – or rather, that the referent of belief in ordinary English is weak. I argue here that while this is an interesting and important result, it does not entail that the strong notion of belief one typically finds in epistemology is not an ordinary or commonsense notion.

HRS provide several independent arguments for belief's weakness. All centre on when it is or isn't acceptable to assert that someone believes or thinks something. HRS conclude that the referent of belief is a weak attitude, such that believing p, in this sense, is tantamount to thinking p likelier than salient alternatives. This contrasts both with being certain that p and with an intermediate attitude often called outright belief or full belief. HRS write (p. 1402):

The main use in the literature of these terms is to distinguish merely believing something probable from believing simpliciter. This may be a useful theoretical notion distinct from certainty and sureness, and it may be one for which norms comparable to those for assertion apply. However, our arguments above indicate that this notion is not a disambiguation of what we ordinarily mean by ‘belief’; rather it seems a theoretical posit. Thus, those arguing for the importance of outright or full belief as a notion stronger than ordinary belief but distinct from believing or being certain cannot argue for it on the basis of its commonsense status as grounded in our talk about belief.

Note that there are some missing steps between the conclusion that ordinary believe and think refer to weak attitudes and the conclusion that any philosophical notion of strong belief is not ordinary. Here is one way of reconstructing those implicit premises in HRS's argument:

P1. Believe and think in ordinary speech (always) refer to a weak attitude.

P2. If believe and think in ordinary speech (always) refer to a weak attitude, then ordinary speech can only refer to weak belief states.

P3. If ordinary speech can only refer to weak belief states, then the ordinary concept of belief is weak.

C. Therefore, the ordinary concept of belief is weak.

I think all three premises are false, but I will focus on disputing P2. HRS explicitly argue for P1; I will offer reasons to doubt their arguments for the weakness of believe and think, but my main argument does not depend on those verbs being strong. P3, on the other hand, is tantamount to the bold claim that there is a one-to-one correspondence between ordinary concepts and (the semantics of) ordinary language. I think this is highly implausible, but a direct argument to the contrary is work for another day.

Here, instead, I will show that even if we grant P1 and P3 to HRS, and even if we accept (improved versions of) the tests for weakness they apply to believe and think in their argument for P1, the argument quoted above fails because P2 is false: there are plenty of belief-ascribing words and phrases in ordinary English which do not pass HRS's proposed tests of weakness.

I will begin, in §§2–5, by examining HRS's arguments that believe and think are weak in order to extract tests of weakness we can apply to the other VPs I will consider in §6. By my count, HRS offer four main arguments that belief is weak: an argument from certain Moore-paradoxical statements; an argument about lotteries; an argument from weak upper bounds; and an argument from neg-raising. I'll respond to each of these, although I'll be briefer about the first two, because I think they have been effectively rebutted elsewhere. Finally, in §7, I will discuss some positive accounts of belief, due to Ben Holguín, Kevin Dorst, and Matt Mandelkern, building on HRS's conception of weak belief. But first, the next section clarifies our dialectical situation.

1. Preliminaries

Let's start with HRS's precise thesis. What does it mean for belief to be “weak”? They write (pp. 1394–5):

Let's call the thesis that the level of evidence that entitles one to belie[ve] a proposition is the same as that which entitles one to assert it entitlement equality. … Entitlement equality is false. It is false, we argue, because belief is weak. What we mean by this is that the evidential standards that are required for belief are very low. … To be more concrete, we argue below that merely thinking that a proposition is likely may entitle you to believe the proposition. By contrast, thinking a proposition[] is likely does not, normally, entitle you to assert it.

Many epistemologists, including me, take something like HRS's “entitlement equality” as a starting point for an explication of belief. For this reason, I want to resist HRS's arguments for the weakness of ordinary belief. To be sure, HRS allow that there might be theoretical reasons to posit a stronger notion of belief – it could be that there is exactly one ordinary concept of belief and it is weak, yet for theoretical reasons we might define a stronger philosophical concept of belief. But I want to avoid resorting to this sort of response. The philosophical notion of belief I take myself and most other epistemologists to be working with is certainly a theoretical one – and so we should not expect it to coincide perfectly with an ordinary notion of belief – but I conceive of it as an explication of an ordinary concept, one characterized by a tight connection with assertion. So in this paper I aim to resist HRS's conclusion that the only ordinary concept of belief is weak.

One caveat before we continue. Strictly speaking, what I will offer is only a defence of the orthodox view that there is an ordinary concept of belief for which entitlement equality holds, not a positive argument to establish that view. Rejecting an argument that belief is weak does not, by itself, give us reason to think belief is strong. But there is (defeasible) reason available to prefer the orthodox view if HRS's argument fails. For one thing, it is widely claimed to be intuitive that there is some sort of parallel between belief and assertion. It is certainly possible that only philosophers have such intuitions, but without evidence to the contrary, it is reasonable to assume intuitions widely professed among philosophers are widely shared among ordinary folk.

We could look for further evidence in favour of orthodoxy in attempts to give a positive account of belief based on multiple intuitive starting-points including a belief-assertion parallel. That is, if we can show that the same attitude our assertions commit us to is the attitude involved in folk belief-desire psychological explanation, we have reason to say that the ordinary concept of belief is strong. And there are such positive accounts available: see, for example, Kaplan (1996: 109–10), Leitgeb (2017: 6) or Clarke (2018). Therefore, if HRS's argument against the orthodox view fails, I think we have (defeasible) reason to suppose that strong belief is ordinary after all.

2. First-Person Belief Attributions as Hedged Assertion

HRS note that assertion of sentence (1) sounds bad, but (2) sounds fine:

(1) ?? It's raining but I'm not sure it's raining.

(2) I believe it's raining, but I'm not sure it's raining.

One way to explain the difference: we aren't entitled to assert a proposition when we aren't (entitled to assert that we're) sure of it, but we may assert that we believe a proposition while being (entitled to assert that we're) unsure of it. I'm moving hastily from premises to conclusion here, not because HRS are hasty, but because they consider and reject an objection I think succeeds – so rather than go through details of their argument, I want to move to that objection.

HRS acknowledge, citing Stanley (2008), that some regard assertions of first-person belief attributionsFootnote 2 like I believe/think p as hedged assertions of the embedded proposition p, not as proper belief attributions (Stanley 2008: 51–2, quoted by HRS at p. 1397):Footnote 3

However the function of using “I believe” [in the sentence] is to qualify support for the truth of a proposition, rather than endorse it. In short, such uses of “believe” are not cases in which one reports a belief that p at all; they are rather cases in which one reports that one has weak reasons in support of the truth of a proposition.

The I believe … construction serves then to modify the speaker's commitment to p, not to self-ascribe belief. HRS reply (p. 1397): “This view … suggests a radical mismatch between the literal meaning of ‘believe’ and what it is used to express[] in these sentences. This should be avoided if possible as the non-literality of ‘belief’ in these cases does not cohere with any systematic pragmatic story we know about ‘belief’.”

To be brief, I think the objection articulated in the quote from Stanley is correct, and HRS's response fails. I intend to be brief about this because Jennifer Nagel has already given a detailed survey of work in linguistics on uses of I believe and I think as hedges. As Nagel (2021: §2) concludes: “it remains controversial exactly what range of functions is served by ‘I think’, but it is widely agreed that these functions are various, and that many of them involve bleaching of the literal mental state meaning of ‘think’…”

It's worth noting also that I'm not sure is, along with I think or I believe, a standard example of a hedging expression, especially when combined (as in HRS's examples) with a contrastive conjunction like but (see McCready 2015: 113). So there is more than one reason to be suspicious of HRS's Moore-paradoxical sentences as evidence about the relationship between belief and certainty.

I agree with Nagel's conclusion that I think is often used non-literally, but we can bolster the reply to HRS: I think can have exactly the function attributed to it in the quote from Stanley without being used non-literally; and if this is so, HRS's use of I think sentences in the present argument fails. I'll just sketch the argument I have in mind here, in two steps. First, one can tell a Gricean story about how to calculate a conversational implicature from an utterance of I think p to the speaker's having only weak support for p.Footnote 4 If an assertion of p communicates that the speaker believes p, then prefacing one's assertion with I believe that is needlessly prolix. The hearer searches for an explanation for why the speaker added these uninformative words, and hits upon the hypothesis that the speaker wants to emphasize their personal commitment to p; in some contexts, this could serve to make the assertion more emphatic (“I really believe it!”), while in other contexts it can serve to make the assertion more personal and subjective (“I believe it, but others might not”).Footnote 5

Second, note that in some cases, a literally false sentence may seem acceptable because it is used to implicate something true. Establishing this is somewhat trickier than establishing the contrary, i.e., that a literally true statement may seem unacceptable because it is used to implicate something false. After all, the latter sort of statement has something wrong with it, which could be enough to make it seem unacceptable – but there is also something wrong with the former sort of statement (namely its literal falsity), which makes it difficult to judge unambiguously that it is acceptable. Nevertheless, here is an example, which I hope is clear enough. This is a version of a remark widely attributed to US Representative Thaddeus Stevens. When asked by Abraham Lincoln whether Senator Simon Cameron would steal, Stevens replied “He would not steal a red-hot stove.” The implicature here is that Cameron is indeed a thief. This is not an idiom or a figure of speech; the implicature that Cameron would steal anything easier to grasp than a red-hot stove is generated by taking Stevens's words literally.Footnote 6 But now if we suppose Cameron were, in fact, so thievish that he would even find a way to steal a red-hot stove,Footnote 7 Stevens's assertion would still be acceptable. That is, although Stevens speaks literally, the point of his assertion is the implicature it generates, and so the acceptability of his assertion depends entirely on the acceptability of the proposition he implicates.Footnote 8

Putting the two steps together, it's possible that I believe p can (a) be interpreted literally, (b) literally express the falsehood that the speaker believes p, (c) function primarily to express qualified support for p through a conversational implicature, and (d) sound acceptable because of that implicature. So even if I think and I believe are used literally when they function as hedges, it may nevertheless be the case that acceptable sentences containing those hedges are literally false.

3. Lotteries

HRS's next argument deals with lotteries. They write (p. 1397):

Many argue that one cannot felicitously assert that one's lottery ticket with a one in a hundred chance of winning won't in fact win. … However, at least intuitively, it seems reasonable to believe that one's lottery ticket will lose in these situations. … If this were not the case no one would be even initially bothered by the lottery paradox. … [T]he data suggests that having a norm of belief on par with that for assertion is revisionary of our ordinary practice in a way that, e.g., the knowledge norm of assertion does not appear to be.

Again, this is rebutted in Nagel (2021). Nagel shows that, in fact, the opposite intuition to HRS's is also taken to be obvious (e.g., in Staffel 2016: 1725).

All I want to add here is a way of being bothered by the lottery paradox without either making belief weaker than assertion or being revisionary of ordinary practice. We might feel a pull toward both the judgments that it is and that it isn't acceptable to believe (or assert) that one's lottery ticket will lose. Put roughly, it might feel acceptable because it's so close to certain, yet it might feel unacceptable because it is not certain.
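
For concreteness, here is the textbook way of generating the paradox for a fair 100-ticket lottery – a standard reconstruction, not HRS's own formulation – writing $t_i$ for the proposition that ticket $i$ will lose. $\Pr(t_i) = 0.99$ for each $i$; so, if high probability sufficed for acceptable belief, one could believe each $t_i$. Closing belief under conjunction would then license believing $t_1 \wedge \dots \wedge t_{100}$, i.e., that no ticket wins. But it is given that exactly one ticket wins, so $\neg(t_1 \wedge \dots \wedge t_{100})$ is certain. The two pulls just described track the first and last steps: each $t_i$ is very nearly certain, yet none is certain.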

This is not an unusual phenomenology for paradoxes, I take it. A standard way of setting up a paradox consists of giving a set of individually intuitive premises which are jointly inconsistent. In that case, one might well feel both the intuitive pull toward premise 1, considered on its own, and the intuitive push away from premise 1, by thinking about the consequences of the other premises. For example, consider this way of setting up a sceptical paradox. It's intuitive (1) that one knows one has hands, (2) that one cannot know one has hands unless one can independently show that there's an external world, and (3) that one cannot independently show that there's an external world. Thinking about (1) by itself and thinking about (2) and (3) together can leave us feeling squeezed from both sides. One might even claim that nobody would be bothered by this sceptical paradox unless one felt both intuitions – so we might complain that HRS have left out an important half of the picture.

Putting a finer point on it, let's grant all three of the following claims contained in the HRS quote above: many argue one cannot felicitously assert that one's ticket will lose; intuitively, one may believe one's ticket will lose; and the lottery paradox won't bother anyone who doesn't feel the intuition that one may believe one's ticket will lose. I say this is compatible with the following further claims: it is also intuitive that one may assert that one's ticket will lose; it is also intuitive that one may not believe one's ticket will lose. If we add these two claims, we can manage to be bothered by the lottery paradox without giving up entitlement equality.

4. Neg-Raising

HRS observe that believes and thinks are canonical examples of triggers for what linguists call neg-raising (NR). That is, negated belief-attributing sentences tend to be understood as attributions of belief in a negated proposition. Thus, I don't think she's coming tends to be understood as I think she's not coming. HRS claim that “only weak mental state verbs allow neg-raising” (p. 1399), noting that want, like,Footnote 9 advise, and recommend allow NR, but need, love, command, demand, and order do not, nor do is certain that or is sure that. (We might add to the list of canonical NR triggers intend, which HRS may or may not regard as weak.)

I take it this is a straightforward argument by analogy: other NR-triggering mental state verbs in English denote weak states and other non-NR mental state verbs denote strong ones; therefore, we should expect that the NR triggers believe and think also denote weak states.Footnote 10 If this is right, we can conceive of NR as a test of weakness: if a VP triggers NR, we have (defeasible) reason to conclude that VP is weak. In §6, I'll accept the idea and reply by applying the NR test to other ordinary belief-attributing VPs I claim fail the test. But first, let me lodge two objections to the argument by analogy underwriting NR as a test of weakness.

First, it's not clear what the analogue of HRS's thesis would be for the other mental states listed. For belief, “weak” is to be understood as “subject to a weaker epistemic norm than the norm on assertion”. But what would be the parallel for liking/loving, wanting/needing, advising/commanding? One might suspect that the verbs on HRS's two lists pair off in scales. Being certain might be seen as the same sort of thing as believing, only stronger. Likewise, a command might be a particularly forceful sort of advice. And indeed if Romoli (2013) is correct, NR is best understood as a kind of scalar implicature – although the scales in question do not involve other (stronger) VPs than the NR-triggering one.Footnote 11 For example, with believes p, Romoli postulates a two-element scale: believes p and has an opinion whether p. Perhaps we could extend Romoli's scale by adding elements beyond belief, such as being certain that p. But there are several further problems here. For one thing, if we understand “weak” for these other verbs as “occupying the lower/weaker position in a scale”, then the claim about believes/belief becomes trivial, since everyoneFootnote 12 agrees certainty is stronger than mere belief. Moreover, unlike with typical scalar implicatures, the hypothetical scales I've just suggested are not all ordered by generalized entailment. Need does not entail want, love does not entail like, and so on.Footnote 13
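
For readers unfamiliar with the mechanics, here is a minimal sketch of how an opinionatedness (excluded-middle) premise yields the NR reading – a simplification of Romoli's account, on which the middle premise is itself derived as a scalar implicature rather than stipulated. Writing $B$ for believes:

$\neg Bp$ (the literal content of S doesn't believe that p)

$Bp \vee B\neg p$ (opinionatedness: S has an opinion whether p)

$\therefore\ B\neg p$ (the neg-raised reading: S believes that not-p)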

It is not enough simply to say that there is a scale, with one (neg-raising) attitude in the lower/weaker position and another (non-neg-raising) attitude in the higher/stronger position. After all, it's uncontroversial that there is a stronger doxastic attitude than belief: being sure or certain. Perhaps HRS's claim is not just that belief occupies a lower position on the relevant scale, but that it occupies the lowest position. After all, they suggest that believing p is no weaker than believing p likelier than salient alternatives. (See §7 below.)

My second main objection is that we should be very cautious about making inferences about attitudes from evidence about NR-triggering attitude VPs in English. There is cross-linguistic variation in which verbs trigger NR: for example, English hope is an NR trigger, but German hoffen is not. Perhaps this only shows that native English speakers’ hopes are weaker than Germans’, but even if there is a genuine difference of mental state concepts across languages, so much the worse for English hope as denoting “the ordinary concept” of hope.

Indeed, there is cross-linguistic variation in NR behaviour among belief-ascribing verbs (Horn 1989: 322):Footnote 14

Among verbs of opinion, Hebrew xosev ‘think’ is an NR trigger, but maamin ‘believe’ is not; the opposite pattern obtains in Malagasy. NR in Hindi applies to complements of lagnaa ‘seem’, but not of soocnaa ‘think’ or of X-koo khvaal hoonaa ‘have the opinion’, and to caahnaa ‘want’ in Equi contexts … but not with unlike subjects …, and so on.Footnote 15

Even sticking with English mental state verbs, Horn writes (1989: 321):

As is well known, the availability of NR understandings is subject to semantically unmotivated lexical exceptions. In English, suppose neg-raises on its parenthetical reading for all speakers, but guess does so only for some (I don't {suppose/%guess} Lee will arrive until midnight). Want neg-raises freely, wish somewhat less so, and desire only with difficulty; the same pattern obtains for expect and anticipate. It is hard to detect any relevant non-ad hoc semantic or pragmatic distinction between want and desire, between expect and anticipate, between parenthetical uses of suppose and guess, which could account for this distinction.Footnote 16

The fact that NR behaviour is to this extent conventional suggests we should be careful about inferring facts about belief from facts about whether believe triggers NR. Note, though, that Horn has given us counterexamples only to one direction of a putative equivalence between NR and weakness: we have seen “weak” verbs failing to trigger NR, but no “strong” verbs triggering NR. So perhaps we can still treat NR as a test of weakness.

In §6, I will give another objection to HRS's neg-raising argument, construing NR as a test for weakness which believe and think pass; I'll respond by offering other ordinary belief-attributing VPs which fail the test. Note that I am not construing failure to trigger NR as a test of strength. I aim thereby to defend, not establish, the claim that there is an ordinary concept of belief weaker than certainty but stronger than believing-likely.

5. Weak Upper Bounds

Before I discuss those alternative VPs, there is another test of weakness HRS offer, namely, what I'll call “weak upper bounds”. That is, HRS claim that ordinary belief is no stronger than some other states which are more clearly weak. This will furnish another test of weakness we can subject our alternative VPs to.

First, HRS claim that believing p is no stronger than thinking that p or being of the opinion that p, because the following are intuitively contradictory:Footnote 17

(3) ?? Tim thinks it's raining, but he doesn't believe that it is.

(4) ?? Tim is of the opinion that it will rain, but he doesn't go so far as to think/believe that it will.

More tentatively, HRS suggest that the following more clearly weak states appear to be as strong as belief: suspecting, having some confidence, half-expecting, being tempted to think. The evidence for this claim is similar – namely, that the following four statements all sound contradictory, or at least odd – but HRS concede that the “status of the judgments here is not sufficiently clear” for them to “take a firm stance here on whether, in fact, it is possible to suspect/half-expect/be tempted to think/have some confidence that p without thinking p” (p. 1399).

(5) ? Tim doesn't actually think that John stole the painting, but he suspects that he did.

(6) ? Tim has some confidence that it will rain, but it's not that he thinks it will rain.

(7) ? Tim half-expects that it will rain, but it's not that he thinks it will rain.

(8) ? Tim is tempted to think that it will rain, but it's not that he thinks it will rain.

Interestingly, following these observations, HRS write, “These examples suggest that, at least for some, ‘believe’ is a bit like ‘open’: when something is open to any degree it is open, when you believe something to any degree you believe it”. Logins (2020) suggests, based on a very similar analogy, that believes (or rather, is confident that, which Logins takes to capture the ordinary notion of belief) has two senses, a minimal one and a maximal one. That is, just as open is a minimal gradable adjective, there is a reading of confident where having a minimal degree of confidence that p entails being confident that p. But, as Kennedy (2007) notes, open also allows maximal interpretations, such that something must have the maximum degree of openness to qualify as open simpliciter. Kennedy illustrates the two interpretations with the following pair of examples (p. 38):

(9) If the airlock is open, the cabin will depressurize.

(10) The ship can't be taken out of the station until the space door is open.

If I am a member of the crew of the starship Enterprise and I do not understand [9] to be a warning that any amount of opening of the airlock will result in depressurization, then I am a danger to the ship and crew. Likewise, if I am the helmsman and fail to understand [10] as a[] prohibition against trying to leave the station before the space door is completely open (here the space door refers to the door of the space station, which the ship needs to pass through in order to get into space), I am again a danger to the ship and crew.

Logins argues that, similarly, there is an interpretation of confident which requires a maximal degree of confidence for the simple positive ascription is confident that p. HRS do explicitly deny that believe is ambiguous (i.e., that the word has both a weak and a strong sense); it's not clear to me whether they would count open as ambiguous in their sense. That is, we might well want to say that, while open can function as a maximal or minimal absolute gradable adjective, both readings refer to the same scale of degrees of openness, and therefore open has only one meaning.

I am persuaded by Logins’ argument that confident behaves as an absolute gradable adjective whose underlying scale has both maximal and minimal elements. I am less persuaded that ordinary concepts of belief track ordinary language confident as closely as Logins suggests. It seems to me that one can believe something without being at all confident it is true; confidence tracks subjective feeling more closely than belief does, as I see it. I won't insist on this divergence (I'm not very confident of it); instead I'll focus my attention elsewhere, on other VPs I claim function to attribute belief.

But before we move on, we can improve on HRS's weak upper bounds test. The specific sentences 5–8 have some potentially confounding details. I see four removable problems here: a possible NR reading in 5, the repetition and the awkward it's not that construction in the other three sentences, and the choice of complement in all four. Because these problems are removable, I won't put a lot of effort into arguing that these are problems; rather, I'll briefly explain why I see potential trouble here and how to improve on HRS's test, then apply the improved test in what follows.

Depending on context, Tim doesn't actually think p could be read as Tim actually thinks ¬p. (For example, consider a context where actually functions as a hedge.) On that reading, the sentence would say “Tim actually thinks that John didn't steal the painting, but he suspects that he did.” It could be that it's incoherent to think ¬p while suspecting p, even if one can coherently suspect p without thinking p – and the latter is what HRS want this sentence's badness to suggest. So we should be careful to find ways of denying that Tim thinks/believes something which do not trigger NR; my solution, in the next section, will be to use VPs that do not trigger NR, so we can use simple negation.

The remaining three sentences avoid this problem by using the it's not that construction to deny Tim's thinking it will rain.Footnote 18 But, at least in my dialect, this construction itself is awkward,Footnote 19 typically signalling a sort of metalinguistic negation, with the implicature that the following words aren't quite right. This awkwardness might contribute to judgments of the sentence's badness, irrespective of whether Tim thinks it will rain. So we should avoid the it's not that construction if possible.

The latter three sentences also repeat the full complement it will rain in both clauses. This wordiness could be a source of the oddness HRS report. At the least, the sentences wind up sounding rather formal; when there is a shorter paraphrase readily available, they seem to violate the Gricean maxim “be brief” (Grice 1975).

Finally, the complements that John stole the painting and that it will rain are, in slightly different ways, potentially ill-chosen. Thinking about whether John stole the painting suggests a context where Tim is a detective trying to determine who stole the painting. I agree that thinks that and suspects that pick out the same mental state in such a context, but I think that tells us more about the sleuthing context than it does about belief in general. That is, we are imagining a context where it is presupposed that Tim has inconclusive evidence about who stole the painting. In that case, unless Tim is peculiar or irrational, he will not have any stronger attitude than suspecting John is the thief. (We might also worry that, even for laypeople, suspecting in this context has a special meaning.)

Similarly, propositions about weather in the near future are typically very uncertain, even with the most up-to-date evidence.Footnote 20 So, again, unless Tim is peculiar or irrational or a professional meteorologist, we might not expect him to have any stronger attitude than having some confidence/half-expecting/being tempted to think it will rain. So the denial that he thinks that it will rain might sound odd – an affirmation of a proposition in the common ground. Nor is the problem just with the future tense. In other example sentences, HRS use the complement that it is raining. This suggests a context where Tim can't see or feel the weather directly – perhaps he is in a windowless room far from the sound of possible rain outside, and has been there for some time. But in that case, it's again strange to have strong attitudes to the target proposition.

So we can improve the weak upper bounds test by choosing complements which do not invoke a context of presupposed uncertainty. Likewise, of course, we should avoid complements which invoke contexts of presupposed certainty: religious or ideological claims. “Isaac thinks the universe is ruled by God's immutable laws” or “Thomas believes all people are created equal” introduce extra baggage prejudicing us toward a strong interpretation of the VP.

Of course, there is no perfectly neutral proposition to choose such that attributing or denying belief in it suggests no information about the conversational context, and any such contextual information may confound our judgments about the meaning of thinks that. But we can do better by choosing a proposition without obvious ideological or religious weight, which people typically cannot verify easily (as with that it is raining), but which is not typically deeply uncertain (as with that it will rain). Trivia might do the trick (e.g., that there have been six Kings George of England), although these suggest a quiz context, which is a very specific sort of thing. To weaken the suggestion, we could choose facts one can easily imagine making a difference to everyday action: maybe that it's illegal to turn right on a red light in Michigan. (Context: imagine a non-driver from Canada who knows just enough to realize little legal details are often different in the USA.)

Summing up: I propose to improve on HRS's weak upper bounds test by avoiding NR-triggering negations of thinks that while also avoiding the awkward it's not that construction (admittedly difficult without substituting a non-NR VP), avoiding unnecessary wordiness/repetition, and choosing neutral complements. Now we are ready to look for alternative phrases to subject to these tests.

6. Fifty Ways to Believe Your Lover: Other Ordinary Ways to Attribute Belief

It is widely acknowledged among philosophers interested in ordinary language belief ascriptions that think is often more natural than believe. Nagel (2021) provides some evidence beyond the armchair: “in a balanced corpus of written and spoken English, these are the 12th- and 50th-most common verbs, respectively (Davies and Gardner 2010, 317), and in spoken language, ‘think’ is more than six times as common as ‘believe’ ([Corpus of Contemporary American English], accessed November 13, 2018)”.

But expanding our focus beyond believe only to include think shows a lack of imagination. Ordinary English provides a myriad of ways to attribute belief to oneself and others. I recognize that “be more imaginative” is difficult advice to enact, so I'll suggest here a method of finding alternative belief-attributing expressions: searching Bible translations.

Two facts about the Bible are salient here; one is a reason for caution, the other is the reason why it is an incomparably useful resource. That is, the Bible is both a religious text and one that has been translated into English more often than any other. I suggest using Bible translations to find alternative VPs because of the number of translations and in spite of its being a religious text.

Because the Bible has been translated so often, and because there are many freely available resources for searching and comparing translations, these translations can more reliably provide the services of a thesaurus. A thesaurus might provide a list of synonyms for believe or think, but these may not all work in a given sentence or context. (Reading a certain kind of student essay drives this point home.) But looking at multiple translations of a given sentence into (more or less) contemporary English gives multiple ways of aiming to capture precisely the same meaning. Of course, sometimes differences between translations reflect disputes over the correct interpretation of the original Hebrew or Greek text, so some caution (and attention to the original) is called for. In general, my concern is not with the correct interpretation of the Bible, but with the sorts of words translators use to try to communicate the same or similar ideas. Happily, there are (to put it mildly) quite a few translations we can compare.

But the other salient fact about the Bible – that it is a religious text, and in particular one often concerned with religious belief – is a reason for caution. There is controversy over whether religious belief is, in fact, belief at all (Van Leeuwen 2014). For this reason, I will take belief-ascribing VPs in Bible translations only as a starting point; I will only conclude that a VP can be used to ascribe strong belief in ordinary English if I can find contemporary, non-religious examples.

To begin, here are some examples of what I take to be clearly non-religious belief ascriptions from both the Hebrew and Greek Bible. All of these translations are from the English Standard Version (ESV):

(11) Job 9:16 “If I summoned him and he answered me, I would not believe [אאמין, ’a'amin] that he was listening to my voice.”Footnote 21

(12) Job 35:2 “Do you think [חשב, chashab] this to be just? Do you say, ‘It is my right before God,’ […?]”

(13) Judges 15:2 “And her father said, ‘I really thought [אמרתי, amarti] that you utterly hated her, so I gave her to your companion. Is not her younger sister more beautiful than she? Please take her instead.’”

(14) John 9:18 “The Jews did not believe [ἐπίστευσαν, episteusan] that he had been blind and had received his sight, until they called the parents of the man who had received his sight.”Footnote 22

(15) Matthew 10:34 “Do not think [νομίσητε, nomisēte] that I have come to bring peace to the earth. I have not come to bring peace, but a sword.”

(16) Luke 8:18 “Take care then how you hear, for to the one who has, more will be given, and from the one who has not, even what he thinks [δοκεῖ, dokei] that he has will be taken away.”

None of these examples ascribes belief in the sense of “belief in God”. All but one take a propositional complement in the form of a that-clause. The exception is of the form … think X to be Y. From these examples, we can see that Biblical Hebrew and Greek have multiple words that can be translated as think or believe in English. This by itself might lead us to be suspicious of inferences from linguistic facts about English to facts about the ordinary concept of belief – it might be that English runs together distinctions present in Biblical Hebrew or Greek. But that is not the argument I intend to pursue here. Instead, let's turn from giving multiple examples from a single translation to looking at multiple different translations of a single passage.

Here is Romans 3:28 in the ESV:

(17) Romans 3:28 “For we hold [λογιζόμεθα, logizometha] that one is justified by faith apart from works of the law.”

Other translations use a wide variety of VPs to translate logizometha:Footnote 23

  • we hold that: ESV, AMPC, MOUNCE, NMB, NRSV

  • we hold the view that: CJB

  • we maintain that: NIV, NASB, ISV, WEB, AMP

  • we conclude that: KJV, CSB, GNT, GW, JUB

  • EXB offers both maintain and conclude as alternatives, as well as assert.

  • we see that: CEV, PHILLIPS

  • we consider that: NET

  • we determine that: Aramaic Bible in Plain English

  • we reckon that: ASV, DARBY, RV

  • we firmly believe that: NIRV

  • we calculate that: NTE

  • we know that: WE

All of the above constructions take a that clause as complement. The following translations use a direct object and an infinitive copula, as in “I consider him [to be] a friend”.

  • we account X to be Y: DRA

  • we consider X to be Y: DLNT, LEB

  • we reckon X to be Y: YLT

  • we deem X to be Y: WYC

And the following (looser) translations use a complete sentence for logizometha:

  • We've finally figured it out: MSG

  • This is what we believe: ERV

  • This is what we have come to know: NLV

  • So our conclusion is this: TPT

Finally, some translations (ICB, TLB, NCV, NLT) don't include anything clearly corresponding to logizometha. This choice makes sense if we understand logizometha as merely communicating something arguably superfluous like we assert that.

I certainly don't want to claim that all of the VPs above attribute belief in ordinary English, or that the author of Romans was clearly self-attributing belief. I'm using Bible translations as a source in the context of discovery rather than justification, so to speak. But I do think some of these constructions can be used to attribute belief in ordinary English, and that they fail HRS's tests of weakness. So my next steps will be to give contemporary examples of some of these constructions which are clearly attributions of some doxastic attitude, and to submit the VPs in question to HRS's tests. The constructions I'll use are is satisfied that, has concluded that, maintains that, and holds that.Footnote 24

Here are some contemporary examples of each phrase; I found some of these in the Corpus of Contemporary American English (COCA),Footnote 25 and others through my own web searches. The underlining in each quote is mine.

(18) I'll agree there was outright fraud going on … [sic] I'll also still maintain that the investors did not know what they were actually agreeing to, because of the fraud.Footnote 26

(19) Contrary to popular belief, Judaism does not maintain that Jews are better than other people. Although we refer to ourselves as G-d's chosen people, we do not believe that G-d chose the Jews because of any inherent superiority.Footnote 27

(20) We hold that tattooing is purely expressive activity fully protected by the First Amendment …Footnote 28

(21) I hold that The Departed is the Best Picture and Best Director of 2006.Footnote 29

(22) I have only one complaint about Sachs’ Project Syndicate piece. It does not hold that the policy cliques, intelligence services and pols in Washington could conceal transgressions as gross as those the U.S. and its European and Arab allies have incessantly committed in Syria.Footnote 30

(23) The simulation was run over and over again until the developers were satisfied that their game bot had evolved the desired characteristics and behavior.Footnote 31

(24) If Ellsbury comes out on fire in 2013 and the Sox are either out of contention or satisfied that Jackie Bradley, Jr. is the future and the future is now (doubtful, but who knows?), then you think about trading him.Footnote 32

(25) [The aging and injured pitcher Mel Stottlemyre, after the New York Yankees cut him in 1975:] “I'm really shocked,” the righty said. “In the back of my mind I know that I haven't given up. I'm still not satisfied that I can't pitch.”Footnote 33

(26) When I tested the Fuji X-Pro1 in the past, I concluded that it was a firmware update and a price drop away from being a great camera. I think the summary applies perfectly for this camera, too.Footnote 34

(27) Some progressive journalists concluded that Romney's religion actually might have a much larger impact on his policy views than many people would expect – see for example the extensive report published on September 15 at ‘Think Progress’: …Footnote 35

(28) I'm not an anarchist any longer, because I've concluded that anarchism is an impractical ideal.Footnote 36

(29) Abby has not concluded that the other metal that MacDonald's sword came in contact with came from another sword.Footnote 37

The two tests of weakness we identified above from HRS's arguments are what I've called neg-raising and weak upper bounds. So we want to check, for each construction: whether it triggers NR; and whether sentences of the form Tim Ψs that p, but he doesn't Φ that p sound contradictory, where Ψ is clearly weak and Φ is the construction being tested.

Let's take the NR test first. The weak upper bounds test can be more cleanly applied if we first establish that our VPs do not trigger NR. And indeed, none of the four VPs I've chosen trigger NR readings:

(30) Tim isn't satisfied that it's legal to turn right here. ⇏

     Tim is satisfied that it's not legal to turn right here.

(31) Tim hasn't concluded that it's legal to turn right here. ⇏

     Tim has concluded that it's not legal to turn right here.

(32) Tim doesn't maintain that it's legal to turn right here. ⇏

     Tim maintains that it's not legal to turn right here.

(33) Tim doesn't hold that it's legal to turn right here. ⇏

     Tim holds that it's not legal to turn right here.

In each pair, the second sentence does not follow from the first: NR is not triggered.

Now we can move on to the weak upper bounds test. Since our VPs do not trigger NR, we can avoid the awkward it's not that construction in setting up our contrasting sentences:

(34) Tim suspects that it's illegal to turn right here, but he hasn't concluded that it is.

(35) Tim half-expects that he'll get a ticket for turning right here, but he hasn't concluded that he will.

(36) Tim has some confidence that it's legal to turn right here, but he doesn't maintain that it is.

(37) Tim is tempted to think that it's legal to turn right here, but he isn't satisfied that it is.

(38) Tim is of the opinion that it's legal to turn right here, but he doesn't go so far as to maintain that it is.

(39) Tim thinks it's legal to turn right here, but he isn't satisfied [that it is].

All of these are fine, and remain fine if we substitute one of the VPs being tested for another. I conclude that HRS's arguments do not show that the VPs has concluded that, holds that, maintains that, is satisfied that are weak. Therefore, since these are belief-attributing expressions, we should not overturn the traditional view that there is an ordinary concept of belief weaker than certainty but stronger than merely thinking likely.

Objection: These expressions are stronger than thinking likely, but that's because they attribute certainty rather than belief.

Reply: On the contrary, the following sentences with concluded that and satisfied that are fine, and mutatis mutandis for hold that and maintain that:Footnote 38

(40) Tim has concluded that he can turn right here, but he isn't certain.

(41) It's impossible to be certain without invasive testing, but nevertheless we are satisfied that you are fit to play. Welcome back to the team.

Objection: holds that and maintains that don't attribute belief, but rather speech behaviour. One who holds or maintains that p is one who argues for p, advances p in dialogue, and so on.

Reply: On my view, it should be difficult to distinguish predicates of speaking from predicates of thinking, since the thesis I aim to defend here is a sort of equivalence between assertion and belief. Reason to think these VPs attribute speech behaviour is not reason to think they do not attribute belief. I do not see positive reason to deny that they function to attribute belief in the examples I've given.

Objection:Footnote 39 Much more needs to be done to show that there is an ordinary notion matching the philosophers’ notion of belief. That philosophical notion has all sorts of baggage loaded into it: knowledge entails belief; beliefs meeting certain conditions amount to knowledge; belief is subject to a knowledge norm; and so on. It has not been established that, if there is a notion common to ordinary hold, conclude, etc., it would be the notion philosophers are after.

Reply: This objection asks for too much. My goal has been to defend the claim that there is an ordinary notion of belief for which entitlement equality holds. There may well be other claims philosophers make about belief, even philosophically uncontroversial claims, which may or may not hold for this ordinary notion. Defending those claims is beyond the scope of this paper.

Objection: is satisfied that and has concluded that do not simply attribute belief. Not all beliefs are concluded from anything, nor do they all result from persuasion. If these expressions attribute a strong attitude, it is because they attribute the combination of belief and something else.

Reply: I do not claim that the strong notion of belief under discussion is simple rather than complex. Suppose the basic vocabulary of folk epistemology includes concepts of various sources of belief, of varying strength. For example, we might think that witnessing something firsthand licenses or causes a stronger opinion than thirdhand testimony. Weak evidence might license weak opinion, while strong evidence permits strong opinion; satisfied that and conclude that language might be permitted only when the evidence is strong enough. Then even if the basic vocabulary includes only a weak concept of belief, we should expect folk epistemology to be able to recognize a version of HRS's entitlement thesis: One might be entitled to believe p without being entitled to assert it, but one is entitled to assert p if and only if one is entitled to conclude that p – meaning one believes that p and has come to believe it as the result of a decisive weighing of evidence.

If it turns out that there is an ordinary concept fitting the description of what philosophers call “belief”, but which is not the denotation of ordinary belief, I don't see much reason for philosophers to care. The objection then is not, as HRS say, that the philosopher's notion of “outright” or “full” belief is not grounded in ordinary talk, but rather that it should not be called “belief”, because there is a different ordinary notion going by that name.

7. Weak Belief: Guessing and Thinking Likely

HRS do tentatively offer a positive suggestion about the ordinary concept of belief: believing p requires only believing p is likely. “Likely” here could mean more likely than salient alternatives (but see HRS pp. 1400ff. for complications). I don't want to say much against HRS's positive account of ordinary belief. My main contention here is that there is an ordinary notion of belief for which entitlement equality holds, not that there is no other ordinary notion of belief for which entitlement equality fails. On the contrary, I think there are many belief-like ordinary concepts – so let a hundred flowers bloom.

But I have worries I think are worth raising about the arguments for HRS's positive view, both in their 2016 paper and in Dorst (2019), Dorst and Mandelkern (2021), and Holguín (2022). I take it the most compelling arguments for the view of believing as thinking likely (HRS, Dorst) or guessing (Dorst, Holguín, Mandelkern) come from considering cases such as the following, which HRS attribute to Jeremy Goodman (Hawthorne et al. 2016: 1400):

To take Goodman's example, consider a three-horse race. Assume that horse A is more likely to win than horse B which in turn is more likely to win th[a]n horse C (so the probabilities of winning could be known to be 45, 28, 27%). In this case it seems fine to say ‘I think horse A will win’ or ‘I believe horse A will win’.

HRS go on to suggest that it might also be right to say one thinks it likely horse A will win, and so that appropriately believing p and appropriately thinking p likely are compatible with p being more likely false than true. Holguín (2022: §3) supplies further cases of this sort: “We think things about the weather, upcoming elections, unsolved murders, mathematical conjectures, and so on – even when we know full well that the evidence for our opinions on these matters is far from decisive.”
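
It is worth making the arithmetic of Goodman's case explicit (this is my gloss on the numbers HRS supply, not their own presentation): with the probabilities as given,

$\Pr(\text{A wins}) = 0.45 < 0.55 = \Pr(\text{B wins}) + \Pr(\text{C wins}) = \Pr(\text{A loses}),$

so if believing that horse A will win is acceptable here, belief is acceptable toward a proposition one takes to be more likely false than true.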

As I suggested in §5, I think these are peculiar cases: they are all cases where we naturally take it for granted that one does not have conclusive evidence. Using the language of guessing from Holguín, Dorst, and Mandelkern, these are the sorts of examples where one expects nothing more than a guess, but that does not show that one can never do better than guessing, or expect more than a guess.

Holguín explicitly argues for the methodological use of felicity judgments about thinks statements as evidence for the rationality of the attitude expressed thereby (2022: §2):

There is no impression that I represent myself as irrational in using [“I think he'll lose”] to answer your question about what I think. And we know that we often do detect when a person making a ‘thinks’-report would have to be irrational for the report to be true.

I'd agree with this methodological point in general, but if we add a bit of detail to the picture, we can see a reason to doubt that these intuitions are trustworthy guides to rationality in the sorts of cases at hand. That is, you might think (a) that intuition is a good guide to which sorts of statements are normal or usual, and also (b) that normal, usual beliefs are usually rational. These two claims, taken together, are enough to establish that intuition is a trustworthy guide to rationality in general – and I agree that it is. But if we have independent reason to think, for some more specific domain, that (b) is false – that normal, usual beliefs in that domain are very often irrational – then even if (a) holds and intuition still reliably tells us which beliefs are normal or usual, we should not trust it as a guide to rationality. We have reason to think that people are not very good at reasoning with probabilities or dealing with uncertainty. There is controversy about whether examples like the feminist bank teller show straightforwardly that ordinary people make irrational probability judgments, e.g., committing the conjunction fallacy, but the controversy itself shows, at least, that determining what judgments lie behind ordinary people's statements in such cases is not straightforward. (See, e.g., Kahneman et al. 1982.)
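
The probability-theoretic point at issue in the feminist bank teller case is simply that a conjunction can never be more probable than either of its conjuncts:

$\Pr(\text{bank teller} \wedge \text{feminist}) \le \Pr(\text{bank teller}).$

The controversy is over whether subjects who rank the conjunction above the lone conjunct are genuinely violating this inequality, or are instead answering a subtly different question (about relevance or typicality, say) – which is why such cases make ordinary statements a tricky guide to the underlying judgments.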

I once listened to a baseball podcastFootnote 40 where one host, Arden, asked the other, Ben, what he thought would happen in the first major league plate appearance of a celebrated Blue Jays prospect, Vladimir Guerrero, Jr. This conversation took place well before anyone knew the date Guerrero would make his debut, much less any specific details about it, such as the ballpark or opposing pitcher. Ben's answer was something along the following lines: “Well, I think he has an n percent chance of getting a hit, with an m percent chance of a home run, and an l percent chance of striking out, [etc.].” This was not a satisfying answer to Arden's question. (Arden's own answer, as I recall, was that Guerrero would walk – which was not the most likely of the possible outcomes explicitly listed.)

There was some back and forth about this – it seemed to me that both hosts agreed on Guerrero's minor league statistics, agreed on a general system of projecting major league performance based on minor league statistics, and agreed on how well that system was suited to predicting Guerrero's performance in particular, given some unmeasurable observations of his play. That is, it seemed to me they agreed on the probabilities of each of the possible outcomes – but that was the end of the story for Ben, whereas Arden had a guess about what would happen.

Here's the worry I think this story expresses about Holguín's and Dorst and Mandelkern's arguments about guessing: I can talk myself into Arden's view or into Ben's, but neither of these fits the picture of guessing on offer. That is, I can talk myself into the intuition that Arden is making a mistake or being irrational in guessing that Guerrero would walk, but only by talking myself into Ben's position, according to which we just have the probabilities, and adding a guess on top is simply a mistake. On the other hand, if guessing seems like a reasonable thing to do – something like choosing an outcome to bet on – I have trouble seeing why Arden is making a mistake betting on something other than the most likely outcome.Footnote 41 This conflicts with Dorst and Mandelkern's (2021) claims about “intuitively unacceptable” answers to a question about where Latif will go to law school: if the probabilities he will go to Yale, Harvard, Stanford, and NYU are, respectively, 38%, 30%, 20%, and 12%, Dorst and Mandelkern claim it is intuitively unacceptable to guess that he will go to Harvard, or that he will go to Stanford, or that he will go to NYU. (Dorst and Mandelkern elaborate: “To be clear, we are not claiming that people never have guesses like [Arden's]. Our claim is rather normative: there is something peculiar – something irrational – about guesses like this.” And my worry is, likewise, not just that Arden has this guess, but that it is normatively fine, irrational only insofar as guessing itself is irrational.)

To put the worry very bluntly: I do not share the intuitions Holguín, Dorst, and Mandelkern express, and I do not trust their intuitions or mine. I worry that our intuitions result from too much education about probability theory and too little empirical evidence about people's actual behaviour when asked to make guesses and to evaluate the rationality of each other's guesses in the sorts of scenarios on offer. Going back to Goodman's original example in HRS (2016), if it were so intuitively clear that the only acceptable guess is that horse A would win, simply because no other horse is more likely to win, bookmakers should be surprised whenever anyone bets on another horse. Or we should expect punters to say things like “I think A will win, but I'm putting my money on B”. To my ear, this sounds decidedly odd.

I began this section saying “let a hundred flowers bloom”; readers who recognize the phrase might be surprised if I don't conclude by denouncing my rivals as rightists and counterrevolutionaries. But I meant it sincerely! Although I baulk at too-bold claims that all believing is guessing, and I have worries about the intuitions used to support the theory of guessing being developed, I am excited by that theory. I hope it thrives, and I hope empirical work is done on folk intuitions around guessing. But, again, my main goal in this paper has been to defend the ordinariness of strong belief, to show that guessing is not the only ordinary notion of belief. We should not abandon the orthodox view that there is an ordinary notion for which entitlement equality holds.Footnote 42

Footnotes

1 Hawthorne et al. (2016). Hereafter, page references are to that paper unless otherwise indicated.

2 We should probably specify further: first-person singular belief attributions seem to have this function, but first-person plural attributions do not show the same behaviour.

3 Although, as I'll indicate shortly, I agree with Stanley's view of the use of I believe in the sentences under discussion, the final sentence is probably too strong: hedges don't have to be indicators of weak reasons. For example, speakers sometimes qualify their endorsement of a claim out of politeness, despite having strong reasons. See Fraser (2010) for more on the pragmatic uses of hedging.

4 For a systematic, broadly Gricean account of the function of hedges and disclaimers as means of speakers' protecting their reputations, see McCready (2015). McCready writes that speakers use hedges like I believe or I think as shields “to avoid being held responsible for the content of [the] utterance if it proves to be false, just as the goal of a disclaimer in an advertisement is to avoid being held responsible if the actual product is less satisfactory than the advertisement makes it out to be” (p. 39). From the hearer's side, McCready also explains how hearers should update their beliefs on receipt of a disclaimed or hedged assertion. Chapter 5 in particular is dedicated to explaining “how and why certain constructions can receive interpretations as hedges” (p. 113). Eliding considerable detail, McCready's proposal is that “we can get disclaimed interpretations due to pragmatic (Moorean) inconsistency and speaker reasoning about that inconsistency” (p. 145).

5 A referee worries that this argument might prove too much: if asserting that p represents the speaker as knowing p, then adding I know should also be needlessly prolix – but adding I know does not hedge in the way that adding I believe does. I think the difference can be explained by the factivity of know: adding I know can serve to emphasize the speaker's awareness of p without diminishing their commitment to p, as in “You're busy, I know, but can you do Φ for me?” or “Who's available to do Φ? S, I know, is unavailable.” That is, the calculation of an implicature in these cases takes a similar premise (the speaker is being needlessly prolix) but draws a different conclusion, because the superfluous words (know, believe) are different. The referee suggests, along with Benton (2011) and Benton and van Elswyk (2020: §2), that sentence-medial or sentence-final I know are generally infelicitous, heard as merely redundant. Although I think the examples I've given above are felicitous, it is no problem for my argument if there are contexts where the referee is correct: I predict those will be cases where there is no explanation available for why the speaker would add I know.

6 See Davis (2016: §2.3.2) and the Stanford Encyclopedia of Philosophy entry on “Implicature” (Davis 2019: §4) on types of implicature involving literal speech. Davis uses the terms “figures of speech” and “modes of speech”, respectively, for implicatures involving non-literal and literal speech.

7 Asked to retract the insult, Stevens is supposed to have said to Lincoln, “I believe I told you he would not steal a red-hot stove. I will now take that back.”

8 Grice (1975: 52) has a famous example of similar damning with faint praise: a letter of recommendation for a job candidate in philosophy saying little about the subject's philosophical ability, praising only their command of English and their punctuality. But one might reasonably think we academics have a duty not to give our students negative recommendations, even if strictly speaking we only say positive things; some days this makes Grice's sentence sound inappropriate to my ears, so I think it's not an ideal example for present purposes.

9 Like and love are slightly irregular cases, as HRS note (n. 15). Presumably, what they have in mind is that S doesn't like O tends to communicate S dislikes O, whereas S doesn't love O does not tend to communicate S hates O. But HRS do not make the intended claim explicit here.

10 Note that Rothschild (2020: 1353) concedes, for independent reasons, that the NR argument for belief's weakness in HRS's 2016 paper “clearly does not work”, and now sees NR as “just suggestive of the weakness of belief”.

11 Rothschild (2020: 1349, n. 9) briefly objects to Romoli's account of NR as insufficiently sensitive to variation across languages (and therefore better treated semantically than pragmatically). But in fact, despite “implicature” in the name, the theory of scalar implicatures Romoli adopts treats them as features of grammar, not pragmatics. See, e.g., Chierchia et al. (2012) for more.

12 The reader might suspect that Clarke (2013), Greco (2015), and Dodd (2017) are exceptions. On the contrary, although Clarke, Greco, and Dodd identify belief with maximal credence, they all deny that maximal credence entails certainty. Even Dodd, who calls his belief-as-credence-one view “(Certainty)”, denies this (see §4.1, pp. 4612–3).

13 A referee objects that there is something like a scale in these examples, since otherwise it would be hard to explain [the felicity of] sentences like the following:

  • I don't just like them, I love them.

  • I don't just want it, I need it.

But recall the equally felicitous pitch line for Hair Club for Men, “I'm not just the president, I'm also a client.” This adds also, but note that the following reversals without also remain felicitous:

  • I don't just love them, I like them.

  • I don't just need it, I want it.

I would suggest that sentences of the form I don't just X, I Y are felicitous when X does not entail Y. But for Y to be a stronger element in a scale of the relevant sort, it must entail X. This does not hold for the items in question, as the following show:

  • I love them, but I don't like them.

  • I need it, but I don't want it.

14 See also Horn (1978: 152, 188).

15 A referee informs me that Horn's claim about Hebrew maamin is wrong. I have not been able to refute or corroborate the referee's claim, but I have no reason to doubt their report. If the rest of what Horn says in this passage is correct, the overall point stands; nevertheless, we should avoid relying too heavily on Horn here.

16 Although these exceptions are “semantically unmotivated”, we can still say something about where to expect exceptions. Horn notes, citing Kiparsky and Kiparsky (1971), that factives never trigger NR (p. 323). Interestingly for present purposes, Horn also notes (pp. 338–9) that explanations of NR often tie it to politeness and hedging (cf. §2 above). “Since the same association of raised negs with politeness, hesitancy, and/or uncertainty has been observed (cf. Horn (1978)) in languages as diverse as Hindi, Japanese, Swahili, and Turkish, it appears to be inherent in the very nature of the [NR phenomenon].”

17 I share HRS's intuition about these specific sentences, but I worry that there may be other sentences with a similar contrast without the contradictory intuition. For example, the following sounds fine to me:

  • I think he's dead, but I won't believe it until I see the body.

(Context: the speaker is a detective searching for a missing person.) See Heiphetz et al. (2021) for an argument, based on linguistic corpus data and some experiments, that think and believe are not synonymous; see also Van Leeuwen et al. (2020).

18 Dorst (2019: §1) uses a variant, it's not as if.

19 It's not that p can sometimes sound natural, as in a sentence like “It's not that my licence is suspended, it's that I've had too much to drink. If I were sober, I'd drive.” But here, the speaker has not denied that their licence is suspended. They could coherently have continued “My licence is suspended, but I'd drive anyway if I were sober.”

20 There are exceptions, of course: in a desert or during a monsoon, reasonable people can be very certain about whether it will rain.

21 A referee notes that we should be cautious with examples from Biblical Hebrew: Lebens (2021) argues that the Hebrew Bible “has no word for belief” (p. 1). Lebens focusses in particular on the word in this verse, amen, which he argues expresses faith rather than belief. On the other hand, Lebens does not discuss the verbs in the two verses I quote next, chashab and amarti; nor is it clear which of Lebens’ non-doxastic uses of amen is plausibly at work in this verse. Nevertheless, I agree that caution is warranted here.

22 Another note of caution: I am reluctant to rely too much on examples with Greek pisteuo for precisely the reason Lebens (2021) gives for caution with Hebrew amen: although there are some examples, like this one, of pisteuo-that, pisteuo-in is much more common – especially in John. There is a distinct flavour, at least, of trust or faith in pisteuo; one might worry this lends itself to a strong reading precisely by invoking attitudes other than belief.

23 For brevity, I use the abbreviations listed on https://www.biblegateway.com/verse/en/Romans 3:28 (accessed 22 December 2019), which is also my source for these translations.

24 A referee suggests is convinced that might meet my aims here without raising some of the objections I will address below. The referee may be correct, but I avoid this particular example because is convinced that can sound too easily like is certain that, and I seek a kind of belief weaker than certainty.

26 Blog comment, 2010, at http://theoildrum.com/node/7137, accessed 26 August 2020.

27 http://www.jewfaq.org/gentiles.htm, accessed 26 August 2020.

28 Judge Jay S. Bybee of the US Court of Appeals for the Ninth Circuit, in Anderson v. City of Hermosa Beach, 621 F.3d 1051 (9th Cir. 2010).

30 “Syria's tragedy, America's crime: The collapse of national sovereignty”, Salon magazine, 25 February 2018, excerpted in COCA.

33 This Week in Baseball History, episode 83, 16 January 2019, available at https://thisweekinbaseballhistory.libsyn.com/episode-83-the-black-sox-lose-their-appeal-with-jacob-pomrenke, or wherever you get your podcasts.

36 Robert Anton Wilson, 1980. From https://en.wikiquote.org/wiki/Libertarianism, accessed 13 August 2020. Original interview archived at http://web.archive.org/web/20070610042641/http://www.rawilsonfans.com/articles/Starship.htm.

37 “The Immortals”, NCIS season 1 episode 4, 2003, quoted in COCA.

38 A referee objects that there are other patterns which my chosen VPs do not fit. In particular, “Mary's still at the party, I'm sure of it” sounds fine, whereas “Mary's still at the party, I conclude/hold/am satisfied/outright believe that she is” do not. But there are removable problems with all of these (putting aside outright believe, which is technical philosophese): the examples with conclude and hold are unnecessarily repetitive; the example with am satisfied is fine without the comma splice. The following all sound fine to my ear:

  • Mary's still at the party, I hold.

  • Mary's still at the party, I've concluded.

  • Mary's still at the party; I'm satisfied that she is.

39 Thanks to a referee for raising this objection.

40 This was an episode of Sportsnet.ca's At the Letters, hosted by Arden Zwelling and Ben Nicholson-Smith. I haven't been able to find the specific episode again, so take this story as a piece of fiction; resemblance between the fictional characters and the actual podcast hosts is hopefully a little more than coincidental.

41 The story would be different if there were explicit stakes: advantages or disadvantages to guessing right or wrong.

42 Thanks to audiences at Zhejiang University, Queen's University Belfast, the European Epistemology Network 2022 conference, and the workshop “Is Belief Weak?”, and to Kevin Dorst, Matt Mandelkern, Davide Fassio, Jie Gao, Tom Walker, Jeremy Watkins, Suzanne Whitten, several anonymous reviewers, and especially to Joe Morrison for helpful comments and encouragement.

References

Benton, M.A. (2011). ‘Two More for the Knowledge Account of Assertion.’ Analysis 71(4), 684–7.
Benton, M.A. and van Elswyk, P. (2020). ‘Hedged Assertion.’ In Goldberg, S. (ed.), The Oxford Handbook of Assertion, pp. 245–63. Oxford: Oxford University Press.
Chierchia, G., Fox, D. and Spector, B. (2012). ‘Scalar Implicature as a Grammatical Phenomenon.’ In von Heusinger, K., Maienborn, C. and Portner, P. (eds), Semantics: An International Handbook of Natural Language Meaning, pp. 2297–331. Berlin: De Gruyter Mouton.
Clarke, R. (2013). ‘Belief Is Credence One (In Context).’ Philosophers’ Imprint 13(11), 1–18.
Clarke, R. (2018). ‘Assertion, Belief, and Context.’ Synthese 195(11), 4951–77.
Davies, M. and Gardner, D. (2010). A Frequency Dictionary of Contemporary American English: Word Sketches, Collocates and Thematic Lists. London: Routledge.
Davis, W.A. (2016). Irregular Negatives, Implicatures, and Idioms. Dordrecht: Springer.
Davis, W. (2019). ‘Implicature.’ In Zalta, E.N. (ed.), Stanford Encyclopedia of Philosophy (Fall edition). https://plato.stanford.edu/archives/fall2019/entries/implicature/.
Dodd, D. (2017). ‘Belief and Certainty.’ Synthese 194, 4597–621.
Dorst, K. (2019). ‘Lockeans Maximize Expected Accuracy.’ Mind 128(509), 175–211.
Dorst, K. and Mandelkern, M. (2021). ‘Good Guesses.’ Philosophy and Phenomenological Research. https://doi.org/10.1111/phpr.12831.
Fraser, B. (2010). ‘Pragmatic Competence: The Case of Hedging.’ In Kaltenböck, G., Mihatsch, W. and Schneider, S. (eds), New Approaches to Hedging, pp. 15–34. Bingley: Emerald Group Publishing.
Greco, D. (2015). ‘How I Learned to Stop Worrying and Love Probability 1.’ Philosophical Perspectives 29(1), 179–201.
Grice, H.P. (1975). ‘Logic and Conversation.’ In Cole, P. and Morgan, J. (eds), Syntax and Semantics 3: Speech Acts, pp. 41–58. New York, NY: Academic Press.
Hawthorne, J., Rothschild, D. and Spectre, L. (2016). ‘Belief Is Weak.’ Philosophical Studies 173(5), 1393–404.
Heiphetz, L., Landers, C.L. and Van Leeuwen, N. (2021). ‘Does Think Mean the Same Thing as Believe? Linguistic Insights into Religious Cognition.’ Psychology of Religion and Spirituality 13(3), 287–97.
Holguín, B. (2022). ‘Thinking, Guessing, and Believing.’ Philosophers’ Imprint 22(6), 1–25.
Horn, L.R. (1978). ‘Remarks on Neg-Raising.’ In Cole, P. (ed.), Syntax and Semantics 9: Pragmatics, pp. 129–220. New York, NY: Academic Press.
Horn, L.R. (1989). A Natural History of Negation. Chicago, IL: University of Chicago Press.
Kahneman, D., Slovic, P. and Tversky, A. (1982). Judgment Under Uncertainty: Heuristics and Biases. New York, NY: Cambridge University Press.
Kaplan, M. (1996). Decision Theory as Philosophy. Cambridge: Cambridge University Press.
Kennedy, C. (2007). ‘Vagueness and Grammar: The Semantics of Relative and Absolute Gradable Adjectives.’ Linguistics and Philosophy 30, 1–45.
Kiparsky, C. and Kiparsky, P. (1971). ‘Fact.’ In Steinberg, D. and Jakobovitz, L. (eds), Semantics: An Interdisciplinary Reader, pp. 345–69. Cambridge: Cambridge University Press.
Lebens, S. (2021). ‘Amen to Daat: On the Foundations of Jewish Epistemology.’ Religious Studies. https://doi.org/10.1017/S0034412521000470.
Leitgeb, H. (2017). The Stability of Belief: How Rational Belief Coheres with Probability. Oxford: Oxford University Press.
Logins, A. (2020). ‘Two-State Solution to the Lottery Paradox.’ Philosophical Studies 177(11), 3465–92.
McCready, E. (2015). Reliability in Pragmatics. Oxford: Oxford University Press.
Nagel, J. (2021). ‘The Psychological Dimension of the Lottery Paradox.’ In Douven, I. (ed.), The Lottery Paradox. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781108379755.004.
Romoli, J. (2013). ‘A Scalar Implicature-Based Approach to Neg-Raising.’ Linguistics and Philosophy 36, 291–353.
Rothschild, D. (2020). ‘What It Takes to Believe.’ Philosophical Studies 177, 1345–62.
Staffel, J. (2016). ‘Beliefs, Buses and Lotteries: Why Rational Belief Can't Be Stably High Credence.’ Philosophical Studies 173, 1721–34.
Stanley, J. (2008). ‘Knowledge and Certainty.’ Philosophical Issues 18, 35–57.
Van Leeuwen, N. (2014). ‘Religious Credence is not Factual Belief.’ Cognition 133, 698–715.
Van Leeuwen, N., Weisman, K. and Luhrmann, T.M. (2020). ‘“Think” and “Believe” Across Cultures: A Shared Folk Distinction Between Two Cognitive Attitudes in the US, Ghana, Thailand, China, and Vanuatu.’ https://cogsci.mindmodeling.org/2020/papers/0137/index.html (accessed 21 July 2021).