
From Belief Polarization to Echo Chambers: A Rationalizing Account

Published online by Cambridge University Press:  07 June 2022

Endre Begby*
Affiliation:
Simon Fraser University, Burnaby, Canada

Abstract

Belief polarization (BP) is widely seen to threaten havoc on our shared political lives. It is often assumed that BP is the product of epistemically irrational behaviors at the individual level. After distinguishing between BP as it occurs in intra-group and inter-group settings, this paper argues that neither process necessarily reflects individual epistemic irrationality. It is true that these processes can work in tandem to produce so-called “echo chambers.” But while echo chambers are often problematic from the point of view of collective rationality, it doesn't follow that individuals are doing anything wrong, epistemically speaking, in seeking them out. In non-ideal socio-epistemic contexts, echo chamber construction might provide one's best defense against systematic misinformation and deception.

Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2022. Published by Cambridge University Press

1. Introduction

“Belief polarization” (BP) is the name for a class of phenomena in which subjects tend to become more entrenched or more extreme in their views following the exchange of information with others. BP is a matter of concern because it seems to undermine the epistemic value of public discourse, maybe even to the point of rendering it counterproductive. Similarly, BP can contribute to destabilizing our political institutions, insofar as the legitimacy of these institutions depends on broad public consensus about matters of political interest. Ordinarily, we would think that public discourse is among the primary mechanisms by which this kind of consensus would be forged.Footnote 1 BP appears to push us ineluctably in the opposite direction.

We know about BP from decades of studies in social psychology,Footnote 2 though the phenomenon itself also seems readily observable in our everyday lives. Psychology aside, how should we think about BP from a normative, philosophical point of view? At first glance, it can certainly seem like a paradigm of collective irrationality, in the sense that it places out of our reach certain public goods which really ought to be perfectly attainable. Once it becomes entrenched, BP all but ensures that we will be stuck with a noisy and antagonistic public sphere and inefficient governments struggling to prove their legitimacy.

But is this predicament also a matter of individual irrationality?Footnote 3 In this paper, I will be pursuing this question from the point of view of epistemic rationality in particular. Belief formation and maintenance clearly fall under epistemic norms. But more specifically, I will be placing the study of BP firmly in the context of social epistemology, which, one would think, is where it belongs, insofar as it involves public discourse and joint decision making.Footnote 4 As philosophers have long recognized, a crucial source of learning for cognitively limited agents such as us is the exchange of information and joint deliberation with others. By learning about what others believe, we can come to know about lots of things that we couldn't otherwise. Additionally, joint deliberation can serve as a crucial check on the quality of our own reasoning. Broadly speaking, then, we should take public discourse as a “learning opportunity,” and an important one at that. And from the point of view of epistemic normativity, it seems natural to expect that public discourse of this sort should generally lead one to moderate one's views, not to make them more radical. In other epistemic domains, at least, it seems reasonable to assume that, whenever we have at our disposal a suitably large set of sample opinions, the correct view should lie somewhere closer to the middle, not toward the margins.Footnote 5 Accordingly, when we observe widespread tendencies toward polarization, it is tempting to surmise that something must have gone wrong, epistemically speaking. Specifically, it would seem that BP can obtain at the “group level” only because individual group-members are responding incorrectly to the information that is provided to them through their exchanges with others, perhaps by letting pressures of social conformity, resentment, or other affective commitments get in the way of proper epistemic processing.

Nonetheless, this paper argues that these impressions can be misleading: instead, BP is the predictable outcome of cognitive behaviors that can be seen to comport well with grounding principles of socio-epistemic rationality.Footnote 6 I aim to show this by simple extrapolations from well-known results in other areas of social epistemology, in particular by drawing on insights from the literature on the epistemology of disagreement. BP, I will argue, can be the result of generally rational procedures for updating one's beliefs in the face of patterns of variable agreement or disagreement with others. (Of particular interest here is the fact that we tend to take evidence from peer agreement to confirm what we already believe while we tend to give evidence from disagreement no such privilege. How can this apparently selective approach to new evidence be epistemically rational? I aim to show not only that it can be rational, but that these are in fact naturally interlocking processes relating to how epistemic trust (and distrust) is formed and maintained in political communities.)

Nonetheless, one might worry that this updating procedure will lead to a pernicious form of echo chamber construction. While it is true that echo chambers can be pernicious in many contexts, this doesn't mean that individuals are necessarily irrational for falling into them. Instead, I will argue that some degree of echo chambering is a natural and inevitable by-product of any socio-epistemic process, at least in contexts where factual judgments and value judgments intersect, as they generally do in politics and other matters of public interest. Moreover, in socio-epistemic contexts that are already marked by a significant degree of inter-group antagonism, echo chamber construction may well be part of one's best strategy for maintaining a healthy access to truth-tracking evidence.

This element of antagonism places our argument not simply in the context of social epistemology but also of “non-ideal epistemology,” the branch of epistemology which aims to articulate norms for belief formation under a variety of situational parameters that predictably tend to frustrate our pursuit of true belief.Footnote 7 Quite simply, there's a kind of evidence out there that we must, as responsible epistemic agents aware of our own cognitive limitations, seek to take into account, namely others’ contributions to discourse about matters of shared interest. But at the same time, we have every reason to believe that these contributions contain a mixture of genuine information, mere noise, misinformation, and outright disinformation. Since simply withdrawing is not an option, we are, as epistemic agents, required to carve out epistemic policies suited to the regrettably non-ideal situation we find ourselves in. Specifically, we must find ways of distinguishing between trustworthy and non-trustworthy interlocutors. Given this task, and given the situation in which the task must be carried out, it cannot be held against us that we adopt belief-updating policies which predictably lead to BP and echo chamber formation, even as we recognize that we thereby run the risk of further contributing to our collective predicament.

2. Two dimensions of belief polarization

To get us started, I loosely described BP in terms of the observation that people tend to become more entrenched or more extreme in their views as a result of exchanges of opinion with others. In other words, these exchanges tend to move them farther toward the “pole” rather than closer to the center, as one might otherwise have hoped or expected.

Moving forward, it will be important to be more specific here. Much of the psychological literature is strictly concerned with BP as it arises in the context of what we can call “intra-group deliberation.” Take some group of relatively like-minded people, set them to discuss some (controversial) issue around which they already tend to agree. Now observe how, as a result of this discussion, they will, collectively and individually, end up espousing a more extreme version of the view that they started out with.

Now, if we could take BP to be exclusively or predominantly a result of intra-group deliberation, then, theoretically at least, the remedy would seem ready at hand: we can seek to correct the bias arising from intra-group deliberation simply by ensuring that people also have ample opportunity for inter-group deliberation, i.e., the exchange of information with others who don't already share their views.Footnote 8

But things are not so simple: in light of developing trends in public discourse over the last decade (if not more), it seems plausible that people are no less prepared to polarize following exchanges with others who hold very different, even diametrically opposed views. For instance, climate change skeptics don't typically become less entrenched in their views after it's pointed out to them that the overwhelming majority of climate scientists believe that climate change is very much real and is the measurable consequence of human activity. Similarly, on the other end of the political spectrum, proponents of organic farming, for example, don't always take kindly to reminders that GMOs are generally safe and may be part of our best strategy for ending food insecurity in the developing world.

Let us call these phenomena “intra-group” and “inter-group” BP, respectively. The two are not always fully distinguished in the literature, perhaps as a result of the natural supposition that they must, one way or another, be related. In this paper, I will eventually aim to substantiate and vindicate this supposition, though only by way of treating them initially as distinct.

As it will turn out, providing a rationalizing account of intra-group BP alone might actually be quite straightforward. The bigger philosophical challenge lies in extrapolating from similar principles to establish the rationality of inter-group BP as well. As I will argue, while these are nominally independent processes, they are nonetheless processes that we should expect to move in tandem – like interlocking cogwheels – to produce the phenomenon that we are concerned with: BP on a large scale, undermining the value of public discourse and potentially destabilizing our political institutions.

(Finally, and before we start, a note on this paper's aim of providing a “rationalizing account” of BP. Much of the psychological literature proceeds on the supposition, typically not articulated or defended in any great detail, that BP manifests some kind of epistemic irrationality, for instance by way of subjects letting affective bonds distort their epistemic processing, or updating their beliefs merely to seek the approval of others, in a bid to maintain or strengthen their sense of social identity. Some contributors to the philosophical literature follow in these steps.Footnote 9 In developing a “rationalizing account” of BP, one shouldn't be taken to suggest that these sorts of factors are never in play, or that they might not be causally relevant in any particular instance of BP. Instead, the aim is just to show that BP could be the result of perfectly rational belief-updating procedures (or perfectly rational for limited epistemic agents such as us). In this sense, BP is very much part of non-ideal epistemology.Footnote 10 But the relevant concessions to non-ideal epistemology shouldn't be controversial here, since they are arguably part of the fabric of social epistemology from the start.Footnote 11 Any epistemic agent who is forced to take their cues in part by interactions with others in this sense is arguably already a non-ideal epistemic agent.)

In other words, what we would learn from developing a rationalizing account of BP is not that all instances of BP are epistemically rational. Rather, we learn that it would be too quick to assume that the epistemic behaviors we observe in BP are necessarily norm-violating behaviors.

3. Intra-group belief polarization

My argument will proceed by extrapolation from common reasoning in the literature on the epistemology of disagreement. This may seem an odd choice: isn't it precisely the point of this literature that one ought to become more measured in one's convictions as a result of information exchanges with disagreeing “peers”? To be sure, there is a live debate as to precisely how much one should temper one's attitudes as a result of such interactions.Footnote 12 But surely, no one would argue that one should generally become more confident or move further toward the pole as a result of noting such disagreements. But this is precisely what seems to be the case with BP.

To a first approximation, then, it might seem like BP represents a position even beyond “steadfastness.” In learning what others think, one doesn't simply stay put where one is, but actually moves further in the opposite direction. And this seems like a bad epistemic policy. So, BP must be epistemically irrational.

Nonetheless, it is my contention that a deeper reconstruction of the central motifs in the epistemology of peer disagreement might shine a very different light on this phenomenon. The basic idea driving the epistemology of peer disagreement is that, as limited epistemic agents, we can stand to learn from others.Footnote 13 While the literature is mostly focused on the epistemic significance of disagreement with others, a fuller picture will also acknowledge the epistemic significance of agreement. That is, the point of the exercise is presumably to provide guidelines for updating on information about what others believe in general, not just for updating in the special case where these people turn out to disagree with us.Footnote 14

We will start, then, by exploring the possibility that we might account for intra-group BP first by tying it to an epistemology of peer agreement. In a second step (section 4), we can then move to considering inter-group BP in terms of an epistemology of disagreement.

How might this work? Well, let's say I come into the situation having worked out my own view on some matter of public interest. I believe I am right, but my confidence, all things told, is quite moderate, reflecting the fact that I understand that the issue is complex, that my own evidence is limited, and that I should be less than certain of my own ability to process this evidence correctly.

In light of this, how should I respond to learning that some person (or better yet, a group of people) broadly agrees with me on this question? If the default position in encountering peer disagreement is that of reducing one's confidence in the proposition in question (by whatever factor we determine is appropriate), then the default position in encountering peer agreement should presumably be to increase one's confidence. So far, these issues seem entirely symmetrical: the reason one should decrease one's confidence in the face of peer disagreement is that peer disagreement raises the evidential probability that one is wrong about the disputed proposition. Conversely, the reason one should increase one's confidence in the face of peer agreement is that it raises the evidential probability that one was right.
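
To make the symmetry vivid, consider a minimal Bayesian sketch; the numbers are assumed purely for illustration, and nothing in the argument depends on them. Suppose my credence in p is 0.7, and I take my interlocutor's verdicts to track the truth with probability 0.75: they will affirm p with probability 0.75 if p is true, and with probability 0.25 if p is false. Conditionalizing on their verdict then yields

\[
P(p \mid \text{agreement}) = \frac{0.7 \times 0.75}{0.7 \times 0.75 + 0.3 \times 0.25} = 0.875, \qquad
P(p \mid \text{disagreement}) = \frac{0.7 \times 0.25}{0.7 \times 0.25 + 0.3 \times 0.75} \approx 0.44.
\]

One and the same rule governs both cases: the boost from agreement and the drop from disagreement are two sides of a single act of conditionalization.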

(Why does this simple idea so often go missing in the literature? Information about what other people believe with regard to p is a peculiar sort of evidence, sometimes called “higher-order evidence” (HOE).Footnote 15 Much of the philosophical discussion focuses on HOE's potential to serve as defeating or undercutting evidence, i.e., evidence that serves to weaken your epistemic position.Footnote 16 But we shouldn't lose sight of the fact that HOE can be positive just as easily as it can be negative. Suppose, for instance, that I'm trying to do long division for the first time in years, and I'm not totally confident that I remember the method correctly. Now, watching a YouTube video of some highly rated math tutor demonstrating the method can provide higher-order evidence that my answer was right, even if they used the method to solve a different problem. It's certainly still possible that I made a mistake. But the probability that I got it right seems to have increased, since it appears that I at least got the method right. Accordingly, I should now feel more confident in my result. Conversely, seeing them carry out the method in a different manner is a reason to think that I got it wrong (even if it remains perfectly possible that my method is good as well). In brief, there is just no reason to assume that HOE has a built-in negative polarity in this sense.)

Seen from this angle, at least one important dimension of intra-group BP seems easy enough to account for in rationalizing terms: upon learning that scores of others essentially agree with me on some question of interest, I should increase my confidence that this is the right answer. It is true that we might still “disagree” in the technical sense of attaching different credences to the question under discussion – some of us are more confident, some are less. But then again, we also don't typically compare our credences directly. Instead, what is salient is that we all basically agree about the matter at hand. This reassurance leads us all, and reasonably so, to increase our confidence that we got the answer right.Footnote 17 Increasing one's confidence in this way means becoming more entrenched in one's views: it would now take more and better evidence to move me in the opposite direction. In effect, we will have undergone a process of belief polarization as a result of exchanging opinions with like-minded others.Footnote 18

(Should we be bothered by the realization that, given facts about our prior beliefs and the social setting we happen to find ourselves in, it appears to be to some extent predictable not just that we will polarize as a result of such interactions, but also in which direction we will polarize? Moreover, should we be bothered by the fact that this seems predictable even from our own point of view? In a recent paper, Dorst (MS) argues that this is indeed something that a rationalizing approach to BP will have to take account of, because it seems to entail a violation of the Reflection Principle (Van Fraassen 1984). According to the Reflection Principle, my current confidence in some proposition d should be equal to my “rational expectation of my future, more-informed rational confidence” (Dorst MS: 6). As Dorst comments: “If at an initial time I could expect that my future, rational more-informed self will be less confident of d, shouldn't I now lower my confidence in d?” (Dorst MS: 6). If this is the case, it would seem that we need not even wait for these interactions to take place: we should already have polarized.

I am not convinced there is a genuine problem here. Certainly, if I knew (and perhaps knew that I knew) at t that my future, rational self will be in possession of evidence E which uniquely warrants a particular credence in d, then I should update on that knowledge now, thereby adopting the credence in question. But meanwhile, I don't generally know what evidence my future self will be in possession of. This is why we still describe the process of evidence-disclosure as a process of discovery or learning. This is not to say, of course, that I don't have any rationally supported beliefs about what the future course of evidence will reveal. But those beliefs are already baked into my current assessment of the probability that p. I can't have, say, a 0.8 credence in p (based on my current evidence) at the same time as I feign indifference on the question of whether the total evidence (revealed, we suppose, in the fullness of time to some future self) will ultimately support p. In this sense, our current credence is already aligned with our rational expectations about what a future, more informed self will believe, and I don't see a problem with that. Adopting an evidence-based credence on some proposition is not simply to tally the evidence that is already in one's possession, but also – simultaneously – to make a projection on the future course of evidence. The sense in which it is “predictable” that I will polarize over time in a particular direction is indeed a reflection of the fact that I already have a certain (positive) credence in p, and thereby rationally expect that new, incoming evidence will continue to support p. I am less than certain of this, of course, which is why my current credence is less than 1. But as I gather this evidence over time, and note that it does indeed – as I “predicted” – support p, I will gradually become more confident in p, just as I should. In other words, so long as the sense of “predictable” is recognizably consistent with some degree of uncertainty, there really shouldn't be a problem with “predictable polarization” and the Reflection Principle.)
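
(To illustrate with the assumed numbers from the sketch above: with a prior credence of 0.7 in p and an interlocutor whose verdicts I take to be 0.75-reliable, I should expect agreement with probability 0.7 × 0.75 + 0.3 × 0.25 = 0.6, and my expected future credence is

\[
0.6 \times 0.875 + 0.4 \times 0.4375 = 0.7,
\]

which is exactly my current credence. I can thus predict that I will most likely, with probability 0.6, end up more confident in p, while my expected future credence still matches my current one, just as the Reflection Principle requires. What Reflection rules out is expecting to move in one particular direction no matter which evidence comes in, and that is not the situation described here.)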

4. Inter-group belief polarization

It seems, then, that the phenomenon of “intra-group” BP falls quite naturally within the purview of a simple extrapolation of lessons from the epistemology of peer disagreement. Intra-group BP is a matter of becoming more entrenched in one's views or adopting a more extreme version of one's view as a result of joint deliberation with “like-minded others.” We can now see this as the simple outcome of learning that other, apparently well-informed people think in similar terms as you do. This should give you more confidence in your own judgment. It might also lead you to consider the possibility that more radical views pointing in the same general direction are better supported than you would initially have been inclined to think. Both these sorts of belief-changes are covered by this simple extrapolation from the epistemology of peer disagreement.

That leaves inter-group BP to be accounted for, i.e., the tendency to become more entrenched or more extreme in one's views as a result of exchanges with people who think very differently from us. This form of BP is not as well-documented within the empirical literature as the intra-group variety. But its existence can hardly be denied; indeed, it seems to be a driving force in the overall sense of polarization and antagonism that exists within our contemporary public discourse. Finally, there seems to be a plausible explanatory connection between the two phenomena: in fact, it seems pro tanto desirable to hold out for an account which could explain BP along both dimensions within a suitably unified framework, to the point of explaining how the two sorts of processes could feed off each other.

Nonetheless, if intra-group BP was relatively straightforward to explain within the framework we have adopted, it would seem, conversely, that inter-group BP will be significantly harder to explain. After all, now we are truly talking about disagreement, not agreement. And if the epistemology of agreement should lead us to strengthen our views, then presumably the epistemology of disagreement should lead us to moderate them in a similar way.

But as it turns out, there is more depth to the account than this simple application would seem to suggest. It is crucial to the epistemological consequences of disagreement that it be disagreement with a “peer” and not just with anybody. A peer is someone who you have reason to believe is roughly equal to yourself in terms of their epistemic capacities, the information they have access to, etc.Footnote 19 Obviously, you shouldn't update on the beliefs of others if you have reason to think that they are incompetent or misinformed.

For one angle on this, consider the influential work of Elga (2007). Elga argues for an “Equal Weight view” of the epistemology of disagreement: given an instance of peer disagreement, one should take it that one's peer is as likely to be right as oneself. Accordingly, one should respond to peer disagreement by “conciliating,” i.e., moving one's credence significantly in the direction of one's peer. Addressing the concern that the Equal Weight view would effectively force us to abandon our beliefs on any controversial topic – simply because there is so much disagreement out there that we would have to conciliate over – Elga reminds us that as we move toward considering complex moral and political questions, it will become increasingly difficult to sustain the notion that people who evince systematic disagreement about issues of public concern are in fact our “peers” in the sense required by the theory. Therefore, the theory might simply not apply to these cases, insofar as the theory is an account of the epistemic consequences of peer disagreement.

Take for instance a disagreement about the morality of abortion. Typically, if we disagree about this question, we will also tend to disagree about a host of others, such as the existence of the human soul, the time at which a fetus is to be considered a “person,” and much else besides. In other words, there is systematicity to our disagreements:Footnote 20 once we start to get a clearer chart of the pattern of these disagreements, it should become clear that we do not, in any relevant way, “share evidence,” either in the way of having the “same evidence” or in the sense of having different but “equally good” bodies of evidence. And so, we are not “peers” by our theory, and therefore, the theory simply doesn't apply to our disagreement. Accordingly, there's nothing in the account that prohibits us from simply disregarding the opinions of others who systematically disagree with us on controversial moral or political issues.Footnote 21

Elga's response here may seem like a bit of ad hoc maneuvering to get himself out of an uncomfortably tight spot created by his espousal of a particularly radical account of the epistemology of disagreement. I have some sympathy with this suspicion, and certainly don't endorse Elga's Equal Weight account in general.Footnote 22 But at the same time, I think that we can reach a similar, though more relevant, conclusion by looking more closely at the fundamentals of the theory.

To see how, consider how someone gets to be designated a “peer” in the first place. Peerhood is not given, but bestowed. Presumably, I must play some role in bestowing it. In this sense, we should expect that the criteria for designating someone as a peer couldn't be fully independent of what I already believe.Footnote 23 Rather, in areas where I have a reasonable starting confidence in my beliefs, I should be much more inclined to designate as peers those who tend to share my judgment on what I take to be issues of importance. This is, essentially, the insight that we called upon above to explain the basic rationality of intra-group BP. And clearly, there's a mutually reinforcing dialectic in play here: if I will tend to designate as peers (to some extent or other) those who evince a fair degree of agreement with my antecedent views, I will also tend to become more confident in those views as a result of noting my agreement with the peer group. What, then, should we think about the people who tend to display systematic disagreement with us on these issues? Shouldn't I, by the same lights, become increasingly confident that they are wrong, and not just “simply wrong” but systematically wrong?

This may certainly look like a suspicious move, somewhat akin to the process of epistemic “bootstrapping” that Elga warns against elsewhere.Footnote 24 We will address this question in more detail below, in consideration of the problem of “echo chamber construction.” But for now, it is important to understand that what is going on here is not a “mere bias,” i.e., an epistemically unmotivated preference for people who share my general outlook. Instead, we presume that I already hold some belief, with credence in the relevant positive range. Let's further presume that I am justified in holding a credence in this range, based on the evidence I have surveyed.Footnote 25 One way to understand what it is to have a (justified) credence in some positive range is just in terms of a commitment to the view that any epistemically rational person with access to a relevantly similar body of evidence would come to a similar conclusion.Footnote 26 So now I use that commitment to single out my peer group, namely those who tend to agree with me on questions of concern.

The reasoning here is relatively simple, though, I think, underappreciated in the literature: as a limited epistemic agent, I need to rely on the input of others, both for supplying me with information I wouldn't otherwise have access to, and for serving as a check on my own reasoning. These others, whoever they may be, will constitute my “peer group,” for the purposes of belief formation and maintenance. How do I find these others? I'm aware that our society is marked by rampant disagreement about these matters. I can't indiscriminately designate people as belonging to my peer group, with no screening of the antecedent beliefs they hold on matters of interest. Instead, I have to take my own initial beliefs – formed as they are in light of the evidence that I have access to – as providing my guidance here.Footnote 27 Of course, I understand that my evidence is limited, and that I might have misinterpreted it. So I remain open to the possibility that I am wrong. Nonetheless, I clearly believe that I am right. Anything else would be epistemically incoherent, and inconsistent with my commitment to the rationality of my beliefs: if I believe that p, to any relevant degree of confidence, then I believe that it is rational to believe that p, given the evidence that I have access to. And so I should believe that anyone who holds a significantly different belief, in consideration of similar evidence, is less likely to be my “epistemic peer” than someone who broadly agrees with me. Quite simply, my antecedent commitment to the belief that p, even if my initial confidence is relatively moderate, rationally precludes me from feigning indifference to the question, which of two people – one who believes that p, one who believes that not-p, no further information given – is more likely to be my peer, i.e., someone I could turn to for illumination on further questions of concern, someone who could serve as a check on my reasoning, etc.Footnote 28
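
This reasoning can be given a simple Bayesian gloss; the model and numbers are purely illustrative. Suppose I am unsure whether a new interlocutor is reliable on these matters, tracking the truth with probability 0.75, or merely guessing, with probability 0.5, and I give each hypothesis a prior of 0.5. If my own credence in p is 0.7, then by my lights a reliable interlocutor asserts p with probability 0.7 × 0.75 + 0.3 × 0.25 = 0.6, whereas a mere guesser asserts p with probability 0.5. Hence

\[
P(\text{reliable} \mid \text{asserts } p) = \frac{0.5 \times 0.6}{0.5 \times 0.6 + 0.5 \times 0.5} \approx 0.55, \qquad
P(\text{reliable} \mid \text{asserts } \neg p) = \frac{0.5 \times 0.4}{0.5 \times 0.4 + 0.5 \times 0.5} \approx 0.44.
\]

Given what I already believe, agreement is evidence of an interlocutor's reliability and disagreement is evidence against it; I cannot coherently remain indifferent between the two.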

From this point on, things will proceed in lockstep. Agreement on one significant question of concern increases the probability that we will agree on further questions as well: as we deliberate, I will become stepwise more confident that these people are my peers, and more confident in my various beliefs as a result of noting that my peer group agrees with me.

At this point, it is not unknown to me that there also exist people who disagree with us. What should we think about them? As Elga points out, there is often systematicity to the questions that give rise to disagreements. For the same reason that I promote the peer-status of the people who systematically agree with me, I must downgrade the peer-status of the people who systematically disagree with me. In fact, over time, one might plausibly come to think that these people aren't merely not my peers, but are also, in some sense, my anti-peers: they seem to come down reliably on the wrong side of every relevant issue.

Now we have a sketch of an explanation for inter-group BP. These people – the Libtards, the Repugs, or whatever – aren't merely not my peers. In fact, they are so wrong on so many things – gun control, abortion, immigration, etc. – that if some new issue were to arise that I haven't yet given any thought to, but I noticed that these people hold this view on this issue, I would defeasibly take myself to have some reason to gravitate toward the opposing view. If I already held a view with some degree of confidence, noting that these people hold the opposing view would certainly make me more confident, not less, that I had the right view to begin with.Footnote 29

What this shows is that even though there is an important sense in which the epistemologies of peer agreement and peer disagreement are symmetrical, it doesn't follow that if I should increase my credence in some proposition whenever I note that people agree with me, I should also decrease my credence whenever someone disagrees with me. This is because there's an important asymmetry built into the peer-designation itself. Peer-designations aren't given, but are bestowed by me. In bestowing this honorific, I must start with my own beliefs. Necessarily, even if I remain “open-minded” about the ultimate truth of my beliefs – i.e., I remain mindful of my own fallibility – I can't be indifferent to the question of whether they are true, since belief entails a commitment to truth, given one's evidence. So if I truly believe that p, then I believe that the evidence indicates (to some degree or other) the truth of p, and I believe that any rational agent who has access to similar evidence would support that judgment. The fact that others do share my point of view is a reason to believe that they are well-informed, which is a further reason to believe that I am right. If I encounter people who hold the opposite view, then that surely is some reason to believe that I am wrong. But as I simultaneously discover that they hold opposite views on a number of issues, and that there is a systematic pattern to the questions on which they disagree with me, then I will also have found reason (though surely not decisive or demonstrative reason) to believe that we collectively are right about these things, and that they thereby must be wrong. They are not wrong in the simple sense that their beliefs are randomly chosen, as though determined by some stochastic process. If I could assume that their beliefs were stochastically determined, then there would simply be no information to be gained, and “Steadfastness” would be the right position to take. But if I am already reasonably confident in my own view, then learning of the patterned disagreement between me and others is learning that they aren't “simply” wrong, but systematically wrong. Crudely, they appear to have lower-than-chance odds of being right about relevant issues. And so, their disagreement provides me with reason to become more entrenched in my own view. (This further reason might be marginal, given what I already have reason to believe: but it is nonetheless real.)
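
A small worked example, again with purely illustrative numbers, shows how this can come out in Bayesian terms. Suppose repeated and patterned disagreement has led me to treat the outgroup's verdicts as anti-reliable: by my lights, they affirm a claim with probability 0.3 when it is true and with probability 0.7 when it is false. If my credence in some proposition q is 0.6, then learning that they reject q gives

\[
P(q \mid \text{they reject } q) = \frac{0.6 \times 0.7}{0.6 \times 0.7 + 0.4 \times 0.3} = \frac{0.42}{0.54} \approx 0.78.
\]

Their disagreement raises my credence from 0.6 to roughly 0.78, and the size of the shift depends entirely on how far below chance I take their reliability to be, which is just the point about lower-than-chance odds made above.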

Obviously, if this account is on the right track, it might still be seen to give rise to serious concerns. My account licenses a reinforcing mechanism of designating a peer group by noting antecedent agreement and then responding to that agreement by increasing one's confidence. How is this not simply a recipe for echo chamber construction? We will return to this question below, after first comparing this account with some extant philosophical accounts.

5. Comparison with extant views

My account has drawn on ideas from the epistemology of disagreement to show that BP, in either of its two dimensions, is not intrinsically irrational but could be seen as perfectly reasonable adjustments to the information one can presume to glean from what (groups of) others believe. Moreover, it has pointed to a reason why we might expect these two phenomena to develop in lockstep.

How does this compare with competing accounts? Sunstein (2002) was among the first to bring the phenomenon into philosophical discussion. His paper focuses specifically on intra-group BP, and draws attention to two different mechanisms. One points to a fairly familiar phenomenon in social psychology. Belief formation and maintenance is to some extent driven by social comparison, and in particular the felt need to maintain socially acceptable views (Sunstein 2002: 179). As one engages in in-group deliberation and discovers that views of this sort are generally endorsed, one will move by stepwise increments toward the pole. In general, one wants to be seen as edgy, though without taking the risk of being branded as too far-out. But as the group as a whole moves further toward the pole, the boundary of what is considered socially acceptable will also move as a result.

Presumably, we can all agree that social comparison of this sort is not a sanctioned method of belief revision according to our best accounts of epistemic normativity. Certainly, when this process draws us farther away from the truth on some issue that we should care to know about, it is natural to suppose that we are looking at an epistemically irrational process.

Nonetheless, the problem here should be evident: the fact that such an explanation is viable (and potentially relevant) in no way entails that the phenomenon couldn't also have a rationalizing explanation. And Sunstein gives us no reason to suppose that one mode of explanation is more viable than the other in particular cases. For any particular observed case of intra-group BP, it may be empirically indeterminate whether it is caused by one sort of factor or another. Moreover, both causal pathways may work in tandem to produce the empirical phenomenon we are concerned with. But this is nothing new: for any well-justified belief that we might imagine our epistemic agent antecedently committed to, we could wonder whether their continued commitment to that belief in light of favorable incoming evidence is really caused by them actually having considered that evidence properly or if they would have been disposed to retain their belief regardless. And we might find ourselves unable to settle that question. Nonetheless, it is clearly relevant to note that the agent, as a matter of fact, is in possession of evidence which would have justified the retention of the belief, if the agent had properly processed that evidence.Footnote 30

The other mechanism that Sunstein points to may seem more promising. Here, intra-group BP is seen to result from a bias or skew in the “argument pool” that one is exposed to as one predominantly engages in deliberation with like-minded others: since the group as a whole will tend to find arguments pulling in a certain direction more persuasive, there will be a clear tilt in the arguments that members of this group are exposed to or familiar with (Sunstein 2002: 179–80). Our (individual) thinking will typically reflect this tilt, and further in-group deliberation will only serve to strengthen it. Quite simply, we hold the views we hold as a result of finding certain kinds of arguments to be (more or less) persuasive. Associating with people with similar cognitive outlooks will tend to expose us to further arguments along similar lines, as a result of which our persuasion will be strengthened.

On its own terms, this sounds broadly plausible. However, it seems naïve, at least by current standards, to suppose that the “bias” that we observe in BP is best explained simply in terms of subjects lacking information about what others believe or why they believe it. On the contrary, people nearer to the poles of today's political spectrum seem to have plenty of knowledge of the standard stock of arguments made in support of the position they oppose.Footnote 31 The problem is rather that – for good reasons or bad – they just don't recognize these as persuasive arguments.

In other words, it appears that Sunstein treats the phenomenon of intra-group BP as though it were simply the result of an “epistemic bubble” – a contingent matter of biased access to information – and not as an “echo chamber” – a structurally reinforced socio-epistemic mechanism that evinces some degree of active resistance to contrary opinions.Footnote 32 This conflation leads Sunstein to propose that the remedy for intra-group BP could be as simple as ensuring a platform for broader and more inclusive deliberation.Footnote 33 As I have argued, however, given the conditions that lead to intra-group BP, we have reason to be concerned that more “inclusive” deliberation might just as well make the problem worse: given certain situational parameters, the result of further exposure to people who believe differently than us could just be inter-group BP. And as I argued in the previous section, there doesn't seem to be anything intrinsically irrational about this way of responding to disagreement, once the relevant conditions are in place.

Talisse (2021) has recently offered a different non-rational account of BP, according to which BP is best seen to result from a confusion of confirmation with corroboration: when we observe that our co-deliberators hold similar views to ours, we take that to provide confirmation, whereas in reality it only provides corroboration. Confirmation warrants increased credence, whereas corroboration does not. “Simplifying slightly,” says Talisse, “we might say that whereas confirmation adds evidence, corroboration is simply a matter of popularity” (Talisse 2021: 219). We will have a closer look at the nature of the simplification shortly. Meanwhile, Talisse offers the following sort of explanatory outline: “Corroboration from others with whom we identify makes us feel good about our beliefs … When we feel good about what we believe, we experience a boost to our commitment to our perspective, we feel affirmed in our social identity. In turn, when we feel affirmed in this way, we intensify our attitudes and shift to more extreme belief contents” (Talisse 2021: 219). Presumably, pursuing this “good feeling” is not an epistemically rational strategy for belief formation and maintenance.

But is it true that learning of significant peer-group agreement regarding p could never provide epistemically relevant information regarding p, but only “corroboration,” i.e., the epistemically non-relevant fact that belief in p is widespread in some group? It might be true that others’ believing that p provides no direct evidence that p is true. But under plausible conditions, it can certainly provide higher-order evidence that p is true. As we saw above, higher-order evidence can be an important epistemic source for reinforcing one's conviction that p. We are social beings and are generally dependent on others for information: an epistemic agent who had no policy for updating beliefs in light of information about what others believe would plainly be living an impoverished epistemic life.

So information about what others believe with regard to p can be a source of confirmation, even if it is indirect (“higher-order”) confirmation. Oddly, Talisse himself seems to concede this point in a footnote, where he acknowledges that there are cases in which the popularity of a belief itself provides evidence that there is evidence for the belief in question. While he argues, correctly, that such higher-order evidence does not itself provide evidence for the particular proposition believed, he does acknowledge that “it might provide rational permission to adopt the belief in question.” Nonetheless, he is keen to distinguish such cases from cases where “the sheer number of corroborating voices, regardless of any judgment of the underlying evidence, functions to induce the extremity shift” (Talisse 2021: 219n7). I am not going to dispute that such cases might also exist: the point to note, rather, is that this has no bearing on the question of whether adjusting one's views to one's designated peer group could be part of a rational belief formation and management strategy. (On the other hand, should we think that reasoning from “sheer numbers” is prevalent in our political discourse, as Talisse seems to assume? I think not. For example, witness how people on the margins of political discourse are often perfectly happy to identify themselves as part of some elite vanguard battling against the vast mass of “sheeple.” In these cases, it is pretty clear that it's not the sheer quantity of peers that counts, but rather the presumptive quality of their opinions, i.e., precisely their status as “peers.”)

Accordingly, it is by no means clear that intra-group BP is generally caused by subjects’ confusing corroboration with confirmation. Instead, it seems quite fair to assume that people who are pushing toward the poles of the political opinion spectrum are typically well aware that they are in the minority and seem quite comfortable with that. Specifically, they seem to be generally well-informed of the stock of arguments that is typically adduced in favor of the opposing views.

This is obviously not an exhaustive list of strategies that have been deployed for showing that BP is a manifestation of epistemic irrationality. But it should suffice to give some indication of the flavor of such proposals. And to be entirely clear, I wouldn't think to deny that non-rational factors like the ones highlighted by Sunstein and Talisse could be part of the causal story behind BP. Instead, what is in question is whether there couldn't also be processes in play which could render BP the epistemically rational outcome in certain contexts. (In any particular actual context, it may be hopeless to determine, without further information, what is actually causing it.)

Accordingly, we have good reason to search for rationalizing modes of explanation as well. I am not the first philosopher to propose such an explanation. An important early attempt comes from Kelly (2008). On Kelly's account, BP can be seen as an unintended side-effect of the fact that we tend to spend more time critically analyzing arguments against our view than we do analyzing arguments in favor of our view. Because we spend more time looking for flaws in arguments against our view, it is also more likely that we will find such flaws. As a result, we will over time become more confident in our initial views.

Now, this may seem like a perniciously biased way of distributing one's cognitive resources, to predictably deleterious result. Not so fast, says Kelly: “when we encounter evidence that is plausibly explained by things that we already believe, we typically do not devote additional resources attempting to generate alternatives. Data that seem to support hypotheses that are already believed thus tend to get considered against a comparatively impoverished or sparse background of alternative hypotheses. As a result of the less competitive milieu, the support conferred by the new evidence is not siphoned away, and thus tends to go in relatively undiluted form to the already accepted hypothesis” (Kelly 2008: 621). Accordingly, it will certainly be true that already established beliefs enjoy a certain kind of “competitive advantage” in meeting new evidence (Kelly 2008: 622). But this is not in general an irrational phenomenon, but rather a sensible economizing strategy for cognitively limited agents to allocate their scarce epistemic resources.Footnote 34 Moreover, it is certainly not unique to BP, but is plausibly a feature of any kind of epistemic inquiry in the wild: the same would presumably hold for scientists seeking to come to grips with anomalous lab results.

Kelly's analysis can shed valuable light on what is going on in cases of intra-group BP, specifically. Moreover, this analysis is perfectly consistent with the account I have offered, and there is no reason to think that these processes couldn't work in tandem. Nonetheless, I maintain that my account, on its own terms, has certain advantages that Kelly's account lacks. First, my account explicitly covers both intra-group and inter-group BP, and indeed explains how these could be seen as complementary processes. Second, my account specifically addresses BP as a socio-epistemic phenomenon, whereas Kelly's treats it as a special case of a more general epistemic problem, pertaining to individuals’ allocation of scarce epistemic resources to evidence processing of any sort.Footnote 35 As I will argue in my final section, we have good reason to seek specifically social-level explanations of BP, even if more general epistemic processes might also be in play.

6. BP and echo chambers

BP is a complex phenomenon in several ways. First, it is complex in its manifestation. Minimally, we will want to distinguish between its manifestation following intra-group deliberation and its manifestation following inter-group deliberation. But at the same time, we should also hold out for an explanation which shows how the two manifestations are related.

Second, BP could be complex also in its etiology. Several plausible patterns of explanation suggest themselves: some of these explanations could portray BP as a non-rational, affect-driven adjustment to the information that public discourse provides. Some could portray it as potentially rational, but in a manner that would arise from perfectly general ways in which limited agents must prioritize their assessment of the evidence they confront. By contrast, I have argued for a rationalizing account that ties BP specifically to unique features of socio-epistemic processing. Moreover, my account can explicitly distinguish, but also tie together in an explanatorily illuminating way, the intra-group and inter-group manifestations of BP. BP arises from the way that we depend on others to supply us with information and for joint deliberation. In choosing our “peer group,” we cannot but start from our own pre-existing beliefs. Therefore, we will tend to designate as peers those who are broadly disposed to agree with us already: noting such agreement, we will further increase our confidence in our views. Similarly, in cases where the disagreement is sufficiently deep and systematic, we may reasonably come to designate the opposing group as our anti-peers, and see their disagreement as a reason to move further in the opposite direction.

Without saying that this is the only way that BP could come about, I have argued that it sheds valuable light on BP as a complex phenomenon potentially grounded in broadly rational socio-epistemic processes.

Even so, one might have concerns about this approach: I have described a process in which one designates a peer group based on observed agreement, and then uses that agreement to increase one's credence in the things we agree about. To use a current catch phrase, my account seems like the simplest recipe for creating an echo chamber. And since echo chambers are generally presumed to be pernicious socio-epistemic phenomenaFootnote 36 – something that well-motivated epistemic agents will want to avoid – perhaps my ambition of providing a rationalizing account comes to naught after all.

The first thing to note in response to this concern is that the question of whether echo chambers are pernicious, in some sense or other, doesn't settle the question of whether one would have to violate epistemic norms to end up in one. Cultivating bonds of epistemic trust is a crucial function of human cognitive agency. (From one perspective, this might seem “non-ideal.” But then again, we are very much non-ideal epistemic agents, and our epistemic norms should reflect this fact.) This trust is necessarily selective. The process of selection can be manipulated, or it can be misdirected on other grounds: it doesn't follow that one is doing something wrong, epistemically speaking, even in the cases where one ends up placing one's trust in the wrong crowd.Footnote 37 One might simply be unlucky in one's socio-epistemic affordances.Footnote 38

The second response might require more careful handling: are we right to assume that echo chambers are always epistemically pernicious, i.e., that they are bound one way or another to frustrate the agent's pursuit of their reasonable epistemic goals? Possibly, there are questions on which we should start out by assuming that any opinion is as good as any other, at least so far as we have reason to believe. In such cases, one ought perhaps to sample as wide a range of opinions as one can, without discriminating with regard to their source. Isolating the opinion of an arbitrarily chosen few, and anchoring one's update-policies to them, will now seem like a bad approach, certainly in a case where it transpires that this group is significantly at odds with tendencies in the broader crowd. But coming up with relevant examples of this sort turns out to be quite difficult. Even in the famous case of Galton's “wisdom-of-the-crowds” approach (Galton 1949), it presumably matters that Galton had already judged – and reasonably so – that the people whose opinions he was sampling were generally quite knowledgeable – i.e., they were visitors, let's say, to a county fair. Moreover, it's crucial to this approach that Galton was able to assume that the vast majority of the votes were cast independently: i.e., there were no significant “group deliberation effects” already present in the opinion pool.
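
The importance of independence here can be made precise with a standard statistical observation; this is an illustrative aside, not something Galton himself offers. If n guesses are unbiased with variance σ² and pairwise correlation ρ, the variance of their average is

\[
\mathrm{Var}(\bar{X}) = \frac{(1-\rho)\,\sigma^2}{n} + \rho\,\sigma^2.
\]

With independent guesses (ρ = 0), the error of the crowd's average shrinks toward zero as n grows; with correlated guesses, the signature of prior group-deliberation effects, it bottoms out at ρσ² no matter how large the crowd. This is one way of seeing why the county-fair case does not transfer to opinion pools that have already been shaped by joint deliberation.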

By contrast, we have every reason to believe that settling one's mind on matters of public policy is nothing like guessing the heft of an animal carcass at the county fair. Accordingly, we should be open to the possibility that one's epistemic policies should be different as well, even as they are alike in the sense of involving the counsel of others. First, we have every reason to believe that the distribution of political opinions in the population at large already reflects significant group-deliberation effects. Second, we have good reason to assume that not everyone's opinion is equally well-informed. This is not simply because people form their beliefs in light of different bodies of evidence, or that some people have more background expertise than others. It is also because political judgment is characteristically informed by values and not simply by evidence. Even if we could assume that everyone's opinion was formed following exposure to a sufficiently large body of relevant evidence, we could still harbor serious concerns that many opinions reflect values that we ourselves would not want to stand by. Finally, we approach our questions with an understanding that they are fundamentally contested, and that the contestation often takes an antagonistic form, aiming to undermine the credibility and political power of the “other” every bit as much as it would aim to seek the truth.

In these sorts of contexts – i.e., contexts where judgments of fact and value intersect, and where we can reasonably assume that interactions are antagonistic as much as they are collaborative – a different epistemic strategy is needed. As epistemically limited (“non-ideal”) agents we still need to seek the counsel of others. But we cannot be indiscriminate (i.e., “non-biased”) in seeking that counsel. Here, forming steady epistemic alliances with select others can be crucial to maintaining one's political agency. If one has a reasonable degree of confidence in one's antecedent outlook, then the obvious solution is to use that outlook to select one's peer group: my peer-group is, to a first approximation, the set of people who evince broadly similar judgments to mine on relevant issues. Even with regard to questions that I haven't yet considered, finding that this group tends strongly toward p gives me reason to believe that p is correct. This is not to say that I shouldn't conduct a review of their grounds for believing that p. But it is to acknowledge that, in the end, I might find this evidence persuasive in no small part just because members of this group appear to find it persuasive. The result is recognizably an “echo chamber” in some sense of the term. But seeking the bulwark of this echo chamber might be crucial to shielding us from misinformation and deception.

As theorists, we are free, of course, to lament this state of affairs. And from the point of view of a more hopeful account of the epistemic promise of a maximally inclusive deliberative democracy,Footnote 39 this is certainly a discouraging analysis. Nonetheless, our outlook has been guided by the question of epistemic rationality. Collectively, we might certainly have hoped for a different sort of epistemic dynamics for public deliberation. But given these sorts of collective-level epistemic dynamics, we cannot, I have argued, blame individual cognizers for adopting epistemic policies which predictably lead them into echo chambers.Footnote 40 In doing so, they might well be contributing their small part to a collective predicament. But the collective predicament would prevail no matter what individual epistemic policies they adopted. Meanwhile, they each have a clear epistemic interest in forming true, value-reflecting judgments on political questions. And given the environment that they find themselves in, even a policy that predictably leads to belief polarization – whether from intra-group exchanges or from inter-group exchanges, or both – might well be the best epistemic policy they have.

Of course, there remains a sense in which such a policy is non-ideal. It is non-ideal, first, in the sense that one might think that, ideally, we shouldn't have to rely on others at all in forming our epistemic outlooks. But this is not an ideal we could realistically attain: essentially every contribution to political epistemology affirms that our epistemic reliance on others is, in some deep sense, a manifestation of our shared political fate. Notably, even theorists affirming the value of deliberative democracy maintain that joint deliberation is crucial to forming our beliefs, not simply as a method for sampling pre-existing opinions.

Second, we might think this policy is non-ideal in the sense that it might add to the antagonism already prevalent in the public sphere, thereby further undermining the value of deliberative democracy. This is a valid concern. But again, it matters what perspective we adopt. It is probably true that we would be collectively better off with a less antagonistic public sphere. But given that we, as individuals, find ourselves forced to form our political beliefs in precisely such an antagonistic environment, it doesn't follow that we, as individual epistemic agents, would be better off adopting belief-forming policies designed to ameliorate that antagonism.

7. Conclusion

This paper has sought to develop a rationalizing perspective on BP. My account has distinguished between BP as it arises from intra-group deliberation – i.e., information-exchanges between people who already “agree” to some relevant extent – and BP as it arises from inter-group deliberation – i.e., information-exchanges between people who evince systematic disagreement. I have argued that the two phenomena are related at heart, and that, given certain background circumstances, we should expect both to develop in tandem. In brief: the need – cognitively fundamental for limited, socially embedded beings such as us – to consolidate our epistemic outlooks with a peer group will also, in many contexts, produce a matching outgroup. As one must use one's own pre-existing cognitive commitments in choosing one's peer group, so the “others” will simply be those who evince systematic disagreement with that group. At this point, one's peer group functions effectively as an echo chamber. This is regrettable when the effect is to insulate us from the truth. But it doesn't follow that echo chambers are necessarily epistemically pernicious.Footnote 41 Echo chamber construction can be epistemically beneficial when it serves to protect us from manipulation and disinformation.

Any rationalizing account will of course recognize that this is in some sense a disappointing result, specifically from the point of view of a more optimistic account of the value of broadly inclusive deliberative democracy. But the disappointment is a reflection of the kind of information environment one is operating within, not necessarily a reflection of the epistemic policies one adopts given that one is in such an environment. As we already know from studying other problems of collective rationality, we should always be open to the possibility that situationally optimal epistemic strategies at the individual level might well lead to sub-optimal decision-making at the collective level.Footnote 42

Footnotes

1 Cf. the voluminous literature around the value of “deliberative democracy” (e.g., Dewey 1927; Cohen 1989; Habermas 1996; Goodin 2003; Anderson 2006; Landemore 2012).

2 See, for instance, Lord et al. (1979), Houston and Fazio (1989), Koehler (1993), Miller et al. (1993), and Munro and Ditto (1997).

3 Cf. Dorst (2019).

4 Notice, by contrast, how some of the literature defines BP simply as the tendency of parties with opposing antecedent beliefs to respond differently to disclosures of new bodies of “mixed” evidence, each side effectively taking the new evidence to support their antecedent belief (cf. Kelly 2008; Jern et al. 2014; Cook and Lewandowsky 2016; Nielsen and Stewart 2021; Williams 2021). I am not saying that the resulting problem is not of intrinsic interest (for instance, in the form of concerns about perfectly general biases in evidence processing). But notice that, by omitting any reference to joint deliberation or information-exchange between parties, the problem seems to have lost any connection to social epistemology or the public sphere more broadly. Accordingly, this paper addresses BP not simply as involving the disclosure of new evidence to parties with opposing antecedent beliefs, but also as centrally involving information about how the other party responds to this evidence.

5 Perhaps inspired by Francis Galton's reflections on the “wisdom of the crowd” (Galton 1949), to which I will return in section 6.

6 I am not the first to offer a rationalizing account of BP (cf. Kelly 2008; Jern et al. 2014; Dorst 2019, MS; Benoit and Dubra 2019; Singer et al. 2019; Nielsen and Stewart 2021; Pallavicini et al. 2021). Nonetheless, my account has broader reach, and is particularly useful in forging stronger connections with principles known to apply in a wide range of socio-epistemic settings.

7 For a larger picture, see Begby (2021b).

8 Cf. Sunstein (2002), to which I will return below.

9 Though, it should be noted, some philosophers do provide careful and substantive arguments for views of this sort, as for instance Avnur (2020).

11 Cf. Hardwig (1985), Fricker (2006), and others for discussions of the ideal of “epistemic autonomy” and the sorts of vulnerabilities that we open ourselves up to by accepting the testimony of others. Nonetheless, it is clear from these discussions that these are vulnerabilities that we must accept if we are to develop or retain anything recognizable as human epistemic agency.

12 Cf. Christensen (2007), Elga (2007), and Kelly (2010). Note here that I take the debate between “conciliationists” and “steadfasters” (so-called) effectively as a question about how conciliationist one should be in the face of peer disagreement (along a more/less dimension). In other words, I take “steadfastness” to serve as something like an asymptotic limit point in these debates, since few contributors would want to argue that peer disagreement could never, under any circumstances, provide epistemic reasons to moderate one's views. (I will return to this point briefly in section 4.)

13 Cf. Christensen (2007).

14 Cf. Begby (2021a).

16 Cf. Lasonen-Aarnio (2014).

17 Cf. Christensen (2009: 759) for an example along these lines.

18 A further specification: I here interpret BP in terms of increasing credence, of becoming more confident in a particular target proposition. The BP literature doesn't clearly distinguish between polarization in terms of increasing credence in a particular belief versus polarization in terms of substituting one belief for a different, more extreme belief along the same lines. (Cf. Talisse 2021.) But I do think there are good reasons to start with credences, and that we can view any observable tendency to also switch to a more extreme version of one's previous belief as following naturally from the increase in credence. In brief, I believe it would be a mistake to model our beliefs in terms of commitments to some unique, maximally specific proposition (i.e., the philosopher's “S believes that p”). Instead, by holding a “view” on some question of interest, we express variable degrees of commitment to a cluster of specific, probabilistically interdependent propositions (e.g., only limited background checks on handguns → no background checks → no background checks even on high-powered assault rifles, etc.). Increasing one's confidence in one particular representative proposition in this general cluster also tends to shade over onto the related propositions, thereby making epistemically available a commitment to a more extreme version of the same broad “view.”
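The probabilistic point can be illustrated with a minimal worked sketch; the two propositions, the conditional credences, and the numbers below are illustrative assumptions only, not a formal model offered in the paper. If a more extreme proposition q in the cluster is positively dependent on the representative proposition p, then raising one's credence in p mechanically raises one's credence in q.

```python
# A minimal numerical sketch of the "shading over" idea (all numbers are
# illustrative assumptions). Let p be a representative proposition in the
# cluster and q a more extreme proposition positively dependent on p.
def credence_q(cred_p: float, q_given_p: float = 0.6, q_given_not_p: float = 0.1) -> float:
    # Law of total probability: P(q) = P(q|p)P(p) + P(q|~p)P(~p)
    return q_given_p * cred_p + q_given_not_p * (1 - cred_p)

for cred_p in (0.5, 0.7, 0.9):
    print(f"P(p) = {cred_p:.1f}  ->  P(q) = {credence_q(cred_p):.2f}")
```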

19 Cf. King (2012) for a critical overview.

20 Cf. Weatherall and O'Connor (2020).

21 Elga (2007: 495–6).

23 See Christensen (2014: 157) for a similar point.

24 Elga (2007: 486–8). See also Boyd (2019).

25 We could of course waive this stipulation. But I take it that the resulting account would be less interesting for it. What motivates the present inquiry is whether BP embodies a novel form of epistemic irrationality, one that reflects how we update our beliefs in light of information about others’ beliefs. We get the clearest view of this question by supposing that subjects hold justified beliefs going into the process.

26 This need not be anything as strong as “uniqueness” – i.e., the supposition that there is only one uniquely rational credence that one should have in light of some body of evidence (cf. White 2005). Typically, there will be plenty of differences in background beliefs to explain how rational people could have some tolerance for disagreement about what the evidence supports.

27 Cf. O'Connor and Weatherall (2018), Henderson and Gebharter (2021).

28 I should certainly recognize that it remains epistemically possible – consistent with my evidence – that they are right and we are wrong. But this is a perfectly general feature of evidential reasoning. This possibility can still be highly improbable given my evaluation of the evidence: this relatively low probability is just what my credence reflects. If we can assume that my credence is roughly correct (rational, justified, etc.), then we can assume that I am right to treat this possibility as marginal and to give it a corresponding degree of consideration (perhaps similarly to the way that many of us treat skeptical possibilities more broadly).

30 See Begby (2021b) on the relevance of notions of “propositional” and “doxastic justification” to these questions.

31 This is related to the widespread idea that our political biases may in part be the result of one-sided exposure to news sources, and that we accordingly should recognize an epistemic obligation to seek broader coverage. (See Worsnip 2019 for discussion.) The problem with this view is that (i) it seems fair to assume that most people who are deeply engaged in political discourse do in fact have a pretty good grasp of what “the other side” will say, and (ii) given our prior beliefs, it's not obvious that increasing our exposure to opposing views will provide any epistemic benefit, as per the argument of the previous section.

32 Cf. Nguyen (2020b).

33 Cf. Sunstein (2002: 180–2). Importantly, though, he acknowledges that this “solution” might be difficult and delicate to implement in practice, particularly in light of pre-existing power differentials between groups (2002: 190–1).

34 See also Dorst (2019) for a similar argumentative strategy, pivoting on the claim that it can be epistemically rational for cognitively limited agents to prioritize the effort to debunk new evidence which would seem to threaten the beliefs that we already hold.

35 Cf. Dallman (2017) and Begby (MS) for further perspectives on the problem of resource allocation in epistemic agency.

36 Cf. Lynch (2016) and Sunstein (2017).

39 Cf. Anderson (2006) and Landemore (2012).

40 Cf. Anderson (2006: 16), who writes that “[t]o realize the epistemic powers of democracy, citizens must follow norms that welcome or at least tolerate diversity and dissent, that recognize the equality of participants in discussion by giving all a respectful hearing.” This, then, obviously raises the question of how one should comport oneself in cases where one has good reason to believe that others will not comply with these norms. Similarly, Landemore (2012: 3) seems to acknowledge that her arguments regarding the epistemic potential of “democratic reason” hold true only under “conditions conducive to proper deliberation.” I assume we can agree that many contemporary polities fall far short of realizing these conditions.

41 Lackey (2021) advocates a somewhat similar perspective, but with important differences. Lackey acknowledges that finding that one's beliefs are formed in an echo chamber does not necessarily undermine their epistemic standing. She adopts a reliabilist approach to explain the difference: crudely put, good echo chambers are ones that reliably put us in touch with the truth, whereas the bad ones pull us further away from it. So we are free to describe, if we will, the centre-leftist media-opinion cluster of CNN, MSNBC, etc. as an echo chamber no less than the rightist media-opinion cluster of Fox News and One America Network. But they differ crucially in terms of epistemic reliability. By contrast, my interest is primarily in how we should frame our verdicts on individual epistemic rationality, and here I do not think that reliabilism is of much help. In particular, I do not think that finding oneself in a “bad” – unreliable – echo chamber necessarily undermines one's epistemic standing to believe as one does. Instead, I hold that given certain contingent, socio-epistemic circumstances not of one's choosing, even perfect epistemic rationality can lead one further away from the truth. In this sense, we must acknowledge that even generally reliable epistemic policies can turn out to be locally unreliable. (For more on this, see Begby 2021b.)

42 For helpful discussion, I am indebted to Holly K. Andersen, Yan Chen, Reetika Kalita, students in my Fall 2020 PHIL 329 Epistemology of Democracy course at SFU, as well as an anonymous reviewer for this journal.

References

Anderson, E. (2006). ‘The Epistemology of Democracy.’ Episteme 3(1–2), 8–22.
Avnur, Y. (2020). ‘What's Wrong with the Online Echo Chamber: A Motivated Reasoning Account.’ Journal of Applied Philosophy 37(4), 578–93.
Bail, C.A., Argyle, L.P., Brown, T.W., Bumpus, J.F., Chen, H., Fallin Hunzaker, M.B., Lee, J., Mann, M., Merhout, F. and Volfovsky, A. (2018). ‘Exposure to Opposing Views on Social Media Can Increase Political Polarization.’ Proceedings of the National Academy of Sciences USA 115(37), 9216–21.
Begby, E. (2013). ‘The Epistemology of Prejudice.’ Thought 2(1), 90–9.
Begby, E. (2018a). ‘Straight Thinking in Warped Environments.’ Analysis 78(3), 489–500.
Begby, E. (2018b). ‘Doxastic Morality: A Moderately Skeptical Perspective.’ Philosophical Topics 46(1), 155–72.
Begby, E. (2021a). ‘The Problem of Peer Demotion, Revisited and Resolved.’ Analytic Philosophy 62(2), 125–40.
Begby, E. (2021b). Prejudice: A Study in Non-Ideal Epistemology. Oxford: Oxford University Press.
Begby, E. (MS). ‘Opportunity Costs and Resource Allocation Problems: Epistemology for Finite Minds.’ Under review.
Benoit, J.-P. and Dubra, J. (2019). ‘Apparent Bias: What Does Attitude Polarization Show?’ International Economic Review 60(4), 1675–703.
Boyd, K. (2019). ‘Epistemically Pernicious Groups and the Groupstrapping Problem.’ Social Epistemology 33(1), 61–73.
Christensen, D. (2007). ‘Epistemology of Disagreement: The Good News.’ Philosophical Review 116(2), 187–217.
Christensen, D. (2009). ‘Disagreement as Evidence.’ Philosophy Compass 4(5), 756–67.
Christensen, D. (2010). ‘Higher Order Evidence.’ Philosophy and Phenomenological Research 81(1), 185–215.
Christensen, D. (2014). ‘Disagreement and Public Controversy.’ In Lackey, J. (ed.), Essays in Collective Epistemology. Oxford: Oxford University Press.
Cohen, J. (1989). ‘Deliberation and Democratic Legitimacy.’ In Hamlin, A. and Pettit, P. (eds), The Good Polity, pp. 17–34. New York, NY: Blackwell.
Cook, J. and Lewandowsky, S. (2016). ‘Rational Irrationality: Modeling Climate Change Belief Polarization Using Bayesian Networks.’ Topics in Cognitive Science 8(1), 160–79.
Dallman, J. (2017). ‘When Obstinacy is a Better (Cognitive) Policy.’ Philosophers’ Imprint 17(24), 1–17.
Dewey, J. (1927). The Public and its Problems. Chicago, IL: Swallow Press.
Dorst, K. (2019). ‘Why Rational People Polarize.’ The Phenomenal World, 27 January 2019. https://phenomenalworld.org/analysis/why-rational-people-polarize.
Dorst, K. (MS). ‘Rational Polarization.’ Manuscript, October 2021. https://philpapers.org/archive/DORRP-2.pdf.
Elga, A. (2007). ‘Reflection and Disagreement.’ Noûs 41(3), 478–502.
Fricker, E. (2006). ‘Testimony and Epistemic Autonomy.’ In Lackey, J. and Sosa, E. (eds), The Epistemology of Testimony, pp. 225–50. Oxford: Oxford University Press.
Galton, F. (1949). ‘Vox Populi.’ Nature 75, 450–1.
Goodin, R. (2003). Reflective Democracy. Oxford: Oxford University Press.
Habermas, J. (1996). Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy. Translated by W. Rehg. Cambridge: Polity Press.
Hardwig, J. (1985). ‘Epistemic Dependence.’ Journal of Philosophy 82(7), 335–49.
Henderson, L. and Gebharter, A. (2021). ‘The Role of Source Reliability in Belief Polarization.’ Synthese 199, 10253–76.
Houston, D.A. and Fazio, R.H. (1989). ‘Biased Processing as a Function of Attitude Accessibility: Making Objective Judgments Subjectively.’ Social Cognition 7, 51–66.
Jern, A., Chang, K.-M. and Kemp, C. (2014). ‘Belief Polarization is not Always Irrational.’ Psychological Review 121(2), 206–24.
Kelly, T. (2008). ‘Disagreement, Dogmatism, and Belief Polarization.’ Journal of Philosophy 105(10), 611–33.
Kelly, T. (2010). ‘Peer Disagreement and Higher Order Evidence.’ In Feldman, R. and Warfield, T. (eds), Disagreement. Oxford: Oxford University Press.
King, N.L. (2012). ‘Disagreement: What's the Problem? Or, a Good Peer is Hard to Find.’ Philosophy and Phenomenological Research 85(2), 249–72.
Koehler, J.J. (1993). ‘The Influence of Prior Beliefs on Scientific Judgment of Evidence Quality.’ Organizational Behavior and Human Decision Processes 56, 28–55.
Lackey, J. (2021). ‘Echo Chambers, Fake News, and Social Epistemology.’ In Bernecker, S., Flowerree, A.K. and Grundmann, T. (eds), The Epistemology of Fake News. Oxford: Oxford University Press.
Landemore, H. (2012). Democratic Reason: Politics, Collective Intelligence, and the Rule of the Many. Princeton, NJ: Princeton University Press.
Lasonen-Aarnio, M. (2014). ‘Higher-Order Evidence and the Limits of Defeat.’ Philosophy and Phenomenological Research 88(2), 314–45.
Lord, C.G., Ross, L. and Lepper, M.R. (1979). ‘Biased Assimilation and Attitude Polarization: The Effects of Prior Theories on Subsequently Considered Evidence.’ Journal of Personality and Social Psychology 37(11), 2098–109.
Lynch, M.P. (2016). The Internet of Us: Knowing More and Understanding Less in the Age of Big Data. New York, NY: Norton.
Miller, A.G., McHoskey, J.W., Bane, C.M. and Dowd, T.G. (1993). ‘The Attitude Polarization Phenomenon: The Role of Response Measure, Attitude Extremity, and Behavioral Consequences of Reported Attitude Change.’ Journal of Personality and Social Psychology 65, 561–74.
Munro, G.D. and Ditto, P.H. (1997). ‘Biased Assimilation, Attitude Polarization, and Affect in Reactions to Stereotype-Relevant Scientific Information.’ Personality and Social Psychology Bulletin 23, 636–53.
Nguyen, C.T. (2020a). ‘Cognitive Islands and Runaway Echo Chambers: Problems for Epistemic Dependence on Experts.’ Synthese 197, 2803–21.
Nguyen, C.T. (2020b). ‘Echo Chambers and Epistemic Bubbles.’ Episteme 17(2), 141–61.
Nielsen, M. and Stewart, R.T. (2021). ‘Persistent Disagreement and Polarization in a Bayesian Setting.’ British Journal for the Philosophy of Science 72(1), 51–78.
O'Connor, C. and Weatherall, J.O. (2018). ‘Scientific Polarization.’ European Journal for Philosophy of Science 8, 855–75.
Pallavicini, J., Hallsson, B. and Kappel, K. (2021). ‘Polarization in Groups of Bayesian Agents.’ Synthese 198, 1–55.
Singer, D.J., Bramson, A., Grim, P., Holman, B., Jung, J., Kovaka, K., Ranginani, A. and Berger, W.J. (2019). ‘Rational Social and Political Polarization.’ Philosophical Studies 176(9), 2243–67.
Sunstein, C.R. (2002). ‘The Law of Group Polarization.’ Journal of Political Philosophy 10(2), 175–95.
Sunstein, C.R. (2017). #Republic: Divided Democracy in the Age of Social Media. Princeton, NJ: Princeton University Press.
Talisse, R.C. (2021). ‘Problems of Polarization.’ In Edenberg, E. and Hannon, M. (eds), Political Epistemology. Oxford: Oxford University Press.
Van Fraassen, B. (1984). ‘Belief and the Will.’ Journal of Philosophy 81(5), 235–56.
Weatherall, J.O. and O'Connor, C. (2020). ‘Endogenous Epistemic Factionalization.’ Synthese 198(Suppl. 25), 6179–200.
White, R. (2005). ‘Epistemic Permissiveness.’ Philosophical Perspectives 19(1), 445–59.
Williams, E.C. (2021). ‘Evidentialism and Belief Polarization.’ Synthese 198(8), 7165–96.
Worsnip, A. (2019). ‘The Obligation to Diversify One's Sources: Against Epistemic Partisanship in the Consumption of News Media.’ In Fox, C. and Saunders, J. (eds), Media Ethics, Free Speech, and the Requirements of Democracy. New York, NY: Routledge.