
The Value of Independence between Experts: Epistemic Autonomy and Different Perspectives

Published online by Cambridge University Press:  12 September 2024

Jack Wright*
Affiliation:
Institutionen för filosofi, lingvistik och vetenskapsteori, Göteborgs Universitet, Box 200, 40530 Göteborg, Sweden

Abstract

I offer two interpretations of independence between experts: (i) independence as deciding autonomously, and (ii) independence as having different perspectives. I argue that when experts are grouped together, independence of both kinds is valuable for the same reason: both reduce the likelihood of erroneous consensus by enabling a greater variety of critical viewpoints. In offering this argument, I show that a purported proof from Finnur Dellsén that groups of more autonomous experts are more reliable does not work. It relies on a flawed ceteris paribus assumption, as well as a false equivalence between autonomy and probabilistic independence. A purely formal proof that more autonomous experts are more reliable is in fact not possible – substantive claims about how more autonomous groups reason are required. My alternative argument for the value of autonomy between experts rests on the claim that groups that triangulate a greater range of critical viewpoints will be less likely to accept hypotheses in error. As well as clarifying what makes autonomy between experts valuable, this mechanism of critical triangulation gives us reason to value groups of experts that cover a wide range of relevant skills and knowledge. This justifies my second interpretation of expert independence.

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2024. Published by Cambridge University Press

A common intuition holds that the advice of multiple experts is more useful when those experts are more independent from one another (Goldman 2001; Gundersen and Holst 2022; Moore 2017). But what does this independence entail? And why is it valuable?

Alvin Goldman (2001) laid the foundations for the epistemology of expertise by outlining how novices can justifiably rely on experts.Footnote 1 Much of Goldman's focus was on the reliability of single experts. Yet it is common for individuals, as well as states, firms and other organisations, to also solicit advice from groups of experts – the UK's Scientific Advisory Group for Emergencies, the US's Council of Economic Advisers and the UN's Intergovernmental Panel on Climate Change are all examples of such groups. It is worth asking, therefore: are certain ways of collecting experts together preferable to others? By attending to what it means for experts to be independent from one another, I will argue that the answer to this question is yes.Footnote 2

I clarify two interpretations of independence between experts: (i) deciding autonomously, and (ii) coming at a problem from different perspectives. I argue that both are valuable because they leverage a benefit of cognitive diversity. Groups of experts that decide more autonomously from one another and groups that contain experts with a wider range of relevant perspectives are less likely to accept hypotheses in error because they can draw on a greater range of critical viewpoints to challenge those hypotheses.

Finnur Dellsén (2020) argues for the value of my first interpretation of independence – (i) experts deciding autonomously – by other means. He equates greater autonomy between experts with greater independence in the probabilities that they will accept a hypothesis. He then purports to prove that a hypothesis is more likely to be true when a group of more autonomous experts agree that it is, than when a group of less autonomous experts agree that it is. I show that no such conclusion can be drawn. What Dellsén attempts is in fact impossible. There is nothing about probabilistic independence that should entail that more independent groups are more reliable – so there is no way to formally prove that they are. To link epistemic autonomy or probabilistic independence to reliability, a substantive claim about how groups reason is required. Dellsén's proof inadvertently smuggles such a substantive claim in as a ceteris paribus assumption.

I offer an alternative, preferable, way of showing why (i) autonomy between experts is valuable. I start with a substantive claim: groups that draw on a wider set of critical viewpoints on a hypothesis are less likely to accept that hypothesis in error – a mechanism I call critical triangulation. I then show that groups of more autonomous experts leverage this mechanism better. They better utilise the full cognitive diversity of the group. This clarifies what it is about autonomy between experts that is valuable.Footnote 3 It also highlights a link between expert autonomy and perspective. If the benefit of autonomy is that it enables better use of the viewpoints within a group, then it is important to consider the composition of perspectives in that group. Critical triangulation, thus, also offers a reason to value groups that contain more diverse relevant perspectives. This justifies the value of my second interpretation of independence between experts – as offering (ii) different perspectives.

The paper proceeds as follows. In section 1, I summarise Dellsén's argument that greater autonomy between experts leads to greater reliability, before showing why it does not work. In section 2, I show that autonomy between experts is valuable, just not for the reason that Dellsén claimed. In section 3, I argue that a similar argument can be used to show that experts covering a wider range of relevant perspectives are also valuable. Section 4 concludes with a brief discussion of the implications of my argument for how expert groups should be composed.

1. Autonomy between experts

To work out what it means for experts to be independent from one another, it is pertinent to consider how expert opinions are actually combined.Footnote 4 Norway and Sweden have a long tradition of appointing ad hoc expert committees to advise on specific topics. After a period of research and deliberation, Norges offentlige utredninger and Statens offentliga utredningar compose reports that are typically used as a basis for legislation. These reports often articulate a consensus position.Footnote 5 Consensus reports like these are not unique to Scandinavia. It is often the case that expert committees report consensus, and with good reason. Consensus reports are thought to have more impact, and denying consensus is a common way of undercutting expert advice (Oreskes and Conway 2010).

My aim here is not to discuss the virtues of consensus or the details of Scandinavian expert commissions.Footnote 6 What is interesting about the consensus reports produced by these commissions for my purposes is one potential cost. Given the value in being able to present consensus, there can sometimes be pressure on individual experts within commissions to acquiesce to the views of others. This seems to clash with the intuition I am interested in: that experts in groups should be independent from one another. This suggests that one way of interpreting independence between experts is the opposite of acquiescence: experts deciding autonomously from one another.

Within epistemology, epistemic autonomy has attracted much attention.Footnote 7 Dellsén (2020: 349) offers a way of applying the concept to experts:

Epistemic autonomy. S is epistemically autonomous with respect to a proposition P to the extent that S's expert acceptance regarding P is not directly influenced by other agents’ expert acceptance regarding P.

This seems like a good first interpretation of independence between experts as it guards against the acquiescence that occurs in false consensus. Interpreted as epistemic autonomy, independence entails that experts think for themselves.

What is it about epistemic autonomy between experts that is valuable? Dellsén claims to have an answer.Footnote 8 He argues that other things being equal, agreement among more autonomous experts is more reliable.

1.1. Proving the reliability of autonomous experts?

Dellsén imagines individual experts as functions that map hypotheses and bundles of evidence to the binary outcomes: accept or not. $X_i(H, E)$ is the outcome that expert $X_i$ accepts a hypothesis H in light of the evidence E. $P(X_i(H, E))$ is the probability that expert $X_i$ will accept H in light of E. Dellsén equates degrees of epistemic autonomy among experts with degrees of probabilistic independence between the $X_i(H, E)$ for a group of experts $X := \{X_i : i \in [1, n]\}$. He then compares two groups, $X^A := \{X_i^A : i \in [1, n]\}$ and $X^B := \{X_i^B : i \in [1, n]\}$, where the $X_i^B(H, E)$ are more (positively) dependent on each other than the $X_i^A(H, E)$. This means that:Footnote 9

(1)$$\frac{P\left(\bigwedge_1^n X_i^A(H, E)\right)}{\prod_1^n P\left(X_i^A(H, E)\right)} < \frac{P\left(\bigwedge_1^n X_i^B(H, E)\right)}{\prod_1^n P\left(X_i^B(H, E)\right)}$$
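
To make the graded dependence measure in footnote 9 concrete, here is a minimal Python sketch. It is my own illustration, not part of Dellsén's argument; the two joint distributions and all probability values are invented, chosen so that each expert has the same marginal probability (0.6) of accepting in both groups.

```python
from math import prod

# Toy joint distributions over two experts' acceptance outcomes (True = the expert accepts H given E).
joint_A = {(True, True): 0.36, (True, False): 0.24, (False, True): 0.24, (False, False): 0.16}
joint_B = {(True, True): 0.50, (True, False): 0.10, (False, True): 0.10, (False, False): 0.30}

def dependence_ratio(joint):
    """P(every expert accepts) / product of P(expert i accepts); 1 = independence, >1 = positive dependence."""
    n = len(next(iter(joint)))
    p_all = joint[(True,) * n]
    marginals = [sum(p for outcome, p in joint.items() if outcome[i]) for i in range(n)]
    return p_all / prod(marginals)

print(dependence_ratio(joint_A))  # 1.0   -> the experts in the first group accept independently
print(dependence_ratio(joint_B))  # ~1.39 -> the experts in the second group are positively dependent
```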

Dellsén then makes two assumptions. First, given H is true, the two groups are equally likely to agree that H is true. Second, the experts in each group can be matched pairwise so that the ith expert in each group will be equally likely to accept H in light of E. Stated formally:

Assumption 1. Equal agreement on truth

$$P\left(\bigwedge_1^n X_i^A(H, E) \,\Big\vert\, H\right) = P\left(\bigwedge_1^n X_i^B(H, E) \,\Big\vert\, H\right)$$

Assumption 2. Pairwise equal likelihood of acceptance

$$P\left(X_i^A(H, E)\right) = P\left(X_i^B(H, E)\right), \; \forall i \in [1, n]$$

Both of these assumptions are stated as equalising the prior distributions of expertise between the two groups. The aim is to isolate the effects of the different levels of dependence. (I will argue, below, that assumption 1 actually does much more than this and that it is the source of the problems for Dellsén's argument.)

Given these assumptions and (1), a straightforward application of Bayes's Theorem gives:Footnote 10

(4)$$P\left(H \,\Big\vert \bigwedge_1^n X_i^A(H, E)\right) > P\left(H \,\Big\vert \bigwedge_1^n X_i^B(H, E)\right)$$

Thus, the probability that H is true given that those in $X^A$ all accept H is higher than the probability that H is true given that those in $X^B$ all accept H. In words, a hypothesis is more likely to be true when a group of more epistemically autonomous experts agree that it is than when a group of less epistemically autonomous experts agree that it is.
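
The derivation can be checked numerically. The following sketch is my own illustration: the conditional acceptance probabilities are invented, chosen so that assumptions 1 and 2 hold while $X^B$ is the more dependent group, and the posterior in (4) is then computed directly by Bayes's theorem.

```python
# Two experts per group. Each expert accepts H with probability 0.8 when H is true and 0.3 when H is false,
# so assumption 2 holds: every expert accepts with 0.5*0.8 + 0.5*0.3 = 0.55 overall. Group A's experts decide
# conditionally independently; group B's experts jointly accept a *false* H with probability 0.20 rather than
# 0.3**2 = 0.09, which makes them more dependent while leaving assumption 1 untouched. All numbers are invented.

P_H = 0.5                                        # prior probability that H is true
p_agree_given_true = 0.8 ** 2                    # P(both accept | H), identical for A and B (assumption 1)
p_agree_given_false = {"A": 0.3 ** 2, "B": 0.20}

p_single = P_H * 0.8 + (1 - P_H) * 0.3           # P(an individual expert accepts) = 0.55 in both groups

for group in ("A", "B"):
    p_agree = P_H * p_agree_given_true + (1 - P_H) * p_agree_given_false[group]
    dependence = p_agree / p_single ** 2         # the graded measure from footnote 9
    posterior = P_H * p_agree_given_true / p_agree   # P(H | both accept), by Bayes's theorem
    print(group, round(dependence, 3), round(posterior, 3))

# Prints: A 1.207 0.877  /  B 1.388 0.762
# B is the more dependent group and, given the two assumptions, its consensus is the weaker indicator
# of H's truth -- exactly inequality (4).
```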

1.2. An impossible task

If correct, the proof above offers a powerful result. By rearranging a few equations, we seem to have shown that groups of epistemically autonomous experts are, qua epistemically autonomous experts, more reliable.

But it is also a very surprising result. It seems odd that probabilistic independence (which Dellsén treats as coextensive with epistemic autonomy) should entail greater reliability. After all, if two outcomes, a and b, are positively dependent, the only thing this tells us is that the probability of both occurring is greater than the product of the probabilities of each occurring [$P(a \wedge b) > P(a)P(b)$]. On its own, more or less dependence between a and b should have no bearing on a third outcome, c. For that to be the case, some further claims about how a and b relate to c would have to be made. Setting a and b to whether or not individuals accept a hypothesis and c to whether or not that hypothesis is true does not change this fact. Further claims about how the acceptances of the individuals involved relate to the truth of the hypothesis must be made for greater dependence between $X_i(H, E)$ and $X_j(H, E)$ to bear on P(H).

How does this square with the proof Dellsén offers, which is formally speaking correct? Dellsén's proof works because he introduces the required relation between $X_i(H, E)$, $X_j(H, E)$ and H in his first assumption, equal agreement on truth. But rather than giving a substantive reason to accept it, he presents it as merely holding the prior distributions of expertise even between the two groups. This, however, is not correct.

1.3. An uneven comparison

Pairwise equal likelihood of acceptance holds the prior probabilities of each individual expert's acceptance of H equal between the two groups. This means that any difference in the probabilities that the groups collectively accept H is due to the differences in how the individuals are combined. Given the aim of comparing the impact of different ways of combining experts, pairwise equal likelihood of acceptance is thus a fair ceteris paribus assumption.

Equal agreement on truth is also offered as a ceteris paribus assumption. Dellsén's aim is to ensure that the only difference between the two groups is their relative dependence, and not differences in levels of expertise. As he puts it:

[I]f one of the group[s] is more likely to agree that H is true when H is indeed true, then this would already favor that group's consensus over the other group's as an indicator of the truth regarding H. (Dellsén 2020: 355)

To add force to this point, Dellsén points out that one way in which equal agreement on truth could be violated is if one of the groups was more knowledgeable about the issues around H. Dellsén is trying to evaluate the impact of relative dependence between experts. If one group has more relevant expertise than the other, then that is going to stack the evaluation in their favour.

The problem is that equal agreement on truth does more than simply ensure that neither group has more relevant expertise for H than the other. If all it were doing was keeping levels of expertise fixed then the converse, equal agreement on falsity [$P\left(\bigwedge_1^n X_i^A(H, E) \vert \neg H\right) = P\left(\bigwedge_1^n X_i^B(H, E) \vert \neg H\right)$], should be similarly neutral. But assuming equal agreement on falsity would completely reverse the result.Footnote 11 So what is actually going on?

Once pairwise equal likelihood of acceptance is assumed, more dependent groups will be more likely to agree in general. That is, given assumption 2, (1) implies:

(5)$$P\left(\bigwedge_1^n X_i^A(H, E)\right) < P\left(\bigwedge_1^n X_i^B(H, E)\right)$$

This means that it is not true that ‘if one of the group[s] is more likely to agree that H is true when H is indeed true, then this would already favor that group's consensus over the other group's as an indicator of the truth regarding H’. If one group were potentially more likely to agree when H is true but also potentially more likely to agree when H is false, as is the case for more dependent groups of experts, then their agreeing on H gives us no better guide as to whether H is true.

Moreover, assuming that the hypothesis in question can only be true or not true, then (5) can be split up into:

(6)$$\begin{aligned} & P\left(\bigwedge_1^n X_i^A(H, E) \wedge H\right) + P\left(\bigwedge_1^n X_i^A(H, E) \wedge \neg H\right) \\ & \quad < P\left(\bigwedge_1^n X_i^B(H, E) \wedge H\right) + P\left(\bigwedge_1^n X_i^B(H, E) \wedge \neg H\right) \end{aligned}$$

In words, the higher probability that those in $X^B$ will agree on H in general could come from a higher probability of them agreeing when H is false and/or a higher probability of them agreeing when H is true. Nothing about the degree of independence in the probability of accepting H can tell us which way this will go. In Dellsén's result, however, equal agreement on truth assumes that dependent groups are no more likely to agree on H when it is true, meaning that they must be more likely to agree when H is false.Footnote 12
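
A short sketch of how (6) plays out for the illustrative numbers in the check above (again my own construction, not Dellsén's): once equal agreement on truth pins down the first conjunct, all of $X^B$'s extra probability of agreement is forced onto the false branch.

```python
# Decomposition (6) with the illustrative numbers from the previous sketch. Equal agreement on truth fixes
# P(both accept AND H) at the same value for both groups, so B's higher overall probability of agreement
# can only come from agreement on a false H.

P_H = 0.5
p_agree_and_true = P_H * 0.8 ** 2                        # 0.32 for both groups
p_agree_and_false = {"A": (1 - P_H) * 0.3 ** 2,          # 0.045
                     "B": (1 - P_H) * 0.20}              # 0.100

for group in ("A", "B"):
    total = p_agree_and_true + p_agree_and_false[group]
    print(group, p_agree_and_true, round(p_agree_and_false[group], 3), round(total, 3))

# Prints: A 0.32 0.045 0.365  /  B 0.32 0.1 0.42
# The only place B's extra 0.055 of agreement probability can sit is on the not-H branch.
```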

Equal agreement on truth does not just hold the levels of expertise around H even between both groups. Groups of dependent experts are more likely to agree. Adopting equal agreement on truth entails assuming that this can only apply to false hypotheses. Dellsén gives no justification for assuming this. Nor should such an assumption be made if the goal is to evenly compare the reliability of both groups.Footnote 13 To justify an assumption of this sort, a substantive claim about how more and less dependent groups reason would be required.Footnote 14

Thus, although Dellsén's conclusion seems to follow from the nature of probabilistic independence, the work is being done by an assumption that is insufficiently justified. This offers a cautionary tale for purely formal epistemological arguments – one with significance because Dellsén's result has gained traction.Footnote 15 I will argue that autonomy between experts is valuable. My argument differs from Dellsén's purely formal approach, however, by stating clearly the mechanism that makes autonomy valuable.

2. Independence as epistemic autonomy reconsidered

Dellsén's proof that epistemic autonomy between experts begets greater reliability does not work. Nonetheless, because its opposite – acquiescence – seems to clash so clearly with independence, autonomy between experts warrants further examination. In what follows I offer an alternative reason to value it. But first a clarification.

2.1. Autonomy and independence

In the definition of expert epistemic autonomy, above, the word ‘direct’ is key. Recall (emphasis added):

Epistemic autonomy. S is epistemically autonomous with respect to a proposition P to the extent that S's expert acceptance regarding P is not directly influenced by other agents’ expert acceptance regarding P.

An expert is autonomous when they do not accept H simply because another tells them that they should. It is possible, however, for autonomous experts to be indirectly influenced by one another. By explaining her reasoning or ways of combining the available evidence, the jth expert, Jill, may convince the ith expert, Iris, to accept H without undercutting Iris's epistemic autonomy. As long as Iris utilises her own critical capacities to analyse what Jill tells her before accepting H, and does not simply accept H because Jill does, then she does not count as being directly influenced by Jill.

The distinction between direct and indirect influence is important as complete isolation between experts does not seem desirable or even possible. Experts within groups will very often have existing connections to one another and interaction between them may help each clarify and refine their own views.Footnote 16 This distinction disappears, however, when Dellsén moves to probabilistic independence. If Jill can influence Iris's acceptance of H then $X_{Iris}(H, E)$ and $X_{Jill}(H, E)$ will be dependent, even if the influence Jill might have over Iris is only indirect.Footnote 17 There are even situations in which greater dependence between $X_{Iris}(H, E)$ and $X_{Jill}(H, E)$ can lead to greater reliability. In a group that covers a wide range of relevant expertise, some dependence in the probabilities that the experts will accept hypotheses may aid the group to reliably agree on true hypotheses by triangulating their different perspectives. Iris and Jill may be able to put their perspectives together to rule out alternative hypotheses and consequently both agree on H, when independently neither could rule out enough alternatives. This can be the case without any loss of epistemic autonomy. Jill may be able to explain why an alternative hypothesis cannot be true using knowledge Iris already has but via an argument that she had not thought of (and vice versa).

Thus, it is not true that greater epistemic autonomy is coextensive with greater probabilistic independence. Moreover, in some cases, it is possible for greater probabilistic dependence to lead to greater reliability. Given this, I will stick to Dellsén's original definition of epistemic autonomy and not talk in terms of probabilistic independence.

2.2. Critical triangulation

In discussing the intuition behind his defence of epistemic autonomy, Dellsén draws on the nineteenth-century English natural philosopher William Whewell's idea of consilience: if a theory can be supported by evidence from many different directions (‘different classes of facts’, Whewell 1858: 88), then that offers a reason to infer the truth of that theory. If Whewell is correct, then something similar should apply to testimonial evidence. Testimony in favour of a hypothesis from autonomous (or more autonomous) sources of expertise should be taken to provide a strong (or stronger) reason to believe that hypothesis. This is an interesting suggestion and I will show how consilience can be used to highlight the value of expert autonomy. The problem with Dellsén's proof is that it missed a key part of Whewell's idea: difference.

Whewell's suggestion is that taking evidence from different directions gives more reason to believe a theory than taking evidence from one direction. To apply this to the case of epistemic autonomy, testimony from (more) autonomous experts must be analogous with evidence from (more) different directions. The first thing this requires is that the experts involved are different in some relevant sense. Ten identical experts deciding autonomously from one another do not fit the logic of consilience. Their autonomous testimony would be more akin to checking the same evidence ten times, rather than gathering evidence from ten different directions.Footnote 18

Taking this on board, imagine again two groups of experts that decide whether to accept a hypothesis given a bundle of evidence. For consistency, retain Dellsén's labels. Call the groups $X^A := \{X_i^A : i \in [1, n]\}$ and $X^B := \{X_i^B : i \in [1, n]\}$, the hypothesis H and the bundle of evidence E. Imagine also that the experts in $X^A$ are more epistemically autonomous than those in $X^B$.

To add the difference that drives consilience, assume that the experts within both groups differ from one another in the skills and background knowledge they use to assess H (the skills and knowledge of $X_i^A$ differ from $X_j^A$ and so on). Adding such an assumption is not particularly costly. It is clear that experts do often possess different collections of skills and knowledge. Most of those that discuss expert testimony assume as much and assume that this variation is compatible with genuine expertise.

Because their skills and knowledge differ, the experts within both groups offer different perspectives on E and H. It is these different perspectives that seem analogous to Whewell's evidence from different directions. A group that can make better use of the range of perspectives of its members draws on a wider set of interpretations of the available evidence. The logic of consilience then suggests that we have greater reason to believe any conclusions such a group agree on.

How might epistemic autonomy play a role in this? Imagine our simple case with two experts, Iris and Jill, again. What is at issue when Iris accepts H simply because Jill asserts that she should? One issue seems to be that in accepting H on Jill's say so, Iris's own skills and knowledge are bypassed. She no longer thinks with her own mind but instead relies on Jill's.Footnote 19 If Iris and Jill are truly experts on H (as assumed) and they differ in the skills and knowledge they bring to the assessment of H, then there may be reasons against H that are accessible to Iris but not Jill. These reasons are ignored when Iris's skills and knowledge are bypassed. And H is, consequently, scrutinised from fewer perspectives than it might have been.

In more general terms, any group of experts that bypasses the skills and knowledge of some of its members utilises fewer expert perspectives than it might have done. Such groups use a narrower range of skills and knowledge in assessing H, and so consider fewer critical viewpoints on H than they might have.Footnote 20 Such groups will consequently be more likely to incorrectly accept H when it is false because they are less likely to notice an error or poor judgment in the reasoning from E to H. We can summarise this idea as the following mechanism:

Critical triangulation. Other things being equal, groups that draw on a greater range of relevant critical viewpoints on a hypothesis will be less likely to accept that hypothesis in error.

If the mechanism of critical triangulation is correct, it gives us a reason to value epistemic autonomy between experts.Footnote 21 More specifically, when (a) the experts within $X^A$ and $X^B$ differ in the skills and knowledge they possess, (b) both groups are made up of experts with the same distribution of skills and knowledge, and (c) the experts in $X^A$ are more epistemically autonomous than those in $X^B$, critical triangulation gives us reason to believe that those in $X^B$ will be more likely to accept H in error.Footnote 22 This is because less epistemically autonomous groups utilise a narrower range of the perspectives available to them than more epistemically autonomous groups. And, given two groups that contain equivalent perspectives, the group that utilises fewer of those perspectives will develop fewer critical viewpoints on H, and so, via critical triangulation, will be more likely to accept H in error.
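
To illustrate the mechanism, here is a toy model of my own, with hypothetical numbers and perspective assignments that are not from the paper: suppose a false hypothesis contains a flaw of one of several types, each expert's perspective lets her spot some subset of those types, and the group accepts the false hypothesis only if no utilised perspective covers the flaw.

```python
FLAW_TYPES = set(range(10))   # kinds of error a false hypothesis might contain (purely illustrative)

# Hypothetical perspectives: the flaw types each expert's skills and knowledge let her spot.
experts = [{0, 1, 2}, {2, 3, 4}, {4, 5, 6}, {6, 7, 8}, {8, 9, 0}]

def p_accept_false_h(utilised_perspectives):
    """Chance a flawed hypothesis escapes every critical viewpoint the group actually uses,
    assuming the flaw's type is uniformly distributed over FLAW_TYPES."""
    covered = set().union(*utilised_perspectives)
    return len(FLAW_TYPES - covered) / len(FLAW_TYPES)

# More autonomous group: every member's perspective is brought to bear.
print(p_accept_false_h(experts))       # 0.0 -> every flaw type is covered
# Less autonomous group: two members acquiesce, so their perspectives are bypassed.
print(p_accept_false_h(experts[:3]))   # 0.3 -> flaw types 7, 8 and 9 go unchecked
```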

2.3. Technical aside

In the language of probability, critical triangulation gives us reason to believe that the combined probability of groups of less autonomous experts agreeing on H and H being false is higher than the equivalent probability for more autonomous groups. That is, given critical triangulation and (a–c):

(7)$$P\left(\bigwedge_1^n X_i^A(H, E) \wedge \neg H\right) < P\left(\bigwedge_1^n X_i^B(H, E) \wedge \neg H\right)$$

This is not equivalent to saying that the probability that H is true given that more epistemically autonomous groups of experts agree that it is, is greater than for less epistemically autonomous groups – i.e. (4). I suggest that critical triangulation and (7) give us reason to value epistemic autonomy between experts. But, if the less autonomous group of experts ($X^B$) are more likely to agree on H in general – this seems likely because both groups have the same spread of expertise but experts in the less autonomous group rely on one another more – then (7) is weaker than (4).Footnote 23 How much does this weaken the conclusion that epistemic autonomy between experts is valuable?

The first thing to note is that (7) leads to (4) under certain conditions. If more autonomy between experts decreases the chances of false agreement at a greater rate than it decreases the chances of agreement, then (4) follows.Footnote 24 Alternatively, (4) also follows if the difference between the probability that the more autonomous experts will agree and the probability that they will falsely agree is larger than or equal to the same difference for the less autonomous group.Footnote 25 Both of these conditions will be satisfied if the extra perspectives that are available in $X^A$ – due to the fact that the experts think more for themselves – decrease false agreement more than they decrease agreement. This could be justified by assuming a veritistic understanding of expertise – entailing that the perspectives the experts in both groups bring track truth better than average (Goldman 2001). Thus, although any extra expert perspectives are likely to decrease the chances of agreement, those perspectives should hopefully decrease the chance of agreement when H is false even more.
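
A small check of how (7) relates to (4), again my own illustration with invented group-level probabilities: given P(all accept) and P(all accept ∧ ¬H) for each group, the posteriors in (4) can be computed directly, and whether (4) follows from (7) depends on where the less autonomous group's extra agreement falls.

```python
def stronger_claim_holds(p_agree_A, p_false_agree_A, p_agree_B, p_false_agree_B):
    """Given P(all accept) and P(all accept AND not-H) for the more autonomous group A and the less
    autonomous group B, check whether the stronger comparison (4) holds alongside (7)."""
    assert p_false_agree_A < p_false_agree_B            # this is just inequality (7)
    posterior_A = 1 - p_false_agree_A / p_agree_A       # P(H | all of X^A accept)
    posterior_B = 1 - p_false_agree_B / p_agree_B       # P(H | all of X^B accept)
    return posterior_A > posterior_B

# Illustrative numbers from the earlier toy construction: B's extra agreement is all false agreement.
print(stronger_claim_holds(0.365, 0.045, 0.42, 0.10))    # True  -> (4) holds as well
# If B's extra agreement were almost entirely agreement on true hypotheses, (7) alone would not secure (4).
print(stronger_claim_holds(0.365, 0.045, 0.47, 0.046))   # False -> (7) holds but (4) fails
```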

Given that (4) can, under these conditions, be derived from (7), why do I not make the stronger claim that hypotheses are more likely to be true when more autonomous groups of experts agree on them, than when less autonomous experts agree on them? The reason why I conclude with (7) rather than adding ‘and under most conditions this also leads to (4)’ is that the idea within (7) is simpler and easier to work with. Less autonomous groups are more likely to agree on falsehoods. That is enough of a reason to value autonomy between experts. The reader may disagree on this point. In which case, the more detailed reasoning above suggests that under plausible conditions (4) also follows. It is just that this requires more complicated checks, which would be impractical in reality. One of the virtues of the argument in 2.2 is that it clarifies, in a simple manner, the key value of epistemic autonomy: that it lowers the chances of false agreement via critical triangulation.

2.4. The value of autonomy between experts

Critical triangulation offers a reason to believe that groups of more epistemically autonomous experts are less likely to reach consensus in error. This is a conclusion similar to the one that Dellsén aimed at but failed to show. It also clarifies what it is about epistemic autonomy that is valuable: it reduces the risk of incorrect consensus by enabling the full range of differences within expert groups to be brought to bear.

This conclusion does not follow from the logical features of epistemic autonomy alone. Rather, it relies on the claim that critical triangulation happens. I have given reasons to believe that critical triangulation is a factor in group reasoning. But there is no guarantee that these reasons correctly identify what happens in all cases. There may also be other mechanisms at play that pull in the opposite direction. This should not, however, undermine the value of my point. It should not be surprising that a consequential conclusion – that epistemic autonomy between experts is valuable – should rest on a substantive claim. It is a virtue of my argument that I clarify the substantive claim that underpins the argument.

On my telling, the value of epistemic autonomy rests in ensuring that all the viewpoints in a group are heard. In addition to clarifying its value, this highlights a limitation of epistemic autonomy between experts. No matter how autonomously two experts decide, if they are trained in the same skills, at the same institution and under the same people, the perspectives they offer are likely to be broadly similar. If the power of epistemic autonomy lies in enabling difference, then the significance of its effects depends on how different the experts in a group are. This opens the door to a second interpretation of independence between experts: as offering different perspectives.

3. Independence as different perspectives

Interpreting independence between experts in groups as epistemic autonomy – experts deciding what to accept themselves – developed out of the idea that independence and acquiescence are at odds with one another. Independence seems to have something to do with autonomy as the counter to acquiescence. Although this highlights autonomy as one interpretation of expert independence – one that is often implied in practical discussions on how to form expert groups – it is not the only one. Independence may also entail complete isolation, experts holding different values or experts coming from different geographical locations, universities or disciplines. Given that complete isolation seems an implausible basis for collecting expert views together, I focus on the others. One factor that is often considered in composing expert committees, and that I suggest captures the epistemically significant parts of these other ways of understanding independence, is perspective.

Let the reasonable investigative perspectives on a topic be the modes of pursuing research into that topic that could plausibly be pursued by an unbiased agent motivated only by the search for new knowledge. The reasonable investigative perspectives on why some people chose to pursue counter-insurgent action during the civil war in El Salvador might include, for example, interviews with participants, critically examining the archives of and data collected by domestic and international media and the government, creating a model of strategic interaction of the different participants, as well as many other strategies (Wood 2003).

Imagine our two groups of experts, $X^A$ and $X^B$, again with H, E, $X_i(H, E)$ and $P(X_i(H, E))$ defined as before. Now assume that the experts in $X^A$ have skills and knowledge that draw from a wider range of reasonable investigative perspectives concerning H than those in $X^B$. (If an example is helpful, imagine that H concerns how a policy will impact unemployment in the long term, and that $X^A$ contains a macroeconomist from the US Federal Reserve, an experimental economist and a quantitative sociologist, while $X^B$ contains three macroeconomists from the Federal Reserve.) Given the wider range of reasonable investigative perspectives in $X^A$, the experts in $X^B$ are less likely to have as wide a range of critical resources for assessing E as those in $X^A$, simply because their modes of analysis are more likely to cross over with others in the group. Because of critical triangulation, the experts in $X^A$ will consequently be less likely than those in $X^B$ to accept H in error.
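
The same toy model used in section 2.2 can illustrate this point; the perspective assignments below are hypothetical and purely for illustration. Both groups here use all of their members' perspectives, but one group's perspectives largely overlap while the other's are drawn from different investigative traditions.

```python
FLAW_TYPES = set(range(10))   # as before: kinds of error a false hypothesis might contain (illustrative)

def p_accept_false_h(perspectives):
    """Chance a flawed hypothesis escapes every critical viewpoint in the group (flaw type uniform over FLAW_TYPES)."""
    covered = set().union(*perspectives)
    return len(FLAW_TYPES - covered) / len(FLAW_TYPES)

# Three experts trained in the same tradition: their perspectives largely overlap (like X^B above).
same_school = [{0, 1, 2, 3}, {0, 1, 2, 4}, {0, 1, 3, 4}]
# Three experts drawing on different reasonable investigative perspectives (like X^A above).
mixed_group = [{0, 1, 2, 3}, {4, 5, 6}, {7, 8, 9}]

print(p_accept_false_h(mixed_group))   # 0.0 -> the diverse group covers every flaw type
print(p_accept_false_h(same_school))   # 0.5 -> flaw types 5 to 9 go unchecked
```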

Thus, critical triangulation gives us a reason to value groups of experts that cover a wider range of reasonable investigative perspectives that is exactly the same in form as our reason to value groups of more epistemically autonomous experts: they are less likely to accept hypotheses in error.Footnote 26 Another way of grounding the idea that groups of more independent experts are valuable is, therefore, to interpret greater independence between members of expert groups as having more different reasonable investigative perspectives.

How does this conclusion compare to the one for expert autonomy (2.2)? One difference is that the latter is simpler. It compares two groups with almost identical experts. The argument here requires the additional concept ‘reasonable investigative perspective’. This opens the door to two objections.

First, is the concept ‘reasonable investigative perspectives’ too ambiguous? Despite space for interpretation, many scientists and commentators on science often do assume that there exists something like a set of reasonable investigative perspectives for any given topic. Think, for example, of how a grant award committee may suggest that the method a proposal suggests is not appropriate for its topic, or that it leaves out an important perspective on its topic. Moreover, the idea that covering a range of elements in the set of reasonable investigative perspectives can help increase the reliability of collective expert testimony is coherent, even if there is dispute about what can be put in that set for any given topic.

Second, even if there were a fixed set of reasonable investigative perspectives for each topic, is it really possible to rank groups in terms of how many of those perspectives they contain? This is trickier. While they may not agree, experts on a given topic are likely to be able to suggest some reasonable investigative perspectives on that topic. But whether groups of experts can be compared based on whether they cover more of the reasonable investigative perspectives for a given topic is more doubtful.Footnote 27 Acknowledging this fact, however, does not undermine the value of my argument. There are cases in which comparing the reasonable investigative perspectives of groups is fairly straightforward. Most would agree, for example, that the trend of adding mathematicians to teams studying biological systems has led to valuable expansions in the reasonable investigative perspectives of those teams. When expert groups are comparable on the range of reasonable investigative perspectives they contain, my argument offers a reason to believe the groups with a wider range will be less likely to accept hypotheses in error.

4. Conclusion

It is commonly held that independence between experts is valuable. I have offered two ways of interpreting and justifying that idea. One interpretation of independence is as epistemic autonomy. Epistemic autonomy between experts in a group is valuable as a way of ensuring that the full cognitive diversity of the group is utilised. Critical triangulation offers a reason to believe that groups of experts that do this will be less likely to accept hypotheses in error. In articulating this position I correct an argument in favour of expert autonomy by Dellsén. My reasoning also suggests that other ways of encouraging the triangulation of critical viewpoints within groups of experts are also valuable. This opens the door to a second interpretation of independence as offering different perspectives. The same mechanism of critical triangulation implies that groups that contain experts with a greater range of the reasonable investigative perspectives on a topic will also be less likely to accept hypotheses on that topic in error.

These two interpretations of expert independence reinforce one another. Epistemic autonomy requires that experts should think for themselves. But if the benefit of this is that it ensures the full range of skills and knowledge of the group are utilised, then it makes sense to also ensure that the group covers a good range of reasonable investigative perspectives on the topic at hand. Vice versa, adding experts with new reasonable investigative perspectives to a group does not do much if those experts simply acquiesce to others. The lesson for how to compose expert groups then is to aim for a combination of relatively autonomous experts and experts that cover a wide range of the reasonable investigative perspectives for the topic at hand.Footnote 28, Footnote 29

Appendix 1 Weakening equal agreement on truth

In discussing potential objections to his argument, Dellsén suggests the plausibility of equal agreement on truth may be a concern. It seems unrealistic that two real groups of experts will be equally likely to agree on hypotheses when they are true. He therefore points out that a weaker assumption will also do. Rather than equality in assumption 1, all that is needed for (4) is to ensure that $P\left(\bigwedge_1^n X_i^A(H, E) \vert H\right)$ and $P\left(\bigwedge_1^n X_i^B(H, E) \vert H\right)$ do not unbalance the inequality in (3). Thus, rather than equal agreement on truth, Dellsén suggests that $P\left(\bigwedge_1^n X_i^A(H, E) \vert H\right)$ and $P\left(\bigwedge_1^n X_i^B(H, E) \vert H\right)$ can be constrained by assuming:Footnote 30

Assumption 3. High relative dependence

$$\frac{P\left(\bigwedge_1^n X_i^B(H, E)\right) / \prod_1^n P\left(X_i^B(H, E)\right)}{P\left(\bigwedge_1^n X_i^A(H, E)\right) / \prod_1^n P\left(X_i^A(H, E)\right)} > \frac{P\left(\bigwedge_1^n X_i^B(H, E) \vert H\right)}{P\left(\bigwedge_1^n X_i^A(H, E) \vert H\right)}$$

As Dellsén puts it, ‘the ratio of dependence in the more versus less autonomous group must be greater than the ratio of their corresponding likelihoods of agreeing on H, conditional on H being true’ (356). What does this assumption entail and what does it do to the result? Dellsén says:

[A] consensus among a more autonomous group of experts would be a more reliable guide to truth even when such a group is less likely to agree on the truth than a corresponding group of less autonomous experts, provided that the former's degree of autonomy exceeds that of the latter's to a sufficiently high degree. (356)

This is correct, but it does not show much. High relative dependence asserts that the relative dependence of those in $X^B$ compared to those in $X^A$ is higher than the relative likelihood of those in $X^B$ to agree on H given it is true compared to those in $X^A$. Given pairwise equal likelihood of acceptance (assumption 2), high relative dependence amounts to assuming that those in $X^B$ are more likely to agree than those in $X^A$ but that the degree to which they are more likely to agree is less when H is true.Footnote 31 As with equal agreement on truth, this amounts to an unjustified assumption about collective competence. The resulting comparison does not show that dependent groups of experts are less reliable in general, just that the dependent groups that are less reliable are less reliable.Footnote 32
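
To see this in numbers (my own illustrative check, using invented probabilities): in the first pair of groups below, taken from the toy construction in section 1, $X^B$ is matched on true agreement, its consensus is less reliable, and assumption 3 holds; in the second pair, $X^B$'s extra dependence instead shows up as more agreement on true hypotheses, its consensus is more reliable, and assumption 3 fails. High relative dependence thus selects for the dependent groups that are already the less reliable ones.

```python
# Assumption 3 compares the ratio of the groups' dependence measures with the ratio of their probabilities
# of agreeing when H is true. With pairwise equal likelihood of acceptance (per-expert probability 0.55),
# the ratio of dependence measures reduces to P(both accept)_B / P(both accept)_A. All numbers are illustrative.

def assumption_3_holds(p_agree_A, p_agree_B, p_agree_true_A, p_agree_true_B):
    return (p_agree_B / p_agree_A) > (p_agree_true_B / p_agree_true_A)

# Case 1: B matched on true agreement (P(both accept | H) = 0.64 for both); B's consensus is less reliable.
print(assumption_3_holds(0.365, 0.42, 0.64, 0.64))     # True  -> assumption 3 holds, and (4) follows
# Case 2: B's dependence shows up as more agreement on a true H (0.74 vs 0.64); B's consensus is more reliable.
print(assumption_3_holds(0.365, 0.415, 0.64, 0.74))    # False -> assumption 3 fails, and so does (4)
```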

The issue with equal agreement on truth is not a lack of plausibility. The problem, rather, is that it is an assumption about collective competence and not a ceteris paribus condition that holds individual competence equal. Adopting equal agreement on truth leads to an unfair contrast between more and less dependent groups because it assumes away, without justification, a potential benefit of dependence. This issue is not resolved by moving to high relative dependence.

Footnotes

1 For a range of other ways of approaching the epistemology of expertise see Anderson (2011); Collins and Evans (2002, 2007); Dellsén (2018); Goldman (2018); Lane (2014); Nguyen (2020b); Quast (2018); Shaw (2021); Singleton and Booth (2023); Whyte and Crease (2010).

2 This need not entail non-summativism about the epistemic properties of groups – it is possible for groups to have no epistemic properties above and beyond those of their members while the way that those properties are aggregated is epistemically significant. See Kallestrup (2020) and Pino (2021) for non-summative and summative views on group epistemic properties.

3 By showing why Dellsén's result is wrong and clarifying the value of expert autonomy, this also offers a better basis for discussing what it is about epistemic autonomy (in general) that is socially beneficial (Dellsén 2021b).

4 Given that my concern is how to combine the advice of many experts, I take the existence of expertise as given. What I have to say is compatible with multiple conceptions of what it means to be an expert (Collins and Evans 2002; Goldman 2001, 2018) and various criteria for trustworthy expert advice (Holst and Molander 2017; Irzik and Kurtulmus 2019; Oreskes 2019).

5 Although dissenting opinions are sometimes offered, this is not the norm.

6 See Dellsén (2021a) on the former and Holst and Molander (2018) on the latter.

8 Dellsén (2020) aims to show that expert autonomy is both compatible with the idea that rational agents should trust the testimony of one another (Zagzebski 2007), and a good thing to aim for. My aim is to assess what we can infer about the conclusions of groups of experts with different properties. It is the second part of Dellsén's argument that links these two issues. It is this that I will focus on. Moreover, the second part of the argument (that epistemic autonomy is valuable) is key to Dellsén's whole position in (2020) and what he builds on in (2021b).

9 Based on the fact that the $y_i$ are independent when $P\left(\bigwedge_1^n y_i\right) = \prod_1^n P(y_i)$ and positively dependent when $P\left(\bigwedge_1^n y_i\right) > \prod_1^n P(y_i)$. Thus, $\frac{P\left(\bigwedge_1^n y_i\right)}{\prod_1^n P(y_i)}$ gives a graded notion of positive dependence, where $\frac{P\left(\bigwedge_1^n y_i\right)}{\prod_1^n P(y_i)} = 1$ implies independence and $\frac{P\left(\bigwedge_1^n y_i\right)}{\prod_1^n P(y_i)} > \frac{P\left(\bigwedge_1^n z_i\right)}{\prod_1^n P(z_i)}$ implies that the $y_i$ are more dependent on each other than the $z_i$.

10 In light of assumption 2, (1) becomes:

(2)$$P\left(\bigwedge_1^n X_i^A(H, E)\right) < P\left(\bigwedge_1^n X_i^B(H, E)\right)$$

Taking the reciprocal of both sides of this inequality (which reverses its direction), multiplying both sides by P(H), multiplying both sides by the equal probabilities in assumption 1 and assuming that none of the probabilities involved are equal to 0, gives:

(3)$$\frac{P(H)\,P\left(\bigwedge_1^n X_i^A(H, E) \vert H\right)}{P\left(\bigwedge_1^n X_i^A(H, E)\right)} > \frac{P(H)\,P\left(\bigwedge_1^n X_i^B(H, E) \vert H\right)}{P\left(\bigwedge_1^n X_i^B(H, E)\right)}$$

(4) then follows by Bayes's Theorem.

11 Given pairwise equal likelihood of acceptance and the greater dependence of $X^B$, we have (2). Taking the reciprocal of both sides of (2) and then multiplying both sides by $P(\neg H)$ and the equal probabilities in equal agreement on falsity gives us $P\left(\neg H \vert \bigwedge_1^n X_i^A(H, E)\right) > P\left(\neg H \vert \bigwedge_1^n X_i^B(H, E)\right)$ (via Bayes's theorem). Multiplying both sides by −1 and adding 1 to both sides gives us the reverse of (4).

12 This is because, if P(H) ≠ 0, then the decomposition of conditional probability means that equal agreement on truth is equivalent to $P( {\bigwedge_1^n X_i^A ( {H, \;E} ) \wedge H} ) = P( {\bigwedge_1^n X_i^B ( {H, \;E} ) \wedge H} )$. Thus, the left-hand terms of both sides of the inequality in (6) cancel and leave $P( {\bigwedge_1^n X_i^A ( {H, \;E} ) \wedge \neg H} ) < P( {\bigwedge_1^n X_i^B ( {H, \;E} ) \wedge \neg H} )$.

13 If the goal is to ensure that neither group had more relevant expertise for H, then more carefully targeted assumptions that do not obscure the potential benefits of dependence are available. The groups could have, for example, been assumed to be pairwise equally likely to accept H when H is true [$P\left(X_i^A(H, E) \vert H\right) = P\left(X_i^B(H, E) \vert H\right), \forall i \in [1, n]$] and/or pairwise equally reliable [$P\left(H \vert X_i^A(H, E)\right) = P\left(H \vert X_i^B(H, E)\right), \forall i \in [1, n]$]. Dellsén's result could not have been derived if either or both of these assumptions were adopted instead of equal agreement on truth. This should be taken as a sign that the extra constraints in equal agreement on truth have a significant effect on the comparison. When discussing another potential objection to his argument, Dellsén (2020) does suggest a weaker assumption than equal agreement on truth. Because the issue with that assumption is broadly the same in character as the issue with equal agreement on truth, my objection to it is in Appendix 1.

14 When this was put to him, Dellsén (personal communication) stressed that his comparison is only intended to apply when both groups already agree on H. The point is to compare the probability of H being true given that the experts in X A and X B already all accept H – i.e. to compare $P( {H\vert \bigwedge_1^n X_i^A ( {H, \;E} ) } )$ and $P( {H\vert \bigwedge_1^n X_i^B ( {H, \;E} ) } )$. The benefit of dependence – that it might hasten correct agreement – does not apply, he argues, as $\bigwedge _1^n X_i^A ( {H, \;E} )$ and $\bigwedge _1^n X_i^B ( {H, \;E} )$ are already assumed. But this misses that the benefits of dependence assumed away by equal agreement on truth do not just impact cases in which both groups have not yet all agreed. Via (6), equal agreement on truth also assumes greater unreliability of X B. Because X B are more likely to agree but it is assumed that they are only equally likely to agree when H is true, they must be more likely to agree on falsehoods. This is highlighted by the fact that assuming equal agreement on falsity rather than truth would still reverse the result even if we focus just on cases in which $\bigwedge _1^n X_i^A ( {H, \;E} )$ and $\bigwedge _1^n X_i^B ( {H, \;E} )$ are fixed.

15 Dellsén builds on the result in (2021b) and (2021a), and Matheson (2021, 2024) and Nguyen (2020a) nod to the result in related arguments.

16 This is a special case of a general point – autonomy need not entail the complete absence of interaction. By depending on others for their intellectual development and by engaging with others, agents may increase their capacity for epistemic autonomy (Grasswick 2018; Matheson 2024). More generally, if an autonomous person is one who determines the course of their own life (Raz 1986), some interaction with others may give them more options in how to do that (Matheson 2021).

17 That is, Iris and Jill can be completely epistemically autonomous without it being the case that $P\left(X_{Iris}(H, E) \wedge X_{Jill}(H, E)\right) = P\left(X_{Iris}(H, E)\right)P\left(X_{Jill}(H, E)\right)$.

18 Repeatedly checking the same evidence (or repeating the same experiment) may reveal errors, but that is different to consilience.

19 She defers to Jill rather than deciding herself (McGrath 2009).

20 To minimise confusion, I distinguish between ‘critical viewpoints’ and ‘perspectives’. The abstract structure of both terms is the same (different ways of looking at something), but I reserve ‘critical viewpoints’ for different ways of evaluating a given proposition and ‘perspectives’ for the different collections of skills and knowledge that give rise to ‘critical viewpoints’.

21 Critical triangulation is partly inspired by Mill's argument for the transformative power of criticism (1859), expanded and further developed by Longino (1990, 2002). I focus on how the triangulation of perspectives can reveal errors in a hypothesis and not on the significance of critical debate and interaction between agents. This offers a way of applying ideas about cognitive diversity to practical questions about how groups of experts should be formed (see e.g. Kitcher 1993; Hong and Page 2004; Weisberg and Muldoon 2009; Muldoon 2013; Grim et al. 2018; Rolin 2019; Wright 2023).

22 Although (a) and (b) are additional to Dellsén's original argument, they do not significantly alter the scope of the conclusion. Condition (a) is only necessary to keep the inequality in (7) strict, and so can be dropped with only minor changes to the conclusion. Condition (b) can be seen as a different way of holding the individual levels of expertise constant between the two groups (as opposed to pairwise equal likelihood of acceptance). An alternative to (b) would be to imagine the two groups to be made up of the same experts and only differ in their relative degrees of epistemic autonomy.

23 (4) is equivalent to $\frac{P\left( \bigwedge_1^n X_i^A ( H, \;E ) \wedge \neg H \right) }{P\left( \bigwedge_1^n X_i^A ( H, \;E ) \right)} < \frac{P\left( \bigwedge_1^n X_i^B ( H, \;E ) \wedge \neg H \right)}{P\left( \bigwedge_1^n X_i^B ( H, \;E ) \right) }$, which implies (7) when $P\left( {\bigwedge_1^n X_i^B ( {H, \;E} ) } \right) > P\left( {\bigwedge_1^n X_i^A ( {H, \;E} ) } \right)$. But (4) cannot be derived from (7) without further conditions (see below).

24 Formally this means that:

$${{P\left( {\bigwedge_1^n X_i^B ( {H, \;E} ) } \right) } \over {P\left( {\bigwedge_1^n X_i^A ( {H, \;E} ) } \right) }} < {{P\left( {\bigwedge_1^n X_i^B ( {H, \;E} ) \wedge \neg H} \right) } \over {P\left( {\bigwedge_1^n X_i^A ( {H, \;E} ) \wedge \neg H} \right) }}$$

which can be rearranged into:

$${{P( {\neg H} ) P\left( {\bigwedge_1^n X_i^A ( {H, \;E} ) \vert \neg H} \right) } \over {P\left( {\bigwedge_1^n X_i^A ( {H, \;E} ) } \right) }} < {{P( {\neg H} ) P\left( {\bigwedge_1^n X_i^B ( {H, \;E} ) \vert \neg H} \right) } \over {P\left( {\bigwedge_1^n X_i^B ( {H, \;E} ) } \right) }}$$

Multiplying this by −1 and adding 1 to both sides then gives (4).

25 This means that:

$$P\big ({\mathop \bigwedge \limits_i^n X_i^A ( {H, \;E} ) } \big )-P \big ({\mathop \bigwedge \limits_i^n X_i^A ( {H, \;E} ) \wedge \neg H} \big )\ge P \big ({\mathop \bigwedge \limits_i^n X_i^B ( {H, \;E} ) } \big )-P \big ({\mathop \bigwedge \limits_i^n X_i^B ( {H, \;E} ) \wedge \neg H} \big )$$

which is equivalent to $P\left( {\bigwedge_i^n X_i^A ( {H, \;E} ) \wedge H} \right) \ge P\left( {\bigwedge_i^n X_i^B ( {H, \;E} ) \wedge H} \right)$. Because we are assuming that $P\left( {\bigwedge_i^n X_i^B ( {H, \;E} ) } \right) > P\left( {\bigwedge_i^n X_i^A ( {H, \;E} ) } \right)$ – if it did not then (7) would imply (4) directly – this implies that:

$${{P\left( {\bigwedge_i^n X_i^A ( {H, \;E} ) \wedge H} \right) } \over {P\left( {\bigwedge_i^n X_i^A ( {H, \;E} ) } \right) }} > {{P\left( {\bigwedge_i^n X_i^B ( {H, \;E} ) \wedge H} \right) } \over {P\left( {\bigwedge_i^n X_i^B ( {H, \;E} ) } \right) }}$$

which is equivalent to (3) and thus (4).

26 As for epistemic autonomy, this can be strengthened. If a greater range of perspectives decreases the chances of false agreement at a greater rate than it decreases the chances of agreement, then (4) follows. Condition (4) also follows if the difference between the probability that the group with more perspectives will agree and the probability that they will falsely agree is larger than or equal to the same difference for the group with fewer perspectives. Both of these conditions might be justified in the same way as for more epistemically autonomous groups – by assuming a veritistic understanding of expertise, meaning that the perspectives the experts bring track truth better than average in relation to H.

27 Consider our example of advice on how a policy will impact unemployment in the long term, again. Is it really the case that three macroeconomists from the Fed that were trained at different graduate schools and that work with different models offer a less suitable range of the reasonable investigative perspectives than one of those economists along with an experimental economist and a quantitative sociologist?

28 There is a caveat to this lesson. Critical triangulation gives us reason to value groups of experts that are more epistemically autonomous and that cover a wider range of relevant perspectives, but these properties do not trump all. It is likely, for example, that increasing epistemic autonomy and the range of perspectives a group covers will lower the likelihood of a group being able to find consensus. If the goal is to ascertain as certainly as possible whether a hypothesis is true, this is not an issue. But if the goal is to bring together expertise to communicate the existing knowledge on a topic to policy makers, this may be more problematic.

29 Thanks to Anna Alexandrova, Christopher Clarke, Finnur Dellsén, Torbjørn Gundersen and Ida Sognnæs for their invaluable comments on drafts of this paper. Thanks also to the audiences of the University of Oslo's Centre for Philosophy and the Sciences (CPS) and Socratic evening seminars for their feedback on earlier versions of the argument.

30 The result follows from this assumption in combination with pairwise equal likelihood of acceptance. All that is needed is to multiply the top and bottom of the left-hand side of assumption 3 by $\prod _1^n P( {X_i^A ( {H, \;E} ) } )$ or $\prod _1^n P( {X_i^B ( {H, \;E} ) } )$ (which are equal, by assumption 2), then divide both sides by $P\left( {\bigwedge_1^n X_i^B ( {H, \;E} ) } \right)$ and multiply both sides by $P\left( {\bigwedge_1^n X_i^A ( {H, \;E} ) \vert H} \right)$ and P(H).

31 This is a substantive claim that may be justifiable – indeed my idea of critical triangulation is a way of justifying something like this claim. The problem is that Dellsén gives no justification and instead suggests it is a mild formal assumption.

32 Thinking of the issue in terms of the probability of conjoined events again highlights this point. Given pairwise equal likelihood of acceptance, high relative dependence is equivalent to:

(8)$${{P\left({\bigwedge_1^n X_i^A ( {H, \;E} ) \vert H} \right) } \over {P\left({\bigwedge_1^n X_i^A ( {H, \;E} ) } \right) }} > {{P\left( {\bigwedge_1^n X_i^B ( {H, \;E} ) \vert H} \right) } \over {P\left( {\bigwedge_1^n X_i^B ( {H, \;E} ) } \right) }}$$

By the definition of conditional probability, multiplying both sides of this inequality by P(H) gives:

(9)$${{P\left( {\bigwedge_1^n X_i^A ( {H, \;E} ) \wedge H} \right) } \over {P\left( {\bigwedge_1^n X_i^A ( {H, \;E} ) } \right) }} > {{P\left( {\bigwedge_1^n X_i^B ( {H, \;E} ) \wedge H} \right) } \over {P\left( {\bigwedge_1^n X_i^B ( {H, \;E} ) }\right) }}$$

Decomposing the denominator on each side into the mutually exclusive and exhaustive cases where H is true and where H is false, taking the reciprocal of both sides (which reverses the direction of the inequality), and subtracting 1 from each side gives:

(10)$${{P\left( {\bigwedge_1^n X_i^B ( {H, \;E} ) \wedge \neg H} \right) } \over {P\left( {\bigwedge_1^n X_i^B ( {H, \;E} ) \wedge H} \right) }} > {{P\left( {\bigwedge_1^n X_i^A ( {H, \;E} ) \wedge \neg H} \right) } \over {P\left( {\bigwedge_1^n X_i^A ( {H, \;E} ) \wedge H} \right) }}$$

This is the result of assumptions 2 and 3 alone. Dellsén is thus assuming that the ratio of the probability that the experts will agree on H while H is false to the probability that they will agree on H while H is true is greater for the $X^B$ experts than for the $X^A$ experts.
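As a purely illustrative sanity check of this algebra (a minimal sketch of my own in Python, with arbitrary numbers standing in for the joint probabilities; nothing here is drawn from Dellsén), the following confirms numerically that (9) holds exactly when (10) does:

import random

def nine_iff_ten(trials: int = 100_000, seed: int = 0) -> bool:
    """Check numerically that inequality (9) holds iff inequality (10) holds.

    a_h  stands in for P(all of group A accept H, and H is true);
    a_nh stands in for P(all of group A accept H, and H is false);
    b_h and b_nh are the analogous joint probabilities for group B.
    All four values are arbitrary positive numbers, not taken from the paper.
    """
    rng = random.Random(seed)
    for _ in range(trials):
        a_h, a_nh, b_h, b_nh = (rng.uniform(0.001, 1.0) for _ in range(4))
        # (9): the A-ratio exceeds the B-ratio, with each denominator
        # decomposed into the H and not-H cases.
        ineq_9 = a_h / (a_h + a_nh) > b_h / (b_h + b_nh)
        # (10): the false-to-true agreement ratio is larger for B than for A.
        ineq_10 = b_nh / b_h > a_nh / a_h
        if ineq_9 != ineq_10:
            return False
    return True

print(nine_iff_ten())  # prints True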

References

Anderson, E. (2011). ‘Democracy, Public Policy, and Lay Assessments of Scientific Testimony.’ Episteme 8(2), 144–64.
Coady, C.A. (2002). ‘Testimony and Intellectual Autonomy.’ Studies in History and Philosophy of Science Part A 33(2), 355–72.
Collins, H. and Evans, R. (2002). ‘The Third Wave of Science Studies: Studies of Expertise and Experience.’ Social Studies of Science 32(2), 235–96.
Collins, H. and Evans, R. (2007). Rethinking Expertise. Chicago: University of Chicago Press.
Dellsén, F. (2018). ‘When Expert Disagreement Supports the Consensus.’ Australasian Journal of Philosophy 96(1), 142–56.
Dellsén, F. (2020). ‘The Epistemic Value of Expert Autonomy.’ Philosophy and Phenomenological Research 100(2), 344–61.
Dellsén, F. (2021a). ‘Consensus Versus Unanimity: Which Carries More Weight?’ The British Journal for the Philosophy of Science.
Dellsén, F. (2021b). ‘We Owe it to Others to Think for Ourselves.’ In Matheson, J. and Lougheed, K. (eds), Epistemic Autonomy, pp. 306–22. New York: Routledge.
Fricker, E. (2006). ‘Testimony and Epistemic Autonomy.’ In Lackey, J. and Sosa, E. (eds), The Epistemology of Testimony, pp. 225–50. Oxford: Oxford University Press.
Goldberg, S. (2013). ‘Epistemic Dependence in Testimonial Belief, in the Classroom and Beyond.’ In Kotzee, B. (ed.), Education and the Growth of Knowledge: Perspectives from Social and Virtue Epistemology, pp. 14–35. Malden, MA: Wiley-Blackwell.
Goldman, A.I. (2001). ‘Experts: Which Ones Should You Trust?’ Philosophy and Phenomenological Research 63(1), 85–110.
Goldman, A.I. (2018). ‘Expertise.’ Topoi 37(1), 3–10.
Grasswick, H. (2018). ‘Epistemic Autonomy in a Social World of Knowing.’ In Battaly, H. (ed.), The Routledge Handbook of Virtue Epistemology, pp. 196–208. London: Routledge.
Grim, P., Singer, D.J., Bramson, A., Holman, B., McGeehan, S. and Berger, W.J. (2018). ‘Diversity, Ability, and Expertise in Epistemic Communities.’ Philosophy of Science 86(1), 98–123.
Gundersen, T. and Holst, C. (2022). ‘Science Advice in an Environment of Trust: Trusted, but Not Trustworthy?’ Social Epistemology 36(5), 629–40.
Holst, C. and Molander, A. (2017). ‘Public Deliberation and the Fact of Expertise: Making Experts Accountable.’ Social Epistemology 31(3), 235–50.
Holst, C. and Molander, A. (2018). ‘Asymmetry, Disagreement and Biases: Epistemic Worries about Expertise.’ Social Epistemology 32(6), 358–71. doi: 10.1080/02691728.2018.1546348
Hong, L. and Page, S.E. (2004). ‘Groups of Diverse Problem Solvers Can Outperform Groups of High-Ability Problem Solvers.’ Proceedings of the National Academy of Sciences 101(46), 16385–89.
Irzik, G. and Kurtulmus, F. (2019). ‘What is Epistemic Public Trust in Science?’ The British Journal for the Philosophy of Science 70(4), 1145–66.
Kallestrup, J. (2020). ‘Group Virtue Epistemology.’ Synthese 197, 5233–51.
Kitcher, P. (1993). The Advancement of Science: Science without Legend, Objectivity without Illusions. New York: Oxford University Press.
Lane, M. (2014). ‘When the Experts are Uncertain: Scientific Knowledge and the Ethics of Democratic Judgment.’ Episteme 11(1), 97–118.
Longino, H.E. (1990). Science as Social Knowledge: Values and Objectivity in Scientific Inquiry. Princeton: Princeton University Press.
Longino, H.E. (2002). The Fate of Knowledge. Princeton: Princeton University Press.
Matheson, J. (2021). ‘The Virtue of Epistemic Autonomy.’ In Matheson, J. and Lougheed, K. (eds), Epistemic Autonomy, pp. 173–94. New York: Routledge.
Matheson, J. (2024). ‘Why Think for Yourself?’ Episteme 21(1), 320–38.
Matheson, J. and Lougheed, K. (2021). Epistemic Autonomy. New York: Routledge.
McGrath, S. (2009). ‘The Puzzle of Pure Moral Deference.’ Philosophical Perspectives 23, 321–44.
Mill, J.S. (1859). On Liberty. London: John W. Parker & Son.
Moore, A. (2017). Critical Elitism: Deliberation, Democracy, and the Problem of Expertise. Cambridge: Cambridge University Press.
Muldoon, R. (2013). ‘Diversity and the Division of Cognitive Labor.’ Philosophy Compass 8(2), 117–25.
Nguyen, C.T. (2020a). ‘Autonomy and Aesthetic Engagement.’ Mind 129(516), 1127–56.
Nguyen, C.T. (2020b). ‘Cognitive Islands and Runaway Echo Chambers: Problems for Epistemic Dependence on Experts.’ Synthese 197(7), 2803–21.
Oreskes, N. (2019). Why Trust Science? Princeton: Princeton University Press.
Oreskes, N. and Conway, E.M. (2010). Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. New York: Bloomsbury Press.
Pino, D. (2021). ‘Group (Epistemic) Competence.’ Synthese 199(3–4), 11377–96.
Pritchard, D. (2016). ‘Seeing it for Oneself: Perceptual Knowledge, Understanding, and Intellectual Autonomy.’ Episteme 13(1), 29–42.
Quast, C. (2018). ‘Expertise: A Practical Explication.’ Topoi 37, 11–27.
Raz, J. (1986). The Morality of Freedom. Oxford: Clarendon Press.
Rolin, K. (2019). ‘The Epistemic Significance of Diversity.’ In Fricker, M., Graham, P.J., Henderson, D., Pedersen, N.J. and Wyatt, J. (eds), The Routledge Handbook of Social Epistemology, pp. 158–66. London: Routledge.
Shaw, J. (2021). ‘Feyerabend and Manufactured Disagreement: Reflections on Expertise, Consensus, and Science Policy.’ Synthese 198(Suppl 25), 6053–84.
Singleton, J. and Booth, R. (2023). ‘Expertise and Information: An Epistemic Logic Perspective.’ Synthese 201(2), 64.
Weisberg, M. and Muldoon, R. (2009). ‘Epistemic Landscapes and the Division of Cognitive Labor.’ Philosophy of Science 76(2), 225–52.
Whewell, W. (1858). Novum Organon Renovatum: The Second Part of the Philosophy of the Inductive Sciences. London: Parker.
Whyte, K.P. and Crease, R.P. (2010). ‘Trust, Expertise, and the Philosophy of Science.’ Synthese 177(3), 411–25.
Wood, E.J. (2003). Insurgent Collective Action and Civil War in El Salvador. Cambridge: Cambridge University Press.
Wright, J. (2023). ‘The Hierarchy in Economics and its Implications.’ Economics and Philosophy, 1–22.
Zagzebski, L. (2007). ‘Ethical and Epistemic Egoism and the Ideal of Autonomy.’ Episteme 4(3), 252–63.