
Cascading expert failure

Published online by Cambridge University Press:  25 July 2022

Jon Murphy*
Affiliation:
Western Carolina University, Cullowhee, North Carolina, USA
*
Corresponding author. Email: [email protected]

Abstract

Recent research has shown how experts may fail in their duty as advisors by providing advice that leads to a worse outcome than that anticipated by the user of expert opinion. However, those models have focused on the immediate effects of the failure on experts and nonexperts. Using a cascading network failure model, I show how expert failure can cascade throughout multiple sectors, even those not necessarily purchasing the expert opinion. Consequently, even relatively small failures end up having outsized aggregate effects. To provide evidence of my theory, I look at two case studies of COVID expert advice to show how one seemingly minor failure ended up contributing to the pandemic. I conclude with a discussion on institutional frameworks that can prevent such cascades.

Type
Research Article
Copyright
Copyright © The Author(s), 2022. Published by Cambridge University Press on behalf of Millennium Economics Ltd.

1. Introduction

Over the past decade, research has begun to explore how relatively minor microeconomic changes can have substantial effects beyond just the market in question (Acemoglu et al., 2012; Baqaee, 2018; Foerster et al., 2011; Taschereau-Dumouchel, 2020). This literature finds that small failures (e.g. firms exiting the market, over- or under-supply in the market) do not necessarily average out throughout the economy, as the Law of Large Numbers would suggest. Given different market structures, the negative effects can ripple through markets (Baqaee, 2018). Both the connectivity of producers and the structure of the market matter for this effect to appear.

However, little attention has been paid in this literature to cascading effects in the production of expert opinion. Specifically, do cascades remain locked within the area where they occur, or do they spill over into other areas? In the early days of the COVID-19 pandemic, we saw the influence of public health expertise show up in fields far removed from public health, but public health officials cared little for the insights of experts in those fields. I explore the effects of these choices by experts by combining the cascading failures literature (Banerjee, 1992; Baqaee, 2018; Bikhchandani et al., 1992; Wu, 2015) with recent work on the production of expert opinion (Gentzkow and Kamenica, 2017a, 2017b; Koppl, 2018, 2021; Koppl and Murphy, 2022; Murphy et al., 2021) to show that seemingly small expert failures can have cascading effects on the decisions of unrelated actors, leading to large adverse effects.

Following Koppl (2018), I define an expert as one paid for their opinion. Consequently, the nonexpert is the purchaser of expert opinion. This definition places the expert and nonexpert into a contractual relationship in the same way a market exchange between a producer and a consumer is a contractual relationship. Expert opinion is the commodity being exchanged. Defining the expert as one who is paid for their opinion helps us sidestep questions of reliability. We are not bogged down in who qualifies as an expert in this or that field; who qualifies as an expert is endogenous. By commodifying expert opinion, we can bring to bear the analytical tools that have served economists well in law, political economy, and other fields. ‘Failure’ takes on a specific meaning in the market for experts: expert failure occurs when the expert's advice leads to a worse situation than expected by the individual purchasing the expert's opinion (Koppl, 2018).

Commodifying expert advice also helps distinguish the theory of expert failure from the theories of bureaucracy (Tullock, 2005a), hierarchy (Miller, 1992), and public choice (Buchanan and Tullock, 1999). Whereas those theories focus on the operations of an individual within a bureaucratic or government system, the theory of expert failure focuses on experts qua experts. An expert may operate as an advisor to the government or even as an employee of a government agency, but the role of the individual is different. There is a kinship between the fields (Murphy et al., 2021), but expert failure is distinct.

The rest of the paper is as follows. Section 2 briefly discusses the literature of experts. Section 3 develops a theory of cascading expert failure. Section 4 discusses institutional arrangements that contribute to cascading expert failure. Section 5 provides two case studies on expert failure in the early days of the COVID-19 pandemic in the United States. Section 6 discusses how to prevent cascading failure. Section 7 concludes.

2. Literature review

Koppl (2018) dates the literature on experts and expertise as beginning with Socrates's Apology (Xenophon, 2013). Socrates argued that experts should be obeyed in their areas of expertise as they are the ‘wisest authorities’ within those bounds. A broad literature review would be impossible given this ancient line of inquiry. The topic has arisen in fields as different as philosophy (Mannheim, 1936), science and technology (Turner, 2001), sociology (Berger and Luckmann, 1966), law (Block et al., 2000; Hand, 1901; Lind et al., 1973), and economics (Andreoni and Mylovanov, 2012; Gentzkow and Kamenica, 2017a, 2017b; Koppl, 2018; Milgrom and Roberts, 1986; Tullock, 2005b).

In economics, much of the attention on experts and expertise is geared toward producing and disseminating information from the expert to the nonexpert. Generally, the nonexpert calls in the expert to help overcome informational issues: the nonexpert does not have enough information to act correctly, is aware of that limitation, and knows that the information is costly to obtain and analyze. The nonexpert seeks the advice of an expert (or experts) to help them decide (Dewatripont and Tirole, 1999). However, research has shown that experts face an incentive to conceal certain information if the nonexpert pays them for their advice. If information is detrimental to the expert's cause, they may not reveal that information to the nonexpert (ibid.). Alternatively, the expert may tailor their advice to what the nonexpert wants to hear if the nonexpert is sufficiently large in the marketplace for opinion (Koppl, 2002).

One way to increase the information available to nonexperts is to increase the number of competing experts in the marketplace. By placing experts with differing interests into a dialogue with one another, more information is revealed. Both experts want to ‘win’ the business of the nonexpert, and thus have the incentive to reveal any information that would support their case or harm the other expert's case (Milgrom and Roberts, 1986). In equilibrium, all information is revealed. Further, while Milgrom and Roberts (ibid.) build their model without transaction costs, additional research shows that such costs do not necessarily affect information revelation (Froeb and Kobayashi, 1996). Similarly, even if the nonexpert is biased toward a certain outcome, competition can lead to full revelation (Froeb and Kobayashi, 1993; Shin, 1998).

Even with increased competition among experts, the structure of competition matters. Gentzkow and Kamenica (2017b) construct a game-theoretic model showing that competition among experts may reveal no information if the situation is a Prisoner's Dilemma and the experts cannot reveal information about their competitors. Koppl and Murphy (2022) explore organizational structures and management strategies that can increase or hinder information revelation.

Koppl (2018) provides the most detailed explication of the broad phenomenon of expert failure, although concerns about expertise are older. Adam Smith warned of the dangers of overreaching expertise in his classroom lectures (Smith, 1982) and published work (Smith, [1776] 1981). The economic literature on failure focuses on the incentive and institutional structures that can cause expert failure (Koppl, 2021; Murphy et al., 2021) and on how receptive experts are to disconfirming information (Andreoni and Mylovanov, 2012; Kang and Kim, 2021). Knowledge issues also arise regarding how well the expert can advise the nonexpert (Hayek, 1945; Lavoie, 2016). Organizational psychology has focused on how the ways in which experts signal their trustworthiness may lead them to fail (Radzevick and Moore, 2011).

There are works on informational cascades that parallel the argument in this paper. Bikhchandani et al. (1992) and Banerjee (1992) both develop herd behavior models in which individuals in a decision chain take the actions of previous individuals as informational inputs into their own decision-making process. At a certain point, the individual's choice relies entirely on the actions taken by previous individuals, and no private information enters the choice. Whereas Banerjee (ibid.) focuses mainly on herd behavior leading to cascades, Bikhchandani et al. (1992) show how fragile cascades can be. Both models rely on first-movers conveying information to laymen; in the Bikhchandani, Hirshleifer, and Welch model, certain ‘fashion leaders’ with higher signal accuracy serve as those first-movers. Wu (2015) expands the model to include both experts, who have high-quality signals, and laymen, who have lower-quality signals. My model complements theirs but differs in that I discuss how one expert's actions can lower the signal accuracy of another expert elsewhere in the decision-making chain.

Earl et al. (2007) also work in the tradition of cascades, specifically the cascading of decision rules about how to interpret the data we gather. They show how decision rules made by experts in financial markets degrade over time as a rule is passed from one person to another. When the decision rule is made, it fits a certain context. However, there are non-trivial time lags, and by the time nonexperts adopt the rule, the context has likely changed. Consequently, a game of Telephone can ensue as key qualifications get lost (ibid., pp. 356–358). Decision rules get flattened and become less effective at helping to formulate optimal decisions. My model is parallel, although I am less concerned with decision rules (i.e. how individuals should interpret data) than with how the decision rules of experts affect the informational inputs other experts use in the production of their advice.

I aim to fill two gaps in this literature. First, I show how failures cascade not only within fields but beyond them. Second, I address how cascades can perpetuate or end. Experts can become ‘siloed’ within their fields and thus be unaware of how to interpret some of the information they use in the opinion-formation process. This siloing then causes them to repeatedly offer the same advice, even though it is failing to achieve the desired goals. Experts may not be the high-accuracy, high-signal individuals they are assumed to be.

3. Cascading expert failure

3.1 The basic model

Cascading expert failure occurs when one expert failure leads to other failures removed from the original transaction. Just as a single snowball may cascade into an avalanche of destruction, so too might a single failure cascade into multiple and multiplying failures.

Following the literature on cascading production failures (Baqaee, 2018; Taschereau-Dumouchel, 2020), I aim to model cascading expert failure as a network problem.

Echoing macroeconomic analyses that argue microeconomic failures will average out at the macroeconomic level (Lucas, 1977), one might argue that expert failure, when sufficiently diffuse, would average out in the aggregate. However, if we consider the economy as a network of inputs and outputs, then the shape of the network matters for whether shocks get averaged out. As Acemoglu et al. (2012) discuss, interconnections between firms and sectors act as a propagation mechanism for idiosyncratic shocks throughout the economy when the input–output networks are not symmetrical. Even relatively small shocks can amplify as they cascade through the network (Baqaee, 2018).

Lucas-style reasoning would apply in symmetrical production networks, such as those represented in Figure 1. In a symmetrical network, each actor relies equally on every other actor in the network as both producer and consumer. In Figure 1(a), each network consists of a single sector (or node) that both produces and consumes its own output, as indicated by the curved arrow; the network is therefore symmetrical. Since each sector is independent of the others, shocks in one sector would not spread to the others.

Figure 1. Representations of two production networks. Each circle represents a node of a sector or producer. Each arrow represents the direction output flows. (a) A production network where no producer relies on another for input. Each producer is entirely self-sufficient. (b) A production network where each producer relies equally on the other. Each producer uses input and sells output to each other producer.

Source: Acemoglu et al. (2012).

Figure 1(b) also represents a symmetrical network, despite its interconnected sectors (see footnote 1). Each sector relies on every other sector equally, and thus there is symmetry in the network. In Figure 1(b), the argument that diversification would cause failures to net out applies. According to the Law of Large Numbers, any shock to the individual sectors would average out rapidly, at a rate of $\sqrt n$, where n is the number of sectors in an economy (Acemoglu et al., 2012). If, for example, sector 1 represented a producer who underproduced a needed good, this error would be counteracted as the other sectors, 2 through n, adjusted their own production and consumption to make up for the error.
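A short simulation illustrates this averaging. The minimal Python sketch below is not part of Acemoglu et al.'s (2012) formal model; the shock distribution and sector counts are assumptions chosen purely for demonstration. It shows the volatility of the average of n independent sector shocks shrinking in proportion to $1/\sqrt{n}$, which is the 'averaging out at rate $\sqrt{n}$' referred to above.

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate_volatility(n_sectors, n_draws=5_000, shock_sd=1.0):
    """Std. dev. of the average shock across n independent, equally weighted sectors."""
    shocks = rng.normal(0.0, shock_sd, size=(n_draws, n_sectors))
    return shocks.mean(axis=1).std()

for n in (1, 10, 100, 1000):
    vol = aggregate_volatility(n)
    print(f"n = {n:5d}  aggregate volatility = {vol:.4f}  (1/sqrt(n) = {1/np.sqrt(n):.4f})")
```

Under these assumed numbers, moving from 1 sector to 1,000 symmetric sectors shrinks aggregate volatility by a factor of roughly thirty, which is the diversification logic at work.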

Figure 2 is a representation of an asymmetric production network, in which not all sectors rely equally on each other. A single sector may dominate production, such as sector 1 in Figure 2. If a single sector (sector 1) supplies multiple other sectors, even small changes in that sector would not necessarily average out in the aggregate. Alternatively, we could imagine a situation similar to Figure 1(b), but where one or two sectors have significant control over output. In this case, a failure of one of those major sectors would not average out, as other firms could not necessarily pick up the slack (Baqaee, 2018). Thus far, the network analysis only partially gets us to cascading failures. Figure 2 shows how a relatively minor failure will not necessarily dissipate as its effects move through the economy. If those sectors (2 through n in Figure 2) were effectively segregated from the larger economy, then the effects of the failure would remain contained to those sectors (Acemoglu et al., 2012); we would not have cascading failure. However, if those sectors were themselves producers, then the failure of sector 1 could cascade throughout the economy. Figure 3 demonstrates a network where cascades are possible.

Figure 2. A production network where one producer is the sole supplier to all other producers. Each circle represents a producer/sector. Each arrow indicates output flow. Sector 1 supplies sectors 2 through n.

Source: Acemoglu et al. (2012).

Figure 3. A production network with a single shared supplier. Each circle represents a producer/sector. Each arrow indicates output flow.

Source: Acemoglu et al. (2012).

In Figure 3, there is a sole shared supplier, sector 1, for sectors 2 through n. Sectors 2 through n are also suppliers to other sectors; in other words, they are nodes, not terminals, as they are in Figure 2. Thus, production decisions or shocks in sector 1 will affect sectors 2 through n and the sectors that sectors 2 through n serve. A failure can potentially cascade down through this network.

What is key here is not the size of the supplier in the network. Baqaee (2018) shows that systemic importance in a network is decoupled from firm size. Rather, what matters is the role of supplier: the more interconnected the provider, the more likely a cascade. The logic of this point can be seen in Figure 1(a). The sectors in Figure 1(a) are monopolies in their industries. However, a failure in one sector will not cascade to other sectors since they are segregated (i.e. not interconnected) from one another. Whether sector 1 has $1 in revenue or $1 billion, failure in sector 1 will not affect sectors 2 through n.
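The contrast between the network shapes can be sketched numerically. The propagation rule below (each sector's shortfall is at least the input-share-weighted shortfall of its suppliers) is an assumption made purely for illustration and is much simpler than the general-equilibrium mechanisms in Baqaee (2018); the input shares and the size of the shock are likewise assumed.

```python
import numpy as np

def propagate_shock(W, direct_shock, rounds=20):
    """Pass a supply shortfall through input shares W (illustrative dynamics only).

    W[i, j] is the share of sector i's inputs sourced from sector j. Each sector's
    shortfall is at least its direct shock and at least the input-share-weighted
    shortfall of its suppliers.
    """
    shortfall = direct_shock.copy()
    for _ in range(rounds):
        shortfall = np.maximum(direct_shock, W @ shortfall)
    return shortfall

n = 6
direct_shock = np.zeros(n)
direct_shock[0] = 0.10                     # sector 1 loses 10% of its output

# Figure 1(b): every sector sources its intermediate inputs equally from all others.
W_symmetric = 0.5 * (np.ones((n, n)) - np.eye(n)) / (n - 1)

# Figure 3: sector 1 supplies the second tier, which supplies the third tier.
W_cascade = np.zeros((n, n))
W_cascade[1:3, 0] = 0.5     # sectors 2-3 buy half their inputs from sector 1
W_cascade[3:, 1] = 0.25     # sectors 4-6 split their intermediate inputs
W_cascade[3:, 2] = 0.25     # between the two second-tier sectors

print("symmetric network:", np.round(propagate_shock(W_symmetric, direct_shock), 3))
print("Figure 3 network: ", np.round(propagate_shock(W_cascade, direct_shock), 3))
```

Under these assumed numbers, the same 10% shock to sector 1 barely registers in the symmetric network but propagates down both tiers of the Figure 3-style network, reaching sectors that never buy from sector 1 directly.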

Given the definition of an expert as ‘one who is paid for advice’, we can apply the same logic as in cascading production network failure. We can conceptualize Figure 3 as a network of expert sectors (such as the Centers for Disease Control and Prevention (CDC), the United Kingdom's Scientific Advisory Group for Emergencies (SAGE), or a hospital group) and nonexpert sectors (such as households, firms, or legislators) rather than industry sectors. Assume sector 1 represents a shared provider of expert opinion to sectors 2 through n; sectors 2 through n use sector 1's advice in producing their own advice or consume it directly. If sector 1 fails in its expert advice, that failure will affect the actions of sectors 2 through n, and subsequently the consumers they serve. Given the relatively high degree of interconnectedness of sector 1, a small failure could end up cascading through the economy.

For example, consider the case of SAGE, as discussed by Koppl (2021). The pandemic models SAGE and others used in formulating their advice for the COVID-19 pandemic relied heavily on the assumption of a homogeneous population (ibid.); that is, ‘all people hav[e] equal chances of mixing with each other and infecting each other’ (Ioannidis et al., 2022). This assumption is inappropriate as an empirical matter, since people were voluntarily social distancing and locking down before government orders came (Goolsbee and Syverson, 2021). As a modeling matter, the assumption leads to overestimating herd immunity thresholds (Britton et al., 2020; Gomes et al., 2020; see footnote 2). As a consequence of modeling a homogeneous population, SAGE's advice to the British government was predicated on an analysis that likely overestimated the benefits of various mitigation measures. Given that SAGE has significant market power in its role as advisor to the British government (Koppl, 2021), failure on its part could have an effect like that described in Figure 3: a relatively small failure (overestimating herd immunity thresholds and spread) causes the experts to be more pessimistic in their advice. This overestimation became an input into the British government's decision-making. The government developed suboptimal policy, which became an input into the decisions of individual firms and businesses within the United Kingdom.
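To see why the homogeneity assumption matters quantitatively, the sketch below compares the textbook homogeneous-mixing herd immunity threshold, $1 - 1/R_0$, with a two-group SIR simulation in which a more socially active minority is infected, and therefore depleted, first. The group shares, activity ratio, infectious period, and $R_0$ are all assumed for illustration; this is not a reconstruction of the models in Britton et al. (2020) or Gomes et al. (2020), only a demonstration of the direction of the bias.

```python
import numpy as np

# Two activity groups: 75% of people with baseline contacts, 25% with 3x contacts (assumed).
pi = np.array([0.75, 0.25])      # group population shares
a = np.array([1.0, 3.0])         # relative activity levels
gamma = 1.0 / 7.0                # recovery rate, ~7-day infectious period (assumed)
R0 = 2.5                         # assumed basic reproduction number

# With proportionate mixing, R0 = (beta/gamma) * E[a^2]/E[a]; calibrate beta to hit R0.
beta = R0 * gamma * (pi @ a) / (pi @ a**2)

S = pi - 1e-5                    # susceptibles, minus a tiny seed of infections
I = np.full(2, 1e-5)
dt = 0.1
hit_hetero = None

for _ in range(int(400 / dt)):                   # simulate ~400 days with Euler steps
    lam = beta * a * (a @ I) / (a @ pi)          # force of infection on each group
    new_inf = lam * S * dt
    S, I = S - new_inf, I + new_inf - gamma * I * dt
    r_eff = (beta / gamma) * (a**2 @ S) / (a @ pi)
    if hit_hetero is None and r_eff < 1.0:
        hit_hetero = 1.0 - S.sum()               # cumulative infections when R_eff crosses 1

print(f"homogeneous-mixing threshold 1 - 1/R0 : {1 - 1/R0:.2f}")
print(f"two-group (heterogeneous) threshold    : {hit_hetero:.2f}")
```

Because the high-activity group is infected disproportionately early, the effective reproduction number falls below one at a cumulative infection share below the homogeneous benchmark of 60% under these assumed parameters, which is the sense in which homogeneous models overstate the threshold.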

3.2 Siloing

One aspect of cascading failure that deserves special attention is siloing, a mechanism that can start an expert failure cascade. Siloing occurs when an expert has little relevant knowledge outside of their area of expertise and thus is confined to their own discipline or ‘silo’. More precisely, siloing occurs when the expert faces high expected costs and low expected benefits from interacting with experts from other disciplines in the formation of their opinion. Consequently, the expert does not interact with other silos, or dismisses insights and challenges from those outside their silo.

Siloing coincides with the division of labor. As labor is divided into different jobs, specialized knowledge of those jobs forms (Koppl, 2018; Smith, [1776] 1981). Experts are trained in their fields and learn the tools favored by their colleagues. Consequently, the expert analyzes problems through their particular lens and theory and may be unaware of alternative explanations (see footnote 3). Even if they are aware of alternatives, the expert may not understand the subtleties of other fields. Thus, models or explanations from other fields may be misunderstood or misapplied.

Siloing also creates the impression of distinct boundaries between areas of expertise. With siloing, for example, economics and sociology are treated as two distinct fields although both study human behavior. As a consequence, an effectively siloed researcher may discount or dismiss information presented by experts in other silos. Siloing encourages treating information as one-dimensional (X is an economic problem) as opposed to multi-dimensional (X is a problem with multiple aspects). Andreoni and Mylovanov (2012) show that individuals discount information when it is passed through others as opposed to presented directly. I argue the same mechanism is at play here: experts discount information generated in other silos relative to information presented directly from their own silo.

In short, siloing reduces the ability of experts to process, or even be aware of, all relevant information that is part of their opinion-formation process. We need not go as far as Adam Smith, who argued, ‘The man whose whole life is spent in performing a few simple operations…renders him, not only incapable of relishing or bearing a part in any rational conversation, but of conceiving any generous, noble, or tender sentiment, and consequently of forming any just judgment concerning many of even the ordinary duties of private life’ or ‘the great and extensive interests of his country’ (Smith, [1776] 1981: 782). We must merely recognize that siloing creates barriers to information and knowledge transference between fields of expertise.

Figure 4 represents a network model of siloing in action. As in Figure 2, sector 1 represents an expert advising sectors 2 through n. The solid arrows represent recognized information transfers.

Figure 4. Siloing as a cause of cascading expert failure. Each circle represents a sector. Each arrow represents information flows. The dashed arrows represent noisy signals.

Sectors α, β, and γ represent information spaces that provide data to sector 1. For example, if sector 1 is an economic advisor, α may represent the US Census Bureau, IPUMS, academic journals, and other providers of data, information, and knowledge. The solid double arrows indicate that sector 1 uses input from sector α in its production process and provides sector α with inputs as well (e.g. published research).

However, sector 1 indirectly exchanges information with other sectors as well, perhaps unwittingly. That information is noisy as the siloed expert may not know how to interpret it. These exchanges are represented by the dashed arrows. Sectors β and γ represent sectors that are not associated with sector 1's area of expertise but can still provide useful insights if the producer of expert opinion is willing to look. To keep the analogy going, if sector 1 is an economic expert, then sectors β and γ may represent political science and psychology, respectively. An expert's silo is represented by the solid line from an information space to the producer of expert opinion.

The presence of these indirect exchanges of information among silos also explains why cascades can perpetuate. The signals that come from outside the expert's silo (the dashed lines in Figure 4) are noisy, or are perceived by the expert as carrying little signal. The expert may discount the information they are seeing or not understand its relevance. In turn, the expert may not be aware that their advice is failing at all, or why it is failing. This point is explored further in section 5.2.
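One stylized way to express this mechanism: suppose the expert forms an estimate as a weighted average of signals from the three information spaces in Figure 4, but assigns the out-of-silo signals almost no weight. The weights, noise levels, and 'truth' below are assumptions chosen for illustration only; the sketch simply shows that a siloed weighting scheme throws away usable information and produces a noisier estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

def expert_rms_error(truth, weights, noise_sd, n_draws=50_000):
    """RMS error of an estimate formed as a weighted mean of noisy, unbiased signals.

    `weights` is how much attention the expert pays to each source;
    `noise_sd` is how noisy each source's signal is.
    """
    signals = truth + rng.normal(0.0, noise_sd, size=(n_draws, len(weights)))
    estimates = signals @ (np.array(weights) / np.sum(weights))
    return np.sqrt(np.mean((estimates - truth) ** 2))

truth = 1.0
noise_sd = np.array([0.5, 0.6, 0.6])   # own silo (alpha) and two outside silos (beta, gamma)

siloed = expert_rms_error(truth, weights=[1.0, 0.05, 0.05], noise_sd=noise_sd)
unsiloed = expert_rms_error(truth, weights=[1.0, 0.8, 0.8], noise_sd=noise_sd)

print(f"RMS error, siloed expert   : {siloed:.3f}")
print(f"RMS error, unsiloed expert : {unsiloed:.3f}")
```

Nothing in the sketch requires the outside signals to be better than the expert's own; they only need to carry some independent information for ignoring them to be costly.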

Figure 5. The COVID-19 expert opinion production network.

4. Institutions and cascading failure

Cascading expert failure occurs when there is a sufficiently interconnected provider of expert opinion. This interconnectedness may arise when there are high barriers to entry in a marketplace, preventing the entry of new experts and thus the formation of new network connections. Natural barriers to entry include the level of technical competence necessary to be a valuable expert (e.g. a good neurosurgeon requires sophisticated knowledge of how the brain works). Artificial barriers to entry include occupational licensing or ritualistic behavior such as completing a degree program. Artificial barriers serve to ‘certify’ the expert to the nonexpert. Thus, the barriers may serve a purpose by reducing search costs for a group of nonexperts, while also resulting in the expert becoming interconnected.

In a similar manner, these certifications can suggest to nonexperts that certified experts have superior information and judgement compared to other potential experts (Murphy, 2022). Experts are seen as relatively high-information, high-signal-accuracy individuals compared to nonexperts. In turn, relatively high information suggests the expert may understand the world with a higher degree of accuracy than the nonexpert (Wu, 2015). Consequently, nonexperts may follow their advice uncritically. For example, early in the COVID-19 pandemic, the CDC (and other health organizations like the National Institutes of Health) were frequently called on to advise policy. Newspapers uncritically reported their recommendations, and many individuals began following CDC guidelines before they acquired the force of law (Goolsbee and Syverson, 2021). Even after official mandates were repealed or expired, many individual actors continued to follow the guidance of the CDC (Finucane and McKenna, 2021). Thus, even without the force of law or regulation prohibiting entry of other experts, these organizations were able to strongly influence opinion given their perception as high-accuracy experts.

Additionally, these experts may influence other experts' opinions. For example, a physician may uncritically follow the advice of a board of high-ranking physicians given the board's prestige relative to the physician. This advice will, in turn, affect the advice the physician gives their patient. Consequently, experts can themselves become subject to informational cascades with some probability (Banerjee, 1992; Bikhchandani et al., 1992): subsequent experts take the opinion and actions of earlier experts together and adopt that opinion without adding their own private information.
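The mechanics of such a cascade can be sketched with a small Monte Carlo version of the sequential-choice setup in Bikhchandani et al. (1992). The signal accuracy, tie-breaking rule, and sequence length below are assumptions for illustration, not parameters from that paper; the sketch only shows that once a couple of early choices line up, later actors rationally ignore their own signals, and with some probability the whole chain locks onto the wrong choice.

```python
import numpy as np

rng = np.random.default_rng(4)

def run_sequence(n_agents=30, accuracy=0.7):
    """One sequence of choices in the spirit of Bikhchandani et al. (1992).

    The true state is 'adopt is correct'. Each agent sees a private signal that is
    right with probability `accuracy`, plus the net tally of earlier informative
    choices. Once the tally reaches +/-2, following the herd outweighs any single
    signal, so the agent ignores their own information (a cascade).
    """
    tally = 0
    for _ in range(n_agents):
        signal_says_adopt = rng.random() < accuracy
        if abs(tally) >= 2:
            continue        # cascade: the agent copies the herd, revealing nothing new
        # No cascade yet: follow own signal, which later agents can infer and count.
        tally += 1 if signal_says_adopt else -1
    return tally <= -2      # did the sequence lock into the incorrect choice?

wrong = np.mean([run_sequence() for _ in range(20_000)])
print(f"share of sequences ending in an incorrect cascade: {wrong:.3f}")
```

With the assumed 70% signal accuracy, on the order of 15% of sequences end in an incorrect cascade in this toy version, even though pooling everyone's private signals would almost always identify the correct choice.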

Certification can lead to the siloing phenomenon discussed above: only certain experts are ‘allowed’ to have opinions on a topic, and any insights from outside the silo are perceived as low-information. Given how siloing results in noisy signals to the expert, the perception of experts as relatively high-information and high-signal-accuracy compared to the nonexpert may not hold even within their area of expertise.

Ikeda (1997: 112–118) notes the importance of ideology in the process of shaping policy and advice. Barriers to entry can also affect the ideology of experts. By enforcing specific standards, gatekeeper experts can limit entry to generally like-minded individuals. Additionally, they can exclude (or minimize) heterodox ideologies and opinions (Callais and Salter, 2020; Flegal, 2021). The gatekeepers can thus control, to some extent, the intellectual ideology in the market for expert opinion, reducing the diversity of opinion and increasing the likelihood of a cascade.

Institutions that encourage uniform expert advice, as opposed to a diversity of opinion, can contribute to cascading expert failure. For example, SAGE in the United Kingdom has an explicit goal of ‘provid[ing] unified scientific advice on all the key issues, based on the body of scientific evidence presented by its expert participants’ (SAGE, 2020). SAGE's role is to provide the single opinion of their experts, rather than provide full information, to the decision-makers of the United Kingdom. Given the influence and legal authority of the government, this makes SAGE an interconnected expert node.

Section 6 will discuss ways to reform or enhance current institutions to prevent cascading expert failure. But first, I will examine two cases from the COVID-19 pandemic that demonstrate cascading expert failure.

5. Two case studies of cascading expert failure

5.1 COVID test regulatory policies

To explore the effects of cascading expert failure, I examine how the Food and Drug Administration's (FDA's) COVID testing regulatory policies, coupled with advice from the CDC, led to expert failure in the epidemiological testing world.

One of the central questions arising from the COVID-19 pandemic is why there was no effort to conduct randomized testing early in the pandemic (Ioannidis, 2020; Padula, 2020). Public decision-makers require reliable data to make decisions, and in an outbreak of a novel virus, randomized testing helps acquire those data (Ioannidis, 2020; Padula, 2020). Despite the success of mass testing in other countries, the United States government made no effort to randomly test the population. Instead, the CDC recommended that tests be limited to patients who had returned from China or who exhibited symptoms (Centers for Disease Control and Prevention, 2020b; Jernigan and CDC COVID-19 Response Team, 2020). The advisory came, in part, due to the limited quantity of tests in the United States stemming from the FDA's and CDC's regulations on what tests could be used in the United States (Advisory Board, 2020). The recommendations led to unintended results for the CDC and FDA and caused suboptimal recommendations in other fields.

Randomized testing is needed to discover the characteristics of a novel disease, such as how quickly it spreads, who is most at risk, and what the infection and fatality rates are (Hu et al., 2021; Ioannidis, 2020). Even if only 70% of infected people who are tested return a positive result, mass testing still provides essential clues on how to combat the disease and insights for policy (Paltiel et al., 2020). However, the FDA's regulations limited the supply of tests in the market by restricting who could produce them (Food and Drug Administration Staff, 2021) and by severely limiting imports of testing equipment (Food and Drug Administration Staff, 2021; US Customs and Border Protection, 2020). The lower supply of tests implies that a socially optimal policy would direct the marginal test to its marginally higher-valued use. According to the CDC's medical experts, the higher-valued use was to test those suspected of having the disease and then to engage in contact tracing, as evidenced by their advisory (Centers for Disease Control and Prevention, 2020b). Early analyses treated the COVID-19 virus like a type of influenza (Ferguson et al., 2020). When a disease and its properties are well known, testing patients who exhibit symptoms is standard operating procedure (Centers for Disease Control and Prevention, 2020a), and medical professionals often recommend non-pharmaceutical interventions to limit spread (ibid.). The CDC and its leadership are primarily medical doctors. Thus, their behavior early in the pandemic was consistent with previous pandemics of known viruses.

However, the COVID-19 virus was novel. The information needed by decision-makers to formulate responses did not previously exist, nor could it be reasonably inferred. According to the statistical experts, randomized testing was the higher-valued use of the limited tests, as randomized testing would provide the needed information (Padula, 2020). Given the CDC's authority, both de jure as a regulatory body and de facto as a prominent expert body, their opinion prevailed, and the tests were allocated to testing symptomatic patients (Shear et al., 2020). Subsequently, the data collected from those tests were incorporated into policymaking.

Here the first step of cascading expert failure was taken. The tests were meant to help guide policy on the pandemic. However, by limiting testing to suspected cases of COVID-19, the initial results likely carried an upward bias in COVID-19 mortality and severity numbers (Ioannidis et al., 2022). The experts did not know what they needed to know about the virus. Patients hospitalized with the virus, or exhibiting symptoms severe enough to trigger a test, are likely those with more severe cases. Thus, the initial case fatality rates were almost certainly too high, especially given asymptomatic carriers of COVID-19 (Michaels and Stevenson, 2020). Randomized testing would have helped eliminate these statistical biases and provided epidemiologists valuable information about the properties of the virus: how it spread, how fast it spread, the time from infection to symptoms, and so on. Biased data and the lack of epidemiological information hindered the CDC's ability to advise on the pandemic (Rosen, 2021). Additionally, the failure affected several other significant groups of experts.
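The direction of this bias can be illustrated with a short simulation. All of the numbers below (the infection count, the share of severe cases, and the fatality risks by severity) are assumptions invented for illustration, not estimates of COVID-19 parameters; the point is only that a fatality rate measured from symptom-triggered testing mechanically overstates the rate recovered by random testing.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed parameters, for illustration only.
n_infected = 1_000_000
p_severe = 0.05                        # share of infections with severe symptoms
ifr_severe, ifr_mild = 0.05, 0.001     # fatality risk by severity

severe = rng.random(n_infected) < p_severe
dies = np.where(severe,
                rng.random(n_infected) < ifr_severe,
                rng.random(n_infected) < ifr_mild)

# Policy 1: test only symptomatic (severe) cases, as in the early CDC guidance.
cfr_symptomatic = dies[severe].mean()

# Policy 2: random testing of a 1% sample of the infected population.
tested_random = rng.random(n_infected) < 0.01
cfr_random = dies[tested_random].mean()

print(f"true infection fatality rate       : {dies.mean():.4f}")
print(f"measured rate, symptomatic testing : {cfr_symptomatic:.4f}")
print(f"measured rate, random testing      : {cfr_random:.4f}")
```

Under these made-up parameters, symptom-triggered testing reports a fatality rate more than ten times the true infection fatality rate, while random testing approximately recovers it.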

The next step of the cascade stemmed from this first instance of expert failure: the CDC's recommendation on how to use the tests produced estimates that were likely too high. The disease figures generated by the CDC, the World Health Organization (WHO), and other organizations were used to build models of the virus's spread and death rates, such as the Imperial College model (Ferguson et al., 2020) or the Institute for Health Metrics and Evaluation (IHME) model (IHME COVID-19 Health Service Utilization Forecasting Team and Murray, 2020). Consequently, these modeling experts failed in their recommendations, as their models were too pessimistic given the statistically biased data used in them. Ioannidis et al. (2022) note that data bias undermined much of the forecasting in the COVID-19 pandemic. The initial failure of the CDC's recommendation thus cascaded into the model forecasting area of expertise, leading the epidemiologists to fail in their expert advice by producing upward-biased models (see footnote 4).

The expert failure stemming from the CDC's initial recommendation to limit testing to patients exhibiting symptoms also had cascading effects in the realm of policy. We have already seen how the CDC's recommendation led to biased modeling; those models, in turn, were used to inform policy. Model projections informed recommendations on lockdowns, travel restrictions, mask requirements, social distancing, hospital and nursing home visitations, and medical procedures. Given that the models' projections were likely too high, the empirical justifications for lockdowns rested on cost-benefit analyses that overstated the benefits of these policies relative to their costs. Likewise, the models did not consider that people would change their behavior in ways that rendered policies like lockdowns redundant (Goolsbee and Syverson, 2021; Leeson and Rouanet, 2021) or potentially deadly (Mulligan, 2021), since those behavioral changes would not be captured in the data because of the biased sampling. In short, the cost-benefit analysis used to justify lockdowns relied on models that overestimated the benefits of lockdowns and underestimated the costs.

It should be noted that I am not arguing that lockdowns were unjustified by cost-benefit analysis; a more accurate cost-benefit analysis may still have justified lockdowns, although the case may have been more marginal or the time frame shorter. Additionally, given the heavy tail risks of a contagious disease, lockdowns may initially be justified even absent clear data (Cirillo and Taleb, 2020). My claim here is that the data used in the modeling to justify lockdowns were likely heavily distorted. The key statistical characteristics of the disease remained unknown. In turn, the information produced by experts was not more accurate than what was known before testing. These distortions led to an overstatement of the net benefits of lockdown and undue confidence on the part of experts in their recommendations.

Figure 5 is a visual representation of the COVID-19 expert opinion production network I have just discussed, set within the model developed in section 3. Early in the pandemic, the CDC made a decision about how tests should be used and issued guidance accordingly. That guidance influenced how testing clinics and hospitals tested patients and, consequently, the information they reported back to the CDC, as indicated by the arrow going from the ‘Clinics’ sector back to the ‘CDC’ sector. Data then reported by the CDC were used by modelers to produce their advice (see footnote 5). That advice then went to other consumers of expert opinion, shaping their behavior.

The cascading effects of the initial expert failure by the CDC are apparent. I have followed the line of failure down just one of the many paths that branch out from that decision. Much like Adam Smith's woolen coat, tracing out all the actions that spawn from that one decision regarding testing would be a difficult, if not impossible, task. Many other unforeseen consequences could be traced from the initial instance of the CDC's expert failure (see, e.g. Ravindran and Shah, 2020). If the CDC had taken a different action in the early days of the pandemic and allocated tests to randomly testing the population, some of these failures could have been avoided.

The cascading expert failure by the CDC and subsequent experts discussed in this section is due to the interconnected and dominant position of the CDC as a provider of expert opinion. The policy recommendations failed to achieve their desired aim of reducing the damage caused by the virus. In some cases, like the lack of randomized testing, experts' decisions may have caused the outbreak to worsen in the United States, given the lack of reliable data. Other policies, such as lockdowns that served to codify behavior people were already undertaking, may have failed a cost-benefit test since the benefits of the policies were likely overestimated.

5.2 Face mask recommendations during COVID

Expert failure can cascade into other, seemingly unrelated silos because decision-making processes are interconnected, as discussed above in section 3.1. Just as relative prices transmit information to different participants in the production process, so that the consumer of a product need not know why it has become more expensive relative to other goods in order to economize on it (Hayek, 1945), expert advice given in one area can have cascading effects on other areas because they are all interconnected.

The confusion about the effectiveness of masks at the beginning of the 2020 COVID-19 pandemic is an example of siloing causing cascading expert failure. In February 2020, Dr Anthony Fauci advised that most Americans did not need to wear masks to protect themselves against the coronavirus (O'Donnell, 2020). Other government expert advisors, such as the US Surgeon General (Cramer and Sheikh, 2020) and the WHO (Pan American Health Organization and World Health Organization, 2020), repeated this advice. By April, these experts had reversed course: they now recommended wearing masks as necessary to combat the spread of the coronavirus. When asked about the reversal in a June 2020 interview, Fauci stated that he had known masks were effective when he provided the advice in February; he advised otherwise to ensure enough masks and personal protective equipment (PPE) were available to medical personnel (Why Weren't We Wearing Masks From the Beginning? Dr. Fauci Explains, 2020).

The mixed messaging had a detrimental effect on the US government's response to managing the pandemic (Fauci: Mixed Messaging On Masks Set U.S. Public Health Response Back, 2020; Scheid et al., 2020), the opposite of the experts' goal and the goal of the nonexperts they were advising. The mixed messaging, combined with confusion from political leaders, gave the impression that masking advice was based on political, rather than scientific, reasoning (Ho and Huang, 2021; Kiviniemi et al., 2022; Noar and Austin, 2020). This deterioration of trust hindered the ability of the experts to properly advise during the pandemic.

The advice failed in two other crucial ways. First, it discouraged a supply response to an increase in demand. When there is a sudden increase in demand, prices need to rise to allocate the scarce quantity on the market. The expert advisors appeared to have a mental model of a perfectly inelastic supply curve, where higher prices would only lead to masks being allocated to the highest bidders, who might not be medical personnel. Their advice discounted the existence of an upward-sloping supply curve, which would allow firms to increase production at higher prices. The initial advice ended up delaying the market response that would have brought more masks to the market.
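The difference between the two mental models can be made concrete with a back-of-the-envelope calculation. The linear demand and supply curves and every number below are assumptions chosen only for illustration; the comparison simply shows that with an upward-sloping supply curve, part of a demand shock is met by additional quantity rather than by price alone.

```python
def equilibrium(demand_intercept, demand_slope, supply_intercept, supply_slope):
    """Price and quantity where linear demand Qd = a - b*P meets supply Qs = c + d*P."""
    price = (demand_intercept - supply_intercept) / (demand_slope + supply_slope)
    quantity = demand_intercept - demand_slope * price
    return price, quantity

# Baseline mask market (all numbers assumed for illustration).
p0, q0 = equilibrium(100, 1.0, 20, 1.0)

# Pandemic demand shock: the demand intercept jumps from 100 to 200.
# Mental model 1: perfectly inelastic supply -- quantity stays at q0, only price moves.
p_inelastic = (200 - q0) / 1.0
# Mental model 2: upward-sloping supply -- producers expand output as the price rises.
p_elastic, q_elastic = equilibrium(200, 1.0, 20, 1.0)

print(f"baseline             : price {p0:.0f}, quantity {q0:.0f}")
print(f"shock, fixed supply  : price {p_inelastic:.0f}, quantity {q0:.0f}")
print(f"shock, elastic supply: price {p_elastic:.0f}, quantity {q_elastic:.0f}")
```

In the inelastic mental model, the entire shock shows up as a higher price for the same quantity; with the assumed upward-sloping curve, the price rises by less and the quantity of masks on the market nearly doubles.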

Second, the advice did not achieve the goal of preventing hoarding and reserving supply for medical workers; indeed, the mixed messaging may have encouraged hoarding. If prices do not rise, a shortage emerges. When the shortage persists, consumers tend to hoard in order to insure themselves against unreliable availability (Chakraborti and Roberts, 2021b). Existing price controls and purchase quotas on many products, including PPE, encouraged hoarding by consumers of these increasingly hard-to-find products (ibid.). Additionally, extra trips to the stores to hunt for products likely increased the spread of COVID in early 2020 (Chakraborti and Roberts, 2021a).

Thus, the public health experts committed expert failure: their advice, which was supposed to reduce the spread of the disease, ended up contributing to the spread of COVID-19 in the early days of the pandemic. The reversal on masks, coupled with local and state mask mandates, led to a sudden increase in demand. The shortages that arose sent noisy signals to the advisors; because they were effectively siloed and did not seek input from economists, they did not see that the shortages were the result of their advice. Taking a longer view, the use of price controls to prevent hoarding may reduce the ability of the US to manage future pandemics, as price controls discourage building inventories against demand shocks (Zycher et al., 1991).

Figure 6 represents siloing during the COVID-19 pandemic. The public health experts' decision to advise against masks, even though they knew masks would be necessary to limit COVID, was based on the information and interpretation they developed in their silo, labeled ‘Public Health’. However, their advice also relied on insights from at least two other silos: economics and psychology. Economics offers insights into how resources will be allocated following a sudden increase in demand. Psychology offers insights into how people will react to sudden shortages and rapidly changing advice. The experts were unaware of these insights because they did not see value in interacting with those fields, even though they were, in effect, engaging with them. Consequently, the signals the experts received were very noisy, and the experts were unaware that it was their own advice that was causing the failure.

Figure 6. Siloing of public health opinion during the COVID-19 pandemic.

6. Reforms that can prevent cascading expert failure

A goal of expert advice is to help the nonexpert become more informed. Highly interconnected and siloed experts can work against this goal by (unintentionally) providing low-quality, low-information advice and causing a cascade of failure. Therefore, we must discuss institutions that can prevent the concentration of expert power, such as that depicted in Figure 3, and increase the information available to the nonexpert. In section 4, I discussed two institutional structures that make cascading expert failure more likely: uniform expert advice and institutions arising to combat extreme uncertainty. Thus, the policy proposals I discuss focus on those institutional structures.

While uniform expert advice may be expedient, that speed comes at the cost of accuracy. Uniform expert advice works against the goal of increasing the information available to the nonexpert. Milgrom and Roberts (1986) developed a model showing that experts with ‘strongly opposed’ interests can increase the quantity and informativeness of the information available to the nonexpert. They show how even a nonexpert who naïvely accepts all information given to them (i.e. they do not question the information themselves) comes to be fully informed (see also Gentzkow and Kamenica, 2017a). In an adversarial setting like a common-law courtroom (see footnote 6), one side must win and the other must lose. Thus, if the defendant's expert has information that could sway the nonexpert (e.g. the judge) to their side, the plaintiff's expert has the incentive to reveal more information to sway the nonexpert back. Consequently, more information is revealed to the nonexpert.

The theoretical equilibrium is full information revelation to the nonexpert. An adversarial arrangement prevents the monopolization of expertise, allows for multiple producers of expert opinion, and increases the quality of information, preventing a cascade.
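A toy version of this logic is sketched below. The setup (ten pieces of evidence, each observed by both experts and each favoring exactly one side) is an assumption chosen for illustration and is far simpler than the Milgrom and Roberts (1986) model; it only shows that when interests are strongly opposed, every piece of evidence helps someone, so someone has an incentive to disclose it.

```python
import numpy as np

rng = np.random.default_rng(3)

# Ten pieces of evidence; the sign says which side each piece favors
# (+1 plaintiff, -1 defendant). Assume, for illustration, that both experts
# observe every piece -- the correlated-information case discussed below.
evidence = rng.choice([-1, 1], size=10)

def revealed_by(expert_side, evidence):
    """An interested expert discloses only the pieces that favor their own side."""
    return {i for i, sign in enumerate(evidence) if sign == expert_side}

plaintiff = revealed_by(+1, evidence)
defendant = revealed_by(-1, evidence)

print(f"pieces revealed with one expert (plaintiff only): {len(plaintiff)} of {len(evidence)}")
print(f"pieces revealed with adversarial experts        : {len(plaintiff | defendant)} of {len(evidence)}")
```

With a single interested expert, only the pieces favoring that side reach the nonexpert; with two opposed experts, the union of their disclosures covers all ten.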

Additionally, with more access to information, the nonexpert is in a better position to evaluate the quality of the advice given by experts relative to the nonexpert's own goals. Even if the expert's advice, if taken, would lead to failure, the nonexpert can better judge the advice and opt not to take it, limiting a cascade.

Two studies from law support my contention that adversarial competition among experts increases the information available to the nonexpert. Lind et al. (1973) studied information purchased and conveyed by lawyers under three scenarios: client-oriented lawyers versus client-oriented lawyers (adversarial system), court-oriented lawyers versus court-oriented lawyers (inquisitorial system), and court-oriented lawyers versus client-oriented lawyers (a ‘mixed’ system). The lawyers did not ‘purchase’ more information under the different administrative regimes. However, lawyers operating under the adversarial regime did convey more information to their clients (even when that information was detrimental to the client's interests) when compared to the inquisitorial or ‘mixed’ system. In other words, the experts (lawyers) provided more information to the nonexperts (their clients), helping them to make more informed decisions under an adversarial system.

More recently, Block et al. (2000) directly test information revelation by contesting parties under the Milgrom and Roberts (1986) model (the adversarial model I propose) and Tullock's (1980) discussion of the inquisitorial model. They find the inquisitorial model reveals more information when information is private: the contesting parties do not know what knowledge the other parties possess. However, when information is correlated (each party has some clue that the other party possesses information that may discredit them), the parties reveal more information under the adversarial regime. In most policy discussions requiring expert opinion, information is likely correlated, as expert witnesses are aware of differing interpretations and competing theories.

Further, generating this type of competition would be fairly easy. It does not take many competitors to cause monopoly firms to behave as though they operate in a competitive market (Bain, 1954; Baumol et al., 1983, 1988; Kessel, 1971). Indeed, even network firms can behave as though they face competition while commanding a significant market share (Boudreaux and Folsom, 1999). The Milgrom and Roberts model uses only two experts, and full information is still achieved. It is not just the number of experts, but the adversarial nature of the competition, that generates the full-information result.

One may argue that full information revelation to the nonexpert could result in information overload. Information overload is unlikely in the market for expert opinion because the nonexpert is paying for the information. Information overload is an externality that occurs because human attention is unpriced in most information transmission scenarios; information becomes detrimental when its average value is declining (Zandt, 2004). Overload is typical in advertising, where the individual is bombarded with information whether they pay for it or not. However, when the nonexpert is purchasing the information, included in their offered price is an estimation of their attention span and capabilities. A rational actor would not purchase additional information once the marginal cost exceeds the (estimated) marginal benefit and thus would not see a declining average value of information. Additionally, competitive experts have an incentive to prevent information overload from occurring. In order to ‘win’ the business of the nonexpert, the expert is incentivized to make their information understandable to the nonexpert (Koppl, 2018).

Preventing cascading failure resulting from certification is a trickier problem. As discussed above, there is an asymmetric information situation with experts: experts are better informed in their area of expertise than nonexperts. Thus, certifications do serve an informational purpose, albeit at an elevated risk of cascading failure and information cascades. Certifications and ‘brand names’ (such as a Ph.D. from Harvard) can reduce information quality issues (Akerlof, 1970). Consequently, removing certification from the market for expert opinion would reduce the quantity of information and expert opinion in the marketplace.

Rather than eliminating certification, increasing the number of voices in the market for expert opinion can prevent these cascades. Wu (2015) shows that having a small number of low-signal-accuracy nonexperts in the opinion-formation process can reduce the probability of informational cascades. Wu is responding to Bikhchandani et al. (1992), in whose model each actor moves in a sequence after observing all the behaviors ahead of them. In that model, a low-accuracy individual would increase the number of decision-makers needed to trigger a cascade. In my network model, the extra low-accuracy individual is more akin to an additional node in the network that can absorb and stop a cascade.

Wu does note that the addition of the low-accuracy individual ‘decreases the overall information quality by a little’ (Wu, 2015: 408). However, if the experts and nonexperts are in conversation with one another, as in the Milgrom and Roberts (1986) model, the addition is less likely to result in lower-quality information and may even increase its quality (Gentzkow and Kamenica, 2017a).

At this point, it may be tempting to say my argument is nothing more than a call to increase the number of competing experts in the market. Such an interpretation is incorrect. Merely increasing the number of experts may not lead to improved outcomes beyond a certain point (Koppl et al., 2008). Furthermore, groupthink may dominate even with multiple experts, preventing effective adversarial competition among them (Koppl, 2021; Koppl and Murphy, 2022). Indeed, Figure 1(a) has monopoly experts, but since each expert's opinion is bought by only one sector, cascades are unlikely. Instead, it is the market structure of expert opinion and the interconnectedness of experts that matter for cascades (Baqaee, 2018). Allowing free entry and exit of potential competitors will tend to reduce cascading failure.

7. Conclusion

Koppl (2018) discusses expert failure at length, helping economists explore why bad policy develops and persists. Using cascading network failure modeling, I expand Koppl's analysis to include a dynamic dimension: how failures can spread over time and across areas of expertise. I show how even relatively small failures can cascade throughout a network and have significant aggregate impacts across sectors.

Compared to a competitive marketplace of expert opinion, where many experts from many fields can compete for consumers, interconnected experts are more likely to create such cascades. Additionally, siloed yet interconnected experts may provide lower-quality advice (Gentzkow and Kamenica, 2017a). However, there may be benefits to such a monopoly interconnected expert: if the market for expert opinion has sufficiently high negative externalities from the production of advice, then one would want a monopolist to restrict output. We should not dismiss monopoly in this market out of hand, but we should be aware of the potential dangers of such concentration.

Experts are a necessary part of life. Just as the division of labor and the gains from trade improve economic outcomes, so does the division of knowledge. However, such division carries dangers. Smith ([1776] 1981: 782) famously worried that the division of knowledge taken too far could result in a person becoming ‘incapable of relishing or bearing a part in any rational conversation, but of conceiving any generous, noble, or tender sentiment, and consequently of forming any just judgment concerning many of even the ordinary duties of private life’. Understanding the role of experts and expertise, particularly their limits and failures, will help researchers improve their own expert advice and improve the institutional arrangements of expertise in policymaking and general advising.

Acknowledgement

I thank John Palmer, Michael Enz, Abigail Devereaux, Roger Koppl, Art Carden, Alex Tabarrok, participants at the 58th Annual Meetings of the Public Choice Society, and three anonymous referees for valuable feedback.

Footnotes

1 The degree of interconnectedness refers to how many producers rely on the supplier.

2 There is some debate about whether the assumption is still valid as a way to reduce the uncertainty of forecasts. See, for example, Dbouk and Drikakis (2021).

3 I am aware of the irony that I am using the tools of economics to analyze the epistemology of other fields.

4 It is important to note that the issue here is not biased models per se. Biased models often serve a purpose. Rather, the problem is that without randomized testing, one cannot know whether one's estimates are statistically biased. Modelers and other experts had no way of knowing whether they were over- or under-estimating the effects of the disease.

5 Figure 5 is simplified by treating all modelers as a single sector of experts. In reality, many different sectors of experts relied on the CDC's data, from epidemiologists to economists to educators. Each of these sectors should sit within its own circle, but the added complexity would not change the logic and would only obscure the diagram.

6 In Anglo-American Common Law, both plaintiff and defense provide evidence and have the right to cross-examine witnesses (Encyclopaedia Britannica, 2014a). This is called the adversarial system. In the Continental Civil Law system, the judge questions the witnesses and neither the plaintiff nor defense has the right to cross-examine (Encyclopaedia Britannica, 2014b). This is called the inquisitorial system.

References

Acemoglu, D., Carvalho, V., Ozdaglar, A. and Tahbaz-Salehi, A. (2012), ‘The Network Origins of Aggregate Fluctuations’, Econometrica, 80(5): 1977–2016.
Advisory Board (2020), ‘Why Doesn't America Have Enough Coronavirus Tests?’ Advisory Board, March 10, 2020. Accessed January 29, 2021. https://www.advisory.com/en/daily-briefing/2020/03/10/testing-errors.
Akerlof, G. (1970), ‘The Market for “Lemons”: Quality Uncertainty and the Market Mechanism’, The Quarterly Journal of Economics, 84(3): 488–500.
Andreoni, J. and Mylovanov, T. (2012), ‘Diverging Opinions’, American Economic Journal: Microeconomics, 4(1): 209–232.
Bain, J. S. (1954), ‘Economies of Scale, Concentration, and the Condition of Entry in Twenty Manufacturing Industries’, The American Economic Review, 44(1): 15–39.
Banerjee, A. V. (1992), ‘A Simple Model of Herd Behavior’, The Quarterly Journal of Economics, 107(3): 797–817.
Baqaee, D. R. (2018), ‘Cascading Failures in Production Networks’, Econometrica, 86(5): 1819–1838.
Baumol, W. J., Panzar, J. C. and Willig, R. D. (1983), ‘Contestable Markets: An Uprising in the Theory of Industry Structure: Reply’, The American Economic Review, 73(3): 491–496.
Baumol, W. J., Panzar, J. C. and Willig, R. D. (1988), Contestable Markets and the Theory of Industry Structure (Rev. ed.), San Diego, CA: Harcourt Brace Jovanovich.
Berger, P. and Luckmann, T. (1966), The Social Construction of Reality, New York, NY: Anchor Books.
Bikhchandani, S., Hirshleifer, D. and Welch, I. (1992), ‘A Theory of Fads, Fashion, Custom, and Cultural Change as Informational Cascades’, Journal of Political Economy, 100(5): 992–1026.
Block, M., Parker, J., Vyborna, O. and Dusek, L. (2000), ‘An Experimental Comparison of Adversarial Versus Inquisitorial Procedural Regimes’, American Law and Economics Review, 2(1): 170–194.
Boudreaux, D. and Folsom, B. (1999), ‘Microsoft and Standard Oil: Radical Lessons for Antitrust Reform’, The Antitrust Bulletin, 44(3): 555–576.
Britton, T., Ball, F. and Trapman, P. (2020), ‘A Mathematical Model Reveals the Influence of Population Heterogeneity on Herd Immunity to SARS-CoV-2’, Science, 369(6505): 846–849.
Buchanan, J. M. and Tullock, G. (1999), The Calculus of Consent: Logical Foundations of Constitutional Democracy, Indianapolis, IN: Liberty Fund, Inc.
Callais, J. and Salter, A. (2020), ‘Ideologies, Institutions, and Interests: Why Economic Ideas Don't Compete on a Level Playing Field’, The Independent Review, 25(1): 63–78.
Centers for Disease Control and Prevention (2020a), Guide for Considering Influenza Testing When Influenza Viruses are Circulating in the Community, September 1, 2020. Accessed November 13, 2021. https://www.cdc.gov/flu/professionals/diagnosis/considerinfluenza-testing.htm.
Centers for Disease Control and Prevention (2020b), ‘Update and Interim Guidance on Outbreak of 2019 Novel Coronavirus (2019-nCoV)’, Emergency Preparedness and Response, February 1, 2020. Accessed January 29, https://emergency.cdc.gov/han/han00427.asp.
Chakraborti, R. and Roberts, G. (2021a), ‘How Price-Gouging Regulation Undermined COVID-19 Mitigation: Evidence of Unintended Consequences’, Working Paper, The Center for Growth and Opportunity at Utah State University. Accessed January 27, 2022. https://www.thecgo.org/research/how-price-gouging-regulation-underminedcovid-19-mitigation-evidence-of-unintended-consequences/.
Chakraborti, R. and Roberts, G. (2021b), ‘Learning to Hoard: The Effects of Preexisting and Surprise Price-Gouging Regulation during the COVID-19 Pandemic’, Journal of Consumer Policy, 44(4): 507–529.
Cirillo, P. and Taleb, N. N. (2020), ‘Tail Risk of Contagious Diseases’, Nature Physics, 16(6): 606–613.
IHME COVID-19 Health Service Utilization Forecasting Team and Murray, C. J. L. (2020), Forecasting COVID-19 Impact on Hospital Bed-Days, ICU-Days, Ventilator-Days and Deaths by US State in the Next 4 Months. Preprint, medRxiv, March 30, 2020. http://medrxiv.org/lookup/doi/10.1101/2020.03.27.20043752.
Cramer, M. and Sheikh, K. (2020), ‘Surgeon General Urges the Public to Stop Buying Face Masks’, The New York Times (New York) (February 29, 2020). Accessed December 29, 2021. https://www.nytimes.com/2020/02/29/health/coronavirus-n95-facemasks.html.
Dbouk, T. and Drikakis, D. (2021), ‘Fluid Dynamics and Epidemiology: Seasonality and Transmission Dynamics’, Physics of Fluids, 33(2): 021901.
Dewatripont, M. and Tirole, J. (1999), ‘Advocates’, Journal of Political Economy, 107(1): 1–39.
Earl, P. E., Peng, T. and Potts, J. (2007), ‘Decision-Rule Cascades and the Dynamics of Speculative Bubbles’, Journal of Economic Psychology, 28(3): 351–364.
Encyclopaedia Britannica (2014a), Adversarial Procedure, in Encyclopaedia Britannica. Accessed November 22, 2021. https://www.britannica.com/topic/adversary-procedure.
Encyclopaedia Britannica (2014b), Inquisitorial Procedure, in Encyclopaedia Britannica. Accessed November 22, 2021. https://www.britannica.com/topic/inquisitorial-procedure.
Fauci: Mixed Messaging On Masks Set U.S. Public Health Response Back (2020), In Collaboration with Anthony Fauci, July 1, 2020. Accessed December 17, 2021. https://www.npr.org/sections/health-shots/2020/07/01/886299190/it-does-not-haveto-be-100-000-cases-a-day-fauci-urges-u-s-to-follow-guidelines.
Ferguson, N., Laydon, D., Gilani, G. N., Imai, N., Ainslie, K., Baguelin, M., Bhatia, S., Boonyasiri, A., Cucunuba, Z., Cuomo-Dannenburg, G., Dighe, A., Dorigatti, I., Fu, H., Gaythorpe, K., Green, W., Hamlet, A., Hinsley, W., Okell, L. C., Elsland, S., Thompson, H., Verity, R., Volz, E., Wang, H., Wang, Y., Walker, P. G. T., Walters, C., Winskill, P., Whittaker, C., Donnelly, C. A., Riley, S. and Ghani, A. C. (2020), Report 9: Impact of Non-Pharmaceutical Interventions (NPIs) to Reduce COVID-19 Mortality and Healthcare Demand. Imperial College London, March 16, 2020. https://doi.org/10.25561/77482.
Finucane, M. and McKenna, C. (2021), ‘Take a Breath of Fresh Air: The State's Outdoor Mask Requirement Loosens’, Boston Globe (Boston) (April 30, 2021). Accessed December 13, 2021. https://www.bostonglobe.com/2021/04/30/nation/take-breathfresh-air-outdoor-mask-requirement-mass-is-looser-today/.
Flegal, K. M. (2021), ‘The Obesity Wars and the Education of a Researcher: A Personal Account’, Progress in Cardiovascular Diseases, 67: 75–79.
Foerster, A. T., Sarte, P. D. G. and Watson, M. W. (2011), ‘Sectoral versus Aggregate Shocks: A Structural Factor Analysis of Industrial Production’, Journal of Political Economy, 119(1): 1–38.
Food and Drug Administration Staff (2021), Policy for Coronavirus Disease-2019 Tests During the Public Health Emergency (Revised), May. Accessed December 17, 2021. https://web.archive.org/web/20200505044345/https://www.fda.gov/regulatory-information/search-fda-guidance-documents/policy-coronavirus-disease-2019-tests-during-publichealth-emergency-revised.
Froeb, L. and Kobayashi, B. (1993), ‘Competition in the Production of Costly Information: An Economic Analysis of Adversarial Versus Court-Appointed Presentation of Expert Testimony’, Working Paper, George Mason University. https://www.law.gmu.edu/pubs/papers/9305.
Froeb, L. and Kobayashi, B. (1996), ‘Naive, Biased, yet Bayesian: Can Juries Interpret Selectively Produced Evidence?’, Journal of Law, Economics, and Organization, 12(1): 257–276.
Gentzkow, M. and Kamenica, E. (2017a), ‘Bayesian Persuasion with Multiple Senders and Rich Signal Spaces’, Games and Economic Behavior, 104: 411–429.
Gentzkow, M. and Kamenica, E. (2017b), ‘Competition in Persuasion’, The Review of Economic Studies, 84(1): 300–322.
Gomes, M. G. M., Ferreira, M. U., Corder, R. M., King, J. G., Souto-Maior, C., Penha-Goncalves, C., Goncalves, G., Chikina, M., Pegden, W. and Aguas, R. (2020), Individual Variation in Susceptibility or Exposure to SARS-CoV-2 Lowers the Herd Immunity Threshold. Preprint, medRxiv. http://medrxiv.org/lookup/doi/10.1101/2020.04.27.20081893.
Goolsbee, A. and Syverson, C. (2021), ‘Fear, Lockdown, and Diversion: Comparing Drivers of Pandemic Economic Decline 2020’, Journal of Public Economics, 193: 104311.
Hand, L. (1901), ‘Historical and Practical Considerations regarding Expert Testimony’, Harvard Law Review, 15(1): 40–58.
Hayek, F. A. (1945), ‘The Use of Knowledge in Society’, The American Economic Review, 35(4): 519–530.
Ho, A. and Huang, V. (2021), ‘Unmasking the Ethics of Public Health Messaging in a Pandemic’, Journal of Bioethical Inquiry, 18(4): 549–559.
Hu, S., Wang, W., Wang, Y., Litvinova, M., Luo, K., Ren, L., Sun, Q., Chen, X., Zeng, G., Li, J., Liang, L., Deng, Z., Zheng, W., Li, M., Yang, H., Guo, J., Wang, K., Chen, X., Liu, Z., Yan, H., Shi, H., Chen, Z., Zhou, Y., Sun, K., Vespignani, A., Viboud, C., Gao, L., Ajelli, M. and Yu, H. (2021), ‘Infectivity, Susceptibility, and Risk Factors Associated with SARS-CoV-2 Transmission Under Intensive Contact Tracing in Hunan, China’, Nature Communications, 12(1): 1533.
Ikeda, S. (1997), Dynamics of the Mixed Economy: Toward a Theory of Interventionism, New York, NY: Routledge.
Ioannidis, J. P. A. (2020), ‘A Fiasco in the Making? As the Coronavirus Pandemic Takes Hold, We Are Making Decisions Without Reliable Data’, Stat, March 17, 2020. https://www.statnews.com/2020/03/17/a-fiasco-in-the-making-as-the-coronaviruspandemic-takes-hold-we-are-making-decisions-without-reliable-data/.
Ioannidis, J. P. A., Cripps, S. and Tanner, M. A. (2022), ‘Forecasting for COVID-19 has Failed’, International Journal of Forecasting, 38(2): 423–438.
Jernigan, D. B. and CDC COVID-19 Response Team (2020), Update: Public Health Response to the Coronavirus Disease 2019 Outbreak – United States, February 24, 2020. Centers for Disease Control, February 28, 2020.
Kang, S. H. and Kim, J. (2021), ‘The Fragility of Experts: A Moderated-Mediation Model of Expertise, Expert Identity Threat, and Overprecision’, Academy of Management Journal, 65(2).
Kessel, R. (1971), ‘A Study of the Effects of Competition in the Tax-Exempt Bond Market’, Journal of Political Economy, 79(4): 706–738.
Kiviniemi, M. T., Orom, H., Hay, J. L. and Waters, E. A. (2022), ‘Prevention is Political: Political Party Affiliation Predicts Perceived Risk and Prevention Behaviors for COVID-19’, BMC Public Health, 22(1): 298.
Koppl, R. (2002), Big Players and the Economic Theory of Expectations, New York, NY: Palgrave Macmillan.
Koppl, R. (2018), Expert Failure, New York, NY: Cambridge University Press.
Koppl, R. (2021), ‘Public Health and Expert Failure’, Public Choice, https://doi.org/10.1007/s11127-021-00928-4.
Koppl, R. and Murphy, J. (2022), ‘Manage Your Experts: Social Structure Influences Expert Overconfidence’, Unpublished Manuscript, Syracuse University.
Koppl, R. G., Kurzban, R. and Kobilinsky, L. (2008), ‘Epistemics for Forensics’, Episteme, 5(2): 141–159.
Lavoie, D. (2016), National Economic Planning: What is Left?, Arlington, VA: The Mercatus Center at George Mason University.
Leeson, P. T. and Rouanet, L. (2021), ‘Externality and COVID-19’, Southern Economic Journal, 87(4): 1107–1118.
Lind, E. A., Thibaut, J. and Walker, L. (1973), ‘Discovery and Presentation of Evidence in Adversary and Nonadversary Proceedings’, Michigan Law Review, 71(6): 1129–1144.
Lucas, R. E. (1977), ‘Understanding Business Cycles’, Carnegie-Rochester Conference Series on Public Policy, 5: 7–29.
Mannheim, K. (1936), Ideology and Utopia: An Introduction to the Sociology of Knowledge, New York, NY: Harcourt, Brace & World, Inc.
Michaels, J. A. and Stevenson, M. D. (2020), Explaining National Differences in the Mortality of Covid-19: Individual Patient Simulation Model to Investigate the Effects of Testing Policy and Other Factors on Apparent Mortality. Preprint, medRxiv.
Milgrom, P. and Roberts, J. (1986), ‘Relying on the Information of Interested Parties’, The RAND Journal of Economics, 17(1): 18–32.
Miller, G. J. (1992), Managerial Dilemmas: The Political Economy of Hierarchy, Cambridge, UK: Cambridge University Press.
Mulligan, C. (2021), ‘The Backward Art of Slowing the Spread? Congregation Efficiencies during COVID-19’, Working Paper #w28737, Cambridge, MA: National Bureau of Economic Research.
Murphy, J. (2022), ‘The Ambiguity of Superiority and Authority’, SSRN Electronic Journal, https://papers.ssrn.com/sol3/papers.cfm?abstractid=4111527.
Murphy, J., Devereaux, A., Goodman, N. and Koppl, R. (2021), ‘Expert Failure and Pandemics: On Adapting to Life with Pandemics’, Cosmos + Taxis, 9(5): 7–17.
Noar, S. M. and Austin, L. (2020), ‘(Mis)communicating about COVID-19: Insights from Health and Crisis Communication’, Health Communication, 35(14): 1735–1739.
O'Donnell, J. (2020), ‘Top Disease Official: Risk of Coronavirus in USA is “Minuscule”; Skip Mask and Wash Hands’, USA Today, February 17, 2020.
Padula, W. V. (2020), ‘Why Only Test Symptomatic Patients? Consider Random Screening for COVID-19’, Applied Health Economics and Health Policy, 18(3): 333–334.
Paltiel, A. D., Zheng, A. and Walensky, R. P. (2020), ‘Assessment of SARS-CoV-2 Screening Strategies to Permit the Safe Reopening of College Campuses in the United States’, JAMA Network Open, 3(7): e2016818. https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2768923.
Pan American Health Organization and World Health Organization (2020), ‘Face Masks During Outbreaks: Who, When, Where, and How to Use Them’, Pan American Health Organization: News, February 28, 2020. Accessed May 26, 2022. https://www.paho.org/en/news/28-2-2020-face-masks-during-outbreaks-who-when-where-and-how-usethem.
Radzevick, J. R. and Moore, D. A. (2011), ‘Competing to Be Certain (But Wrong): Market Dynamics and Excessive Confidence in Judgment’, Management Science, 57(1): 93–106.
Ravindran, S. and Shah, M. (2020), ‘Unintended Consequences of Lockdowns: COVID-19 and the Shadow Pandemic’, Working Paper #w27562, Cambridge, MA: National Bureau of Economic Research.
Rosen, J. (2021), ‘Johns Hopkins Launches Pandemic Data Initiative to Address COVID-19 Data Problems’, Johns Hopkins University. Accessed May 26, 2021. https://hub.jhu.edu/2021/05/17/pandemic-data-initiative-crc/.
SAGE (2020), SAGE Explainer, May 5, 2020. Accessed November 11, 2021. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachmentdata/.
Scheid, J. L., Lupien, S. P., Ford, G. S. and West, S. L. (2020), ‘Commentary: Physiological and Psychological Impact of Face Mask Usage during the COVID-19 Pandemic’, International Journal of Environmental Research and Public Health, 17(18): 6655.
Shear, M., Goodnough, A., Kaplan, S., Fink, S., Thomas, K. and Weiland, N. (2020), ‘The Lost Month: How a Failure to Test Blinded the U.S. to Covid-19’, The New York Times (New York, NY) (March 28, 2020).
Shin, H. S. (1998), ‘Adversarial and Inquisitorial Procedures in Arbitration’, The RAND Journal of Economics, 29(2): 378–405.
Smith, A. ([1776] 1981), An Inquiry into the Nature and Causes of the Wealth of Nations, Indianapolis, IN: Liberty Fund, Inc.
Smith, A. (1982), Lectures on Jurisprudence, Indianapolis, IN: Liberty Fund, Inc.
Taschereau-Dumouchel, M. (2020), ‘Cascades and Fluctuations in an Economy with an Endogenous Production Network’, SSRN Electronic Journal. https://ssrn.com/abstract=3115854.
Tullock, G. (1980), Trials on Trial: The Pure Theory of Legal Procedure, New York, NY: Columbia University Press.
Tullock, G. (2005a), Bureaucracy, Indianapolis, IN: Liberty Fund, Inc.
Tullock, G. (2005b), The Organization of Inquiry, Indianapolis, IN: Liberty Fund, Inc.
Turner, S. (2001), ‘What is the Problem with Experts?’, Social Studies of Science, 31(1): 123–149.
US Customs and Border Protection (2020), COVID-19 Test Kit Importation Requirements, October 28, 2020. Accessed December 17, 2021. https://imports.cbp.gov/s/article/COVID-19-Test-Kit-Importation-Requirements.
Why Weren't We Wearing Masks From the Beginning? Dr. Fauci Explains (2020), In Collaboration with Anthony Fauci, June 12, 2020. Accessed July 26, 2021. https://www.thestreet.com/video/dr-fauci-masks-changing-directive-coronavirus.
Wu, J. (2015), ‘Helpful Laymen in Informational Cascades’, Journal of Economic Behavior & Organization, 116: 407–415.
Xenophon (2013), The Apology, edited by H. G. Dakyns, Project Gutenberg. Accessed December 4, 2021. https://www.gutenberg.org/files/1171/1171-h/1171-h.htm.
Zandt, T. (2004), ‘Information Overload in a Network of Targeted Communication’, The RAND Journal of Economics, 35(3): 542–560.
Zycher, B., Solomon, K. A. and Yager, L. (1991), An Adequate Insurance Approach to Critical Dependencies of the Department of Defense, Santa Monica, CA: RAND Corporation. https://www.rand.org/pubs/reports/R3880.html.
Figures

Figure 1. Representations of two production networks. Each circle represents a node of a sector or producer. Each arrow represents the direction output flows. (a) A production network where no producer relies on another for input. Each producer is entirely self-sufficient. (b) A production network where each producer relies equally on the others. Each producer buys input from and sells output to every other producer. Source: Acemoglu et al. (2012).

Figure 2. A production network where one producer is the sole supplier to all other producers. Each circle represents a producer/sector. Each arrow indicates output flow. Sector 1 supplies sectors 2 through n. Source: Acemoglu et al. (2012).

Figure 3. A production network with a single shared supplier. Each circle represents a producer/sector. Each arrow indicates output flow. Source: Acemoglu et al. (2012).

Figure 4. Siloing as a cause of cascading expert failure. Each circle represents a sector. Each arrow represents information flows. The dashed arrows represent noisy signals.

Figure 5. The COVID-19 expert opinion production network.

Figure 6. Siloing of public health opinion during the COVID-19 pandemic.