
A psychological model of collective risk perceptions

Published online by Cambridge University Press:  15 April 2024

Sergio Pirla*
Affiliation:
Department of Psychology and Behavioural Sciences, Aarhus University, Aarhus, Denmark

Abstract

Decades of research seek to understand how people form perceptions of risk by modeling either individual-level psychological processes or broader social and organizational mechanisms. Yet, little formal theoretical work has focused on the interaction of these 2 sets of factors. In this paper, I contribute to closing this gap by modifying a psychologically rich individual model of probabilistic reasoning to account for the transmission and collective emergence of risk perceptions. Using data from 357 individuals, I present experimental evidence in support of my main building assumptions and demonstrate the empirical validity of my model. Incorporating these results into an agent-based setting, I simulate over 1.5 billion social interactions to analyze the emergence of risk perceptions within organizations under different information frictions (i.e., limits on the availability and precision of social information). My results show that by focusing on information quality (rather than availability), groups and organizations can more effectively boost the accuracy of their emergent risk perceptions. This work offers researchers a formal framework to analyze the relationship between psychological and organizational factors in shaping risk perceptions.

Type
Theory Article
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Society for Judgment and Decision Making and European Association for Decision Making

1. Introduction

Human decision-making frequently hinges on our perceptions of risk, the subjective probabilities we attribute to uncertain outcomes. Whether it’s an individual deciding to invest in a start-up, guided by their calculated risk of financial reward, a patient contemplating a novel medical treatment, swayed by their grasp of potential health risks, or a daily commuter choosing to cycle rather than drive, steered by their safety assessment, risk perceptions infiltrate all aspects of our everyday lives.

Considering the critical importance of risk perceptions in shaping our day-to-day experiences, a vast scholarly literature has attempted to dissect their complex nature (see Siegrist and Árvai, Reference Siegrist and Árvai2020 for a review). This exploration encompasses the emotional, cognitive, and social factors that influence our view of risk in a variety of contexts and circumstances (Holzmeister et al., Reference Holzmeister, Huber, Kirchler, Lindner, Weitzel and Zeisberger2020; Kim et al., Reference Kim, Schroeder and Pennington-Gray2016; Slovic and Peters, Reference Slovic and Peters2006; Slovic et al., Reference Slovic, Fischhoff and Lichtenstein1980; Tompkins et al., Reference Tompkins, Bjälkebring and Peters2018). Within the scope of this literature, several authors have put forth formal mathematical models with the intent to elucidate the process by which individuals form risk perceptions (Bordalo et al., Reference Bordalo, Gennaioli and Shleifer2012; Busby et al., Reference Busby, Onggo and Liu2016; Einhorn and Hogarth, Reference Einhorn and Hogarth1985, Reference Einhorn and Hogarth1986; Moussaïd, Reference Moussaïd2013). These formal models hold a number of advantages over verbal psychological theories (Borsboom et al., Reference Borsboom, van der Maas, Dalege, Kievit and Haig2021; Fried, Reference Fried2020; Robinaugh et al., Reference Robinaugh, Haslbeck, Ryan, Fried and Waldorp2021). First, formal models offer a mathematically rigorous framework that provides a systematic and structured understanding of the underlying mechanisms driving the formation of risk perceptions. This mathematical modeling allows for more precise predictions, making it easier to validate or refute the proposed mechanisms. Second, compared to verbal theories, formal models have the capacity to incorporate and quantify complex interactions between variables, which can be difficult to disentangle or express verbally. Third, by explicitly outlining the mathematical relationships among different variables, formal models bridge the divide between psychological theories and the application of statistical models to empirical data (Borsboom et al., Reference Borsboom, van der Maas, Dalege, Kievit and Haig2021; Fried, Reference Fried2020).

Despite the undeniable strides made by these formalization efforts, there remains an important gap in the existing literature. Prior work on risk perception has modeled either individual-level psychological processes or broader social and organizational phenomena, with scant attention paid to the interaction of these 2 sets of factors. In this paper, I contribute to closing this gap by modifying a psychologically rich individual model of probabilistic reasoning to account for the social transmission and collective emergence of risk perceptions.

1.1. Modeling risk perceptions: The role of psychological processes

In various everyday situations, individuals often deal with limited or imprecise information regarding the actual probabilities associated with different outcomes. This lack of accessible and precise probabilistic information presents a scientific challenge when modeling the psychological processes behind the formation of subjective probabilistic assessments.

An early body of literature proposed modeling individual risk perceptions as emerging from Bayesian updating, providing a theoretically sound framework for understanding how individuals should ideally update their probabilistic beliefs in response to new information. Examples of these mathematically tractable models can be found in settings where individuals were presented with nuclear risks (Smith and Michaels, Reference Smith and Michaels1987), dangerous chemicals (Smith and Johnson, Reference Smith and Johnson1988), workplace hazards (Viscusi and O’Connor, Reference Viscusi and O’Connor1984), or overall mortality assessments (Hakes and Viscusi, Reference Hakes and Viscusi1997). However, empirical evidence has since accumulated showing that individuals often fail to adhere to the principles of Bayesian updating when assessing risks (Kahneman et al., Reference Kahneman, Slovic, Slovic and Tversky1982; Siegrist and Árvai, Reference Siegrist and Árvai2020; Slovic, Reference Slovic2016; Tversky and Kahneman, Reference Tversky and Kahneman1974; Viscusi, Reference Viscusi1985).

Responding to this discrepancy between normative models and actual psychological processes, several authors departed from rational models of risk perceptions to focus on the role of biases and heuristics (Kahneman et al., Reference Kahneman, Slovic, Slovic and Tversky1982; Tversky and Kahneman, Reference Tversky and Kahneman1974). Notably, Einhorn and Hogarth (Reference Einhorn and Hogarth1985, Reference Einhorn and Hogarth1986) suggested a model in which subjective risk perceptions emerge from a process of anchoring and adjustment. In this framework, an individual initially sets a probability estimate, termed an anchor, and then mentally explores probabilities both above and below this anchor, leading to a final adjustment away from the initial estimate.

This model, which offers a descriptive view of the formation of subjective risk perceptions, has been shown to account for a significant number of empirical regularities and exhibits a good empirical fit in experimental settings (Einhorn and Hogarth, Reference Einhorn and Hogarth1985, Reference Einhorn and Hogarth1986; Hogarth and Einhorn, Reference Hogarth and Einhorn1990; Hogarth and Kunreuther, Reference Hogarth and Kunreuther1985, Reference Hogarth and Kunreuther1989). Moreover, many of the ideas introduced in Einhorn and Hogarth’s model have significantly influenced the risk literature in the fields of economics and psychology. For instance, a recent paper by Jaspersen and Ragin (Reference Jaspersen and Ragin2021) develops an anchoring and adjustment model of decision-making under risk, demonstrating that such a model can elucidate various choice anomalies. Similarly, Johnson and Busemeyer (Reference Johnson and Busemeyer2016) employ an anchoring and adjustment model to capture the process of mental simulation of probabilities by an individual facing a choice under risk. Other noteworthy contributions in the economics literature have either adapted the insights of Einhorn and Hogarth’s risk perception model to a revealed-preference context (Abdellaoui et al., Reference Abdellaoui, Baillon, Placido and Wakker2011) or incorporated some of its insights to analyze additional psychological processes like salience (Bordalo et al., Reference Bordalo, Gennaioli and Shleifer2012) or cognitive uncertainty (Enke and Graeber, Reference Enke and Graeber2019).

The rich body of theoretical work has been instrumental in directing and enriching an expanding empirical literature on risk perceptions and preferences in economics and psychology (Barberis, Reference Barberis2013; Siegrist and Árvai, Reference Siegrist and Árvai2020). In particular, it has shed light on the complexities of individual decision-making processes under conditions of uncertainty. However, a common assumption these models often make is that individuals confront risky choices in isolation. In taking this approach, these models inadvertently overlook the social dynamics that frequently characterize the formation of risk perceptions.

1.2. Modeling risk perceptions: The role of social and organizational processes

A distinct body of literature has concentrated on exploring the social and organizational determinants of risk perception, introducing several models to clarify how these elements shape subjective risk evaluations. This literature has been notably influenced by network models of attitudinal contagion, wherein individuals are portrayed as nodes connected within a network to study the formation and evolution of different attitudes and perceptions within groups. Popular contagion models include the threshold model (Granovetter, Reference Granovetter1978; Watts, Reference Watts2002, Reference Watts2004; which postulates that an individual will adopt a new attitude or behavior when a specified proportion of their network connections have done so) or the independent cascade model (Kempe et al., Reference Kempe, Kleinberg and Tardos2003; Watts, Reference Watts2004; which postulates that when an individual adopts a new attitude, each of their connections in the network has a certain probability of also adopting that attitude).

Beyond attitudinal contagion models, a few authors have attempted to formalize the social amplification of risk framework (SARF, Kasperson et al., Reference Kasperson, Renn, Slovic, Brown, Emel, Goble, Kasperson and Ratick1988, Reference Kasperson, Webler, Ram and Sutton2022). This framework describes risk perceptions as emerging from the interaction of complex psychological, social, institutional, and cultural processes. Various societal mechanisms such as media coverage, cultural beliefs, interpersonal communication, and people’s experiences can attenuate or amplify these perceptions of risk. While formalizing all aspects of the SARF is infeasible, past work has focused on important specific aspects of this theoretical framework (Busby et al., Reference Busby, Onggo and Liu2016; Moussaïd, Reference Moussaïd2013). For example, Moussaïd (Reference Moussaïd2013) modifies an opinion-dynamics model (i.e., a framework used to understand how individuals’ opinions evolve and influence each other within a social network through persuasion and consensus building) to study the social evolution of risk perceptions. Specifically, Moussaïd focuses on the role of the media environment, showing that the propensity of individuals to seek independent information can play a key role in the formation of risk perceptions. Similarly, Busby et al. (Reference Busby, Onggo and Liu2016) employ an agent-based model to formalize some concepts presented in the SARF. Specifically, these authors show that the availability of different information sources and the way risk communications are crafted can substantially shape collective risk perceptions.

Despite the substantial advancements these models have made in uncovering the nature of risk perceptions, their focus on social and organizational aspects has resulted in a somewhat circumscribed view, incorporating only a limited set of psychological factors. Moreover, by failing to explicitly incorporate descriptive psychological mechanisms, these models cannot be easily modified to further study the role of different psychological processes in shaping collective risk perceptions. The absence of this psychological modeling in existing formal models of the social emergence of risk perceptions underscores the need for more integrative approaches that can bridge the gap between social and psychological processes in the formation of risk perceptions.

1.3. The present paper

The goal of this paper is to synthesize these 2 separate fields of research. To do so, I present a modified individual model of probabilistic reasoning to account for the social propagation of subjective risk perceptions. In doing so, this paper presents a framework to study how subjective perceptions of risk (i.e., subjective probabilities assigned to uncertain events) evolve in groups.

As typically assumed in information transmission models (Banerjee, Reference Banerjee1992; Banerjee and Fudenberg, Reference Banerjee and Fudenberg2004; Kasperson et al., Reference Kasperson, Renn, Slovic, Brown, Emel, Goble, Kasperson and Ratick1988; Watts, Reference Watts2002, Reference Watts2004), in my theoretical approach, agents can discern the subjective probabilities attached to an uncertain event by members of their social network. Utilizing this social information, agents generate subjective risk perceptions by adhering to an anchoring-and-adjustment process, as proposed by Einhorn and Hogarth (Reference Einhorn and Hogarth1985, Reference Einhorn and Hogarth1986). Specifically, my framework presumes that agents pay attention to 2 aspects of their social information—the mean and the dispersion of subjective probabilities. Agents employ the standard deviation of subjective probabilities associated with an event to infer the level of ambiguity or precision in their social information, capturing the notion that a higher level of consensus among individuals implies a lower degree of ambiguity about an event’s true probabilities. Given an estimate of the ambiguity in their social information, the agents anchor their subjective probability assessments on the average subjective probability assigned to an event by their network members. Following this, the agents undergo an adjustment process to account for the ambiguity in their social information. This adjustment process represents the mental simulation of probabilities above and below the anchoring estimate. Given the individual’s uncertainty about the actual probability distribution of an event, the agent mentally explores and tests various levels of likelihood, resulting in an adjustment away from the anchor that is contingent on the anchoring probability itself, the degree of ambiguity in the agent’s social information, and the agents’ attitudes toward the risky event.

Empirical support for my primary building assumptions and the validity of my model are obtained using data from 357 participants in an online study. Furthermore, I utilize this experimental data to calibrate the model and implement it in an agent-based setting. By simulating over 1.5 billion social interactions, I investigate the emergence of risk perceptions within organizations. To exemplify how my model can yield testable predictions, I examine how different information frictions (i.e., constraints on the precision and availability of social information) influence a group’s emergent risk perception. My findings reveal that constraining the quality of socially shared information results in more imprecise and ambiguous risk perceptions. Conversely, limiting the quantity of socially shared information leads to a more accurate (albeit slower) emergence of risk perceptions and a higher degree of ambiguity resolution. My results suggest that groups and organizations can more effectively enhance the accuracy of their emergent risk perceptions by prioritizing information quality over availability.

Beyond specific results, this work offers researchers a versatile framework to formally study the interplay of a broad range of organizational, psychological, and social phenomena in shaping collective risk perceptions. By doing so, this paper paves the way for the development of novel research lines on the construction of collective risk perceptions.

2. Theoretical model

In this section, I first offer an overview of Einhorn and Hogarth’s (Reference Einhorn and Hogarth1985, Reference Einhorn and Hogarth1986) model of probabilistic reasoning (Section 2.1). Then, I adapt this model to a social setting by endogenizing some of its key ingredients (Section 2.2).

2.1. Theoretical foundation: A model of probabilistic reasoning

My theoretical framework is an adaptation to a social setting of Einhorn and Hogarth’s model of probabilistic reasoning under ambiguity (Einhorn and Hogarth, Reference Einhorn and Hogarth1985, Reference Einhorn and Hogarth1986). In essence, this model outlines a cognitive process by which agents assign a subjective probability to an uncertain event in situations where probabilistic information is either lacking or vague. In this model, the agents follow a process of anchoring and adjustment when deriving the subjective probability of an event. Specifically, the subjective probability assigned to an outcome x—namely SP(x)—is given by the following equation:

(1) $$ \begin{align} SP(x)= p + k, \end{align} $$

where p (ranging from 0 to 1) is the anchoring probability and k the adjustment made to account for the ambiguity in the available probabilistic information. The authors provide a rationale for this adjustment process—it represents a process of mental simulation of probabilities above and below the anchor (see Johnson and Busemeyer, Reference Johnson and Busemeyer2016 for a different approach to modeling the mental simulation of probabilities in risky choice). As no precise probabilistic information exists, an agent tests and evaluates the plausibility of different probability levels above and below the anchor. This leads to an adjustment away from the anchor that depends on the total amount of ambiguity, the agent’s attitudes toward the uncertain event (i.e., the agent’s level of optimism/pessimism), and the anchoring probability itself. Specifically, the total adjustment (k) is the difference between the adjustment toward greater probabilities ( $k_g$ ) and the adjustment toward smaller probabilities ( $k_s$ ):

(2) $$ \begin{align} k = k_g - k_s. \end{align} $$

As highlighted in Einhorn and Hogarth (Reference Einhorn and Hogarth1985, Reference Einhorn and Hogarth1986), the maximum value of $k_g$ is $(1-p)$. Similarly, $k_s$ cannot exceed p—violating either of these 2 conditions could result in subjective probabilities that are below 0 or above 1. The authors assume that $k_s$ and $k_g$ are captured by the maximum adjustments multiplied by a constant of proportionality $\theta $ ( $\theta \in [0,1]$ ). This constant of proportionality represents the amount of ambiguity in the probabilistic information, reflecting the idea that the mental simulation of probabilities is more extensive and leads to a greater adjustment away from the anchor under conditions of increased ambiguity (i.e., when the agent holds less precise probabilistic information). Finally, to capture the agent’s attitudes toward the uncertain event, the authors include a parameter ( $\beta $ ) representing the agent’s degree of optimism or pessimism—the extent to which probabilities above (as opposed to below) the anchor are considered. To do so, the authors weight the maximum adjustment toward smaller probabilities by $\beta $ ( $\beta \in [0,\infty )$ ). The final adjustment functions are given in the following equations:

(3) $$ \begin{align} k_g = \theta (1 - p), \end{align} $$
(4) $$ \begin{align} k_s = \theta p^\beta. \end{align} $$

For values of $\beta $ below 1, an agent gives more weight to probabilities below (as opposed to above) the anchor, with the opposite effect taking place for values of $\beta $ above 1. If $\beta $ is equal to 1, the agent gives the same weight to probabilities above and below the anchor. Replacing these equations in Equation (1) yields the following functional form:

(5) $$ \begin{align} SP(x) = p + \theta (1-p-p^\beta). \end{align} $$
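To make the mechanics of Equations (1)–(5) concrete, the following minimal Python sketch computes a subjective probability from an anchor, an ambiguity level, and an attitude parameter. This is only an illustration of the published equations, not code from the paper; the function and variable names are my own.

```python
def subjective_probability(p: float, theta: float, beta: float) -> float:
    """Einhorn-Hogarth anchoring-and-adjustment rule, Equation (5).

    p     : anchoring probability, in [0, 1]
    theta : amount of ambiguity, in [0, 1]
    beta  : attitude parameter; beta < 1 weights probabilities below the anchor more
    """
    k_g = theta * (1.0 - p)       # adjustment toward greater probabilities, Eq. (3)
    k_s = theta * p ** beta       # adjustment toward smaller probabilities, Eq. (4)
    return p + (k_g - k_s)        # Eq. (1) with k = k_g - k_s, Eq. (2)

# Illustrative values: a moderately ambiguous event (theta = 0.5) and a pessimistic agent (beta = 0.5)
print(subjective_probability(p=0.3, theta=0.5, beta=0.5))  # 0.3 + 0.35 - 0.274 ≈ 0.376
```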

2.2. A social model of probabilistic reasoning

People routinely use information gathered from their social interactions to build their risk perceptions. This is especially important in contexts where access to objective information is either limited or costly. In my model, I assume that any given individual can recover the subjective probabilities assigned to an ambiguous or uncertain event by those in their social network—an assumption commonly made in models of information transmission and social learning (Banerjee, Reference Banerjee1992; Banerjee and Fudenberg, Reference Banerjee and Fudenberg2004; Kasperson et al., Reference Kasperson, Renn, Slovic, Brown, Emel, Goble, Kasperson and Ratick1988; Watts, Reference Watts2002, Reference Watts2004). Using this social information, each agent anchors its probability judgment on the average probability level observed in its social network. That is, in my model, the anchoring probability used by agent j is defined as

(6) $$ \begin{align} p_j = \frac{1}{n} \sum_{i=1}^{n} SP_i(x), \end{align} $$

where $SP_i(x)$ (ranging from 0 to 1) represents the observed subjective probabilities assigned to the event by agent i in the social network of agent j.

Beyond average probability levels, the distribution of probability judgments in a network conveys important information. Figure 1 shows the distribution of probability judgments across 2 similar networks of agents. In this setting, the anchoring probabilities of agents A and B would be the same. Nevertheless, there is a higher level of social consensus in the network of agent A than in the network of agent B. The average probability level is a more precise signal of the probability assigned to the uncertain event in the network of agent A than in the network of agent B. That is, the socially transmitted information carries a lower degree of ambiguity in the network of agent A than in the network of agent B. To capture this insight, I assume that individuals pay attention to both the average and the standard deviation of probability judgments within a network. Specifically, I assume that the dispersion in probability judgments allows an agent to capture the precision or ambiguity in its social information. Hence, in my model, the amount of ambiguity in the probabilistic information of agent j—namely $\gamma _j$ —is implicitly defined by the standard deviation of probability judgments in its social network:

(7) $$ \begin{align} \gamma_j = \sqrt{\frac{\sum_{i=1}^{n} (SP_i(x) - p_j)^2 }{n-1}}. \end{align} $$

Hence, the length of the mental simulation process performed by agent j ( $\theta _j$ ) is defined as the amount of ambiguity in the probabilistic information of agent j ( $\gamma _j$ ) multiplied by a constant of proportionality ( $\alpha $ ):

(8) $$ \begin{align} \theta_j = \gamma_j \cdot \alpha. \end{align} $$

Figure 1 Example of networks that convey similar anchoring probabilities with different degrees of ambiguity.

The inclusion of this constant of proportionality ( $\alpha $ ) is theoretically motivated. Specifically, it has been shown that individuals differ in their ability to discriminate between different levels of likelihood in the absence of precise probabilistic information (Abdellaoui et al., Reference Abdellaoui, Baillon, Placido and Wakker2011; Dimmock et al., Reference Dimmock, Kouwenberg and Wakker2016; Li et al., Reference Li, Müller, Wakker and Wang2018). That is, while some people are readily able to discern between different levels of likelihood under ambiguity, others tend to see ambiguous uncertain events as having a 50% chance. This cognitive process—termed ambiguity-generated likelihood insensitivity or a-insensitivity—is commonly portrayed as a measure of people’s understanding of ambiguous situations. To account for a-insensitivity, I describe the total length of the mental simulation process ( $\theta $ ) as resulting from the interaction of the amount of ambiguity in a specific scenario ( $\gamma $ ) and a measure of the agents’ a-insensitivity ( $\alpha $ ). Given that a-insensitivity reflects the understanding of the ambiguous situation, a higher level of a-insensitivity (i.e., a poorer understanding of the ambiguous context) should increase the length of the process of mental simulation of probabilities above and below the anchor (especially in contexts with high ambiguity). Given an anchor, and under non-extreme values of $\beta $ , the increased length of the mental simulation process will lead to final probability estimates that are closer to 50% (compared with the anchor). Hence, by including this constant, this model accommodates a measure of a-insensitivity.

Summing up, agent j’s subjective probabilities assigned to an uncertain event x can be defined as

(9) $$ \begin{align} SP_j(x) = p_j + (\gamma_j \alpha)(1-p_j-p_j^\beta), \end{align} $$

where $p_j$ and $\gamma _j$ represent the average and standard deviation in subjective probabilities assigned to event x by those in the network of agent j, $\alpha $ represents the agents’ a-insensitivity, and $\beta $ represents the agents’ attitude toward the uncertain event (i.e., the degree of optimism/pessimism). For consistency with the empirical analyses and simulations, $\alpha $ and $\beta $ are population-level parameters. That is, they are assumed to be fixed between individuals.
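A minimal sketch of Equations (6)–(9), assuming an agent that observes a list of its neighbors’ subjective probabilities; the function name, example values, and use of Python’s statistics module are illustrative and are not taken from the paper’s implementation.

```python
from statistics import mean, stdev

def social_subjective_probability(neighbor_sps: list[float],
                                  alpha: float, beta: float) -> float:
    """Social anchoring-and-adjustment rule, Equation (9).

    neighbor_sps : subjective probabilities observed in agent j's network
    alpha        : a-insensitivity parameter (population level)
    beta         : optimism/pessimism parameter (population level)
    """
    p_j = mean(neighbor_sps)        # anchor: average of observed probabilities, Eq. (6)
    gamma_j = stdev(neighbor_sps)   # perceived ambiguity: sample SD (n - 1), Eq. (7)
    theta_j = gamma_j * alpha       # length of the mental simulation process, Eq. (8)
    return p_j + theta_j * (1.0 - p_j - p_j ** beta)

# Two illustrative networks in the spirit of Figure 1: same anchor (0.4), different ambiguity
print(social_subjective_probability([0.38, 0.40, 0.40, 0.42], alpha=0.7, beta=0.5))
print(social_subjective_probability([0.20, 0.30, 0.50, 0.60], alpha=0.7, beta=0.5))
```

The second call yields a larger adjustment away from the shared anchor because the dispersion of the observed probabilities, and hence the inferred ambiguity, is greater.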

In the following sections, I first estimate this model and test its main building assumptions using data from an online study (Section 3). Then, I incorporate the empirically calibrated version of the model into an agent-based framework to analyze the evolution of group risk perceptions under varying information frictions (Section 4).

3. Experimental evidence

In this section, I present my experimental design (Section 3.1), an overview of the statistical procedures used to analyze the experimental data (Section 3.2), and the results of these analyses (Section 3.3).

3.1. Experimental setting

I used data from an online experiment to empirically calibrate this model and test its main building assumptions. Specifically, I used hypothetical choice scenarios to analyze how individuals form risk perceptions (i.e., subjective probability assessments) about an investment with an unknown success probability when presented with risk estimates (i.e., probabilities) from 4 hypothetical co-workers.

A total of 695 participants were recruited on Prolific for an online study on risk perceptions (see SM Note 2 of the Supplementary Material for an overview of the main demographic characteristics of the sample). The study had a median completion time of 9 min, and the participants received a fixed payment of £1. I preregistered my sample size, exclusion criteria, and main analyses. Data and code are openly available in an OSF repository.Footnote 1

The study involved 20 hypothetical scenarios. In each scenario, the participants had to choose between 2 investments that only differed in their probability of success. Specifically, the participants had to choose between an investment with a known probability of success (Investment 1) and an investment with an unknown probability of success (Investment 2). Although Investment 2 had an unknown success probability, the participants had access to 4 estimates of this probability obtained by hypothetical co-workers. Keeping these 4 estimates constant for each scenario, the participants had to choose between Investment 1 and Investment 2 for different probabilities of Investment 1 being successful. In each scenario, the participants made 11 choices, corresponding to probability levels of Investment 1 being successful that ranged from 0 to 1 (in 0.1 increments). Across scenarios, the 4 probability estimates reflecting the likelihood of Investment 2 being successful (i.e., the estimates given by the hypothetical co-workers) varied in their average probability estimate (with estimates averaging at approximately 0.2, 0.4, 0.6, and 0.8 probability of success) and their dispersion (with standard deviations in probability estimates of approximately 0.05, 0.1, 0.15, 0.2, and 0.25). Figure 2 presents an example of the hypothetical scenarios and the subjective probability elicitation task used in this study. Although this elicitation method has some limitations compared to simpler alternatives (as detailed in the Discussion section), it captures the role of key cognitive and affective mechanisms that influence risk perceptions in daily life. Rather than simply choosing a number, participants in this task are guided to consider various outcomes and compare different alternatives, much like they would in real-life situations. In doing so, this task incorporates key emotional and cognitive factors that are difficult to incorporate in a purely numerical estimation task.

Figure 2 Example of hypothetical scenario and subjective probability elicitation task used in the experiment. Additional study materials (consent form and detailed instructions) are presented in SM Note 1 of the Supplementary Material. Across scenarios, the participants were presented with different sets of co-workers’ probability estimates.
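As a small sketch, the 20 scenarios implied by this design can be enumerated as the crossing of the 4 mean levels and 5 dispersion levels quoted above, each paired with the 11 Investment 1 probabilities; the exact co-worker estimates used in the study are in the OSF materials, so the values below are only the approximate levels described in the text.

```python
from itertools import product

# 4 mean levels x 5 dispersion levels = 20 hypothetical scenarios
MEAN_LEVELS = [0.2, 0.4, 0.6, 0.8]            # approximate average co-worker estimate
SD_LEVELS = [0.05, 0.10, 0.15, 0.20, 0.25]    # approximate SD of co-worker estimates
CHOICE_PROBS = [i / 10 for i in range(11)]    # Investment 1 success probabilities: 0.0, ..., 1.0

scenarios = [{"mean": m, "sd": s, "investment1_probs": CHOICE_PROBS}
             for m, s in product(MEAN_LEVELS, SD_LEVELS)]
print(len(scenarios))  # 20 scenarios, 11 choices each
```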

To ensure that the participants had carefully read the instructions given at the beginning of the study (“Study Instructions” on SM Note 1 of the Supplementary Material), they were presented with a multiple choice attention check (see “Attention Check” on SM Note 1 of the Supplementary Material). A total of 93 participants failed this attention check and were not allowed to continue with the study. Of the remaining 602 participants, a total of 245 participants chose—at some point in the experiment—an investment with a known probability of success of 0 or failed to select an investment with a known probability of success of 1. I excluded these responses from my analyses, resulting in a final sample of 357 individuals. In SM Note 5 of the Supplementary Material, I show that the model estimates remain qualitatively similar when applying more relaxed inclusion standards. In SM Note 6 of the Supplementary Material, I demonstrate that this experimental design provides good parameter recoverability.

3.2. Analyses and statistical procedures

For each scenario and participant, I obtained a measure of the subjective probability of success assigned to Investment 2 (my main dependent variable) by taking the middle point between (1) the greatest probability of Investment 1 being successful for which the participant chose Investment 2, and (2) the smallest probability of Investment 1 being successful for which the participant chose Investment 1. A unique switching point between Investments 1 and 2 was not imposed on the participants, nor were they prompted to reconsider their choice if more than 1 switching point was provided. In cases with multiple switching points, subjective probabilities were derived using the same rule: the middle point between the greatest Investment 1 probability for which the participant chose Investment 2 and the smallest Investment 1 probability for which the participant chose Investment 1. As stated in the previous paragraph, note that I focus on participants who started the elicitation task by selecting Investment 2 and finished it by selecting Investment 1.
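A hedged sketch of this switching-point rule; the data structure and function name are illustrative rather than taken from the study’s code.

```python
def implied_subjective_probability(choices: dict):
    """Derive the subjective probability of Investment 2 from one scenario's choice list.

    `choices` maps each known success probability of Investment 1 (0.0 to 1.0 in 0.1 steps)
    to the option chosen ("inv1" or "inv2").
    """
    # Probabilities of Investment 1 for which the participant chose each option
    chose_inv2 = [p for p, c in choices.items() if c == "inv2"]
    chose_inv1 = [p for p, c in choices.items() if c == "inv1"]
    if not chose_inv2 or not chose_inv1:
        return None  # no switch observed, so no midpoint can be defined
    # Midpoint between the greatest Investment 1 probability where Investment 2 was chosen
    # and the smallest Investment 1 probability where Investment 1 was chosen
    return (max(chose_inv2) + min(chose_inv1)) / 2

# Example: a participant who switches between 0.5 and 0.6 implies SP(Investment 2) = 0.55
choices = {round(p / 10, 1): ("inv2" if p <= 5 else "inv1") for p in range(11)}
print(implied_subjective_probability(choices))  # 0.55
```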

Using this variable, I started by testing the 2 main building assumptions of my model. First, I tested whether the participants’ risk perceptions could be approximated by the average of their co-workers’ subjective probabilities in a given scenario (anchoring assumption). To test this assumption, I simply regressed my main dependent variable (i.e., the participant and scenario-specific measure of subjective probability assigned to Investment 2) on the average probability estimate reported by the 4 co-workers in a given scenario. Second, I tested whether the study participants deviated more extensively from the average of their co-workers’ estimates under increased ambiguity (i.e., when the SD of their co-workers’ estimates was high; the adjustment assumption). To test this assumption, I first obtained a participant and scenario-specific measure of adjustment size by taking the absolute value of the difference between a participant’s subjective success probability of Investment 2 and the average probability reported by the 4 co-workers in a given scenario. Then, I regressed this adjustment size on the amount of ambiguity in a scenario (i.e., the SD of the probability estimates reported by the 4 co-workers). Both of these models were estimated using linear mixed models and included participant-specific fixed effects.
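One way such regressions could be specified with statsmodels, assuming a long-format data frame with one row per participant and scenario; the column and file names are hypothetical, and the random-intercept specification below is only an approximation of the participant-specific effects described above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant x scenario
# sp            - subjective probability assigned to Investment 2
# coworker_mean - average of the 4 co-workers' estimates in the scenario
# coworker_sd   - standard deviation of the 4 co-workers' estimates
df = pd.read_csv("experiment_long.csv")  # illustrative file name

# Anchoring assumption: subjective probabilities track the co-workers' average
anchoring = smf.mixedlm("sp ~ coworker_mean", df, groups=df["participant"]).fit()

# Adjustment assumption: deviations from the anchor grow with ambiguity (co-worker SD)
df["adjustment_size"] = (df["sp"] - df["coworker_mean"]).abs()
adjustment = smf.mixedlm("adjustment_size ~ coworker_sd", df, groups=df["participant"]).fit()

print(anchoring.summary())
print(adjustment.summary())
```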

Moving to my main analyses, I employed my participant and scenario-specific measure of subjective probability to estimate the following model:

(Full Model) $$ \begin{align} SP_{is} = p_s + (\gamma_s \alpha)(1-p_s-p_s^\beta) + \epsilon_{is}, \end{align} $$

where $SP_{is}$ is the subjective probability assigned to Investment 2 being successful by participant i in scenario s, $p_s$ and $\gamma _s$ are the average and standard deviation of the co-workers’ subjective probabilities in scenario s, and $\epsilon _{is}$ is a zero-mean normally distributed error term with $\sigma ^2$ variance.

This model has 3 free parameters ( $\alpha $ , $\beta $ , and $\sigma ^2$ ). I estimated these parameters by maximum likelihood and followed a cluster bootstrap approach to estimate their standard errors (to account for the nested nature of my data, see estimation details in SM Note 4 of the Supplementary Material). Moreover, I compared the fit of this full model to the empirical fit of the following 2 models:
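A sketch of how the 3 free parameters could be estimated by maximum likelihood with scipy, assuming normally distributed errors as in the Full Model; the starting values and bounds are illustrative, and the cluster bootstrap used for the standard errors (SM Note 4) is omitted here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(params, sp, p_s, gamma_s):
    """Negative log-likelihood of the Full Model with normal errors."""
    alpha, beta, sigma = params
    predicted = p_s + (gamma_s * alpha) * (1.0 - p_s - p_s ** beta)
    return -np.sum(norm.logpdf(sp, loc=predicted, scale=sigma))

def fit_full_model(sp, p_s, gamma_s):
    """sp, p_s, gamma_s: arrays with one entry per participant x scenario observation."""
    return minimize(neg_log_likelihood,
                    x0=np.array([0.5, 1.0, 0.1]),           # illustrative starting values
                    args=(sp, p_s, gamma_s),
                    bounds=[(0, None), (1e-6, None), (1e-6, None)])
    # result.x holds the alpha, beta, and sigma estimates
```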

(Null Model 1) $$ \begin{align} SP_{is} = p_s + \epsilon_{is}, \end{align} $$
(Null Model 2) $$ \begin{align} SP_{is} = c + \epsilon_{is}, \end{align} $$

where $SP_{is}$ , $p_s$ , and $\epsilon _{is}$ are defined as in my full model, and c represents a constant. Again, I estimated these 2 null models by maximum likelihood, and obtained their parameters’ standard errors by a cluster bootstrap procedure. To compare Null Model 1 against my full model, I performed a likelihood ratio test. As Null Model 2 is not nested in my Full Model, I compared the empirical fit of these 2 models by simply reporting their estimated likelihoods. For completeness, I also consider additional comparison metrics. Specifically, I use both the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) to compare model fit.
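The model-comparison statistics reduce to a few lines; the helper below is a generic sketch. With the Full Model’s 3 free parameters against Null Model 1’s single error variance, the likelihood ratio test has 2 degrees of freedom, matching the df reported in Section 3.3.

```python
import numpy as np
from scipy.stats import chi2

def compare_models(loglik_full, k_full, loglik_null, k_null, n_obs):
    """Likelihood ratio test (for nested models) plus AIC/BIC for both models."""
    lr_stat = 2.0 * (loglik_full - loglik_null)
    p_value = chi2.sf(lr_stat, df=k_full - k_null)
    aic = {"full": 2 * k_full - 2 * loglik_full, "null": 2 * k_null - 2 * loglik_null}
    bic = {"full": k_full * np.log(n_obs) - 2 * loglik_full,
           "null": k_null * np.log(n_obs) - 2 * loglik_null}
    return lr_stat, p_value, aic, bic
```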

Finally, I compared the empirical fit of my model against 5 alternative specifications. These specifications were versions of the full model that replaced the anchoring and amount of ambiguity measures (i.e., the average and the standard deviation of co-workers’ estimates) by other centrality and dispersion variables. Specifically, I considered all combinations of 2 measures of centrality (average and median of co-workers’ estimates) to be used as anchors, and 3 measures of dispersion (standard deviation, variance, and relative standard deviationFootnote 2 of co-workers’ estimates) to be used as proxies for the amount of ambiguity in the agents’ social information.

3.3. Results

My experimental results offer empirical evidence in support of the 2 main building assumptions of my model. Specifically, the participants’ subjective probabilities could be well approximated by the average probability reported by the co-workers in each scenario ( $\beta _p$ = 0.813, t = 156.86, $p <$ 0.001), evidence that is consistent with the anchoring assumption. Moreover, the participants deviated more extensively from this anchor under conditions of increased ambiguity ( $\beta _{SD}$ = 0.135, t = 10.96, $p <$ 0.001), evidence that is consistent with the adjustment assumption (see SM Note 3 of the Supplementary Material for detailed results).

Moving to my full model estimations, my main results are presented in Table 1. My analyses yield an estimate of $\alpha $ of 0.674 (SE = 0.051) and $\beta $ of 0.337 (SE = 0.045). These estimates have 3 direct implications. First, the adjustment away from the anchoring probability is not negligible. In fact, my estimate of $\alpha $ is significantly greater than zero (t = 13.02, $p <$ 0.001), suggesting that the participants in my study did not simply assign subjective probabilities based on the average probability estimate reported by their co-workers. On average, my model estimates predict an adjustment away from the anchor with an absolute size of 0.077 for high-ambiguity contexts (i.e., when co-workers’ estimates have a standard deviation of 0.25) and 0.015 for low-ambiguity contexts (i.e., when co-workers’ estimates have a standard deviation of 0.05). These deviations away from the anchoring probability are greater in size for high probabilities. For example, when the anchoring probability is 0.8, my model predicts a final subjective probability of 0.68 when ambiguity is high (SD co-workers’ estimate = 0.25), and 0.78 when ambiguity is low (SD co-workers’ estimate = 0.05). Figure 3 presents the resulting subjective probabilities predicted by my model for different anchoring probabilities and levels of ambiguity (i.e., co-workers’ estimates with different averages and standard deviations).
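For concreteness, the high-ambiguity prediction for an anchor of 0.8 is simply Equation (9) evaluated at the estimated parameters (the arithmetic below is a worked check, rounded to two decimals):

$$ SP = 0.8 + (0.674 \cdot 0.25)\left(1 - 0.8 - 0.8^{0.337}\right) \approx 0.8 - 0.169 \cdot 0.728 \approx 0.68, $$

and with a standard deviation of 0.05 the same computation gives $0.8 - 0.034 \cdot 0.728 \approx 0.78$.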

Table 1 Estimated parameters (with standard errors in parentheses) for the full and the 2 null models. All parameters are estimated by maximum likelihood. Standard errors are obtained using a cluster bootstrap approach

Figure 3 Estimated subjective probabilities for different anchoring probabilities (i.e., averages in co-workers’ estimates) and levels of ambiguity (i.e., standard deviations in co-worker’s estimates).

Second, in line with past empirical evidence (Abdellaoui et al., Reference Abdellaoui, Baillon, Placido and Wakker2011; Dimmock et al., Reference Dimmock, Kouwenberg and Wakker2016; Li et al., Reference Li, Müller, Wakker and Wang2018), my results demonstrate the existence of significant ambiguity-generated insensitivity (a-insensitivity)—the tendency to perceive ambiguous events as having 50% probability. This inability to correctly differentiate across probability levels is readily visible in Figure 3, where estimated probabilities are biased toward 50% (resulting in the overweighting of small probabilities and the underweighting of large ones). Consistent with past evidence (Henkel, Reference Henkel2022), a-insensitivity is more prevalent in more ambiguous scenarios.

Third, the study participants display significant levels of pessimism or ambiguity aversion. An estimate of $\beta $ = 1 would imply ambiguity neutrality (i.e., that the participants put the same weight on probabilities above and below the anchor). However, my $\beta $ estimate ( $\beta $ = 0.337) is significantly smaller than 1 (t = $-$14.49, $p <$ 0.001), suggesting that the participants gave more weight to probabilities below the anchor. This finding is consistent with past evidence showing the prevalence of ambiguity aversion in a broad range of settings (see Trautmann and Van De Kuilen, Reference Trautmann and Van De Kuilen2015 for a review). However, it is important to note that attitudes toward ambiguous events have been shown to be source dependent (Abdellaoui et al., Reference Abdellaoui, Baillon, Placido and Wakker2011; Baillon et al., Reference Baillon, Huang, Selim and Wakker2018). That is, they vary depending on the uncertainty source. To be consistent with these findings, in my simulations (Section 4), rather than describing an event’s true probabilities as high or low, I focus on their relationship to the agents’ attitudes (i.e., whether the probabilities are consistent with the agents’ attitudes toward the event or not; see Section 4.2). For instance, in the case of an event with a high degree of likelihood, if the agents’ attitudes are such that the agent puts a higher weight on probabilities above the anchor when following the process of mental simulation ( $\beta > 1$ ), then these attitudes are described as consistent with the event’s true probabilities.

Beyond specific parameters, my estimations allowed me to test the empirical validity of my full model against a null model. Specifically, I tested the empirical fit of my model against the empirical fit of a model with no adjustment away from the anchor (Null Model 1). Using a likelihood ratio test, my results demonstrate that my full model offers a significantly superior fit ( $\lambda _{RT}$ = 1070.32, df = 2, $p <$ 0.001). Similarly, a direct comparison of likelihoods suggests that my full model outperforms a model with a single constant (Full Model Log L = 4,777.84, Null Model 2 Log L = 683.20). This result holds when using different comparison metrics (see Table 1). For instance, my full model yields a root mean square error that is 45% smaller than the root mean square error of Null Model 2 ( $RMSE_{\text {Full Model}}$ = 0.124, $RMSE_{\text {Null Model 2}}$ = 0.220). While both the likelihood ratio test and the estimated parameters demonstrate that the adjustment component of my model is non-negligible, it is important to remark that the anchoring component of my model accounts for the largest share of the explained error ( $RMSE_{\text {Null Model 1}}$ = 0.134).

Finally, Table 2 presents the empirical fit (log likelihood) of 6 model specifications differing in their operationalization of the anchor value and the amount of ambiguity. Again, as these models are not nested within the full model, I compared their empirical fit by simply reporting their likelihoods. Overall, my main specification—the one that uses the average and standard deviation of the co-workers’ estimates—yielded a superior empirical fit. However, differences across operationalizations are rather small. These results demonstrate that, compared to alternative specifications, my preferred model operationalization does not display a poorer empirical fit.

Table 2 Empirical fit for 6 different specifications of the main model. The specifications include 2 measures of centrality (to be used as anchors) and 3 measures of dispersion (to be used as proxies for the amount of ambiguity)

Altogether, in this section, I (1) provide evidence in support of my main building assumptions, (2) show that my model provides a significantly better empirical fit than simpler alternative models, and (3) demonstrate that alternative measures of centrality and dispersion do not improve the empirical fit of my model. These results demonstrate the empirical validity of my main model and offer important insights on how we use social information to construct our subjective perceptions of risk.

4. Calibrated simulations

In this section, I further develop my model to demonstrate its theoretical relevance. Specifically, I first expand the model to an agent-based setting (Section 4.1), and use it to simulate the evolution of risk perceptions within an organization (Section 4.2). Using empirically calibrated agent-based simulations, I conclude by studying the impact of information frictions (limits on the availability and quality of social information) on the emergent risk perceptions (Section 4.3).

4.1. Agent-based model

Agent-based models are used across the social sciences to study the emergence of collective phenomena from the interaction of individual agents. For instance, past work has used agent-based simulations to model the emergence of risk perceptions in contexts such as floods (Haer et al., Reference Haer, Botzen, de Moel and Aerts2017), or water reuse (Kandiah et al., Reference Kandiah, Binder and Berglund2017). In the field of organization science, past work has used agent-based simulations to study topics that include organizational design (Clement and Puranam, Reference Clement and Puranam2018), division of labor (Raveendran et al., Reference Raveendran, Puranam and Warglien2022), or firm performance under different levels of complexity and market turbulence (Siggelkow and Rivkin, Reference Siggelkow and Rivkin2005).

To adapt my model to an agent-based setting, I use a network of 16 agents distributed across a 4 $\times $ 4 grid to study the evolution of risk perceptions (i.e., subjective probabilities) assigned to an uncertain event with a fixed true probability. While most agents will not have access to the true probability of the event, each agent will have access to the subjective probabilities assigned to the event by those individuals located in its 4 surrounding grid points (neighborhood, see Figure 4). The grid wraps around the edges—that is, in Figure 4, agent A44 will be able to collect information from agents A14 and A41.

Figure 4 Network of agents employed in the simulations. Each individual has access to the risk perceptions of their surrounding 4 agents. For instance, agent A33 will observe the subjective probabilities assigned to an event by agents A32, A23, A43, and A34.

In everyday situations, precise probabilistic information is often difficult or costly to obtain. To reflect the limited availability of precise probabilistic information in real life, throughout the simulations, I assume that agent A11 is the only agent with access to the true precise probability associated with a given event. Moreover, I assume that agent A11 is an informed “stubborn individual,” an agent that maintains fixed beliefs or opinions, irrespective of surrounding influences. Hence, agent A11 knows the precise true probability of the uncertain event and does not update its subjective probability throughout the simulation. The presence of such stubborn agents is a common feature in network models of information transmission (Acemoglu et al., Reference Acemoglu, Ozdaglar and ParandehGheibi2010; Ghaderi and Srikant, Reference Ghaderi and Srikant2014; Yildiz et al., Reference Yildiz, Ozdaglar, Acemoglu, Saberi and Scaglione2013). In the context of the present model, this stubborn agent symbolizes the existence of objective information within the network. Without a stubborn agent, the network would lack access to the true probability associated with the specific event. Consequently, the resultant risk perception would be uniquely determined by the agent’s attitudes toward the uncertain event.

The rest of the individuals (referred to as non-informed or non-stubborn individuals) update their subjective probabilities following the process of mental simulation outlined in my main model. That is, they (1) anchor their probability assessments on the average probability assessment in their neighborhood, and (2) follow a process of mental simulation of probabilities that results in an adjustment away from the anchor to account for the amount of ambiguity in their probabilistic information. Specifically, in each iteration, a random individual in the 4 $\times $ 4 grid is selected. If this agent is a “stubborn individual” (i.e., if it is A11), the iteration concludes, and a new agent is selected. If the randomly selected individual is not an informed individual, the agent recovers the subjective probabilities of the 4 agents in its neighborhood and derives a new subjective probability estimate. Given the subjective probabilities of those in its neighborhood, a non-informed agent j will derive the following subjective probability:

(10) $$ \begin{align} SP_{j} = p_j + 0.674 \cdot \gamma_j \cdot (1-p_j-p_j^{0.337}), \end{align} $$

where $p_j$ and $\gamma _j$ represent the average and the standard deviation of the observed subjective probabilities in the neighborhood of agent j. Note that, in order to obtain empirically calibrated results, I have replaced the parameters $\alpha $ and $\beta $ of my model (Equation 9) by their empirical estimates from Section 3.3.

After updating the agent’s subjective probability, the model iteration concludes, and a new agent is selected. I repeat this process until a steady state has been reached (e.g., after 50,000 model iterations). To ensure that my results are not driven by random variation in the selection of individuals within the network, all my results represent aggregations (i.e., averages) across 100 model repetitions. I perform my simulations independently for 101 true-probability levels (ranging from 0 to 1 in 0.01 increments). I initialized the non-informed agents’ subjective probabilities at 50% (so that under the absence of information, uninformed agents give a 50% probability to any event occurring). Again, data and code for the simulations are posted in an OSF repository.Footnote 3
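A compact sketch of this simulation loop, assuming a 4 $\times $ 4 torus with the stubborn agent placed at one corner and the calibrated update rule of Equation (10); the grid indexing, seeding, and helper names are illustrative, and the actual simulation code is in the OSF repository.

```python
import numpy as np

RNG = np.random.default_rng(0)
GRID = 4                      # 4 x 4 grid of agents
STUBBORN = (0, 0)             # agent A11: knows the true probability and never updates
ALPHA, BETA = 0.674, 0.337    # empirical estimates from Section 3.3

def neighbors(r, c):
    """Von Neumann neighborhood on a torus (the grid wraps around the edges)."""
    return [((r - 1) % GRID, c), ((r + 1) % GRID, c),
            (r, (c - 1) % GRID), (r, (c + 1) % GRID)]

def simulate(true_prob, n_iter=50_000):
    sp = np.full((GRID, GRID), 0.5)   # uninformed agents start at a 50% subjective probability
    sp[STUBBORN] = true_prob          # the stubborn agent holds the true probability
    for _ in range(n_iter):
        r, c = RNG.integers(GRID), RNG.integers(GRID)
        if (r, c) == STUBBORN:
            continue                  # the stubborn agent never updates
        obs = np.array([sp[n] for n in neighbors(r, c)])
        p_j, gamma_j = obs.mean(), obs.std(ddof=1)                    # anchor and ambiguity
        sp[r, c] = p_j + ALPHA * gamma_j * (1 - p_j - p_j ** BETA)    # Equation (10)
    mask = np.ones((GRID, GRID), bool)
    mask[STUBBORN] = False            # outcome measures exclude the stubborn agent
    return sp[mask].mean()

print(simulate(true_prob=0.8))
```

Averaging the output of such runs across 100 repetitions and 101 true-probability levels, and tracking the perceived ambiguity alongside the subjective probabilities, would correspond to the aggregation described above.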

Figure 5 Evolution of group subjective probabilities and amount of ambiguity across model iterations.

Figure 6 Evolution of group subjective probabilities and amount of ambiguity across the first 5,000 model iterations for high (i.e., 99%) and low (i.e., 1%) probability events.

4.2. Baseline simulations

To evaluate my simulations, I focused on 2 main outcomes—the average subjective probability and the average perceived amount of ambiguity (i.e., the average of the SDs in the agents’ social information). To estimate both measures, I excluded the stubborn individual (i.e., agent A11), focusing exclusively on non-informed agents. Figure 5 presents the group’s subjective probabilities and amount of ambiguity for different values of the true probability and model iterations. Figure 6 presents a more detailed overview of the dynamics of the risk perception formation process for low- and high-probability events (i.e., events with a true probability of 1% or 99%).

These results carry several implications. First, when true probabilities are aligned with the agents’ ambiguity attitudes, the subjective probabilities in the group converge to the true probabilities of the ambiguous event. In this setting, when the agents follow a process of mental simulation to account for the ambiguity in their social information, the agents put more weight on probabilities below their anchor (i.e., the parameter $\beta $ of Equation 9 is below 1 in my simulations). This is not problematic when true probabilities are small. In fact, when true probabilities are small, putting more weight on probabilities below the anchor makes the adjustment process more likely to go in the direction of the true probability. However, when true probabilities are large, putting more weight on probabilities below the anchor means that the agents fail to sufficiently adjust in the direction of their social information. This tendency to adjust insufficiently limits the extent to which the group’s subjective probabilities converge to the true probabilities.

Second, when true probabilities are inconsistent with the agents’ ambiguity attitudes, the resulting network of risk perceptions reflects a higher degree of unresolved ambiguity. That is, given that in my setting the agents give more weight to probabilities below (as opposed to above) the anchor, when true probabilities are large, the emergent probabilistic assessment in a group is not only more likely to deviate from the true probability, but also yields a higher degree of perceived ambiguity. This could be understood as a collective form of motivated ambiguity by which individuals—by adjusting insufficiently for information that is contrary to their attitudes—create a degree of ambiguity in the network that allows them to hold risk perceptions that are more in line with their own attitudes.

Third, the resulting pattern of risk perception is a non-monotonic function of an event’s true probability. For instance, the resulting group subjective probability is 0.8 when the true probability of the event is 0.8, and 0.75 when the true probability of the event is 0.99. Again, this can be explained by the agents’ tendency to under-react to social information that is at odds with their own ambiguity attitudes. As agents give more weight to probabilities below (as opposed to above) their anchor, they underreact to information that goes against their own attitudes (i.e., they underreact to high probability estimates). This underreaction leads to greater levels of ambiguity in the available social information. As the size of this under-adjustment is larger for more extreme probabilities, the amount of unresolved ambiguity is greater for extreme probability events. This increased ambiguity allows the agents to hold risk perceptions that are more in line with their own attitudes, thus, creating a non-monotonic pattern of risk perception.

It is important to acknowledge that this observed non-monotonic pattern of risk perceptions is at odds with previous empirical evidence (Siegrist and Árvai, Reference Siegrist and Árvai2020). However, it is also important to notice that this non-monotonicity emerges only under specific model conditions. For instance, Figure 5 demonstrates that risk perceptions form a non-monotonic relationship with true probabilities only after a large number of model iterations. In the initial phases of the simulation, risk perceptions are a monotonic function of an event’s true probabilities. Moreover, as presented in the following section, when agents do not accurately discern the subjective probabilities assigned to an uncertain event by those in their social network, the resulting pattern of subjective probabilities is a monotonic transformation of the event’s true probabilities. Therefore, while the existence of a non-monotonic range in risk perceptions is theoretically possible, it is not a robust property of this model.

Fourth, as depicted in Figure 6 (right panel), the amount of ambiguity evolves following a non-monotonic trajectory—rapidly increasing and collapsing in the early stages of the simulation. This trajectory in ambiguity perceptions closely resembles the patterns of information seeking found in previous empirical studies of collective attention (Crane and Sornette, Reference Crane and Sornette2008; Wu and Huberman, Reference Wu and Huberman2007). My simulations, therefore, not only provide a theoretical explanation for the emergence of collective risk perceptions but can also explain collective patterns of attention and information seeking. Note that, in the early stages of the simulation process, the amount of ambiguity evolves similarly for high and low probability events (i.e., the trajectory in the amount of ambiguity is similar irrespective of whether the event’s true probability aligns with the agents’ attitudes or not). It’s only after 100 model iterations that large disparities in the amount of ambiguity appear for events with different true probabilities (with greater ambiguity resolution for events whose true probabilities align with the agents’ attitudes). This pattern of ambiguity resolution translates into a marginally decreasing rate of information transmission. In other words, changes in the subjective probabilities of a group are concentrated in the initial phase of the simulation process. For instance, Figure 6 (left panel) shows that about 75% of the shift in subjective probabilities occurs within the first 2,000 model iterations.

4.3. Information frictions

Using agent-based simulations, I study how information-availability frictions (i.e., constraints on the amount of information gathered by an agent) and information-quality frictions (i.e., constraints on the precision of the information gathered by an agent) shape the collective emergence of risk perceptions.

4.3.1. Modeling information frictions

I carry out my agent-based simulations under 3 different conditions. These conditions represent (1) a frictionless setting, (2) a setting with information-quality frictions, and (3) a setting with information-availability frictions. In the frictionless condition, the agents can perfectly recover the subjective probabilities assigned to the uncertain event by all other agents in the group. That is, in this condition, each agent observes the exact risk perceptions of the remaining 15 agents in the group. In the quality-frictions condition, the agents recover the subjective probabilities assigned to the uncertain event by all other agents in the group, but they do so imprecisely. That is, in this condition, each agent observes the risk perceptions of the remaining 15 agents in the group plus an error term. Throughout the simulations, I assume that this error term is normally distributed with mean 0 and a standard deviation of 0.05. Finally, in the availability-frictions condition, the agents recover the exact subjective probabilities assigned to the uncertain event by only a fraction of the agents in the group. As in the baseline simulations, in this setting each agent can recover the subjective probabilities assigned to the ambiguous event by its 4 surrounding agents; this condition is therefore equivalent to the baseline setting.
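As a minimal sketch of how these 3 conditions translate into the social information an agent gathers, the hypothetical helper below returns the set of estimates an agent would observe in each condition. The function names, the clipping of noisy observations to [0, 1], and the representation of the network as a neighbor dictionary are assumptions made for illustration, not the paper’s exact code.

```python
import random

def observed_estimates(agent, estimates, neighbors, condition, noise_sd=0.05):
    """Subjective probabilities an agent gathers under each friction condition.

    estimates: dict mapping every agent to its current subjective probability
    neighbors: dict mapping every agent to its 4 surrounding agents
    condition: 'frictionless', 'quality', or 'availability'
    """
    others = [a for a in estimates if a != agent]
    if condition == 'frictionless':
        # Exact estimates of all 15 remaining agents.
        return [estimates[a] for a in others]
    if condition == 'quality':
        # All 15 estimates, each observed with N(0, noise_sd) error;
        # clipping to [0, 1] is an extra assumption made for this sketch.
        return [min(1.0, max(0.0, estimates[a] + random.gauss(0.0, noise_sd)))
                for a in others]
    if condition == 'availability':
        # Exact estimates, but only from the 4 surrounding agents (baseline setting).
        return [estimates[a] for a in neighbors[agent]]
    raise ValueError(f"unknown condition: {condition}")
```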

All the simulations are conducted on a network of 16 agents with agent A11 as a stubborn individual. Each simulation consists of 50,000 model iterations and the results represent aggregates (i.e., averages) across 100 model repetitions. Again, I perform these simulations independently for 101 true-probability levels (from 0 to 1 in 0.01 increments).
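The overall simulation protocol can be summarized in a few lines. The sketch below assumes a placeholder function simulate_once that runs one 50,000-iteration simulation of the model (with agent A11 stubborn) and returns the non-stubborn agents’ average subjective probability; the constant names are illustrative.

```python
import itertools
import statistics

TRUE_PROBABILITIES = [i / 100 for i in range(101)]      # 0 to 1 in 0.01 increments
CONDITIONS = ('frictionless', 'quality', 'availability')
N_REPETITIONS = 100
N_ITERATIONS = 50_000                                   # handled inside simulate_once

def run_sweep(simulate_once):
    """simulate_once(true_p, condition) -> average subjective probability of the
    non-stubborn agents after N_ITERATIONS; its internals stand in for the model."""
    results = {}
    for condition, true_p in itertools.product(CONDITIONS, TRUE_PROBABILITIES):
        runs = [simulate_once(true_p, condition) for _ in range(N_REPETITIONS)]
        results[(condition, true_p)] = statistics.mean(runs)  # average over repetitions
    return results
```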

Figure 7 Resulting group subjective probabilities and amount of ambiguity under different information-friction conditions.

Figure 8 Evolution of group subjective probabilities and amount of ambiguity under different information-friction conditions. The graph presents the first 5,000 model iterations for high (i.e., 99%) and low (i.e., 1%) probability events.

4.3.2. Results

Again, to derive my main results, I focused on the average subjective probability and amount of ambiguity perceived by non-stubborn individuals. Figure 7 presents the resulting patterns of risk perception and ambiguity for each condition after 50,000 model iterations. Figure 8 presents the dynamic evolution of risk perceptions across conditions for events with high (99%) and low (1%) true probability.

As shown in Figure 7, limiting the precision of social information not only results in subjective probabilities that deviate more extensively from the true probabilities, but also in a pattern of risk perceptions that displays a higher degree of unresolved ambiguity. As one would expect, in the quality-frictions condition, the amount of ambiguity perceived in the social information is never below 0.05 (the standard deviation of the error term that is incorporated into an agent’s social information). Compared to the frictionless condition, introducing quality frictions has a limited impact on the resulting subjective probabilities at the extremes (i.e., when the true probability is either 0 or 1). However, the precision loss can be large for non-extreme probability events. For instance, for events with a true probability of 0 or 1, the accuracy loss due to quality frictions is 0.060 and 0.019, respectively. By contrast, for a true probability of 0.6, introducing quality frictions increases the difference between the true probability and the group’s resulting subjective probability by 0.104. Overall, introducing quality frictions leads to more imprecise and ambiguous subjective probabilities.

On the other hand, introducing availability frictions results in a more precise and less ambiguous information transmission process. This is especially true in situations where the agents’ attitudes are at odds with the true probabilities of the event (i.e., for high probabilities). To explain this result, one needs to consider the impact that the stubborn (or informed) agent has on its surrounding agents. Under the frictionless condition, the stubborn agent’s estimate of the true probability is diluted within a set of 15 subjective probabilities. For the agents located next to the stubborn individual, however, introducing availability frictions means that this estimate is diluted within a set of only 4 subjective probabilities. Hence, for the agents that directly surround the stubborn individual, introducing availability frictions increases the impact that the stubborn agent has on their subjective probability estimates. While these frictions limit the direct impact that the stubborn agent has on those located further away in the network, by shaping the risk perceptions of those in its neighborhood, the stubborn agent generates a cascade effect, indirectly shaping the perceptions of agents located further away in the network through the perceptions of the agents in its neighborhood. This information cascade translates into a slower, but more precise, transmission of risk perceptions.
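To quantify this dilution with a simple worked example, recall that the anchor is the unweighted average of the observed estimates. For an agent adjacent to the stubborn individual, the weight of the stubborn agent’s estimate in that anchor is therefore

$$ w_{\text{frictionless}} = \frac{1}{15} \approx 0.07 \qquad \text{versus} \qquad w_{\text{availability}} = \frac{1}{4} = 0.25, $$

so availability frictions roughly quadruple the stubborn agent’s direct pull on its neighbors’ anchors.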

As information-availability frictions slow down the information transmission process, the group’s risk perceptions under these frictions are more imprecise during the first stages of the simulation (i.e., the first 500 model iterations). As depicted in Figure 8, when the true probabilities are in line with the agents’ attitudes (i.e., for low probabilities), both the frictionless condition and the quality-frictions condition outperform the availability-frictions condition during the first 500 model iterations. This result is ultimately reversed, but it shows that, beyond shaping the precision of the emergent risk perceptions, availability frictions also shape the speed of the emergent process.

Overall, limits on the quality of the social information lead to more imprecise and ambiguous risk perceptions. In contrast, limits on the amount of social information slow down the information transmission process but result in a pattern of risk perceptions with greater precision and ambiguity resolution.

5. Discussion

By modifying an individual model of probabilistic reasoning (Einhorn and Hogarth, Reference Einhorn and Hogarth1985, Reference Einhorn and Hogarth1986), in this paper, I present a psychologically founded model of the transmission and emergence of risk perceptions. As typically assumed in models of information transmission under risk (Banerjee, Reference Banerjee1992; Banerjee and Fudenberg, Reference Banerjee and Fudenberg2004; Eyster and Rabin, Reference Eyster and Rabin2010; Kasperson et al., Reference Kasperson, Renn, Slovic, Brown, Emel, Goble, Kasperson and Ratick1988; Watts, Reference Watts2002, Reference Watts2004), agents in my theoretical framework can recover the subjective probabilities assigned to an uncertain event by those in their social network. In my model, agents use this information to (1) anchor their risk perceptions, and (2) infer the amount of ambiguity in their social information. More specifically, my theoretical framework assumes that the members of a group anchor their subjective risk perceptions on the average subjective probability estimate in their social network. Then, they follow a process of mental simulation of probabilities above and below the anchor to account for the amount of ambiguity in their social information. This process of mental simulation leads to an adjustment away from the anchor that varies in size and direction depending on an individual’s attitudes toward the uncertain event, the amount of ambiguity, and the anchoring probability itself. Using an online study, I present evidence in support of my main building assumptions and the empirical validity of my model. Applying my experimental results to an agent-based setting, I simulate the evolution of risk perceptions within organizations under different information frictions. While limiting the quality of the socially shared information leads to more imprecise and ambiguous risk perceptions, limiting the amount of information socially shared leads to more precise and less ambiguous risk perceptions.

This paper makes several important contributions. First, the present work provides researchers with a framework to theoretically study the interplay between organizational, psychological, and social phenomena in shaping collective risk perceptions. My model can be easily expanded to account for psychological factors such as salience, cognitive load, limited attention, motivated cognition, or costly mental effort. For example, past laboratory work has studied the impact of cognitive load on the anchoring-and-adjustment heuristic, showing that cognitive load leads to a smaller adjustment away from the anchor (Deck and Jahedi, Reference Deck and Jahedi2015). In line with this evidence, one could argue that a high cognitive load impairs people’s ability to mentally simulate and test the likelihood of different probability estimates away from the anchor. As the length of the process of mental simulation of probabilities in my model is determined by the amount of ambiguity multiplied by a constant of proportionality ( $\alpha $ ), one could simulate the evolution of risk perceptions in groups under different levels of cognitive load by modifying this parameter (where a lower $\alpha $ would represent a higher level of cognitive load).
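For illustration, a hypothetical extension along these lines could simply sweep over $\alpha $ values, treating lower values as higher cognitive load. The load-to- $\alpha $ mapping, the rounding to whole steps, and the specific numbers below are assumptions made for this sketch rather than estimated relationships.

```python
def n_mental_simulation_steps(ambiguity, alpha):
    """Length of the mental simulation process: proportional to the amount of
    ambiguity times alpha. Rounding to a whole number of steps (with a minimum
    of 1) is an assumption made for this sketch."""
    return max(1, round(alpha * ambiguity))

# Hypothetical mapping from cognitive load to alpha (illustrative values only):
# higher load -> lower alpha -> shorter mental simulation -> smaller adjustment.
for load, alpha in {'low': 40, 'medium': 20, 'high': 8}.items():
    print(load, n_mental_simulation_steps(ambiguity=0.25, alpha=alpha))
```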

Similarly, my model can be easily modified to study social factors such as status, social influence, or the dynamic endogenous evolution of social networks. For example, all my simulations assume that every agent has the same number of connections and that these connections are bidirectional. In real life, however, some agents occupy public roles: they can influence the perceptions of a large crowd but cannot gather information from everyone. One could explore the role of these highly influential individuals, and how their characteristics impact the resulting pattern of risk perceptions at the group level, by modifying the network used in the simulations. Specifically, by repeating my simulations in a network where agents have access to only their 3 surrounding neighbors and 1 fixed influential individual, one can study the role of status or social influence in the emergent patterns of risk perceptions, as sketched below.
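A hedged sketch of this modification, assuming a 4x4 wrap-around grid like the baseline network, arbitrarily dropping each agent’s ‘up’ link in favor of a fixed influential agent, and labeling agents by grid coordinates. All of these layout choices are illustrative assumptions.

```python
def influencer_network(influencer=(1, 1), size=4):
    """Each agent observes 3 wrap-around grid neighbors (right, down, left)
    plus one fixed influential agent. Returns {agent: list of observed agents}."""
    network = {}
    for r in range(size):
        for c in range(size):
            agent = (r, c)
            observed = [(r, (c + 1) % size),       # right
                        ((r + 1) % size, c),       # down
                        (r, (c - 1) % size)]       # left; the 'up' link is dropped
            if agent != influencer and influencer not in observed:
                observed.append(influencer)
            network[agent] = observed
    return network

# Example: agent (3, 3) observes 3 grid neighbors plus the influential agent (1, 1).
print(influencer_network()[(3, 3)])
```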

Moreover, my framework can be used to study the combined impact of psychological and social processes in shaping collective perceptions of risk. For instance, by applying the 2 previously described sets of changes, one can examine the role of highly influential agents in shaping risk perceptions across groups experiencing different levels of cognitive load.

In more organizational contexts, my agent-based model can be used to investigate how different design features such as firm structure (e.g., horizontal vs. vertical), incentives and information cost, or the self-selection of individuals across tasks affect the collective emergence of risk perceptions within organizations. By providing researchers with a framework to analyze the interaction of these organizational, social, and psychological processes, my model paves the way to new research lines on how we construct collective risk perceptions.

Second, my simulations and experimental results offer important insights into how we use social information to construct subjective risk perceptions. In fact, my experimental results shed light on the factors driving everyday choices in settings such as investments, insurance, or health-related decisions. Beyond individual decisions, the results of my agent-based simulations have important implications at the group and organizational level. In fact, my simulation results suggest that by focusing on information quality (rather than availability), groups and organizations can more effectively boost the accuracy of emergent risk perceptions.

Third, this paper contributes to several different literatures. For instance, my model contributes to the literature on the SARF (Kasperson et al., Reference Kasperson, Renn, Slovic, Brown, Emel, Goble, Kasperson and Ratick1988). In fact, my model can be understood as a mathematical formalization of some of the ideas presented in this literature (see Kasperson et al., Reference Kasperson, Webler, Ram and Sutton2022 for a recent review) including the importance of considering the interaction of social and psychological phenomena in the formation of risk perceptions. Similarly, my results add to the literature on how social processes such as attitudinal contagion (Fan and Pedrycz, Reference Fan and Pedrycz2016; Mason et al., Reference Mason, Conrey and Smith2007; Scherer and Cho, Reference Scherer and Cho2003; Watts, Reference Watts2002, Reference Watts2004), media influence (Moussaıd, Reference Moussaıd2013) or information cascades (Banerjee, Reference Banerjee1992; Banerjee and Fudenberg, Reference Banerjee and Fudenberg2004; Bikhchandani et al., Reference Bikhchandani, Hirshleifer and Welch1992; Eyster and Rabin, Reference Eyster and Rabin2010) shape collective perceptions of risk. Specifically, this paper adds to this line of research by explicitly modeling a psychologically rich individual process of risk perception formation. Finally, this paper adds to a growing body of evidence that uses agent-based simulations to study the emergence of behaviors, preferences, and perceptions in groups (Brown et al., Reference Brown, Lewandowsky and Huang2022; Clement and Puranam, Reference Clement and Puranam2018; Gray et al., Reference Gray, Rand, Ert, Lewis, Hershman and Norton2014; Gross and De Dreu, Reference Gross and De Dreu2019; Haer et al., Reference Haer, Botzen, de Moel and Aerts2017; Kandiah et al., Reference Kandiah, Binder and Berglund2017; Moussaıd, Reference Moussaıd2013; Raveendran et al., Reference Raveendran, Puranam and Warglien2022; Siggelkow and Rivkin, Reference Siggelkow and Rivkin2005). The present work contributes to this literature by illustrating how existing individual models of cognition and behavior can be easily adapted to an agent-based setting.

Despite these important contributions, this paper has a number of limitations that should be addressed in future research. First, my theoretical framework prioritizes the modeling of certain mechanisms, ignoring several factors that can impact the emergence of collective risk perceptions. In fact, my framework ignores a number of cognitive mechanisms and biases that have been shown to play a key role in the formation of individual risk perceptions (Kahneman et al., Reference Kahneman, Slovic, Slovic and Tversky1982). Future research should expand my framework to consider how such biases impact the formation of collective risk assessments. It is also important to keep in mind that the simulations are based on 3 arbitrary discrete states or conditions (frictionless, availability frictions, and quality frictions). In SM Note 7 of the Supplementary Material, I expand on these simulations by considering the combination of varying degrees of availability and quality frictions and their impact on the formation of collective risk perceptions.

Second, while my experimental setting captures a static one-shot decision scenario, my agent-based simulations are dynamic. This implies that my simulations rest upon additional implicit assumptions that are not directly tested in this paper. For instance, my agent-based model assumes that individuals have no direct access to feedback and that they do not take their previous risk perceptions into consideration when generating new ones. Future research should test the impact of these assumptions on the collective emergence of risk perceptions.

Third, in my simulations, the agents give the same weight to the social information gathered from all their sources. This simplification does not hold in real life where individuals trust some sources more than others. Future research should incorporate differing levels of trust within the agent’s network and jointly model the dynamic evolution of risk perceptions and trust across groups and organizations.
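One simple way to implement such an extension would be to replace the unweighted average anchor with a trust-weighted one. The helper below is a minimal sketch; the agent labels, trust weights, and normalization are illustrative assumptions rather than part of the present model.

```python
def trust_weighted_anchor(estimates, trust):
    """Anchor as a trust-weighted (rather than unweighted) average of the
    estimates an agent observes.

    estimates: dict mapping each observed agent to its subjective probability
    trust:     dict mapping each observed agent to a non-negative trust weight
    """
    total = sum(trust[a] for a in estimates)
    return sum(trust[a] * p for a, p in estimates.items()) / total

# Example: a highly trusted source pulls the anchor toward its own estimate.
print(trust_weighted_anchor({'A12': 0.2, 'A21': 0.3, 'A11': 0.9},
                            {'A12': 1.0, 'A21': 1.0, 'A11': 3.0}))  # 0.64
```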

Fourth, I restricted my simulations to groups of homogeneous agents over a fixed network topology (i.e., a fixed map of agents and neighborhoods). Further work is needed to establish how heterogeneity in the agents’ attitudes impacts the emerging risk perceptions. Similarly, future work needs to consider how different network topologies shape the resulting patterns of risk perceptions.

My experimental design also has some limitations. While multiple price lists are widely used in economics and psychology to recover subjective probabilities (see Charness et al., Reference Charness, Gneezy and Imas2013 for a discussion of experimental elicitation methods in risky choice), they have certain shortcomings. Specifically, my method for eliciting subjective probabilities lacks perfect precision, as it only allows for probability measurements in increments of 0.1. This could be especially problematic in contexts of low ambiguity. Additionally, this method is prone to a central tendency bias, potentially skewing my subjective probability measurements toward 0.5. The method’s complexity may also lead to decision errors among some participants. Future research should aim to replicate my main experimental findings using different methods of probability elicitation, such as direct probability matching.

Finally, past work has demonstrated that attitudes toward ambiguous uncertain events are source dependent (Abdellaoui et al., Reference Abdellaoui, Baillon, Placido and Wakker2011; Baillon et al., Reference Baillon, Huang, Selim and Wakker2018). That is, individual attitudes vary depending on the source that generates the uncertainty. Given my model’s symmetry, this source dependence has a limited impact on the main results of my simulations (i.e., the effect of information frictions on the emergence of risk perceptions). Yet, it is important to note that in contexts where agents give more weight to probabilities above the anchor, the resulting pattern of probabilities will be different. For instance, in my baseline setting, and assuming that the agents put more weight on probabilities above their anchor, the group probabilities would converge to the true probabilities for high probability events. On the contrary, for low true-probability events, the group’s subjective probabilities would diverge from the true probabilities of the event. To make more generalizable claims, throughout the paper I have described the relationship of these probabilities with the agents’ ambiguity attitudes (i.e., whether the probabilities were consistent with the agents’ attitudes or not) rather than focusing on high or low probabilities.

It is important to note that, while there are several risk perception models proposed in the literature, the present work focuses exclusively on adapting the Einhorn and Hogarth (Reference Einhorn and Hogarth1985, Reference Einhorn and Hogarth1986) model of probabilistic reasoning under ambiguity to an agent-based setting. Future work should adapt other models of individual behavior to an agent-based framework to explore how risk perceptions form and evolve within groups. Other anchoring and adjustment models of risk perception, for instance, could be successfully translated into a social context using some of the simplifying assumptions introduced in this paper. In the anchoring and adjustment model proposed by Johnson and Busemeyer (Reference Johnson and Busemeyer2016), for example, decision weights are obtained through a Markov process in which the decision maker first anchors and then mentally transitions between outcomes until making a prediction. When adapting this model to an agent-based setting, one could have agents gather different outcomes, along with their prevalence, from their social network and use this information to mentally simulate the likelihood of each outcome, anchoring on the outcomes experienced by those individuals nearest to them in the social network.

Moving beyond the anchoring and adjustment heuristic, there is promising potential in adapting models that examine specific cognitive and affective processes in risky decision-making (Bordalo et al., Reference Bordalo, Gennaioli and Shleifer2012; Enke and Graeber, Reference Enke and Graeber2019) to an agent-based setting. Such adaptations would broaden our understanding of the dynamics of risk perception in group and organizational settings, offering a richer, more diverse perspective on how individuals collectively navigate and interpret uncertainty.

Despite these limitations, this paper presents a flexible approach to theoretically study the interaction of psychological, social, and organizational processes in shaping collective perceptions of risk. In doing so, this work paves the way for future research on how we collectively form perceptions of risk.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/jdm.2024.9.

Data availability statement

Data and code are available in the OSF repository at https://osf.io/bdgm3/?view_only=32d3dd1abc4b4a1e9ad5c3eccaa5dd8b.

Acknowledgments

I am very grateful to Robin M. Hogarth for insightful comments and feedback on an earlier version of this manuscript.

Funding statement

This research received no specific grant funding from any funding agency, commercial or not-for-profit sectors.

Competing interest

The author declares no competing interests.

Footnotes

2 The relative standard deviation is a measure of dispersion that takes into account the confound between average and standard deviation for bounded variables. Details on its estimation are presented in Mestdagh et al. (Reference Mestdagh, Pe, Pestman, Verdonck, Kuppens and Tuerlinckx2018). In short, this measure results from dividing the standard deviation of a bounded variable by its maximal standard deviation given its average.
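As a rough illustration of this measure, the sketch below divides the observed standard deviation by the large-sample maximum $\sqrt{(m - \text{lo})(\text{hi} - m)}$ for a variable bounded in $[\text{lo}, \text{hi}]$ with mean $m$. Mestdagh et al. (Reference Mestdagh, Pe, Pestman, Verdonck, Kuppens and Tuerlinckx2018) derive the exact finite-sample maximum, so this approximation is not their precise estimator.

```python
import statistics

def relative_sd(values, lo=0.0, hi=1.0):
    """Approximate relative standard deviation of a bounded variable: observed SD
    divided by the (large-sample) maximum SD attainable given the observed mean."""
    m = statistics.mean(values)
    max_sd = ((m - lo) * (hi - m)) ** 0.5
    return statistics.stdev(values) / max_sd

# Example with probability estimates bounded in [0, 1].
print(round(relative_sd([0.2, 0.3, 0.4, 0.5]), 2))
```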

References

Abdellaoui, M., Baillon, A., Placido, L., & Wakker, P. P. (2011). The rich domain of uncertainty: Source functions and their experimental implementation. American Economic Review, 101(2), 695–723.
Acemoglu, D., Ozdaglar, A., & ParandehGheibi, A. (2010). Spread of (mis)information in social networks. Games and Economic Behavior, 70(2), 194–227.
Baillon, A., Huang, Z., Selim, A., & Wakker, P. P. (2018). Measuring ambiguity attitudes for all (natural) events. Econometrica, 86(5), 1839–1858.
Banerjee, A. V. (1992). A simple model of herd behavior. Quarterly Journal of Economics, 107(3), 797–817.
Banerjee, A. V., & Fudenberg, D. (2004). Word-of-mouth learning. Games and Economic Behavior, 46(1), 1–22.
Barberis, N. C. (2013). Thirty years of prospect theory in economics: A review and assessment. Journal of Economic Perspectives, 27(1), 173–196.
Bikhchandani, S., Hirshleifer, D., & Welch, I. (1992). A theory of fads, fashion, custom, and cultural change as informational cascades. Journal of Political Economy, 100(5), 992–1026.
Bordalo, P., Gennaioli, N., & Shleifer, A. (2012). Salience theory of choice under risk. Quarterly Journal of Economics, 127(3), 1243–1285.
Borsboom, D., van der Maas, H. L., Dalege, J., Kievit, R. A., & Haig, B. D. (2021). Theory construction methodology: A practical framework for building theories in psychology. Perspectives on Psychological Science, 16(4), 756–766.
Brown, G. D., Lewandowsky, S., & Huang, Z. (2022). Social sampling and expressed attitudes: Authenticity preference and social extremeness aversion lead to social norm effects and polarization. Psychological Review, 129(1), 18.
Busby, J. S., Onggo, B. S., & Liu, Y. (2016). Agent-based computational modelling of social risk responses. European Journal of Operational Research, 251(3), 1029–1042.
Charness, G., Gneezy, U., & Imas, A. (2013). Experimental methods: Eliciting risk preferences. Journal of Economic Behavior & Organization, 87, 43–51.
Clement, J., & Puranam, P. (2018). Searching for structure: Formal organization design as a guide to network evolution. Management Science, 64(8), 3879–3895.
Crane, R., & Sornette, D. (2008). Robust dynamic classes revealed by measuring the response function of a social system. Proceedings of the National Academy of Sciences, 105(41), 15649–15653.
Deck, C., & Jahedi, S. (2015). The effect of cognitive load on economic decision making: A survey and new experiments. European Economic Review, 78, 97–119.
Dimmock, S. G., Kouwenberg, R., & Wakker, P. P. (2016). Ambiguity attitudes in a large representative sample. Management Science, 62(5), 1363–1380.
Einhorn, H. J., & Hogarth, R. M. (1985). Ambiguity and uncertainty in probabilistic inference. Psychological Review, 92(4), 433.
Einhorn, H. J., & Hogarth, R. M. (1986). Decision making under ambiguity. Journal of Business, S225–S250.
Enke, B., & Graeber, T. (2019). Cognitive uncertainty (tech. rep.). National Bureau of Economic Research.
Eyster, E., & Rabin, M. (2010). Naive herding in rich-information settings. American Economic Journal: Microeconomics, 2(4), 221–243.
Fan, K., & Pedrycz, W. (2016). Opinion evolution influenced by informed agents. Physica A: Statistical Mechanics and Its Applications, 462, 431–441.
Fried, E. I. (2020). Lack of theory building and testing impedes progress in the factor and network literature. Psychological Inquiry, 31(4), 271–288.
Ghaderi, J., & Srikant, R. (2014). Opinion dynamics in social networks with stubborn agents: Equilibrium and convergence rate. Automatica, 50(12), 3209–3215.
Granovetter, M. (1978). Threshold models of collective behavior. American Journal of Sociology, 83(6), 1420–1443.
Gray, K., Rand, D. G., Ert, E., Lewis, K., Hershman, S., & Norton, M. I. (2014). The emergence of “us and them” in 80 lines of code: Modeling group genesis in homogeneous populations. Psychological Science, 25(4), 982–990.
Gross, J., & De Dreu, C. K. (2019). The rise and fall of cooperation through reputation and group polarization. Nature Communications, 10(1), 1–10.
Haer, T., Botzen, W. W., de Moel, H., & Aerts, J. C. (2017). Integrating household risk mitigation behavior in flood risk analysis: An agent-based model approach. Risk Analysis, 37(10), 1977–1992.
Hakes, J., & Viscusi, W. K. (1997). Mortality risk perceptions: A Bayesian reassessment. Journal of Risk and Uncertainty, 15, 135–150.
Henkel, L. (2022). Experimental evidence on the relationship between perceived ambiguity and likelihood insensitivity (tech. rep.). ECONtribute Discussion Paper. University of Bonn and University of Cologne.
Hogarth, R. M., & Einhorn, H. J. (1990). Venture theory: A model of decision weights. Management Science, 36(7), 780–803.
Hogarth, R. M., & Kunreuther, H. (1985). Ambiguity and insurance decisions. American Economic Review, 75(2), 386–390.
Hogarth, R. M., & Kunreuther, H. (1989). Risk, ambiguity, and insurance. Journal of Risk and Uncertainty, 2(1), 5–35.
Holzmeister, F., Huber, J., Kirchler, M., Lindner, F., Weitzel, U., & Zeisberger, S. (2020). What drives risk perception? A global survey with financial professionals and laypeople. Management Science, 66(9), 3977–4002.
Jaspersen, J. G., & Ragin, M. A. (2021). A model of anchoring and adjustment for decision-making under risk. Available at SSRN 3845633.
Johnson, J. G., & Busemeyer, J. R. (2016). A computational model of the attention process in risky choice. Decision, 3(4), 254.
Kahneman, D., Slovic, S. P., Slovic, P., & Tversky, A. (1982). Judgment under uncertainty: Heuristics and biases. Cambridge: Cambridge University Press.
Kandiah, V., Binder, A. R., & Berglund, E. Z. (2017). An empirical agent-based model to simulate the adoption of water reuse using the social amplification of risk framework. Risk Analysis, 37(10), 2005–2022.
Kasperson, R. E., Renn, O., Slovic, P., Brown, H. S., Emel, J., Goble, R., Kasperson, J. X., & Ratick, S. (1988). The social amplification of risk: A conceptual framework. Risk Analysis, 8(2), 177–187.
Kasperson, R. E., Webler, T., Ram, B., & Sutton, J. (2022). The social amplification of risk framework: New perspectives. Risk Analysis, 42(7), 1367–1380.
Kempe, D., Kleinberg, J., & Tardos, É. (2003). Maximizing the spread of influence through a social network. In Proceedings of the ninth ACM SIGKDD international conference on knowledge discovery and data mining (pp. 137–146). New York, NY: Association for Computing Machinery.
Kim, H., Schroeder, A., & Pennington-Gray, L. (2016). Does culture influence risk perceptions? Tourism Review International, 20(1), 11–28.
Li, Z., Müller, J., Wakker, P. P., & Wang, T. V. (2018). The rich domain of ambiguity explored. Management Science, 64(7), 3227–3240.
Mason, W. A., Conrey, F. R., & Smith, E. R. (2007). Situating social influence processes: Dynamic, multidirectional flows of influence within social networks. Personality and Social Psychology Review, 11(3), 279–300.
Mestdagh, M., Pe, M., Pestman, W., Verdonck, S., Kuppens, P., & Tuerlinckx, F. (2018). Sidelining the mean: The relative variability index as a generic mean-corrected variability measure for bounded variables. Psychological Methods, 23(4), 690.
Moussaıd, M. (2013). Opinion formation and the collective dynamics of risk perception. PLoS One, 8(12), e84592.
Raveendran, M., Puranam, P., & Warglien, M. (2022). Division of labor through self-selection. Organization Science, 33(2), 810–830.
Robinaugh, D. J., Haslbeck, J. M., Ryan, O., Fried, E. I., & Waldorp, L. J. (2021). Invisible hands and fine calipers: A call to use formal theory as a toolkit for theory construction. Perspectives on Psychological Science, 16(4), 725–743.
Scherer, C. W., & Cho, H. (2003). A social network contagion theory of risk perception. Risk Analysis, 23(2), 261–267.
Siegrist, M., & Árvai, J. (2020). Risk perception: Reflections on 40 years of research. Risk Analysis, 40(S1), 2191–2206.
Siggelkow, N., & Rivkin, J. W. (2005). Speed and search: Designing organizations for turbulence and complexity. Organization Science, 16(2), 101–122.
Slovic, P. (2016). The perception of risk. New York: Routledge.
Slovic, P., Fischhoff, B., & Lichtenstein, S. (1980). Facts and fears: Understanding perceived risk. Boston, MA: Springer.
Slovic, P., & Peters, E. (2006). Risk perception and affect. Current Directions in Psychological Science, 15(6), 322–325.
Smith, V. K., & Johnson, F. R. (1988). How do risk perceptions respond to information? The case of radon. Review of Economics and Statistics, 70, 1–8.
Smith, V. K., & Michaels, R. G. (1987). How did households interpret Chernobyl? A Bayesian analysis of risk perceptions. Economics Letters, 23(4), 359–364.
Tompkins, M. K., Bjälkebring, P., & Peters, E. (2018). Emotional aspects of risk perceptions. In M. Raue, E. Lermer, & B. Streicher (Eds.), Psychological perspectives on risk and risk analysis: Theory, models, and applications (pp. 109–130). https://doi.org/10.1007/978-3-319-92478-6
Trautmann, S. T., & Van De Kuilen, G. (2015). Ambiguity attitudes. In G. Keren & G. Wu (Eds.), The Wiley Blackwell handbook of judgment and decision making (Vol. 2, pp. 89–116). Chichester, West Sussex: John Wiley & Sons.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases: Biases in judgments reveal some heuristics of thinking under uncertainty. Science, 185(4157), 1124–1131.
Viscusi, W. K. (1985). Are individuals Bayesian decision makers? American Economic Review, 75(2), 381–385.
Viscusi, W. K., & O’Connor, C. J. (1984). Adaptive responses to chemical labeling: Are workers Bayesian decision makers? American Economic Review, 74(5), 942–956.
Watts, D. J. (2002). A simple model of global cascades on random networks. Proceedings of the National Academy of Sciences, 99(9), 5766–5771.
Watts, D. J. (2004). The “new” science of networks. Annual Review of Sociology, 30, 243–270.
Wu, F., & Huberman, B. A. (2007). Novelty and collective attention. Proceedings of the National Academy of Sciences, 104(45), 17599–17601.
Yildiz, E., Ozdaglar, A., Acemoglu, D., Saberi, A., & Scaglione, A. (2013). Binary opinion dynamics with stubborn agents. ACM Transactions on Economics and Computation (TEAC), 1(4), 1–30.