As conspiracy theories have taken center stage in politics, there has been growing concern over the spread of these beliefs and their pernicious effects on the public. Scholars have documented a number of ill effects of conspiracy beliefs, including decreased intentions to get vaccinated (Jolley and Douglas 2014), decreased social distancing during the COVID-19 pandemic (Bierwiaczonek, Kunst, and Pich 2020), increased support for illegal behavior (Imhoff, Dieterle, and Lamberty 2020), increased prejudice (Jolley, Meleady, and Douglas 2019), and decreased trust in government (Einstein and Glick 2014). These are just a few examples from a rapidly growing literature (for a recent review, see Douglas et al. 2019).
The increase in scholarly attention to the topic has led to the proliferation of conspiracy belief questions in surveys. Troublingly, there is reason to think that mere exposure to these questions can increase conspiracy belief. Survey methods research suggests that exposure to prior surveys increases knowledge of topics contained within the survey (e.g., Das, Toepoel, and van Soest 2011; Kruse et al. 2010). Given the nature of conspiracies and the design of conspiracy questions, exposure to these questions may cause respondents to adopt these beliefs. As a result, scholars may be unwittingly contributing to the spread of conspiracy beliefs and their consequences.
In this article, we test this possibility using a within-subjects experiment embedded in a panel survey. Our results suggest that exposure to a standard conspiracy question causes a significant increase in the likelihood of later endorsing that belief. However, we do not observe this effect when using a question format that asks respondents to choose between a conspiratorial and a non-conspiratorial explanation for an event. Thus, we recommend that researchers adopt this question format to reduce the likelihood of spreading conspiracy beliefs with their surveys.
How surveys may be spreading conspiracy theories
Conspiracy questions have proliferated in surveys, including the influential American National Election Studies. Survey researchers have long worried about how exposure to a survey might affect one's beliefs and attitudes, an effect known as "panel conditioning." Generally, this literature finds small or null effects, with the exception of knowledge questions. A variety of studies find that exposure to a knowledge question increases knowledge and familiarity in later waves (e.g., Das, Toepoel, and van Soest 2011). The most relevant finding comes from a 2008 panel study (Kruse et al. 2010). Compared to fresh respondents, panel participants were more likely to correctly answer an open-ended knowledge question about President Obama that had been asked in prior waves.
Conspiracy questions might be particularly susceptible to this type of learning effect. While not typically considered knowledge questions, conspiracy questions inherently ask respondents to evaluate explanatory claims about the world. For example, consider the conspiratorial claim that the Bush administration allowed the 9/11 attacks to occur to provide a pretext for war in the Middle East. This is a claim about how and why the 9/11 attacks and subsequent wars occurred. Given that most people pay little attention to politics, many respondents are unfamiliar with conspiratorial claims (Oliver and Wood 2014a) and have little background knowledge about the broader topic. This lack of information leaves room for a novel explanation to take root, particularly when the conspiratorial claim offers a relatively simple explanation for a complex event (Marchlewska, Cichocka, and Kossowska 2018).
Another reason to think that merely asking conspiracy questions might spread belief comes from a large psychological literature on the "illusory truth" effect. This research finds that people are more likely to believe a factual claim when they have been previously exposed to it (e.g., DiFonzo et al. 2016; Pennycook, Cannon, and Rand 2018). One of the most likely mechanisms for this effect is perceptual fluency – claims that have been repeated can be more easily recalled and understood (e.g., Henderson, Simons, and Barr 2021; Reber and Schwarz 1999). This effect tends to increase with repetition (DiFonzo et al. 2016; Hassan and Barber 2021) and persists for weeks to months after initial exposure (Henderson, Simons, and Barr 2021). While this literature generally does not focus on conspiracies, these types of claims might be particularly susceptible to the illusory truth effect. This is because conspiracy theories often involve emotionally evocative and vivid moral content designed to capture our attention (Brady, Crockett, and Van Bavel 2020; van Prooijen et al. 2021). As a result, conspiratorial claims lend themselves to fluency (i.e., are easily processed and remembered), and thus to the illusory truth effect.
However, there is one important reason why the illusory truth effect may not apply here. In the exposure stage of these designs, respondents are typically asked to rate their interest in the claim or their willingness to share the claim, rather than reporting their belief in the claim. There is some evidence that when respondents instead rate their belief in the claim (as they would in a typical survey), the illusory truth effect is less likely to occur (Calvillo and Smelter 2020). This is presumably because respondents are more focused on the accuracy of the claim while initially processing it, which tends to undermine the illusory truth effect (Brashier, Eliseev, and Marsh 2020; Jalbert, Newman, and Schwarz 2020). Thus, it is less clear whether these findings apply in the context of exposure to a survey question.
Nonetheless, both the survey literature on panel conditioning and the psychological literature on the illusory truth effect give reason to think that prior exposure may contribute to belief. Conspiracy questions convey relatively simple explanations for complex events that respondents often know little about, and these explanations tend to be emotionally evocative and memorable. Thus, we expect that asking respondents about their belief in a conspiracy will cause them to be more likely to report belief in the same conspiracy later in time (H1).
However, whether exposure to conspiracy questions increases belief may depend on how the question is asked. The most common question formats simply state a conspiratorial claim about the world and ask respondents to rate their agreement with the statement or its accuracy. In contrast, some scholars have advocated for an "explicit choice" format, which asks respondents to choose between a conspiratorial and a conventional explanation of the same event (Clifford, Kim, and Sullivan 2020). The common statement format is more likely to contribute to an increase in conspiracy belief for at least two reasons. First, by design, the statement format offers only a conspiratorial claim, meaning it is the only content that a respondent might learn or become familiar with. This is consistent with evidence that people high in need for cognitive closure are more likely to endorse conspiratorial explanations for an event, but only when an alternative explanation is not available (Marchlewska, Cichocka, and Kossowska 2018). Second, the choice format, by asking respondents to choose between two alternative explanations of an event, likely encourages greater scrutiny of the accuracy of the claims, which is known to reduce or eliminate the illusory truth effect (Brashier, Eliseev, and Marsh 2020; Jalbert, Newman, and Schwarz 2020). This is consistent with evidence that acquiescence bias inflates endorsement of conspiracy beliefs in conventional statement formats (Hill and Roberts 2021), while the choice format yields lower rates of conspiracy endorsement (Clifford, Kim, and Sullivan 2020). Thus, our second hypothesis is that any effect of exposure should occur primarily with the statement format, rather than the choice format (H2).
Finally, it is well documented that some people have a "propensity to view the world in conspiratorial terms" (Uscinski, Klofstad, and Atkinson 2016). Indeed, people who agree with general claims, such as "much of our lives are being controlled by plots hatched in secret places," are more likely to also report belief in a variety of specific conspiracies. Thus, we expect that exposure effects should be largest among those high in conspiratorial predispositions (H3).
Exploratory study
We conducted an initial exploratory study through Amazon's Mechanical Turk. As discussed in detail in the online Appendix, this study provides no evidence that exposure to conspiracy questions increases later endorsement of those beliefs. However, this study has several important limitations, including its reliance on a small number of conspiracy questions and an unusual outcome measure. We address these limitations below in an improved, pre-registered experiment.
Pre-registered experiment
Our pre-registered experiment consists of a four-wave panel study fielded on Mechanical Turk. Respondents were required to have completed at least 100 human intelligence tasks (HITs), to have at least a 95% approval rate, and to have passed the CloudResearch approval filter. Wave 1 was fielded on Aug. 3, 2021 (N = 1,303). Waves 2 and 3 were each fielded approximately 3 days apart from the adjacent waves. Wave 4 was fielded on Aug. 10–11 and was completed by 1,050 respondents, for an 81% response rate. As we note below and detail in the online Appendix, we find no evidence of differential attrition by experimental condition.
Conspiracy measures
Our focal measures consist of 18 conspiracy questions, many of which are drawn from past research (e.g., Enders et al. 2021; Oliver and Wood 2014a, 2014b; van Prooijen and Acker 2015). We sought conspiracy theories that were relevant to current politics, dealt with important and salient issues, and could be clearly explained to respondents. Additionally, we designed our set of conspiracy theories to be evenly divided into three categories: 1) more likely to be believed by Republicans (e.g., Biden secretly has dementia), 2) more likely to be believed by Democrats (e.g., Trump sabotaged the COVID vaccination rollout), and 3) non-partisan (e.g., a cancer cure is being withheld). See the online Appendix for details. Because we expected few people would endorse conspiracy theories that run contrary to their partisan identity (e.g., Smallpage, Enders, and Uscinski 2017), respondents were only exposed to neutral and co-partisan conspiracy theories. Pure independents were randomly assigned to a partisan condition.
We created two versions of each question: one using a common agree–disagree format and one using the explicit choice format. For both formats, questions began with a one-sentence statement of an event (e.g., “As you may know, President Biden has made relatively few public appearances and speeches since becoming President, which has led some to wonder why.”). For the agree–disagree format, respondents were then asked their agreement with a conspiratorial explanation of that event (e.g., “Biden has been avoiding public appearances because he has dementia and is unable to speak coherently for more than a few minutes at a time”). For the explicit choice format, respondents were asked to choose which of two statements is most likely to be true – the same conspiratorial statement used in the agree–disagree format, or a conventional explanation for the event (e.g., “Biden has been avoiding public appearances because he wants media coverage to focus on his policy rather than on him as a person”). Respondents were also offered an “unsure” option. To maintain similarity, the agree–disagree scale offered three response options, including a “neither agree nor disagree” option.
Design
Figure 1 summarizes the study design. Waves 1–3 served to deliver the treatment of exposure to conspiracy questions, while Wave 4 consisted of the outcome measures. The low exposure condition involved treatment only in Wave 1, while high exposure involved treatment in Waves 1–3. At the beginning of Wave 1, all respondents answered a series of questions about their conspiratorial predispositions and partisan identity. Respondents were then randomly assigned to either an experimental arm (n = 982) or a pure control (n = 321). The treatment consists of asking respondents to answer a conspiracy question. In the experimental arm, each respondent experienced all three exposure conditions (zero, low, and high), making it a within-subjects design. Within each partisan group, the 12 relevant conspiracy theories (six co-partisan, six neutral) were divided into three sets of four (two co-partisan, two neutral). For each respondent, the three sets were randomly assigned to an exposure level of zero, one (low), or three (high), such that all respondents received all three exposure conditions, but for different sets of conspiracy theories. In Wave 4, all respondents answered all 12 relevant conspiracy questions, which make up the dependent variable. Prior to Wave 4, respondents were thus exposed to one set of questions in all three prior waves, a second set only in Wave 1, and a third set not at all. This design allows a within-subjects test of exposure. Notably, in the low exposure condition, outcomes were measured 1 week after treatment, offering a test of the duration of exposure effects.
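For concreteness, the following is a minimal sketch of this within-subjects assignment using Python's standard library. The set labels are hypothetical placeholders, and the code illustrates the randomization logic rather than reproducing the software used to field the study.

```python
import random

# Hypothetical labels for the 12 relevant conspiracy theories a respondent can see:
# six co-partisan (CP1-CP6) and six neutral (N1-N6), grouped into three fixed sets
# of four (two co-partisan, two neutral), as described above.
SETS = {
    "set_A": ["CP1", "CP2", "N1", "N2"],
    "set_B": ["CP3", "CP4", "N3", "N4"],
    "set_C": ["CP5", "CP6", "N5", "N6"],
}

def assign_exposure(rng: random.Random) -> dict:
    """Randomly map the three sets to the zero-, low-, and high-exposure conditions."""
    set_names = list(SETS)
    rng.shuffle(set_names)
    zero, low, high = set_names
    return {
        "wave_1": SETS[low] + SETS[high],   # low- and high-exposure sets asked in Wave 1
        "wave_2": SETS[high],               # only the high-exposure set repeats in Waves 2-3
        "wave_3": SETS[high],
        "wave_4": SETS[zero] + SETS[low] + SETS[high],  # all 12 outcomes measured in Wave 4
    }

schedule = assign_exposure(random.Random(42))
for wave, items in schedule.items():
    print(wave, items)
```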
In the pure control arm, respondents were not exposed to any conspiracy questions in Wave 1, nor were they invited to participate in Waves 2–3. They simply answered all 12 relevant conspiracy questions in Wave 4, as did the experimental arm.
Results
Before conducting our main analyses, we first conducted an exploratory test for the possibility of spillover between experimental conditions using our pure control condition. As detailed in the online Appendix, we find no evidence that being exposed to some conspiracy questions affected responses to different conspiracy questions. Nor do we find any evidence of differential attrition by experimental condition.
To test our first hypothesis, we follow our pre-registration plan and stack the conspiracy belief outcomes from Wave 4 such that each respondent contributes up to 12 observations (total N = 9,629). The dependent variable is coded dichotomously (1 = endorsement, 0 = rejection or unsure/neither). We use ordinary least squares regression to model conspiracy beliefs as a function of dichotomous indicators of the low and high exposure conditions, along with respondent random effects and question fixed effects. Standard errors are clustered on the respondent. The coefficient for low exposure is positive but not significant (b = 0.009, p = 0.319), while the coefficient for high exposure is positive and statistically significant (b = 0.023, p = 0.012), suggesting a roughly two percentage point increase in conspiracy belief (full model results are shown in the online Appendix). This two percentage point increase represents an 11% increase in conspiracy belief over the baseline rate of 0.21.
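As a rough illustration of this specification, the snippet below estimates a linear model of the stacked Wave 4 outcomes with question fixed effects and respondent-clustered standard errors using statsmodels. The column names and file name are hypothetical, and the respondent random effects included in our full model are omitted here for brevity.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stacked data: one row per respondent-question pair from Wave 4.
# endorse       : 1 = endorsed the conspiracy, 0 = rejected or unsure/neither
# low, high     : indicators for the low- and high-exposure conditions
# question_id   : which of the 12 conspiracy items the row refers to
# respondent_id : panelist identifier used for clustering
df = pd.read_csv("wave4_stacked.csv")  # assumed file layout

# Linear probability model with question fixed effects; standard errors are
# clustered on the respondent (the full specification in the paper also adds
# respondent random effects, which a mixed model would capture).
model = smf.ols("endorse ~ low + high + C(question_id)", data=df)
fit = model.fit(cov_type="cluster", cov_kwds={"groups": df["respondent_id"]})
print(fit.params[["low", "high"]])

# Effect size relative to the zero-exposure baseline rate of roughly 0.21:
baseline = 0.21
print("high-exposure effect as a share of baseline:", 0.023 / baseline)  # ~11%
```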
Hypothesis 2 holds that treatment effects should occur primarily among those exposed to the agree–disagree format, rather than the explicit choice format. Following our pre-registration, we expand the model described above by including an indicator of question format along with its interactions with each of the exposure indicators. The effects of each exposure level are plotted by question format in Figure 2. Consistent with H2, both low and high levels of exposure increase conspiracy belief among those receiving the agree–disagree format (low: b = 0.032, p = 0.012; high: b = 0.035, p = 0.005). Substantively, these effects represent an increase in conspiracy beliefs of about 15% (low exposure) to 17% (high exposure), relative to baseline. Surprisingly, there is no discernible difference between the effects of high and low levels of exposure to the agree–disagree format (p = 0.791; test not pre-registered). But the significant effect of the low exposure condition indicates that the effect persists for at least a week after initial exposure.
Consistent with H2, neither level of exposure affected conspiracy belief among those exposed to the explicit choice format (low: b = −0.017, p = 0.192; high: b = 0.008, p = 0.528). Crucially, the interaction term is significant for low exposure (p = 0.007), but not high exposure (p = 0.123), providing mixed evidence as to whether the effects of the two formats differ.
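Continuing the previous sketch (same hypothetical data frame and imports), the format moderation reported above corresponds to interacting the exposure indicators with an indicator for the explicit choice format. The coefficients on the exposure indicators then give the simple effects in the agree–disagree format, and the interaction terms capture how those effects shift under the choice format.

```python
# Extends the earlier model: exposure effects are allowed to differ by format.
# choice_format: 1 = explicit choice version of the item, 0 = agree-disagree version.
interaction = smf.ols(
    "endorse ~ (low + high) * choice_format + C(question_id)", data=df
)
interaction_fit = interaction.fit(
    cov_type="cluster", cov_kwds={"groups": df["respondent_id"]}
)
print(interaction_fit.params[["low", "high", "low:choice_format", "high:choice_format"]])
```

The test of H3 below follows the same pattern, replacing the format indicator with the Wave 1 measure of conspiratorial predispositions.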
According to H3, treatment effects should be largest among those who are predisposed to believe in conspiracy theories. To test this hypothesis, we follow our pre-registration and add the Wave 1 measure of conspiratorial predispositions to the baseline model and include interactions with each of the exposure indicators. Contrary to H3, the interaction terms offer little evidence that exposure has a larger effect among those high in conspiratorial predispositions (low: p = 0.948; high: p = 0.165).
Are respondents learning?
So far, we have coded no-opinion responses ("neither agree nor disagree" and "unsure") in the same category as rejections of conspiracy theories (disagreement or endorsement of a conventional explanation). In two exploratory analyses, we separately model no-opinion responses and rejections using the approach described above, including interactions between the treatments and question format. There is little evidence that exposure to the agree–disagree format affected no-opinion rates (low: b = −0.013, p = 0.352; high: b = −0.006, p = 0.654), but suggestive evidence that it decreased rejection rates (low: b = −0.019, p = 0.177; high: b = −0.029, p = 0.030). In contrast, there is evidence that exposure to the explicit choice format reduced no-opinion rates (low: b = −0.018, p = 0.107; high: b = −0.041, p < 0.001) and increased rejections (low: b = 0.035, p = 0.018; high: b = 0.033, p = 0.019). An alternative modeling approach using a multinomial logit finds similar results. Taken together, these results suggest that respondents learn the information that is provided to them in conspiracy questions.
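The alternative multinomial approach treats endorsement, rejection, and no opinion as a three-category outcome. A minimal sketch, again using hypothetical column names and an assumed coding of the raw responses, might look as follows.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wave4_stacked.csv")  # assumed file layout, as above

# Three-category outcome built from a hypothetical raw response column:
# 0 = reject, 1 = no opinion, 2 = endorse.
df["outcome"] = df["response"].map({"reject": 0, "no_opinion": 1, "endorse": 2})

# Multinomial logit with the same exposure-by-format interaction and
# question fixed effects as the linear models above.
mnl = smf.mnlogit(
    "outcome ~ (low + high) * choice_format + C(question_id)", data=df
)
mnl_fit = mnl.fit(method="newton", maxiter=200)
print(mnl_fit.summary())
```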
Conclusion
In recent years, researchers have raced to understand the pernicious effects of conspiracy beliefs. In the process, countless respondents have been exposed to a variety of questions about conspiracy theories, rumors, and falsehoods. Consistent with the illusory truth effect, we find that mere exposure to conspiracy questions increases conspiracy belief and that this effect lasted at least 1 week. However, this effect only obtained when respondents were exposed to the agree–disagree format, not the explicit choice format. Consistent with the panel conditioning literature, the evidence suggests that respondents learn from the content that is offered to them in the survey. Respondents exposed to the agree–disagree format could only learn one thing – the conspiratorial claim offered to them. And some of them did. Respondents who were instead exposed to the explicit choice format could have learned either the conspiratorial claim or the conventional explanation for the event. These respondents became less likely to say they were unsure and more likely to adopt the conventional explanation but were not more likely to adopt the conspiracy.
Of course, it is reasonable to wonder whether the observed effect sizes are substantively meaningful. We think so. Our estimates suggest that a single exposure to the agree–disagree format increases conspiracy belief by about 3.2 percentage points 1 week after exposure. While this may not seem large, consider the potential consequences for a standard survey (N = 1,000) that contains five conspiracy questions. If our effect size generalizes, an increase of 3.2 percentage points implies that this study would create about 160 new conspiracy beliefs. If the conspiratorial claim involves a topic like vaccination that may have important downstream effects on respondent behavior, these are not trivial effects.
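To make this back-of-envelope calculation explicit, the short sketch below applies the 3.2 percentage point estimate to a few hypothetical survey sizes and question counts; the figures are illustrative projections, not additional results.

```python
# Implied number of newly endorsed conspiracy beliefs if the 3.2-point effect
# of a single agree-disagree exposure generalizes to a typical survey.
EFFECT = 0.032  # estimated increase in endorsement probability per exposed question

for n_respondents in (1_000, 2_000):
    for n_questions in (5, 10):
        new_beliefs = n_respondents * n_questions * EFFECT
        print(f"N={n_respondents}, questions={n_questions}: "
              f"~{new_beliefs:.0f} new conspiracy beliefs")
# The N=1,000, five-question case reproduces the ~160 figure cited above.
```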
These findings suggest that researchers should consider the potential ethical implications of inadvertently spreading conspiracy beliefs. In many cases the risks may be minimal, but not always, as with conspiracy theories about vaccines. Fortunately, our research suggests that researchers can avoid this risk by adopting the explicit choice question format. Of course, more research is needed on the validity of alternative measures, but the choice format appears to have multiple advantages (Clifford, Kim, and Sullivan 2020). Alternatively, a researcher might debrief respondents about the nature of the conspiratorial claims. However, conventional debriefing is not always completely effective (e.g., Greenspan and Loftus 2022), and it may be time-consuming to debrief on multiple conspiracies. Nonetheless, researchers ought to take the ethical considerations of conspiracy research seriously.
Supplementary material
To view supplementary material for this article, please visit https://doi.org/10.1017/XPS.2023.1
Data availability statement
The data, code, and any additional materials required to replicate all analyses in this article are available at the Journal of Experimental Political Science Dataverse within the Harvard Dataverse Network, at DOI: 10.7910/DVN/GCKQYW (Clifford and Sullivan 2022).
Acknowledgements
We would like to thank Adam Enders, Ben Lyons, Elizabeth Simas, the editors, and anonymous reviewers for helpful suggestions and feedback.
Funding
Research was funded by the College of Liberal Arts and Social Sciences at the University of Houston.
Conflicts of interest
We have no conflicts of interest or potential conflicts of interest with regard to the submitted work.
Ethics statement
All studies obtained Institutional Review Board (IRB) approval from the University of Houston (STUDY00002500 and STUDY00003105). We affirm that this research adheres to APSA’s Principles and Guidance for Human Subjects Research. Please see the online Appendix for more information about the ethical treatment of human subjects in this study.