
The Effects of Unsubstantiated Claims of Voter Fraud on Confidence in Elections

Published online by Cambridge University Press:  28 June 2021

Nicolas Berlinski
Affiliation:
Department of Government, Dartmouth College, Hanover, USA
Margaret Doyle
Affiliation:
Department of Government, Dartmouth College, Hanover, USA
Andrew M. Guess
Affiliation:
Department of Politics and Woodrow Wilson School, Princeton University, Princeton, USA
Gabrielle Levy
Affiliation:
Department of Government, Dartmouth College, Hanover, USA
Benjamin Lyons*
Affiliation:
Department of Communication, University of Utah, Salt Lake City, USA
Jacob M. Montgomery
Affiliation:
Department of Political Science, Washington University, St. Louis, USA
Brendan Nyhan
Affiliation:
Department of Government, Dartmouth College, Hanover, USA
Jason Reifler
Affiliation:
Department of Politics, University of Exeter, Exeter, UK
*
*Corresponding author. Email: [email protected]

Abstract

Political elites sometimes seek to delegitimize election results using unsubstantiated claims of fraud. Most recently, Donald Trump sought to overturn his loss in the 2020 US presidential election by falsely alleging widespread fraud. Our study provides new evidence demonstrating the corrosive effect of fraud claims like these on trust in the election system. Using a nationwide survey experiment conducted after the 2018 midterm elections – a time when many prominent Republicans also made unsubstantiated fraud claims – we show that exposure to claims of voter fraud reduces confidence in electoral integrity, though not support for democracy itself. The effects are concentrated among Republicans and Trump approvers. Worryingly, corrective messages from mainstream sources do not measurably reduce the damage these accusations inflict. These results suggest that unsubstantiated voter-fraud claims undermine confidence in elections, particularly when the claims are politically congenial, and that their effects cannot easily be mitigated by fact-checking.

Type
Research Article
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s) 2021. Published by Cambridge University Press on behalf of The Experimental Research Section of the American Political Science Association

After Donald Trump lost the 2020 US presidential election, he and his allies made sweeping and unsupported claims that the election had been stolen. These unsubstantiated assertions ranged from familiar voter-fraud tropes (claims that illegitimate ballots were submitted by dead people) to the fanciful (voting machines were part of a complicated conspiracy involving the late Venezuelan leader Hugo Chávez). Amid increasingly heated rhetoric, a January 6, 2021 “Stop the Steal” rally was followed by a violent insurrection at the US Capitol that sought to disrupt the certification of President-elect Biden’s victory, a tragic event many observers partially attributed to the false claims of fraud made by President Trump and his allies.

Claims of voter fraud like this are not uncommon, especially outside the USA. In early February 2021, the Myanmar military justified its coup against the civilian government by alleging voter fraud in the most recent election (Goodman 2021). In other cases, elites have made unsubstantiated claims of voter fraud in order to cast doubt on unfavorable or potentially damaging electoral results. For instance, Jair Bolsonaro, the president of Brazil, expressed fears of voter fraud during his presidential campaign in 2018 to pre-emptively cast doubt on an unfavorable electoral outcome (Savarese 2018). Prabowo Subianto, a presidential candidate who lost the 2019 Indonesian election, used this tactic even more aggressively, claiming that he had been the victim of voter fraud and refusing to concede (Paddock 2019).

Though accusations of misconduct are a frequent feature of electoral politics, the effects of this phenomenon on voter beliefs and attitudes have not been extensively studied. To date, research has largely focused on how actual irregularities or the presence of institutions intended to constrain malfeasance affect electoral confidence, particularly in less established democracies (e.g., Norris 2014; Hyde 2011). Less is known about the effects of unfounded assertions of voter fraud on public faith in free and fair elections, especially in advanced democracies such as the USA. Can elites delegitimize a democratic outcome by asserting that electoral irregularities took place? Our motivating examples, particularly recent events in the USA, suggest reasons for concern. Given the centrality of voter fraud to Trump’s rhetoric in the weeks leading up to the January 6 insurrection, it is essential to better understand whether and how baseless accusations of election-related illegalities affect citizens. Footnote 1

While recent events and the voluminous elite cues literature (e.g., Zaller 1992) lead us to expect that fraud claims would have a deleterious effect, several streams of previous research suggest that political leaders may have a limited ability to alter citizens’ attitudes about the legitimacy of foundational political institutions like elections by inventing accusations of fraud. First, previous work in this area suggests that unsubstantiated claims of widespread voter fraud may have little effect on public attitudes. Most notably, recent studies of the 2016 US presidential election using panel designs provide mixed evidence on the effect of voter-fraud claims. Despite Donald Trump’s frequent (and unsubstantiated) claims of voter fraud before that election, Trump voters’ confidence in elections did not measurably change and Democrats’ confidence in elections actually increased pre-election, possibly in response to Trump’s claims (Sinclair, Smith and Tucker 2018). After the election, confidence in elections actually increased and belief in illicit voting decreased among Trump supporters (a classic “winner effect”) while confidence of Clinton’s voters remained unchanged (Levy 2020).

Second, there is reason to be skeptical about claims that political leaders can alter citizens’ attitudes so easily by alleging fraud. Studies find, for instance, that presidents struggle to change public opinion on most topics despite extensive efforts to do so (Edwards 2006; Franco, Grimmer and Lim n.d.). Moreover, they may face electoral sanctions for challenging democratic norms. Reeves and Rogowski (2016, 2018), for instance, argue that leaders are punished for acquiring or exercising power in norm-violating ways. Similarly, conjoint studies by Carey et al. (2020) and Graham and Svolik (2020) show that voters punish candidates for democratic norm violations, though the magnitude of these punishments is modest and voters may be more willing to apply them to the opposition party.

Finally, even the psychological factors and message effects that make people vulnerable to claims of fraud, such as directionally motivated reasoning, framing, and elite cues, face boundary conditions (Cotter, Lodge and Vidigal 2020; Druckman 2001; Nicholson 2011). For instance, fact-checking may help promote accurate beliefs and reduce the scope of misinformation effects (Nyhan and Reifler 2017; Wood and Porter 2019), though the effect of corrections on broader attitudes and behavioral intentions is less clear (Nyhan et al. 2019, 2014). In addition, respondents may distrust the sources of these claims and in turn discount the messages they offer.

In short, while politicians undoubtedly make unfounded claims of voter fraud, available evidence is less clear about whether such claims affect citizens’ faith in elections. Our study addresses the limitations of prior panel studies and provides direct experimental evidence of the effects of unfounded accusations of voter fraud on citizens’ confidence in elections. This approach is most closely related to that of Albertson and Guiler (2020), who show that telling respondents that “experts” believe that the 2016 election was vulnerable to manipulation and fraud increased perceptions of fraud, lowered confidence in the electoral system, and reduced willingness to accept the outcome. However, our study differs in that the accusations we test come from political leaders, a more common source in practice (experts believe voter fraud is exceptionally rare in the USA).

We specifically evaluate the effects of exposure to voter-fraud claims from politicians in the context of the aftermath of the 2018 US midterm elections. Notably, we not only test the effects of such accusations in isolation, but also examine the effects of such exposure when fraud claims are paired with fact-checks from independent experts. This design approach is critical for evaluating potential real-world responses by, for example, social media companies that seek to mitigate harm from voter-fraud claims (Klar 2020).

Our results show that exposure to unsubstantiated claims of voter fraud from prominent Republicans reduces confidence in elections, especially among Republicans and individuals who approve of Donald Trump’s performance in office. Worryingly, exposure to fact-checks that show these claims to be unfounded does not measurably reduce the damage from these accusations. The results suggest that unsubstantiated claims of voter fraud undermine the public’s confidence in elections, particularly when the claims are politically congenial, and that these effects cannot easily be ameliorated by fact-checks or counter-messaging. However, we find no evidence that exposure to these claims reduces support for democracy itself.

From this perspective, unfounded claims of voter fraud represent a dangerous attack on the legitimacy of democratic processes. Even when based on no evidence and countered by non-partisan experts, such claims can significantly diminish the legitimacy of election outcomes among allied partisans. As the Capitol insurrection suggests, diminished respect for electoral outcomes presents real dangers for democracy (e.g., Minnite 2010). If electoral results are not respected, democracies cannot function (Anderson et al. 2005). And even if losers step down, belief in widespread voter fraud threatens to undermine public trust in elections, delegitimize election results, and promote violence or other forms of unrest.

Experimental design

We conducted our experiment among 4,283 respondents in the USA who were surveyed online in December 2018/January 2019 by YouGov (see Online Appendix A for details on the demographic characteristics of the sample, response rate, and question wording). Footnote 2 This research was approved by the institutional review boards of the University of Exeter, the University of Michigan, Princeton University, and Washington University in St. Louis. After a pre-treatment survey, respondents were randomly assigned to view either a series of non-political tweets (placebo); four tweets alleging voter fraud (low dose); the four tweets alleging voter fraud from the low-dose condition plus four additional tweets alleging voter fraud (high dose); or the four tweets from the low-dose condition alleging voter fraud plus four fact-check tweets (low dose + fact-check). Respondents then completed post-treatment survey questions measuring our outcome. Respondents were unaware of treatment condition. There was no missing data for our primary outcome, and minimal missing data for secondary outcomes (between 0.0% and 0.9%). A summary of missing data for outcome measures and moderators can be found in Table A3.
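
For illustration only, the four-arm random assignment described above could be sketched as follows. This is not the authors' assignment code (which is not shown in the article); the variable names and seed are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=2018)  # arbitrary seed, chosen for reproducibility

# Hypothetical respondent frame; the actual sample was surveyed online by YouGov.
n = 4283
respondents = pd.DataFrame({"respondent_id": np.arange(n)})

# Random assignment to the four experimental arms described in the text.
arms = ["placebo", "low_dose", "high_dose", "low_dose_fact_check"]
respondents["condition"] = rng.choice(arms, size=n)

print(respondents["condition"].value_counts())
```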

Immediately after the election, several prominent Republicans, including Florida Governor Rick Scott, Senators Lindsey Graham and Marco Rubio, and Trump himself, made unfounded allegations of voter fraud while counts were still ongoing (Lopez 2018). Tweets from these political elites and fact-checks of the claims were used as the treatment stimuli (see Figure 1 for an example). This design has high external validity, allowing us to show actual claims of voter fraud made by party elites to respondents in the original format in which they were seen by voters.

Figure 1. Example stimulus tweet from the experiment.

To match this format’s external validity, we draw on actual corrections produced by the Associated Press, PBS NewsHour, and NYT Politics, again in the form of tweets (see Online Appendix A). Though these messages do not come from dedicated fact-checking outlets per se, these standalone articles fit within the larger diffusion of the format through the mainstream press (Graves, Nyhan and Reifler 2015) and follow prior work on journalistic corrections (Nyhan et al. 2019; Nyhan and Reifler 2010; Pingree et al. 2018).

Hypotheses and research questions

We expect that exposure to unfounded voter-fraud claims reduces confidence in elections (e.g., Alvarez, Hall and Llewellyn 2008; Hall, Monson and Patterson 2009), the immediate object of criticism, and potentially undermines support for democracy itself (Inglehart 2003). This expectation leads to four preregistered hypotheses and two research questions. Footnote 3

Our first three preregistered hypotheses concern the effect of exposure to voter-fraud allegations. We expect that low (H1a) and high (H2a) doses of exposure to allegations of voter fraud will reduce confidence in elections and that a high dose will have a stronger effect (H3a). The idea that increased message dosage should lead to greater effects is long-standing and intuitive (Arendt 2015; Cacioppo and Petty 1979) and has received some empirical support (e.g., Ratcliff et al. 2019), but evidence is limited for this claim in the domain of politics (Arendt, Marquart and Matthes 2015; Baden and Lecheler 2012; Lecheler and de Vreese 2013; Miller and Krosnick 1996). Higher doses may have diminishing returns in political messaging, with large initial effects among people who have not previously been exposed to similar messages but less additional influence as exposure increases (Markovich et al. 2020).

We also expect the effects of exposure to be greater when the claims are politically congenial (H1b–H3b) given the way pre-existing attitudes affect the processing of new information (e.g., Kunda 1990; Taber and Lodge 2006), including on election/voter fraud (Edelson et al. 2017; Udani, Kimball and Fogarty 2018).

Fact-checks can be effective in counteracting exposure to misinformation (Chan et al. 2017; Fridkin, Kenney and Wintersieck 2015). Our fourth hypothesis therefore predicts that fact-checks can reduce the effects of exposure to a low dose of voter-fraud misinformation on perceived electoral integrity (H4a). We also expect fact-checks will reduce the effects of voter-fraud misinformation more for audiences for whom the fraud messages are politically congenial simply because the initial effects are expected to be larger (H4b).

Finally, we also consider preregistered research questions. First, we ask whether exposure to both a low dose of allegations of voter fraud and fact-checks affects confidence in elections compared to the placebo condition baseline per Thorson (2016) (RQ1a). Second, we test whether this result differs when the claims are politically congenial (RQ1b). Footnote 4 Finally, we examine whether these effects extend beyond attitudes toward electoral institutions and affect support for democracy itself (RQ2). Footnote 5

Methods

To test our main hypotheses, we examine seven survey items that tap into different aspects of election integrity (e.g., “How confident are you that election officials managed the counting of ballots fairly in the election this November?”). Descriptive statistics for all items are shown in Table 1 and complete question wording is shown in Online Appendix A. On average, respondents indicated modestly high levels of confidence in US electoral institutions and election integrity.

Table 1. Measures of Confidence in Elections

Notes: Complete question wordings for all items are provided in Online Appendix A.

Indicates that the item was only asked of respondents who indicated they voted.

§Indicates a composite measure of election confidence that was created using confirmatory factor analysis (see Online Appendix B for estimation details).

Table 2. Effect of Exposure to Voter Fraud Allegations on Election Confidence

Notes: *p < 0.05, **p < 0.01, ***p < 0.005 (two-sided). Ordinary least-squares regression models with robust standard errors. The outcome variable is a composite measure of election confidence created using confirmatory factor analysis (see Online Appendix B for estimation details).

Exploratory factor analysis (EFA) showed that these items scaled together; we therefore created a standardized outcome measure of confidence in the electoral system. All seven items loaded onto a single factor; the absolute value of the factor loadings was greater than 0.6 for all cases and typically larger than 0.8. To identify the latent space, we set the variance of the latent factor to one, allowing all treatment effects to be interpreted as sample standard deviations (SDs). A full discussion of this process is presented in Online Appendix B. Footnote 6
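
The authors' estimation details are in Online Appendix B; purely as a hedged sketch, a single-factor model and standardized factor scores could be produced in Python roughly as follows. The simulated items and the scikit-learn estimator are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulated stand-ins for the seven election-confidence items
# (the real items and their wording are listed in Online Appendix A).
latent = rng.normal(size=4283)
items = pd.DataFrame(
    {f"item_{i}": latent + rng.normal(scale=0.7, size=4283) for i in range(1, 8)}
)

# Single-factor model: one latent score per respondent plus item loadings.
fa = FactorAnalysis(n_components=1)
scores = fa.fit_transform(items).ravel()
loadings = fa.components_.T

# Standardize the scores so treatment effects can be read in sample SDs,
# mirroring the identification choice of fixing the latent variance to one.
confidence = (scores - scores.mean()) / scores.std(ddof=0)
```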

We estimate linear regression models that include only main effects for experimental conditions as well as models that interact treatment indicators with measures for whether voter-fraud misinformation was congenial for respondents. In our original preregistration, we stated we would test the hypotheses related to congeniality by including an interaction term with an indicator for whether or not a respondent is a Republican, which implicitly combines Democrats and independents into a single category. We found that Democrats and independents actually responded quite differently to the treatments and therefore deviate from our preregistration to estimate results separately using all three categories below (the preregistered analysis is provided in Table C3 in the Online Appendix). In addition, we also conducted exploratory analyses using approval of President Trump as an alternative moderator of whether the fraud messages were congenial.
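
A minimal sketch of this estimation strategy is shown below. The data frame, variable names, and the HC2 variance estimator are illustrative assumptions; the article specifies only "robust standard errors" and its exact specification is in the replication materials.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Illustrative data; in practice these would be the standardized outcome,
# assigned condition, and party identification for each respondent.
df = pd.DataFrame({
    "confidence": rng.normal(size=4283),
    "condition": rng.choice(
        ["placebo", "low_dose", "high_dose", "low_dose_fact_check"], size=4283
    ),
    "party": rng.choice(["Democrat", "Independent", "Republican"], size=4283),
})

# Main-effects model: treatment indicators only, heteroskedasticity-robust SEs.
main = smf.ols(
    "confidence ~ C(condition, Treatment(reference='placebo'))", data=df
).fit(cov_type="HC2")

# Interaction model: treatment indicators crossed with party identification,
# testing whether effects differ when the fraud claims are congenial.
interact = smf.ols(
    "confidence ~ C(condition, Treatment(reference='placebo')) * C(party)", data=df
).fit(cov_type="HC2")

print(main.summary())
```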

Finally, for RQ2, we relied on a separate five-item battery measuring commitment to democratic governance reported in Online Appendix A. We analyze both the individual items and two composite scales suggested by our EFA (see Footnote 6).

Deviations from preregistered analysis plan

For transparency, we provide a summary of the deviations from our preregistration here. First, per above, we now examine potential congeniality effects for Republicans, Democrats, and independents separately rather than examining differences between Republicans and all others. Online Appendix C contains the preregistered specification in which Democrats and independents are analyzed together. Second, we present an additional, exploratory test of congeniality using Trump approval as a moderator. Third, we present main effects below for individual items from our outcome measure of election confidence in addition to the composite measure; our preregistration stated that we would report results separately for each dependent variable included in the composite measure in the Online Appendix, but we have included these models in the main text. Fourth, RQ2 deviates from our preregistration in that effects on both election confidence and support for democracy were included as outcomes of interest for H1–H4 and RQ1 pending a preregistered factor analysis of the individual items. As this factor analysis distinguished between these outcomes (see Online Appendix B), we conduct separate analyses for support for democracy. These results are discussed briefly below, but are reported in full in Online Appendix C. A complete discussion of our preregistered analyses as well as deviations is shown in Online Appendix E.

Results

We focus our presentation below on estimated treatment effects for our composite measure of election confidence. However, we present treatment effects for each component outcome measure (exploratory) as well as the composite measure of election confidence (preregistered) in Table 2. Figure 2 shows the effects for the composite measure. Since the composite measure is standardized, the effects can be directly interpreted in terms of SDs.

We find that exposure to the low-dose condition significantly reduced confidence in elections compared to the placebo condition (H1a: β = −0.147 SD, p < 0.005). This pattern also held in the high-dose condition (H2a: β = −0.168 SD, p < 0.005). Footnote 7 However, we fail to reject the null hypothesis of no difference in effects (H3a); the effects of exposure to low versus high doses of tweets alleging voter fraud are not measurably different. This result, which we calculate as the difference in treatment effects between the low-dose and high-dose conditions, is reported in the row in Table 2 labeled “Effect of higher dosage.” Footnote 8
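
One simple way to obtain such a contrast, sketched below with the same hypothetical variable names used earlier (not the authors' code), is to re-estimate the model with the low-dose arm as the reference category, so that the high-dose coefficient equals the difference between the two treatment effects.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Illustrative data frame with the hypothetical names from the earlier sketch.
df = pd.DataFrame({
    "confidence": rng.normal(size=4283),
    "condition": rng.choice(
        ["placebo", "low_dose", "high_dose", "low_dose_fact_check"], size=4283
    ),
})

# With the low-dose arm as the reference category, the high-dose coefficient is
# the difference between the high- and low-dose treatment effects.
dosage = smf.ols(
    "confidence ~ C(condition, Treatment(reference='low_dose'))", data=df
).fit(cov_type="HC2")

print(dosage.params.filter(like="high_dose"))
print(dosage.pvalues.filter(like="high_dose"))
```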

A crucial question in this study is whether the effect of fact-check tweets can offset the effect of the tweets alleging fraud. We find that exposure to fact-checks after a low dose of unfounded voter-fraud claims did not measurably increase election confidence relative to the low-dose condition. As a result, the negative effects of exposure remain relative to the placebo condition.

Specifically, we can reject the null hypothesis of no difference in election confidence between participants exposed to the low dose + fact-check tweets versus those in the placebo condition (RQ1a: β = −0.092 SD, p < 0.05). This effect is negative, indicating the fact-check tweets do not eliminate the harmful effects of exposure to unfounded allegations of fraud on election confidence. Substantively, the effect estimate is smaller than the effect for the low-dose condition with no fact-check tweets described above (H1a: β = −0.147 SD, p < 0.005) but the difference is not reliably distinguishable from zero (H4a: β(low dose + fact-check) − β(low dose) = −0.055 SD, p > 0.05).

Next, we examined the effect of voter-fraud messages on respondents for whom the content of those messages (and the sources who endorse them) would be congenial – the Republican identifiers and leaners whose party was seen as losing the 2018 midterm elections. We estimate how our treatment effects vary by party and by approval of President Trump in Tables C1 and C2 in Online Appendix C. The resulting marginal effect estimates are presented in Figure 3. Footnote 9

Figure 2. Marginal effect of exposure to claims of voter fraud on confidence in elections.

Notes: Difference in means (with 95% CIs) for composite measure of election confidence relative to the placebo condition.

Figure 3. Effect of exposure to claims of voter fraud on election confidence by predispositions.

Notes: Figure 3(a) shows the marginal effect by party of exposure to claims of voter fraud on composite measure of election confidence relative to the placebo condition (Table C1), while Figure 3(b) shows the marginal effect by Trump approval (Table C2). All marginal effect estimates include 95% CIs.

We first analyze the results based on party identification. We find that the effects of exposure to a high dose of voter-fraud misinformation vary significantly by party (H2b; p < 0.01), decreasing voter confidence significantly only among Republicans. By contrast, the effect of the low dosage of four tweets of voter-fraud misinformation is not measurably different between Democrats and Republicans (H1b), though the message’s marginal effect is significant for Republicans (p < 0.01) and not for Democrats. Similarly, the effect of greater dosage of fraud allegations (i.e., high versus low dosage) does not vary measurably by party (H3b).

Results are similar when we consider attitudes toward President Trump as a moderator. The effects of exposure to tweets vary significantly by approval in the high-dose condition (p < 0.005), significantly reducing election confidence only among respondents who approve of Trump. The interaction is not significant for the low-dose condition, though again the effect of the treatment is only significant among Trump approvers. Further, there is insufficient evidence to conclude that the additional effect of exposure to fact-check tweets (versus just the low dose of fraud tweets) varied by Trump approval. However, the dosage effect (low versus high dosage) varied significantly by approval (β = −0.191 SD, p < 0.05). Among disapprovers, additional dosage had no significant effect, but it reduced election confidence significantly among approvers (β = −0.128 SD, p < 0.05).

The size of the effects reported in Figure 3 is worth emphasizing. The high-dose condition, which exposed respondents to just eight tweets, reduced confidence in the electoral system by 0.27 SDs among Republicans and 0.34 SDs among Trump approvers. Even if these treatment effects diminish over time, these results indicate that a sustained diet of exposure to such unfounded accusations could substantially reduce faith in the electoral system.

We also consider whether the effects of fact-check exposure vary between Democrats and Republicans. We find the marginal effect of exposure to fact-checks (comparing the low-dose + fact-check condition to the low-dose condition) does not vary significantly by party (H4b). As a result, the negative effects of the low-dose condition on trust and confidence in elections among Republicans (β = −0.184 SD, p < 0.01) persist if they are also exposed to fact-checks in the low-dose + fact-check condition (β = −0.176 SD, p < 0.05). This pattern replicates when we instead disaggregate by Trump support. We find no measurable difference in the effects of the fact-checks by Trump approval, but the low dose + fact-check reduces election confidence among Trump supporters (β = −0.190 SD, p < 0.005) despite the presence of corrective information, mirroring the effect in the low-dose condition (β = −0.211 SD, p < 0.005). Footnote 10

Finally, we explore whether these treatments affect broader attitudes toward democracy itself. Table C4 in Online Appendix C shows that the effects of the low and high dosage voter-fraud treatments were overwhelmingly null on “Having a strong leader who does not have to bother with Congress and elections,” “Having experts, not government, make decisions,” “Having the army rule,” “Having a democratic political system,” and the perceived importance of living in a country that is governed democratically. Footnote 11 These null effects were mirrored in analyses of heterogeneous treatment effects by party and Trump approval in Online Appendix C.

Conclusion

This study presents novel experimental evidence of the effect of unsubstantiated claims of voter fraud on public confidence in elections. Using a large, nationally representative sample collected after the 2018 US elections, we show that respondents exposed to either low or high doses of voter-fraud claims reported less confidence in elections than those in a placebo condition, though there was no evidence that the treatments affected attitudes toward democracy more generally. These effects varied somewhat by party. Exposure significantly reduced confidence in elections only among Republicans and Trump supporters, though these effects only differed measurably by party or Trump approval in the high-dosage condition.

Worryingly, we found little evidence that fact-check tweets measurably reduced the effects of exposure to unfounded voter-fraud allegations. Adding corrections to the low-dose condition did not measurably reduce the effects of exposure. As a result, both Republicans and Trump approvers reported significantly lower confidence in elections after exposure to a low dose of voter-fraud allegations even when those claims were countered by fact-checks (compared to those in a placebo condition). These findings reinforce previous research on the potential lasting effects of exposure to misinformation even after it is discredited (e.g., Thorson 2016). Our findings also contribute to the growing understanding of the seemingly powerful role of elites in promoting misinformation (Weeks and Gil de Zúñiga 2019) and other potentially damaging outcomes such as conspiracy beliefs (Enders and Smallpage 2019) and affective polarization (Iyengar et al. 2019).

Future work could address a number of limitations in our study and build on our findings in several important ways. First, our treatment and dosage designs were solely based on social media posts. Additional research could explore whether media reports or editorials echoing accusations from political elites have greater effects (e.g., Coppock et al. 2018). Second, journalistic corrections could likewise be strengthened. Corrections from in-group media may be more influential; in the present case, dismissal of fraud claims by outlets like The Weekly Standard or The Daily Caller could be more credible among Republican respondents. Similarly, dismissals from prominent Republican officials themselves might be more influential as they signal intra-party disagreement (Lyons 2018) – a costly signal, particularly for those who have shifted positions on the issue (Baum and Groeling 2009; Benegal and Scruggs 2018; Lyons et al. 2019). However, such messengers may alternatively be subject to negative evaluation by way of a “black sheep effect” (Matthews and Dietz-Uhler 1998) and could be less effective for Republicans in particular (Agadjanian 2020). Third, our study examines messages that were congenial for Republicans. Though we sought to test the effects of fraud claims from the sources who have most frequently made them, a future study should also test the congeniality hypotheses we develop using Democrats as well.

While our data focuses on the US case, we strongly encourage future work to examine both the prevalence of electoral fraud claims and the effects of such claims comparatively. We suspect that there is important cross-national variation in how frequently illegitimate electoral fraud claims are made. While we would expect our central finding – exposure to elite messages alleging voter fraud undermines confidence in elections – would be replicated in other locales, there may be important nuance or scope conditions that additional cases would help reveal. For instance, variation in electoral rules and candidates’ resulting relative dependence on the media to communicate with voters may shape the nature of fraud claims themselves (Amsalem et al. 2017). Journalistic fact-checking may be generally more effective in countries with less polarized attitudes toward the media (Lyons et al. 2020). Moreover, variation in party systems may affect the consequences party elites face for making fraudulent claims; when parties control ballot access, it may be easier to constrain problematic rhetoric in the first place (Carson and Williamson 2018). In addition, proportional representation (PR) systems may change the strategic calculus of using rhetoric that attacks election legitimacy because losing parties still may have access to power through coalition bargaining (and voters in PR systems may be able to more easily punish norm violations by defecting to ideologically similar parties). Finally, many countries have dramatically more fluid party attachments than the USA; when party attachment is consistently weaker (Huddy, Bankert and Davies 2018), fraud claims may simply carry less weight.

It is also important to consider the potential for expressive responding (Schaffner and Luks 2018; but see Berinsky 2018), which future work might rule out by soliciting higher stakes outcomes of interest (e.g., willingness to pay additional taxes to improve election security). Future research could also test the effects of allegations in a pre-election context and possibly examine effects on turnout or participation intentions. Finally, the COVID-19 pandemic highlights the importance of considering the effect of fraud allegations directed at mail voting and ballot counting, which may be especially vulnerable to unfounded allegations.

Still, our study provides new insight into the effects of unsubstantiated claims of voter fraud. We demonstrate that these allegations can undermine confidence in elections, particularly when the claims are politically congenial, and may not be effectively mitigated by fact-checking. In this way, the proliferation of unsubstantiated claims of voter fraud threatens to undermine confidence in electoral integrity and contribute to the erosion of US democracy.

Supplementary material

To view supplementary material for this article, please visit https://doi.org/10.1017/XPS.2021.18

Data availability

The data, code, and any additional materials required to replicate all analyses in this article are available at the Journal of Experimental Political Science Dataverse within the Harvard Dataverse Network, at: doi: 10.7910/DVN/530JGJ.

Conflicts of interest

The authors declare no conflicts of interest.

Footnotes

We thank Democracy Fund and the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement No. 682758) for funding support. We are also grateful for funding support from the Weidenbaum Center on the Economy, Government, and Public Policy at Washington University in St. Louis. All conclusions and any errors are our own.

1 We use data collected in the aftermath of the 2018 US midterm election to examine how exposure to Republican claims of voter fraud affects confidence in elections and support for democracy. An advantage of this design is that while such claims were made, they were far less common than in 2020, allowing us to better isolate the effect of exposure.

2 This survey also included orthogonal studies reported in Guess et al. (2020).

3 We provide a “populated pre-analysis plan” (Duflo et al. 2020) and a link to the preregistration in Online Appendix E (the relevant preregistered hypotheses and analysis plan for this study appear in Section E). It is important to clarify that the preregistration is time-stamped February 20, 2019 even though data were collected in December 2018/January 2019. However, it was filed prior to data delivery from YouGov, which was withheld until February 27, 2019 – after the preregistration was filed. (The letter documenting the delivery date is provided here: https://osf.io/9y8db/.)

4 These RQs compare the low-dose + fact-check condition to the placebo condition while H4a and H4b compare the low-dose condition to the low-dose + fact-check condition.

5 These RQs deviate from our preregistration by splitting confidence in elections and support for democracy into separate outcome variables. Originally, we preregistered that H1–H4 and RQ1 would apply to “confidence in elections and (emphasis added) support for democracy.” However, this statement was based on a preregistered factor analysis of the individual items reported in Online Appendix B. As the factor analysis distinguished between these items, we include both (as per our preregistration) but examine them separately. Adding an RQ is meant to aid the reader’s understanding of which analyses apply to which outcome variable. See also Footnote 6.

6 As noted in Appendix B, our preregistered approach was to include seven items measuring election confidence and five additional items measuring support for democracy. We noted that if these separate batteries “represent a single construct” we would combine them into a single composite measure. Our preregistration did not specify what would be done if the items did not scale onto a single dimension. As shown in the appendix, EFA indicated that the seven election confidence items did relate to a single underlying construct. Our main analysis therefore focuses on this measure. However, the five remaining items scaled onto two separate dimensions. For the sake of completeness, we therefore analyze both those five individual items and the two composite measures that correspond to the indicated dimensions in Online Appendix C (Tables C4–C7). This approach represents a deviation from our preregistration in that we did not specify how we would proceed under these circumstances.

7 We did not conduct any power analyses in advance. However, Online Appendix F uses the DeclareDesign approach to approximate the power of our design and provide context for interpreting these results (Blair et al. 2019). These simulations show that we are well powered to detect main effects of approximately or larger, which is consistent with our estimates for H1a and H2a. However, the low-dose + corrections condition (RQ1a) falls below this threshold. In addition, despite our large sample, we are powered to detect only fairly large interaction terms (larger than approximately 0.25). The design is sufficiently powered to detect an estimand similar in magnitude to the estimate we report for the high-dose Democrat interaction in Table C1. However, we are not powered to detect interactions if the true estimand is as small as the estimate for the low-dose Democrat interaction.
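
The appendix uses DeclareDesign, an R package, for these simulations; purely as an illustrative analogue (not the authors' procedure), a Monte Carlo power check for a standardized main effect could be sketched in Python as follows, with effect sizes and arm sizes chosen only for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def simulated_power(effect_sd, n_per_arm=1070, sims=2000, alpha=0.05):
    """Share of simulated two-arm experiments in which a t-test detects the effect."""
    hits = 0
    for _ in range(sims):
        control = rng.normal(0.0, 1.0, n_per_arm)
        treated = rng.normal(effect_sd, 1.0, n_per_arm)
        _, p_value = stats.ttest_ind(treated, control)
        hits += p_value < alpha
    return hits / sims

# Illustrative effect sizes in SD units, with roughly one quarter of the sample per arm.
print(simulated_power(0.10), simulated_power(0.15))
```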

8 Effects for individual outcome measures are generally but not uniformly consistent with these patterns. Most notably, none of the treatments had an effect on beliefs that ballots are secure from tampering, a claim that was not questioned in the stimuli shown to respondents.

9 Our analysis of effects by party deviates from our preregistered analysis by examining Democrats and independents separately. We discuss this in greater detail in Online Appendix C, which also contains the preregistered specification in which they are analyzed together. In addition, our analysis of effect by Trump approval is exploratory.

10 As preregistered, we include additional analyses of other possible moderators of the effects of voter-fraud message exposure in Online Appendix D (see Tables D3–D9). These moderators include trust in and feelings toward the media, feelings toward Trump, conspiracy predispositions, political interest and knowledge, and pre-treatment visits to fake news sites and fact-checking sites. We find little evidence of additional heterogeneity, suggesting that the primary moderator is partisanship. A fully populated preregistration is reported in Online Appendix E (Duflo et al. 2020).

11 We find reduced support at the level for a composite measure of support for alternatives to democracy among respondents exposed to four tweets claiming voter fraud and four fact-check tweets. All results in Table C4 are otherwise null. To assess the precision of these estimates, we estimate results from two one-sided equivalence tests at the 95% level. Across the outcome measures for which we obtain null results (all of which are measured on a 1–4 scale), we can confidently rule out effects of 0.09 or smaller for the low-dose condition (0.11 SD), 0.11 or smaller for the high-dose condition (0.11 SD), and 0.16 or smaller for the low-dose + fact-check condition (0.20 SD).
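
As a hedged illustration of the equivalence-testing logic (two one-sided tests) rather than the authors' exact procedure, statsmodels provides ttost_ind; the simulated data and the ±0.10 bound below are purely illustrative.

```python
import numpy as np
from statsmodels.stats.weightstats import ttost_ind

rng = np.random.default_rng(5)

# Simulated stand-ins for a 1-4 democracy-support item in two experimental arms.
placebo = rng.integers(1, 5, size=1000).astype(float)
treated = rng.integers(1, 5, size=1000).astype(float)

# Two one-sided tests (TOST): reject the hypothesis that the true difference lies
# outside the illustrative equivalence bounds of -0.10 and +0.10 scale points.
p_overall, lower_test, upper_test = ttost_ind(treated, placebo, low=-0.10, upp=0.10)
print(p_overall)
```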

References

Agadjanian, Alexander. 2020. When Do Partisans Stop Following the Leader? Political Communication 1–19. doi: 10.1080/10584609.2020.1772418.
Albertson, Bethany and Guiler, Kimberly. 2020. Conspiracy Theories, Election Rigging, and Support for Democratic Norms. Research & Politics 7(3): 2053168020959859.
Alvarez, R. Michael, Hall, Thad E. and Llewellyn, Morgan H. 2008. Are Americans Confident their Ballots are Counted? The Journal of Politics 70(3): 754–766.
Amsalem, Eran, Sheafer, Tamir, Walgrave, Stefaan, Loewen, Peter John and Soroka, Stuart N. 2017. Media Motivation and Elite Rhetoric in Comparative Perspective. Political Communication 34(3): 385–403.
Anderson, C. J., Blais, A., Bowler, S., Donovan, T. and Listhaug, O. 2005. Losers’ Consent: Elections and Democratic Legitimacy. Oxford: Oxford University Press.
Arendt, Florian. 2015. Toward a Dose-Response Account of Media Priming. Communication Research 42(8): 1089–1115.
Arendt, Florian, Marquart, Franziska and Matthes, Jörg. 2015. Effects of Right-Wing Populist Political Advertising on Implicit and Explicit Stereotypes. Journal of Media Psychology 27(4): 178–189.
Baden, Christian and Lecheler, Sophie. 2012. Fleeting, Fading, or Far-Reaching? A Knowledge-Based Model of the Persistence of Framing Effects. Communication Theory 22(4): 359–382.
Baum, Matthew A. and Groeling, Tim. 2009. Shot by the Messenger: Partisan Cues and Public Opinion Regarding National Security and War. Political Behavior 31(2): 157–186.
Benegal, Salil D. and Scruggs, Lyle A. 2018. Correcting Misinformation about Climate Change: The Impact of Partisanship in an Experimental Setting. Climatic Change 148(1): 61–80.
Berinsky, Adam J. 2018. Telling the Truth about Believing the Lies? Evidence for the Limited Prevalence of Expressive Survey Responding. Journal of Politics 80(1): 211–224.
Berlinski, Nicolas, Doyle, Margaret, Guess, Andrew M., Levy, Gabrielle, Lyons, Benjamin, Montgomery, Jacob M., Nyhan, Brendan and Reifler, Jason. 2021. Replication Data for: The Effects of Unsubstantiated Claims of Voter Fraud on Confidence in Elections. Retrieved from https://doi.org/10.7910/DVN/530JGJ.
Blair, Graeme, Cooper, Jasper, Coppock, Alexander and Humphreys, Macartan. 2019. Declaring and Diagnosing Research Designs. American Political Science Review 113(3): 838–859.
Cacioppo, John T. and Petty, Richard E. 1979. Effects of Message Repetition and Position on Cognitive Response, Recall, and Persuasion. Journal of Personality and Social Psychology 37(1): 97.
Carey, John, Clayton, Katherine, Helmke, Gretchen, Nyhan, Brendan, Sanders, Mitchell and Stokes, Susan. 2020. Who will Defend Democracy? Evaluating Tradeoffs in Candidate Support among Partisan Donors and Voters. Journal of Elections, Public Opinion and Parties 1–16. doi: 10.1080/17457289.2020.1790577.
Carson, Jamie L. and Williamson, Ryan D. 2018. Candidate Emergence in the Era of Direct Primaries. In Routledge Handbook of Primary Elections, ed. R. Boatright. New York, USA: Routledge, 1–15.
Chan, Man-pui Sally, Jones, Christopher R., Jamieson, Kathleen Hall and Albarracín, Dolores. 2017. Debunking: A Meta-Analysis of the Psychological Efficacy of Messages Countering Misinformation. Psychological Science 28(11): 1531–1546.
Coppock, Alexander, Ekins, Emily and Kirby, David. 2018. The Long-Lasting Effects of Newspaper Op-Eds on Public Opinion. Quarterly Journal of Political Science 13(1): 59–87.
Cotter, Ryan G., Lodge, Milton and Vidigal, Robert. 2020. The Boundary Conditions of Motivated Reasoning. In The Oxford Handbook of Electoral Persuasion, eds. E. Suhay, B. Grofman, and A. H. Trechsel. Oxford, USA: Oxford University Press. Online only publication, doi: 10.1093/oxfordhb/9780190860806.013.3.
Druckman, James N. 2001. The Implications of Framing Effects for Citizen Competence. Political Behavior 23(3): 225–256.
Duflo, Esther, Banerjee, Abhijit, Finkelstein, Amy, Katz, Lawrence F., Olken, Benjamin A. and Sautmann, Anja. 2020. In Praise of Moderation: Suggestions for the Scope and Use of Pre-analysis Plans for RCTs in Economics. National Bureau of Economic Research Working Paper, April 2020.
Edelson, Jack, Alduncin, Alexander, Krewson, Christopher, Sieja, James A. and Uscinski, Joseph E. 2017. The Effect of Conspiratorial Thinking and Motivated Reasoning on Belief in Election Fraud. Political Research Quarterly 70(4): 933–946.
Edwards, George C. 2006. On Deaf Ears: The Limits of the Bully Pulpit. New Haven, CT: Yale University Press.
Enders, Adam M. and Smallpage, Steven M. 2019. Informational Cues, Partisan-Motivated Reasoning, and the Manipulation of Conspiracy Beliefs. Political Communication 36(1): 83–102.
Franco, Annie, Grimmer, Justin and Lim, Chloe. n.d. The Limited Effect of Presidential Public Appeal. Downloaded May 15, 2020 from https://www.dropbox.com/s/5ym94ggasbff260/public.pdf?dl=0.
Fridkin, Kim, Kenney, Patrick J. and Wintersieck, Amanda. 2015. Liar, Pants on Fire: How Fact-checking Influences Citizens’ Reactions to Negative Advertising. Political Communication 32(1): 127–151.
Goodman, Jack. 2021. Myanmar Coup: Does the Army Have Evidence of Voter Fraud? BBC, February 2, 2021. Downloaded February 13, 2021 from https://www.bbc.com/news/55918746.
Graham, Matthew H. and Svolik, Milan W. 2020. Democracy in America? Partisanship, Polarization, and the Robustness of Support for Democracy in the United States. American Political Science Review 114(2): 392–409.
Graves, Lucas, Nyhan, Brendan and Reifler, Jason. 2015. The Diffusion of Fact-checking. Vol. 22. Arlington, VA: American Press Institute.
Guess, Andrew M., Lerner, Michael, Lyons, Benjamin, Montgomery, Jacob M., Nyhan, Brendan, Reifler, Jason and Sircar, Neelanjan. 2020. A Digital Media Literacy Intervention Increases Discernment between Mainstream and False News in the United States and India. Proceedings of the National Academy of Sciences 117(27): 15536–15545.
Hall, Thad E., Monson, J. Quin and Patterson, Kelly D. 2009. The Human Dimension of Elections: How Poll Workers Shape Public Confidence in Elections. Political Research Quarterly 62(3): 507–522.
Huddy, Leonie, Bankert, Alexa and Davies, Caitlin. 2018. Expressive Versus Instrumental Partisanship in Multiparty European Systems. Political Psychology 39: 173–199.
Hyde, Susan D. 2011. The Pseudo-Democrat’s Dilemma: Why Election Observation Became an International Norm. Ithaca, NY: Cornell University Press.
Inglehart, Ronald. 2003. How Solid Is Mass Support for Democracy: And How Can We Measure It? PS: Political Science and Politics 36(1): 51–57.
Iyengar, Shanto, Lelkes, Yphtach, Levendusky, Matthew, Malhotra, Neil and Westwood, Sean J. 2019. The Origins and Consequences of Affective Polarization in the United States. Annual Review of Political Science 22: 129–146.
Klar, Rebecca. 2020. Warning Label Added to Trump Tweet Over Potential Mail-in Voting Disinformation. The Hill, September 17, 2020. Downloaded November 6, 2020 from https://thehill.com/policy/technology/516890-warning-label-added-to-trump-tweet-over-potential-mail-in-voting.
Kunda, Ziva. 1990. The Case for Motivated Reasoning. Psychological Bulletin 108(3): 480–498.
Lecheler, Sophie and de Vreese, Claes H. 2013. What a Difference A Day Makes? The Effects of Repetitive and Competitive News Framing Over Time. Communication Research 40(2): 147–175.
Levy, Morris. 2020. Winning Cures Everything? Beliefs about Voter Fraud, Voter Confidence, and the 2016 Election. Electoral Studies 102156. doi: 10.1016/j.electstud.2020.102156.
Lopez, German. 2018. The Florida Voter Fraud Allegations, Explained. Vox, November 12, 2018. Downloaded February 16, 2020 from https://www.vox.com/policy-and-politics/2018/11/12/18084786/florida-midterm-elections-senate-governor-results-fraud.
Lyons, Ben, Mérola, Vittorio, Reifler, Jason and Stoeckel, Florian. 2020. How Politics Shape Views Toward Fact-checking: Evidence from Six European Countries. The International Journal of Press/Politics 25(3): 469–492.
Lyons, Benjamin A. 2018. When Readers Believe Journalists: Effects of Adjudication in Varied Dispute Contexts. International Journal of Public Opinion Research 30(4): 583–606.
Lyons, Benjamin A., Hasell, Ariel, Tallapragada, Meghnaa and Jamieson, Kathleen Hall. 2019. Conversion Messages and Attitude Change: Strong Arguments, Not Costly Signals. Public Understanding of Science 28(3): 320–338.
Markovich, Zachary, Baum, Matthew A., Berinsky, Adam J., de Benedictis-Kessner, Justin and Yamamoto, Teppei. 2020. Dynamic Persuasion: Decay and Accumulation of Partisan Media Persuasion.
Matthews, Douglas and Dietz-Uhler, Beth. 1998. The Black-Sheep Effect: How Positive and Negative Advertisements Affect Voters’ Perceptions of the Sponsor of the Advertisement. Journal of Applied Social Psychology 28(20): 1903–1915.
Miller, Joanne M. and Krosnick, Jon A. 1996. News Media Impact on the Ingredients of Presidential Evaluations: A Program of Research on the Priming Hypothesis. In Political Persuasion and Attitude Change, eds. D. C. Mutz, P. M. Sniderman, and R. A. Brody. Ann Arbor, MI: University of Michigan Press, 79–100.
Minnite, Lorraine C. 2010. The Myth of Voter Fraud. Ithaca, NY: Cornell University Press.
Nicholson, Stephen P. 2011. Dominating Cues and the Limits of Elite Influence. The Journal of Politics 73(4): 1165–1177.
Norris, Pippa. 2014. Why Electoral Integrity Matters. Cambridge: Cambridge University Press.
Nyhan, Brendan, Porter, Ethan, Reifler, Jason and Wood, Thomas J. 2019. Taking Fact-checks Literally but not Seriously? The Effects of Journalistic Fact-checking on Factual Beliefs and Candidate Favorability. Political Behavior 42(3): 939–960.
Nyhan, Brendan and Reifler, Jason. 2010. When Corrections Fail: The Persistence of Political Misperceptions. Political Behavior 32(2): 303–330.
Nyhan, Brendan and Reifler, Jason. 2017. Do People Actually Learn from Fact-checking? Evidence from a Longitudinal Study during the 2014 Campaign. Unpublished Manuscript.
Nyhan, Brendan, Reifler, Jason, Richey, Sean and Freed, Gary L. 2014. Effective Messages in Vaccine Promotion: A Randomized Trial. Pediatrics 133(4): e835–e842.
Paddock, Richard C. 2019. Indonesia Court Rejects Presidential Candidate’s Voting Fraud Claims. New York Times, June 27, 2019. Downloaded May 4, 2020 from https://www.nytimes.com/2019/06/27/world/asia/indonesia-widodo-prabowo-election-fraud.html.
Pingree, Raymond J., Watson, Brian, Sui, Mingxiao, Searles, Kathleen, Kalmoe, Nathan P., Darr, Joshua P., Santia, Martina and Bryanov, Kirill. 2018. Checking Facts and Fighting Back: Why Journalists Should Defend Their Profession. PLoS One 13(12): e0208600.
Ratcliff, Chelsea L., Jensen, Jakob D., Scherr, Courtney L., Krakow, Melinda and Crossley, Kaylee. 2019. Loss/gain Framing, Dose, and Reactance: A Message Experiment. Risk Analysis 39(12): 2640–2652.
Reeves, Andrew and Rogowski, Jon C. 2016. Unilateral Powers, Public Opinion, and the Presidency. The Journal of Politics 78(1): 137–151.
Reeves, Andrew and Rogowski, Jon C. 2018. The Public Cost of Unilateral Action. American Journal of Political Science 62(2): 424–440.
Savarese, Mauricio. 2018. Leading Brazil Candidate Says He Fears Electoral Fraud. Associated Press, September 17, 2018. Downloaded February 16, 2020 from https://apnews.com/d75824e19eac49d9b6f2cfec8d9daf12/Leading-Brazil-candidate-says-he-fears-electoral-fraud.
Schaffner, Brian F. and Luks, Samantha. 2018. Misinformation or Expressive Responding? What an Inauguration Crowd Can Tell Us about the Source of Political Misinformation in Surveys. Public Opinion Quarterly 82(1): 135–147.
Sinclair, Betsy, Smith, Steven S. and Tucker, Patrick D. 2018. ‘It’s Largely a Rigged System’: Voter Confidence and the Winner Effect in 2016. Political Research Quarterly 71(4): 854–868.
Taber, Charles S. and Lodge, Milton. 2006. Motivated Skepticism in the Evaluation of Political Beliefs. American Journal of Political Science 50(3): 755–769.
Thorson, Emily. 2016. Belief Echoes: The Persistent Effects of Corrected Misinformation. Political Communication 33(3): 460–480.
Udani, Adriano, Kimball, David C. and Fogarty, Brian. 2018. How Local Media Coverage of Voter Fraud Influences Partisan Perceptions in the United States. State Politics & Policy Quarterly 18(2): 193–210.
Weeks, Brian E. and Gil de Zúñiga, Homero. 2019. What’s Next? Six Observations for the Future of Political Misinformation Research. American Behavioral Scientist 65(2): 277–289.
Wood, Thomas and Porter, Ethan. 2019. The Elusive Backfire Effect: Mass Attitudes’ Steadfast Factual Adherence. Political Behavior 41(1): 135–163.
Zaller, John. 1992. The Nature and Origins of Mass Opinion. Cambridge: Cambridge University Press.