The 2020 US election featured an incumbent president with a well-documented penchant for spreading falsehoods (Kessler Reference Kessler2020) and culminated in an insurrection that policymakers have attributed to misinformation about the “big lie” of election fraud (Dillard Reference Dillard2020; US House Committee on Energy and Commerce Staff, 2021). While absolute levels of exposure to online misinformation are lower than media reports have suggested (Guess, Nagler, and Tucker Reference Guess, Nagler and Tucker2019), exposure to unreliable sources continues to rise (Fischer Reference Fischer2020). Belief in falsehoods may shape political attitudes and behavior in ways that damage democratic accountability (Einstein and Hochschild Reference Einstein and Hochschild2015; Nyhan Reference Nyhan2020), with some even suggesting that misinformation directly influences evaluations of political figures, thereby influencing election outcomes (Gunther, Beck, and Nisbet Reference Gunther, Beck and Nisbet2019; Jamieson Reference Jamieson2018).
Both traditional and social media have turned to fact-checks to respond to this onslaught of misinformation (Chan et al. Reference Chan2017; Graves Reference Graves2016; Walter et al. Reference Walter2019). For example, in advance of the 2020 election, Facebook announced that it would partner with fact-checking organizations to review dubious claims and promote fact-checks to its users (Culliford Reference Culliford2019). These and similar efforts are premised on the assumption that at a crucial moment of democratic reckoning—high-intensity political campaigns—factual corrections offer an effective countermeasure against misinformation.
Prior academic work on the effects of fact-checks has established, to varying degrees of certainty and generality, four important findings. First, fact-checks reliably improve factual belief accuracy, while misinformation degrades it (Chan et al. Reference Chan2017; Guess et al. Reference Guess2020; Porter and Wood Reference Porter and Wood2019; Wood and Porter Reference Wood and Porter2018). Second, although these effects fade with time, they remain detectable after immediate exposure (Berinsky Reference Berinsky2015; Porter and Wood Reference Porter and Wood2021; Swire et al. Reference Swire2017). Third, the downstream effects of fact-checks on attitudes toward the fact-checked politician or group are minuscule at best (Nyhan et al. Reference Nyhan2019; but see Loomba et al. Reference Loomba2021; Wintersieck Reference Wintersieck2017). Fourth, Democrats and Republicans both respond to fact-checks by becoming more accurate, though with important partisan asymmetries: Democrats appear to respond more strongly to fact-checks of partisan-congenial information than do Republicans (Edelson et al. Reference Edelson2021; Guay et al. Reference Guay2020; Jennings and Stroud Reference Jennings and Stroud2021; Shin and Thorson Reference Shin and Thorson2017). The goal of the present study is to systematically reconfirm these findings, while extending them to one of the most politically important settings for fact-checks: the 2020 US presidential election. While prior research has investigated the effects of fact-checks and misinformation on factual beliefs during other election seasons, no work of which we are aware investigates these particular questions at comparable scale and timing. Most work on fact-checks has evaluated single corrections, sometimes outside the context of elections (Nyhan et al. Reference Nyhan2019; Wintersieck Reference Wintersieck2017). 
Since party leaders' messages (including false statements) are magnified during campaigns—as is the influence of these messages on voters' beliefs and attitudes (Lenz Reference Lenz2012)—we might be concerned that real-world fact-checks deployed during campaigns would have smaller effects. In particular, a persistent concern is that the partisan polarization and media fragmentation that characterized the 2020 election cycle might render fact-checks ineffective.
To study fact-checking during the 2020 election season, we conducted eight preregistered panel experiments (total N = 17,681). For each panel study, we selected fact-check treatments from the set of recent articles produced by the nonpartisan organization PolitiFact, along with the corresponding misinformation treatments in their original forms (including Facebook posts, videos, and “fake news” articles). In total, we evaluated the effects of twenty-one widely disseminated pieces of misinformation and corresponding fact-checks during the final two months of the 2020 election. We evaluated the effects of these treatments in almost real time: on average, thirteen days separated the first appearance of the misinformation online from the inclusion of that misinformation in our experiments.
We confirm that each of the four prior findings generalizes to the 2020 election. Exposure to misinformation increased false beliefs by an average of 4.3 points on a 100-point belief certainty scale. Exposure to fact-checks more than corrected this effect, decreasing false beliefs by 10.5 points. We show that 66 per cent of the initial effect of fact-checks on factual beliefs is still detectable one week after initial exposure, with 50 per cent detectable after more than two weeks. Both misinformation and fact-checks have very small effects on attitudes toward politicians and political organizations: on 100-point feeling thermometers, the effects of fact-checks and misinformation on subsequent attitudes are smaller than one-quarter of a point. With regard to treatment-effect heterogeneity by partisanship, we find that Democrats became more accurate than Republicans following exposure to fact-checks that conflicted with their partisanship. Lastly, our design enables the systematic study of treatment-effect heterogeneity along multiple dimensions of personality and cognitive style. Here, we find limited heterogeneity in the effects of misinformation but substantial heterogeneity in the effects of fact-checks. In an exploratory analysis, we offer evidence that these heterogeneities may be due to differences in the amount of time different kinds of people spent reading the treatments.
Prior Research
To contextualize our contribution, we briefly review the prior experimental evidence on each of the four empirical findings we replicated. First, recent evidence shows that fact-checks increase belief accuracy, while misinformation degrades it. One meta-analysis (Chan et al. Reference Chan2017) gathered twenty studies with fifty-two independent treatments to show that misinformation decreases accuracy and fact-checks increase it. Porter and Wood (Reference Porter and Wood2019) inspect sixty-three issues overall (including the issue experimentally evaluated by Haglin [Reference Haglin2017]), finding that corrections significantly improved accuracy for more than 96 per cent of issues and observing no corrections "backfiring," or reducing belief accuracy on their own. The scholarly consensus that corrections improve belief accuracy is summarized in Lewandowsky et al. (Reference Lewandowsky2020). On the misinformation side, Guess et al. (Reference Guess2020) randomize exposure to two "fake news" stories, finding that it decreases belief accuracy about the story in question.
Second, prior research shows that the effects of fact-checks endure, at least to some extent, after exposure. Porter and Wood (Reference Porter and Wood2021) recontacted participants in three countries two weeks after initial exposure to fact-checks. For eleven of seventeen issues, belief accuracy increases were still detectable at this later date. A total of 40 per cent of the first-wave effect remained detectable in the second wave. Similarly, Porter, Velez, and Wood (Reference Porter, Velez and Wood2022) recontacted participants in ten countries two weeks after a randomized exposure to a COVID-19-related correction, and conclude that 39 per cent of the correction's effect remained detectable. However, there have been divergent findings on this matter; in a similar multi-wave study, Carey et al. (Reference Carey2022) find that the effects of corrections on COVID-19 belief accuracy do not persist in either the US or Great Britain (albeit across a smaller set of false claims than tested here and in the other studies).
Third, much of the available evidence shows that, at most, corrections and misinformation have very small downstream effects on attitudes. Nyhan et al. (Reference Nyhan2019) found that exposure to a fact-check of a false Donald Trump claim increased belief accuracy, including among Trump supporters, but had no discernible impact on either supporters' or opponents' attitudes toward Trump. Similarly, in their ten-country study of COVID-19 corrections and misinformation, Porter, Velez, and Wood (Reference Porter, Velez and Wood2022) do not find evidence that either corrections or misinformation affected attitudes toward vaccines. However, Wintersieck (Reference Wintersieck2017) finds that evaluations of candidates' debate performance improve when a fact-check shows they made accurate statements during the debate. Guess et al. (Reference Guess2020) find that randomized exposure to misinformation affects self-reported vote intention, but has no impact on feelings toward the media or political parties.
Fourth, studies of corrective interventions have often (though not always) found evidence of partisan asymmetries. Observational evidence of Twitter users during the 2012 election shows Republicans were more likely to tweet fact-checks of Democratic President Obama than Democrats were to tweet fact-checks of Republican candidate Romney (Shin and Thorson Reference Shin and Thorson2017). In an experiment conducted on a mock version of the Facebook news feed, Jennings and Stroud (Reference Jennings and Stroud2021) find that only Democrats respond to fact-checking labels by becoming more accurate (though for a similar study that finds fact-checks making both Democrats and Republicans more accurate, see Porter and Wood Reference Porter and Wood2022). Finally, exposure to an accuracy nudge (an intervention related to, but distinct from, fact-checks) has small to negligible effects on accuracy among conservatives, with its effectiveness concentrated among Democrats and independents (Rathje et al. Reference Rathje2022).
Design
We administered six two-wave panels and two three-wave panels between September and November 2020. Four panels were fielded on Mechanical Turk and four were fielded on Lucid.
The first wave of every panel was structured identically. Before the experimental portion, we measured pretreatment covariates. After answering demographic questions, subjects completed batteries measuring political knowledge, cognitive reflection (the CRT-2), need for cognition, and personality (a ten-question inventory). Since demographic information is provided directly for Lucid respondents, the Mechanical Turk samples were asked more extensive demographic questions whose response categories matched the Lucid categories exactly (for the full instrument, see the Online Appendix). Subjects then participated in three independently randomized experiments, each relating to a separate topic of misinformation. For each topic, respondents were assigned to one of three conditions: pure control, misinformation only, or misinformation followed by a fact-check. In the Online Appendix, we demonstrate that these randomizations generated experimental groups balanced on pretreatment covariates.
Each week throughout the study, PolitiFact shared internal data with us about the popularity of their fact-checks (measured via web traffic). These data informed the selection of the fact-checks used in the experiments. The topics were chosen based on the following criteria. First, each received relatively high traffic on the PolitiFact website (see Figure 1). Second, each panel included one false claim that we anticipated would be congenial to Republicans, one false claim expected to be congenial to Democrats, and a third chosen to tap into unfolding events, regardless of expectations about differential partisan response.
Two-thirds of our tested fact-checks were produced by PolitiFact as part of their partnership with Facebook. Through that partnership, Facebook presented PolitiFact and other fact-checking organizations with a set of widely circulating misinformation, dividing the sets into three tiers based on internal data. The most popular pieces were in the first tier, while the least popular were in the third. The fourteen Facebook partnership fact-checks in our experiment were all from the first tier.
Participants saw the misinformation and fact-checks in as close to their original form as possible, including transcripts, Facebook posts, tweets, video, and “fake news” articles. The fact-checks adhered to the format and text used by PolitiFact. They included the PolitiFact logo, a headline, and a graphic illustrating the verdict (for example, “Pants on Fire”). All text and images in the fact-checks were taken from PolitiFact. The complete set of stimuli are in the Online Appendix.
Figure 1 shows the cumulative page views of each of the tested fact-checks, by congeniality. On average, fact-checks of Republican-congenial misinformation (shown in the top panel) received 138,971 views over the eight weeks following the original post. By contrast, the average fact-check of Democratic-congenial misinformation received 45,123 views. This pattern complements descriptive work (Edelson et al. Reference Edelson2021) which shows that during the 2020 election, Republican-congenial misinformation on social media received far more engagement than Democratic-congenial misinformation. Just as Republican-congenial misinformation is more popular than Democratic-congenial misinformation, so are articles debunking that misinformation.
After treatment, all respondents, including those in control conditions, answered questions about their belief in each piece of misinformation, measured via two questions. The first asked how accurate they thought the misinformation was, and the second asked how confident they were in their answer. The use of multiple outcome measures helps mitigate concerns about measurement error (Swire, DeGutis, and Lazer Reference Swire, DeGutis and Lazer2020). Our use of a confidence measure allows us to account for effects on beliefs while being mindful of the way in which limited belief certainty may impact misperceptions (Graham Reference Graham2020) and even lead researchers to conflate uninformed participants with misinformed participants (Pasek, Sood, and Krosnick Reference Pasek, Sood and Krosnick2015). Finally, to assess effects on attitudes, participants were presented with feeling thermometers for the groups and people prominently featured in the misinformation and fact-checks.
At the close of the first wave, we debriefed subjects in the misinformation conditions to inform them that the misinformation was false, showing them the corresponding fact-checks. Since we could not be certain that all participants would return for subsequent surveys, we believed it unethical to expose them to uncorrected misinformation. We therefore do not measure the over-time effect of misinformation alone, but rather the effect of misinformation plus fact-check, relative to control. Evaluating the over-time effect of fact-checked misinformation allows us to address the pressing real-world question of whether the effects of fact-checks in the presence of misinformation endure beyond immediate exposure.
To measure over-time effects, we recontacted participants at least once, with a minimum of seven days separating waves. Six of our eight panels consisted of two waves; the remaining two featured a third wave. Post-treatment waves measured the same set of outcomes as the first wave. In the Online Appendix, we demonstrate that our treatments do not appear to change whether subjects respond to outcome questions, either immediately post-treatment or in subsequent waves, allaying concerns about differential attrition.
Results
We begin our discussion of results with the averages of the belief confidence measure for Democrats and Republicans in the control condition. Figure 1 shows a partisan gap across issues: Republicans are more likely than Democrats to believe all of the Republican-congenial false statements, but Democrats are more likely than Republicans to believe only two of the Democratic-congenial false statements. These results should be interpreted with caution, as they are based on observational data gleaned from convenience samples.
Since our primary interest is in treatment effects, we next turn to the average effects of exposure to misinformation and fact-checks. Figure 2 displays the outcome distributions by topic and condition. Belief certainty is plotted on the vertical axis, ranging from 0 (completely certain the false statement is inaccurate) to 100 (completely certain the statement is accurate). The first column in each graph shows the control condition, the second the group that saw only the misinformation, and the third the group that saw both the misinformation and the fact-check. Overall, exposure to the misinformation significantly decreased accuracy in twelve out of the twenty-four opportunities. Compared to the misinformation condition, twenty of the fact-checks had statistically significant negative average effects on belief certainty and none “backfired,” or increased false beliefs. Using random-effects meta-analysis, we estimate the average misinformation effect (relative to control) to be 4.30 points (standard error = 1.07) and the average fact-check effect (relative to misinformation) to be −10.5 points (standard error = 1.1).
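The pooled estimates above come from a random-effects meta-analysis across issues. As an illustration of the underlying computation, the sketch below implements the standard DerSimonian-Laird estimator; this is our own minimal rendering of the general method, not the paper's actual analysis code, and the input numbers in any usage would be hypothetical.

```python
import numpy as np

def random_effects_meta(effects, ses):
    """DerSimonian-Laird random-effects pooled estimate.

    effects: per-issue treatment-effect estimates (e.g., points on the
             100-point belief certainty scale).
    ses:     the corresponding standard errors.
    Returns (pooled_effect, pooled_standard_error).
    """
    effects = np.asarray(effects, dtype=float)
    v = np.asarray(ses, dtype=float) ** 2
    w = 1.0 / v                                  # inverse-variance weights
    fixed = np.sum(w * effects) / np.sum(w)      # fixed-effect pooled mean
    q = np.sum(w * (effects - fixed) ** 2)       # Cochran's Q heterogeneity statistic
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                # between-issue variance estimate
    w_star = 1.0 / (v + tau2)                    # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se
```

The random-effects specification matters here: it treats each fact-checked issue as drawn from a distribution of true effects, so widely varying per-issue estimates widen the pooled standard error rather than being averaged away.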
While fact-checks reduced false beliefs overall, we observed partisan asymmetries. The first column of Figure 3 isolates the effects of misinformation on beliefs. Being exposed to misinformation increased belief in the false claim by about the same amount across categories, both among Republicans and among Democrats. The second column of Figure 3 shows the effect of fact-checks on belief confidence. The effects are negative for Republicans and Democrats. For false claims congenial to Republicans, the effects are approximately equal in magnitude across party lines. However, Democrats update substantially further than Republicans when corrected about congenial false claims. This asymmetry is the opposite of what a motivated reasoning account would predict, under which partisans resist treatments hostile to their partisan identity; here, Democrats update further than Republicans even though the correction is counter-attitudinal for them.
We thus observe the following partisan asymmetry: once shown counter-attitudinal corrections, Democrats are made more accurate than Republicans. This difference may be attributable to differences in the partisan information environments. An inspection of PolitiFact traffic data (see Figure 1) and available social media data on the tested misinformation indicates that Republican-congenial misinformation may have been circulating more widely than Democratic-congenial misinformation (corroborating Edelson et al.'s [Reference Edelson2021] findings). This pattern could explain why Republicans responded differently than Democrats: Republicans entered our studies more likely to have seen the tested misinformation, with this prior exposure making false beliefs more difficult to dislodge. Differences in responses to counter-attitudinal fact-checking may also be explained by source cues: once exposed to a counter-attitudinal correction, Democrats may have become more accurate than Republicans because they viewed PolitiFact more favorably than did Republicans (Walker and Gottfried Reference Walker and Gottfried2019).
In Figure 4, we turn to the medium-term effects of exposure to misinformation followed by fact-checks (versus control). The top row of panels presents meta-analytic estimates for all issues, conditional on subjects responding in the first and second waves of the study. The effects of fact-checks observed immediately after treatment dissipate somewhat from Wave 1 to Wave 2, persisting on average at 66.4 per cent of the initial magnitude. This pattern of decay is broadly consistent across issues that are congenial to either partisan group and among both Democratic and Republican respondents (for additional estimates, see Figure 9 in the Online Appendix). The bottom row of Figure 4 shows the meta-analytic averages for the six issues in studies with three-wave panels and conditions the analysis on responding in the first and third waves of the panel. On average, the effect in Wave 3 is 50 per cent of the magnitude of the original effect, though the smaller number of issues and subjects increases our uncertainty.
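The persistence figures reported above are simply the ratio of the later-wave pooled effect to the immediate pooled effect. As a toy illustration of the arithmetic (the wave-2 value below is hypothetical, chosen only to show the calculation, not one of the study's estimates):

```python
# Share of the immediate fact-check effect still detectable at recontact.
# wave1_effect is the pooled immediate estimate reported in the text;
# wave2_effect is a hypothetical later-wave pooled estimate.
wave1_effect = -10.5   # immediate effect on belief certainty (points)
wave2_effect = -7.0    # hypothetical effect one week later (points)
persistence = wave2_effect / wave1_effect
print(f"{persistence:.1%} of the initial effect persists")
```

Because both effects are negative (fact-checks reduce false belief), the ratio is positive, and a value below 1 indicates partial decay.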
Next, we consider how the immediate effects of fact-checks and misinformation may differ by the following sources of potential heterogeneity: levels of political interest, political knowledge, need for cognition, performance on the Cognitive Reflection Test, and Big Five personality measures. Figure 5 shows the estimated effects for subjects in the top versus bottom tercile of each index. None of these covariates moderate the effects of misinformation (left panels), but several of them moderate the effects of fact-checks (right panels). This same pattern of heterogeneity persists after one week, though at diminished magnitudes (see the Online Appendix).
We consider two potential explanations for why these traits, in particular, cognitive reflection and need for cognition, are associated with the effectiveness of fact-checks. One possibility is that people high in these traits may have better reasoning skills and therefore be more capable of understanding, evaluating, and applying the evidence presented in fact-checks. A second possibility is that people high in these traits may simply be more attentive to fact-checks.
To begin to distinguish between these two mechanisms, we conduct an exploratory analysis of the relationship between traits and time spent on the misinformation page and the fact-checking page. The results (available in the Online Appendix) show that the within-trait ranking of conditional effects shown in Figure 5 is matched exactly by a within-trait ranking of average time spent reading the fact-check. This pattern leads us to believe that attention paid to fact-checks, rather than fundamental differences in cognitive ability, explains these heterogeneous effects. Of course, this observation merely pushes the question back a step: why do people who score differently on these scales spend different amounts of time reading fact-checks?
Lastly, we find that both fact-checks and misinformation had extremely limited influence on related attitudes. In Figure 6, we display the effects of misinformation and fact-checks on political figures and groups mentioned in the misinformation, overall and conditional by party. As preregistered, we isolate attitudes toward the target of the misinformation and corresponding fact-check. For example, for misinformation alleging that Amy Coney Barrett once made homophobic remarks, we evaluate attitudes toward Barrett in the misinformation and fact-check conditions. Meta-analysis shows that the overall effects of misinformation and fact-checks on attitudes were each smaller than half a point on the 100-point feeling thermometer. While the effects were in the expected direction, with fact-checks making respondents more positive and misinformation more negative, the effects were very small. In the Online Appendix, we show that when we relax assumptions about the target of the misinformation and fact-checks, the same pattern holds.
Discussion
Eight multi-wave experiments extend prior findings about factual corrections and misinformation to the 2020 US election. Consistent with a broad set of prior findings (Lewandowsky et al. Reference Lewandowsky2020; Wintersieck Reference Wintersieck2017), we observe fact-checks improving belief accuracy. Over time, the effects attenuate, but they remain detectable: the effects of fact-checks persisted at 66 per cent of the original magnitude after one week and at 50 per cent after more than two weeks, echoing previous research (Porter and Wood Reference Porter and Wood2021). As other research has also found (Nyhan et al. Reference Nyhan2019), even as corrections durably improve accuracy, they have, at best, minor effects on downstream political attitudes. Finally, in line with evidence attesting to partisan asymmetries in these matters (Guay et al. Reference Guay2020; Shin and Thorson Reference Shin and Thorson2017), we observe Democrats becoming more accurate than Republicans after exposure to corrections that were uncongenial to their partisanship.
Aside from partisanship, do different kinds of people respond differently to misinformation and fact-checks? Here, we find the answer is “no” with respect to misinformation and “yes” with respect to fact-checks. The Big Five personality traits, need for cognition, and political interest all moderate the effect of fact-checks. How can we reconcile homogeneity of misinformation effects with heterogeneity of correction effects? We think the answer lies in how easy it is to process our two treatment types. The misinformation treatments are short, simple, and require limited attention. The fact-checks, by contrast, offer details, evidence, and logical reasoning. Therefore, it is easier (and less time-consuming) for people to process the misinformation treatments than the fact-checks. Consequently, correction effects are consistently higher among subjects who spend more time with the fact-check treatments. Others (Pennycook et al. Reference Pennycook2021) have shown that attention shapes people's response to misinformation. The same appears to be true for fact-checks. Media organizations and fact-checkers should generate factual corrections that are effective at reducing misperceptions while being easy to process.
Several features of our findings suggest avenues for future research. Although misinformation reduces accuracy, on average, the magnitude of the effects of fact-checks is more than twice that of misinformation. Similarly, the effect of misinformation on related attitudes is tiny. Taken together, these results call into question portrayals of misinformation as a substantial barrier to democratic accountability. It is certainly possible that misinformation's harms only emerge cumulatively, after repeated exposure to many false claims, far beyond what we test here. Future research should investigate whether larger quantities of misinformation and corrections have more pronounced effects on subsequent attitudes.
The quantity of items we test stands out as a limitation of the present study. The present study is also limited by its reliance on real-world misinformation; as a result, the stimuli we test are not exactly identical to one another. This limitation should be kept in mind when examining our data on average time spent with stimuli. While we determined that trading off treatment equivalence for realism was worthwhile, we look forward to future research that takes an alternative approach.
Our results show that during the 2020 US election, misinformation degraded accuracy, while corrections improved it by roughly twice the amount, often durably so. The effects of misinformation and corrections on attitudes were negligible. Important heterogeneities emerged in our analysis, with implications for scholars, social media companies, and policymakers.
Supplementary Material
Online appendices are available at: https://doi.org/10.1017/S0007123422000631
Data Availability Statement
Replication data for this article can be found in Harvard Dataverse at: https://doi.org/10.7910/DVN/WQXZUP
Acknowledgments
We are grateful to the team at PolitiFact, especially Angie Holan and Josie Hollingsworth, for partnering with us and sharing data. We received no compensation for the PolitiFact partnership. We thank Andrew Guess, Matt Graham, Gregory Huber, Brendan Nyhan, Thomas Nelson, Gordon Pennycook, Yamil Velez, and the Political Psychology workshop participants at Ohio State University for helpful comments. All mistakes are our own.
Financial Support
This research is supported by the John S. and James L. Knight Foundation through a grant to the Institute for Data, Democracy & Politics at The George Washington University.
Competing Interests
None.