Fake news can have serious consequences for science, society, and the democratic process (Lewandowsky et al., 2017). For example, belief in fake news has been linked to violent intentions (Jolley & Paterson, 2020), lower willingness to get vaccinated against the coronavirus disease 2019 (COVID-19; Roozenbeek, Schneider, et al., 2020), and decreased adherence to public health guidelines (van der Linden, Roozenbeek, et al., 2020). Fake rumours on the WhatsApp platform have inspired mob lynchings (Arun, 2019), and fake news about climate change is undermining efforts to mitigate the biggest existential threat of our time (van der Linden et al., 2017).
In light of this, interest in the “psychology of fake news” has skyrocketed. In this article, we offer a rapid review of how psychological science can help effectively counter the spread of fake news, a research agenda for the field, and the factors to take into account when doing so.
Current Approaches to Countering Misinformation
Scholars have largely offered two different approaches to combating misinformation: one reactive, the other proactive. We review each approach in turn below.
Reactive Approaches: Debunking and Fact-Checking
The first approach concerns debunking and debiasing (Lewandowsky et al., 2012). Debunking misinformation comes with several challenges, as doing so risks reinforcing the (rhetorical frame of the) misinformation itself. A plethora of research on the illusory truth effect suggests that the mere repetition of information increases its perceived truthfulness, making even successful corrections susceptible to unintended consequences (Effron & Raj, 2020; Fazio et al., 2015; Pennycook et al., 2018). Despite popular concerns about potential backfire effects, where a correction inadvertently increases belief in, or reliance on, the misinformation itself, research has not found such effects to be commonplace (e.g., see Ecker et al., 2019; Swire-Thompson et al., 2020; Wood & Porter, 2019). Yet there is reason to believe that debunking misinformation can still be challenging in light of both (politically) motivated cognition (Flynn et al., 2017) and the continued influence effect (CIE), where people continue to retrieve false information from memory despite acknowledging a correction (Chan et al., 2017; Lewandowsky et al., 2012; Walter & Tukachinsky, 2020). In general, effective debunking requires an alternative explanation to help resolve inconsistencies in people’s mental model (Lewandowsky, Cook, et al., 2020). But even when a correction is effective (Ecker et al., 2017; MacFarlane et al., 2020), fact-checks are often outpaced by misinformation, which is known to spread faster and further than other types of information online (Petersen et al., 2019; Vosoughi et al., 2018).
Proactive Approaches: Inoculation Theory and Prebunking
In light of the shortcomings of debunking, scholars have called for more proactive interventions that reduce the likelihood that people believe and share misinformation in the first place. Prebunking describes the process of inoculation, whereby a forewarning combined with a pre-emptive refutation can confer psychological resistance against misinformation. Inoculation theory (McGuire, 1970; McGuire & Papageorgis, 1961) is the most well-known psychological framework for conferring resistance to persuasion. It posits that pre-emptive exposure to a weakened dose of a persuasive argument can confer resistance against future attacks, much like a medical vaccine builds resistance against future illness (Compton, 2013; McGuire, 1964). A large body of inoculation research across domains has demonstrated its effectiveness in conferring resistance against (unwanted) persuasion (for reviews, see Banas & Rains, 2010; Lewandowsky & van der Linden, 2021), including misinformation about climate change (Cook et al., 2017; van der Linden et al., 2017), conspiracy theories (Banas & Miller, 2013; Jolley & Douglas, 2017), and astroturfing by Russian bots (Zerback et al., 2020).
In particular, the distinction between active and passive defences has seen renewed interest (Banas & Rains, 2010). As opposed to traditional passive inoculation, where participants receive the pre-emptive refutation, during active inoculation participants are tasked with generating their own “antibodies” (e.g., counter-arguments), which is thought to engender greater resistance (McGuire & Papageorgis, 1961). Furthermore, rather than inoculating people against specific issues, research has shown that making people aware of both their own vulnerability and the manipulative intent of others can act as a more general strategy for inducing resistance to deceptive persuasion (Sagarin et al., 2002).
Perhaps the most well-known example of active inoculation is Bad News (Roozenbeek & van der Linden, 2019b), an interactive fake news game in which players are forewarned and exposed to weakened doses of the common techniques used in the production of fake news (e.g., conspiracy theories, fuelling intergroup polarization). The game simulates a social media feed and, over the course of 15 to 20 minutes, lets players actively generate their own “antibodies” in an interactive environment. Similar games have been developed for COVID-19 misinformation (Go Viral!, see Basol et al., in press), climate misinformation (Cranky Uncle, see Cook, 2019), and political misinformation during elections (Harmony Square, see Roozenbeek & van der Linden, 2020). A growing body of research has shown that after playing “fake news” inoculation games, people are (a) better at spotting fake news, (b) more confident in their ability to identify fake news, and (c) less likely to report sharing fake news with others in their network (Basol et al., 2020; Roozenbeek, van der Linden, et al., 2020; Roozenbeek & van der Linden, 2019a, 2019b, 2020). Figure 1 shows screenshots from each game.
An Agenda for Future Research on Fake News Interventions
Although these advancements are promising, in this section we outline several open questions to bear in mind when designing and testing interventions aimed at countering misinformation: how long their effectiveness remains detectable, the relevance of source effects, the role of inattention and motivated cognition, and the complexities of developing psychometrically validated instruments to measure how interventions affect susceptibility to misinformation.
The Longevity of Intervention Effects
Reflecting a broader lack of longitudinal studies in behavioural science (Hill et al., 2013; Marteau et al., 2011; Nisa et al., 2019), most research on countering misinformation does not look at effects beyond two weeks (Banas & Rains, 2010). While Swire, Berinsky, et al. (2017) found that most effects had expired one week after a debunking intervention, Guess et al. (2020) report that, three weeks after a media-literacy intervention, effects can either dissipate or endure.
Evidence from studies comparing interventions indicates that expiration rates may vary depending on the method, with inoculation-based effects generally staying intact for longer than narrative, supportive, or consensus-messaging effects (e.g., see Banas & Rains, 2010; Compton & Pfau, 2005; Maertens, Anseel, et al., 2020; Niederdeppe et al., 2015; Pfau et al., 1992). Although some studies have found inoculation effects to decay after two weeks (Compton, 2013; Pfau et al., 2009; Zerback et al., 2020), the literature is converging on an average inoculation effect that lasts for at least two weeks but largely dissipates within six weeks (Ivanov et al., 2018; Maertens, Roozenbeek, et al., 2020). Research on booster sessions indicates that the longevity of effects can be prolonged by repeating interventions or regular assessment (Ivanov et al., 2018; Maertens, Roozenbeek, et al., 2020; Pfau et al., 2006).
Gaining deeper insights into the longevity of different interventions, looking beyond immediate effects, and unveiling the mechanisms behind decay (e.g., interference and forgetting) will help shape future research towards more enduring interventions.
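Such longevity claims can be made directly comparable across intervention types by fitting an explicit decay model to follow-up measurements. Below is a minimal illustrative sketch in Python; the effect sizes are hypothetical and the exponential functional form is an assumption for illustration, not a result taken from the studies cited above (it assumes numpy and scipy are available).

```python
# Illustrative sketch only: hypothetical effect sizes, with exponential decay
# assumed as the functional form. Estimates a half-life for an intervention effect.
import numpy as np
from scipy.optimize import curve_fit

days = np.array([0, 7, 14, 28, 42])                 # time since the intervention
effect = np.array([0.50, 0.42, 0.33, 0.22, 0.12])   # hypothetical effect sizes (Cohen's d)

def exponential_decay(t, d0, rate):
    """Effect size d0 at t = 0, decaying at a constant proportional rate."""
    return d0 * np.exp(-rate * t)

(d0, rate), _ = curve_fit(exponential_decay, days, effect, p0=(0.5, 0.05))
half_life = np.log(2) / rate
print(f"estimated initial effect: {d0:.2f}, half-life: {half_life:.0f} days")
```

Comparing fitted half-lives (or alternative functional forms, such as power-law forgetting curves) across debunking, inoculation, and booster conditions would make claims about differential decay explicitly testable.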
Source Effects
When individuals are exposed to persuasive messages (Petty & Cacioppo, 1986; Wilson & Sherrell, 1993) and evaluate whether claims are true or false (Eagly & Chaiken, 1993), a large body of research has shown that source credibility matters (Briñol & Petty, 2009; Chaiken & Maheswaran, 1994; Maier et al., 2017; Pornpitakpan, 2004; Sternthal et al., 1978). A significant factor contributing to source credibility is similarity between the source and the message receiver (Chaiken & Maheswaran, 1994; Metzger et al., 2003), particularly attitudinal (Simons et al., 1970) and ideological similarity (Marks et al., 2019).
Indeed, when readers attend to source cues, source credibility affects evaluations of online news stories (Go et al., 2014; Greer, 2003; Sterrett et al., 2019; Sundar et al., 2007), and in some cases sources impact the believability of misinformation (Amazeen & Krishna, 2020; Walter & Tukachinsky, 2020). In general, individuals are more likely to trust claims made by ideologically congruent news sources (Gallup, 2018) and to discount news from politically incongruent ones (van der Linden, Panagopoulos, et al., 2020). Furthermore, polarizing sources can add to or detract from the persuasiveness of misinformation, depending on whether or not people support the attributed source (Swire, Berinsky, et al., 2017; Swire, Ecker, et al., 2017).
For debunking, organizational sources seem more effective than individuals (van der Meer & Jin, 2020; Vraga & Bode, 2017), but only when information recipients actively assess source credibility (van Boekel et al., 2017). Indeed, source credibility may matter little when individuals do not pay attention to the source (Albarracín et al., 2017; Sparks & Rapp, 2011), and the continued influence of misinformation may persist even when corrections come from highly credible sources (Ecker & Antonio, 2020). For prebunking, evidence suggests that inoculation interventions are more effective when they involve high-credibility sources (An, 2003). Yet sources may not impact accuracy perceptions of obvious fake news (Hameleers, 2020), political misinformation (Dias et al., 2020; Jakesch et al., 2019), or fake images (Shen et al., 2019), potentially because these circumstances reduce news receivers’ attention to the purported source. Overall, relatively little is known about how people evaluate the sources of political and non-political fake news.
Inattention versus Motivated Cognition
At present, there are two dominant explanations for what drives susceptibility to, and sharing of, fake news. The motivated reflection account proposes that reasoning can increase bias: identity-protective cognition occurs when people with better reasoning skills use this ability to come up with reasons to defend their ideological commitments (Kahan et al., 2007). This account is based on findings that those with the highest levels of education (Drummond & Fischhoff, 2017), cognitive reflection (Kahan, 2013), numerical ability (Kahan et al., 2017), or political knowledge (Taber et al., 2009) tend to show more partisan bias on controversial issues.
The inattention account, on the other hand, suggests that people want to be accurate but are often not thinking about accuracy (Pennycook & Rand, 2019, 2020). This account is supported by research finding that deliberative reasoning styles (or cognitive reflection) are associated with better discernment between true and false news (Pennycook & Rand, 2019). Additionally, encouraging people to pause, deliberate, or think about accuracy before rating headlines (Bago et al., 2020; Fazio, 2020; Pennycook et al., 2020) can lead to more accurate identification of false news for both politically congruent and politically incongruent headlines (Pennycook & Rand, 2019).
However, both theoretical accounts suffer from several shortcomings. First, it is difficult to disentangle whether partisan bias results from motivated reasoning or from selective exposure to different (factual) beliefs (Druckman & McGrath, 2019; Tappin et al., 2020). For instance, although ideology and education might interact in a way that enhances motivated reasoning in correlational data, exposure to facts can neutralize this tendency (van der Linden et al., 2018). Paying people to respond accurately to politically contentious factual questions also reduces polarization (Berinsky, 2018; Bullock et al., 2013; Bullock & Lenz, 2019; Jakesch et al., 2019; Prior et al., 2015; see also Tucker, 2020). On the other hand, priming partisan identity-based motivations increases motivated reasoning (Bayes et al., 2020; Prior et al., 2015).
Similarly, a recent re-analysis of Pennycook and Rand (2019) found that while cognitive reflection was indeed associated with better truth discernment, it was not associated with less partisan bias (Batailler et al., in press). Other work has found large effects of partisan bias on judgements of truth (see also Tucker, 2020; van Bavel & Pereira, 2018). One study found that animosity toward the opposing party was the strongest psychological predictor of sharing fake news (Osmundsen et al., 2020). Additionally, when Americans were asked for top-of-mind associations with the word “fake news,” they most commonly answered with news media organizations from the opposing party (e.g., Republicans will say “CNN,” and Democrats will say “Fox News”; van der Linden, Panagopoulos, et al., 2020). It is therefore clear that future research would benefit from explicating how interventions target both motivational and cognitive accounts of misinformation susceptibility.
Psychometrically Validated Measurement Instruments
To date, no psychometrically validated scale exists that measures misinformation susceptibility or people’s ability to discern fake from real news. Although related scales exist, such as the Bullshit Receptivity scale (BSR; Pennycook et al., 2015) or the conspiracy mentality scales (Brotherton et al., 2013; Bruder et al., 2013; Swami et al., 2010), these are only proxies. To measure the efficacy of fake news interventions, researchers often collect (e.g., Cook et al., 2017; Guess et al., 2020; Pennycook et al., 2020; Swire, Berinsky, et al., 2017; van der Linden et al., 2017) or create (e.g., Roozenbeek, Maertens, et al., 2020; Roozenbeek & van der Linden, 2019b) news headlines and let participants rate the reliability or accuracy of these headlines on binary (e.g., true vs. false) or Likert (e.g., reliability 1-7) scales, resulting in an index assumed to depict how skilled people are at detecting misinformation. These indices are often of limited psychometric quality and can suffer from varying reliability and specific item-set effects (Roozenbeek, Maertens, et al., 2020).
Recently, more attention has been given to the correct detection of both factual and false news, with some studies indicating that people improve on one dimension while not changing on the other (Guess et al., 2020; Pennycook et al., 2020; Roozenbeek, Maertens, et al., 2020). This raises questions about the role of general scepticism and about what constitutes a “good” outcome of misinformation interventions in a post-truth era (Lewandowsky et al., 2017). Relatedly, most methods fail to distinguish between veracity discernment and response bias (Batailler et al., in press). In addition, stimulus selection is often based on a small pool of news items, which limits the representativeness of the stimuli and thus their external validity.
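To make the distinction between veracity discernment and response bias concrete, the following minimal Python sketch scores a headline-rating task using a signal-detection framing (sensitivity versus criterion), in the spirit of the re-analysis by Batailler et al. (in press). The function name, acceptance threshold, and data are illustrative assumptions, not part of any published instrument.

```python
# Minimal sketch, using hypothetical ratings: separating veracity discernment
# (sensitivity) from response bias (criterion) in a headline-rating task.
from statistics import NormalDist

def score_headline_task(ratings_real, ratings_fake, threshold=4):
    """ratings_* are 1-7 accuracy ratings; a rating >= threshold counts as 'judged true'."""
    z = NormalDist().inv_cdf
    hit = sum(r >= threshold for r in ratings_real) / len(ratings_real)   # real news judged true
    fa = sum(r >= threshold for r in ratings_fake) / len(ratings_fake)    # fake news judged true
    # Keep rates away from 0 and 1 so the z-transform is defined (simple correction).
    hit = min(max(hit, 0.5 / len(ratings_real)), 1 - 0.5 / len(ratings_real))
    fa = min(max(fa, 0.5 / len(ratings_fake)), 1 - 0.5 / len(ratings_fake))
    return {
        "real_news_accuracy": hit,                    # proportion of real headlines accepted
        "fake_news_rejection": 1 - fa,                # proportion of fake headlines rejected
        "discernment_d_prime": z(hit) - z(fa),        # sensitivity: separation of real vs. fake
        "response_bias_c": -0.5 * (z(hit) + z(fa)),   # negative = tendency to rate everything true
    }

# A respondent who accepts every headline looks accurate on real news,
# but shows zero discernment and a strongly truth-biased criterion.
print(score_headline_task(ratings_real=[6, 7, 5, 6], ratings_fake=[6, 5, 6, 7]))
```

In this toy example, a single accuracy index would mask the fact that the respondent has no discernment at all; reporting subscores and a bias parameter separately avoids that ambiguity.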
A validated psychometric test that provides a general score, as well as reliable subscores for false and factual news detection (Roozenbeek, Maertens, et al., 2020), is therefore required. Future research will need to harness modern psychometrics to develop a new generation of scales based on a large and representative pool of news headlines. An example is the new Misinformation Susceptibility Test (MIST, see Maertens, Götz, et al., 2020).
Better measurement instruments, combined with an informed discussion of desired outcomes, should occupy a central role in the fake news intervention debate.
Implications for Policy
A number of governments and organizations have begun implementing prebunking and debunking strategies as part of their efforts to limit the spread of false information. For example, the Foreign, Commonwealth and Development Office and the Cabinet Office in the United Kingdom and the Department of Homeland Security in the United States have collaborated with researchers and practitioners to develop evidence-based tools to counter misinformation using inoculation theory and prebunking games that have been scaled across millions of people (Lewsey, 2020; Roozenbeek & van der Linden, 2019b, 2020). Twitter also placed inoculation messages in users’ news feeds during the 2020 United States presidential election to counter the spread of political misinformation (Ingram, 2020).
With respect to debunking, Facebook collaborates with third-party fact-checking agencies that flag misleading posts and issue corrections under these posts (Bode & Vraga, 2015). Similarly, Twitter uses algorithms to label dubious Tweets as misleading, disputed, or unverified (Roth & Pickles, 2020). The United Nations has launched “Verified,” a platform that builds a global base of volunteers who help debunk misinformation and spread fact-checked content (United Nations Department of Global Communications, 2020).
Despite these examples, the full potential of applying insights from psychology to tackle the spread of misinformation remains largely untapped (Lewandowsky, Smillie, et al., 2020; Lorenz-Spreen et al., 2020). Moreover, although individual-level approaches hold promise for policy, they also face limitations, including the uncertain long-term effectiveness of many interventions and a limited ability to reach the sub-populations most susceptible to misinformation (Nyhan, 2020; Swire, Berinsky, et al., 2017). Hence, interventions targeting consumers could be complemented with top-down approaches, such as targeting the sources of misinformation themselves, discouraging political elites from spreading misinformation through reputational sanctions (Nyhan & Reifler, 2015), or limiting the reach of posts published by sources that have been flagged as dubious (Allcott et al., 2019).
Conclusion
We have illustrated the progress that psychological science has made in understanding how to counter fake news, and have laid out some of the complexities to take into account when designing and testing interventions aimed at countering misinformation. We have also highlighted promising evidence on how policy-makers and social media companies can help counter the spread of misinformation online, and which factors to pay attention to when doing so.