
How Can Psychological Science Help Counter the Spread of Fake News?

Published online by Cambridge University Press: 12 April 2021

Sander van der Linden*
Affiliation:
University of Cambridge (UK)
Jon Roozenbeek
Affiliation:
University of Cambridge (UK)
Rakoen Maertens
Affiliation:
University of Cambridge (UK)
Melisa Basol
Affiliation:
University of Cambridge (UK)
Ondřej Kácha
Affiliation:
University of Cambridge (UK)
Steve Rathje
Affiliation:
University of Cambridge (UK)
Cecilie Steenbuch Traberg
Affiliation:
University of Cambridge (UK)
*
Correspondence concerning this article should be addressed to Sander van der Linden. University of Cambridge. Department of Psychology. Downing Street. CB2 3EB Cambridge (UK). E-mail: [email protected]

Abstract

In recent years, interest in the psychology of fake news has rapidly increased. We outline the various interventions within psychological science aimed at countering the spread of fake news and misinformation online, focusing primarily on corrective (debunking) and pre-emptive (prebunking) approaches. We also offer a research agenda of open questions within the field of psychological science that relate to how and why fake news spreads and how best to counter it: the longevity of intervention effectiveness; the role of sources and source credibility; whether the sharing of fake news is best explained by the motivated cognition or the inattention accounts; and the complexities of developing psychometrically validated instruments to measure how interventions affect susceptibility to fake news at the individual level.

Type
Review Article
Creative Commons
CC BY-NC-SA
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike licence (http://creativecommons.org/licenses/by-nc-sa/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the same Creative Commons licence is included and the original work is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use.
Copyright
© Universidad Complutense de Madrid and Colegio Oficial de Psicólogos de Madrid 2021

Fake news can have serious consequences for science, society, and the democratic process (Lewandowsky et al., 2017). For example, belief in fake news has been linked to violent intentions (Jolley & Paterson, 2020), lower willingness to get vaccinated against the coronavirus disease 2019 (COVID-19; Roozenbeek, Schneider, et al., 2020), and decreased adherence to public health guidelines (van der Linden, Roozenbeek, et al., 2020). Fake rumours on the WhatsApp platform have inspired mob lynchings (Arun, 2019), and fake news about climate change is undermining efforts to mitigate the biggest existential threat of our time (van der Linden et al., 2017).

In light of this, interest in the “psychology of fake news” has skyrocketed. In this article, we offer a rapid review and research agenda of how psychological science can help effectively counter the spread of fake news, and what factors to take into account when doing so.1

Current Approaches to Countering Misinformation

Scholars have largely offered two different approaches to combating misinformation: one reactive, the other proactive. We review each approach in turn below.

Reactive Approaches: Debunking and Fact-Checking

The first approach concerns the efficacy of debunking and debiasing (Lewandowsky et al., 2012). Debunking misinformation comes with several challenges, as doing so risks reinforcing (the rhetorical frame of) the misinformation itself. A plethora of research on the illusory truth effect suggests that the mere repetition of information increases its perceived truthfulness, making even successful corrections susceptible to unintended consequences (Effron & Raj, 2020; Fazio et al., 2015; Pennycook et al., 2018). Despite popular concerns about potential backfire effects, where a correction inadvertently increases belief in, or reliance on, the misinformation itself, research has not found such effects to be commonplace (e.g., see Ecker et al., 2019; Swire-Thompson et al., 2020; Wood & Porter, 2019). Yet there is reason to believe that debunking misinformation can still be challenging in light of both (politically) motivated cognition (Flynn et al., 2017) and the continued influence effect (CIE), where people continue to retrieve false information from memory despite acknowledging a correction (Chan et al., 2017; Lewandowsky et al., 2012; Walter & Tukachinsky, 2020). In general, effective debunking requires an alternative explanation to help resolve inconsistencies in people’s mental model (Lewandowsky, Cook, et al., 2020). But even when a correction is effective (Ecker et al., 2017; MacFarlane et al., 2020), fact-checks are often outpaced by misinformation, which is known to spread faster and further than other types of information online (Petersen et al., 2019; Vosoughi et al., 2018).

Proactive Approaches: Inoculation Theory and Prebunking

In light of the shortcomings of debunking, scholars have called for more proactive interventions that reduce the likelihood that people believe and share misinformation in the first place. Prebunking describes the process of inoculation, where a forewarning combined with a pre-emptive refutation can confer psychological resistance against misinformation. Inoculation theory (McGuire, 1970; McGuire & Papageorgis, 1961) is the most well-known psychological framework for conferring resistance to persuasion. It posits that pre-emptive exposure to a weakened dose of a persuasive argument can confer resistance against future attacks, much like a medical vaccine builds resistance against future illness (Compton, 2013; McGuire, 1964). A large body of inoculation research across domains has demonstrated its effectiveness in conferring resistance against (unwanted) persuasion (for reviews, see Banas & Rains, 2010; Lewandowsky & van der Linden, 2021), including misinformation about climate change (Cook et al., 2017; van der Linden et al., 2017), conspiracy theories (Banas & Miller, 2013; Jolley & Douglas, 2017), and astroturfing by Russian bots (Zerback et al., 2020).

In particular, the distinction between active and passive defences has seen renewed interest (Banas & Rains, 2010). As opposed to traditional passive inoculation, where participants simply receive the pre-emptive refutation, active inoculation tasks participants with generating their own “antibodies” (e.g., counter-arguments), which is thought to engender greater resistance (McGuire & Papageorgis, 1961). Furthermore, rather than inoculating people against specific issues, research has shown that making people aware of both their own vulnerability and the manipulative intent of others can act as a more general strategy for inducing resistance to deceptive persuasion (Sagarin et al., 2002).

Perhaps the most well-known example of active inoculation is Bad News (Roozenbeek & van der Linden, 2019b), an interactive fake news game in which players are forewarned and exposed to weakened doses of the common techniques used in the production of fake news (e.g., conspiracy theories, fuelling intergroup polarization). The game simulates a social media feed and, over the course of 15 to 20 minutes, lets players actively generate their own “antibodies” in an interactive environment. Similar games have been developed for COVID-19 misinformation (Go Viral!, see Basol et al., in press), climate misinformation (Cranky Uncle, see Cook, 2019), and political misinformation during elections (Harmony Square, see Roozenbeek & van der Linden, 2020). A growing body of research has shown that after playing “fake news” inoculation games, people are: (a) better at spotting fake news, (b) more confident in their ability to identify fake news, and (c) less likely to report sharing fake news with others in their network (Basol et al., 2020; Roozenbeek, van der Linden, et al., 2020; Roozenbeek & van der Linden, 2019a, 2019b, 2020). Figure 1 shows screenshots from each game.

Figure 1. Screenshots of the Bad News (www.getbadnews.com), Go Viral! (www.goviralgame.com), and Harmony Square (www.harmonysquare.game) landing pages.

An Agenda for Future Research on Fake News Interventions

Although these advancements are promising, in this section we outline several open questions to bear in mind when designing and testing interventions aimed at countering misinformation: how long their effectiveness remains detectable, the relevance of source effects, the role of inattention and motivated cognition, and the complexities of developing psychometrically validated instruments to measure how interventions affect susceptibility to misinformation.

The Longevity of Intervention Effects

Reflecting a broader lack of longitudinal studies in behavioural science (Hill et al., 2013; Marteau et al., 2011; Nisa et al., 2019), most research on countering misinformation does not examine effects beyond two weeks (Banas & Rains, 2010). Whereas Swire, Berinsky, et al. (2017) found that most effects had expired one week after a debunking intervention, Guess et al. (2020) report that, three weeks after a media-literacy intervention, effects can either dissipate or endure.

Evidence from studies comparing interventions indicates that expiration rates may vary depending on the method, with inoculation-based effects generally staying intact for longer than narrative, supportive, or consensus-messaging effects (e.g., see Banas & Rains, 2010; Compton & Pfau, 2005; Maertens, Anseel, et al., 2020; Niederdeppe et al., 2015; Pfau et al., 1992). Although some studies have found inoculation effects to decay after two weeks (Compton, 2013; Pfau et al., 2009; Zerback et al., 2020), the literature is converging on an average inoculation effect that lasts for at least two weeks but largely dissipates within six weeks (Ivanov et al., 2018; Maertens, Roozenbeek, et al., 2020). Research on booster sessions indicates that the longevity of effects can be prolonged by repeating interventions or through regular assessment (Ivanov et al., 2018; Maertens, Roozenbeek, et al., 2020; Pfau et al., 2006).

Gaining deeper insights into the longevity of different interventions, looking beyond immediate effects, and unveiling the mechanisms behind decay (e.g., interference and forgetting) will steer future research towards more enduring interventions.
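
To make the notion of decay concrete, the sketch below fits a simple exponential decay curve to a set of follow-up effect sizes and derives a half-life. Both the data points and the exponential functional form are illustrative assumptions for exposition, not estimates from any of the studies cited above.

```python
import numpy as np

# Hypothetical effect sizes (Cohen's d) at follow-ups after an inoculation
# intervention; the numbers are made up for illustration, not taken from
# any published study.
weeks = np.array([0, 1, 2, 4, 6])
effect_d = np.array([0.55, 0.48, 0.40, 0.24, 0.12])

# Assume d(t) = d0 * exp(-rate * t) and fit it by regressing log(d) on t
# (valid as long as all observed effects are positive).
slope, intercept = np.polyfit(weeks, np.log(effect_d), 1)
d0, rate = np.exp(intercept), -slope

# Half-life: the time it takes for the initial effect to halve.
half_life = np.log(2) / rate
print(f"d0 = {d0:.2f}, decay = {rate:.2f}/week, half-life = {half_life:.1f} weeks")
```

Fitting the same parametric form to effects from different interventions would make their decay rates directly comparable, and systematic deviations from a smooth curve could help distinguish gradual forgetting from more abrupt interference.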

Source Effects

A large body of research has shown that source credibility matters when individuals are exposed to persuasive messages (Petty & Cacioppo, 1986; Wilson & Sherrell, 1993) and when they evaluate whether claims are true or false (Eagly & Chaiken, 1993; see also Briñol & Petty, 2009; Chaiken & Maheswaran, 1994; Maier et al., 2017; Pornpitakpan, 2004; Sternthal et al., 1978). A significant factor contributing to source credibility is similarity between the source and the message receiver (Chaiken & Maheswaran, 1994; Metzger et al., 2003), particularly attitudinal (Simons et al., 1970) and ideological similarity (Marks et al., 2019).

Indeed, when readers attend to source cues, source credibility affects evaluations of online news stories (Go et al., 2014; Greer, 2003; Sterrett et al., 2019; Sundar et al., 2007), and in some cases sources impact the believability of misinformation (Amazeen & Krishna, 2020; Walter & Tukachinsky, 2020). In general, individuals are more likely to trust claims made by ideologically congruent news sources (Gallup, 2018) and to discount news from politically incongruent ones (van der Linden, Panagopoulos, et al., 2020). Furthermore, polarizing sources can boost or detract from the persuasiveness of misinformation, depending on whether or not people support the attributed source (Swire, Berinsky, et al., 2017; Swire, Ecker, et al., 2017).

For debunking, organizational sources seem more effective than individuals (van der Meer & Jin, 2020; Vraga & Bode, 2017), but only when information recipients actively assess source credibility (van Boekel et al., 2017). Indeed, source credibility may matter little when individuals do not pay attention to the source (Albarracín et al., 2017; Sparks & Rapp, 2011), and misinformation may continue to exert influence even when corrections come from highly credible sources (Ecker & Antonio, 2020). For prebunking, evidence suggests that inoculation interventions are more effective when they involve high-credibility sources (An, 2003). Yet sources may not impact accuracy perceptions of obvious fake news (Hameleers, 2020), political misinformation (Dias et al., 2020; Jakesch et al., 2019), or fake images (Shen et al., 2019), potentially because these circumstances reduce news receivers’ attention to the purported sources. Overall, relatively little is known about how people evaluate the sources of political and non-political fake news.

Inattention versus Motivated Cognition

At present, there are two dominant explanations for what drives susceptibility to, and sharing of, fake news. The motivated reflection account proposes that reasoning can increase bias: identity-protective cognition occurs when people with better reasoning skills use this ability to come up with reasons to defend their ideological commitments (Kahan et al., 2007). This account is based on findings that those with the highest levels of education (Drummond & Fischhoff, 2017), cognitive reflection (Kahan, 2013), numerical ability (Kahan et al., 2017), or political knowledge (Taber et al., 2009) tend to show more partisan bias on controversial issues.

The inattention account, on the other hand, suggests that people want to be accurate but are often not thinking about accuracy (Pennycook & Rand, 2019, 2020). This account is supported by research finding that deliberative reasoning styles (or cognitive reflection) are associated with better discernment between true and false news (Pennycook & Rand, 2019). Additionally, encouraging people to pause, deliberate, or think about accuracy before rating headlines (Bago et al., 2020; Fazio, 2020; Pennycook et al., 2020) can lead to more accurate identification of false news for both politically congruent and politically incongruent headlines (Pennycook & Rand, 2019).

However, both theoretical accounts suffer from several shortcomings. First, it is difficult to disentangle whether partisan bias results from motivated reasoning or from selective exposure to different (factual) beliefs (Druckman & McGrath, 2019; Tappin et al., 2020). For instance, although ideology and education might interact in a way that enhances motivated reasoning in correlational data, exposure to facts can neutralize this tendency (van der Linden et al., 2018). Paying people to produce more accurate responses to politically contentious facts also leads to less polarized responses (Berinsky, 2018; Bullock et al., 2013; Bullock & Lenz, 2019; Jakesch et al., 2019; Prior et al., 2015; see also Tucker, 2020). On the other hand, priming partisan identity-based motivations leads to increased motivated reasoning (Bayes et al., 2020; Prior et al., 2015).

Similarly, a recent re-analysis of Pennycook and Rand (2019) found that while cognitive reflection was indeed associated with better truth discernment, it was not associated with less partisan bias (Batailler et al., in press). Other work has found large effects of partisan bias on judgements of truth (see also Tucker, 2020; van Bavel & Pereira, 2018). One study found that animosity toward the opposing party was the strongest psychological predictor of sharing fake news (Osmundsen et al., 2020). Additionally, when Americans were asked for top-of-mind associations with the word “fake news,” they most commonly answered with news media organizations from the opposing party (e.g., Republicans will say “CNN,” and Democrats will say “Fox News”; van der Linden, Panagopoulos, et al., 2020). Future research would therefore benefit from explicating how interventions target both motivational and cognitive accounts of misinformation susceptibility.

Psychometrically Validated Measurement Instruments

To date, no psychometrically validated scale exists that measures misinformation susceptibility or people’s ability to discern fake from real news. Although related scales exist, such as the Bullshit Receptivity scale (BSR; Pennycook et al., 2015) or conspiracy mentality scales (Brotherton et al., 2013; Bruder et al., 2013; Swami et al., 2010), these are only proxies. To measure the efficacy of fake news interventions, researchers often collect (e.g., Cook et al., 2017; Guess et al., 2020; Pennycook et al., 2020; Swire, Berinsky, et al., 2017; van der Linden et al., 2017) or create (e.g., Roozenbeek, Maertens, et al., 2020; Roozenbeek & van der Linden, 2019b) news headlines and let participants rate the reliability or accuracy of these headlines on binary (e.g., true vs. false) or Likert (e.g., reliability 1–7) scales, resulting in an index assumed to reflect how skilled people are at detecting misinformation. These indices are often of limited psychometric quality and can suffer from varying reliability and specific item-set effects (Roozenbeek, Maertens, et al., 2020).
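
As a minimal illustration of how such an index is typically computed, the sketch below derives a discernment score from one participant’s 7-point reliability ratings. The ratings and the ground-truth labels are invented for the example.

```python
import numpy as np

# One participant's reliability ratings of eight headlines
# (1 = unreliable, 7 = reliable); invented data for illustration.
ratings = np.array([6, 5, 7, 2, 3, 1, 6, 2])
# Ground-truth labels known to the researcher: True = real news headline.
is_real = np.array([1, 1, 1, 0, 0, 0, 1, 0], dtype=bool)

# A common index: mean rating of real news minus mean rating of fake news.
# Higher values indicate better discrimination; 0 means none at all.
discernment = ratings[is_real].mean() - ratings[~is_real].mean()
print(f"Discernment index: {discernment:.2f}")  # 6.00 - 2.00 = 4.00
```

Note that a single difference score of this kind cannot tell blanket scepticism apart from genuine discrimination, which is precisely the concern the signal detection work discussed below addresses.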

Recently, more attention has been given to the correct detection of both factual and false news, with some studies finding that people improve on one dimension while not changing on the other (Guess et al., 2020; Pennycook et al., 2020; Roozenbeek, Maertens, et al., 2020). This raises questions about the role of general scepticism and about what constitutes a “good” outcome of misinformation interventions in a post-truth era (Lewandowsky et al., 2017). Relatedly, most methods fail to distinguish between veracity discernment and response bias (Batailler et al., in press). In addition, stimuli are often selected from a small pool of news items, which limits the representativeness of the stimuli and thus their external validity.
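
A signal detection treatment makes that distinction explicit. The sketch below follows the general logic of the approach rather than any specific published implementation: it computes sensitivity (d′, veracity discernment) and criterion (c, response bias) from binary true/false judgements, using illustrative counts.

```python
from statistics import NormalDist

# Illustrative counts of "true" responses to 25 real and 25 fake headlines.
hits, misses = 18, 7                      # real headlines judged true / false
false_alarms, correct_rejections = 6, 19  # fake headlines judged true / false

# Log-linear correction keeps the z-scores finite if a rate hits 0 or 1.
hit_rate = (hits + 0.5) / (hits + misses + 1)
fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)

z = NormalDist().inv_cdf
d_prime = z(hit_rate) - z(fa_rate)             # sensitivity: veracity discernment
criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # bias: positive c = general
                                               # reluctance to judge "true"

print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```

Two participants with the same difference score can differ markedly on c, for instance when one is uniformly sceptical of all headlines; reporting d′ and c separately makes an intervention’s effect on discernment versus general scepticism visible.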

A validated psychometric test that provides a general score, as well as reliable subscores for false and factual news detection (Roozenbeek, Maertens, et al., 2020), is therefore required. Future research will need to harness modern psychometrics to develop a new generation of scales based on large and representative pools of news headlines. An example is the new Misinformation Susceptibility Test (MIST; see Maertens, Götz, et al., 2021).

Better measurement instruments, combined with an informed debate on desired outcomes, should occupy a central role in research on fake news interventions.

Implications for Policy

A number of governments and organizations have begun implementing prebunking and debunking strategies as part of their efforts to limit the spread of false information. For example, the Foreign, Commonwealth and Development Office and the Cabinet Office in the United Kingdom and the Department of Homeland Security in the United States have collaborated with researchers and practitioners to develop evidence-based tools to counter misinformation using inoculation theory and prebunking games that have been scaled across millions of people (Lewsey, 2020; Roozenbeek & van der Linden, 2019b, 2020). Twitter also placed inoculation messages on users’ news feeds during the 2020 United States presidential election to counter the spread of political misinformation (Ingram, 2020).

With respect to debunking, Facebook collaborates with third-party fact-checking agencies that flag misleading posts and issue corrections beneath them (Bode & Vraga, 2015). Similarly, Twitter uses algorithms to label dubious tweets as misleading, disputed, or unverified (Roth & Pickles, 2020). The United Nations has launched “Verified,” a platform that builds a global base of volunteers who help debunk misinformation and spread fact-checked content (United Nations Department of Global Communications, 2020).

Despite these examples, the full potential of applying insights from psychology to tackle the spread of misinformation remains largely untapped (Lewandowsky, Smillie, et al., 2020; Lorenz-Spreen et al., 2020). Moreover, although individual-level approaches hold promise for policy, they also face limitations, including the uncertain long-term effectiveness of many interventions and a limited ability to reach the sub-populations most susceptible to misinformation (Nyhan, 2020; Swire, Berinsky, et al., 2017). Hence, interventions targeting consumers could be complemented with top-down approaches, such as targeting the sources of misinformation themselves, discouraging political elites from spreading misinformation through reputational sanctions (Nyhan & Reifler, 2015), or limiting the reach of posts published by sources that have been flagged as dubious (Allcott et al., 2019).

Conclusion

We have illustrated the progress that psychological science has made in understanding how to counter fake news and have laid out some of the complexities to take into account when designing and testing interventions aimed at countering misinformation. We have also pointed to promising evidence on how policy-makers and social media companies can help counter the spread of misinformation online, and on what factors to pay attention to when doing so.

Footnotes

Conflicts of Interest: None.

Funding Statement: This research received no specific grant from any funding agency, commercial or not-for-profit sectors.

References

Albarracín, D., Kumkale, G. T., & Vento, P. P.-D. (2017). How people can become persuaded by weak messages presented by credible communicators: Not all sleeper effects are created equal. Journal of Experimental Social Psychology, 68, 171–180. https://doi.org/10.1016/j.jesp.2016.06.009
Allcott, H., Gentzkow, M., & Yu, C. (2019). Trends in the diffusion of misinformation on social media (Working Paper 25500). National Bureau of Economic Research. https://doi.org/10.3386/w25500
Amazeen, M., & Krishna, A. (2020, August 7). Correcting vaccine misinformation: Recognition and effects of source type on misinformation via perceived motivations and credibility. SSRN. https://doi.org/10.2139/ssrn.3698102
An, C. (2003). Efficacy of inoculation strategies in promoting resistance to political attack messages: Source credibility perspective [Unpublished doctoral dissertation]. University of Oklahoma.
Arun, C. (2019). On WhatsApp, rumours, lynchings, and the Indian Government. Economic & Political Weekly, 54(6), 30–35.
Bago, B., Rand, D. G., & Pennycook, G. (2020). Fake news, fast and slow: Deliberation reduces belief in false (but not true) news headlines. Journal of Experimental Psychology: General, 149(8), 1608–1613. https://doi.org/10.1037/xge0000729
Banas, J. A., & Miller, G. (2013). Inducing resistance to conspiracy theory propaganda: Testing inoculation and metainoculation strategies. Human Communication Research, 39(2), 184–207. https://doi.org/10.1111/hcre.12000
Banas, J. A., & Rains, S. A. (2010). A meta-analysis of research on inoculation theory. Communication Monographs, 77(3), 281–311. https://doi.org/10.1080/03637751003758193
Basol, M., Roozenbeek, J., McClanahan, P., Berriche, M., Uenal, F., & van der Linden, S. (in press). Towards psychological herd immunity: Cross-cultural evidence for two prebunking interventions against COVID-19 misinformation. Big Data & Society.
Basol, M., Roozenbeek, J., & van der Linden, S. (2020). Good news about bad news: Gamified inoculation boosts confidence and cognitive immunity against fake news. Journal of Cognition, 3(1), Article 2. https://doi.org/10.5334/joc.91
Batailler, C., Brannon, S. M., Teas, P. E., & Gawronski, B. (in press). A signal detection approach to understanding the identification of fake news. Perspectives on Psychological Science.
Bayes, R., Druckman, J. N., Goods, A., & Molden, D. C. (2020). When and how different motives can drive motivated political reasoning. Political Psychology, 41(5), 1031–1052. https://doi.org/10.1111/pops.12663
Berinsky, A. J. (2018). Telling the truth about believing the lies? Evidence for the limited prevalence of expressive survey responding. The Journal of Politics, 80(1), 211–224. https://doi.org/10.1086/694258
Bode, L., & Vraga, E. K. (2015). In related news, that was wrong: The correction of misinformation through related stories functionality in social media. The Journal of Communication, 65(4), 619–638. https://doi.org/10.1111/jcom.12166
Briñol, P., & Petty, R. E. (2009). Source factors in persuasion: A self-validation approach. European Review of Social Psychology, 20(1), 49–96. https://doi.org/10.1080/10463280802643640
Brotherton, R., French, C. C., & Pickering, A. D. (2013). Measuring belief in conspiracy theories: The generic conspiracist beliefs scale. Frontiers in Psychology, 4, Article 279. https://doi.org/10.3389/fpsyg.2013.00279
Bruder, M., Haffke, P., Neave, N., Nouripanah, N., & Imhoff, R. (2013). Measuring individual differences in generic beliefs in conspiracy theories across cultures: Conspiracy Mentality Questionnaire. Frontiers in Psychology, 4, Article 225. https://doi.org/10.3389/fpsyg.2013.00225
Bullock, J. G., Gerber, A. S., Hill, S. J., & Huber, G. A. (2013). Partisan bias in factual beliefs about politics (Working Paper 19080). National Bureau of Economic Research. https://doi.org/10.3386/w19080
Bullock, J. G., & Lenz, G. (2019). Partisan bias in surveys. Annual Review of Political Science, 22, 325–342. https://doi.org/10.1146/annurev-polisci-051117-050904
Chaiken, S., & Maheswaran, D. (1994). Heuristic processing can bias systematic processing: Effects of source credibility, argument ambiguity, and task importance on attitude judgment. Journal of Personality and Social Psychology, 66(3), 460–473. https://doi.org/10.1037//0022-3514.66.3.460
Chan, M. S., Jones, C. R., Hall Jamieson, K., & Albarracín, D. (2017). Debunking: A meta-analysis of the psychological efficacy of messages countering misinformation. Psychological Science, 28(11), 1531–1546. https://doi.org/10.1177/0956797617714579
Compton, J. (2013). Inoculation theory. In Dillard, J. P. & Shen, L. (Eds.), The Sage handbook of persuasion: Developments in theory and practice (2nd ed., pp. 220–237). Sage. https://doi.org/10.4135/9781452218410.n14
Compton, J. A., & Pfau, M. (2005). Inoculation theory of resistance to influence at maturity: Recent progress in theory development and application and suggestions for future research. Annals of the International Communication Association, 29(1), 97–146. https://doi.org/10.1080/23808985.2005.11679045
Cook, J. (2019). Using mobile gaming to improve resilience against climate misinformation (American Geophysical Union: Fall Meeting Abstracts, Abstract PA13A–10). SAO/NASA Astrophysics Data System. https://ui.adsabs.harvard.edu/abs/2019AGUFMPA13A..10C
Cook, J., Lewandowsky, S., & Ecker, U. K. H. (2017). Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence. PLOS ONE, 12(5), Article e0175799. https://doi.org/10.1371/journal.pone.0175799
Dias, N., Pennycook, G., & Rand, D. G. (2020). Emphasizing publishers does not effectively reduce susceptibility to misinformation on social media. Harvard Kennedy School Misinformation Review, 1(1). https://doi.org/10.37016/mr-2020-001
Druckman, J. N., & McGrath, M. C. (2019). The evidence for motivated reasoning in climate change preference formation. Nature Climate Change, 9(2), 111–119. https://doi.org/10.1038/s41558-018-0360-1
Drummond, C., & Fischhoff, B. (2017). Individuals with greater science literacy and education have more polarized beliefs on controversial science topics. Proceedings of the National Academy of Sciences of the United States of America, 114(36), 9587–9592. https://doi.org/10.1073/pnas.1704882114
Eagly, A. H., & Chaiken, S. (1993). The psychology of attitudes. Harcourt Brace Jovanovich College Publishers.
Ecker, U. K. H., & Antonio, L. M. (2020). Can you believe it? An investigation into the impact of retraction source credibility on the continued influence effect. PsyArXiv. https://doi.org/10.31234/osf.io/qt4w8
Ecker, U. K. H., Hogan, J. L., & Lewandowsky, S. (2017). Reminders and repetition of misinformation: Helping or hindering its retraction? Journal of Applied Research in Memory and Cognition, 6(2), 185–192. https://doi.org/10.1016/j.jarmac.2017.01.014
Ecker, U. K. H., Lewandowsky, S., Jayawardana, K., & Mladenovic, A. (2019). Refutations of equivocal claims: No evidence for an ironic effect of counterargument number. Journal of Applied Research in Memory and Cognition, 8(1), 98–107. https://doi.org/10.1016/j.jarmac.2018.07.005
Effron, D. A., & Raj, M. (2020). Misinformation and morality: Encountering fake-news headlines makes them seem less unethical to publish and share. Psychological Science, 31(1), 75–87. https://doi.org/10.1177/0956797619887896
Fazio, L. K. (2020). Pausing to consider why a headline is true or false can help reduce the sharing of false news. Harvard Kennedy School Misinformation Review, 1(2). https://doi.org/10.37016/mr-2020-009
Fazio, L. K., Brashier, N. M., Payne, B. K., & Marsh, E. J. (2015). Knowledge does not protect against illusory truth. Journal of Experimental Psychology: General, 144(5), 993–1002. https://doi.org/10.1037/xge0000098
Flynn, D. J., Nyhan, B., & Reifler, J. (2017). The nature and origins of misperceptions: Understanding false and unsupported beliefs about politics. Advances in Political Psychology, 38, 127–150. https://doi.org/10.1111/pops.12394
Gallup. (2018). An online experimental platform to assess trust in the media. https://knightfoundation.org/reports/an-online-experimental-platform-to-assess-trust-in-the-media
Go, E., Jung, E. H., & Wu, M. (2014). The effects of source cues on online news perception. Computers in Human Behavior, 38, 358–367. https://doi.org/10.1016/j.chb.2014.05.044
Greer, J. D. (2003). Evaluating the credibility of online information: A test of source and advertising influence. Mass Communication and Society, 6(1), 11–28. https://doi.org/10.1207/S15327825MCS0601_3
Greifeneder, R., Jaffe, M. E., Newman, E. J., & Schwarz, N. (2020). The psychology of fake news: Accepting, sharing, and correcting misinformation. Routledge. https://doi.org/10.4324/9780429295379
Guess, A. M., Lerner, M., Lyons, B., Montgomery, J. M., Nyhan, B., Reifler, J., & Sircar, N. (2020). A digital media literacy intervention increases discernment between mainstream and false news in the United States and India. Proceedings of the National Academy of Sciences of the United States of America, 117(27), 15536–15545. https://doi.org/10.1073/pnas.1920498117
Hameleers, M. (2020). Separating truth from lies: Comparing the effects of news media literacy interventions and fact-checkers in response to political misinformation in the US and Netherlands. Information, Communication and Society. https://doi.org/10.1080/1369118X.2020.1764603
Hill, S. J., Lo, J., Vavreck, L., & Zaller, J. (2013). How quickly we forget: The duration of persuasion effects from mass communication. Political Communication, 30(4), 521–547. https://doi.org/10.1080/10584609.2013.828143
Ingram, D. (2020, October 26). Twitter launches “pre-bunks” to get ahead of voting misinformation. NBC News. https://www.nbcnews.com/tech/tech-news/twitter-launches-pre-bunks-get-ahead-voting-misinformation-n1244777
Ivanov, B., Parker, K. A., & Dillingham, L. L. (2018). Testing the limits of inoculation-generated resistance. Western Journal of Speech Communication, 82(5), 648–665. https://doi.org/10.1080/10570314.2018.1454600
Jakesch, M., Koren, M., Evtushenko, A., & Naaman, M. (2019). The role of source, headline and expressive responding in political news evaluation. SSRN. https://doi.org/10.2139/ssrn.3306403
Jolley, D., & Douglas, K. M. (2017). Prevention is better than cure: Addressing anti-vaccine conspiracy theories. Journal of Applied Social Psychology, 47(8), 459–469. https://doi.org/10.1111/jasp.12453
Jolley, D., & Paterson, J. L. (2020). Pylons ablaze: Examining the role of 5G COVID-19 conspiracy beliefs and support for violence. The British Journal of Social Psychology, 59(3), 628–640. https://doi.org/10.1111/bjso.12394
Kahan, D. M. (2013). Ideology, motivated reasoning, and cognitive reflection. Judgment and Decision Making, 8(4), 407–424.
Kahan, D. M., Braman, D., Gastil, J., Slovic, P., & Mertz, C. K. (2007). Culture and identity-protective cognition: Explaining the white-male effect in risk perception. Journal of Empirical Legal Studies, 4(3), 465–505. https://doi.org/10.1111/j.1740-1461.2007.00097.x
Kahan, D. M., Peters, E., Dawson, E. C., & Slovic, P. (2017). Motivated numeracy and enlightened self-government. Behavioural Public Policy, 1(1), 54–86. https://doi.org/10.1017/bpp.2016.2
Lazer, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., Metzger, M. J., Nyhan, B., Pennycook, G., Rothschild, D., Schudson, M., Sloman, S. A., Sunstein, C. R., Thorson, E. A., Watts, D. J., & Zittrain, J. L. (2018). The science of fake news. Science, 359(6380), 1094–1096. https://doi.org/10.1126/science.aao2998
Lewandowsky, S., Cook, J., Ecker, U. K. H., Albarracín, D., Amazeen, M. A., Kendeou, P., Lombardi, D., Newman, E. J., Pennycook, G., Porter, E., Rand, D. G., Rapp, D. N., Reifler, J., Roozenbeek, J., Schmid, P., Seifert, C. M., Sinatra, G. M., Swire-Thompson, B., van der Linden, S., … Zaragoza, M. S. (2020). The debunking handbook 2020. Databrary. https://doi.org/10.17910/b7.1182
Lewandowsky, S., Ecker, U. K. H., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition, 6(4), 353–369. https://doi.org/10.1016/j.jarmac.2017.07.008
Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106–131. https://doi.org/10.1177/1529100612451018
Lewandowsky, S., Smillie, L., Garcia, D., Hertwig, R., Weatherall, J., Egidy, S., Robertson, R. E., O’Connor, C., Kozyreva, A., Lorenz-Spreen, P., Blaschke, Y., & Leiser, M. R. (2020). Technology and democracy: Understanding the influence of online technologies on political behaviour and decision-making. Publications Office of the European Union. https://doi.org/10.2760/709177
Lewandowsky, S., & van der Linden, S. (2021). Countering misinformation and fake news through inoculation and prebunking. European Review of Social Psychology. Advance online publication. https://doi.org/10.1080/10463283.2021.1876983
Lewsey, F. (2020). Cambridge game “pre-bunks” COVID-19 conspiracies as part of the UK government’s fight against fake news. University of Cambridge. https://www.cam.ac.uk/stories/goviral
Lorenz-Spreen, P., Lewandowsky, S., Sunstein, C. R., & Hertwig, R. (2020). How behavioural sciences can promote truth, autonomy and democratic discourse online. Nature Human Behaviour, 4(11), 1102–1109. https://doi.org/10.1038/s41562-020-0889-7
MacFarlane, D., Tay, L. Q., Hurlstone, M. J., & Ecker, U. K. H. (2020). Refuting spurious COVID-19 treatment claims reduces demand and misinformation sharing. PsyArXiv. http://doi.org/10.31234/osf.io/q3mkd
Maertens, R., Anseel, F., & van der Linden, S. (2020). Combatting climate change misinformation: Evidence for longevity of inoculation and consensus messaging effects. Journal of Environmental Psychology, 70, Article 101455. https://doi.org/10.1016/j.jenvp.2020.101455
Maertens, R., Götz, F. M., Schneider, C., Roozenbeek, J., Kerr, J., Stieger, S., McClanahan, W. P., Drabot, K., & van der Linden, S. (2021). The Misinformation Susceptibility Test (MIST): A psychometrically validated measure of news veracity discernment [Paper presentation]. Society for Personality and Social Psychology (SPSP) Annual Convention 2021. Virtual Convention.
Maertens, R., Roozenbeek, J., Basol, M., & van der Linden, S. (2020). Long-term effectiveness of inoculation against misinformation: Three longitudinal experiments. Journal of Experimental Psychology: Applied. Advance online publication. https://doi.org/10.1037/xap0000315
Maier, M., Adam, S., & Maier, J. (2017). Does the messenger matter? A comparison of the effects of Eurosceptic messages communicated by mainstream and radical right-wing parties on citizens’ EU attitudes. Journal of Elections, Public Opinion and Parties, 27(3), 330–349. https://doi.org/10.1080/17457289.2016.1273227
Marks, J., Copland, E., Loh, E., Sunstein, C. R., & Sharot, T. (2019). Epistemic spillovers: Learning others’ political views reduces the ability to assess and use their expertise in nonpolitical domains. Cognition, 188, 74–84. https://doi.org/10.1016/j.cognition.2018.10.003
Marteau, T. M., Ogilvie, D., Roland, M., Suhrcke, M., & Kelly, M. P. (2011). Judging nudging: Can nudging improve population health? BMJ, 342, Article d228. https://doi.org/10.1136/bmj.d228
McGuire, W. J. (1964). Inducing resistance to persuasion: Some contemporary approaches. Advances in Experimental Social Psychology, 1, 191–229. https://doi.org/10.1016/S0065-2601(08)60052-0
McGuire, W. J. (1970). Vaccine for brainwash. Psychology Today, 3(9), 36–39.
McGuire, W. J., & Papageorgis, D. (1961). The relative efficacy of various types of prior belief-defense in producing immunity against persuasion. Journal of Abnormal and Social Psychology, 62(2), 327–337. https://doi.org/10.1037/h0042026
Metzger, M. J., Flanagin, A. J., Eyal, K., Lemus, D. R., & Mccann, R. M. (2003). Credibility for the 21st century: Integrating perspectives on source, message, and media credibility in the contemporary media environment. Annals of the International Communication Association, 27(1), 293–335. https://doi.org/10.1080/23808985.2003.11679029
Niederdeppe, J., Heley, K., & Barry, C. L. (2015). Inoculation and narrative strategies in competitive framing of three health policy issues. The Journal of Communication, 65(5), 838–862. https://doi.org/10.1111/jcom.12162
Nisa, C. F., Bélanger, J. J., Schumpe, B. M., & Faller, D. G. (2019). Meta-analysis of randomised controlled trials testing behavioural interventions to promote household action on climate change. Nature Communications, 10(1), Article 4545. https://doi.org/10.1038/s41467-019-12457-2
Nyhan, B. (2020). Facts and myths about misperceptions. The Journal of Economic Perspectives, 34(3), 220–236. https://doi.org/10.1257/jep.34.3.220
Nyhan, B., & Reifler, J. (2015). The effect of fact-checking on elites: A field experiment on U.S. state legislators. American Journal of Political Science, 59(3), 628–640. https://doi.org/10.1111/ajps.12162
Osmundsen, M., Bor, A., Vahlstrup, P. B., Bechmann, A., & Petersen, M. B. (2020). Partisan polarization is the primary psychological motivation behind “fake news” sharing on Twitter. PsyArXiv. https://doi.org/10.31234/osf.io/v45bk
Pennycook, G., Cannon, T., & Rand, D. G. (2018). Prior exposure increases perceived accuracy of fake news. Journal of Experimental Psychology: General, 147(12), 1865–1880. https://doi.org/10.1037/xge0000465
Pennycook, G., Cheyne, J. A., Barr, N., Koehler, D. J., & Fugelsang, J. A. (2015). On the reception and detection of pseudo-profound bullshit. Judgment and Decision Making, 10(6), 549–563.
Pennycook, G., McPhetres, J., Zhang, Y., Lu, J. G., & Rand, D. G. (2020). Fighting COVID-19 misinformation on social media: Experimental evidence for a scalable accuracy-nudge intervention. Psychological Science, 31(7), 770–780. https://doi.org/10.1177/0956797620939054
Pennycook, G., & Rand, D. G. (2019). Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition, 188, 39–50. https://doi.org/10.1016/j.cognition.2018.06.011
Pennycook, G., & Rand, D. G. (2020). The cognitive science of fake news. PsyArXiv. https://doi.org/10.31234/osf.io/ar96c
Petersen, A. M., Vincent, E. M., & Westerling, A. L. (2019). Discrepancy in scientific authority and media visibility of climate change scientists and contrarians. Nature Communications, 10(1), Article 3502. https://doi.org/10.1038/s41467-019-09959-4
Petty, R. E., & Cacioppo, J. T. (1986). The elaboration likelihood model of persuasion. In Petty, R. E. & Cacioppo, J. T. (Eds.), Communication and persuasion: Central and peripheral routes to attitude change. Springer Series in Social Psychology (pp. 1–24). Springer. https://doi.org/10.1007/978-1-4612-4964-1_1
Pfau, M., Compton, J., Parker, K. A., An, C., Wittenberg, E. M., Ferguson, M., Horton, H., & Malyshev, Y. (2006). The conundrum of the timing of counterarguing effects in resistance: Strategies to boost the persistence of counterarguing output. Communication Quarterly, 54(2), 143–156. https://doi.org/10.1080/01463370600650845
Pfau, M., Semmler, S. M., Deatrick, L., Mason, A., Nisbett, G., Lane, L., Craig, E., Underhill, J., & Banas, J. (2009). Nuances about the role and impact of affect in inoculation. Communication Monographs, 76(1), 73–98. https://doi.org/10.1080/03637750802378807
Pfau, M., van Bockern, S., & Kang, J. G. (1992). Use of inoculation to promote resistance to smoking initiation among adolescents. Communication Monographs, 59(3), 213–230. https://doi.org/10.1080/03637759209376266
Pornpitakpan, C. (2004). The persuasiveness of source credibility: A critical review of five decades’ evidence. Journal of Applied Social Psychology, 34(2), 243–281. https://doi.org/10.1111/j.1559-1816.2004.tb02547.x
Prior, M., Sood, G., & Khanna, K. (2015). You cannot be serious: The impact of accuracy incentives on partisan bias in reports of economic perceptions. Quarterly Journal of Political Science, 10(4), 489–518. https://doi.org/10.1561/100.00014127
Roozenbeek, J., Maertens, R., McClanahan, W., & van der Linden, S. (2020). Disentangling item and testing effects in inoculation research on online misinformation: Solomon revisited. Educational and Psychological Measurement. Advance online publication. https://doi.org/10.1177/0013164420940378
Roozenbeek, J., Schneider, C. R., Dryhurst, S., Kerr, J., Freeman, A. L. J., Recchia, G., van der Bles, A. M., & van der Linden, S. (2020). Susceptibility to misinformation about COVID-19 around the world. Royal Society Open Science, 7(10), Article 201199. https://doi.org/10.1098/rsos.201199
Roozenbeek, J., & van der Linden, S. (2019a). The fake news game: Actively inoculating against the risk of misinformation. Journal of Risk Research, 22(5), 570–580. https://doi.org/10.1080/13669877.2018.1443491
Roozenbeek, J., & van der Linden, S. (2019b). Fake news game confers psychological resistance against online misinformation. Palgrave Communications, 5(1), Article 65. https://doi.org/10.1057/s41599-019-0279-9
Roozenbeek, J., & van der Linden, S. (2020). Breaking Harmony Square: A game that “inoculates” against political misinformation. The Harvard Kennedy School Misinformation Review, 1(8). https://doi.org/10.37016/mr-2020-47
Roozenbeek, J., van der Linden, S., & Nygren, T. (2020). Prebunking interventions based on the psychological theory of “inoculation” can reduce susceptibility to misinformation across cultures. Harvard Kennedy School Misinformation Review, 1(2). http://doi.org/10.37016//mr-2020-008
Roth, Y., & Pickles, N. (2020, May 11). Updating our approach to misleading information. Twitter Blog. https://blog.twitter.com/en_us/topics/product/2020/updating-our-approach-to-misleading-information.html
Sagarin, B. J., Cialdini, R. B., Rice, W. E., & Serna, S. B. (2002). Dispelling the illusion of invulnerability: The motivations and mechanisms of resistance to persuasion. Journal of Personality and Social Psychology, 83(3), 526–541. https://doi.org/10.1037//0022-3514.83.3.526
Shen, C., Kasra, M., Pan, W., Bassett, G. A., Malloch, Y., & O’Brien, J. F. (2019). Fake images: The effects of source, intermediary, and digital media literacy on contextual assessment of image credibility online. New Media & Society, 21(2), 438–463. https://doi.org/10.1177/1461444818799526
Simons, H. W., Berkowitz, N. N., & Moyer, R. J. (1970). Similarity, credibility, and attitude change: A review and a theory. Psychological Bulletin, 73(1), 1–16. https://doi.org/10.1037/h0028429
Sparks, J. R., & Rapp, D. N. (2011). Readers’ reliance on source credibility in the service of comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37(1), 230–247. https://doi.org/10.1037/a0021331
Sternthal, B., Dholakia, R., & Leavitt, C. (1978). The persuasive effect of source credibility: Tests of cognitive response. Journal of Consumer Research, 4(4), 252–260. https://doi.org/10.1086/208704
Sterrett, D., Malato, D., Benz, J., Kantor, L., Tompson, T., Rosenstiel, T., Sonderman, J., & Loker, K. (2019). Who shared it?: Deciding what news to trust on social media. Digital Journalism, 7(6), 783–801. https://doi.org/10.1080/21670811.2019.1623702
Sundar, S. S., Knobloch-Westerwick, S., & Hastall, M. R. (2007). News cues: Information scent and cognitive heuristics. Journal of the American Society for Information Science and Technology, 58(3), 366–378. https://doi.org/10.1002/asi.20511
Swami, V., Chamorro-Premuzic, T., & Furnham, A. (2010). Unanswered questions: A preliminary investigation of personality and individual difference predictors of 9/11 conspiracist beliefs. Applied Cognitive Psychology, 24(6), 749–761. https://doi.org/10.1002/acp.1583
Swire, B., Berinsky, A. J., Lewandowsky, S., & Ecker, U. K. H. (2017). Processing political misinformation: Comprehending the Trump phenomenon. Royal Society Open Science, 4(3), Article 160802. https://doi.org/10.1098/rsos.160802
Swire, B., Ecker, U. K. H., & Lewandowsky, S. (2017). The role of familiarity in correcting inaccurate information. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(12), 1948–1961. https://doi.org/10.1037/xlm0000422
Swire-Thompson, B., DeGutis, J., & Lazer, D. (2020). Searching for the backfire effect: Measurement and design considerations. Journal of Applied Research in Memory and Cognition, 9(3), 286–299. https://doi.org/10.1016/j.jarmac.2020.06.006
Taber, C. S., Cann, D., & Kucsova, S. (2009). The motivated processing of political arguments. Political Behavior, 31(2), 137–155. https://doi.org/10.1007/s11109-008-9075-8
Tappin, B. M., Pennycook, G., & Rand, D. G. (2020). Rethinking the link between cognitive sophistication and politically motivated reasoning. Journal of Experimental Psychology: General. Advance online publication. https://doi.org/10.1037/xge0000974
Tucker, J. A. (2020, October 21). The truth about fake news: Measuring vulnerability to fake news online. Digital Life Seminar @Cornell Tech. https://www.dli.tech.cornell.edu/seminars/The-Truth-About-Fake-News%3A-Measuring-Vulnerability-to-Fake-News-Online
United Nations Department of Global Communications. (2020, May 28). “Verified” initiative aims to flood digital space with facts amid COVID-19 crisis. United Nations. https://www.un.org/en/coronavirus/%E2%80%98verified%E2%80%99-initiative-aims-flood-digital-space-facts-amid-covid-19-crisis
van Bavel, J. J., Harris, E. A., Pärnamets, P., Rathje, S., Doell, K. C., & Tucker, J. A. (2021). Political psychology in the digital (mis)information age: A model of news belief and sharing. Social Issues and Policy Review, 15(1), 84–113. https://doi.org/10.1111/sipr.1207
van Bavel, J. J., & Pereira, A. (2018). The partisan brain: An identity-based model of political belief. Trends in Cognitive Sciences, 22(3), 213–224. https://doi.org/10.1016/j.tics.2018.01.004
van Boekel, M., Lassonde, K. A., O’Brien, E. J., & Kendeou, P. (2017). Source credibility and the processing of refutation texts. Memory & Cognition, 45(1), 168–181. https://doi.org/10.3758/s13421-016-0649-0
van der Linden, S., Leiserowitz, A., & Maibach, E. (2018). Scientific agreement can neutralize politicization of facts. Nature Human Behaviour, 2(1), 2–3. https://doi.org/10.1038/s41562-017-0259-2
van der Linden, S., Leiserowitz, A., Rosenthal, S., & Maibach, E. (2017). Inoculating the public against misinformation about climate change. Global Challenges, 1(2), Article 1600008. https://doi.org/10.1002/gch2.201600008
van der Linden, S., Panagopoulos, C., & Roozenbeek, J. (2020). You are fake news: Political bias in perceptions of fake news. Media Culture & Society, 42(3), 460–470. https://doi.org/10.1177/0163443720906992
van der Linden, S., Roozenbeek, J., & Compton, J. (2020). Inoculating against fake news about COVID-19. Frontiers in Psychology, 11, Article 566790. https://doi.org/10.3389/fpsyg.2020.566790
van der Meer, T. G. L. A., & Jin, Y. (2020). Seeking formula for misinformation treatment in public health crises: The effects of corrective information type and source. Health Communication, 35(5), 560–575. https://doi.org/10.1080/10410236.2019.1573295
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559
Vraga, E. K., & Bode, L. (2017). Using expert sources to correct health misinformation in social media. Science Communication, 39(5), 621–645. https://doi.org/10.1177/1075547017731776
Walter, N., & Tukachinsky, R. (2020). A meta-analytic examination of the continued influence of misinformation in the face of correction: How powerful is it, why does it happen, and how to stop it? Communication Research, 47(2), 155–177. https://doi.org/10.1177/0093650219854600
Wilson, E. J., & Sherrell, D. L. (1993). Source effects in communication and persuasion research: A meta-analysis of effect size. Journal of the Academy of Marketing Science, 21(2), Article 101. https://doi.org/10.1007/bf02894421
Wood, T., & Porter, E. (2019). The elusive backfire effect: Mass attitudes’ steadfast factual adherence. Political Behavior, 41(1), 135–163. https://doi.org/10.1007/s11109-018-9443-y
Zerback, T., Töpfl, F., & Knöpfle, M. (2020). The disconcerting potential of online disinformation: Persuasive effects of astroturfing comments and three strategies for inoculation against them. New Media & Society. Advance online publication. https://doi.org/10.1177/1461444820908530