1 Introduction
Social media have become an important source of news for many people (Pew, 2019). Unfortunately, social media can be an outlet for purveyors of misinformation. Moreover, research suggests that false news stories may spread as much (Grinberg, Joseph, Friedland, Swire-Thompson & Lazer, 2019) or more (Vosoughi, Roy & Aral, 2018) on social media than news stories that have been fact-checked to be true. This may be problematic, because even a single prior exposure to a false political headline can increase later belief in the headline (Pennycook, Cannon & Rand, 2018). Although recent analyses indicate that "fake news" is not as prevalent as many thought (and certainly not as prevalent as factual information) (Grinberg et al., 2019; Guess, Nagler & Tucker, 2019; Guess, Nyhan & Reifler, 2020), false content is likely to have an impact on individual beliefs (Guess, Lockett, et al., 2020) — for example, the "Pizzagate" incident (Hsu, 2017, June 13) or false beliefs about Donald Trump winning the 2020 U.S. Presidential Election (Pennycook & Rand, 2021a). Furthermore, content that is misleading and partisan, but not entirely false or fabricated, is likely to be widespread on social media (Faris et al., 2017). For reasons such as these, researchers are rightfully paying increasing attention to the psychological underpinnings of susceptibility to false and misleading content (Levy & Ross, 2021; Pennycook & Rand, 2021b).
1.1 Defining the problem
Since false and misleading content can be loosely classified into a number of categories, we begin by briefly defining a few key terms. Misinformation is false, inaccurate, or misleading information (Wardle, 2018). A particularly flagrant form of misinformation is fake news, which refers to blatantly fabricated information that mimics online news media in form but not in content and can be political or non-political (Lazer et al., 2018). A more subtle form of political misinformation is hyperpartisan news, which is misleading coverage, with a strong partisan bias, of events that did actually occur (Pennycook & Rand, 2019b). Although fact-checking approaches tend to focus on outright falsehood (of which fake news is an example), hyperpartisan content is surely more common (Bradshaw, Howard, Kollanyi & Neudert, 2020; Faris et al., 2017). Naturally, a clear line between "biased" and "unbiased" is difficult to draw, but our focus here is on news content that has a clear and explicit political aim and is aligned either with Democrats or with Republicans (who generally voted for Clinton or Trump, respectively, in the 2016 U.S. election). For simplicity, we will use the term "misleading" when referring to both categories. Furthermore, while susceptibility to misleading content can manifest in multiple ways, we focus on believing the content and/or being willing to share it on social media.
1.2 Theoretical background
What leads people to be susceptible to misleading content online? One approach is to take a cognitive science lens to the problem and consider, in particular, the role of reasoning (Levy & Ross, 2021; Pennycook & Rand, 2021b). Past work has operated under the framework of dual-process theory, which distinguishes between two types of cognitive processes (Evans & Stanovich, 2013; Kahneman, 2011; Pennycook, Fugelsang & Koehler, 2015b): System 1 "intuitive" processes that do not require working memory and are often fast and automatic, and System 2 "analytic" processes that require working memory and are typically slow and deliberative. However, the relationship between System 2 thinking and susceptibility to misinformation is contested.
A common idea in the dual-process theory literature is that one of the most important functions of analytic processing is the correction (or, possibly, prevention) of false intuitions (Evans & Frankish, 2009; Stanovich, 2004). This perspective is consistent with classical conceptions of reasoning processes as being directed toward supporting sound judgment and accuracy (e.g., Kohlberg, 1969; Piaget, 1932). When applied to fake and hyperpartisan news content, the implication of this perspective is straightforward: Engaging in System 2 (analytic) processing supports the accurate rejection of misleading content and helps individuals discern between what is true and false. According to this account – which we will refer to here as the "classical reasoning account" – people believe misleading news when they fail to sufficiently engage deliberative (System 2) reasoning processes (Bago, Rand & Pennycook, 2020; Pennycook & Rand, 2019c). Furthermore, the reason why misleading content is believed relates to its intuitive appeal: content that is highly emotional (Martel, Pennycook & Rand, 2020), that provokes moral outrage (Brady, Gantman & Van Bavel, 2020; Crockett, 2017), or that draws people's attention. Because our cognitive system prioritizes miserly processing (Fiske & Taylor, 1984; Stanovich, 2004), many individuals fail to effectively stop and reflect on their faulty intuitions. Indeed, social media may be particularly conducive to inattention (Weng, Flammini, Vespignani & Menczer, 2012) and may evoke social motivations (e.g., maximizing "likes") that distract from common accuracy motivations (Pennycook et al., 2021; Pennycook, McPhetres, Zhang, Lu & Rand, 2020).
The classical reasoning account conflicts starkly with alternatives that focus more strongly on political identity and motivated reasoning. In particular, the "motivated System 2 reasoning" account (henceforth, MS2R; see Footnote 1) argues that people selectively believe factual information that protects their cultural (often political) commitments, and that this selective belief is actually facilitated by deliberative (System 2) thinking processes (Kahan, 2013, 2017; Kahan, Peters, Dawson & Slovic, 2017). This MS2R account has implications opposite to those of the classical reasoning account, which has gained prominence in primarily non-political contexts: Whereas MS2R argues that explicit reasoning typically facilitates politically biased information processing (Kahan, 2017), the classical reasoning account argues that explicit reasoning typically facilitates accurate belief formation (Pennycook & Rand, 2019c).
1.3 Classical versus motivated reasoning
Some prior work helps to adjudicate between the classical and MS2R accounts, although, as we will discuss, the debate is far from settled. In a pair of studies, Pennycook and Rand (2019c) tested the two accounts in the context of political fake news. The MS2R account predicts that people who are more prone (and better able) to engage in deliberation should be more likely to use their cognitive sophistication to protect their prior beliefs and ideological identity. Therefore, more deliberation should be associated with increased belief in political content that is congenial with one's partisan identity, regardless of whether it is fake or real (false or true). Pennycook and Rand (2019c), in contrast, found that people who are more likely and better able to engage in analytic (System 2) reasoning (measured using the Cognitive Reflection Test, CRT; Frederick, 2005) were actually less likely to believe fake news regardless of whether or not it was aligned with their political ideology. Indeed, analytic thinking was associated with being better able to discriminate between true and fake news headlines (see also Bronstein et al., 2019; Pehlivanoglu et al., 2020; Pennycook & Rand, 2020). This result supports the classical reasoning account because it indicates that people who engage in more (and/or better) reasoning are more likely to accurately reject false partisan content and, therefore, are not more likely to engage in politically motivated System 2 reasoning. Furthermore, higher-CRT individuals are more likely to accept corrections of false articles that they had previously indicated they were willing to share on social media (Martel, Mosleh & Rand, 2021). Also consistent with the classical reasoning account, impeding deliberation with cognitive load and time pressure (Bago et al., 2020) or an instruction to rely on emotion (Martel et al., 2020) reduces discernment by increasing belief in fake news headlines – regardless of the headlines' political alignment.
This prior work paints a fairly clear picture in the context of political fake news, but it represents a somewhat limited test of the motivated versus classical reasoning accounts in the context of misinformation, as fake news is only one part of the misinformation problem. In fact, as mentioned above, recent analyses indicate that fake news was not particularly prevalent during the 2016 U.S. Presidential Election (Grinberg et al., 2019; Guess et al., 2019), perhaps because fake news is often blatantly implausible (Pennycook & Rand, 2019c). This poses an issue for existing work because motivated reasoning may be limited to cases where the falsehood is not so obvious or blatant, allowing more "intellectual wiggle room" in which politically motivated reasoning can operate. In contrast to fake news, hyperpartisan news is much more prevalent (Faris et al., 2017) and is not so implausible, offering a more relevant and powerful test of the motivated versus classical reasoning accounts. Thus, in this study, we ask participants to make judgments about fake and hyperpartisan news (in addition to true or "real" news from legitimate mainstream sources).
Another limitation of past work is that it did not tease apart people's judgments about accuracy and their willingness to share. Although Pennycook and Rand (2019c) asked participants to indicate their willingness to share news content on social media, this was done directly after participants made accuracy judgments. This may distort responses because asking people to judge the accuracy of a headline before deciding whether to share it has been shown to dramatically reduce sharing intentions for false headlines (Pennycook, Epstein, et al., 2021). Even judging the accuracy of a single politically neutral headline makes people more discerning in their sharing of true versus fake news (Pennycook, Epstein, et al., 2021; Pennycook, McPhetres, et al., 2020). Thus, in the present study, we randomly assign participants to conditions in which they are asked either to rate the accuracy of headlines or to indicate their willingness to share the headlines on social media, allowing a cleaner test of the role of reasoning in sharing decisions.
To summarize, we extend earlier research by examining not only fake news but also hyperpartisan news, and by separately examining accuracy and sharing judgments. The primary goal of this investigation is to ascertain whether the classical reasoning account or the MS2R account explains more variance in how people judge the accuracy of news content. To do this, we measure individual differences in analytic thinking via performance on the CRT and relate this to participants' judgments about political news headlines. If analytic thinking supports (and exacerbates) motivated reasoning about biased or misleading information – as per the MS2R account – CRT performance should be positively associated with believing false or misleading news that aligns with one's political identity (and negatively associated for misaligned false or misleading news). By contrast, if analytic thinking generally facilitates accurate beliefs – as per the classical reasoning account – then CRT performance should be negatively associated with believing false or misleading news regardless of political alignment. Furthermore, the classical reasoning account predicts that analytic thinking will be associated with stronger media truth discernment (i.e., higher accuracy judgments for true news relative to hyperpartisan and false news). Finally, a secondary goal of this study is to examine the relationship between CRT performance and willingness to share true, hyperpartisan, and false news, because an accuracy motive might not be at the top of people's minds when making decisions about what news to share (Pennycook, Epstein, et al., 2021; Pennycook, McPhetres, et al., 2020). Thus, although willingness to share headlines does not offer a straightforward test of the competing hypotheses, it is of practical relevance to know whether analytic thinking is associated with sharing behaviour (indeed, a study using sharing behaviour on Twitter found that people who score higher on the CRT share content from higher quality sources; Mosleh, Pennycook, Arechar & Rand, 2021).
2 Method
We ran two studies that differed only in how participants were recruited. Therefore, for clarity of exposition, we report the methods and results for these studies together as two samples. We preregistered our studies and report our target sample sizes, data exclusions, primary analyses, and all measures. Sample sizes were set at 1000 participants per sample. We arrived at this number by considering the maximum amount of money we wanted to spend on this project and the range of effect sizes found in a related study (Pennycook & Rand, 2019c). Data, analysis code, survey materials, preregistrations, and additional analyses are available online: https://osf.io/c287t.
2.1 Participants
Our first sample was 1000 American participants recruited using Amazon Mechanical Turk (Horton, Rand & Zeckhauser, 2011). Mechanical Turk users were eligible to participate if their location was the USA and they had a HIT approval rate of at least 90%. Participants were paid US$1.30. In total, 1066 participants completed some portion of the study, and we had complete data for 996 participants. The final sample (mean age = 34.80) included 560 males, 436 females, and 0 other. This study was run June 1st–4th, 2018 (i.e., after the 2016 elections for President (Clinton vs. Trump) and Congress, and before the November 2018 "midterm" congressional elections).
Our second sample was 1000 American participants recruited using Lucid, an online recruiting source that aggregates survey respondents from many respondent providers (Coppock & McClellan, 2019). Lucid uses quota sampling to provide a sample that matches the national distribution on age, gender, ethnicity, and geographic region. Participants are compensated in a variety of ways, including cash and various points programs. In total, 1384 participants completed some portion of the study, and we had complete data for 977 participants. The final sample (mean age = 45.39) included 473 males, 504 females, and 0 other. This study was run June 12th–14th, 2018.
2.2 Materials
We compiled a list of false, hyperpartisan, and true news headlines. These headlines were presented in the format of Facebook posts: a picture accompanied by a headline and byline. We removed the source (e.g., "thelastlineofdefense.org") in order to examine responses independently of familiarity with different news sources (we note, however, that manipulating the salience of the source appears to have little influence on whether people judge headlines to be true or false; Dias, Pennycook & Rand, 2020; Pennycook & Rand, 2020). Following previous work (e.g., Pennycook & Rand, 2019c), false news headlines were selected from well-known fact-checking websites (Snopes.com, Politifact.com, and Factcheck.org) and true news headlines were selected from mainstream news sources (e.g., NPR, New York Times). Hyperpartisan news headlines were selected from webpages that were categorised as hyperpartisan by experts (see Pennycook & Rand, 2019b; e.g., Dailykos.com, breitbart.com). We chose hyperpartisan headlines from across the political divide that reported actual events but in a biased or misleading manner. For example, one of our headlines was "Trump Says Stupid Things On Fox, Within 2 Hours Prosecutors Use It Against Him In Court." This headline refers to an actual event: Donald Trump admitted that his former lawyer Michael Cohen had previously worked to protect him from accusations that he had had an affair with Stormy Daniels. Nonetheless, while this headline refers to an actual event, it does not summarise what Donald Trump actually said or provide useful context, but merely declares that what he said was "stupid".
In all cases, we presented participants with headlines that had a partisan slant: they were either Pro-Democrat or Pro-Republican. To validate this sorting of items, we conducted a pretest (N = 467) in which MTurk participants were presented with a large set of false, hyperpartisan, and true political news headlines. Headlines were selected by the first author, and all three authors discussed which to retain for pre-testing. (These discussions were informal and no quantitative test of inter-rater reliability was performed.) The full set of headlines consisted of 40 items from each category, although each participant rated only 20 randomly selected items in total. Participants were asked to answer four questions for each presented headline (in the following order): 1) "What is the likelihood that the above headline is true?" (on a 7-point scale from "extremely unlikely" to "extremely likely"); 2) "Assuming the above headline is entirely accurate, how favourable would it be to Democrats versus Republicans?" (on a 5-point scale from "more favourable to Democrats" to "more favourable to Republicans"); 3) "Are you familiar with the above headline (have you seen or heard about it before)?" (with three response options: "yes", "unsure", and "no"); and 4) "In your opinion, is the above headline funny, amusing, or entertaining?" (on a 7-point scale from "extremely unfunny" to "extremely funny"). We then selected five items of each type that were equally distant from the scale mid-point on the party favourability question (Pro-Democrat: false = 1.14, hyperpartisan = 1.14, true = 1.14; Pro-Republican: false = 1.14, hyperpartisan = 1.14, true = 1.13), meaning that the Pro-Democrat items were as favourable to the Democratic Party as the Pro-Republican items were to the Republican Party, both across and within item type.
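For illustration, the following is a minimal sketch of how such midpoint-balanced item selection could be implemented; the data frame, column names, and pretest values below are hypothetical and are not taken from our materials.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical pretest summary: one row per candidate headline with its mean rating on
# the 5-point party-favourability question (1 = favours Democrats, 5 = favours Republicans).
rows = []
for slant, centre in [("pro_dem", 2.0), ("pro_rep", 4.0)]:
    for headline_type in ["false", "hyperpartisan", "true"]:
        for _ in range(10):  # 10 candidate headlines per cell in this toy example
            rows.append({"type": headline_type, "slant": slant,
                         "favourability_mean": centre + rng.normal(0, 0.3)})
pretest = pd.DataFrame(rows)

MIDPOINT = 3  # mid-point of the 5-point favourability scale
pretest["slant_strength"] = (pretest["favourability_mean"] - MIDPOINT).abs()

# Within each type x slant cell, keep the 5 items whose slant strength is closest to a
# common target (the paper reports roughly 1.14 in every cell), so Pro-Democrat and
# Pro-Republican items end up equally far from the scale mid-point.
TARGET = 1.14
pretest["dist_to_target"] = (pretest["slant_strength"] - TARGET).abs()
selected = (pretest.sort_values("dist_to_target")
                   .groupby(["type", "slant"], group_keys=False)
                   .head(5))

# Check the balance across cells.
print(selected.groupby(["slant", "type"])["slant_strength"].mean().round(2))
```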
We measured analytic thinking by summing the number of correct responses to a reworded version of the original three-item CRT (Shenhav, Rand & Greene, 2012) and a non-numerical four-item CRT (Thomson & Oppenheimer, 2016). The CRT has been shown to predict diverse psychological outcomes, including epistemically suspect beliefs (Pennycook, Fugelsang & Koehler, 2015a), and to retain its predictive validity across time (Stagnaro, Pennycook & Rand, 2018) and after multiple exposures (Bialek & Pennycook, 2018). The full seven-item CRT had acceptable reliability (MTurk: Cronbach's α = .80; Lucid: α = .69).
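As an illustration of this scoring and reliability computation, here is a minimal sketch; the participant data are simulated, and only the formulas are meant to match what we report.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_participants x n_items) array of 0/1 item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of the total scores
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Simulated data: each row is a participant, each column one of the 7 CRT items,
# scored 1 (correct) or 0 (incorrect). Real data would come from the survey responses.
rng = np.random.default_rng(1)
crt_items = (rng.random((500, 7)) < 0.4).astype(int)

crt_total = crt_items.sum(axis=1)  # the 0-7 analytic-thinking score used in the analyses
print("alpha =", round(cronbach_alpha(crt_items), 2))
```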
To code Democrat versus Republican partisanship, we asked participants, "Which of the following best describes your political preference?" with six response options: "strongly Democratic", "Democratic", "lean Democratic", "lean Republican", "Republican", and "strongly Republican" (Footnote 2). Responses were used to sort participants into two partisan groups: Democrat and Republican.
To code a preference for Hillary Clinton versus Donald Trump we asked participants, “If you absolutely had to choose between only Clinton and Trump, who would you prefer to be the President of the United States?” and offered two response options: “Hillary Clinton” and “Donald Trump”.
To identify participants who share on social media, we asked, "Would you ever consider sharing something political on social media (such as Facebook and Twitter)?" and offered three response options: "yes", "no", and "I don't use social media accounts". Following previous research (Pennycook, Bear, Collins & Rand, 2020; Pennycook & Rand, 2019c) and as per our preregistration, participants who did not answer "yes" to this question had their data excluded from analyses that involved sharing, because their responses do not allow us to cleanly examine the relationship between reasoning and the political content that people are willing to share (i.e., they may not discern between high and low quality content simply because they are unwilling to share any political content on social media).
2.3 Procedure
At the beginning of the survey, participants were asked "Do you have a Facebook account?" and "Do you have a Twitter account?". If they answered "no" to both questions, they were sent to a debriefing screen and were not permitted to participate in the study. Next, participants were presented with 30 headlines in a 3 × 2 design (false, hyperpartisan, true × Pro-Democrat, Pro-Republican): 10 headlines of each veracity type, 15 Pro-Republican and 15 Pro-Democrat overall (i.e., 5 headlines per cell), with the order of headlines randomized for each participant. Crucially, participants were randomly assigned to either the accuracy condition or the sharing condition. In the accuracy condition, participants were asked to judge whether the headline was accurate and unbiased: "Do you think this headline describes an event that actually happened in an accurate and unbiased way?" (response options: "yes" and "no"). In the sharing condition, participants were asked to judge whether they would consider sharing the headline on social media: "Would you consider sharing this story online (for example, through Facebook or Twitter)?" (response options: "yes" and "no"). Finally, each participant was randomly assigned to either have the "yes" response option to the left of the "no" response option or vice versa.
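A sketch of how this design could be assembled programmatically is shown below; the item identifiers, function names, and seeding scheme are hypothetical and do not describe our actual survey implementation.

```python
import random

# Hypothetical stimulus set: 3 veracity types x 2 slants, 5 headlines per cell = 30 headlines.
TYPES = ["false", "hyperpartisan", "true"]
SLANTS = ["pro_dem", "pro_rep"]
STIMULI = [{"type": t, "slant": s, "item": i}
           for t in TYPES for s in SLANTS for i in range(5)]
assert len(STIMULI) == 30

def build_session(participant_id: int, seed: int = 2018) -> dict:
    """Randomize headline order and assign the between-subjects factors for one participant."""
    rng = random.Random(seed * 100003 + participant_id)
    trials = STIMULI.copy()
    rng.shuffle(trials)  # headline order randomized independently for each participant
    return {
        "condition": rng.choice(["accuracy", "sharing"]),  # between-subjects question wording
        "yes_on_left": rng.choice([True, False]),          # counterbalanced response-option order
        "trials": trials,
    }

print(build_session(participant_id=1)["condition"])
```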
Participants then completed the CRT and, afterward, were asked, "Have you seen any of the last seven word problems before?" (response options: "yes", "maybe", "no"). In the Lucid sample, but not the MTurk sample, participants then completed the Berlin Numeracy Test (Cokely et al., 2012) for an exploratory analysis of the extent to which cognitive reflection and numeracy predict performance (Pennycook & Ross, 2016). Next, participants were asked demographic questions (age, gender, education, fluency in English), a question about political party affiliation (response options: Democrat, Republican, Independent, other), a political preference question (see materials section), a social liberal versus social conservative question, an economic liberal versus economic conservative question, a question about who participants voted for in the 2016 U.S. Presidential Election, a question about Clinton versus Trump preference for President of the US (see materials section), a question about whether participants' social circles tended to vote Republican or Democrat, a question about how participants would vote in US Congressional elections, two questions about political identity, six questions about trust in different sources of information, one question about frequency of use of social media accounts, one question about sharing political content on social media, one question about the importance of only sharing accurate news on social media, a question about belief in God, and a 14-item version of the Need For Cognitive Closure Scale (Kruglanski et al., 2017; Webster & Kruglanski, 1994).
Finally, participants were asked whether they had responded randomly at any point during the study or had searched the internet for any of the headlines. They were also asked to provide their ZIP code, to estimate how many minutes the survey took them to complete, and to comment on the survey at their discretion (ZIP codes and comments were removed from the online data file to preserve participant anonymity).
3 Results
Descriptive statistics are reported in the supplementary materials (Tables S1 and S2). Our preregistered analysis plan was to use the Democrat versus Republican partisanship question to operationalize political partisanship (see methods section). However, during peer review it was argued that the Hillary Clinton versus Donald Trump preference question should be used to operationalize political partisanship in the primary analysis (see methods section), given that the items actually used reflected this distinction more than the traditional differences between the two parties. Consequently, for analyses reported in the main text, political partisanship (which we still label as Democrat vs. Republican) is operationalized as a Clinton versus Trump preference. The preregistered analyses that use the Democrat versus Republican partisanship question to operationalize political partisanship are reported in the supplementary materials (Tables S3, S4, S5, and S6). Importantly, as reported later, results are remarkably similar across these two approaches to operationalizing partisanship.
3.1 Accuracy
Table 1 shows correlations between CRT performance and perceived accuracy of headlines as a function of headline type (false, hyperpartisan, true), political slant (Pro-Democrat, Pro-Republican), and the partisanship of the participant (Democrat, Republican). Crucially, there was no evidence for a positive correlation between CRT and perceived accuracy of politically consistent fake or hyperpartisan news among either Democrats or Republicans in either sample. This is starkly inconsistent with the MS2R account, which predicts that people higher in analytic thinking should be better able to convince themselves that politically consistent headlines (i.e., Pro-Democrat headlines for Democrats, Pro-Republican headlines for Republicans) are accurate and unbiased. Rather, in most cases, higher-CRT people judged fake and hyperpartisan news to be less accurate than lower-CRT people did. This is consistent with the classical reasoning account. There were, however, some weak relationships: judgments about Pro-Democrat hyperpartisan headlines were not significantly associated with CRT for Democrats in either sample. Further, CRT was only very weakly (and not significantly) associated with Republicans' perceived accuracy of Pro-Republican false headlines in the Lucid sample (contrary to the MTurk sample and previous work; Pennycook & Rand, 2019c).
There were some relationships consistent with the MS2R account for Republicans when considering true news headlines. Specifically, while there was a consistent positive correlation between CRT and judgments of accuracy of true headlines among Democrats in both samples (regardless of the political slant of the headlines), Republicans produced an inconsistent pattern of results: in the MTurk sample, CRT was positively associated with accuracy judgments for Pro-Republican true headlines and negatively associated with accuracy judgments for Pro-Democrat true headlines. Both of these results were present in the Lucid sample, but they were (like other results in that sample) weak and not statistically significant. Note also that the CRT correlations with accuracy judgments of false statements for Democrats were about equally negative for Pro-Democrat and Pro-Republican headlines, but for Republicans the correlation was more negative for Pro-Democrat headlines than for Pro-Republican headlines. These results are consistent with some degree of bolstering that results from analytic thinking among Republicans. We return to these results in the Discussion section.
To more directly assess the association between CRT and the capacity to discern between high quality (true) and low quality (false or hyperpartisan) news, we computed media truth discernment scores for each category (i.e., a true minus false discernment score and a true minus hyperpartisan discernment score; see Footnote 3). Table 2 shows correlations between CRT performance and discernment for headlines as a function of partisanship and political slant for both samples. Again, there was no evidence for a negative correlation between CRT and media truth discernment in either sample. Rather, there was a consistent positive association between CRT and the capacity to discern between high and low quality news content, which is consistent with the classical reasoning account. However, CRT was only weakly (and not significantly) associated with increased True-Hyperpartisan discernment among Republicans in the Lucid sample.
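For concreteness, here is a minimal sketch of how the discernment scores and the correlations summarised in Tables 1 and 2 could be computed; the file name and column names are hypothetical placeholders rather than our actual analysis code.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical long-format data: one row per participant x headline in the accuracy condition.
# Assumed columns: participant, crt (0-7), partisanship ("Democrat"/"Republican"),
# headline_type ("true"/"false"/"hyperpartisan"), slant ("pro_dem"/"pro_rep"),
# accurate (1 = judged "accurate and unbiased", 0 = not).
df = pd.read_csv("accuracy_condition.csv")  # placeholder file name

# Mean perceived accuracy per participant for each headline type and slant.
means = (df.groupby(["participant", "crt", "partisanship", "slant", "headline_type"])
           ["accurate"].mean()
           .unstack("headline_type")
           .reset_index())

# Discernment scores (Footnote 3): true minus false, and true minus hyperpartisan.
means["disc_true_false"] = means["true"] - means["false"]
means["disc_true_hyper"] = means["true"] - means["hyperpartisan"]

# Correlations between CRT and discernment within each partisanship x slant cell,
# mirroring the structure of Table 2.
for (party, slant), grp in means.groupby(["partisanship", "slant"]):
    r, p = pearsonr(grp["crt"], grp["disc_true_false"])
    print(f"{party}, {slant}: r(True-False) = {r:.2f}, p = {p:.3f}")
```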
3.2 Willingness to share
Table 3 shows correlations between CRT performance and willingness to share headlines as a function of headline type (false, hyperpartisan, true), political slant (Pro-Democrat, Pro-Republican), and the partisanship of the participant (Democrat, Republican). While willingness to share social media content does not provide a direct test of the classical reasoning account versus the MS2R account, it is interesting to note that the results for willingness to share were broadly similar to the results for judgments of accuracy. In particular, there was no evidence for a positive correlation between CRT and willingness to share false or hyperpartisan news. Rather, CRT was consistently negatively correlated with willingness to share false and hyperpartisan news in the MTurk sample, although these negative correlations were much weaker and mostly not significant in the Lucid sample.
Interestingly, unlike the results for judgments of accuracy, there were no cases where CRT was clearly positively associated with willingness to share true news content. Indeed, CRT was negatively correlated with willingness to share true Pro-Republican news content for Democrats in both samples and for Republicans on MTurk (but not for Republicans on Lucid). Furthermore, CRT was negatively associated with willingness to share true Pro-Democrat news content for Republicans on MTurk (but, again, not for Republicans on Lucid).
To further explore the association between CRT and overall capacity to discern between high quality (true) and low quality (false or hyperpartisan) news in terms of willingness to share, we computed discernment scores for each category (i.e., true minus false and true minus hyperpartisan). Table 4 shows correlations between CRT performance and media sharing discernment for headlines as a function of partisanship and slant for both samples. Again, there was no strong evidence for a negative correlation between CRT and media sharing discernment in any case for either sample. Rather, among Democrats, there was a consistent positive association between CRT and discernment in the MTurk sample, and a positive association for Pro-Democrat news in the Lucid sample. Republicans showed much weaker correlations, in both directions, mostly not significant.
Additional analyses are reported in supplementary materials, including pre-registered analyses that compared accuracy judgments to willingness to share judgments (Tables S7 and S8), a series of pre-registered robustness checks (Tables S9-S16), exploratory robustness checks (Tables S17-S21 and Figure S1), and exploratory investigations of item-level correlations between CRT scores and accuracy and willingness to share judgements for each headline (Figures S2-S5).
4 Discussion
Across two samples with a total of 1,973 participants, we examined the association between analytic thinking and susceptibility to politically slanted misinformation. In earlier research, the only politically slanted misinformation that was examined was false news (e.g., Pennycook & Rand, 2019c), which left questions about other forms of misinformation unaddressed. In the present study, we extended this work by also investigating hyperpartisan news. We found essentially no evidence for a positive relationship between analytic thinking and judging politically consistent hyperpartisan or false news headlines to be more accurate and unbiased, which does not support the idea that explicit reasoning is used in a politically motivated way (and, hence, is inconsistent with the MS2R account; Kahan, 2017). Instead, we often found a negative relationship. Likewise, we found no evidence for a negative relationship between analytic thinking and discernment between true and false or hyperpartisan news headlines (regardless of political consistency), and in almost all cases we found a positive relationship. Together, these results support the claim that, overall, analytic thinking is directed more at forming accurate beliefs than at reinforcing political identity and partisan motivations (Pennycook & Rand, 2019c).
The second sample exhibited fewer associations supporting the classical reasoning account than the first sample. This is noteworthy because the second sample was provided by Lucid, a service that uses quota sampling to match the national distribution on age, gender, ethnicity, and geographic region, whereas the first sample was provided by MTurk, which does not use quota sampling. In addition to demographic differences, however, there was also much less variance in CRT scores on Lucid (where participants performed much worse), which might also explain the weaker effects. Further work is needed to examine the demographics of participants who show – or fail to show – the associations predicted by the classical reasoning account (Pennycook & Rand, 2019c).
The differences observed in the results for Democrats and Republicans – in particular, that analytic thinking among Republicans was negatively associated with judging Democrat-leaning true news to be accurate in the MTurk sample – merit further investigation. Note that parallel results were observed for false news: among Republicans, the correlation of analytic thinking with accuracy judgments of false news was more negative for Democrat-leaning than for Republican-leaning false news, whereas Democrats did not show such a difference. (The same quantitative pattern is weakly present in the sharing results shown in Table 3, but only in the relative size of the correlations, not in their direction.) These results are not inconsistent with the MS2R hypothesis, although only for Republicans.
We can think of at least three candidate explanations (which are not mutually exclusive) for these apparent partisan differences. First, the results could depend on an idiosyncratic choice of headlines; in fact, an earlier study using a closely related paradigm did not observe such an asymmetry (Pennycook & Rand, 2019c). However, correlations between CRT and individual headlines showed that the present pattern of results was generally consistent across items (supplementary materials, Figures S2-S5).
Second, highly analytic conservatives (i.e., Republicans) may have a greater propensity to engage in motivated reasoning than liberals (i.e., Democrats) (Jost, 2017; Jost, van der Linden, Panagopoulos & Hardin, 2018). A similar partisan asymmetry has been found in research on belief in conspiracy theories (van der Linden, Panagopoulos, Azevedo & Jost, 2021).
Third, heterogeneity of political ideology within the Democrat and Republican groups might lump psychologically different groups together. In particular, it has been argued that at least two dimensions — economic and social ideology — are needed to understand political ideology (Feldman & Johnston, 2014). People who self-identify as libertarian (an identity that tends to express conservatism on economic issues and liberalism on social issues) usually vote Republican, and there is evidence that libertarians perform better on the CRT than liberals or conservatives (Pennycook & Rand, 2019a; Yilmaz, Saribay & Iyer, 2020). Consequently, combining libertarians (even those who ended up favouring Trump) with more socially conservative Republicans might confound analyses. However, this explanation by itself is not sufficient to account for the asymmetry we observed, since it would also require that libertarians engage in motivated bolstering more than other Republicans. We did not plan the experiment to look for these partisan differences, so a more direct test would be warranted.
Given that there were some cases where performance on the Cognitive Reflection Test did not predict better judgment – for example, most of the accuracy discernment comparisons among Republicans on Lucid – one possible argument is that these results are also consistent with a milder form of motivated reasoning. Specifically, political motivations may be the reason why analytic thinking failed to improve people's judgments (although these differences may also be driven by factors other than political motivations, such as differing factual prior beliefs, as per Tappin et al., 2020a, 2020b). Setting aside the difficulty of interpreting small effects, it would be unreasonable to conclude, based on our data, that reasoning is unaffected by political (or other) motivations. Rather, our data primarily indicate that, on balance, reasoning helps more than it hurts when it comes to evaluating news headlines. It remains possible, if not likely, that reasoning is sometimes rendered ineffective by political motivations (or inaccurate priors) in some contexts.
While sharing of social media content does not provide a direct test of the classical reasoning versus MS2R accounts, it is interesting to compare patterns of willingness to share misinformation in the present study with existing research. Earlier research employing a closely related paradigm asked participants if they were willing to share a headline immediately after asking them whether they thought the headline was true (Pennycook & Rand, 2019c). A limitation of that approach is that questioning participants about the truth of headlines may have influenced their responses about willingness to share them (Fazio, 2020; Pennycook, Epstein, et al., 2021; Pennycook, McPhetres, et al., 2020). In the present study, questions about the accuracy of headlines and willingness to share them were shown to different participants. This separation eliminates that potential source of bias. We found no evidence that analytic thinking predicted a greater willingness to share politically consistent hyperpartisan or false news headlines. By contrast, we often found that analytic thinking was associated with being unwilling to share politically consistent false and hyperpartisan news headlines (and never found a significant positive association with willingness to share). These results are also in line with a recent study of Twitter users, which found that higher-CRT users were more likely to share news on Twitter from outlets that were deemed more trustworthy by fact-checkers (Mosleh et al., 2021). A limitation of the present study is that we examined self-reported willingness to share rather than actual sharing of headlines. Nonetheless, recent research has found that self-reported willingness to share on social media predicts actual sharing behaviour on social media (Mosleh, Pennycook & Rand, 2020).
The present study drew participants from two different American subject pools, and the results are broadly consistent with earlier studies using a partially overlapping, yet substantially different, set of headlines (Pennycook & Rand, 2019c). Consequently, we would expect the results of this research to generalize to other American samples and political headlines. Nonetheless, the extent to which these results will show cross-cultural generalizability is an open question and an important direction for future research. Moreover, at the level of individual headlines with the same political slant, the correlations between CRT and perceived accuracy, and between CRT and willingness to share, show some variation (see supplementary materials, Figures S2-S5), which suggests that a useful direction for future research would be to examine properties of headlines that influence the magnitude of these correlations.
In summary, earlier studies had examined the relationship between analytic thinking and assessments of fake news. In the present study, we extended this work by examining another form of misinformation: hyperpartisan news. Contrary to the MS2R account, we found little evidence consistent with people using their analytic thinking ability to maintain a belief in fake or hyperpartisan news that supports their political identity. Instead, we found that analytic thinking ability is typically associated with the rejection of misinformation, largely irrespective of ideological alignment.
We acknowledge funding from the Ethics and Governance of Artificial Intelligence Initiative of the Miami Foundation (GP, DR), the Social Sciences and Humanities Research Council of Canada (GP), the Canadian Institutes of Health Research (GP), the Luminate Group (GP, DR), the Australian Research Council (RMR), the William and Flora Hewlett Foundation (GP, DR), and the Templeton Foundation (DR).