

Published online by Cambridge University Press:  31 December 2024



Type: Review Symposium

Copyright: © The Author(s), 2024. Published by Cambridge University Press on behalf of the American Political Science Association

Adam Berinsky’s book Political Rumors: Why We Accept Misinformation and How to Fight It offers a forceful and data-driven account of political rumor acceptance in the United States. In addition to documenting the social, psychological, and political determinants of rumor acceptance, the book takes the critical next step of considering how political scientists and strategic communicators might take action to reduce political rumor acceptance. It is on the prospect of reducing political rumor acceptance that I wish to offer a few reflections on Berinsky’s work.

Many political scientists have gone to great lengths to document the prevalence, determinants, and political and policy consequences of misinformation acceptance, broadly defined. Far fewer, in my view, go on to ask what we as scholars can do to put both the acceptance and spread of political rumors into decline. It is in this respect that I think Berinsky’s book, particularly Chapter 4, offers three key insights—and opportunities for critical reflection—that are crucial for political scientists interested in the study of misinformation correction.

First, expanding on his previously published research on the subject (Adam J. Berinsky, “Rumors and Health Care Reform: Experiments in Political Misinformation,” British Journal of Political Science, 47(2), 2017), Berinsky finds that exposure to debunking messages that come from surprising sources—in his case, partisan elites who may stand to benefit from rumor acceptance, but who choose to reject it anyway—can decrease political rumor acceptance. These effects primarily occur not by turning rumor adherents into skeptics, but by convincing those who express uncertainty about rumors’ veracity to reject those claims as false.

Berinsky’s findings are consistent with social psychological insights into the study of source credibility (e.g., Chanthika Pornpitakpan, “The Persuasiveness of Source Credibility: A Critical Review of Five Decades’ Evidence,” Journal of Applied Social Psychology, 34(2), 2004). People who might be tempted to accept political rumors as true may place elevated trust in sources whom they perceive, on the basis of partisanship, to stand to benefit from those rumors’ spread but who choose to reject them anyway. Correspondingly, people exposed to debunking efforts from these surprising sources may be more likely to accept the claims made in their counter-argumentation as true. Given the increasing politicization of scientific authority (Matt Motta, Anti-Scientific Americans: The Prevalence, Correlates, and Political Consequences of Anti-Intellectualism in the U.S., 2024), Berinsky’s efforts to—as he puts it—“flip the effects of partisanship on its head” offer a useful path forward for correcting rumor acceptance in a way that does not rely on the advice of scientists and other non-partisan experts; i.e., by appealing to political (as opposed to scientific) sources.

Still, while Berinsky’s work does an excellent job demonstrating the viability of this method, it is worthwhile to consider potential challenges that might arise in its application. One important challenge concerns the availability of externally valid or “real world” examples of surprising rumor rebuttals in political reality. After all, these rebuttals are surprising for a reason: they are not commonplace. In an age of intense partisan polarization, the political gains offered by spreading political rumors may simply be too advantageous for partisan elites to pass up, especially on highly polarizing issues. Those who do rebut rumors—such as Senator Mitt Romney and Representative Liz Cheney, who rejected political rumors about the results of the 2020 presidential election—may end up losing their jobs as elected officials or falling into disfavor with members of their political party, at both the elite and mass levels. This could severely undermine their potential effectiveness as messengers hoping to reach potential rumor adherents on both sides of the partisan aisle.

Consequently, the absence of real-world examples of surprising-source rebuttals (at least in some cases) may complicate political scientists’ efforts to adapt Berinsky’s insights into their own research. One theoretical solution to this problem would be to attribute hypothetical rumor corrections to surprising sources. Hypothetical endorsements, however, pose an important tradeoff between internal and external validity in the context of political communication research. While hypothetical corrections may be capable of shifting opinion in internally valid randomized controlled trials, the absence of such corrections in the “real world” may imply that the resulting experimental treatment effects are confined to the laboratory.

Correspondingly, I believe that Berinsky’s work encourages those of us who conduct strategic political communication research to a) pay close attention to American politics in order to identify potential sources of surprising rumor rebuttals, and b) make an effort to incorporate these surprising endorsements into our research, whether through observational and quasi-experimental analyses of how they shape the opinion landscape or through the development of randomized controlled trials.

Additionally, Berinsky’s work considers the effects of psychological fluency on misinformation acceptance. He finds that increased exposure to political falsehoods makes people more likely to accept those falsehoods as true. This finding, too, has critically important implications for political science research. Strategic political communicators have become increasingly interested in studying the effects of psychological inoculation (that is, efforts to expose people to, and debunk, misinformation before its acceptance becomes widespread) as a means of reducing misinformation acceptance.

Recent analyses of the literature on what is commonly referred to as “pre-bunking” suggest that these interventions are unlikely to increase misinformation acceptance, an outcome we might otherwise expect as a result of the fluency effect (Sander van der Linden, Foolproof: Why Misinformation Infects Our Minds and How to Build Immunity, 2023). While the effects of fluency may seem to be at odds with studies documenting the effectiveness of pre-bunking interventions, the two may actually be perfectly compatible. Findings suggesting that pre-bunking interventions typically do not produce fluency-related backfiring effects (i.e., the acceptance of misinformation as true as a result of repetition) do not necessarily mean that these interventions cannot have such an effect if repeated many times.

Correspondingly, I think that Berinsky’s work cautions those of us who engage in pre-bunking research to make judicious use of this messaging strategy. While pre-bunking interventions are often both safe and effective at reducing political rumor acceptance, too many efforts to pre-bunk misinformation could, in theory, lead to a fluency-attributable backfiring effect. Efforts to quantify whether backfiring might occur, and to consider more generally the conditions under which pre-bunking interventions may be more or less appropriate, are important and remain a worthwhile avenue for future research.

Finally, Berinsky’s work finds that the effects of political rumor correction are often short-lived. Bringing together an impressive array of longitudinal data, he finds that people who come to reject political rumors as false, following exposure to evidence-based corrections in laboratory environments, may change their minds in just a few days. This sobering finding raises an important lesson for political scientists studying rumor acceptance. In addition to making an effort, whenever possible, to study the effects of misinformation correction longitudinally, we—as a scientific discipline—ought to think reflexively about the role that insights from our research might play in reducing political rumor acceptance at scale (i.e., in the general population).

Upon documenting the effectiveness of a particular misinformation correction message, it is tempting to conclude that we ought to make an effort to inject that message into public discourse both broadly and (as Berinsky’s work implies) repeatedly. But this is much easier said than done. While it may be possible to, say, partner with local public health departments to design and administer rumor correction messages informed by our scholarly research, these efforts may prove to be both time- and resource-intensive. Moreover, and as alluded to earlier, pre-bunking interventions deemed effective in the context of a single randomized controlled trial could (at least in theory) have deleterious population-level consequences if repeated frequently.

Correspondingly, I believe that Berinsky’s work cautions us to consider not only the size and direction of the effects of rigorously evaluated political misinformation correction interventions, but also to measure and think critically about how effect duration and the frequency of corrective message repetition might influence population-level misinformation acceptance. Above all, perhaps the most important lesson we can take from Berinsky’s work is this: efforts to correct political rumor acceptance are certainly not easy, but they are very much worth pursuing. While documenting the prevalence, spread, and political implications of misinformation acceptance is of course worthwhile, our ability as political scientists to improve public discourse about politically relevant topics hinges on our ability to provide a roadmap for putting the spread of political rumors into decline. I believe that Berinsky’s work offers an important step forward for doing precisely this.