What do we mean when we talk about “misinformation”? For many people, the term calls to mind a person browsing social media and stumbling across a post that contains a false claim. However, implicit in this imagined scenario are several problematic assumptions about the nature of political misinformation. First, it centers on the receiver of the misinformation, ignoring its creator. Second, it treats exposure as largely the result of atomized encounters, isolated from any broader social context. And finally, it ignores the wider information environment, including cues from political elites.
While empirical evidence suggests that this imagined scenario bears little resemblance to how most misinformation exposure occurs in the real world, it effectively illustrates the central components of public discourse around misinformation. A great strength of Adam Berinsky’s Political Rumors is that it pushes back on these problematic assumptions, painting a more nuanced and complete picture of rumor exposure that has the potential to inform more effective policy interventions. In this review, I summarize what I see as the book’s key contributions: its focus on misinformation creators, its emphasis on the social aspect of rumors, and its acknowledgment of elites’ roles.
Berinsky opens Political Rumors by describing a “pebble in a pond” model of dissemination that places the creator of misinformation (or what he calls “rumors”) at the start of the chain of transmission. This focus on creators is an important part of the misinformation story that has been missing from much empirical research. A better understanding of the incentives facing potential creators is especially important when it comes to designing more effective interventions to limit the spread of misinformation. To use a different metaphor, if we imagine misinformation as a dragon marauding through the countryside, we have several ways to defend ourselves. We could equip every person with armor and a shield (e.g., information literacy interventions). We could build walls around each village (e.g., platform-level interventions such as content moderation). Or we could equip every person with a sword and attack the dragon itself—the creator of the misinformation. What might this last strategy look like? Answering that question requires a better understanding of the financial and political motivations of misinformation’s purveyors.
Minimizing creators’ ability to monetize misinformation demands more focused research into how structural aspects of the media environment may allow purveyors to make money from their lies. It is important to better understand how social media platforms, as well as television networks and other forms of traditional media, can enable ill-intentioned actors to earn money by creating and disseminating misinformation: for example, via advertising, merchandise sales, and membership fees (Aliaksandr Herasimenka et al., “The Political Economy of Digital Profiteering: Communication Resource Mobilization by Anti-Vaccination Actors,” Journal of Communication, 73(2), 2022). While countering ideologically motivated rumor purveyors may prove more difficult than undercutting financial incentives, empirical research demonstrating the costs of rumor dissemination could still change their calculus. For example, contrary to popular perceptions, recent research suggests that congressional candidates in the United States who endorse conspiracy theories tend to receive less public support (Benjamin Noble and Taylor Carlson, “CueAnon: What QAnon Signals about Congressional Candidates and What It Costs Them,” Political Behavior, 2024). Conducting and publicizing this type of research sends a powerful message to those who believe that spreading political misinformation is a costless way of earning votes.
Throughout the book, Berinsky explores the historical context of rumors, noting that the social drive to communicate speculation has persisted across eras. This emphasis on the social aspect of rumors is another strength of the book. Berinsky notes that rumors “acquire their power through widespread social transmission and repetition,” and this social context in turn affects whether misinformation is accepted. We do not consume information (or misinformation) in a vacuum; we are surrounded by relevant social cues from friends, family, and trusted sources, sometimes including political leaders. This social framing also informs one of the interventions Berinsky tests in Chapter 4. Across several experiments, he finds that unlikely sources—e.g., a Republican refuting a rumor about a Democrat—are particularly effective at correcting misperceptions, precisely because the social cues embedded in a message affect whether people accept it. In this case, the costly signal sent by a Republican acting contrary to partisan expectations makes the correction more believable.
Chapter 6, “The Role of Political Elites,” is the book’s most novel and arguably most important contribution. In it, Berinsky steps back from the atomized message/receiver model of exposure and places misinformation in its larger political and social context. He begins by observing that Republican survey respondents are often much more likely than their Democratic counterparts to endorse pro-attitudinal rumors. He argues that this asymmetry stems not from fundamental personality differences but from different signals sent by Republican elites. A crowd-sourced content analysis of media coverage supports this hypothesis, showing that Republican elites are substantially less likely to reject rumors wholeheartedly. To the extent that Republican voters pick up on these signals, their greater willingness to endorse rumors is unsurprising. As Berinsky notes in his conclusion, “the battle to curb the spread of rumors and misinformation should begin with elites and the ways in which they talk about politics” (p. 163).
Political Rumors makes an important contribution to the study of misinformation by emphasizing that designing effective interventions to curb the spread of misinformation requires more attention to the larger social and political context in which that spread occurs. Relying only on a mental model of a person browsing Facebook and stumbling on “fake news” limits the scope of interventions we can imagine. By seeing misinformation not just as a problem of individual judgment but also as one of creator incentives, social context, and elite cues, we can design and test more effective strategies for increasing accuracy and improving democratic competence.