
When Do Sources Persuade? The Effect of Source Credibility on Opinion Change

Published online by Cambridge University Press:  02 March 2022

Bernhard Clemm von Hohenberg*
Affiliation:
ASCoR, University of Amsterdam, Nieuwe Achtergracht 166, 1018 WV, Amsterdam, Netherlands; Twitter: @bernhardclemm
Andrew M. Guess
Affiliation:
Department of Politics and School of Public and International Affairs, Princeton University, Fisher Hall, Princeton, NJ 08544, USA; Twitter: @andyguess
*
*Corresponding author. Email: [email protected]

Abstract

Discussions around declining trust in the US media can be vague about its effects. One classic answer comes from the persuasion literature, in which source credibility plays a key role. However, existing research almost universally takes credibility as a given. To overcome the potentially severe confounding that can result from this, we create a hypothetical news outlet and manipulate the extent to which it is portrayed as credible. We then randomly assign subjects to read op-eds attributed to the source. Our credibility treatments are strong, increasing trust in our mock source for up to 10 days. We find some evidence that the resulting higher perceived credibility boosts the persuasiveness of arguments about more partisan topics (but not about a less politicized issue). Though our findings are mixed, we argue that this experimental approach can fruitfully enhance our understanding of the interplay between source trust and opinion change over sustained periods.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of The Experimental Research Section of the American Political Science Association

Introduction

Americans’ trust in mainstream media is increasingly lopsided. Mirroring other institutions, overall levels of trust have generally declined, while citizens continue to polarize along partisan lines over which sources they consider trustworthy (Ladd 2011; Guess, Nyhan, and Reifler 2017). These patterns are troubling: functional deliberation requires some common ground among citizens, but with partisan media presenting different versions of reality, factual beliefs are increasingly likely to polarize (Arceneaux and Johnson 2013; Bakshy, Messing, and Adamic 2015; Tsfati 2010). In reaction to these trends, civic actors as well as platform providers have tried to establish ideologically neutral markers of media quality. For example, Facebook introduced a “trusted sources” feature, and the website and browser plugin NewsGuard provides “trust ratings” for a large range of news organizations.

Underlying these developments are fundamental questions about the relationship between people’s perceptions of news sources and the extent to which their attitudes are subject to change when they encounter information from those sources. Early studies by Hovland and others proposed the core idea that persuasion hinges on the perceived credibility of the communicator (Hovland and Weiss 1951; Hovland, Janis, and Kelley 1953). The assumption that credible sources have the power to persuade is central to the literature, as a recent review on persuasion – listing “speakers/sources” as one of four key variables – shows (Druckman 2022).

The importance of source credibility is strongly supported by studies that conceive of sources as politicians, linking experimental stimuli to endorsement by an in-party or out-party member (e.g., Kam 2005; Pink et al. 2021). However, the evidence is more complex when it comes to sources as media organizations. One problem with existing research is that it often takes the credibility of media sources as a given, studying well-known entities such as national newspapers. This is problematic because patterns of perceived credibility can be confounded in any number of ways. For instance, people may be more likely to trust a news source because of its inherent quality (Pennycook and Rand 2019), or because it is perceived to be nearly synonymous with their partisan perspective (Jurkowitz et al. 2020).

To overcome the possibility of confounding in attitudes toward sources, we create a hypothetical news outlet and, in a preregistered experiment, attempt to manipulate participants’ perceived credibility of this source, with stimuli that resemble the aforementioned attempts to highlight a source’s objective characteristics. We then independently manipulate the political slant of this outlet by randomly assigning subjects to read op-eds attributed to the source’s editorial board, in the tradition of research on persuasion (Coppock, Ekins, and Kirby 2018; Guess and Coppock 2018). In contrast to many previous studies, our three-wave design also allows for testing the persistence of treatments (cf. Hill et al. 2013).

Our credibility interventions durably affect subjects’ attitudes toward our hypothetical source. These induced changes in perceived credibility do somewhat interact with our more partisan persuasion treatments; this is most evident for the high-credibility treatments on arguments in favor of the conservative positions we study. Taken together with suggestive evidence of heterogeneity by partisanship and pretreatment media trust, our findings are consistent with documented polarization in media attitudes (and worries about its consequences). Surprisingly, we do not find evidence that the credibility treatments have an effect on the persuasiveness of op-eds about a more nonpartisan issue.

Hypotheses

Research on source credibility seeks to understand which characteristics of sources make them more or less credible to individuals and how such perceived credibility affects communication outcomes. The classical experimental design used since the era of Hovland exposes subjects to the same information attributed to one of two (or more) sources (Hovland and Weiss 1951). The contrast is often between well-known, or stereotypical, entities – for example, between a “Princeton professor” and a “local high school class” (Petty, Cacioppo, and Goldman 1981). In a typical study in an online news context, Greer (2003) contrasts information from The New York Times with that from a “personal blog.”

In these designs, it often remains unclear what it is about the source that matters: Is it The New York Times’ perceived credibility that has an effect, or some other characteristic that might covary with it, such as familiarity? Furthermore, especially in highly politicized media markets, credibility perceptions are not uniform. As the growing polarization of media trust in the USA demonstrates (Jurkowitz et al. 2020), partisanship likely shapes people’s evaluations of news sources’ credibility. This suggests that research designs using the names of real news organizations as treatments (Kang et al. 2011; Go, Jung, and Wu 2014) may introduce confounding with other associations those names raise for participants.

In this study, we follow a different and novel approach, intervening on credibility perceptions more directly. Specifically, we focus on characteristics that are particularly constitutive of the quality of news sources, namely whether a source provides accurate, transparent, and financially independent reporting. Endorsement cues about these properties (a statement by a third party) as well as reputation cues (illustrating that these qualities are widely acclaimed) should affect perceived credibility. In contrast to prior research, we present the same made-up source, described with such cues. To begin with, we are interested in how this affects whether people trust information published by this source (Strömbäck et al. 2020), how favorably they feel toward it, and how biased they consider it to be:

H1a: Induced high (low) source credibility will increase (decrease) favorability toward the source.

H1b: Induced high (low) source credibility will increase (decrease) trust toward the source.

H1c: Induced high (low) source credibility will decrease (increase) the perceived bias of the source.

It could be that providing information about the target source does not only change perceptions of that source but also affects perceptions of the media in general. For example, when a target source is described as not credible, it could be that people think of it as indicative of larger problems with the news media and lower their opinions of other outlets as well. To date, there is little evidence about such spillover effects, and we therefore pose the following questions:

RQ1a: Will induced high (low) source credibility increase (decrease) favorability toward non-target sources?

RQ1b: Will induced high (low) source credibility increase (decrease) trust toward non-target sources?

RQ1c: Will induced high (low) source credibility decrease (increase) the perceived bias of non-target sources?[1]

Part of the function of news outlets is airing opinions: What should citizens think about a certain policy, candidate, or party? Much research has been devoted to the question of whether the media do, in fact, persuade (e.g., Dalton, Beck, and Huckfeldt 1998; Gerber, Karlan, and Bergan 2009; Jerit, Barabas, and Clifford 2013). We build on research studying persuasion at the article level, in which researchers expose subjects to one of two op-eds on some policy issue and then measure their attitudes on that issue (e.g., Cobb and Kuklinski 1997; Coppock, Ekins, and Kirby 2018).

Several frameworks from psychology and political science suggest that persuasive messages are particularly likely to be effective for nonpartisan issues. Receivers are less likely to resist communications about unfamiliar issues (Zaller 1992), and dual-process models predict that receivers are more likely to attend to “peripheral” cues such as sources when personal involvement with the topic is low (Chaiken 1980; Petty and Cacioppo 1986). As an example of such a nonpartisan issue, take the case of “short-time work” policies, implemented during the COVID-19 pandemic in some European countries: these schemes allow companies to reduce the working hours and wages of their employees, to whom the state pays compensation. Some commentators suggested that the US should adopt similar policies, but the topic never became widely discussed and was not as politicized as other issues related to the pandemic. Hence, we expect that people’s opinion on this policy could be swayed by an op-ed:

H2a: The short-time work op-ed treatment will increase support for short-time work policies.

The source credibility literature posits that this persuasive effect depends on whether the source has high or low credibility (Hovland, Janis, and Kelley 1953; McGuire 1969). However, experiments testing source effects in persuasion typically rely on preconceived notions about existing sources rather than intervening on credibility perceptions. In contrast, we examine whether the persuasive power of higher-credibility sources holds when credibility is manipulated directly:

H2b: Induced high (low) source credibility will increase (decrease) the effect of the op-ed treatment on support for short-time work policies.

A recurring question in persuasion research is how persistent persuasion actually is. Most early laboratory studies find that opinion changes decay within a few days or a week (cf. Hill et al. 2013), and field studies yield mixed conclusions about durability (Franz and Ridout 2010; Huber and Arceneaux 2007; Shaw 1999). None of these studies have looked at effect durability against the backdrop of source credibility.[2] We predict that our persuasive communication may have an effect lasting up to 10 days[3] and again expect the delayed effect to be moderated by the type of source:

H3a: The short-time work op-ed treatment will increase support for short-time work policies up to 10 days later.

H3b: Induced high (low) source credibility will increase (decrease) the effect of the op-ed treatment on support for short-time work policies up to 10 days later.

To obtain a more comprehensive picture of persuasion and source effects, we also examine two more controversial issues, namely gun control and economic protectionism. Both issues were more highly charged at the time of our study. Compared to the more nonpartisan issue of short-time work policies, effects could be somewhat weaker for partisan issues, which dual-process models predict will meet more resistance among receivers. However, we expect persuasion to have some effect for these partisan issues as well, as more recent studies show (e.g., Coppock, Ekins, and Kirby 2018; Guess and Coppock 2018). Given two treatments with opposite stances on each issue, we predict:

H4a: The gun control op-ed treatments will change support for gun control in the direction of the information provided.

H4b: The protectionism op-ed treatments will change support for protectionism in the direction of the information provided.

Similar considerations as above lead us to expect another interaction with the source’s credibility:

H5a: Induced high (low) source credibility will increase (decrease) the effect of the gun control op-ed treatments on support for gun control.

H5b: Induced high (low) source credibility will increase (decrease) the effect of the protectionism op-ed treatments on support for protectionism.

Finally, we are interested in understanding heterogeneity in any of the hypothesized effects. We rely on a systematic, preregistered procedure to flexibly search for treatment heterogeneity. Without clear theoretical expectations, we ask:

RQ2: Do the treatment effects posited vary across subgroups?

Experimental design

Sample

We conducted a three-wave survey experiment in autumn 2020. Wave 1 was fielded on October 14 and closed on October 21; Wave 2 was in the field October 22–28; and Wave 3 between October 29 and November 2. All hypotheses, experimental procedures, and analyses were preregistered after Wave 1 was fielded but before Wave 2 data collection began (https://osf.io/bmfy2/). We investigated the statistical power of our design with the DeclareDesign framework (Blair et al. 2019), finding that 2,500 participants would be sufficient to detect small main effects. Our sample of US respondents was recruited by Dynata with a quota set on partisanship. Respondents who did not pass a basic attention check were filtered out. The first-wave sample had a median age of 62 years and was 49.4% female and 55.5% college-educated. Of the 2,497 participants in Wave 1, 1,879 followed up in Wave 2, of whom 1,635 followed up in Wave 3. Formal attrition tests reported in the SI (section E) show that the attrition rate was neither asymmetric between treatments nor attributable to observed sociodemographic characteristics.
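The simulation-based logic behind such a power analysis can be sketched in a few lines. The effect size, test statistic, and defaults below are our own illustrative assumptions, not the preregistered values:

```python
import numpy as np

def simulated_power(n, effect=0.1, sd=1.0, sims=2000, seed=1):
    """Share of simulated two-arm experiments whose difference-in-means
    test rejects the null at the 5% level (normal approximation)."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(sims):
        control = rng.normal(0, sd, n // 2)
        treated = rng.normal(effect, sd, n // 2)
        diff = treated.mean() - control.mean()
        se = np.sqrt(control.var(ddof=1) / len(control)
                     + treated.var(ddof=1) / len(treated))
        if abs(diff / se) > 1.96:
            rejections += 1
    return rejections / sims

# With ~2,500 subjects split across two arms, a 0.1-SD main effect is
# detected in a reasonable share of simulated experiments.
print(simulated_power(2500))
```

The same loop, rerun over a grid of sample sizes, recovers the familiar power curve; DeclareDesign automates exactly this declare-simulate-diagnose cycle.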

Experimental procedure

After obtaining participants’ informed consent and measuring some pretreatment variables (see SI-D), we presented subjects with the logo of a made-up source called “24hr Nation,” which a pretest showed to be both unfamiliar to people and perceived as unbiased (see SI-A). In the remainder of Wave 1 and the other two waves, subjects were randomly assigned to three consecutive treatments (fully crossed) involving this source.

The credibility treatment manipulated whether our fictional source was presented as credible. In Wave 1, subjects saw a screenshot of the “Press Award 2019” website. In the high-credibility condition, subjects read that the “editorial team of 24hr Nation won the Independence in Journalism Award.” In the low-credibility condition, 24hr Nation was presented as winning the “Ignoble Press Award.”[4] In the control condition, no information about the source was given other than the name and logo. The treatment assignment was carried over to Wave 2, in which we aimed at strengthening the manipulation with a quality report about 24hr Nation by the fictitious “Media Checkup.” It either said that 24hr Nation “adheres to all nine of MediaCheckup’s standards of credibility and transparency” (high-credibility condition) or that 24hr Nation “severely violates basic standards of credibility and transparency” (low-credibility condition). Both stimuli were effective on credibility-related measures compared with a control condition in our pretest. Figure 1 shows an example (see SI-C for stimuli).

The two other treatments were designed to test persuasion effects. The nonpartisan issue treatment, administered in Wave 2 after the second credibility stimulus, revolved around the benefits of short-time work policies to fight unemployment related to the pandemic. In the persuasion condition, we asked subjects to read an article adapted from a real op-ed arguing for that policy. In the placebo condition, subjects read an article about the benefits of hiking. In Wave 3, subjects were randomly assigned in the partisan issue treatment, which involved two op-eds on gun control and economic protectionism. In the “pro-Democrat” persuasion condition, subjects read one article in favor of stricter gun laws and one arguing against economic nationalism. In the “pro-Republican” condition, the articles argued for the opposite positions. Again, these were adapted from real news articles and selected to make a strong case. The placebo condition showed two articles about nonpolitical topics. All articles were presented as authored by the editorial board of 24hr Nation; the article texts were preceded by a screenshot of the fictional website (see Figure 2).

Figure 1 Screenshots used in high-credibility condition Wave 1.

Figure 2 Example of website headline screenshot in Wave 2.

Outcome measures

We captured credibility-related perceptions of sources with three measures: favorability toward a source, trust toward a source, and perceived bias of a source (five-point scale from “favor liberal side” to “favor conservative side,” folded for all analyses so that the midpoint corresponds to a perceived bias of 0). Note that we may refer to these three outcomes simply as “perceived credibility.” We asked these questions for the target source, 24hr Nation, and three nontarget sources (The New York Times, The Wall Street Journal, and USA Today) once in each wave. Following the nonpartisan persuasion treatment in Wave 2 and again at the beginning of Wave 3, we asked subjects about their support for short-time work policies. After the partisan issue treatments, we administered two attitude batteries on gun control and economic protectionism (also measured in Wave 2 as a pretreatment covariate). As detailed in the SI (section D), only the gun control battery could be reduced to a single index. The protectionism battery yielded inter-item correlations opposite to what we expected, so instead of averaging it into an index, we use a single item asking whether respondents considered increased tariffs positive or negative.
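The folding of the bias scale amounts to taking the absolute distance from the neutral midpoint, so that "favor liberal side" and "favor conservative side" count as equally biased. A minimal sketch (function and variable names are ours):

```python
# Fold a five-point perceived-bias scale (1 = "favor liberal side",
# 3 = neutral midpoint, 5 = "favor conservative side") so that the
# midpoint maps to 0 and either extreme maps to 2.
def fold_bias(raw_score: int) -> int:
    """Return the absolute distance from the neutral midpoint (3)."""
    return abs(raw_score - 3)

# A response of 3 indicates no perceived bias; 1 and 5 are equally biased.
print([fold_bias(x) for x in [1, 2, 3, 4, 5]])  # [2, 1, 0, 1, 2]
```

The folded measure discards the direction of perceived bias, which is what allows pooling respondents who see the source as leaning left with those who see it leaning right.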

Results

All of the following analyses are prespecified except where indicated. We test all our primary hypotheses with two models: first, unadjusted regressions of the outcome on the treatment; and second, regressions including covariates selected through a lasso procedure. Full regression tables, as well as robustness checks, can be found in the SI (section G).
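The two-model strategy can be illustrated on synthetic data. This is a sketch of the general lasso-then-adjust logic, not the authors' code; the covariate names, effect sizes, and selection threshold are our own illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-ins for the experiment: a binary treatment,
# pretreatment covariates, and an outcome with a small true effect.
treat = rng.integers(0, 2, n).astype(float)
covs = rng.normal(size=(n, 3))
names = ["age", "media_trust", "pid"]   # hypothetical covariates
y = 0.2 * treat + 0.5 * covs[:, 1] + rng.normal(0, 1, n)

# Model 1: unadjusted regression of the outcome on the treatment.
unadjusted = LinearRegression().fit(treat.reshape(-1, 1), y)

# Model 2: lasso-select prognostic covariates, then regress the
# outcome on the treatment plus the selected covariates.
lasso = LassoCV(cv=5, random_state=0).fit(covs, y)
keep = np.abs(lasso.coef_) > 1e-6
adjusted = LinearRegression().fit(np.column_stack([treat, covs[:, keep]]), y)

selected = [nm for nm, k in zip(names, keep) if k]
print("selected covariates:", selected)
print("unadjusted effect: %.2f, adjusted effect: %.2f"
      % (unadjusted.coef_[0], adjusted.coef_[0]))
```

Because treatment is randomized, both models target the same effect; the adjusted model mainly buys precision by absorbing outcome variance with the lasso-selected covariates.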

Perceived credibility

Looking at our first set of expectations, H1a–c, Figure 3 shows that the high-credibility condition increases favorability and trust toward 24hr Nation and decreases its perceived bias, compared to the control condition. The opposite is the case for the low-credibility condition. These effects are substantial, as the figure suggests: for example, the low-credibility treatment, compared to the control condition, reduces trust in 24hr Nation by 0.548 points on a five-point scale. We take these results as evidence that our manipulation worked as intended, as significant regression coefficients (see SI-G) also show. Notably, there are also hints of small compensatory effects: those to whom we present 24hr Nation as highly credible show slightly more negative attitudes toward The New York Times, The Wall Street Journal, and USA Today. These findings suggest the possibility that the amount of available media trust is fixed: as trust in one source increases, there is less available for other sources. This “conservation of media trust,” though only suggestive in our results thus far, could be tested in further research.

Figure 3 Treatment means related to H1a-c and RQ1a-c.

We also show in the SI (section H) that the credibility manipulation carries over to Wave 3, on average about 6 days after we provided any information about the news source. The effects on favorability and trust toward, and on perceived bias of, 24hr Nation are still significant, although only about half as strong. Thus, our credibility treatments did not merely have a fleeting effect, and the evidence suggests that people’s attitudes toward new information sources can be quite durable.

Persuasion and credibility

The second part of our analysis concerns the effects of persuasive communication on policy attitudes. In Wave 2, we asked subjects in the treatment condition to read an op-ed by 24hr Nation on a novel issue that did not clearly align with partisanship, namely the introduction of a short-time work policy to fight unemployment during the pandemic. We expected that people would express more support for this kind of policy after reading the article compared to receiving a placebo text (H2a), but that this persuasive effect would depend on whether subjects had been presented the source as low credibility or high credibility (H2b). We further hypothesized that both the main persuasion effect (H3a) and the interaction effect (H3b) would persist until the beginning of Wave 3. Figure 4 shows distributions and means on the outcome variable, that is, policy support, grouped by treatment group, both for outcome measurement directly after the persuasive message in Wave 2 (left panel), and at the beginning of Wave 3 (right panel).

Figure 4 Treatment means related to H2a, H2b, H3a, and H3b.

Testing hypotheses H2a and H2b formally, we do not find measurable main effects of persuasion directly after exposing subjects to the communication in Wave 2. What is more, even though the plot suggests that persuasion is greater when coming from a credible source, this interaction is statistically insignificant when comparing credibility treatments with control. Note that these null effects remain when we take into account an attention check directly after the treatment (see SI-I.3). Surprisingly, despite the initial lack of evidence for an effect, we find that the persuasion treatment does seem to matter when the same outcome is measured at the beginning of Wave 3 (H3a). Subjects in the persuasion condition are 0.19 points more favorable toward short-time work policies on a scale from 1 to 5 in the unadjusted model (p < 0.001; similar effect size for adjusted model, p < 0.001). However, the credibility of the source again does not affect persuasion (H3b), though it is possible that our lack of power prevented us from precisely estimating such an interactive effect.

Finally, we investigate the effects of persuasive information around partisan issues, specifically gun control and economic protectionism (see Figure 5). Testing the main effects (H4a and H4b), we find mixed results for the “pro-Republican” (for gun rights and economic protectionism) treatment. It has virtually no effect on gun rights attitudes and a small but not statistically significant effect on protectionism attitudes. The “pro-Democrat” treatment reveals some significant, but not very robust, effects: it makes people 0.17 points more supportive of gun restrictions in the saturated model (p = 0.04), but not in the unadjusted model (p = 0.13), and 0.19 points less in favor of economic protectionism in the unadjusted model (p = 0.01; adjusted model: p = 0.00).

Figure 5 Treatment means related to H4a, H4b, H5a, and H5b.

Irrespective of the lack of robust main effects, persuasion might be more effective when the communication originates from a source perceived as credible. Figure 5 suggests that differences between persuasion conditions indeed vary by credibility condition. Formal tests reveal that the high-credibility condition matters, especially for the pro-Republican arguments. First, consider the guns topic (left panel of Figure 5). For the pro-Republican condition, the high-credibility treatment does increase the persuasive effect significantly: for subjects who received no information about 24hr Nation, the predicted difference between pro-Republican persuasion and control is −0.22 points (which implies that the op-ed actually achieves the reverse of its intention). But it is 0.34 points, that is, shifting attitudes to a more pro-gun position, for subjects who received high-credibility information about 24hr Nation (interaction effect in the unadjusted model: p = 0.02; adjusted model: p = 0.15).

Similarly, for the protectionism topic, the predicted difference between pro-Republican persuasion and control is −0.18 points on a five-point scale (again suggesting an adverse reaction to the op-ed) for subjects who received no information about 24hr Nation, but it is 0.38 for subjects who received high-credibility information about 24hr Nation (interaction effect in the unadjusted model: p = 0.00; adjusted model, p = 0.00). In other words, the communication in favor of gun rights and protectionism mainly worked when the source was presented in a good light.
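Under the interaction specification these tests imply, the two predicted differences pin down the interaction coefficient. A minimal arithmetic check for the protectionism result (coefficient names are ours, not the authors' notation):

```python
# Interaction specification of the form
#   outcome = b0 + b1*persuasion + b2*high_cred + b3*(persuasion * high_cred)
# The persuasion effect under the credibility control is b1; under the
# high-credibility treatment it is b1 + b3.
b1 = -0.18           # reported persuasion effect, control source
effect_high = 0.38   # reported persuasion effect, high-credibility source
b3 = effect_high - b1
print(round(b3, 2))  # implied interaction coefficient: 0.56
```

So the credibility manipulation flips the sign of the persuasion effect, moving it by more than half a scale point.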

There is also a significant, though less robust, interaction between the pro-Republican protectionism and low-credibility treatments (unadjusted model: p = 0.07; adjusted model: p = 0.03) in the same direction as the interaction with the high-credibility treatment. This unexpected result could be due to two factors. First, the protectionism issue is arguably more complex and certainly less salient than gun regulation; this may have made it difficult for participants to map the arguments presented in the op-eds onto the policy options available in the survey measures (Zaller 1992). Second, our credibility treatments may have more consistently affected familiarity with the source than perceptions of credibility (though we note that the control condition did include the name and logo of 24hr Nation).

Heterogeneity

We find little evidence for substantial treatment heterogeneity, though there are some indications that pretreatment general media trust and ideology moderate our effects (see SI-J for details). For example, those with higher levels of general media trust are more affected by our low-credibility treatment (compared to the control) when asked about their perceptions of 24hr Nation, but less affected by the high-credibility treatment. Respondents who are more Republican and more conservative are more affected by the high-credibility treatment: they perceive 24hr Nation more positively when treated. In contrast, those who are more liberal and more Democratic are more affected by the low-credibility treatment, in the sense that they perceive 24hr Nation more negatively when treated. This suggests some interesting contrasts: encountering a source unknown to them, those on the political left are more likely to be impressed by information depicting it as non-credible; those on the political right, by information depicting it as credible.

Conclusion

We designed a three-wave experiment that independently randomizes the source credibility of a hypothetical online news source and the direction of arguments presented to subjects in the form of articles published by that source. On the one hand, we demonstrate that our credibility manipulations worked: we measurably and durably moved people’s favorability toward, trust in, and perceptions of bias of this new source. On the other hand, this manipulation – however strong and long-lasting – does not seem to produce consistent or robust interactions with our persuasive treatments.

There are a number of possible reasons for the pattern of results that we observe. For example, perhaps people’s trust in news sources is so strongly related to partisanship that disentangling source credibility from political slant, as we do in this study, reveals a counterfactual reality that rarely manifests in real life. Designs that manipulate source information in ways that make the partisan orientation of the outlet explicit may thus produce more generalizable findings (Bauer and Clemm von Hohenberg 2020). Still, we argue that conceptualizing credibility and slant as orthogonal sheds light on underlying processes that can often be confounded within the existing media ecosystem. Another possibility is that perceived source credibility may matter more for outcomes that we do not study here, such as selective exposure in information seeking (Peterson and Iyengar 2021).

We uncover a number of suggestive findings that may inform future research. For example, it seems that high-credibility treatments are more effective among those with low media trust, including Republicans and conservatives, while liberals, who are more trusting of media, are more receptive to low-credibility treatments. This is perhaps surprising given correlations between partisan identification and attitudes toward the media, but such observational findings easily confound these attributes. Also, our results on source credibility point to a possible thermostatic aspect of people’s relationship to media outlets: as trust and favorability toward a new, unfamiliar source increase, there is a corresponding decrease toward preexisting, familiar sources. The consequences of a potentially finite pool of source trust in an increasingly fragmented and dynamic media ecosystem have yet to be fully understood. Thus, we hope to inspire more research on questions around media trust, a much-cited and oft-measured attitude whose causes and consequences are still not fully clear.

Supplementary material

To view supplementary material for this article, please visit https://doi.org/10.1017/XPS.2022.2

Data availability statement

Support for this research was provided by Princeton University. The data, code, and any additional materials required to replicate all analyses in this article are available at the Journal of Experimental Political Science Dataverse within the Harvard Dataverse Network, at https://doi.org/10.7910/DVN/GEJGV8.

Conflicts of interest

The authors declare no conflicts of interest.

Ethics statement

We obtained informed consent from all participants, who could choose not to answer any questions or withdraw from the study at any time. Compensation was given in the form of points delivered by the survey vendor. At the end of the study, subjects were debriefed about the fictitious nature of the media outlet, “Media Checkup,” and the “Free Press Award.” This study was approved by the Princeton University IRB (protocol 12797).

Footnotes

This article has earned badges for transparent research practices: Open Data and Open Materials. For details see the Data Availability Statement.

1 Note that our pre-analysis plan includes further hypotheses H1e, H1f, and H1g. As the results are very similar to those for H1a–c, we report them in the SI (section G).

2 An exception is the study by Kelman and Hovland (1953), in which subjects are reminded about the identity of the source at the second measurement of their attitude.

3 We initially based this duration on prior work assessing persistence and decay of persuasive treatment effects (Coppock 2016), though as we detail below, the amount of time that elapsed between waves was significantly shorter for most subjects.

4 The “Ignoble Press Award” was presented to subjects as given to the owners of 24hr Nation rather than to the editorial team, as in the high-credibility condition, since the awardees were not expected to attend the hypothetical “ceremony” for this mock award. We acknowledge this difference; the second set of treatments was constructed to avoid even minor confounds of this sort.

References

Arceneaux, Kevin and Johnson, Martin. 2013. Changing Minds or Changing Channels? Partisan News in an Age of Choice. University of Chicago Press.
Bakshy, Eytan, Messing, Solomon, and Adamic, Lada A. 2015. Exposure to Ideologically Diverse News and Opinion on Facebook. Science 348(6239): 1130–32.
Bauer, Paul C. and Clemm von Hohenberg, Bernhard. 2020. Believing and Sharing Information by Fake Sources: An Experiment. Political Communication 38(6): 1–25. https://doi.org/10.1080/10584609.2020.1840462.
Blair, Graeme, Cooper, Jasper, Coppock, Alexander, and Humphreys, Macartan. 2019. Declaring and Diagnosing Research Designs. American Political Science Review 113(3): 838–59. https://doi.org/10.1017/S0003055419000194.
Chaiken, Shelly. 1980. Heuristic versus Systematic Information Processing and the Use of Source versus Message Cues in Persuasion. Journal of Personality and Social Psychology 39(5): 752–66. https://doi.org/10.1037//0022-3514.39.5.752.
Clemm von Hohenberg, Bernhard and Guess, Andrew M. 2021. Replication Data for: When Do Sources Persuade? The Effect of Source Credibility on Opinion Change. Harvard Dataverse. https://doi.org/10.7910/DVN/GEJGV8.
Cobb, Michael D. and Kuklinski, James H. 1997. Changing Minds: Political Arguments and Political Persuasion. American Journal of Political Science 41(1): 88–121. https://doi.org/10.2307/2111710.
Coppock, Alexander. 2016. The Persistence of Survey Experimental Treatment Effects. Unpublished manuscript.
Coppock, Alexander, Ekins, Emily, and Kirby, David. 2018. The Long-Lasting Effects of Newspaper Op-Eds on Public Opinion. Quarterly Journal of Political Science 13(1): 59–87. https://doi.org/10.1561/100.00016112.
Dalton, Russell J., Beck, Paul A., and Huckfeldt, Robert. 1998. Partisan Cues and the Media: Information Flows in the 1992 Presidential Election. American Political Science Review 92(1): 111–26. https://doi.org/10.2307/2585932.
Druckman, James N. 2022. A Framework for the Study of Persuasion. Annual Review of Political Science 25(1). https://www.annualreviews.org/doi/abs/10.1146/annurev-polisci-051120-110428.
Franz, Michael M. and Ridout, Travis N. 2010. Political Advertising and Persuasion in the 2004 and 2008 Presidential Elections. American Politics Research 38(2): 303–29. https://doi.org/10.1177/1532673X09353507.
Gerber, Alan S., Karlan, Dean, and Bergan, Daniel. 2009. Does the Media Matter? A Field Experiment Measuring the Effect of Newspapers On …. American Economic Journal: Applied Economics 1(2). http://www.atypon-link.com/AEAP/doi/abs/10.1257/app.1.2.35.
Go, Eun, Jung, Eun Hwa, and Wu, Mu. 2014. The Effects of Source Cues on Online News Perception. Computers in Human Behavior 38: 358–67. https://doi.org/10.1016/j.chb.2014.05.044.
Greer, Jennifer D. 2003. Evaluating the Credibility of Online Information: A Test of Source and Advertising Influence. Mass Communication and Society 6(1): 11–28. https://doi.org/10.1207/S15327825MCS0601.
Guess, Andrew and Coppock, Alexander. 2018. Does Counter-Attitudinal Information Cause Backlash? Results from Three Large Survey Experiments. British Journal of Political Science 50(4): 1497–1515. https://doi.org/10.1017/S0007123418000327.
Guess, Andrew, Nyhan, Brendan, and Reifler, Jason. 2017. ‘You’re Fake News!’ The 2017 Poynter Media Trust Survey.
Hill, Seth J., Lo, James, Vavreck, Lynn, and Zaller, John. 2013. How Quickly We Forget: The Duration of Persuasion Effects from Mass Communication. Political Communication 30(4): 521–47. https://doi.org/10.1080/10584609.2013.828143.
Hovland, Carl Iver, Janis, Irving Lester, and Kelley, Harold H. 1953. Communication and Persuasion. New Haven, CT: Yale University Press.
Hovland, Carl Iver and Weiss, Walter. 1951. The Influence of Source Credibility on Communication Effectiveness. The Public Opinion Quarterly 15(4): 635–50. https://doi.org/10.1086/266350.
Huber, Gregory A. and Arceneaux, Kevin. 2007. Identifying the Persuasive Effects of Presidential Advertising. American Journal of Political Science 51(4): 961–81.
Jerit, Jennifer, Barabas, Jason, and Clifford, Scott. 2013. Comparing Contemporaneous Laboratory and Field Experiments on Media Effects. Public Opinion Quarterly 77(1): 256–82. https://doi.org/10.1093/poq/nft005.
Jurkowitz, Mark, Mitchell, Amy, Shearer, Elisa, and Walker, Mason. 2020. U.S. Media Polarization and the 2020 Election: A Nation Divided. Pew Research Center.
Kam, Cindy D. 2005. Who Toes the Party Line? Cues, Values, and Individual Differences. Political Behavior 27(2): 163–82. https://doi.org/10.1007/s11109-005-1764-y.
Kang, Hyunjin, Bae, Keunmin, Zhang, Shaoke, and Sundar, S. Shyam. 2011. Source Cues in Online News: Is the Proximate Source More Powerful Than Distal Sources? Journalism & Mass Communication Quarterly 88(4): 719–36.
Kelman, Herbert C. and Hovland, Carl I. 1953. “Reinstatement” of the Communicator in Delayed Measurement of Opinion Change. The Journal of Abnormal and Social Psychology 48(3): 327–35.
Ladd, Jonathan M. 2011. Why Americans Hate the Media and How It Matters. Princeton University Press.
McGuire, W. J. 1969. The Nature of Attitudes and Attitude Change. In The Handbook of Social Psychology, eds. Lindzey, G. and Aronson, E. Reading, MA: Addison-Wesley.
Pennycook, Gordon and Rand, David G. 2019. Fighting Misinformation on Social Media Using Crowdsourced Judgments of News Source Quality. Proceedings of the National Academy of Sciences 116(7): 2521–26.
Peterson, Erik and Iyengar, Shanto. 2021. Partisan Gaps in Political Information and Information-Seeking Behavior: Motivated Reasoning or Cheerleading? American Journal of Political Science 65(1): 133–47.
Petty, Richard E. and Cacioppo, John T. 1986. Communication and Persuasion: Central and Peripheral Routes to Attitude Change. New York, NY: Springer. https://doi.org/10.1007/978-1-4612-4964-1.
Petty, Richard E., Cacioppo, John T., and Goldman, Rachel. 1981. Personal Involvement as a Determinant of Argument-Based Persuasion. Journal of Personality and Social Psychology 41(5): 847–55. https://doi.org/10.1037/0022-3514.41.5.847.
Pink, Sophia L., Chu, James, Druckman, James, Rand, David G., and Willer, Robb. 2021. Elite Party Cues Increase Vaccination Intentions among Republicans. Proceedings of the National Academy of Sciences 118(32).
Shaw, Daron R. 1999. The Effect of TV Ads and Candidate Appearances on Statewide Presidential Votes, 1988–96. The American Political Science Review 93(2): 345–61.
Strömbäck, Jesper, Tsfati, Yariv, Boomgaarden, Hajo, Damstra, Alyt, Lindgren, Elina, Vliegenthart, Rens, and Lindholm, Torun. 2020. News Media Trust and Its Impact on Media Use: Toward a Framework for Future Research. Annals of the International Communication Association 44(2): 139–56. https://doi.org/10.1080/23808985.2020.1755338.
Tsfati, Yariv. 2010. Online News Exposure and Trust in the Mainstream Media: Exploring Possible Associations. American Behavioral Scientist 54(1): 22–42.
Zaller, John. 1992. The Nature and Origins of Mass Opinion. Cambridge: Cambridge University Press.
Figure 1 Screenshots used in high-credibility condition Wave 1.

Figure 2 Example of website headline screenshot in Wave 2.

Figure 3 Treatment means related to H1a–c and RQ1a–c.

Figure 4 Treatment means related to H2a, H2b, H3a, and H3b.

Figure 5 Treatment means related to H4a, H4b, H5a, and H5b.
