
Heuristic Projection: Why Interest Group Cues May Fail to Help Citizens Hold Politicians Accountable

Published online by Cambridge University Press:  02 May 2023

David E. Broockman*
Affiliation:
Charles and Louise Travers Department of Political Science, University of California, Berkeley, CA, USA
Aaron R. Kaufman
Affiliation:
Division of Social Science, New York University Abu Dhabi, United Arab Emirates
Gabriel S. Lenz
Affiliation:
Charles and Louise Travers Department of Political Science, University of California, Berkeley, CA, USA
*Corresponding author. E-mail: [email protected]

Abstract

An influential perspective argues that voters use interest group ratings and endorsements to infer their representatives' actions and to hold them accountable. This paper interrogates a key assumption in this literature: that voters correctly interpret these cues, especially cues from groups with whom they disagree. For example, a pro-redistribution voter should support her representative less when she learns that Americans for Prosperity, an economically conservative group, gave her representative a 100 per cent rating. Across three studies using real interest groups and participants' actual representatives, we find limited support for this assumption. When an interest group is misaligned with voters' views and positively rates or endorses their representative, voters often: (1) mistakenly infer that the group shares their views, (2) mistakenly infer that their representative shares their views, and (3) mistakenly approve of their representative more. We call this tendency heuristic projection.

This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

Citizens often know little about their representatives' actions in office, leaving many scholars pessimistic about voters' potential to hold their representatives accountable (for example, Achen and Bartels 2016; Tausanovitch and Warshaw 2018).

An influential response to this pessimism is that citizens use cues from special interest groups (SIGs) about their representatives, such as SIG ratings or endorsements, as heuristics to infer their representatives' actions. For example, if citizens who support raising taxes on the wealthy see that the anti-tax Americans for Prosperity have endorsed their representative, the literature argues that these pro-tax citizens should infer that their representative is anti-tax and so should be less likely to support them. This, in turn, is thought to help citizens hold representatives accountable for their actions in office: citizens who would otherwise not know how their representative voted on important issues could infer this information from interest group cues and then base their vote in the next election on whether their representative voted per their preferences on those issues.

The literature generally describes this idea as thoroughly established. In Appendix B.2 we quote over two dozen studies that summarize the literature as demonstrating that voters commonly use SIG cues as a helpful heuristic to more accurately infer their representatives' positions and thus hold their representatives accountable. For example, Druckman, Kifer, and Parkin (2020) review the literature as demonstrating that SIG cues are ‘a common method by which citizens infer [candidates’] issue positions’ (p. 5). Arceneaux and Kolodny (2009) further review the idea that ‘politically unaware citizens’ use interest group ratings to ‘make [voting] decisions as if they possessed full information about the candidates’. As they note, ‘the group need not even be aligned with the voters’ interests, because signals from opposition groups can also be informative by indicating [which candidates] the voter should not support’ (p. 757).

This literature's theoretical logic is propitious for voters' ability to hold their representatives accountable, as SIG cues are ubiquitous, appearing in advertisements, press coverage, candidate websites, and voter guides. For example, Druckman, Kifer, and Parkin (2020) find that Congressional incumbents' campaign websites feature, on average, over ten SIG endorsements. In another analysis of the language candidates use to sell themselves in one state's voter pamphlets, we find that 50 per cent of candidates feature SIG support (see Appendix C).

In this paper, we interrogate a key assumption of this influential literature. We ask: do voters correctly interpret and appropriately respond to cues from interest groups, especially groups they are unfamiliar with?

Theories of heuristics note that citizens must know a SIG's stance in order to use its cues to infer their representatives' actions (McKelvey and Ordeshook 1985); for example, a voter must know whether a SIG is itself aligned or misaligned with the voter's own views in order to know whether that SIG's positive rating of their representative indicates that the representative is aligned or misaligned with those views. However, given low levels of general voter knowledge, we argue that scholars should not expect voters to be aware of many SIGs' stances. Indeed, although many studies express a view of interest group cues as informative to voters, others also note that citizens are unfamiliar with even prominent interest groups (see Dowling and Wichowsky 2013; Leeper 2013). For instance, we would not expect many voters to know whether Americans for Prosperity favour higher or lower taxes, even though it is one of the most active SIGs in contemporary American elections.Footnote 1

However, would citizens who do not know what Americans for Prosperity stand for simply disregard its cues because they are unable to interpret them, as the literature expects (for example, McKelvey and Ordeshook 1985)? We argue that voters do not always disregard cues from SIGs they are unfamiliar with; instead, they on average assume that unfamiliar SIGs agree with them – a process we call heuristic projection. There are several reasons why this may occur. One set of reasons is psychological. Decades ago, psychologists discovered that, in the absence of reliable information about what other people think, people often assume that others share their views (Ross, Greene, and House 1977), a process called ‘false consensus’ or, sometimes, projection. As a result of this process, when voters know little about interest group reputations, they may naively assume that interest groups share their policy views.Footnote 2

Our arguments raise the possibility that SIG cues might be counterproductive, as voters might assume that groups they are unfamiliar with agree with them, even when the groups do not. For instance, in the traditional view, when a pro-redistribution citizen learns that Americans for Prosperity has given her representative a 100 per cent rating, she becomes less supportive of that representative because she now thinks her representative opposes redistribution or, at worst, disregards the information if she cannot interpret it. To the extent heuristic projection operates in the real world, she would instead become more supportive of her representative because she now thinks her representative also supports redistribution. In other words, instead of disregarding unfamiliar cues from misaligned SIGs, citizens may naïvely act as if unknown SIGs share their views.

Despite many scholars endorsing the view that SIG cues aid citizens' judgements (see Appendix B.2), there is surprisingly little empirical research on how citizens use SIG cues in forming their judgements of politicians' policy positions. No study we are aware of experimentally evaluates the effect of real SIG cues on voters' judgements about real politicians without introducing other confounding variables (see Appendix B.1). In contrast, most studies of how citizens use SIG cues focus on how citizens use them to form issue preferences, such as in referendums.Footnote 3 Another related set of studies examines the disclosure of sponsors in campaign ads. Intriguingly, these studies sometimes find that disclosing that an unfamiliar interest group has sponsored an ad increases its effectiveness (Brooks and Murov 2012; Dowling and Wichowsky 2015; Ridout, Franz, and Fowler 2015; Weber, Dunaway, and Johnson 2012). Heuristic projection may help explain these findings. Appendix B.1 provides a further review of the limited empirical literature on how citizens use SIG cues to make inferences about representatives.

In this paper, we present new evidence on how SIG cues affect citizens' judgements of their incumbent representatives. Our data includes some of the most extensive tests to date of the standard view of SIG cues as helpful heuristics, as well as several tests of our arguments regarding voter ignorance and heuristic projection. We draw this data from three original surveys. To aid external validity, we always show respondents cues from real interest groups and how those interest groups rated respondents' actual representatives, and we ask about positions on real legislation (McDonald 2020) with competing cues (party affiliations) present, as they are in real-world electoral contexts.

We find that, at least in the context of our survey-based studies, voter ignorance of SIG stances is widespread, that voters rarely use SIG cues to form more accurate judgements, and that they engage in considerable heuristic projection. Scholars frequently note the conventional view that SIGs' ratings and endorsements should help citizens hold their representatives accountable by helping them make more accurate inferences about their representatives' policy positions (see Appendix B.2). But our findings raise doubts about this view. We find that citizens, on average, assume that SIGs share their views (Study 1), that providing SIG ratings to voters does not increase the accuracy of their perceptions of Members of Congress (MCs) (Study 2a), and that voters instead assume that politicians who earn positive ratings from SIGs share their views (Study 2b). As a result, citizens are more likely to support their representative for re-election when they learn their representative earned a positive rating from a SIG, even when they disagree with that SIG on policy (Study 3). Table 1 reviews our studies and survey samples, which are described in more detail below. Appendix D presents data on our samples. In Appendix C we show that the SIGs included in our treatments are among the most prominent SIGs in politics.

Table 1. Summary of Research Questions, Studies, and Samples in this Paper

Note: Details on sample representativeness are provided in Appendix D. Abbreviations: SIG stands for Special Interest Group; Samp. Strat. is the survey vendor Sample Strategies.

To summarize, the existing literature portrays SIG cues as, at worst, harmless (when voters are unfamiliar with the SIG; for example, McKelvey and Ordeshook 1985), but often helpful. We question this assumption as we advance two main arguments. First, we argue that voters will often not know what SIGs stand for, meaning they may often be unable to correctly interpret SIG cues or to form more accurate perceptions of their politicians' actions based on them. We support this argument in Studies 1, 2a, and 3 (as well as with additional data in Appendices F and G). Second, projection is a well-documented psychological phenomenon (see, for example, Ross, Greene, and House 1977). Consistent with research on projection, we argue that people may project their views onto a SIG when they do not know what the SIG stands for. We support this argument with data from Studies 1, 2b, and 3 (and in an additional study in Appendix H). In particular, at least in the context of our experiments, we find that people on average interpret SIG cues as if the SIGs agree with them, using SIG cues to reward their MCs for voting consistently with SIGs rather than with the voters' own preferences.

Our data is limited to survey-based experiments, although we always use real ratings that SIGs issued to the respondents' real MCs. As a result, rather than demonstrating definitively that heuristic projection operates in the real world, our research is best thought of as questioning the literature's assumption that SIG cues are, at worst, harmless and, at best, often helpful. Our results therefore raise doubt about widely shared hopes that interest group cues improve accountability. With this said, and although we take steps to achieve realism in our studies, our studies leave open the question of how often voters draw the wrong inference from interest groups in the real world: the perverse effects we find may be short-lived, may not survive competitive campaigns, or may have non-obvious general equilibrium implications for accountability. Lacking this evidence, our key conclusion is that scholars should not assume interest group cues help voters hold their representatives accountable and should conduct further research to determine whether they help or hurt accountability.

Study 1: Ideological Projection in Perceptions of Forty-Five Special Interest Groups

In our first study, we examine whether the public projects its views onto interest groups.Footnote 4 We identified the top forty-five SIGs ordered by 2016 total campaign receipts;Footnote 5 the most active SIGs should represent the easiest test for the traditional view of heuristics. Consistent with the traditional perspective on heuristics (see Appendix B.2), campaign finance scholars and the U.S. Supreme Court itself (in Buckley v. Valeo) have speculated that campaign spending and contributions from SIGs could serve as particularly useful heuristics for voters seeking to infer what politicians stand for from the interest groups who support them (for a review, see Wood 2018). But do voters know enough about these groups to interpret their ratings correctly? Or, to what extent does projection (or false consensus) lead them to conclude that SIGs share their views?

To examine this question, we conducted Study 1 on a demographically representative survey of US residents with Lucid (Coppock and McClellan 2019) in October 2017. (See Appendix D for information about representativeness.) We asked 3,178 Americans, ‘How liberal or conservative is [the interest group]?’ and provided seven response options from ‘extremely liberal’ to ‘extremely conservative’ as well as a ‘don't know’ option. We asked each respondent about two randomly chosen groups.Footnote 6 To measure these SIGs' actual ideologies, we use Campaign Finance scores (CFscores) (Bonica 2014). For ease of interpretation, and to avoid making strong assumptions about the comparability of the CFscores and 7-point ideological scales, we coarsen both the SIGs' CFscores and the 7-point ideological scale into conservative, moderate, and liberal categories. See Appendix E for a discussion of the coding rules.
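To make the coarsening step concrete, here is a minimal sketch; the cutoff values below are illustrative assumptions of ours, not the paper's actual coding rules, which appear in Appendix E.

```python
# Illustrative sketch of the coarsening step. The CFscore cutoff of +/-0.5
# is an assumption for illustration; the paper's actual coding rules are in
# Appendix E.

def coarsen_cfscore(cfscore, cut=0.5):
    """Map a continuous CFscore (more negative = more liberal) to one of
    three categories."""
    if cfscore <= -cut:
        return "liberal"
    if cfscore >= cut:
        return "conservative"
    return "moderate"

def coarsen_placement(response):
    """Map a 7-point self-placement (1 = extremely liberal ... 7 = extremely
    conservative) to the same three categories."""
    if response <= 3:
        return "liberal"
    if response == 4:
        return "moderate"
    return "conservative"
```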

Figure 1a shows that the vast majority of these SIGs are either conservative or liberal. However, Fig. 1b shows that respondents place under half of the SIGs as either conservative or liberal, with respondents indicating that many of these SIGs are moderate and, most often, saying they ‘don't know’. Figs 1c and 1d show respondents' placements of conservative and liberal SIGs, respectively. Respondents are only about 10 percentage points more likely to rate liberal groups as liberal than to rate conservative groups as liberal. About 40 per cent responded ‘don't know’ and another 15 per cent gave midpoint responses, suggesting frequent ignorance about these groups.Footnote 7 Many citizens, therefore, appear to know little about the ideology of interest groups.

Figure 1. Study 1 – Limited Voter Knowledge of SIG Ideology. (a) Actual SIG Ideology for 45 SIGs in Study 1. (b) Respondent Placements of SIG Ideology. (c) Respondent Placements of SIG Ideology–Conservative SIGs Only. (d) Respondent Placements of SIG Ideology–Liberal SIGs Only.

Note: 3,178 respondents each rated two interest groups. Given the large sample size, the standard errors are very small for the estimates in panels b–d, about 1 per cent, so we omit confidence intervals.

One reason voters may not react appropriately when they encounter endorsements from unfamiliar interest groups is the well-established tendency for people to assume that others agree with them; that is, the false-consensus effect. Do voters exhibit this tendency with interest groups? That is, do they project their views onto those groups?

Figure 2a shows how respondents placed the forty-five SIGs, now grouped by respondents' ideological self-placements: it reveals that respondents appear to project their ideology onto the interest groups. Respondents all rated the same set of SIGs, yet among liberal respondents the most common placement given for SIGs was liberal; among moderate respondents, the most common placement given for SIGs was moderate; and among conservative respondents, the most common placement given for SIGs was conservative. All three of these patterns are consistent with widespread projection.

Figure 2. Study 1 – Voters Project Their Ideology onto Special Interest Groups. (a) SIG Placement by Respondent Ideology–All SIGs. (b) SIG Placement by Respondent Ideology–Conservative SIGs. (c) SIG Placement by Respondent Ideology–Liberal SIGs.

Note: N = 3,178 respondents each rated two interest groups. Given the large sample size, the standard errors are very small, about 1 per cent, so we omit confidence intervals. ‘DK’ means ‘don't know.’

To break down these perceptions by the actual ideology of the SIGs, Figs 2b and 2c show how respondents perceived SIGs that were conservative and liberal, respectively. Crucially, Figs 2b and 2c raise doubt that voters can use positive cues from misaligned groups as negative signals; for example, whether a pro-tax voter would understand that the anti-tax Americans for Prosperity's endorsement of their representative means that this representative opposes taxes. In particular, Fig. 2b shows that liberals are more likely to see conservative SIGs as either liberal or moderate rather than conservative (leftmost panel), while Fig. 2c shows that conservatives are more likely to see liberal SIGs as either conservative or moderate rather than liberal (rightmost panel). Projection essentially cancels out the modicum of knowledge respondents had about these groups, making them unable to recognize, on average, that opposition groups are, in fact, opposition groups. Online Appendix Figure OA2 shows this projection across the full range of respondents' self-placement on the 7-point ideological scale.

These findings raise the possibility that citizens might misinterpret ratings from misaligned interest groups. Our next studies extend these descriptive findings by way of experiments that allow us to study such consequences, focusing on real SIG ratings of voters' actual representatives and on specific issues rather than broad ideology.

Studies 2a and 2b: What Citizens Infer about Representatives from SIG Cues

So far, we have shown that voter knowledge about SIGs is limited and that voters appear to project their ideologies onto SIGs. In the next set of studies, we examine the consequences of this projection on citizens' perceptions of their representatives. In our experiments, we provide citizens with actual interest group ratings of their actual representatives in Congress and examine how these cues affect their perceptions of their actual MCs. To what extent do citizens react consistently with the traditional view that SIG cues are helpful heuristics, which would predict that these cues would help them form more accurate inferences about their representatives' actions or, at worst, make no difference? And to what extent do they react consistently with our pathological alternative, heuristic projection, where they assume that even misaligned SIGs agree with them and so interpret all SIG ratings as if they came from aligned SIGs?

In this section and the following sections, we present the results of an exploratory survey and a replication. The first survey is a national sample of 3,958 respondents recruited through Sample Strategies in February 2018. We had not pre-registered our tests for heuristic projection before conducting this survey, so we conducted a replication survey where we did pre-register them (see Appendix I).Footnote 8 The replication survey is a national sample of 3,892 respondents recruited through Lucid in March 2020. We refer to the original survey as sample (i) and the replication as sample (ii) and, for transparency, we show all results for both. See Appendix D for statistics on the representativeness of both samples. We also designed these studies to minimize missing data from don't-know responses and other sources, as we describe below.

In both studies, we presented respondents with an interest group rating of their representative. To do so, we used Project Vote Smart to identify the SIGs that rated the largest number of sitting MCs. We paired each interest group with a key vote in both the 2015–16 and 2017–18 Congressional Sessions that closely connected to the group's main focus. Table OA1 presents the roll call and SIG pairs. In both studies, we showed the respondents the title of every bill as well as the short summaries we prepared, also shown in Table OA1.Footnote 9

In particular, we first asked respondents how they would vote on eight or nine of the seventeen bills (to reduce survey fatigue). We used the following language: ‘If you were in Congress would you vote FOR or AGAINST each of the following?’Footnote 10

We used ZIP Codes provided by the survey firms to determine each respondent's representative in the U.S. House. In the original study (sample (i)), we assigned 75 per cent to a control group that did not see the SIG ratings of their MC and assigned 25 per cent to a group that did see the ratings. In the replication study (sample (ii)), we assigned everyone to see a SIG rating; there was no pure control group that saw no rating.

In both studies, we provided respondents with the name and party of their MC.Footnote 11 For those assigned to see ratings, we then showed a randomly selected rating as follows:

Various groups often provide ‘ratings’ or ‘scorecards’ of how much they approve the votes every Member of Congress has taken. We have compiled the ‘scorecards’ of many such groups and have selected one at random to show you:

[Interest group] rated [MC name] at [score] (out of 100).

We only assigned voters to see ratings from SIGs where their MC had cast either a ‘yes’ or ‘no’ vote on the paired bill (that is, abstentions are not eligible).

To confirm that innumeracy did not drive the findings, we next asked respondents whether they thought the rating meant the SIG usually ‘agreed or disagreed’ with how their MC ‘voted in Congress’. As shown in Appendix Figure OA5, the respondents overwhelmingly understood that positive ratings indicate the SIGs usually agree with the MC and the opposite for negative ratings.

Finally, we asked the respondents about our key dependent variable: how they thought their MC voted on eight or nine bills Congress had voted on. We used the following language: ‘If you had to guess, how do you think [MC name] voted on these bills?’ For each bill, respondents could answer that their MC voted ‘yes’ or ‘no’ (with no ‘don't know’ option).

We note that this research design is more naturalistic than most previous research on this topic but still leaves open the question of whether these cues would operate differently in the real world. In particular, it is rare for existing research to use real ratings from real groups of respondents' actual representatives or to gauge perceptions of those representatives' actual votes; fictional politicians and fictional issues are the norm (see Appendix Table OA2 and McDonald 2020). At the same time, although we have taken steps to bolster external validity in these experiments – that is, asking about respondents' MCs, asking about actual votes, showing the MCs' party, and showing actual interest group ratings – they may still not capture the real-world context in which people see interest group endorsements. In particular, it is possible that real-world contexts provide additional information that helps citizens discern the meaning of interest group cues. We return to this point in the conclusion.

Studies 2a(i) and 2a(ii): Do Interest Group Ratings Help Citizens More Accurately Perceive their Representatives' Votes?

According to the traditional view of heuristics, when a voter learns about an interest group's rating of their MC, that rating should, on average, help them form more accurate inferences about the ideological and policy stances of their MC. In Studies 2a(i) and 2a(ii) we investigate whether interest group ratings increase the accuracy with which voters perceive their MC's votes. For example, does seeing that the League of Conservation Voters gave their representative a perfect 100/100 rating help respondents make more accurate inferences about whether their member voted for or against a bill that allows spraying environmentally harmful pesticides near waterways without a permit? As described above, Study 2a(i) was conducted on a first sample and Study 2a(ii) was conducted on a new sample to replicate its results.

After we showed respondents the interest group rating, we asked them how they thought their MC voted on each of the same bills, using the wording given above. The dependent variable for Studies 2a(i) and 2a(ii) is whether respondents accurately identified their MC's actual votes on these questions. No respondents skipped these questions and we provided no ‘don't know’ option, so there is no missing data.

In the control group in Study 2a(i), respondents were shown no SIG rating and correctly identified 68 per cent of their MC's votes on average.Footnote 12

To examine both between- and within-subject variation, our unit of analysis is the respondent-issue pair. The experimental variation in these studies arises from random assignment to see a SIG rating or no rating (Study 2a(i)), and from which SIG rating respondents were randomly assigned to be shown (in both Studies 2a(i) and 2a(ii)). In Study 2a(i), a respondent-issue pair is in the treatment group if we assign a respondent to see a SIG rating relevant to the vote on that issue and in the control group otherwise. This means that the probability of assignment to treatment varies across respondent-issue observations because respondents' MCs varied in how many SIGs rated them (and we never showed respondents missing SIG ratings). Therefore, we include fixed effects for the number of SIG ratings available for each MC by district.Footnote 13

We first examine whether showing respondents an interest group rating helped them guess their MC's votes on a corresponding issue more accurately. To do so, we estimate OLS models with these fixed effects. The dependent variable is set to 1 if the respondent correctly identified their MC's vote on the issue corresponding to that interest group (for example, for the LCV, the pesticide issue) and 0 if their perception was inaccurate. The independent variable is set to 1 if the respondent is assigned to the treatment group and 0 otherwise.
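A minimal sketch of this estimation in Python follows, assuming a tidy respondent-issue data frame; the file and column names are hypothetical, and clustering standard errors by respondent is our choice rather than a detail reported in the text.

```python
# Sketch of the Study 2a accuracy regression on respondent-issue pairs.
# The input file and column names are hypothetical stand-ins.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study2a_respondent_issue_pairs.csv")  # hypothetical file
# Assumed columns: correct (1 = guessed MC's vote correctly), treated (1 =
# shown the SIG rating paired with this issue), n_ratings_available (number
# of SIG ratings available for this respondent's MC), respondent_id.

model = smf.ols(
    "correct ~ treated + C(n_ratings_available)",  # FEs for ratings available
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["respondent_id"]})  # our choice
print(model.params["treated"], model.bse["treated"])
```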

The top panel of Figure 3 shows the estimates from the original study (i) and the bottom panel shows the estimates from our replication (ii). In the original study, on average, citizens are 0.3 percentage points less accurate in identifying their MC's vote on an issue when we showed them a rating from a relevant SIG, a difference statistically indistinguishable from zero (SE = 1.5 percentage points, t = −0.21). In the replication study citizens are 1.7 percentage points more accurate; a small effect but one that is statistically significant at conventional levels because the study is so well-powered (SE = 0.82 percentage points, t = 2.07). This finding is almost entirely driven by the Human Rights Campaign, an LGBT non-profit.

Figure 3. Studies 2a(i) and 2a(ii) – Effect of Showing Each Interest Group's Heuristic on Accurate Perception of the MC's Vote. (a) Treatment Effect of Seeing a SIG Rating on Accurate Perception of MC Vote in Original Study (2a(i)). (b) Treatment Effect of Seeing a SIG Rating on Accurate Perception of MC's Vote in Replication Study (2a(ii)).

Note: Each coefficient shows the treatment effect estimate from one regression. Interest-group-specific estimates come from regressions subsetting to issue-respondent observations for each interest group. Overall estimates are from regressions with all issue-respondent observations. All regressions include the controls and fixed effects mentioned in the text.

We find similar results for individuals high in political knowledge. In the original study, citizens who answered all four questions on a political-knowledge battery correctly are only 2.6 percentage points more accurate in guessing their MC's vote on the issue related to the SIG rating shown, an effect that is both substantively small and statistically insignificant (SE = 2.6 percentage points, t = 1.0). In the replication, they are only 3.6 percentage points more accurate (SE = 2.1 percentage points, t = 1.7). These small effects are not driven by a ceiling effect: accuracy about roll calls was 77 per cent for high-knowledge respondents in the original study, consistent with 54 per cent possessing actual knowledge once corrected for guessing. The corresponding percentages for the replication study are 74 per cent and 48 per cent, respectively.
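These corrections follow the standard formula for guessing on binary items (our reconstruction of the calculation): if a share k of respondents truly knows the answer and the remainder guesses between ‘yes’ and ‘no’ at random, observed accuracy is a = k + (1 − k)/2, which rearranges to k = 2a − 1. Hence 2(0.77) − 1 = 0.54 in the original study and 2(0.74) − 1 = 0.48 in the replication.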

Overall, these studies provide little support for the traditional view of heuristics as improving voters' inferences about their representatives; generally speaking, interest group cues do not seem to help voters form more accurate perceptions about how their representatives voted. We provided voters with highly informative cues about how their MCs voted, but these cues had only the slightest of effects on accuracy. These results were consistent even for the most active interest groups and for high-knowledge respondents, and they held even though we provided respondents with their representatives' party affiliations. In summary, after conducting one of the first investigations into how citizens interpret interest group ratings of politicians in a controlled setting (see Appendix B.1), we find little evidence for the literature's widespread view that SIG cues could be ‘a common method by which citizens infer . . . issue positions’ (Druckman, Kifer, and Parkin 2020, 5), at least when it comes to ratings from the most active groups on important Congressional legislation, perhaps because few people know where interest groups stand. (See Appendices F and G.)

Studies 2b(i) and 2b(ii): Respondents Interpret Positive Interest Group Ratings as a Signal that Their Member of Congress Agrees with Them

Study 2a suggests that, in almost all cases, the traditional view of heuristics does not accurately describe how interest group ratings affect voters' perceptions of how their MCs voted. In Study 2b we test for the alternative view we have outlined: heuristic projection. In this view, citizens should react as if SIGs are aligned with their issue preferences, even when they are not. That is, respondents should infer that their MC agrees with their views on more issues when they see a positive rating from any SIG and infer that their MC disagrees with their views on more issues when they see a negative rating from any SIG. When the conservative interest group Americans for Prosperity rates a candidate highly, for instance, heuristic projection predicts that even liberal voters will naively assume that this means their MC must generally agree with their views.

In particular, we test whether seeing positive interest group ratings leads voters to assume that their MC voted as the respondents said they would have on seventeen major bills, and whether negative ratings have the opposite effect. We had not preregistered this prediction before conducting survey (i), so we report the finding both in sample (i) and in a preregistered replication in sample (ii).

The dependent variable in Studies 2b(i) and 2b(ii) is whether respondents think their MC's votes agree with their views. We measure this by asking the respondents their perceptions of how their MC voted just after the respondents saw the SIG ratings, similar to Studies 2a(i) and 2a(ii). However, instead of measuring accuracy by comparing the respondents' perceptions of how their MC voted to reality, as in Studies 2a(i) and 2a(ii), we measure perceived agreement by comparing each respondent's perceptions of how their MC voted to that respondent's views on these same issues (which we asked about at the beginning of the survey); that is, whether voters think their MC agrees with them. We did not provide a ‘don't know’ option and no respondents skipped these questions, so there is no missing data on this variable.

To check for heuristic projection, Table 2 presents regressions at the respondent level where the outcome is the number of issues on which citizens guessed that their actual MC cast a vote congruent with their issue preference (as measured pretreatment). Part (a) of Table 2 shows the estimates for the original study (sample (i)). The treatment variable in the regression is whether the actual SIG rating we showed to the respondent was positive (that is, >50), negative (that is, ≤50), or whether no rating was shown: we code these 1, 0, and 0.5, respectively. To improve precision, we control for the number of issues on which voters actually agree with their MC. We also control for voter party, MC party, and their interaction, which we do not show for simplicity. Excluding these controls leaves the estimates essentially unchanged.Footnote 14

Table 2. Studies 2b(i) and 2b(ii) – Effect of SIG Rating on the Perception that Respondent's Member of Congress Cast Congruent Votes

Note: The dependent variable in all regressions is the number of issues on which respondents indicated they thought their MC cast a vote that matched their issue preference. We measured the respondents' own issue preferences pre-treatment. Each column shows a separate regression model. Standard errors are in parentheses. For the original study, ‘other controls’ include the MC party, the respondent party, and their interaction. For the replication study, ‘other controls’ includes these, their interactions with the political party and the MC party, and the controls listed in footnote 16.

† indicates significance at p < 0.10; ∗ indicates significance at p < 0.05; ∗∗ indicates significance at p < 0.01; ∗∗∗ indicates significance at p < 0.001.
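To make the coding described above concrete, a minimal sketch follows (function and variable names are ours, not the authors'):

```python
# Sketch of the Study 2b treatment and outcome coding (names are ours).

def rating_treatment(rating):
    """Code the rating shown: positive (>50) -> 1, negative (<=50) -> 0,
    no rating shown -> 0.5."""
    if rating is None:
        return 0.5
    return 1.0 if rating > 50 else 0.0

def perceived_congruent_votes(perceived_mc_votes, own_positions):
    """Count the issues on which the respondent's guess about the MC's vote
    matches the respondent's own pre-treatment position."""
    return sum(g == p for g, p in zip(perceived_mc_votes, own_positions))
```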

Table 2 shows that when voters saw that a SIG rated their MC positively, they inferred that their MC's positions matched their own. Model 1 shows that the estimated causal effect of being shown a positive rather than a negative SIG rating is an approximately 0.36-vote increase in respondents' perceptions of the number of votes (of the eight or nine asked about) that their MC cast in agreement with their views (p < 0.01).Footnote 15 This is equivalent to what we would observe if seeing a positive instead of a negative SIG rating caused more than one in three respondents to perceive that their MC agreed with them on an additional issue. This large magnitude stands in stark contrast to the very small estimates from Study 2a, where we found, even in the subset of Study 2a(ii) with the strongest effects, that only roughly 1 in 34 people formed a more accurate view of how their member voted on an issue.

Models 2 and 3 show versions of Model 1 estimated separately by whether the voter and the MC are of the same party: we find the effects are larger when citizens are forming a perception of out-party MCs, suggesting that our findings are not driven by motivated reasoning in favour of co-partisan MCs. The point estimate for the effect on perceptions of out-party MCs is equivalent to what we would observe if nearly half of the respondents thought they agreed with their MC on an additional issue after learning that their MC earned a positive SIG rating. Surprisingly, these effects are almost as large as the descriptive relationship between actual and perceived voter-MC agreement; in other words, MCs may be able to lead their constituents to think their MC agrees with them just as effectively by earning positive ratings from SIGs as by actually voting in line with their constituents' views.

Table 2, part (b) shows estimates for our preregistered replication. Across Models 1–3, we see that it closely replicates the original study but with noticeably more precision (smaller standard errors). As we preregistered, we include the same controls as in the original study plus the additional pretreatment controls we included on the survey to improve precision.Footnote 16 Excluding them leaves the results unchanged.

Appendix Figure OA6 shows the results graphically by whether the group was aligned or misaligned with the voter and by whether the respondent and their MC are of the same party. These estimates are less precise, but the results do not depend on the respondent's party or on whether the group is aligned or misaligned. Consistent with Study 2a's findings, we see little evidence that respondents treat positive ratings from misaligned groups as signals that their MC agrees with them less rather than more.

In summary, the traditional view of heuristics predicts that interest group ratings help voters form more accurate views of their representatives. Studies 2a(i) and 2a(ii) found very limited evidence that they do so. Instead, we found support for heuristic projection, wherein voters simply perceive their representatives as casting votes they agree with when their representatives receive positive ratings from a SIG, regardless of what the SIG's rating actually signals. Consistent with our argument in this paper, this questions the literature's assumption that voters often use SIGs to form more accurate views of how their representatives voted and suggests that SIG cues might often cause harm.

Studies 3(i) and 3(ii): Perverse Consequences of Interest Group Ratings on Voters' Evaluations of their Representatives

Voters ultimately hold their representatives accountable by casting votes in elections. What do our findings indicate about how citizens use interest group heuristics when making their voting decisions? In the traditional view of heuristics, interest groups should help voters support candidates who share their policy views and oppose those who do not. By contrast, under heuristic projection, interest groups' cues might distort voters' choices, leading them to support politicians whose policy stances they disagree with because those politicians received a positive SIG rating. For instance, a pro-tax voter might hear that their representative received a 100 per cent rating from the anti-tax SIG Americans for Prosperity and not only erroneously conclude that their representative must favour increasing taxes too (as we saw in Study 2b) but also decide this means the representative should be re-elected.

We test this possibility in Study 3, where we examine how interest group cues affect approval of, and voting intentions towards, MCs using the same two samples as in Studies 2a and 2b. Because we present data from two different samples with the same research design, we label our studies 3(i) and 3(ii) for samples (i) and (ii), respectively.

To measure the dependent variable, after showing participants the SIG rating, we asked them three MC approval questions: an approval question, a favourability question, and a generic ballot question.Footnote 17 We combine these into an index standardized to have a standard deviation of one, consistent with our preregistration (see Appendix I). This MC Support Scale is our dependent variable in Studies 3(i) and 3(ii).
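As a sketch of one natural construction of this index (averaging the three items before rescaling is our reading of ‘standardized to have a standard deviation of one’, not a detail reported in the text):

```python
# Sketch of the MC Support Scale: average the three survey items, then
# rescale the index to have standard deviation one. Averaging first is our
# assumption about the construction.
import numpy as np

def mc_support_scale(approval, favourability, generic_ballot):
    items = np.column_stack([approval, favourability, generic_ballot])
    index = items.mean(axis=1)
    return index / index.std(ddof=1)
```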

To understand how SIG cues affect citizens' approval of their MCs, we define two treatment variables. Our first treatment variable captures the predictions of the traditional view of heuristics. We call this variable ‘SIG rating signals MC matches voter issue preference’ and code it as 1 if we showed the respondent a SIG rating that signals that their MC cast a vote matching the respondent's issue preference, which we measured at the beginning of the survey before treatment. This variable takes a value of 1 if a respondent saw a positive rating from a SIG aligned on the issue (for example, an environmental regulation supporter seeing a positive rating from the LCV) or a negative rating from a SIG misaligned on the issue (for example, a gun control supporter seeing a negative rating from the NRA). Conversely, we set this treatment variable to 0 if the respondent was shown a rating signalling that their MC cast a vote misaligned with the respondent's issue preference. A negative rating from an aligned SIG (for example, an environmental regulation supporter seeing a negative rating from the LCV) or a positive rating from a misaligned SIG (for example, a gun control supporter seeing a positive rating from the NRA) would send such a signal.Footnote 18 This variable captures the predictions of the traditional view of heuristics: voters should approve of their MC more when they see a rating that signals their MC is aligned rather than misaligned with their issue preferences.

Our second treatment variable captures the alternative of heuristic projection. We call this variable ‘SIG Rating Supportive’ and code it as 1 if the rating is 51 or above and 0 if the rating is 50 or below, regardless of whether the SIG is aligned with the respondent's issue preference. This variable captures the predictions of heuristic projection: voters should express more approval of their MC when they see a positive interest group rating, regardless of the signal that rating sends about whether their MC is representing their views.

Our design generates random variation in both of these treatment variables. Table 3 summarizes the four treatment possibilities. Because most respondents hold a mix of liberal and conservative views across issues, our random assignment of which SIG rating to show produces random variation at the respondent level in which of the four treatment types in Table 3 they saw. This allows us to test whether voters can use positive cues from misaligned groups as negative signals: we can compare how citizens evaluate their representatives when shown positive ratings from misaligned groups with how they evaluate them when shown negative ratings from misaligned groups.

Table 3. Treatments in Studies 3(i) and 3(ii)

Note: A SIG is coded as aligned if that SIG's issue preference matches the voter's issue preference as measured at the beginning of the survey. For example, an individual who favours gun control would be coded as misaligned with the NRA.
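A minimal sketch of the two treatment codings just described (function and variable names are ours):

```python
# Sketch of the two Study 3 treatment variables (names are ours).

def sig_rating_supportive(rating):
    """Heuristic-projection treatment: 1 if the rating is positive (51 or
    above), 0 if 50 or below, regardless of SIG-voter alignment."""
    return 1 if rating >= 51 else 0

def rating_signals_match(rating, sig_aligned_with_voter):
    """Traditional-heuristics treatment: 1 if the rating signals that the
    MC's vote matched the voter's issue preference, that is, a positive
    rating from an aligned SIG or a negative rating from a misaligned SIG."""
    positive = rating >= 51
    return 1 if positive == sig_aligned_with_voter else 0
```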

Table 4 presents the results in sample (i). To estimate the effect of SIG ratings on MC approval, we use regression to compare individuals randomly assigned to be shown SIG ratings that happened to send different signals. To ensure we only compare respondents with identical probabilities of receiving each of the four treatments described in Table 3, we include fixed effects for each distinct set of treatment probabilities. To improve precision, we also include controls for the respondent's party identification, the MC's party, and their interaction.

Table 4. Studies 3(i) and 3(ii) – Effect of SIG rating information on Support for a MC

Note: The dependent variable in all regressions is the preregistered MC Approval Scale, rescaled to have a standard deviation of 1. Each column shows a separate regression. All models include fixed effects for each distinct set of treatment probabilities, which ensures that we only conduct comparisons between units with the same probability of assignment to all four treatments. Columns 1–3 use all respondents. Column 4 excludes respondents who had no probability of receiving an aligned or unaligned signal or no probability of receiving a supportive or unsupportive signal (because of the interest group ratings available for their MC). To ensure that such individuals do not drive the results, Column 5 applies an even stricter exclusion rule, retaining only respondents who could have received all four of these treatments. For the original study, ‘other controls’ includes the MC party, the respondent party, and their interaction. For the replication study, ‘other controls’ includes these and numerous other pretreatment controls and their interactions with the political party and the MC party (see footnote 16).

∗ indicates significance at p < 0.05; ∗∗ indicates significance at p < 0.01.

Column 1 demonstrates that showing respondents a SIG rating that should signal that their representative shares, rather than does not share, their views has no detectable effect on their support for their MC. That is, when voters see a SIG rating which indicates their MC disagrees with them on an issue, voters are no less approving of their MC.Footnote 19 This is surprising in light of the literature's traditional assumption that voters would use SIG ratings to help hold their MCs accountable.

Column 2, however, shows that seeing supportive instead of unsupportive SIG ratings does boost citizens' support. That is, while it does not seem to matter to voters whether SIG ratings indicate that their MC agrees with their views, Column 2's result shows that voters do reward MCs for agreeing with SIGs.

Column 3 shows that this finding survives when both coefficients are present in the same regression. Some respondents had no chance of seeing an aligned or unaligned signal, or no probability of receiving a supportive or unsupportive signal, because of the interest group ratings available for their MC. Column 4 excludes these individuals from the analysis and finds similar results. Column 5 applies an even stricter exclusion rule, retaining only those respondents who could have received all four of these treatments. As the table shows, results remain largely similar, though less precisely estimated.

Table 4, part (b) shows estimates for our preregistered replication, Study 3(ii). Across Models 1–5, we see that it closely replicates the original study but with smaller standard errors. We preregistered the analyses in each of the columns; they include the same controls as in the original study plus additional pretreatment controls and their interactions with the MC party (see footnote 16 for the list; excluding controls leaves the results unchanged). Regardless of the specification, the estimates imply that seeing a positive rating from an interest group increases voter support of the MC by 0.15 standard deviations, a highly statistically significant effect in line with our theory of heuristic projection. By comparison, Democrats and Republicans differ by about one standard deviation on this scale when evaluating representatives of a given party. However, contrary to the expectations of the traditional account of how voters use SIG cues as heuristics, there is no detectable effect of whether the SIG rating signals that the MC's position matches the voter's views.

Figure 4 shows these results visually. In both the original study and our replication study, for both aligned and misaligned groups, we see that the respondents approved of their MC more when they saw positive rather than negative SIG ratings. Crucially, this is true even for ratings from misaligned groups. The traditional view of heuristics predicts that voters should use positive cues from misaligned groups as negative signals about politicians, but we find no evidence to support this. It might not be surprising were voters simply to disregard signals from misaligned groups because they do not know how to interpret them. Yet what we find is evidence for the novel possibility we raised: heuristic projection. Voters act as if all special interest groups are aligned with their views, treating positive ratings from misaligned groups as a positive signal about their representatives.

Figure 4. Studies 3(i) and 3(ii) – Mean of MC Approval Scale by Experimental Condition.

Note: The figure shows predicted values from a regression model identical to Model 3 of Table 4, but with dummies for all four of the treatments shown in Table 3. 95 per cent confidence intervals surround point estimates. Positive ratings from misaligned groups raise rather than lower approval.

These findings further raise doubts about the traditional view of how interest group cues affect voters' judgements. When respondents in our experiments received a signal that meant their MC agrees (disagrees) with them on policy, they did not become more (less) supportive of their MC on average. However, when they saw a positive (negative) interest group rating of their MC, they became more (less) supportive of them on average.

We see similar results across the spectrum of political knowledge. Although the standard errors are larger, the positive point estimates for the main effect of a supportive rating are similar for the highest-knowledge respondents, those who answered all four political knowledge questions correctly, as are the null results for the effect of showing a rating that signals a voter-MC issue position match. This pattern holds in the original study and in the replication. Appendix Figures OA7 and OA8 present separate estimates for each SIG. Results are also similar when using only vote choice as the outcome rather than the index, although the results from Study 3(i) fall short of statistical significance with this single item (likely due to the loss in precision from using one item instead of an index of multiple items).

Discussion

How can citizens, who know little about their representatives' actions in office, hold them accountable? An influential view has long argued that uninformed citizens can use ratings and endorsements from SIGs to make inferences about their representatives' actions in office. Reviews of the literature often treat this traditional view as if it were a well-established fact and, indeed, portray interest group cues as one of the main strategies voters use to judge their representatives' performance in office (see the review in Appendix B.2). However, a surprisingly small number of studies have evaluated how citizens use the information in SIG cues to make inferences about their representatives (see Appendix B.1). Moreover, little research has considered our alternative theory: that voter ignorance and voter psychology might not only undermine the potential for SIG cues to be helpful but also make them counterproductive.

We conducted a series of studies investigating how citizens perceive dozens of SIGs and how SIG cues affect citizens' judgements. We first showed that citizens are typically unable to determine which side of an issue an interest group sits on and tend to project their views onto such groups (Study 1). They are, consequently, largely unable to infer what SIG cues indicate about what their representatives have done in office (Study 2a). Citizens likewise do not adjust their evaluations of their representatives upon receiving these cues in the direction traditional theories would expect; for example, when seeing a rating that indicates their representative cast a vote they disagreed with, they do not evaluate their representative any less positively (Study 3). A small proportion of citizens do appear able to infer the positions of interest groups whose names signal their positions, but such citizens and such groups are rare. The NRA may be one exception, potentially because of media coverage and its large and active membership (Lacombe 2019). Citizens also appear able to infer positions when experimental interventions give them additional information about SIGs (for example, Boudreau and MacKenzie 2019; Sances 2013). But, without that additional information, we find that voter ignorance is the norm for the vast majority of SIGs, even the most prominent (such as the AFL-CIO and the Chamber of Commerce) and even those most active in political campaigns.

But our findings do more than cast doubt on the traditional view of SIG cues as helpful heuristics. We also supported a novel argument that is even less encouraging. In particular, we find that citizens do not disregard SIG cues despite usually lacking the knowledge needed to interpret them. Instead, we find evidence for a novel dynamic we call heuristic projection: citizens on average behave as if SIGs share their views, even though many SIGs do not. We find evidence of heuristic projection in three conceptually and methodologically distinct ways: citizens on average believe SIGs share their broad left-right ideology (Study 1); in experiments, citizens interpret positive SIG cues as indicating their MCs agree with them on issues (Study 2b); and, perhaps as a result, citizens approve more highly of MCs who earn positive SIG ratings, regardless of whether the SIG shares their views (Study 3). We find these patterns across the spectrum of political knowledge.

Whether heuristic projection noticeably influences citizens in the real world remains, in our view, an open question. Existing studies have not shown that voters use naturally-occurring SIG cues in the real world to form more accurate perceptions of politicians or that the presence of SIG cues enhances accountability (see Appendix B). However, citizens may often see endorsements in contexts that help them draw the right conclusions from unfamiliar interest groups, and studies do indeed suggest that they can draw the appropriate inference if they receive additional information about the group (Boudreau and MacKenzie 2019; Sances 2013). It is also possible that, in competitive campaigns, competing messages drown out heuristic projection. The key conclusion we draw from our results is, therefore, not so much that heuristic projection operates in the real world; rather, the strength of our heuristic projection findings and the lack of evidence supporting the standard view of heuristics as helpful raise doubts about that standard view. In particular, they raise doubts about whether interest group cues help solve the problem of democratic accountability with inattentive citizens or whether they instead make it worse.

Our findings raise several interesting questions for future research. In particular, research could benefit from tracing the impacts of SIG cues on behaviour, as one clear limitation of our research is its focus on survey-based outcomes (Bullock et al. 2015; although see Berinsky 2018). For example, although we showed respondents actual ratings of their actual representatives, it may be that, in the real world, citizens seek out information about endorsements from SIGs they are familiar with, cases in which we would expect heuristic projection to be more muted. Research could also examine whether citizens are better able to draw the appropriate inferences from interest group endorsements in real-world contexts such as television ads or candidate profiles in voter guides. Next, to what extent might SIG strategy contribute to heuristic projection? For example, do SIGs name and advertise themselves ambiguously in order to frustrate citizens' ability to use their cues as negative signals? Our research was not well-positioned to understand why SIGs choose their names or what effects their names might have on voter perceptions, but these are fruitful areas for future research. Finally, further theoretical consideration of the general equilibrium implications of our findings, such as whether heuristic projection might create perverse incentives for politicians to cater to interest groups and whether interest groups may endogenously form to take advantage of heuristic projection, represents an important next step for understanding the implications for accountability.

Supplementary material

Online appendices are available at https://doi.org/10.1017/S0007123423000078.

Data Availability Statement

Replication Data (Broockman, Kaufman, Lenz Reference Broockman, Kaufman and Lenz2023) for this article can be found in Harvard Dataverse at: https://doi.org/10.7910/DVN/CGMMHG.

Acknowledgements

We gratefully acknowledge helpful comments from seminar participants at UC Berkeley, the DC Political Economy Webinar, Southern Political Science Association, and Kevin Arceneaux, Jon Bendor, Cheryl Boudreau, Chuck Cameron, Alexander Coppock, Logan Dancey, James Druckman, Lee Drutman, Joanna Dunaway, Chris Elmendorf, Don Green, Alexander Hertel-Fernandez, Keith Krehbiel, Shiro Kuriwaki, Matthew LaCombe, Thomas Leeper, Zhao Li, Arthur Lupia, Scott MacKenzie, Monika McDermott, Michael Parkin, Maria Petrova, Paul Quirk, David Redlawsk, Geoffrey Sheagley, Ken Shotts, Christopher Tausanovitch, and Chris Weber. We are especially grateful for Joel Middleton's help with analyses. All errors remain our own. The studies reported herein were approved by the Committee for the Protection of Human Subjects at UC Berkeley.

Financial Support

We acknowledge the Institute of Governmental Studies at UC Berkeley, Bill and Patricia Brandt, and the Charles Percy Grant for Public Affairs Research for support.

Conflicts of Interest

The authors declare none.

Footnotes

2 When unfamiliar with a SIG, citizens may also respond superficially to the positive or negative valence of its rating, substituting the simpler question of whether the rating is positive (for example, ‘a 100/100 rating sounds like a good score’) for the more complicated question of how to interpret it, a process called attribute substitution (Kahneman and Frederick Reference Kahneman and Frederick2002). This could also be consistent with the likeability heuristic if voters on average evaluate unknown groups positively, comporting with Weber, Dunaway, and Johnson's (Reference Weber, Dunaway and Johnson2012) findings.

3 Other studies consider additional cues voters might use, such as candidate gender or party (for example, Dancey and Sheagley Reference Dancey and Sheagley2013).

4 Due to space constraints, we present additional studies documenting low levels of knowledge about the issue positions of interest groups in Appendices F and G.

5 We gathered the receipt information from an updated version of the data from Bonica (Reference Bonica2014), excluding SIGs that were specific to the 2016 election or did not have a website.

6 We lost 87 respondents who dropped out of the survey before answering these questions and an additional two respondents who did not provide an ideological self-placement.

7 Evidence suggests that midpoint responses are sometimes a sign of the absence of an opinion (Wood and Oliver Reference Wood and Oliver2012). The ‘don't know’ category includes two instances where respondents skipped questions. Appendix Figure OA1 shows the relationship between CFscore and average ideological placement by respondents for all 45 SIGs. As groups become more conservative, respondent perceptions become slightly more conservative as well, but the relationship is weak. What little knowledge of SIG ideologies we do find is driven largely by respondents with high political knowledge when placing groups with clear names, such as the National Pro-Life Alliance and the Gun Owners of America. We present this result and report on the correlates of SIG ideological knowledge in Appendix E.

8 This is consistent with one common way researchers use preregistered studies: ‘follow[ing] up on non-registered findings’ to demonstrate their robustness (Van't Veer and Giner-Sorolla Reference Van't Veer and Giner-Sorolla2016). More precisely, pre-registration guarantees that, when the null hypothesis is true, p-values below 0.05 will occur in our pre-registered replication only 5 per cent of the time by chance; without pre-registration, testing many hypotheses and highlighting the significant ones would inflate the Type I error rate. In this way, our pre-registered study can be thought of as a confirmatory hypothesis test of the patterns found in our prior exploratory study.
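To illustrate the multiple-testing logic in footnote 8, here is a minimal simulation sketch (our own illustration, not code from the paper; the function name and parameters are ours). Because p-values are uniformly distributed when the null is true, screening many exploratory tests inflates the chance of at least one false positive, while a single pre-registered confirmatory test holds it at 5 per cent:

    import numpy as np

    rng = np.random.default_rng(0)

    def false_positive_rate(n_tests, n_sims=10_000, alpha=0.05):
        # Under true nulls, p-values are uniform on [0, 1]; count how often
        # at least one of n_tests p-values falls below alpha by chance.
        p = rng.uniform(size=(n_sims, n_tests))
        return (p.min(axis=1) < alpha).mean()

    print(false_positive_rate(1))   # about 0.05: one confirmatory test
    print(false_positive_rate(20))  # about 0.64: screening twenty exploratory tests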

9 The SIG ratings in this study were typically highly predictive of how MCs actually voted, even conditional on MC party. One exception is the ‘National Association of Police Organizations’, which gave positive ratings to essentially every sitting MC. Appendix Figure OA4 shows a histogram of all the ratings we presented, by interest group.

10 Appendix Figure OA3 presents a flowchart for this study. We did not provide an ‘abstain’ option but, as we describe below, there are no missing data due to the way we designed the randomization.

11 Appendix H reports a supplementary study finding that providing MC partisanship does not appear to affect the results.

12 Random guessing would yield a 50 per cent correct rate. As described, respondents were always shown their representative's party, as is the case in most real-world settings. Many respondents likely inferred their MC's votes from the MC's party. Appendix H reports a supplementary study finding that providing MC partisanship does not appear to affect the results.

13 Appendix Figure OA3 provides an overview of the experimental design. We only include respondent-issue observations for issues for which a SIG rating for a respondent's MC existed and, therefore, could have been shown. Respondents were only eligible to be shown ratings from SIGs that actually rated their MCs. As a result, respondents differed in their probabilities of seeing each SIG rating. The fixed effects ensure that we conduct all comparisons among respondents who have the same probability of seeing each SIG rating.

14 The experimental variation in Study 2b(i) comes from choosing to show a SIG rating and from randomly selecting which SIG rating to show. Since MCs vary in how many positive SIG ratings they receive, the chance of being shown a positive rating varies non-randomly across respondents. To ensure our analysis only captures variation in treatment from random assignment, we include fixed effects for the number of positive ratings a voter's MC received. These fixed effects ensure that we are only comparing individuals who had the same probability of being shown positive versus negative SIG ratings. Since we asked some respondents about eight issues and others about nine, we include fixed effects for this as well.
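As a sketch of what the specification in footnote 14 amounts to, consider the following illustration (ours, not from the replication files; the simulated data, variable names, and the choice to cluster standard errors by respondent are all our own assumptions): an OLS regression of perceived congruence on the positive-rating indicator, with the two sets of fixed effects entered as categorical terms.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated stand-in for the respondent-issue data (all names hypothetical).
    rng = np.random.default_rng(0)
    n = 2000
    df = pd.DataFrame({
        "respondent_id": rng.integers(0, 500, n),     # survey respondent
        "shown_positive": rng.integers(0, 2, n),      # shown a positive SIG rating
        "n_positive_ratings": rng.integers(0, 9, n),  # positive ratings the MC received
        "n_issues_asked": rng.choice([8, 9], n),      # survey version
    })
    df["perceived_congruent"] = (
        rng.uniform(size=n) < 0.5 + 0.1 * df["shown_positive"]
    ).astype(int)

    # Fixed effects for the number of positive ratings and the number of
    # issues asked, mirroring the text; clustering by respondent is our
    # own assumption, not a detail reported in the footnote.
    model = smf.ols(
        "perceived_congruent ~ shown_positive"
        " + C(n_positive_ratings) + C(n_issues_asked)",
        data=df,
    ).fit(cov_type="cluster", cov_kwds={"groups": df["respondent_id"]})
    print(model.params["shown_positive"])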

15 When we instead estimate with indicator variables, omitting the ‘No SIG rating’ category, the ≤50 coefficient estimate is −0.209 (SE = 0.096) and the >50 estimate is 0.157 (SE = 0.086).

16 These are: left-right ideological position; 2016 US presidential and US House generic vote choice; favourability ratings of Donald Trump, the US Congress, and police officers; Black, Hispanic, Asian, American Indian, or other identification; ratings of the US economy in the present and in the future; personal financial situation; Trump favourability (asked a second time in the pretreatment survey); presidential approval; gender; and right-wing authoritarianism (a four-item index). We interact all these controls with MC party.

17 In Study 3(i), respondents in the pure control condition were then exposed to material for another project, so we were unable to include control participants in this comparison. See Appendix Figure OA3 for an overview of the design. The experimental variation in Study 3 comes from whether respondents were randomly assigned to see negative versus positive ratings, given that respondents could be shown either.

18 Consistent with our pre-analysis plan (see Appendix I), we code all ratings 50 and below as negative and ratings 51 and above as positive. As shown in Appendix Figure OA4, the ratings are largely bimodal. Only 0.5 per cent of ratings are exactly 50.
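In code, the pre-registered coding rule in footnote 18 is a simple recode (a sketch with made-up example values; the column names are ours):

    import pandas as pd

    ratings = pd.DataFrame({"rating": [0, 25, 50, 51, 87, 100]})
    # Per the pre-analysis plan: ratings of 50 and below are coded negative,
    # ratings of 51 and above are coded positive.
    ratings["positive"] = (ratings["rating"] > 50).astype(int)
    print(ratings)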

19 For a separate project, we showed the control group how their MCs actually voted on these issues. That project finds that providing this information has very large effects on the member support scale, indicating that our null results here are not due to citizens being indifferent to the position information these interest group ratings imply.

References

Achen, CH and Bartels, LM (2016) Democracy for Realists: Why Elections Do Not Produce Responsive Government. Princeton, NJ: Princeton University Press.
Arceneaux, K and Kolodny, R (2009) Educating the least informed: Group endorsements in a grassroots campaign. American Journal of Political Science 53(4), 755–70.
Berinsky, AJ (2018) Telling the truth about believing the lies? Evidence for the limited prevalence of expressive survey responding. The Journal of Politics 80(1), 211–24.
Bonica, A (2014) Mapping the ideological marketplace. American Journal of Political Science 58(2), 367–86.
Boudreau, C and MacKenzie, SA (2019) Follow the Money? How Campaign Finance Disclosures and Policy Information from Nonpartisan Experts Affect Public Opinion in Direct Democracy Settings. Working paper.
Broockman, DE, Kaufman, AR and Lenz, GS (2023) Replication Data for Heuristic Projection: Why Interest Group Cues May Fail to Help Citizens Hold Politicians Accountable. Available at https://doi.org/10.7910/DVN/CGMMHG, Harvard Dataverse, V1.
Brooks, DJ and Murov, M (2012) Assessing accountability in a post-Citizens United era: The effects of attack ad sponsorship by unknown independent groups. American Politics Research 40(3), 383–418.
Bullock, JG et al. (2015) Partisan bias in factual beliefs about politics. Quarterly Journal of Political Science 10, 519–78.
Coppock, A and McClellan, OA (2019) Validating the demographic, political, psychological, and experimental results obtained from a new source of online survey respondents. Research & Politics 6(1), 2053168018822174.
Dancey, L and Sheagley, G (2013) Heuristics behaving badly: Party cues and voter knowledge. American Journal of Political Science 57(2), 312–25.
Dowling, CM and Wichowsky, A (2013) Does it matter who's behind the curtain? Anonymity in political advertising and the effects of campaign finance disclosure. American Politics Research 41(6), 965–96.
Dowling, CM and Wichowsky, A (2015) Attacks without consequence? Candidates, parties, groups, and the changing face of negative advertising. American Journal of Political Science 59(1), 19–36.
Druckman, JN, Kifer, MJ and Parkin, M (2020) Campaign rhetoric and the incumbency advantage. American Politics Research 48(1), 22–43.
Kahneman, D and Frederick, S (2002) Representativeness revisited: Attribute substitution in intuitive judgment. In Gilovich, T, Griffin, D and Kahneman, D (eds), Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge: Cambridge University Press, 49–81.
Lacombe, MJ (2019) The political weaponization of gun owners: The NRA's cultivation, dissemination, and use of a group social identity. Journal of Politics 81(4), 1342–56.
Leeper, TJ (2013) Interest Groups and Political Attitudes. Working paper. Available at http://dpsa.dk/papers/ImmigrationGroupsFraming2013-10-21.pdf.
McDonald, J (2020) Avoiding the hypothetical: Why “mirror experiments” are an essential part of survey research. International Journal of Public Opinion Research 32(2), 266–83.
McKelvey, RD and Ordeshook, PC (1985) Sequential elections with limited information. American Journal of Political Science 29(3), 480–512.
Ridout, TN, Franz, MM and Fowler, EF (2015) Sponsorship, disclosure, and donors: Limiting the impact of outside group ads. Political Research Quarterly 68(1), 154–66.
Ross, L, Greene, D and House, P (1977) The “false consensus effect”: An egocentric bias in social perception and attribution processes. Journal of Experimental Social Psychology 13(3), 279–301.
Sances, MW (2013) Is money in politics harming trust in government? Evidence from two survey experiments. Election Law Journal 12(1), 53–73.
Tausanovitch, C and Warshaw, C (2018) Does the ideological proximity between candidates and voters affect voting in US House elections? Political Behavior 40(1), 223–45.
Van't Veer, AE and Giner-Sorolla, R (2016) Pre-registration in social psychology—A discussion and suggested template. Journal of Experimental Social Psychology 67, 2–12.
Weber, C, Dunaway, J and Johnson, T (2012) It's all in the name: Source cue ambiguity and the persuasive appeal of campaign ads. Political Behavior 34(3), 561–84.
Wood, AK (2018) Campaign finance disclosure. Annual Review of Law and Social Science 14(1), 11–27.
Wood, T and Oliver, E (2012) Toward a more reliable implementation of ideology in measures of public opinion. Public Opinion Quarterly 76(4), 636–62.
Table 1. Summary of Research Questions, Studies, and Samples in this Paper

Figure 1. Study 1 – Limited Voter Knowledge of SIG Ideology. (a) Actual SIG Ideology for 45 SIGs in Study 1. (b) Respondent Placements of SIG Ideology. (c) Respondent Placements of SIG Ideology – Conservative SIGs Only. (d) Respondent Placements of SIG Ideology – Liberal SIGs Only. Note: 3,178 respondents each rated two interest groups. Given the large sample size, the standard errors are very small for the estimates in panels b–d, about 1 per cent, so we omit confidence intervals.

Figure 2. Study 1 – Voters Project Their Ideology onto Special Interest Groups. (a) SIG Placement by Respondent Ideology – All SIGs. (b) SIG Placement by Respondent Ideology – Conservative SIGs. (c) SIG Placement by Respondent Ideology – Liberal SIGs. Note: N = 3,178 respondents each rated two interest groups. Given the large sample size, the standard errors are very small, about 1 per cent, so we omit confidence intervals. ‘DK’ means ‘don't know.’

Figure 3. Studies 2a(i) and 2a(ii) – Effect of Showing Each Interest Group's Heuristic on Accurate Perception of the MC's Vote. (a) Treatment Effect of Seeing a SIG Rating on Accurate Perception of MC's Vote in Original Study (2a(i)). (b) Treatment Effect of Seeing a SIG Rating on Accurate Perception of MC's Vote in Replication Study (2a(ii)). Note: Each coefficient shows the treatment effect estimate from one regression. Interest-group-specific estimates come from regressions subsetting to issue-respondent observations for each interest group. Overall estimates are from regressions with all issue-respondent observations. All regressions include the controls and fixed effects mentioned in the text.

Table 2. Studies 2b(i) and 2b(ii) – Effect of SIG Rating on the Perception that Respondent's Member of Congress Cast Congruent Votes

Table 3. Treatments in Studies 3(i) and 3(ii)

Table 4. Studies 3(i) and 3(ii) – Effect of SIG Rating Information on Support for an MC

Figure 4. Studies 3(i) and 3(ii) – Mean of MC Approval Scale by Experimental Condition. Note: The figure shows predicted probabilities from a regression model identical to Model 3 of Table 4, but with dummies for all four of the treatments shown in Table 3. 95 per cent confidence intervals surround point estimates. Positive ratings from misaligned groups raise rather than lower approval.
