
Campaigning for the Bench: The Corrosive Effects of Campaign Speech?


Abstract

A new era has emerged in the ways in which candidates for state judicial office campaign. In the past, judicial elections were largely devoid of policy content, with candidates typically touting their judicial experience and other preparation for serving as a judge. Today, in many if not most states, such campaigns are relics of the past. Modern judicial campaigns have adopted many of the practices of candidates for other types of political office, including soliciting campaign contributions, using attack ads, and even making promises about how they will decide issues if elected to the bench.

Not surprisingly, this new style of judicial campaigning has caused considerable consternation among observers of the courts, with many fearing that such activity will undermine the very legitimacy of legal institutions. Such fears, however, are grounded in practically no rigorous empirical evidence on the effects of campaign activity on public evaluations of judicial institutions.

The purpose of this article is to investigate the effects of campaign activity on the perceived legitimacy of courts. Using survey data drawn from Kentucky, I use both post hoc and experimental methods to assess whether public perceptions of courts are influenced by various sorts of campaign activity. In general, my findings are that different types of campaign activity have quite different consequences. For instance, policy pronouncements by candidates do not undermine judicial legitimacy, whereas policy promises do. Throughout the analysis, I compare perceptions of courts and legislatures, and often find that courts are far less unique than many ordinarily assume. I conclude this article with a discussion of the implications of the findings for the contemporary debate over the use of elections to select judges to the high courts of many of the American states.

Type: Articles

Copyright: © 2008 Law and Society Association.

How dangerous is campaign speech to the legitimacy of American courts? At least one of the most prominent analysts of campaigning and elections has predicted:

The spread of negative campaigning in judicial races is likely to have adverse consequences for the court system. The motives of judicial candidates will be cast into doubt, and public esteem for the judiciary will suffer. Not only will candidates for judicial office be equated with ordinary politicians, but the impartiality, independence, and professionalism of the judiciary will also be called into question. Large-scale advertising in state judicial elections will further politicize state courts in the eyes of the public (Iyengar 2002:697).

Moreover, even former Supreme Court Justice Sandra Day O'Connor, who voted with the majority in extending free-speech rights to candidates for judicial office, expressed serious doubts about her deciding vote in Republican Party of Minnesota v. White (2002), owing to fears that the campaigning genie has been let out of the bottle, with a vengeance (Hirsch 2006). To many, campaign speech by judges undermines popular perceptions of impartiality, a supposed bedrock of judicial legitimacy.

Moreover, in the aftermath of White, many see the problem as getting worse. According to a report from NYU's Brennan Center, “White has also produced a modest but detectable increase in the number of judicial candidates willing to speak out more on the campaign trail” (Sample et al. 2007:34). The authors take comfort in the finding that most judicial candidates who have expressed their views on issues wound up losing their elections, but they and many others worry greatly about the long-term consequences of the policy commitments judicial candidates make while campaigning.

To date, however, practically nothing is known about the consequences of campaign activity for perceptions of impartiality and the legitimacy of courts. Do citizens view courts—federal and state—as impartial? What are the causes of those perceptions? Are they rooted, for instance, in accurate perceptions of the courts or are they instead deduced from the citizen's more general political and ideological orientations? What are the specific activities that impugn judicial legitimacy? Do campaigns teach citizens anything about the partiality or impartiality of judges and courts? Can politicized campaign activity undermine the institutional legitimacy of the American judiciary? If so, under what conditions? Unfortunately, we know little about how the public judges their courts and what the causes and consequences of these judgments may be.

The purpose of this article is therefore to investigate the consequences of various types of campaign statements for public views of courts. In particular, based upon a representative sample of the residents of Kentucky, I assess the impact of campaign activity by judges—including actual ads broadcast in judicial races in Kentucky—on the public's attitudes toward the Kentucky judiciary. The issues I consider are whether the activities are deemed appropriate for candidates for judicial office, and whether such ads influence perceptions of the impartiality of judges and the legitimacy of the Kentucky Supreme Court. In order to provide some important perspective, a portion of the investigation that follows relies upon cross-institutional analysis, comparing reactions to the Kentucky Supreme Court and the Kentucky State Senate. The analysis I report here is based on both post hoc and experimental designs, allowing uncommon confidence in the causal inferences that are drawn. My most general conclusions are that not all campaign activities undermine judicial impartiality—some do, others do not—and consequently that much more research is needed to ascertain how different aspects of campaigning fit with the expectations citizens hold of their judges and courts.

Can Campaigns Change Citizens' Views of Judicial Impartiality and the Legitimacy of Courts?

Precious few studies have investigated the question that defines this section of the article; indeed, so far as I am aware, only a handful of studies have ever addressed it.Footnote 1 These studies have generated a mix of findings, including some disconcerting ones.

Gibson and Caldeira (2007, n.d.) examined the impact of the ad campaigns mounted in support of or opposition to the nomination of Judge Samuel Alito to the U.S. Supreme Court. Perhaps the most important finding of the research is that the campaigns by interest groups favoring and opposing the confirmation of Judge Alito seemed to have undermined the legitimacy of the Court itself. The campaigns were politicized and taught the lesson that the Court is just another political institution, and as such, is not worthy of high esteem. Since that study is based on a three-wave panel design, allowing the direct measurement of change in attitudes toward the U.S. Supreme Court, its findings are uncommonly persuasive.

Of course, the entire question of whether studies of attitudes toward the U.S. Supreme Court can be generalized to the state judiciaries is open (for an excellent collection of essays on contemporary issues in state judicial elections, see Streb 2007). State courts of last resort are obviously far less salient than the U.S. Supreme Court, with the likely consequence that institutional attitudes at the state level may be considerably more malleable. It is simply unclear whether findings drawn from research on the U.S. Supreme Court apply to the state courts.

Some studies have, however, been conducted on public attitudes toward state courts, although much of that literature is dated (see, for example, Walker 1977; Lehne & Reynolds 1978; Fagan 1981; Flanagan et al. 1985; Olson & Huth 1998; Wenzel et al. 2003; Overby et al. 2004). Among the best of the lot are two recent national studies, one by Benesh (2006) and the other by Cann and Yates (2008). Both of these studies, however, had to cobble together a dependent variable based on surveys fielded for nonacademic purposes,Footnote 2 and neither focused on the effects of campaign activities on attitudes toward courts.Footnote 3 In general, scholars interested in how citizens perceive and judge their state judicial institutions have been seriously constrained by the lack of public opinion data and the shortcomings of surveys conducted by policy-oriented groups and organizations.

One study of campaign activity in state court races is relevant to the question of whether judicial campaigns undermine legitimacy. In a national survey, Gibson (2008a) utilized an experimental “vignette” that exposed the respondents to different types of campaign activities, including policy speech. His analysis indicates that the alarmists are partially right and partially wrong in their concern about judicial impartiality being undermined. When citizens hear issue-based speech from candidates for judicial office, court impartiality does not suffer. It seems that many Americans are not at all uncomfortable when candidates for the bench tell them how they feel about the sort of sociopolitical issues coming before courts these days. Policy talk in particular does not seem to undermine institutional legitimacy.

Gibson's research suggests that policy speech during campaigns has little effect on perceived impartiality. However, that research also found that the receipt of campaign contributions can threaten legitimacy. Contributions to candidates for judicial office imply for many a conflict of interest, even a quid pro quo relationship between the donor and the judge, which undermines perceived impartiality and legitimacy. But it is important to note that there is nothing distinctive about the judiciary on this score: Gibson found that campaign contributions to candidates for the state legislature also imply a conflict of interest and therefore can detract from the legitimacy of legislatures as well.

Finally, the experiment also indicates that attack ads undermine legislative but not judicial legitimacy. The effect is not nearly as great as that observed for campaign contributions, but citizens exposed to such negative advertisements during legislative campaigns extend less legitimacy to the institution involved, findings quite similar to those from Gibson's Kentucky research (Gibson 2008b). Courts, perhaps owing to their “reservoir of goodwill” (Easton 1975:444), are little affected by the use of attack ads.Footnote 4

Gibson's analysis is limited in at least one very important sense: the data are drawn from a hypothetical vignette. Hypotheticals have their virtues, but they also have important limitations. For instance, in Gibson's Kentucky experiment, attack ads are represented by the following language:

Judge Anderson's campaign ads vigorously attack his opponent, claiming that his opponent is biased in favor of insurance companies and other such businesses, and would therefore not be able to make fair and impartial decisions if elected to the Supreme Court.

This is certainly one representation of attack ads; but it also seems a tame version compared to the vigorous attacks one sees these days in television ads, and the ad is presented without much context or emotion. Hypothetical vignettes such as these represent one way to study the effects of campaign activity on legitimacy, but only one way.

Summary

While it is certainly true that judicial campaigns have become vastly more costly and more focused on legal and political issues, to date, little evidence has been produced to document the alleged decline in the legitimacy of elected courts. Many assume that judicial legitimacy is at risk, but in fact it seems that some aspects of campaigning are deleterious whereas others are not. The purpose of this article is to provide some much needed additional analysis of the consequences of judicial campaign activity.

Research Design

The analysis is based upon a three-wave panel survey conducted in Kentucky in 2006. A sample of residents was interviewed before the fall elections, during the election season, and well after the elections. Details on the survey can be found in Appendix A. Most of the analysis reported here is drawn from the third-wave interviews, with the third interview being conducted in 2007, several months after the general election in November 2006.

Why Kentucky, and what limits on generalizability flow from this research design? The optimal design for a study of the impact of campaigning on judicial legitimacy would be longitudinal in nature, tracing change in public attitudes over a period of time as new types of campaign tactics are introduced within a state. Such a study is prohibitively expensive to implement, and no such effort has ever been fielded.

An alternative strategy would be to focus on a state where politicized campaigns are relatively new but not unheard of, and then to track the impact of campaigns on legitimacy. That is the design of this research. At this point in history, states such as Ohio and Texas are not particularly revealing since citizens of those states have long witnessed highly politicized campaigning for judicial office. At the other end of the continuum, some states have, to date, been immune to politicization. For instance, in the high court elections of 2004, all of the candidates in 10 states reported raising no contributions as part of their campaigns for a seat on the state court of last resort (Goldberg et al. 2005:14).

Kentucky lies between the extremes on this continuum. For instance, in the election of 2004, the candidates were Janet Stumbo and Will Scott, and together they raised nearly half a million dollars in campaign contributions (Goldberg et al. 2005:14). By all accounts, the campaign of 2004 was fairly politicized, with candidate Scott running attack ads and candidate Stumbo running ads contrasting the two candidates (Goldberg et al. 2005:48). Among the 21 states in which judicial candidates raised at least some contributions in 2004, Kentucky defined the median, with candidates in 10 states raising less than $239,317 and candidates in 10 other states raising more than this figure. Moreover, also in 2004, abortion-related questionnaires were distributed by interest groups to judicial candidates in Kentucky. Some candidates refused to answer the questionnaires, which prompted a well-publicized lawsuit by the Family Trust Foundation challenging legal and ethical constraints on speech that appears to commit a candidate to a position that might come before the courts. The Family Trust Foundation was successful in its litigation.Footnote 5 Thus, in terms of the prior judicial election and the political context to which these respondents had most recently been exposed, some but perhaps not a very high degree of judicial politicization existed.

Finally, I note that an experimental vignette about the effects of campaigning on perceived impartiality that was part of the initial interview of the Kentucky respondents has been replicated with a national sample of Americans and produced quite similar results. Nothing about Kentucky seems to be significantly aberrant when it comes to judicial elections and the legitimacy of its Supreme Court. So although statistical theory provides little basis for generalizing these findings to other state judiciaries, Kentucky satisfies a number of design criteria that makes it a useful state for an inquiry such as this.Footnote 6

The analyses that follow are drawn from three separate sections of the interview. In the first, the respondents are asked in a straightforward manner to judge three types of campaign activity by candidates for judicial office. In particular, I investigate the consequences of campaign activity for perceived fairness and impartiality. Here I discover an important difference between general policy talk and specific policy promises.

The second portion of the analysis is based upon a formal experiment in which people are exposed (via random assignment) to actual ads broadcast by judicial candidates in Kentucky elections. All of the ads are attack ads, but, as will be seen, they portray considerably different types of attacks.

Finally, a second experiment directly addresses cross-institutional similarities and differences in the effects of promises to decide issues in a certain way. This analysis is particularly revealing in its documentation of the relatively minor differences in the judgments citizens make of legislators and judges. Here, too, the evidence is that promises to decide are often judged as inappropriate.

In the final section of the article, I move away from the specific evidence on campaign effects and consider more broadly the issue of whether judges should be elected to office, and, if so, whether they should be allowed to mount ordinary campaigns for voter support.

The Context: Judicial Legitimacy on the Eve of the 2006 Elections in Kentucky

How much legitimacy did the Kentucky Supreme Court enjoy prior to the election campaign of 2006? With recourse to some fragmentary national data and the first-wave survey in this project, some tentative answers to this question can be derived.

In 2001, the group Justice at Stake Campaign conducted a national survey on public attitudes toward the state and local courts (for earlier analyses of these data, see Cann & Yates 2008). One of the questions they asked their respondents is: “How much trust and confidence do you have in courts and judges in your state?” Responses were collected on a four-point scale that varies from “nothing at all” to “a great deal.” The data reveal that most Americans think quite highly of their state courts, with 25 percent asserting a great deal of trust and confidence and another 53 percent expressing some confidence, for a total of 78 percent asserting at least some confidence in the institutions (Justice at Stake 2002).

All 50 states are included in the Justice at Stake data set, although many states are represented by a tiny number of respondents. The average number of interviews per state is 19.3 (with a standard deviation of 18.1), with the number ranging from 1 to 84. A total of 16 states have fewer than 10 respondents in the sample. Of the states with 10 or more respondents, the average percentage of citizens expressing at least some confidence in their state judiciary is 78.4. Figure 1 reports the distribution across these states.

Source: Justice at Stake (2002).

Figure 1. Confidence in State Courts, Justice at Stake Campaign Survey, 2001
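As a minimal sketch of the aggregation behind Figure 1, the snippet below computes, for each state, the share of respondents expressing at least “some” confidence. The data frame, variable names, and values are hypothetical stand-ins rather than the Justice at Stake codebook; only the "at least some confidence" coding and the 10-respondent screen come from the text.

```python
import pandas as pd

# Hypothetical respondent-level records: a state code and the four-point
# confidence item (1 = nothing at all, 2 = not much, 3 = some, 4 = a great deal).
jas = pd.DataFrame({
    "state": ["KY", "KY", "OH", "OH", "OH", "TX"],
    "confidence": [4, 3, 2, 3, 4, 1],
})

# Share expressing at least "some" confidence (codes 3 or 4), by state.
by_state = (
    jas.assign(some_conf=jas["confidence"] >= 3)
       .groupby("state")
       .agg(n=("some_conf", "size"), pct_some=("some_conf", "mean"))
)
by_state["pct_some"] *= 100

# Figure 1 in the article keeps only states with n >= 10 respondents.
print(by_state)
```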

The first conclusion from this figure is that not a great deal of variability in court confidence exists across the states. In every state, a majority of the respondents express confidence in their state courts, and in most states the majority is a quite sizable one.

In this data set, there are 22 respondents from Kentucky (which is slightly above average for the states). In terms of how much they trust their courts and judges, they are not statistically distinguishable from the rest of the respondents in the sample (p = 0.770). Overall, 78 percent of Americans trust their courts and judges to at least some degree. Among Kentuckians in the sample, the figure is 73 percent. Although I realize these data provide only a weak test of whether Kentucky is aberrant, they seem to provide some empirical support for my assertion that there is no obvious reason for thinking that empirical findings from Kentucky are atypical.
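The Kentucky-versus-rest comparison reported above is, in effect, a two-sample test of proportions. A hedged sketch follows; the counts are made up solely to mirror the 73 percent versus 78 percent figures and are not the survey tallies.

```python
from statsmodels.stats.proportion import proportions_ztest

# Illustrative counts only: respondents expressing at least some confidence,
# Kentucky versus the rest of the national sample.
successes = [16, 750]   # hypothetical: KY, all other states
totals = [22, 960]      # hypothetical group sizes

z, p = proportions_ztest(successes, totals)
# A large p-value indicates Kentucky is not statistically distinguishable.
print(f"z = {z:.2f}, p = {p:.3f}")
```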

In my panel survey, attitudes toward the legitimacy of the Kentucky Supreme Court were measured in the first-wave interview using a standard battery of items developed in the context of studies of the U.S. Supreme Court. These data support two conclusions: First, the Kentucky Supreme Court enjoys a considerable degree of legitimacy, and second, its legitimacy approximates that of the U.S. Supreme Court. For instance, only a small proportion of Kentuckians (19.7 percent) would “do away” with the court if it made a string of objectionable decisions, although a substantial majority (63.5 percent) would prefer a court that is less independent of the will of the people.

On the measures that are identical to ones asked of national samples with regard to the U.S. Supreme Court (e.g., most recently, Gibson 2007), the Kentucky Supreme Court does well. As to doing away with the court, 69.1 percent of Kentuckians would not abolish their supreme court, compared to the 68.9 percent of Americans who would not abolish the U.S. Supreme Court. Slightly more than one-half of Americans would limit the U.S. Supreme Court's jurisdiction (51.4 percent); 41.9 percent of Kentuckians would limit the jurisdiction of the Kentucky court (although some of this difference has to do with respondents who have no opinion about the Kentucky court). In terms of trust, 65.9 percent of the Kentucky sample say their Supreme Court can be trusted; 65.5 percent of Americans assert that the U.S. Supreme Court can be trusted. Although differences in the size of the “don't know” group cloud the comparison, more Americans think the U.S. Supreme Court gets too mixed up in politics (37.2 percent) than Kentuckians who think the Kentucky Supreme Court gets too mixed up in politics (26.7 percent). Finally, if one were to compare the Kentucky findings on the “do away with” question to data from nearly two dozen surveys around the world of attitudes toward national high courts (see Gibson, Caldeira, and Baird 1998), the conclusion would be that few national high courts enjoy the level of public esteem enjoyed by the Kentucky Supreme Court. Indeed, the Kentucky Supreme Court is even considerably more visible to its constituents than are many high courts around the world.

In general, these data seem to indicate that the Kentucky Supreme Court enjoys a considerable degree of legitimacy in the eyes of the citizens of the state (at least before the judicial elections of 2006). If there is a qualification to this assertion, it is that the idea of an independent institution blocking efforts of the majority of the people to have its way politically is not very attractive to a considerable number of Kentucky citizens. Generally, however, the Kentucky judiciary was held in reasonably high esteem by the citizens of that state prior to the 2006 elections.

Analysis

Threats to Impartiality From Campaign Activity

All respondents in the third-wave survey were asked to evaluate three types of activities said to be engaged in by a judge during a campaign. The actions are:

Issuing a campaign statement saying: “I believe the constitution gives women the right to have abortions.”

Issuing a campaign statement saying: “If elected, I will change Kentucky's law on abortion.”

Accepting campaign contributions from groups seeking to change Kentucky's law on abortion.

The respondents were asked what consequences such activity would have for whether the individual could serve as a fair and impartial judge.Footnote 7

I selected the issue of abortion because it is salient in Kentucky (e.g., the various political and legal activity conducted by the Family Trust Foundation) and because the issue is quite relevant to state courts and state judicial elections. Many scholars (e.g., Caldarone et al. 2007) agree about the importance of abortion decisions for state judiciaries. Abortion is typically a state issue (Brace et al. 1999), and abortion cases are routinely heard in state courts. In addition, abortion is often a significant issue in campaigns for state high courts (Baum 2003; Brennan Center for Justice 2006). Consequently, questions about judges and abortion policy most likely seemed quite realistic to the respondents.

Table 1 reports the percentages of respondents asserting that the judge can be fair and impartial in spite of the specific activity in which the judge engaged during the campaign.Footnote 8 So, for instance, for all respondents, more than one-half (55.6 percent) judge the statement about constitutional protection for abortion rights not to have impugned the impartiality of the judge.Footnote 9 I also report in this table the results according to how the respondent feels about “pro-abortion activists.” The data indicate that even among those feeling negatively toward such groups (45 degrees or colder on the 100-degree feeling thermometer), a majority believes the judge can be fair and impartial. Not surprisingly, a larger percentage of those feeling favorable toward pro-abortion activists (55 degrees or higher) perceive the judge as impartial. From these data, it seems that broad statements of constitutional interpretation do not threaten the perceived impartiality of judges and courts, at least among the majority of the people, and even when people disagree with the policy position.

Table 1. Campaign Activity and Judicial Impartiality

Note: The percentages indicate the proportion of the respondents in the category who asserted that a judge who engaged in such activity can be fair and impartial in her or his decisionmaking on the bench. For instance, among those with “cold” (negative) feelings toward pro-abortion activists, 51.4 percent nonetheless believe that a judge who says the constitution provides for the right to have an abortion can be fair and impartial in deciding cases on abortion. N = 1,034.
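A sketch of how percentages like those in Table 1 could be tabulated from respondent-level data. The data frame, the variable names (therm_activists, fair_statement), and the simulated values are all hypothetical; only the 45-degree and 55-degree cut points come from the text.

```python
import pandas as pd
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical wave-3 data: a 0-100 feeling thermometer toward
# "pro-abortion activists" and a 1/0 judgment that a judge who made the
# broad constitutional statement can still be fair and impartial.
df = pd.DataFrame({
    "therm_activists": rng.integers(0, 101, size=1000),
    "fair_statement": rng.integers(0, 2, size=1000),
})

# Collapse the thermometer into the groups used in the table.
df["therm_group"] = pd.cut(
    df["therm_activists"],
    bins=[-1, 45, 54, 100],
    labels=["cold (<=45)", "neutral (46-54)", "warm (>=55)"],
)

# Percentage saying the judge can be fair and impartial, by thermometer group.
table1 = df.groupby("therm_group", observed=True)["fair_statement"].mean() * 100
print(table1.round(1))
```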

Campaign contributions and direct promises to take policy action are quite a different matter. In both instances, only a minority of the respondents believe the judge can serve in a fair and impartial manner, and these judgments are entirely uninfluenced by the respondents' attitudes toward pro-abortion activists.Footnote 10 Even a majority of those sympathetic toward “pro-abortion activists” believes that the judges engaging in this sort of campaign activity cannot be fair and impartial. Although the difference is small, it is noteworthy that the effect on perceived impartiality of a direct policy promise is less severe than is the receipt of campaign contributions from a relevant interest group.

It appears from these data that, for most citizens, the line is crossed when the candidate makes a specific policy promise, but that a general assertion of one's constitutional ideology does not necessarily undermine perceptions of judicial impartiality. Most generally, it seems that relatively small differences in campaign statements can have significant consequences for public assessments of judges and the judiciary. Statements about constitutional interpretation seem not to violate the expectations of most citizens, while promises to decide issues in a certain way do. And, it should be noted, general policy speech and campaign contributions are quite different matters, having different effects on institutional legitimacy.

Nonetheless, I should note the size of the group that is not unnerved by direct policy promises and campaign contributions. It is not a majority, but it amounts to roughly one-third of the population for promises and for contributions. Indeed, nearly one in five respondents (19.4 percent) finds neither direct policy promises nor the acceptance of campaign contributions objectionable. It is unclear at present whether any political activity by judicial candidates would cause doubt among this group about the impartiality of judges. These figures remind us that the American people are heterogeneous when it comes to their expectations of judges, with, so it seems, many being satisfied with judges who sometimes act as “politicians in robes.”

The analysis reported in Table 1 has an obvious limitation. As with all post hoc research, the nature of the causality in the relationship is ambiguous. To establish causality more confidently, I now turn to a different section of the interview in which experimental methods were employed.Footnote 11

Judgments of Actual Attack Ads, Kentucky 2006

The respondents were also presented with some campaign statements actually made by judges. They were randomly assigned to hear one of the following ads. To reiterate, each person heard only a single ad, with random assignment to an ad version. These are ads that were actually aired on television in Kentucky during judicial campaigns. Through the Campaign Media Analysis Group (CMAG), all ads for candidates running for office are captured from the public airwaves and analyzed. As part of this project, I purchased the ads run in Kentucky from CMAG.

  1. [Announcer]: In 2003, Circuit Judge Bill Cunningham tried to make six rapists eligible for parole. One had been out on parole for only 12 hours when he raped a 14-year-old and made her mother watch. Bill Cunningham already had tried to reduce their sentences, but our Supreme Court said no. Bill Cunningham said it was folly and a blatant injustice to keep these rapists in prison. Judge Rick Johnson believes that a life sentence means a life sentence. Please, vote for Rick Johnson for Justice on the Supreme Court.

  2. [Announcer]: John Roach says he's tough on crime, but Judge Mary Noble has put thousands of criminals behind bars. John Roach, none. Judge Mary Noble has helped dozens of lives through her Drug Court Program. John Roach, none. Judge Mary Noble has been elected by the people twice. John Roach, none. Elect a real judge to the Supreme Court. Vote for Judge Mary Noble.

  3. [Announcer]: David Barber is confused. He's now airing an ad that says Janet Stumbo wrote the Supreme Court opinion in the Morse Fetal Homicide Case. Barber can't tell the boys from the girls. The Morse opinion was written by Justice Bill Cooper. More confusing is that Cooper's opinion upheld the decision with which Barber concurred. He's attacking the Supreme Court for agreeing with him. David Barber: confused about his own opinions. Is he a judge, or just another politician? On November 7th, elect a judge: Janet Stumbo.

Following the presentation of the ad, the respondents were asked a series of questions, including a query about whether such an ad is appropriate for a Kentucky Supreme Court election. Figure 2 reports the results.

Note: Total N = 1,032. Individual treatment condition Ns vary from 332 to 351. Cross-condition difference of means tests (on the uncollapsed response set): η = 0.38, p < 0.001. Ads A, B, and C are the three advertisements reproduced in the text above (Cunningham/Johnson, Roach/Noble, and Barber/Stumbo, respectively).

Figure 2. Assessments of Three Attack Advertisements Broadcast by Kentucky Judges, 2006

As it turns out, a majority of respondents approved of the first two statements, whereas only a very small percentage of the subjects (17.2 percent) thought the third campaign statement was appropriate for a candidate for judicial office in Kentucky. These differences are stark and are, of course, highly statistically significant.
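For readers interested in the statistics reported in the note to Figure 2, the sketch below shows how a cross-condition eta (correlation ratio) and a one-way ANOVA p-value of this kind might be computed. The appropriateness ratings here are simulated placeholders, not the survey data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical appropriateness ratings (1 = entirely appropriate ...
# 4 = not at all appropriate) for the three ad conditions.
ad_a = rng.integers(1, 5, size=340)
ad_b = rng.integers(1, 5, size=345)
ad_c = rng.integers(1, 5, size=347)

groups = [ad_a, ad_b, ad_c]
all_vals = np.concatenate(groups)

# Correlation ratio (eta): between-group sum of squares over total sum of squares.
grand_mean = all_vals.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_total = ((all_vals - grand_mean) ** 2).sum()
eta = np.sqrt(ss_between / ss_total)

# One-way ANOVA across the three randomly assigned conditions.
f_stat, p_value = stats.f_oneway(*groups)
print(f"eta = {eta:.2f}, F = {f_stat:.2f}, p = {p_value:.4f}")
```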

It seems clear that the last statement crosses some sort of line that citizens have in their minds. The ad does seem to be caustic and shrill, and includes an important reference to ordinary politics in the question: “Is he a judge, or just another politician?” My suspicion is that this statement cues the respondents to think, “This ad sounds like politics as usual, politics as I have seen in other political races, and exactly the sort of politics of which I disapprove.” Consequently, a far greater proportion of the respondents is willing to deem the ad inappropriate.Footnote 12 The other ads may be caustic, but this ad seems to portray judges as run-of-the-mill politicians and therefore detracts from their impartiality. This finding is similar to that of Gibson and Caldeira (2007), in that politicized ads by interest groups favoring or opposing Judge Alito's confirmation to a seat on the U.S. Supreme Court seemed to subtract from the legitimacy of that institution.

Still, perhaps the most important conclusion from this analysis is that it is indeed possible to attack one's opponent, even when one is a judge, so long as the attack is strictly confined to policy disagreement. And because this analysis is based on an experimental design, we can have considerable confidence that the ad content itself actually caused the respondents' assessments of appropriateness.Footnote 13

To this point, I have established that some types of campaign activity can indeed impugn the perceived impartiality of state courts. I have not, however, concluded that these findings are peculiar to the judiciary. It seems quite reasonable to hypothesize that all political institutions suffer from perceptions of conflicts of interest generated by campaign contributions and scurrilous attack ads. In order to pinpoint more clearly the significance, if any, of the judiciary, cross-institutional analysis comparing courts with other political institutions is necessary.

The Campaign Content Experiment

In this experiment, the respondents were asked several questions about whether particular types of campaign statements were appropriate. Several variables were manipulated in the questions, including:

  1. The institution: whether the statements were made by a candidate for the Kentucky Supreme Court or the Kentucky State Senate.

  2. The policy: Two-thirds of the respondents were presented with campaign assertions on the issue the respondent deemed most important in the second-wave interview; the remaining one-third heard statements about an issue other than the issue deemed most important by the subject.Footnote 14

  3. The policy position: Respondents in the “most important issue” condition were then randomly assigned to hear campaign statements that were (1) contrary to the respondent's own views on the issue, or (2) not contrary to the respondent's own views. For those few respondents who had no position on the most important issue, random assignment to campaign statements representing the differing views on the issue was used. For those hearing statements about an issue other than the one designated as most important, each respondent was randomly assigned to either a pro or con statement on the issue to which the respondent had been randomly assigned. Because this was not the most important issue for the respondent, her or his substantive view on the policy is not known.

Thus, ignoring the specific issue about which the respondent was asked, there are eight major versions of the campaign statement experiment. The basic structure of the stimuli in this experiment is as follows:

Suppose a candidate for the [Kentucky Supreme Court/Kentucky State Senate] made a promise during the campaign that, if elected, he would [PRO/ANTI R's ISSUE POSITION: e.g., expand the right to abortion] in Kentucky. Would you say that this sort of campaign activity is entirely appropriate for a [INSTITUTION] election, somewhat appropriate for a [INSTITUTION] election, not very appropriate for a [INSTITUTION] election, or not at all appropriate for a [INSTITUTION] election?

As I have noted, the manipulations are: (1) the institution, (2) the importance of the issue to the respondent, and (3) whether the campaign statement is agreeable or disagreeable to the respondent.
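The assignment logic just described can be summarized in a short sketch. This is a hypothetical reconstruction of the factorial design, not the survey firm's actual CATI script; the function name and condition labels are illustrative.

```python
import random

def assign_condition(most_important_issue, rng=random):
    """Sketch of the factorial assignment described in the text."""
    # Manipulation 1: the institution named in the vignette.
    institution = rng.choice(["Kentucky Supreme Court", "Kentucky State Senate"])

    # Manipulation 2: two-thirds hear a promise on their own most important issue.
    if rng.random() < 2 / 3:
        issue = most_important_issue
        # Manipulation 3: the promise either agrees or conflicts with the
        # respondent's stated position on that issue.
        direction = rng.choice(["agrees with respondent", "contrary to respondent"])
    else:
        issue = "randomly selected other issue"
        direction = rng.choice(["pro", "con"])

    return institution, issue, direction

# Example: one simulated respondent whose most important issue is abortion.
print(assign_condition("abortion"))
```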

Do people think it appropriate for candidates for public office to make statements of the sort “if elected, I promise to …,” and do opinions vary according to whether a judge or a legislator is running for office? Perhaps surprisingly, they do not: A substantial majority (about two-thirds) of the respondents find such behavior inappropriate, with a strong tendency toward rating such statements as “not at all appropriate” (twice as many responses as “not very appropriate”). But more surprising still is the finding that, while an institutional difference exists, it is not massive, and this sort of campaign promise is deemed inappropriate for a legislative candidate by well more than one-half of the respondents (see Figure 3).

Note: N = 1,028. Cross-condition difference of means tests (on the uncollapsed response set): η = 0.09, p = 0.005.

Figure 3. The Inappropriateness of Campaign Promises, Across Institutions

The difference in replies across institutions is statistically significant (though not strongly so): 70.6 percent of the respondents asked about a candidate for the Kentucky Supreme Court find the promise not appropriate, compared to 62.3 percent of those asked about a candidate for the Kentucky Senate. Thus, while it may not be surprising to find that people do not approve of judges issuing campaign promises, they also do not approve of legislators engaging in such behavior, and the difference between the two institutions is far smaller than might have been imagined given that the traditional role of legislators is to make promises about how they will decide issues if elected. This is an important finding: many respondents who condemn campaign activity by judges also evaluate campaign activity by other political leaders as inappropriate. Without a control for the type of office under consideration (court versus not), much of the anti-campaign sentiment commonly observed might be considered idiosyncratic to the judiciary, when in fact the sentiments most likely generalize to campaign activity within most if not all political races.

Why do citizens object to policy promises by candidates for public office? After all, are not policy promises one of the most important reasons for holding elections in the first place? This finding is indeed unexpected.

One possible explanation of these results is that I have fundamentally misunderstood the basic objection to “promises to decide” behavior by candidates. Perhaps one implication of such promises that renders them objectionable to people is that they imply some sort of pre-commitment to an undisclosed group, perhaps even to an interest group, and not necessarily to the issue itself. Perhaps the objectionable part of policy promises has nothing to do with closed-mindedness and impartiality, but everything to do with “selling out” with “promises” to interest groups. Perhaps the key word in this question is promises, which to some may imply some sort of tawdry quid pro quo. If this is so, then it explains why promises are detrimental to perceived legitimacy in both the legislative and judicial contexts.Footnote 15

This experiment also varied the importance of the issue (to the respondent) that the campaign promise was said to address. For two-thirds of the respondents (randomly selected), the issue was the one they deemed most important to them in the second interview; for the remaining one-third of the respondents, the issue was one of the six from which the respondent chose the most important, but was not the most important issue. Table 2 reports the difference of means on the appropriateness measure, within institution, according to whether the most important or not-most-important issue was described in the question about making campaign promises.

Table 2. The Campaign Speech Experiment, Within Institutions

The differences in judged appropriateness according to the importance of the issue are not great. In the case of the State Senate, the t-test approaches statistical significance; those hearing that the promise was made on the most important issue are less likely to judge it inappropriate. For those told about a campaign promise by a candidate for the Kentucky Supreme Court, a similar difference emerges, although it is far from statistically significant. The importance of the issue alone does not have much influence over judgments about the appropriateness of making campaign statements.

Among those told about campaign promises on the issue of greatest importance to them, the sample was further divided (via random assignment) according to whether the campaign promise was in accordance with the respondent's own position or whether the promise was contrary to the respondent's preference.Footnote 16 As Table 2 also reports, policy agreement makes a tremendous difference in the judged appropriateness of the statement, for both legislators and judges. When the promise is contrary to the respondent's own position, it is thought to be quite inappropriate. But, again, this finding pertains to both judges and legislators: campaign promises are frowned upon when the candidate promises to make an unwelcome policy decision. Moreover, these data hint at the possibility that simple policy disagreement, not more general standards of appropriateness, might be the driving force in the judgments reflected in this dependent variable.
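The within-institution comparisons in Table 2 are differences of means on the four-point appropriateness scale, evaluated with t-tests. A hedged sketch with simulated ratings follows; the group sizes are invented, and the means are chosen only to loosely echo those discussed in the text.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical 1-4 appropriateness scores (4 = not at all appropriate) for
# respondents told a judicial candidate's promise agreed with, or conflicted
# with, their own position on their most important issue.
agreeable = np.clip(rng.normal(3.27, 0.8, size=180).round(), 1, 4)
contrary = np.clip(rng.normal(3.70, 0.6, size=180).round(), 1, 4)

# Welch's t-test for the difference of means between the two conditions.
t_stat, p_value = stats.ttest_ind(agreeable, contrary, equal_var=False)
print(f"mean (agreeable) = {agreeable.mean():.2f}, "
      f"mean (contrary) = {contrary.mean():.2f}, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```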

Despite the overall conclusion that campaign promises by judges and legislators are evaluated similarly, we do see in these data a slight but interesting effect of institution on judgments of appropriateness. For those told about a legislative candidate making agreeable promises, the mean response is 2.84, indicating a low level of inappropriateness. For those told about a judicial candidate, the comparable mean is 3.27. This difference indicates that some people seem to judge legislative and judicial candidates differently and that the essential basis of that judgment is not simply policy agreement or disagreement. Still, this difference should not be exaggerated. In the legislative context, 42.2 percent of the respondents find agreeable policy promises inappropriate, while in the judicial context the comparable figure is 54.5 percent. This is a difference, to be sure, but not one of the expected magnitude, in light of the traditional nature of judicial and legislative campaigns. And this finding must be understood within the overall context of disapproval of campaign statements promising to decide issues in a particular way.Footnote 17

Finally, as expected, I find no significant effect of whether the promise is pro or anti the issue among those respondents in the “not most important issue” condition, which is not surprising in that random assignment produces a context in which some respondents agree with the policy position expressed, whereas others do not.

Several important findings emerge from this experiment. First, a significant majority of the respondents find making policy promises to decide inappropriate. Second, the inter-institutional differences (i.e., between the Kentucky Supreme Court and the State Senate) are largely (but not entirely) trivial. Finally, the data hint at the possibility that the objectionable element of policy promises is not so much the inability to decide policy issues with an open mind, but may instead be related to the making of promises, which may imply some sort of conflict of interest, perhaps one cemented by much-hated campaign contributions.

Discussion and Concluding Comments

Does campaign activity by candidates for judicial office threaten the institutional legitimacy of courts? The answer to this question offered by the data analyzed here is somewhat complicated. This analysis produces new evidence confirming my earlier finding that campaign contributions represent a significant threat to the perceived impartiality of judges, even if a similar threat to the legitimacy of legislators also exists. When it comes to policy talk, most Kentuckians are not put off by general statements of policy positions, and most do not object to even fairly vigorous attack ads. At least some elements of traditional political campaign activity are acceptable to most people, even within the context of judicial elections.

But a line clearly exists for both types of activity. In terms of attack ads, charges that portray judges as ordinary politicians seem to be damaging to courts. Just as with the findings of Gibson and Caldeira (2007) on the Judge Alito nomination to the U.S. Supreme Court, ads suggesting that judges are “politicians in robes” influence how people view judges and courts. The attack ad experiment reported here shows that candidates can indeed vigorously attack one another without overstepping the expectations of citizens; nonetheless, the invocation of “low politics” appears to cross the line in judicial races.

Similarly, specific policy promises threaten the legitimacy of both courts and legislatures. These findings diverge from those I earlier reported from the first wave of the Kentucky survey, most likely because the promises depicted in the analysis reported here are considerably more explicit and may be tied in the minds of some respondents to a quid pro quo relationship between candidates and groups. Again, it is important to stress that there seems to be little that is peculiar to the judiciary on this score. To the extent that states wish to ban “promises to decide” by judges under the theory of threats to institutional legitimacy, they should also consider banning such promises when made by legislative candidates. Most people seem to accept that candidates for judicial and legislative office hold policy views on various, relevant issues, and the expression of those views seems reasonable to most. Policy promises, however, are another matter.

The whole issue of the limits of permissible campaign activity remains to be investigated more thoroughly. It seems obvious that both the positions that “all policy talk is benign” and that “all policy talk is cancerous” are inaccurate as an empirical matter. Moreover, it seems possible that some serious interactions complicate the picture further. For instance, when promises to decide are offered in rebuttal to the assertions of a competing candidate, are they viewed in the same negative vein? Campaign activity is typically multidimensional and complicated. Research such as that reported here most likely does not fully capture that complexity, even if it moves some distance in that direction.

The empirical analysis reported in this article is obviously related to the policy debate about how we select judges in the United States and what sorts of activities can be legitimately pursued by judges. It is beyond the scope of this article to engage fully the normative debate, but some closing comments on this score seem appropriate.

Many legal elites in the United States strongly disapprove of allowing judges to make policy pronouncements during their campaigns for seats on the high courts of the states. It seems that there are several reasons why this may be so.

First, some may believe that by announcing policy views, judges in fact compromise their actual impartiality and open-mindedness. It would surprise me, however, to learn that informed observers adopt such a naïve view of the process of judicial decisionmaking. The worry cannot be simply that judges are biased in their decisionmaking, since holding a policy view but not announcing it should be of roughly equal concern as holding a policy view and announcing it. Obviously, judges, especially those with much judicial experience, have reached conclusions on myriad legal issues in their earlier decisions, so to expect that judges are tabula rasa when they confront new cases is naïve. Finally, informed observers surely recognize that the process of decisionmaking is typically highly discretionary and that discretion is often little constrained by law (when precedents conflict, for example, which set should be “followed”?), and therefore discretion must be guided by the preferences of the decision maker. It would indeed be surprising to learn that observers believe that the simple process of making a policy statement somehow changes the processes of decisionmaking that judges employ. Campaign talk, one way or another, is unlikely to change the true nature of the decisionmaking processes used by judges.

A second argument is that these pledges may actually cause ordinary people to lose faith in the judiciary, irrespective of actual processes of decisionmaking. The evidence of this article is that some policy talk hurts courts; other talk does not. So the concerns of the critics of Republican Party of Minnesota v. White (2002) are not entirely discounted by the analysis presented in this article. The key issue, however, is to pinpoint exactly what sorts of speech are corrosive, instead of muzzling judges across the board.

A third argument is more complicated. Suppose that most legal elites hold at least moderately left-wing ideological positions. Further, assume that these elites generally perceive the American people as both ill-informed and at least somewhat conservative. On these assumptions, anything that strengthens the connection between judges and ordinary people will increase the likelihood of conservative decisions by judges (e.g., the use of the death penalty; see Brace & Boyea 2008). When observers distrust or disagree with the constituents of courts, they are likely to want to minimize the influence of the constituents' preferences over the making of public policy.Footnote 18 Banning policy talk is one way to do this.

Finally, in the absence of policy cues from candidates, citizens generally have little guidance on how to cast their votes. To the extent that policy-voting is more difficult, the influence of interest groups and political parties is likely to increase. If citizens cannot vote on the issues dear to them, they will either not vote, or cast their votes on the basis of recommendations from bar and lawyer groups, media outlets, or other interest groups. Without policy cues, the influence of legal elites on the selection of judges is likely to increase. And if elections have no policy relevance, then one of the most important rationales for holding judges accountable to their constituents is itself seriously weakened, thereby opening the door to the possibility of doing away with elections altogether.

Debates about the consequences of judicial campaigns will undoubtedly continue, and I have no doubt that strategic considerations will lead some to disguise the true motives undergirding their announced policy positions. The role of the social scientist is to try to ensure that such debates are informed by rigorous empirical evidence. And perhaps the most certain conclusion produced to date is that campaign activity is multidimensional, with different types of actions having differing consequences for institutional legitimacy, and that there may well be little about campaigns for judicial office that is special or unique. Nonetheless, a large number of additional questions are still in need of rigorous scientific consideration.

Appendix A: The Panel Survey Design

The Initial Interview (t1)

This survey is the initial wave in a three-wave panel survey of the residents of Kentucky. The questionnaire was subjected to a formal pretest, and on the basis of the pretest results it was significantly revised. The survey was conducted by Schulman, Ronca, and Bucuvalas Inc. (SRBI) during the early summer of 2006. Computer-Assisted Telephone Interviewing was used. Within households, the respondents were selected randomly. One adult age 18 or older was selected as the designated respondent in each eligible household.Footnote 19 No respondent substitution was allowed. The interviews averaged just over 20 minutes. The selected respondent was offered $10 for completing the interview. A total of 20,078 telephone numbers was used in the survey, with a resulting American Association for Public Opinion Research (AAPOR) Cooperation Rate #3 of 38.7 percent and an AAPOR Response Rate #3 of 28.7 percent (see AAPOR 2000). The final data set was subjected to some relatively minor post-stratification and was also weighted by the size of the respondent's household.
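For readers unfamiliar with the AAPOR outcome rates cited above, the sketch below gives the general form of Response Rate 3 and Cooperation Rate 3 from AAPOR's standard definitions. The disposition counts are purely illustrative and are not the actual SRBI call records.

```python
def aapor_rr3(I, P, R, NC, O, UH, UO, e):
    """AAPOR Response Rate 3: completed interviews over all eligible cases
    plus an estimated share (e) of unknown-eligibility cases."""
    return I / ((I + P) + (R + NC + O) + e * (UH + UO))

def aapor_coop3(I, P, R):
    """AAPOR Cooperation Rate 3: completed interviews over contacted eligible
    cases (completes, partials, and refusals)."""
    return I / ((I + P) + R)

# Purely illustrative dispositions: I = completes, P = partials, R = refusals,
# NC = non-contacts, O = other, UH/UO = unknown eligibility, e = eligibility estimate.
print(f"RR3  = {aapor_rr3(I=2048, P=50, R=2500, NC=1500, O=200, UH=4000, UO=1000, e=0.5):.3f}")
print(f"COOP3 = {aapor_coop3(I=2048, P=50, R=2500):.3f}")
```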

The Second Interview (t2)

In the month before the general election, the survey firm attempted to re-interview all the respondents interviewed earlier as part of the t1 survey. Of the 2,048 respondents from the first survey, interviews were completed with 1,438 individuals. The AAPOR Response Rate #3 is 78.7 percent, and the Cooperation Rate #3 is 89.4 percent.

I have carefully investigated the t2 sample to determine whether any evidence of unrepresentativeness can be found. One way in which the representativeness of the t2 sample can be assessed is to determine whether those who were interviewed in the second survey differ from those who were not interviewed. The null hypothesis (H0) is that no difference exists between the two subgroups.

With more than 1,400 completed interviews at t2 and more than 2,000 at t1, tests of statistical significance are not very useful (i.e., even trivial differences are statistically significant given this large number of cases). Therefore, I focus on the degree to which the dichotomous variable indicating a successful t2 interview predicts responses to a number of important t1 variables. The only interesting relationship discovered in this analysis (using as a criterion an eta of greater than or equal to 0.10 as an indication of a notable difference) has to do with the age of the respondent: η = 0.15.Footnote 20 The average age of those interviewed at t2 is 51.3; for those not interviewed, the age is 46.1. This finding is typical of panel surveys, with younger people being difficult to track down for subsequent interviews.

In terms of substantive variables, however, I find practically no interesting differences. For instance, in terms of knowledge of courts, I find statistically significant but trivial differences between those interviewed at t2 and those not, with those interviewed having only slightly greater knowledge of courts than those not interviewed (30.5 versus 26.4 percent, respectively, with relatively high knowledge). Awareness of the Kentucky Supreme Court is similarly distributed (79.2 versus 74.1 percent, with at least some level of awareness). In terms of support for the Kentucky Supreme Court, the correlation between the feeling thermometer responses and whether a t2 interview was conducted is 0.06; for the institutional loyalty factor score (measured, of course, at t1) the correlation is 0.07. In general, the analysis reveals that the t2 sample is biased in favor of higher levels of information and awareness, but the bias is slight indeed. Moreover, when poststratification weights are applied to the t2 data, even this minimal bias becomes entirely trivial.
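The attrition check described in this appendix amounts to computing a correlation ratio (eta) between each t1 variable and a dichotomous indicator of a completed t2 re-interview, flagging values of 0.10 or greater. A minimal sketch with simulated data; the variable names and values are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Hypothetical t1 records with an indicator for a completed t2 re-interview;
# 'age' is the one variable the appendix flags as differing notably.
panel = pd.DataFrame({
    "reinterviewed_t2": rng.integers(0, 2, size=2048),
    "age": rng.integers(18, 90, size=2048),
})

def eta(y, group):
    """Correlation ratio between a numeric t1 variable and the dichotomous
    re-interview indicator (the eta >= 0.10 screen used in the appendix)."""
    grand_mean = y.mean()
    ss_between = sum(
        len(g) * (g.mean() - grand_mean) ** 2 for _, g in y.groupby(group)
    )
    ss_total = ((y - grand_mean) ** 2).sum()
    return np.sqrt(ss_between / ss_total)

print(round(eta(panel["age"], panel["reinterviewed_t2"]), 3))
```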

The Final Interview (t3)

The final interview in the three-wave panel was conducted several months after the end of the 2006 election process. Only those respondents interviewed at t2 (N = 1,438) were eligible for the t3 interview. Using the AAPOR standards (Response Rate #3), the t3 response rate is 0.77, with a cooperation rate of 0.94 (#3). Of course, with such high rates of interviewing, practically no issues of representativeness emerge.

The question of how to weight the panel data is somewhat complicated. The t1 survey was subjected to some slight post-stratification so as to improve its representativeness (see Table A1). Weights were then developed for the t2 and t3 surveys to improve the representativeness of these subsamples. The target for the t2 and t3 weighting was the characteristics of the t1 survey. As a consequence, when I analyze the panel data, I use the t3 weight, but when I consider only the t1 data, I use the original weight variable. Since virtually all of the analysis reported in this article is based on questions asked in the third interview, the data are weighted by the t3 weight.
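The weighting algorithm itself is not spelled out here, so the following is only a minimal cell-weighting sketch under the assumption that each panel respondent receives the ratio of the t1 (target) cell proportion to the panel subsample's cell proportion; the data-frame and column names are hypothetical.

```python
import pandas as pd

def poststratification_weights(panel, target, cells):
    """Give each panel respondent the target-cell proportion divided by the panel-cell proportion."""
    target_share = target.groupby(cells).size() / len(target)
    panel_share = panel.groupby(cells).size() / len(panel)
    weights = (target_share / panel_share).rename("weight").reset_index()
    return panel.merge(weights, on=cells, how="left")

# Hypothetical use: weight the t3 respondents back to the t1 sample on age group and education
# t1_sample, t3_sample = pd.DataFrame(...), pd.DataFrame(...)
# t3_weighted = poststratification_weights(t3_sample, t1_sample, ["age_group", "education"])
```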

Table A1. Attributes of the Sample

Footnotes

This research has been supported by the Law and Social Sciences Program of the National Science Foundation (SES 0451207). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation. I greatly value the support provided for this research by Steven S. Smith and the Weidenbaum Center on the Economy, Government, and Public Policy at Washington University in St. Louis. Rick Lempert's criticism of an earlier paper of mine played an important role in stimulating the current research. I appreciate the help of John Geer in understanding the meanings of the responses to the attack ad experiment.

1 Nor can much be borrowed from research on campaign activity in other subfields of political science. Excellent research exists, for example, on the use of attack ads (e.g., Geer 2006), but none of those studies addresses courts, and none directly addresses consequences such as perceived impartiality and institutional legitimacy.

2 So, for instance, Benesh used the following measure as her dependent variable: “What is your level of confidence in the courts in your community?” (2006:701–2). Even notwithstanding the critique of confidence measures by Gibson, Caldeira, and Spence (2003), it is not clear how the respondents understood “courts in your community.” For a related analysis of confidence in state political institutions, see Kelleher and Wolak (2007). In general, extant research seems to focus more on specific support (output approval) and less on diffuse support (institutional legitimacy). As Cann and Yates noted, “this is not a trivial distinction,” since legitimacy represents a form of political capital that is especially valuable when citizens are displeased with the short-term policy outputs of an institution (2008:300).

3 Some work does attempt to connect systems of judicial selection and retention to the attitudes of citizens. See, for instance, Benesh (2006), Cann and Yates (2008), and Gibson (2008a). Cross-level analysis such as this faces a number of challenging methodological problems.

4 Overall, these findings from the national survey are quite similar to findings from Gibson's Kentucky-based research (Gibson 2008b). The most important exception is that Gibson found a small, negative effect of attack ads on judicial legitimacy in Kentucky.

5 The Family Trust Foundation sued to overturn Canon 5B(1)(c) of Kentucky's Code of Judicial Conduct after failing in its effort to survey all candidates for judicial office in Kentucky in 2004 on a variety of contentious legal issues. The foundation succeeded in getting the Canon declared to be in violation of the First Amendment to the U.S. Constitution (Family Trust Foundation of Kentucky v. Wolnitzek, 345 F. Supp. 2d 672 [E.D. Ky. 2004]). For a discussion of campaign speech by candidates for judicial office, see Bopp and Woudenberg (2007). Bopp successfully argued Republican Party of Minnesota v. White (2005) before the U.S. Supreme Court.

6 In some sense, no single state can ever be “representative” of some larger population or subpopulation of states, especially on matters of judicial selection and retention. Each state has its own somewhat idiosyncratic history with judicial campaigns (especially since politicized campaigns are relatively new). Indeed, if one looks closely at the traditional five-category description of methods of selecting judges in the United States, one finds a great deal of within-category variability, so states that are often collapsed together are in fact heterogeneous. Consequently, I have tried in this analysis to be cautious not to overclaim the ability to generalize from Kentucky.

7 The question read:

Next, I would like you to think about a lawsuit concerning whether a woman has the right to have an abortion. Imagine if you will that the judge deciding the case made some statements about abortion during his last election campaign—the one back in November. If the judge said during the campaign that ‘I believe the constitution gives women the right to have abortions,’ would you think that this alone would mean that the judge cannot be fair and impartial in deciding the case, or would you think that irrespective of the statement the judge could be fair and impartial?

The two additional questions were:

If during the campaign the judge accepted campaign contributions from groups seeking to change Kentucky's law on abortion, “Would you think that this alone would mean that the judge cannot be fair and impartial in deciding the case, or would you think that irrespective of the statement the judge could be fair and impartial?” And what if the judge said during the campaign, “If elected, I will change Kentucky's law on abortion?” Would you think that this alone would mean that the judge cannot be fair and impartial in deciding the case, or would you think that irrespective of the statement the judge could be fair and impartial?

8 Since these activities were presented to the respondents in a random sequence, I have carefully considered whether the order of presentation has any influence on the responses. For the question about campaign contributions and for the direct assertion that the candidate would change the law, there is no evidence whatsoever of order effects: neither a chi-square test nor a difference-of-means t-test is statistically significant. For the statement about the candidate's interpretation of the constitution, however, a very slight order effect is observed. While the chi-square test of the trichotomous responses (impartial, not impartial, don't know) is not statistically significant, the t-test of the responses weighted by attitude strength is significant at p = 0.039, with η = 0.08. When this activity is presented last, it is least likely to produce a response that the judge can be fair and impartial, though the effect is weak: the percentage believing the judge can be fair ranges from 58.7 percent when the statement is offered first, to 56.1 percent when it is second, to 51.4 percent when it is last. When the respondent hears this statement last, it comes after the candidate has already been described as promising to change the law and as having accepted campaign contributions. Because this effect is quite marginal, because it influences only the intensity (not the direction) of responses, and because the other two statements are entirely unaffected by order, I treat the responses as uninfluenced by presentation order and ignore this factor in the analysis that follows. Note that this decision has no consequences for the substantive conclusions one draws from the data reported in Table 1.
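The order-effect checks in this note can be approximated with standard tests. The sketch below assumes a respondent-level data set with hypothetical column names: the position (1, 2, or 3) at which the constitutional-interpretation statement appeared, the trichotomous impartiality response, and the response weighted by attitude strength. It substitutes a one-way analysis of variance for the difference-of-means test, since three positions are compared.

```python
import pandas as pd
from scipy.stats import chi2_contingency, f_oneway

def order_effect_tests(df):
    """Test whether responses to one campaign activity depend on where it appeared in the sequence."""
    # Chi-square test of the trichotomous responses (impartial / not impartial / don't know) by position
    table = pd.crosstab(df["position"], df["response"])
    _, p_chi2, _, _ = chi2_contingency(table)

    # ANOVA on the intensity-weighted responses across the three presentation positions
    groups = [g["intensity"].dropna() for _, g in df.groupby("position")]
    _, p_anova = f_oneway(*groups)
    return p_chi2, p_anova
```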

9 Because some portion of the respondents was unable to judge these activities, the percentages believing the judge cannot be fair are not equal to 100 percent minus the percentages shown in this table. However, the “don't know” responses to these questions were rare, ranging from only 4.2 to 5.6 percent.

10 The findings reported here are nearly identical when the variables indicating affect toward “anti-abortion activists” are used instead. For instance, the correlation between the direct assertion that the candidate would change the law and (unrecoded) affect toward anti-abortionists is 0.06; for affect toward pro-abortion activists, the correlation is −0.01.

11 When experiments are embedded within representative surveys, not only are findings generalizable to the larger population from which the sample is drawn (external validity), but great confidence can also be placed in causal inferences (internal validity). With random assignment of respondents to vignette versions, the proverbial “all else” can indeed be considered equal. Cook and Campbell (1979) first made the distinction between internal and external validity.
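A minimal simulation illustrates why random assignment does the work described in this note: when respondents are assigned to vignette versions at random, the conditions differ (in expectation) only in the treatment received, so a simple comparison of condition means estimates the treatment effect without control variables. The data below are fabricated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1_000
version = rng.integers(0, 3, size=n)        # random assignment to one of three vignette versions
baseline = rng.normal(50, 15, size=n)       # unobserved pre-existing attitudes
true_effects = np.array([0.0, -4.0, -8.0])  # illustrative treatment effects
response = baseline + true_effects[version]

# Because assignment is random, condition means recover the treatment effects (up to sampling error)
print([round(response[version == v].mean(), 1) for v in range(3)])
```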

12 It is also possible that the results reflect something of an interview artifact, but one with substantive implications. The interviews were conducted by telephone. These ads are fairly lengthy and complicated (especially ad number 3). Under these conditions, perhaps the material at the end of the advertisement has a disproportionate influence on the respondents. The respondents may not listen carefully to the text of the ads, but they know a question will be asked at the conclusion of the ad, so they are especially attentive to the material at the close of the ad. The first two ads close with statements about judges (even if the Roach ad refers obliquely to “a real judge”). But the Stumbo ad directly implicates politics in closing, saying, “Is he a judge, or just another politician?” With this sentence, much of the ambiguity and clutter from the earlier statements in the ad may get washed away, and this last assertion may therefore have more impact on the responses. Still, I consider this a substantive effect because, given the inherent complexity of this ad, its influence on the telephone respondents is likely similar to its influence on voters who viewed the ad during Stumbo's campaign.

13 With random assignment of respondents to ad versions, it is not necessary to implement any control variables in order to estimate without bias the effect of the treatment (exposure to the ad). But because the ads differed in terms of a quite obvious characteristic of the candidates—gender—I considered whether reactions to the ads varied by the respondents' gender. In none of the three ad versions is there a statistically significant difference in the judgments of the male and female respondents in the sample. And within gender, the effects of the different ads are virtually identical. Because this experiment was implemented in the third-wave interview, conducted several months after the election, I did not measure candidate preferences and therefore cannot ascertain whether such preferences were influential. But to reiterate, the estimates of the ad effects are unbiased even in the bivariate analysis.

14 During the second interview, the respondents were asked about six political/legal issues on which the Kentucky Supreme Court might rule in the next several years. After rating each on importance, the respondents were asked to designate the issue they thought most important. Nearly all respondents were able to answer this query, and therefore the experiment referred to the individual respondent's most important issue. Only 2.5 percent of those questioned could not specify an issue as the most important (and another 0.3 percent refused to do so). This small group of respondents was randomly assigned an issue.

15 I must acknowledge that this finding may be an artifact of the specific context of the interviews. Although this experiment preceded any mention of campaign contributions in the third-wave interview, the earlier interviews asked a variety of questions about campaign contributions. The earlier interviews may therefore have contributed to priming the respondents to think about contributions whenever they heard words such as promises. Unfortunately, I have no means of assessing this hypothesis. It should be noted, however, that with random assignment of respondents to treatment conditions, the statistical estimates of the effects of the treatments are unbiased.

16 This is why the number of cases drops in the “Position: Most Important Issue” sections of Table 2. As I have noted, two-thirds of the respondents were assigned to the most-important-issue condition, and then that subsample was split evenly between agreeable and disagreeable policy promises.

17 To what degree is the impact of issue agreement/disagreement contingent upon the specific issue under consideration? Recall that the respondents were asked to indicate the most important issue to them and that two-thirds were then asked about that issue, varying the campaign promise from agreeing to disagreeing with the respondent's own position. Within each issue, is the effect of agreement/disagreement the same? It is not. On the death penalty and the tort issues, agreeable or disagreeable promises make little difference to the respondents; on the other four issues, they do. The strongest relationship is found on the issue of homosexuality and gay marriage, with weaker associations for abortion, religious displays on government property, and whether people should be allowed to burn the American flag in protest. Because I know of little theoretical basis for understanding cross-issue differences—all these issues have the status of “most important” to the respondent—I do not pursue this matter further.
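A within-issue version of the analysis described in this note can be sketched as follows: for each respondent-designated most important issue, compare the mean outcome between respondents who heard an agreeable promise and those who heard a disagreeable one. The column names are hypothetical.

```python
import pandas as pd

def effect_by_issue(df):
    """Mean outcome for agreeable versus disagreeable promises, computed separately within each issue."""
    means = (
        df.groupby(["issue", "promise_agrees"])["outcome"]
        .mean()
        .unstack("promise_agrees")
    )
    means["difference"] = means[True] - means[False]
    return means

# Hypothetical use: df holds the two-thirds of respondents assigned to the most-important-issue
# condition, with columns "issue", "promise_agrees" (True/False), and "outcome".
```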

18 Of course, with different assumptions about ideological preferences, such an argument could easily be recast by simply reversing the words liberal and conservative.

19 The method is that devised by Rizzo and colleagues (2004).

20 In all this analysis, I have used unweighted data.

References


American Association for Public Opinion Research (2000) Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. Ann Arbor, MI: AAPOR.
Baum, Lawrence (2003) “Judicial Elections and Judicial Independence: The Voter's Perspective,” 64 Ohio State Law J. 13–41.
Benesh, Sara C. (2006) “Understanding Public Confidence in American Courts,” 68 J. of Politics 697–707.
Bopp, James Jr., & Woudenberg, Anita Y. (2007) “An Announce Clause By Any Other Name: The Unconstitutionality of Disciplining Judges Who Fail to Disqualify Themselves for Exercising Their Freedom to Speak,” 55 Drake Law Rev. 723–61.
Brace, Paul, & Boyea, Brent D. (2008) “State Public Opinion, the Death Penalty, and the Practice of Electing Judges,” 52 American J. of Political Science 360–72.
Brace, Paul, et al. (1999) “Judicial Choice and the Politics of Abortion: Institutions, Context, and the Autonomy of Courts,” 62 Albany Law Rev. 1265–304.
Brennan Center for Justice (2006) “Alabama's Supreme Court Primary Campaigns Highlight Radical Transformation of State Judicial Elections.” Press release, 2 June, New York.
Caldarone, Richard P., et al. (2007) “Partisan Labels and Democratic Accountability: An Analysis of State Supreme Court Abortion Decisions.” Unpublished paper, Princeton University, Princeton, NJ.
Cann, Damon M., & Yates, Jeff (2008) “Homegrown Institutional Legitimacy: Assessing Citizens' Diffuse Support for State Courts,” 36 American Politics Research 297–329.
Cook, Thomas D., & Campbell, Donald T. (1979) Quasi-Experimentation: Design and Analysis Issues for Field Settings. Chicago: Rand McNally.
Easton, David (1975) “A Re-Assessment of the Concept of Political Support,” 5 British J. of Political Science 435–57.
Fagan, Ronald W. (1981) “Public Support for the Courts: An Explanation of Alternative Explanations,” 9 J. of Criminal Justice 403–17.
Flanagan, Timothy, et al. (1985) “Public Perceptions of the Criminal Courts: The Role of Demographic and Related Attitudinal Variables,” 22 J. of Research in Crime and Delinquency 66–82.
Geer, John G. (2006) In Defense of Negativity: Attack Ads in Presidential Campaigns. Chicago: Univ. of Chicago Press.
Gibson, James L. (2007) “The Legitimacy of the U.S. Supreme Court in a Polarized Polity,” 4 J. of Empirical Legal Studies 507–38.
Gibson, James L. (2008a) “‘New-Style’ Judicial Campaigns and the Legitimacy of State High Courts: Results from a National Survey.” Unpublished manuscript, Washington University in St. Louis, presented at the 2008 Annual Meeting of the Midwest Political Science Association, Chicago, April.
Gibson, James L. (2008b) “Challenges to the Impartiality of State Supreme Courts: Legitimacy Theory and ‘New-Style’ Judicial Campaigns,” 102 American Political Science Rev. 59–75.
Gibson, James L., & Caldeira, Gregory A. (2007) “Supreme Court Nominations, Legitimacy Theory, and the American Public: A Dynamic Test of the Theory of Positivity Bias.” Paper delivered at the 2007 Annual Meeting of the American Political Science Association, 30 Aug.–2 Sept., Chicago.
Gibson, James L., & Caldeira, Gregory A. (n.d.) Citizens, Courts, and Confirmations: Positivity Theory and the Judgments of the American People. Princeton, NJ: Princeton Univ. Press, forthcoming.
Gibson, James L., & Caldeira, Gregory A., et al. (1998) “On the Legitimacy of National High Courts,” 92 American Political Science Rev. 343–58.
Gibson, James L., & Caldeira, Gregory A., et al. (2003) “Measuring Attitudes toward the United States Supreme Court,” 47 American J. of Political Science 354–67.
Goldberg, Deborah, et al. (2005) The New Politics of Judicial Elections 2004: How Special Interest Pressure on Our Courts Has Reached a “Tipping Point”—and How to Keep Our Courts Fair and Impartial. Washington, DC: Justice at Stake Campaign [Brennan Center for Justice at New York University School of Law].
Hirsch, Matthew (2006) “Swing Voter's Lament: At Least One Case Still Bugs O'Connor,” Law.com, 8 Nov., http://www.law.com/jsp/law/LawArticleFriendly.jsp?id=1162893919695 (accessed 16 July 2007).
Iyengar, Shanto (2002) “The Effects of Media-Based Campaigns on Candidate and Voter Behavior: Implications for Judicial Elections,” 35 Indiana Law Rev. 691–99.
Justice at Stake (2002) “State Judges Frequency Questionnaire,” http://www.justiceatstake.org/files/JASJudgesSurveyResults.pdf (accessed 19 Aug. 2008).
Kelleher, Christine A., & Wolak, Jennifer (2007) “Explaining Public Confidence in the Branches of State Government,” 60 Political Research Q. 707–21.
Lehne, Richard, & Reynolds, John (1978) “The Impact of Judicial Activism on Public Opinion,” 22 American J. of Political Science 896–904.
Olson, Susan, & Huth, David (1998) “Explaining Public Attitudes Toward Local Courts,” 20 Justice System J. 41–61.
Overby, L. Marvin, et al. (2004) “Justice in Black and White: Race, Perceptions of Fairness, and Diffuse Support for the Judicial System in a Southern State,” 25 Justice System J. 159–81.
Rizzo, Louis, et al. (2004) “A Minimally Intrusive Method for Sampling Persons in Random Digit Dial Surveys,” 68 Public Opinion Q. 267–74.
Sample, James, et al. (2007) The New Politics of Judicial Elections 2006: How 2006 Was the Most Threatening Year Yet to the Fairness and Impartiality of Our Courts—and How Americans are Fighting Back. Washington, DC: Justice at Stake Campaign.
Streb, Matthew J., ed. (2007) Running for Judge: The Rising Political, Financial, and Legal Stakes of Judicial Elections. New York: New York Univ. Press.
Walker, Darlene (1977) “Citizen Contact and Legal System Support,” 58 Social Science Q. 3–14.
Wenzel, James P., et al. (2003) “The Sources of Public Confidence in State Courts: Experience and Institutions,” 31 American Politics Research 191–211.

Cases Cited

Family Trust Foundation of Kentucky v. Wolnitzek, 345 F. Supp. 2d 672 (E.D. Ky. 2004).
Republican Party of Minnesota v. White, 416 F.3d 738 (8th Cir. 2005).
Figures and Tables

Figure 1. Confidence in State Courts, Justice at Stake Campaign Survey, 2001

Source: Justice at Stake (2002).

Table 1. Campaign Activity and Judicial Impartiality

Figure 2. Assessments of Three Attack Advertisements Broadcast by Kentucky Judges, 2006

Note: Total N = 1,032. Individual treatment condition Ns vary from 332 to 351. Cross-condition difference of means tests (on the uncollapsed response set): η = 0.38, p
The ads read:
A. [Announcer]: In 2003, Circuit Judge Bill Cunningham tried to make six rapists eligible for parole. One had been out on parole for only 12 hours when he raped a 14-year-old and made her mother watch. Bill Cunningham already had tried to reduce their sentences, but our Supreme Court said no. Bill Cunningham said it was folly and a blatant injustice to keep these rapists in prison. Judge Rick Johnson believes that a life sentence means a life sentence. Please, vote for Rick Johnson for Justice on the Supreme Court.
B. [Announcer]: John Roach says he's tough on crime, but Judge Mary Noble has put thousands of criminals behind bars. John Roach, none. Judge Mary Noble has helped dozens of lives through her Drug Court Program. John Roach, none. Judge Mary Noble has been elected by the people twice. John Roach, none. Elect a real judge to the Supreme Court. Vote for Judge Mary Noble.
C. [Announcer]: David Barber is confused. He's now airing an ad that says Janet Stumbo wrote the Supreme Court opinion in the Morse Fetal Homicide Case. Barber can't tell the boys from the girls. The Morse opinion was written by Justice Bill Cooper. More confusing is that Cooper's opinion upheld the decision with which Barber concurred. He's attacking the Supreme Court for agreeing with him. David Barber: confused about his own opinions. Is he a judge, or just another politician? On November 7th, elect a judge: Janet Stumbo.

Figure 3. The Inappropriateness of Campaign Promises, Across Institutions

Note: N = 1,028. Cross-condition difference of means tests (on the uncollapsed response set): η = 0.09, p = 0.005.

Table 2. The Campaign Speech Experiment, Within Institutions

Table A1. Attributes of the Sample