When the Supreme Court makes a decision contrary to public opinion, justices are likely to worry the Court will lose public support. So, what are justices to do? One option, of course, is to move the policy content of the opinion closer to public sentiment. Yet, we know that justices seek, among other things, ideological goals (Epstein and Knight 1998) and would prefer to effectuate them when feasible. Another option, then, is to pursue their policy goals while mitigating the possible loss of public support. It is on this second option that we focus. We argue that justices, when they rule contrary to public opinion, will vary the clarity of majority opinions in an effort to maintain public support as best they can. While the Court has a deep reservoir of diffuse support, frequent counter-majoritarian decisions could leave it at risk (Gibson et al. 2003: 365). By writing a clear opinion when ruling against public sentiment, justices can better inform the public about why they decided as they did, and thereby manage any immediate loss of support they might suffer—or think they might suffer (see, e.g., Nelson, N.d.).
We develop a measure of opinion clarity based on automated textual readability scores, which we validate using human raters. Our results show public opinion strongly influences the content of Court opinions. Importantly, we analyze both macro- and case-level public opinion, providing broad-based support for our findings. In one approach, we compile an aggregate data set of Court decisions from 1952 to 2011 and execute a time series analysis that scrutinizes opinion clarity as a function of yearly changes in public mood. In a second approach, we rely on issue-specific public opinion polls that directly relate to individual Supreme Court cases (Marshall 1989, 2008). Using these micro-level data, we analyze the content of specific majority opinions to determine how public opinion influences Supreme Court opinion clarity. Both empirical analyses offer considerable support for our argument that justices write clearer opinions when they deviate from public sentiment. What is more, our measure of opinion clarity is one that scholars who study other institutions could employ.
These findings are important for a number of reasons. First, it is the content of the Supreme Court's opinions that influences society's behavior. Actors within society look to those opinions to determine whether they can engage in particular behaviors (Spriggs and Hansford 2001). “[S]cholars, practitioners, lower court judges, bureaucrats, and the public closely analyze judicial opinions, dissecting their content in an endeavor to understand the doctrinal development of the law” (Corley et al. 2011: 31). People must understand the content of opinions and, as such, scholars should understand the factors that influence those opinions. Our results speak to how the Court crafts the content of those opinions.
Second, the results address the Court as one institution in a broader political system where justices know they do not necessarily have the last word. That is, our approach shows how the Court is tied into a larger network of actors and audiences in the American political and legal system (Baum 2006). Rather than focus on how justices influence others, we show how others (i.e., the public) can influence justices. At the same time, knowing that justices intentionally alter the language of their opinions to overcome audience-based obstacles speaks to broader normative debates about democratic control. Justices appear to do what they can to overcome obstacles from public opinion. So, while public opinion seems to influence their behavior, justices appear able to circumvent its constraints by tailoring their messages. For those interested in ensuring greater accountability of judges, these results suggest such control is perhaps more difficult than previously believed.
Third, understanding how the Court alters its opinions can inform us about how the Court acquires and maintains judicial legitimacy. To be sure, we do not directly address legitimacy in this paper, but our results generate potential research avenues by which to study it. Legitimacy allows justices to accomplish their broader goals and protect the Court's institutional authority (e.g., Casillas et al. 2011; Gibson and Caldeira 2011; Ura and Wohlfarth 2010). The Court lacks the capacity to execute its own opinions. Its reason and logic are the foundations of its support. Given the Court's power ultimately comes from its legitimacy—and that sustained negative attention and unpopular decisions can erode public support for the Court (Durr et al. 2000)—justices should avoid repeatedly calling that legitimacy into question. By writing different kinds of opinions, justices can avoid negative attention and may even be able to enhance the Court's legitimacy.
Finally, our results provide an answer to the question of whether public opinion influences justices. The strategic model, perhaps the most influential model of judicial decision making, suggests justices are likely to anticipate public reactions to their decisions (among other considerations) and moderate their behavior accordingly (Epstein and Knight 1998). Yet, empirical support for that theoretical claim has been mixed. Our findings suggest public opinion does in fact influence how justices behave.
A Theory of Strategic Opinion Clarity
The strategic model of judicial decision making suggests justices should be mindful of public opinion when making decisions (e.g., Bryan and Kromphardt, forthcoming; Casillas et al. 2011; Enns and Wohlfarth 2013; McGuire and Stimson 2004). This is the case because frequent rulings against the public could cause the Court to lose legitimacy. The Court's legitimacy is the foundation of its support. As Justice Frankfurter once claimed: “The Court's authority…rests on sustained public confidence in its moral sanction” (Caldeira 1986: 1209). A consistent pattern of flouting public opinion could damage the Court's legitimacy. Caldeira (1986) finds, in part, that the Court's legitimacy decreases as it strikes more federal laws and sides with criminal defendants. Related work shows courts that systematically ignore stare decisis can jeopardize their institutional legitimacy (see, e.g., Zink et al. 2009). Bartels and Johnston (2013) suggest ideologues who oppose specific Court decisions are more likely to challenge the Court's legitimacy than those who approve of its decisions (cf. Gibson and Nelson 2015). Collectively, these results suggest the public may respond negatively to Court decisions it dislikes.
What can the Court do to protect its (immediate or long-term) support when it rules against the public? We theorize that when ruling against public opinion, justices will enhance the clarity of majority opinions. By writing clearer opinions, justices can attempt to minimize the loss of support they might suffer—or think they might suffer—from jilting the public. And, while we believe justices know they have strong institutional support, they surely must be concerned about managing that goodwill and support. As Gibson et al. (2003) state, such goodwill is not limitless. Justices must be concerned about replenishing it after drawing it down. Opinion clarity can help the Court mitigate attacks on its legitimacy.Footnote 1
Scholars have argued that opinion clarity influences the public. As Vickrey et al. (2012) put it: “The challenge for the nation's judges…is to make sure that the public understands what is expressed in a supreme court opinion…[O]pinions serve as the court's voice because rulings communicate not only to lawyers, but also to the public…” (74). The role of opinion clarity here is critical. Clarity “is crucial in order to demonstrate fairness, ensure public and media understanding of the role of the court, and encourage acceptance of high court judgments. Effective communication starts with a well-reasoned and well-written opinion” (78). Similarly, Benson and Kessler (1987) find plain legal writing is more credible and persuasive than “legalese.” The authors conducted an experiment in which they showed respondents legal briefs and petitions for rehearing that employed common language and those that contained legalese. Respondents who read legalese were significantly more likely to think the brief was unpersuasive, the writer was unconvincing, and the writer was unbelievable. The authors further demonstrate that a brief written in legalese is roughly 20 percent less persuasive than one written in plain language, and a rehearing petition in legalese roughly 32 percent less persuasive than a plainly worded one. We believe justices have a sense of this dynamic. And surely, they must know that when ruling against public opinion, they have already given the public a target. Why enhance the risk by writing an unclear opinion the public will find less persuasive, less convincing, and less believable? We suspect they do not. We suspect they write clearer opinions in such instances.
Indeed, empirical evidence confirms our general belief that justices alter the content of their opinions in anticipation of negative reactions from various audiences. For example, Black et al. (2015) find the Court is more likely to cite foreign sources of law—to expand the debate and provide additional reasons for its decisions—when it renders controversial decisions. Corley et al. (2005) show the Court is more likely to cite the Federalist Papers in controversial opinions. Nelson (N.d.) shows that after the influx of television advertising in judicial campaigns, elected judges began to write opinions that were easier to read. The logic is simple. When judges had more to fear from the public, they performed “better.” This finding is consistent with our argument: when justices decide cases with outcomes against the public's broad policy preferences—and therefore have more to fear from public reaction—they write clearer opinions. Finally, in a recent book-length treatment, Black et al. (2016) find that justices alter the clarity of their opinions out of concern for how lower federal courts, the states, the public, and administrative agencies will respond.
In addition to scholarly support for our argument, recent comments by judges themselves corroborate our belief that judges use opinion language, in part, to manage public support. Justice Thomas once remarked: “We're there to write opinions that some busy person or somebody at their kitchen table can read and say, ‘I don't agree with a word he said, but I understand what he said’” (Friedersdorf 2013) (emphasis supplied). Similarly, Judge Steve Leben of the Kansas Court of Appeals recommends judges:
…explain things so that a layperson can understand them, whether it's an oral ruling or a written opinion. A person involved in a court proceeding is more likely to accept a court decision that he or she can understand, and the failure to explain legal concepts to the layperson leads to an unnecessary lack of understanding of what judges do (Leben 2011: 54) (emphasis supplied).
As the previous quote suggests, our argument about the use of opinion clarity is also related to the literature on legitimacy and procedural fairness. Scholarship shows procedural fairness can facilitate legitimacy. Even losers in proceedings believe institutions to be legitimate when they believe they received fair procedural treatment. For example, Casper et al. (1988) find procedural and distributive fairness influenced how defendants evaluated their treatment by the judicial system, independent of their sentences. Sunshine and Tyler (2003) find the fairness of police procedures has a strong influence on police legitimacy. Like the positive effect of procedural fairness, opinion clarity can stanch the Court's bleeding when it rules against public opinion. When the Court explains more clearly why it ruled the way it did, the public might feel treated more fairly than if justices wrote an obfuscated opinion. Clarity can help to communicate the basis for the decision, better explain why the Court ruled the way it did, and, as a result, minimize the loss of support for having ruled against the public. Indeed, Vickrey et al. (2012) make precisely this point, stating: “Litigants, especially losing litigants, care less about the length of opinions and more about clarity and the scope or soundness of the reasoning” (76) (emphasis supplied). Opinion clarity can help shore up support for the Court, even among those dissatisfied.
To be sure, a wealth of scholarship suggests the Court's legitimacy is unlikely to be seriously diminished by a single “bad” decision (e.g., Gibson et al. 2003). Yet, even such scholarship recognizes judicial carelessness with public opinion might diminish the Court's legitimacy. Indeed, Gibson et al. (2003) state: “A few rainless months do not seriously deplete a reservoir. A sustained drought, however, can exhaust the supply of water” (365). So, justices are likely to want to manage negative reactions. To prevent erosion of public confidence, they should want to take steps to justify decisions against public opinion and mitigate their effects.
Perhaps more importantly, even if a single decision does not actually reduce the Court's legitimacy, justices are likely to be concerned it might. Despite scholarship showing public support for the Court is resilient, justices still are likely to fear backlash. The mere threat of widespread negative scrutiny by the mass public regularly shapes policymakers' decisions (e.g., Arnold 1990). Just as members of Congress are often “running scared” (Jacobson 1987), justices might worry about possible negative consequences of their opinions and try to manage them with opinion clarity.
Of course, one might respond that few citizens actually read Supreme Court opinions, or that they have an outdated perception of the Court. To this response, we make the following arguments. First, our theory does not hinge on whether the public actually reads opinions; all that matters is that justices believe it might. A dormant public can be alerted by politicians and their actions, thereby inducing widespread public attention. Indeed, politicians regularly make decisions based on the threat that their actions could receive significant attention. As Key (1961: 266) explains of policymakers:
Even though few questions attract wide attention, those who decide may consciously adhere to the doctrine that they should proceed as if their every act were certain to be emblazoned on the front pages … and to command universal attention.
Arnold (1990: 68) makes a similar argument in his study of congressional policymaking:
Latent or unfocused opinions can quickly be transformed into intense and very real opinions with enormous political repercussions. Inattentiveness and lack of information today should not be confused with indifference tomorrow.
And existing literature shows that in nonsalient cases, justices are concerned their decisions could trigger rebuke from an otherwise dormant public (Casillas et al. 2011). In other words, even if the public does not read every decision, justices may well worry that it might.
Second, the media often lift passages directly from Court opinions, so it is likely many members of the public are in fact exposed to, and read, portions of Court opinions. An existing study shows the media borrow nontrivial amounts of the Supreme Court's opinions when reporting on them (Zilis N.d.). Specifically, the New York Times quoted from 69 percent of salient opinions between the 1980 and 2008 terms, thus suggesting the public is directly exposed to some opinion language.
Third, even if the public does not read the Court's opinions, legal and political elites do—and the logic of our argument remains the same in this context. After all, elite explanation to the public likely turns on the content of the Court's opinion. By writing a clearer opinion, justices make the “translation” from elite to public smoother. In fact, existing scholarship suggests elites must respond to the way the Court frames arguments in its opinions (Wedeking 2010). A clearer opinion might make it easier for the media to report on the Court's decision—and a clearer opinion might allow the media to portray the Court's decision closer to how the Court would like it portrayed.Footnote 2
We recognize members of the public are more concerned about a case's outcome than its clarity. Our argument is not that opinion clarity can overcome this. Rather, we believe justices should perceive—for all of the reasons described above—that enhanced opinion clarity is an especially important attribute of their decision given the potentially negative effects of ruling against the public.Footnote 3 Indeed, would anyone claim a poorly written counter-majoritarian opinion would trigger the same public response as a well-crafted counter-majoritarian opinion? We suspect not. In fact, the remarks from the judges quoted earlier suggest judges and justices care about clarity.Footnote 4 In short, opinion clarity is not a get-out-of-jail-free card for justices. It is, however, a tool they likely believe useful for mitigating the loss of public support in the face of an unpopular opinion.
Measuring Opinion Clarity
To determine whether justices craft clearer opinions when they rule against public opinion, we must construct a dependent variable that reflects the clarity of opinions. Legal clarity can, of course, take a number of different forms. Owens and Wedeking (2011) identify three types of opinion clarity: doctrinal, cognitive, and rhetorical. While all three no doubt share similarities, they are distinct constructs that represent different phenomena. Doctrinal clarity is perhaps the oldest and most well-known of the three, as it focuses on “how the Court's specific treatment of doctrine [in an issue area] has remained stable or inconsistent…over time” (Owens and Wedeking 2011: 1038). Cognitive clarity, on which Owens and Wedeking (2011) focus, emphasizes the clarity of the ideas that are expressed. Rhetorical clarity focuses on the clarity of the external communication as it is understood by others. Depending on the goals of the research, any one of them might be appropriate for measuring clarity. Our theory focuses on how the Court decides to communicate with external, nonjudicial audiences that include both elected officials and the public. This communicative element turns on whether external audiences can understand and comprehend the content of the Court's opinion. For our purposes, we believe rhetorical clarity is the most appropriate measurement approach.Footnote 5
We examine rhetorical clarity rather than cognitive clarity for a host of reasons. For starters, our theory does not argue the Court writes opinions with simpler ideas when it rules against public opinion; rather, we argue justices will simplify the presentation of their decisions when they vote against public opinion. Cognitive clarity represents the structure and clarity of the ideas in the mind of the justice who is expressing them. Rhetorical clarity, on the other hand, focuses on the clarity of the external communication as it is understood by others. Indeed, it is important to understand that a rhetorically clear opinion is not guaranteed to be cognitively clear (and vice versa). In fact, it can be the opposite. Rhetorical clarity draws from an ability to communicate facts to others, but it does so without necessarily having a direct correspondence to the complexity of the underlying ideas. For example, some people excel at explaining complex ideas in an easy-to-understand manner, while others can make the simplest idea unclear. Given that our theory focuses on the Court and how a general audience will understand opinions, we believe our choice of rhetorical clarity as the dependent variable is the theoretically correct one. Justice Thomas's remark, quoted above, supports this choice.Footnote 6
Creating the Opinion Clarity Measure
To examine opinion clarity, we exploit a range of computer-generated readability scores to analyze the text of Supreme Court majority opinions. Computer-generated scores are desirable for a number of reasons. They are easily replicated, they are objective, and they are efficient, allowing researchers to examine—and make comparisons among—a large number of long documents (e.g., court opinions). And, just as important, as we demonstrate below, they correlate strongly with how humans interact with court opinions. Scholars and policymakers use readability scores in various contexts to measure the degree of difficulty in reading a text (DuBay 2004). For example, the Flesch-Kincaid Grade Level examines a text's average sentence length and the average number of syllables per word. Other measures, of which there are dozens, look at the number of letters in a word, the number of words with only one syllable, or the number of words with at least three (or six) syllables in them.
Rather than rely upon a single indicator, we take an approach that captures key commonalities among existing measures while also avoiding sensitivity to a unique aspect of any single measure. We use the R package koRpus to calculate 19 separate readability measures for every orally argued Supreme Court majority opinion from 1946 to 2012.Footnote 7 These 19 distinct formulas yield a total of 28 measures (i.e., some formulas produce more than one readability score).Footnote 8 Figure 1 identifies the general types of inputs that go into the calculation of the scores.
The words, sentences, characters, and syllables columns indicate that a formula calculates the total number of these items (e.g., total number of characters). The final three columns are for indicator variables that count the frequency of, for example, words with at least three syllables. Taking these variables as input, the readability formulas perform a variety of arithmetic functions to produce a single score for a given text. As one example, the Flesch-Kincaid Grade Level is computed as follows: Grade Level = 0.39 × (total words/total sentences) + 11.8 × (total syllables/total words) − 15.59.
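The paper computes its scores with the R package koRpus; purely as an illustration of how such a formula works, the Flesch-Kincaid Grade Level can be sketched in a few lines of Python. The tokenizer and syllable heuristic below are crude stand-ins of our own, not the routines koRpus uses.

```python
import re

def count_syllables(word):
    # Crude heuristic: one syllable per run of consecutive vowels,
    # with a minimum of one syllable per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    # FKGL = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) \
        + 11.8 * (syllables / len(words)) - 15.59
```

Longer sentences and longer words both push the grade level up, which is why the score tracks the intuition that dense legal prose is harder to read.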
We then subjected these distinct measures to a Principal Component Analysis, which returned a single principal component that explained 77 percent of the variance in the data. This measure—Opinion Clarity—is our dependent variable. We code it such that texts with low readability (i.e., are harder to read) receive smaller scores while texts that are easier to read receive larger scores. In other words, the larger the value, the clearer the text. The measure has a mean of 0 and a standard deviation of 4.6. With a range that stretches between −44.0 (very difficult to read) and +24.5 (very easy to read), it has considerable variation. In terms of its distribution, our measure takes on the general shape of a normal distribution.
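As a sketch of this dimension-reduction step, assuming a matrix of readability scores whose rows are opinions and whose columns are the individual measures (the function and variable names here are ours, for illustration), the first principal component can be extracted with numpy alone:

```python
import numpy as np

def first_principal_component(scores):
    # scores: (n_opinions, n_measures) matrix of readability scores.
    # Standardize each measure, then project onto the leading eigenvector
    # of the correlation matrix (the first principal component).
    z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.corrcoef(z, rowvar=False))
    pc1 = eigvecs[:, -1]                  # eigh sorts eigenvalues ascending
    component = z @ pc1                   # one composite score per opinion
    explained = eigvals[-1] / eigvals.sum()
    return component, explained
```

When the input measures are highly correlated, as overlapping readability formulas are, a single component captures most of the variance, which is the pattern reported above.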
Validating the Opinion Clarity Measure
Because our dependent variable is unique, we sought to verify that it validly measures the readability of legal texts. So, we had 72 undergraduate students rate eight excerpts from the legal reasoning portion of Supreme Court majority opinions. The excerpts varied between 170 and 300 words in length, with an average length of 222 words. Four of these excerpts were of low readability and the other four were of high readability.Footnote 9 To ensure the raters had enough background and context when reading them, each excerpt was preceded by a short paragraph offering information about the facts and dispute in the case.
After reading each text, the raters answered objective multiple choice comprehension questions and subjective rating questions about the text. The objective questions involved factual queries. For example, in Santa Fe Independent School District v. Doe (2000), a case that examined school prayer, respondents read a segment from the opinion. We then asked them why the Court said the student prayer was not “private speech.” They had four options from which to choose (this full example is in the online appendix). For the subjective rating questions, we asked raters to evaluate how clear they believed the text was, how well written the excerpt was, the ease or difficulty of understanding the excerpt, and whether they knew all of the words in the excerpt. Thus, we had multiple indicators for each rater of both an objective (e.g., whether they correctly answered content questions) and subjective (e.g., how clear they believed the text was) nature.
We then combined these objective and subjective ratings to create a Rater Readability factor score. We followed Jones et al. (2005), who argue that readability comprises: (a) comprehension; (b) time to complete and answer questions about the reading; and (c) how an individual subjectively perceives the text. We estimated an exploratory factor analysis model with six variables. These included the four subjective ratings (i.e., clarity, quality of the writing, ease of understanding, and difficulty with words), an objective measure of the number of correct responses to our comprehension questions, and the number of minutes it took a rater to read the text and answer the questions. The Cronbach's alpha for these six items is 0.77. The factor analysis model returned a single factor with an eigenvalue greater than one.
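One concrete piece of this step, Cronbach's alpha, follows directly from its standard formula. The sketch below assumes the indicators are arranged as columns of a raters-by-items matrix; the arrangement and names are ours, for illustration.

```python
import numpy as np

def cronbach_alpha(items):
    # items: (n_raters, n_items) array.
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
```

Values near 1 indicate the items move together; the 0.77 reported above is conventionally read as acceptable internal consistency.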
We next estimated a linear regression model with Rater Readability as our dependent variable. Our main independent variable was Opinion Difficulty, which was the automated readability score for each of the opinion excerpts.Footnote 10 As noted above, small values indicate low readability and large values indicate high readability. If our approach provides a valid indicator of readability for humans, we should recover a positive relationship between Opinion Difficulty and Rater Readability. We do.
The results suggest our raters did, in fact, perceive differences among our excerpts. We find a positive and statistically significant relationship between Opinion Difficulty and raters' readability scores for the excerpts (p < 0.01). In other words, excerpts identified as more challenging and less readable by our computer-generated measure yielded systematically lower comprehension levels among our human raters than clearer excerpts did. The substantive magnitude of the relationship is reasonably strong, too. When comparing a highly readable excerpt with one that is highly unreadable, we estimate a change in comprehension equivalent to about 1.25 standard deviations in our rater readability measure. This is substantively equivalent to jumping from the 30th to the 75th percentile in Rater Readability.
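The validation regression is a standard bivariate fit; a minimal least-squares sketch (with generic arrays standing in for the rater data, which we do not reproduce here) looks like this:

```python
import numpy as np

def ols_slope(x, y):
    # Regress y on x with an intercept; return the slope coefficient.
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]
```

A positive slope from regressing rater scores on the automated scores corresponds to the positive, significant relationship reported above.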
Having operationalized and validated our key theoretical concept of opinion clarity, we turn our attention to analyzing how justices alter opinion clarity when they expect greater opposition to their decisions. We take a two-pronged empirical approach. First, we employ an aggregate time series analysis to examine how general changes in popular sentiment lead to changes in clarity. While we recognize public mood is broad, the approach we take is the best possible given existing data—and it is consistent with existing literature (e.g., Casillas et al. 2011; Flemming and Wood 1997; Giles et al. 2008; McGuire and Stimson 2004; Mishler and Sheehan 1993). Second, we then conduct an individual case-level analysis that uses issue-specific public opinion polls taken before corresponding Supreme Court decisions (Marshall 1989, 2008) to demonstrate how justices write clearer opinions when they rule against public opinion in specific cases.
An Aggregate Analysis of Public Opinion and Clarity
We first focus on a macroanalysis of how general public mood influences Court opinions. An aggregate focus offers several benefits. It enables us to connect our analysis to the predominant analytical strategy (and measures) used in prior research. That is, most literature on the Supreme Court–public opinion relationship utilizes an aggregate indicator of the public's policy mood (Stimson 1991). This measure of public opinion (described below) varies only with respect to time, and thus is ideally suited for macroanalyses predicting the term-level, net content of Court decision making.Footnote 11 What is more, a macroanalysis offers the best means to model the autocorrelation inherent in aggregate policy mood's variance and the potential for a dynamic effect of public opinion on the Court.
We test the argument that justices write clearer majority opinions when they anticipate public opposition to their decisions, using data from the 1952–2011 Court terms.Footnote 12 By analyzing a time series, we examine majority opinions across the range of issues on the docket. We expect that as public opinion becomes more liberal, justices write clearer opinions among their conservative decisions. Similarly, as public opinion becomes more conservative, justices write clearer opinions among their liberal decisions. We construct two aggregate time series of the average clarity of the Court's majority opinions each term: one series examines the Court's conservative decisions over time; the other examines its liberal decisions.Footnote 13 We separate the Court's decisions into two time series models because that approach offers the most effective modeling strategy at the aggregate level to estimate how shifts in public opinion over time predict changes in the average level of opinion clarity (a variable without an inherent ideological dimension).Footnote 14
Opinion Clarity
Our dependent variable is the mean readability score of the Court's majority opinions decided each term, using the composite index we described above.
Public Mood
Our primary covariate is yearly public mood, as measured (and updated) by Stimson (1991, 1999).Footnote 15 Public mood is a longitudinal indicator of how the public's preference for more or less government shifts over time. It is an aggregate reflection of the general tenor of public opinion (and preferences over desired public policy) on the standard liberal-conservative dimension. Public Mood is the predominant indicator of public opinion in the literature that examines public opinion and the Supreme Court (e.g., Casillas et al. 2011; Enns and Wohlfarth 2013; Epstein and Martin 2011; Giles et al. 2008; McGuire and Stimson 2004; Mishler and Sheehan 1993), and is currently the most reliable aggregate measure of the public's general political orientation. Larger values of Public Mood reflect a more liberal public while smaller values reflect a more conservative public. We expect justices to anticipate a greater prospect of public opposition to their conservative (liberal) decisions as public opinion becomes more liberal (conservative). Thus, among their conservative decisions, justices will write clearer opinions as public opinion becomes more liberal. Conversely, among their liberal decisions, justices will write clearer opinions as public opinion becomes more conservative. That is, we expect a positive relationship between Public Mood and our dependent variable when analyzing the conservative decision time series, and a negative relationship when analyzing the liberal decision time series.
Average Case Complexity
We include a control variable to account for the possibility that as cases become more (less) complex over time, opinions may have become less (more) clear. Average Case Complexity reflects, for each Supreme Court term, the average number of legal issues per case, as identified by the Supreme Court Database. That is, we first identify the number of legal issues addressed by the Court in each case,Footnote 16 compute the sum of those legal issues for the duration of each term, and then divide that sum by the total number of decisions issued by the justices during that term. For example, in the 2000 term, the Court issued 23 conservative decisions (involving constitutional or statutory provisions) and addressed a total of 30 issues among those cases. Thus, Average Case Complexity in the 2000 term would equal 1.304 for the conservative decision time series.Footnote 17
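The 2000-term example reduces to a simple ratio, sketched below; only the totals (30 issues across 23 conservative decisions) come from the text.

```python
def average_case_complexity(total_issues, total_decisions):
    """Average number of legal issues per case in a term's series."""
    return total_issues / total_decisions

# 2000 term, conservative decision series: 30 issues across 23 cases.
print(round(average_case_complexity(30, 23), 3))  # 1.304
```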
Civil Liberties Docket
We account for the potential that shifts in the issue composition of the Court's docket over time affect the average degree of opinion clarity. In particular, we expect that a greater proportion of (noncriminal procedure) civil liberties and rights cases on the docket will produce an average opinion clarity score that is less clear. We measure Civil Liberties Docket as the percentage of cases decided each term that primarily involve a civil liberties issue, excluding criminal procedure cases.Footnote 18 Consistent with the unit of analysis described above, we compute separate civil liberties time series among conservative and liberal decisions.
Separation of Powers Constraint
We also account for the potential that greater ideological divergence between the Court and Congress might lead justices to obfuscate opinions (Owens et al. 2013). Using the Judicial Common Space (Epstein et al. 2007), we include a predictor that accounts for the ideological divergence between the Court and Congress. More specifically, when the median justice on the Court is either more liberal or more conservative than both chamber medians in Congress, we measure SOP Constraint as the absolute value of the ideological distance between the Court and the closest of the two chamber medians. If the median justice falls ideologically between the House and Senate chamber medians, SOP Constraint equals 0.Footnote 19
Methods and Results
Prior to estimating our models, we "prewhitened" our time series predictors by filtering them with ARIMA(p,d,q) noise models so all series (seemingly) reflect white noise (Box and Jenkins 1976). This step filters out the error aggregation process within each time series to ensure our inferences are not affected by serial correlation and each series' dependence on its own past values. That is, for each predictor, we first modeled the serial correlation inherent in the time series, extracted the residuals from that model, and then used those white-noise residuals as our (filtered) time series predictor in a standard regression model. Employing prewhitened time series predictors ensures the model is balanced and that the data are i.i.d. What is more, prewhitened filtering represents a conservative analytical approach in the time series literature (e.g., Clarke and Stewart 1994; Granger and Newbold 1974), and offers the most stringent statistical test in the present analysis. Indeed, as Box-Steffensmeier et al. (2004) state, modeling prewhitened series will actually "err on the side of null findings" (525). From a substantive perspective, our statistical models will enable us to examine specifically whether "innovations" in public opinion (that are not driven by its own prior values) have an impact on Supreme Court opinion clarity (see, e.g., MacKuen et al. 1989).
Specifically, we filtered the Opinion Clarity time series using an ARIMA(0,1,1) filter for the liberal series and an ARIMA(0,1,2) filter for the conservative series, as their error aggregation exhibits long-term temporal dependence best represented by an integrated process that requires first-differencing along with a moving average error component.Footnote 20 Next, the error aggregation process of the Public Mood time series exhibits short-term temporal dependence that is best represented by a first-order autoregressive noise model to yield a white noise series (i.e., an AR(1) filter). The Average Case Complexity time series requires an ARIMA(1,0,1) filter to generate white noise series, among both liberal and conservative decisions. The error aggregation process in the Civil Liberties Docket predictor is best filtered using an ARIMA(1,0,1) model, among both the liberal and conservative time series. Lastly, we filtered the SOP Constraint time series using an ARIMA(1,0,0) noise model.Footnote 21
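To fix ideas, the sketch below illustrates prewhitening for the simplest case, an AR(1) filter like the one applied to Public Mood. It fits x_t = phi * x_{t-1} + e_t by least squares and returns the residuals that would serve as the filtered predictor. This stripped-down estimator stands in for the full ARIMA machinery a statistical package would provide.

```python
import random

def prewhiten_ar1(series):
    """Fit x_t = phi * x_{t-1} + e_t by least squares and return
    (phi_hat, residuals); the residuals approximate the white-noise
    'innovations' used as the filtered predictor."""
    num = sum(x * lag for x, lag in zip(series[1:], series[:-1]))
    den = sum(lag ** 2 for lag in series[:-1])
    phi = num / den
    resid = [x - phi * lag for x, lag in zip(series[1:], series[:-1])]
    return phi, resid

# Simulated AR(1) series with phi = 0.7; the least-squares estimate
# recovers it, and the residuals are (approximately) white noise.
random.seed(42)
x = [0.0]
for _ in range(499):
    x.append(0.7 * x[-1] + random.gauss(0, 1))
phi_hat, resid = prewhiten_ar1(x)
```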
With prewhitened time series in hand, we turn to our statistical models. We employ OLS and estimate three time series regression models. To examine the relationship between Opinion Clarity and Public Mood over time, we first present a baseline model that estimates the simple bivariate relationship. Next, we consider a second model specification that accounts for changes in the average case context by including the Average Case Complexity and Civil Liberties Docket control predictors. Last, the third model specification includes all control predictors by adding the SOP Constraint indicator.Footnote 22
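The baseline specification then amounts to OLS on the filtered series. A minimal sketch of the bivariate estimator follows; it illustrates the model's form, not the actual estimation code.

```python
def ols_bivariate(x, y):
    """OLS estimates (intercept, slope) for y = a + b*x, the form of
    the baseline model regressing filtered Opinion Clarity on
    filtered Public Mood."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b
```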
Table 1 presents our results. Across every model specification, Public Mood exhibits the expected impact on Opinion Clarity. In the conservative decision time series, the statistically significant, positive coefficients indicate that the Court's conservative opinions become clearer, on average, as public opinion becomes more liberal. This result is consistent across multiple specifications, including a simple baseline model and models that control for case complexity, docket composition, and SOP constraints over time. Turning to the liberal decision time series, Public Mood again displays the expected coefficients across all model specifications. As public opinion shifts in a conservative direction, the average liberal opinion becomes increasingly clear.Footnote 23
Notes: Table entries are OLS coefficients with standard errors in parentheses. **p < 0.05; *p < 0.10 (one-tailed). The dependent variable represents the average Supreme Court majority opinion readability score each term (among decisions involving a constitutional provision or federal statute), 1952–2011, with larger values reflecting more clarity. All variables have been "prewhitened" with ARIMA(p,d,q) filters to yield white noise time series.
What is more, the magnitude of Public Mood's effect on Opinion Clarity suggests it is a substantively meaningful predictor of clarity. As Figure 2 shows, when viewing the conservative decision time series and statistical results from model 1(c), a shift from the minimum to maximum level of liberalism in Public Mood exhibits an expected change of nearly 2.00 units on the prewhitened opinion clarity scale.Footnote 24 That is, a shift in Public Mood can generate a change in opinion clarity that exceeds 1.50 standard deviations. Likewise, when viewing the liberal decision time series (in model 2(c)), a shift from the minimum to maximum level of conservatism in Public Mood also yields a similar expected change of approximately 1.70 units on the clarity scale. When viewing the control predictors, the results suggest that, among the Court's liberal decisions, greater (average) issue complexity and a greater proportion of (noncriminal procedure) civil liberties and rights cases both lead to an average opinion clarity score that is less clear.
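The substantive-magnitude calculation is straightforward arithmetic: divide the min-to-max expected change by the outcome's standard deviation. The sketch uses the reported ~2.00-unit effect; the standard deviation of roughly 1.3 is a back-of-the-envelope assumption chosen only so the figures match the 1.50-standard-deviation claim above.

```python
def effect_in_sd_units(effect_units, outcome_sd):
    """Express a min-to-max expected change in standard deviations
    of the (prewhitened) opinion clarity scale."""
    return effect_units / outcome_sd

# Conservative series: ~2.00-unit shift; with an assumed SD of ~1.3,
# the change exceeds 1.5 standard deviations, as reported.
print(effect_in_sd_units(2.00, 1.3) > 1.5)  # True
```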
An Individual-Level Analysis of Public Opinion and Clarity
The strength of the last section's aggregate analysis is that it utilizes a general indicator of public opinion that predicts aggregate opinion language across the range of issues on the Court's docket. Yet, the general public mood measure is precisely that—a general indicator. As such, it cannot fully capture differences in public opinion across the specific issues the Court faces. So, this section utilizes issue-specific polling data to examine the clarity of majority opinions in individual cases. By using such targeted data, we offer a more precise match between Court behavior and public opinion, and can offer further support for our argument.
Of course, as scholars who study public opinion and the Supreme Court know, issue-specific (and temporally appropriate) public opinion data are scarce. Marshall (2008) put it best: "Unfortunately, no published index of scientific, nationwide polls that match Supreme Court decisions exists" (29). Fortunately for us, Marshall performed an exhaustive search for polls that match public opinion with issues in Supreme Court cases (Marshall 1989, 2008). Marshall identified polls by searching sources for key words such as "Supreme Court" or key words from the issues discussed in particular Court opinions. He scoured many sources to find these matches, including the Roper Archive of polls, published Gallup polls, "The Polls" section in Public Opinion Quarterly, and various other newspaper and magazine polls. If a case had multiple polls, Marshall selected the poll closest in time to the Court's decision. All polls are national samples, and each has at least 600 respondents, though many have far larger sample sizes. For a complete discussion of his criteria, see Marshall (2008: 29–33) and Marshall (1989: 75–77). For our individual case-level analysis, we use Marshall's poll question-case matches among polls that preceded the relevant Court decisions.
We have 106 poll questions matched to specific issues decided in Supreme Court cases spanning the 1946–2004 terms.Footnote 25 Importantly, these 106 observations cover a wide range of legal issue areas: 26 observations involve criminal procedure, 23 civil rights, 24 the First Amendment, and 12 privacy, with the remaining observations spread across issues such as due process, unions, economic activity, judicial power, and federalism. While the bulk of our observations come from cases that primarily involve issues of civil rights and liberties, we note the modern Court's docket has also focused largely on such cases.
Opinion Clarity
Our dependent variable, Opinion Clarity, represents the composite readability score of each majority opinion in our sample.
Inconsistent With Public Opinion
Our main covariate of interest in this analysis measures whether the Court rules contrary to prevailing public opinion, as determined by Marshall's polls. We employ Marshall's measure of an "inconsistent decision," reflecting when a Court decision "disagreed in substance with a poll majority (or plurality)" (Marshall 2008: 31). Marshall's measure is appropriate because it captures the essence of our theoretical argument—the Court is concerned about the clarity of its opinions when it decides against an oppositional body larger than the supporting body. Therefore, we operationalize Inconsistent With Public Opinion as a dichotomous measure, with observations coded as 1 if there was more opposition than support for the Court's position; 0 otherwise (i.e., it is 0 when the Court rules consistent with public opinion or the poll margin was within the margin of error). We have no theoretical reason to expect the Court to consider the precise size of the opposition once it exceeds the majority of the public. That is, we have no reason to expect that justices will write a clearer opinion with 70 percent opposition compared to, say, 60 percent opposition.Footnote 26
For an example coding of a case, consider Clinton v. City of New York (1998), which struck down the line-item veto. As Marshall (2008) reports: "Gallup Poll asked respondents: 'As you may know, Congress recently approved legislation called the line item veto, which for the first time allows the President to veto some items in a spending bill without vetoing the entire bill. Do you generally favor or oppose the line item veto?' A 65-to-24 percent majority favored the line item veto, [hence] Clinton v. City of New York was coded as 'inconsistent'" (Marshall 2008: 31).Footnote 27
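The coding rule can be expressed compactly. In the sketch below, the explicit margin-of-error threshold is a simplified rendering of Marshall's criteria, and its default of three points is an assumption for illustration.

```python
def inconsistent_with_public_opinion(pct_support, pct_oppose,
                                     margin_of_error=3.0):
    """Return 1 if poll opposition to the Court's position exceeds
    support by more than the margin of error; 0 otherwise."""
    return 1 if (pct_oppose - pct_support) > margin_of_error else 0

# Clinton v. City of New York: the public favored the line-item veto
# 65-to-24, so support for the Court's position (striking it down)
# is 24 percent and opposition is 65 percent.
print(inconsistent_with_public_opinion(24, 65))  # 1
```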
Controls
To ensure the robustness of our empirical tests, we also include a number of control variables likely to influence the clarity of Court opinions. As described above, we utilize the total number of legal issues addressed in each case (according to the Supreme Court Database) to measure Case Complexity, though the results are substantively consistent when substituting the number of amicus briefs filed in each case as the indicator of case complexity. We also examine whether the decision was supported by a minimum winning coalition or the full complement of justices, as the degree of consensus and the need to compromise with majority members might lead opinions to become more or less clear. We code Minimum Winning Coalition as 1 if the majority coalition was minimum winning; 0 otherwise. We code Unanimous Decision as 1 if no justice in the case registered a dissent; 0 otherwise. Next, we control for Judicial Review—cases where the Court's opinion struck down a federal statute, state statute, or local ordinance as unconstitutional. We code this variable as 1 if, according to the Supreme Court Database, the Court struck down a law as unconstitutional; 0 otherwise. We also control for another change in the legal status quo by accounting for when the Court Alters Precedent, coded as 1 if the Supreme Court Database so declares; 0 otherwise.
Next, we account for the separation of powers dynamic. We measure SOP Constraint as the absolute value of the distance between the median justice on the Court and the closest chamber median. When the median justice falls between the House and Senate chamber medians, SOP Constraint equals 0. When the median is more liberal or conservative than the House and Senate medians, SOP Constraint equals the absolute value of the distance between that justice and the closest pivot.Footnote 28 Next, following Owens and Wedeking (2011), we account for variance in opinion clarity across different legal issue areas on the Court's docket. Thus, we include fixed effects for the primary issue area, specifying the criminal procedure category as the baseline.Footnote 29 Last, we include fixed effects for the majority opinion author to account for differences in the writing styles of individual justices.
Methods and Results
We fit OLS models with robust standard errors (the significant impact of public opinion on opinion clarity does not change if we instead use classical standard errors). Our results appear in Table 2, and they support our hypothesis. Model 1 shows the bivariate relationship between ruling against public sentiment and opinion clarity. The coefficient is statistically significant and positively signed, indicating that when the Court issues a ruling inconsistent with public opinion, it delivers a significantly clearer opinion.
Notes: Table entries are OLS regression estimates with robust standard errors in parentheses. **p < 0.05; *p < 0.10 (one-tailed). The dependent variable represents the Supreme Court majority opinion readability score for each case, with larger values reflecting more clarity. The sample of Court cases and public opinion polls comes from Marshall (1989, 2008), among those where polls temporally precede the Court's decision. Model 3 includes, but does not display, fixed effects for the majority opinion author (among those justices who wrote at least two opinions in the sample) and primary issue area of each case.
Models 2 and 3 check the robustness of this result by first including the control predictors (but without fixed effects controls), and then adding the issue area and majority opinion writer fixed effects, respectively. This last model allows us to control for the possibility that opinion clarity is driven by idiosyncratic factors—related to different legal issue contexts and justices’ writing styles—that are unrelated to our theoretical argument.Footnote 30 As the table reveals, the public opinion measure continues to be statistically significant across all models and has a magnitude that is statistically indistinguishable—via a Wald test—from the simple bivariate model. These results are robust to a host of alternative model specifications that include a litany of other potential controls not shown here. Controlling for case salience, ideological direction of decision, case disposition (i.e., reverse/affirm), or court term—to name only a few—does not change our results (additional details are available in the online appendix).
To help convey the magnitude of the estimated relationship between Inconsistent With Public Opinion and the clarity of each majority opinion, we estimated predicted values using the empirical results from Model 2. Specifically, when the Court decides a case consistent with public opinion (holding all other predictors at their median values), its predicted opinion readability is approximately −0.32. When the Court makes a decision that is inconsistent with public sentiment, however, the predicted readability of the opinion is approximately +1.31, which is above the mean and indicates a substantially clearer opinion—an increase of more than three-eighths of a standard deviation in opinion readability across the sample. Thus, a decision inconsistent with prevailing public opinion yields an expected level of clarity approximately 1.63 units clearer than a decision that conforms to public sentiment.
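The comparison reduces to simple arithmetic over the two predicted values reported above:

```python
# Predicted readability from Model 2, other predictors at medians
# (both values come from the text).
consistent = -0.32
inconsistent = 1.31
difference = inconsistent - consistent
print(round(difference, 2))  # 1.63
```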
Among the control predictors, the results in models 2 and 3 both suggest opinions in more complex cases—those addressing a greater number of legal issues—are significantly less clear. When the Court addresses a single legal issue in a case (while holding all other predictors at their median values), the predicted opinion clarity score is −0.32. Yet, the predicted level of opinion clarity decreases to −2.10 when increasing Case Complexity to five legal issues (i.e., one standard deviation above the sample mean).
One further point bears emphasis. The reader might be concerned about the role of political salience. The cases we examined in the individual-level analysis are among the most salient on the Court's docket. For instance, 89 of the 106 cases appeared on the front page of the New York Times (Epstein and Segal 2000); 76 of 106 appear on Congressional Quarterly's list of landmark cases. Because pollsters typically write questions only on the most politically important issues of the day, we were by necessity limited to predominantly salient cases in the individual-level analysis. A potential consequence of this case selection is that the limited sample may reflect the upper bound of public opinion's impact on opinion clarity, at least to the extent that one should expect justices to have a greater incentive to be strategic in these cases. Nevertheless, we split our aggregate time-series data into salient versus nonsalient cases. Though we do not have sufficient data to make inferences about salient cases—there are not enough liberally and conservatively decided salient cases to compute an aggregate each term—we do have enough data on nonsalient cases. When we examine only nonsalient cases, the results are substantively the same as what we present above, suggesting our results are not solely confined to salient cases. (See the online appendix for these aggregate-level results of the nonsalient cases.)
In short, whether examining the impact of public opinion on Supreme Court opinion clarity at either the aggregate or case level, the empirical results suggest justices write opinions with an eye toward anticipated public opinion.
Conclusion
Scholars have paid considerable attention to the relationship between the Court and public opinion, but the results have been mixed. Surprisingly, little attention has been devoted to how public opinion influences the content of the Court's opinions. We test a novel theory of how public opinion should affect opinion content. Our findings offer something new. They show public opinion does in fact influence the Supreme Court in systematic ways. In this capacity, the results have the potential to reframe a recurring debate. Indeed, the strategic model of judicial decision making argues justices are likely to respond to public opinion. Our results support that theoretical claim—in part. While scholars have long examined various Court behaviors (e.g., voting) for evidence of the public's influence, perhaps we need to pay more attention to the content of the majority coalition's opinion language (see, e.g., Black et al. 2016).
The consequences of these findings are important. They suggest justices are aware of their interdependence and employ strategies to evade obstruction. By writing clearer opinions in the face of public opposition, justices aggressively seek out their goals. Writing clearer opinions becomes all the more important if the public perceives the Court in political terms (Bartels and Johnston 2012). Thus, while public opinion influences judicial behavior, justices appear to respond in an effort to accomplish their broader goals. And while opinion clarity will not give the justices freedom to do whatever they wish, it is something they seem to use to mitigate possible negative responses to their counter-majoritarian opinions.
For those who support enhanced judicial accountability (and, we suppose, for those who oppose it), these findings are bittersweet. Yes, public opinion can influence the Court's behavior. Our results suggest justices do indeed alter how they write opinions as a consequence of changing public mood. But therein lies the rub: public mood seems to have an effect on justices’ opinion content, but scholars disagree whether it has an effect on their votes. To be sure, the jury is still out on whether and to what extent public opinion influences justices’ votes, but the influence of public opinion might just be an example where the packaging seems to change, but the product does not.
While these results do not speak directly to judicial legitimacy, we suspect they might indirectly relate to it. If justices can alter the content of their opinions to avoid or mitigate public rebuke, it stands to reason they could alter it so as to enhance the Court's reputation. Do justices, for example, garner more support for the Court when they speak in positive tones? When they write more legalistically? When they are collegial to one another in separate opinions? These factors might lead to enhanced legitimacy. So too could negative language harm the Court's reputation. So, though we do not examine legitimacy here, we hope future scholars analyze the link between opinion content and legitimacy.
Finally, we believe the approach we used may extend beyond the Court. Public opinion may similarly influence the clarity of bureaucratic outputs, such as agency regulations, with consequences for their interpretation and compliance. Even though bureaucrats (like Supreme Court justices) are not elected, those who oversee and fund their decisions are directly subject to popular will, so the indirect electoral connection exists there as well. Whether bureaucrats adjust by altering the clarity of their policies is an empirical question to be tested. It is also worth emphasizing, in this vein, that our approach to measuring clarity could be adopted elsewhere. We uncovered strong evidence that our automated readability scores tap into what people actually perceive as the clarity and readability of sophisticated texts such as legal opinions. We have every reason to believe other scholars who analyze policymakers and complex texts could adopt our strategy as well.
Cases Cited
Santa Fe Independent School District v. Doe, 530 U.S. 290 (2000).
Planned Parenthood of Southeastern Pennsylvania v. Casey, 505 U.S. 833 (1992).
Clinton v. City of New York, 524 U.S. 417 (1998).