
280 Characters of Contention: Analyzing Partisan Behavior on Twitter During Supreme Court Confirmation Processes

Published online by Cambridge University Press: 11 November 2024

Maron W. Sorenson
Affiliation:
Bowdoin College, Department of Government and Legal Studies, Brunswick, ME, USA
Rachael Houston*
Affiliation:
Texas Christian University, Department of Political Science, Fort Worth, TX, USA
Amanda Savage
Affiliation:
Loyola University Chicago, Department of Political Science, Chicago, IL, USA
Corresponding author: Rachael Houston; Email: [email protected]

Abstract

We analyze a cache of tweets from partisan users concerning the confirmation hearings of Justices Brett Kavanaugh, Amy Coney Barrett, and Ketanji Brown Jackson. Using these original data, we investigate how Twitter users with partisan leanings interact with judicial nominations and confirmations. We find that these users tend to exhibit behavior consistent with offline partisan dynamics. Our analysis reveals that Democrats and Republicans express distinct emotional responses based on the alignment of nominees with their respective parties. Additionally, our study highlights the active participation of partisans in promoting politically charged topics throughout the confirmation process, starting from the vacancy stage.

Research Article

© The Author(s), 2024. Published by Cambridge University Press on behalf of the Law and Courts Organized Section of the American Political Science Association

The fervor surrounding the nominations of Brett Kavanaugh, Amy Coney Barrett, and Ketanji Brown Jackson to the Supreme Court resonated with the impassioned language of previous confirmation battles. Amid a deluge of traditional media advertisements, op-eds, and news stories expressing both opposition and support, these nomination and confirmation processes also featured real-time, individual contributions to the intense discourse – all facilitated in only 280 characters.[1] For instance, intense discourse was evident immediately after Kavanaugh’s confirmation vote, with tweets oscillating from accusations of a “rapist on the #SCOTUS” to enthusiastic support proclaiming, “Brett Kavanaugh is not backing down… Get ‘em, Brett!”[2] This trend continued through Barrett’s and Jackson’s confirmation processes, with microbloggers, and specifically partisan microbloggers, sharing their opinions at every stage. One Democrat expressed frustration during Barrett’s hearings: “No stimulus, no help for the American people, but you can have an all-night senate session to fix a judge 8 days before the election. Disgusting.” A Republican, on the other hand, voiced concerns about Jackson’s confirmation: “We just had a Supreme Court justice sworn in that protects pedophiles and cannot define what a woman is. That’s right. One liberal rat leaves, another liberal rat comes in. Protect your kids!! #kentajibrownjackson #pedophile.”

These tweets, reflecting a spectrum of perspectives from vehement disagreement to fervent endorsement, underscore the intricate tapestry of user reactions in the face of significant judicial appointments. However, to date, we do not know how partisan social media users talk about the confirmation process, or how those comments change (or not) across multiple processes. Our aim is to fill this gap by examining the sentiment and topics of discussions among partisan Twitter users, spanning various stages of the confirmation process and different nominees. Studying these partisan voices on Twitter in the context of Supreme Court confirmations is important for several reasons. First and foremost, the immediacy and accessibility of Twitter provide a unique lens into real-time reactions, allowing researchers to capture the dynamic interplay of conversations during critical moments in political events such as Supreme Court confirmations. The platform’s brevity, encapsulated in 280 characters, compels users to distill their viewpoints concisely, offering a snapshot of raw and unfiltered dialogue. Moreover, by focusing on partisan voices, we gain valuable insights into the polarization that characterizes contemporary political discourse, unraveling the complexities of how political affiliations influence perceptions of judicial nominees and the broader confirmation process.

We begin by reviewing the existing scholarly literature on the intersection of partisanship and attitudes toward the High Court. Subsequently, we explore the value of approaching these topics through social media data. Building on insights from these literatures, in the following sections we articulate our hypotheses, elucidate our data collection process, describe the methodology employed for the comprehensive analysis of our partisan-identified tweets, and present our findings. Generally, we find that Democrats and Republicans exhibit contrasting sentiments depending on the alignment of nominees with their respective parties. Furthermore, our analysis underscores the active involvement of partisans in perpetuating partisan topics at every stage of the confirmation process, extending even to the vacancy stage, i.e., before there’s a nominee to praise or criticize. In our conclusion, we synthesize the findings, discuss their implications for public discourse and the Court, and propose avenues for future research, offering a nuanced understanding of the intricate dynamics at play in the intersection of social media, partisanship, and Supreme Court confirmation processes.

Partisanship and views of the court

At its core, partisan dynamics significantly shape public perceptions of the Court and its functions. While non-political factors, such as support for the rule of law, do influence reactions to Court rulings, research consistently shows that partisanship and ideology hold enduring sway (Bartels and Johnston 2013, 2020; Christenson and Glick 2015; Nicholson and Hansford 2014). Bartels and Johnston (2020) argue that expecting the public to support the Court regardless of their political leanings is unrealistic; they propose that public backing for the Court is primarily driven by policy considerations rather than procedural ones. Individuals perceive rulings that align with their beliefs as fair and just, while those contrary to their views are often viewed as politically motivated and unfair. The substantial influence of partisanship, predominantly explored within the context of the Court’s decisions, may likewise permeate the process of selecting justices.

Scholars have long voiced concerns about politicizing the appointment process and its potential to undermine public confidence in the Supreme Court. Carter (1993) highlighted this issue over three decades ago, cautioning that the tendency of the opposition to unearth negative information to discredit nominees could weaken the Court as an institution. He warned, “Our public institutions are at risk when the public has grave doubts about the nominee, and wishfulness is no substitute for the cold calculation that sometimes requires our politicians to realize that getting their way will cause more harm than good” (73). Indeed, contemporary survey-experimental scholarship has borne out Carter’s predictions, demonstrating that partisanship affects how the public views (hypothetical) nominees (Chen and Bryan 2018; Hoekstra and LaRowe 2013; Sen 2017) as well as the Court itself in the wake of politicized nominations. In an experimental design, for example, Rogowski and Stone (2021) vary respondents’ exposure to different kinds of elite messaging modeled after real examples. They find that messaging that criticizes a nominee activates political identities, resulting in strong preferences for in-party nominees.

While these findings may suggest our inquiry is redundant, the impact of ideology and partisanship on support for real nominees is not nearly so uniform as the results found in the survey-experiment literature. For example, Badas and Stauffer’s (2018) research contains survey data on seven nominees (Alito, Breyer, Garland, Kagan, Roberts, Sotomayor, and Thomas); respondent ideology is a significant predictor of support for all but two (Breyer and Alito), but partisanship is much less impactful. Specifically, Badas and Stauffer (2018) find respondent partisanship is a significant predictor of support for Kagan and Sotomayor but insignificant for Thomas. Their models for Breyer, Roberts, Alito, and Garland are reported using dummy variables for Republican and Democrat (rather than the party ID scale used for the others), and only three of these eight dummy variables are significant predictors of support for the nominee: Republicans were more likely to support Roberts, and Democrats were more likely to support Breyer and Garland. We note, however, that the most recent survey in Badas and Stauffer’s (2018) work – Garland – is often cited as an inflection point in the politicization of nominations (Zilis and Blandau 2021; Truscott 2023).

Looking to work that examines how support for the Court is affected by these more recent nominations and vacancies, scholars often point to the now-politicized nomination and confirmation process as a source of partisan and ideological effects. Krewson and Schroedel (2020) measure diffuse support for the Court in the aftermath of Kavanaugh’s confirmation, concluding, “It appears that the partisan hearings may have caused partisans to view the Court differently” (1435). Similar work examining Barrett’s confirmation proceedings argues, “It would be surprising if politicized and partisan confirmation hearings do not impact people’s perceptions of judiciaries and judging,” and indeed this study goes on to identify a drop in institutional support for the Court among Democrats (Krewson 2023).

These same patterns – where partisanship impacts support for the institution of the Court – have even been observed immediately after vacancies, i.e., before there’s a particular nominee to criticize or ‘partisan hearings’ to observe. Again, scholars similarly attribute these effects to the politicized nature of these more recent vacancies and hearings. Glick (2023) found that Democrats’ diffuse support for the Court dropped immediately after Justice Ginsburg’s death, as it became clear, amid the “intense politics around the unexpectedly open seat,” that President Trump would appoint her successor (104). Similarly, Armaly (2018) found increased diffuse support among Democrats immediately after Justice Scalia’s death, but before Senate Majority Leader Mitch McConnell announced that Republicans would not vote on any Obama nominee. In other words, Democrats’ support for the Court rose when they believed President Obama would appoint Scalia’s replacement.

In general, recent work that examines how support for the Court is impacted by politicized nominations (and vacancies) paints a clear picture of the role of partisanship and ideology in driving support or disapproval for the institution of the Court, where party or ideological alignment increases support. While such findings have not always been present, and indeed were the opposite in the aftermath of Justice Alito’s confirmation (Gibson and Caldeira 2009), research from the last ten years uniformly suggests that the public now views the Court through a partisan screen of ordinary politics. Scholarship looking at indicators of support for nominees, however, is primarily focused on nominations from 10 to 30 years ago and does not identify such a uniform role for partisanship and ideology. Our study aims to bridge some of the gaps in the existing literature by providing real-time insights into online discourse and opinion formation during confirmation processes. Twitter, as a platform for immediate and widespread communication, offers a unique window into the Twitter public’s reactions to nominees and the confirmation process. By analyzing tweets related to confirmation proceedings, we can discern trends in sentiment, identify influential topics, and explore how partisan dynamics manifest in online discussions. This approach complements traditional survey-based research by capturing spontaneous and unfiltered reactions from a diverse range of voices, thereby providing a more comprehensive understanding of how the public engages with and perceives Supreme Court nominations in today’s digital age. Now, we turn to a discussion about these digital voices and why we should study them.

What is the value of social media data?

In today’s interconnected world, the value of social media data is derived from the way people utilize a platform, and this is particularly apparent with sites like Twitter. Beyond the confines of dinner tables and office corridors, Twitter serves as a bustling public square, where ideas clash, alliances form, and narratives evolve in real time. This digital sphere functions as a sort of living laboratory where users create expressive content, enabling researchers to explore user opinion in reaction to current political events. Unlike the static snapshots provided by traditional polling and survey methods, where attitudes are collected using Likert scales and feeling thermometers, Twitter data offer richer texts, capturing the ebb and flow of online discourse in its raw, unfiltered state.

In exploring the significance of Twitter as a medium for political opinions, it’s vital to acknowledge the platform’s vast reach and popularity: approximately 330 million monthly active users and 192 million daily active users (Twitter 2021). Despite this widespread usage, we must also recognize that Twitter users are not a perfectly representative sample of the general public. The platform’s user demographics, skewed towards urban areas, individuals under the age of 50, and predominantly male users, introduce inherent biases (Wojcik and Hughes 2019; Mislove et al. 2011). While Twitter’s user base is evolving with increased gender and racial diversity over time, it’s essential to approach the data with an awareness of its limitations. Moreover, our study’s focus on analyzing tweets, not users, means we observe behavior only from those who have chosen to express their opinions about Supreme Court confirmations. We address these concerns in four ways, detailed below.

First, because we select upon engagement with the Court, we pay careful attention to the effect of political engagement in the wider Courts literature, specifically whether political engagement aligns with partisan effects such that our sample pre-determines results. Many Courts-based surveys and survey experiments utilize political knowledge; however, knowledge seems a poor proxy for tweeting. To overcome this, we identify a handful of studies that utilize self-reported measures of engagement. Most broadly, higher levels of political engagement are associated with increased support for the Court as an institution (Bartels and Johnston 2020; Gibson and Caldeira 2009),[3] while respondents’ engagement has no impact on support for nomination-specific Court-curbing items (Bartels and Johnston 2020, 123). These findings align with survey data reported by Badas and Stauffer (2018) showing respondents who more closely followed Clarence Thomas’s confirmation were more supportive of his confirmation. These studies suggest that having a Twitter sample with high levels of political engagement/interest in confirmation proceedings shouldn’t skew our results in the direction of predicted partisan effects; indeed, Bartels and Johnston (2020) found that engagement had a (positive) “potent effect” on support for the Court, even in relation to the negative effects of policy disagreement (116).

Second, we take steps to demonstrate that our Twitter data align with prior work which finds that sentiment in tweets tracks with public approval data. To do so, we replicate O’Connor et al.’s (2010) study – a seminal work in the field that has been cited over 2,500 times. Specifically, O’Connor et al. (2010) collected tweets about President Obama and showed that aggregate sentiment of those tweets did, indeed, track with presidential approval polling. To replicate this work, we replace presidential approval polling and data harvested from tweets about the President with nominee approval polling and data culled from tweets about the nominees. We report the full results of that replication in our appendix but note here that our findings are highly consistent with O’Connor et al.’s (2010): we find sentiment and public approval are correlated at r = 71.1% (O’Connor et al. [2010] recovered r = 73.1%); the correlation statistic rises to r = 81.9% when restricting our analysis to hand-identified partisan users. This replication gives us confidence that 1) utilizing methods previously used to study the political branches is applicable to our work, and 2) despite the changing nature of political messaging and social media use, those changes have not fundamentally altered the relationship between Twitter data and public opinion polling data.
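
For readers who want to see the mechanics of this kind of replication, the sketch below shows one minimal way to correlate smoothed daily tweet sentiment with nominee approval polling. It is illustrative only: the file names, column names, and seven-day smoothing window are our assumptions here, not details taken from O’Connor et al. (2010) or from the replication reported in our appendix.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical inputs: one row per tweet with its date and summed AFINN
# scores ("negative" stored as an absolute value), and one row per poll
# with the share approving of the nominee's confirmation.
tweets = pd.read_csv("nominee_tweets_scored.csv", parse_dates=["date"])
polls = pd.read_csv("nominee_approval_polls.csv", parse_dates=["date"])

# Daily sentiment ratio in the spirit of O'Connor et al. (2010): positive
# mass over negative mass, smoothed with a rolling window (the seven-day
# window is our choice for illustration, not a figure from the paper).
daily = tweets.groupby(tweets["date"].dt.date).agg(
    pos=("positive", "sum"), neg=("negative", "sum"))
daily["ratio"] = daily["pos"] / daily["neg"].clip(lower=1)
daily["smoothed"] = daily["ratio"].rolling(window=7, min_periods=1).mean()

# Align each poll with the smoothed sentiment on its field date, then correlate.
merged = polls.assign(date=polls["date"].dt.date).merge(
    daily[["smoothed"]], left_on="date", right_index=True, how="inner")
r, p = pearsonr(merged["smoothed"], merged["pct_approve"])
print(f"sentiment-approval correlation: r = {r:.3f} (p = {p:.3f})")
```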

Third, we echo sentiments expressed in recent Courts scholarship which notes, “…studying the population on Twitter does allow for research designs that maintain strong internal validity, allowing us to consider the comparative statistics beyond the overall level of estimated effects” (Adams-Cohen 2020, 615). Even if one remains skeptical that our data bear any resemblance to broader public attitudes, despite the O’Connor et al. (2010) replication detailed above, our research still illuminates two comparative differences: reactions across the different stages of the nomination and confirmation process as well as comparative differences between hand-identified Republican and Democrat users.

Fourth, and even given the limitations noted above, Twitter data are extensively used in various disciplines and contexts to gauge public responses to products, services, and current events. For our specific objectives, studies show that Twitter users are responsive to political happenings (Wang et al. 2012) and engage with and share information that’s favorable to their own political party (Shin and Thorson 2017). Additional work finds that the quantity and content of political tweets can predict public opinion (Davis et al. 2017; Tumasjan et al. 2010; O’Connor et al. 2010) and offer valuable information for decision-making, candidate popularity, forecasting, and governance and public trust (Tumasjan et al. 2010; Yaqub et al. 2017). In many aspects, the statements made on Twitter align with the conceptual framework of Zaller’s theory of public opinion formation (Zaller 1992). Tweets serve as expressions reflecting individuals’ thoughts, particularly influenced or primed by current events and their interactions within the Twitter network. Similar to the simultaneous interaction of opinion, media choices, and ideology (Prior 2007), Twitter’s environment allows individuals’ tweets to serve as partial statements about their policy opinions (Clark et al. 2018).

Building upon the broader research on politics and social media, a few studies explore how Twitter users respond to events at the High Court. Darwish (2019) analyzes 128,000 users to find that those who communicated support or opposition to Brett Kavanaugh’s confirmation used divergent partisan hashtags and followed different Twitter accounts. Republicans used hashtags like #walkaway (from liberalism), while Democrats utilized, for example, the partisan hashtag #TheResistance (a movement that protested the presidency of Donald Trump). Additionally, Sandhu et al. (2019) examine two waves of tweets – collected during and then one month after Kavanaugh’s confirmation – to look for associations between the terms “Supreme Court” and “partisanship.” The authors conclude that public opinion changed after the Kavanaugh hearings because the terms only became associated with one another in their second wave of tweets. Finally, a pair of studies use Twitter data to examine reactions to the Court’s same-sex marriage jurisprudence. Clark et al. (2018) find that two early Supreme Court decisions regarding same-sex marriage (Hollingsworth v. Perry and U.S. v. Windsor) affected public discourse on the topic, polarizing both discussions and mass opinion. Recent work by Adams-Cohen (2020) treats Twitter data as a proxy for survey-based public opinion data in order to test between two competing theories of public response to Court decisions that were developed and initially tested using survey data several years prior to the advent of social media (structural response theory and backlash theory). Adams-Cohen (2020) further notes the appeal of Twitter when attempting to gather contemporaneous data for unscheduled political events like Court opinions and vacancies.

In essence, the value of social media data, epitomized by platforms like Twitter, lies in its ability to serve as a dynamic window into a nearly endless range of real-time reactions to political events. Taken together, the research summarized above demonstrates that Twitter users tweet about Court events, and that Twitter data bear some relationship to public opinion while not necessarily being public opinion. Indeed, every political event finds expression in millions of tweets, revealing a multitude of opinions, ideas, topics, and sentiments that reflect a broader socio-political landscape. To engage this rich source of data, our study ventures beyond the surface-level analysis of Twitter trends – hashtags, likes, and retweets – to examine non-elite partisan differences based on the content produced by hand-identified partisan users. By scrutinizing sentiment and dissecting the topics that dominate tweets surrounding the most recent Supreme Court nominees, we aim to uncover the subtle nuances of online political discourse in the digital age.

Hypotheses

The nomination and confirmation of Supreme Court justices are highly consequential events that have far-reaching implications for the ideological balance and direction of the Court. From the moment a vacancy on the Court arises, political stakeholders, including party leaders, interest groups, and engaged citizens, immediately focus their attention on the potential nominees and their implications for the Court’s future decisions (Armaly 2018; Glick 2023). While Gibson and Caldeira’s (2009) work demonstrated that the public was largely indifferent to partisan messages in a politicized Alito confirmation process, many survey-experiment works since then have found that ideology and partisanship shape individuals’ perceptions and attitudes towards hypothetical judicial nominees (Hoekstra and LaRowe 2013; Chen and Bryan 2018; Sen 2017), especially when political rhetoric about a nominee is invoked (Rogowski and Stone 2021). Additional recent work examining impacts on support for the Court in the wake of the Kavanaugh and Barrett proceedings reveals that partisanship affects views of the Court, and these findings were also reflected in work that captured survey data surrounding the vacancies created by the deaths of Justices Ginsburg (Glick 2023) and Scalia (Armaly 2018).

These findings suggest that Democrats and Republicans are inclined to approach vacancies and the confirmation process with preexisting ideological and partisan preferences, influencing their reactions to nominees aligned with or divergent from their party platforms. Partisans, driven by their political preferences and agendas, are likely to seize this opportunity to vocalize their support or opposition to specific nominees, advocate for nominees aligned with their party’s ideology, and criticize those they perceive as threats to their policy objectives. Building on the insights of Bartels and Johnston (2020), confirmation processes aligned with an individual’s partisan beliefs may be perceived as fair and impartial, whereas those diverging from their views are construed as politically motivated and unjust. We believe this overtly political messaging will extend even to the vacancy stage, i.e., before there’s a nominee to praise or criticize.

In the context of Twitter, we anticipate these preferences to manifest in the sentiment of tweets and the topics discussed. Throughout each stage of the confirmation process, we expect partisan Twitter users to express more favorable sentiments when aligned with a nominating president. Additionally, we expect that partisan differences will emerge in topics discussed, with presumptive losers attacking the process with partisan language.

H1 Partisan Sentiment Hypothesis: Partisan Twitter users will express more (less) positive sentiment when they are aligned with a nominating president, while they will alternately express more (less) negative sentiment when not aligned with a nominating president. We expect this at every stage of the confirmation process, including the vacancy stage.

H2 Partisan Topics Hypothesis: Twitter users who anticipate a partisan loss (win) from the vacancy and nominee’s eventual appointment will invoke partisan topics more (less) centrally when discussing the nomination process.

Data and methods

We begin data collection by capturing tweets via a programmable spreadsheet tool called TAGS that links to the Twitter Search API. The Twitter Search API is used to retrieve past tweets matching specific criteria (e.g., a keyword or hashtag) within a designated search window. However, for each request we can only retrieve a 10% random sample, and Twitter does not provide any description of the algorithms used to generate the random samples.

As we are interested in sentiment and topics during Brett Kavanaugh, Amy Coney Barrett, and Ketanji Brown Jackson’s confirmation processes, we used the API tool to search for several terms at four distinct time periods: vacancy, nomination, confirmation hearings, and confirmation. We gather tweets during each of these stages and continue for roughly 24 hours after each stage. Search terms such as #SCOTUS and “confirmation hearing” are used, along with the names of the nominees and the justices who departed the bench. The included terms are non-partisan and specified using non-case-sensitive identifiers (meaning the API picks up “scotus” in addition to “SCOTUS”) in order to create a broad and facially non-partisan dataset. A full list of the stages and search terms used is available in Table A1 of the online supplemental materials.
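
TAGS is a spreadsheet-based tool, but the same kind of stage-specific query can be scripted against the Twitter API. The sketch below uses the tweepy library’s full-archive search (which, like the augmented collection described later in this section, requires academic-track access); the bearer token, query string, and time window are placeholder values rather than our actual collection parameters.

```python
import tweepy

# Placeholder bearer token; full-archive search requires Twitter's
# academic-track access (standard access is limited to recent tweets).
client = tweepy.Client(bearer_token="BEARER_TOKEN")

# Illustrative, facially non-partisan terms; Twitter queries match
# case-insensitively, so "scotus" is picked up along with "SCOTUS".
# See Table A1 for the full list of stage-specific search terms.
query = '#SCOTUS OR "confirmation hearing" OR Kavanaugh OR Kennedy'

tweets = []
for page in tweepy.Paginator(client.search_all_tweets,
                             query=query,
                             start_time="2018-07-09T00:00:00Z",  # stage start (placeholder)
                             end_time="2018-07-10T23:59:59Z",    # ~24 hours after the stage
                             tweet_fields=["created_at", "author_id"],
                             max_results=100):
    tweets.extend(page.data or [])

print(f"collected {len(tweets)} tweets")
```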

With raw data in hand – amounting to more than 1.6 million tweets for Kavanaugh, 2.4 million for Barrett, and 3.5 million for Jackson – we first followed the lead of past scholars who have studied participation on Twitter and excluded retweets from our analysis (Hemphill, Otterbacher, and Shapiro 2013; Mazoyer et al. 2020). This left us with roughly 603,000 initial tweets across the three confirmation processes.[4] In essence, this initial number of tweets can be broken down by nominee, by stage in the nomination and confirmation process, and finally by partisanship (described next). Table A2 of our online appendix contains the number of tweets which fall into each “cell” to give a sense of the amount of data we used during different types of analyses.
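
A minimal sketch of the retweet exclusion step, assuming a hypothetical TAGS export with a text column in which classic retweets carry the conventional "RT @" prefix:

```python
import pandas as pd

# Hypothetical TAGS export with one row per tweet; the file and column
# names are illustrative stand-ins for our actual spreadsheets.
raw = pd.read_csv("kavanaugh_vacancy_tags_export.csv")

is_retweet = raw["text"].str.startswith("RT @", na=False)
originals = raw[~is_retweet].copy()

print(f"kept {len(originals)} of {len(raw)} tweets "
      f"({is_retweet.mean():.0%} were retweets)")
```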

After eliminating retweets, we next identify the partisanship of a small proportion of users to help inform our hypotheses. Rather than employ machine learning (which does not filter for bots or news accounts), we use an observational approach to identify the partisanship of Twitter users. We employ a cadre of research assistants to examine each user’s page, read recent tweets, and ascertain from those tweets whether the user appeared to be a Republican, Democrat, Independent, supporter of a third party, or was unidentifiable. We use standard inter-coder reliability measures and resolve any discrepancies in the data ourselves. Because of the time-intensive nature of coding partisanship paired with the sheer number of users we have, we randomly select approximately 4% of users who tweeted at any point during each of the three confirmations and code this random sample for partisanship, leaving us with 21,468 partisan-identified tweets. We make two efforts to avoid endogeneity problems. First, we carefully exclude from sentiment analysis any tweets that were used to identify partisanship. In other words, the tweets we used to code a user as a Republican or a Democrat are exogenous to users’ tweets about the Court or the confirmation process, and often occurred months before or after the confirmations took place. Second, we also remove from each tweet the terms used to search for it. For example, if TAGS returned a spreadsheet of 5,000 tweets based on the search terms “SCOTUS” and “Kavanaugh,” then those terms were removed from every tweet in that spreadsheet.
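
The sketch below illustrates the two mechanical pieces of this step: drawing a roughly 4% random sample of users for hand-coding and stripping the search terms from each tweet's text. The file, column, and term names are hypothetical stand-ins.

```python
import re
import pandas as pd

originals = pd.read_csv("originals_all_nominees.csv")  # hypothetical combined file

# Randomly sample ~4% of distinct users for hand-coding of partisanship.
users = originals["user_id"].drop_duplicates()
coded_sample = users.sample(frac=0.04, random_state=42)

# Strip the query terms that retrieved each tweet so downstream scores
# aren't mechanically driven by the search vocabulary itself.
SEARCH_TERMS = ["SCOTUS", "Kavanaugh"]  # per-spreadsheet terms; illustrative
pattern = re.compile("|".join(map(re.escape, SEARCH_TERMS)), flags=re.IGNORECASE)

sample = originals[originals["user_id"].isin(coded_sample)].copy()
sample["text_clean"] = sample["text"].str.replace(pattern, "", regex=True)
```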

To explore the sentiment of tweets during the confirmation processes, we utilize the AFINN sentiment lexicon (Finn 2011). This is an open-source lexicon which originally used crowd-sourcing to manually rate more than 2,400 words on a scale from very negative sentiment (−5) to very positive sentiment (+5). What is attractive to us about this lexicon is that it was explicitly developed for sentiment analysis of tweets and other “microblogs.”[5] Once we apply the AFINN dictionary to our data, we obtain a positive, negative, and overall sentiment score for each tweet; overall sentiment is the sum of the values for each word within the tweet, ranging from −116 (“Ok. But Amy Coney Barrett f*** you f*** f*** you f*** you f*** you f*** you f*** you f*** you f*** you f*** you f*** you f*** you f*** you f*** you f*** you f*** you f*** you f*** you f*** you f*** you f*** you f*** you f*** you f*** you f*** you f*** you f*** you f*** you f***”),[6] to 26 (“Awesome, Awesome, Awesome, Awesome, Awesome, Awesome, totally #AWESOME!!!!! Congratulations soon to BE Associate Justice of the United States Supreme Court(#SCOTUS) #KetanjiBrownJackson!!!!! Much SUCCESS!!!!!”), with a mean score of −0.79.
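
A minimal sketch of this scoring, using a tiny hand-picked subset of entries in place of the full AFINN-111 word list:

```python
import re

# A few illustrative entries; in practice one would load the full AFINN
# word list (over 2,400 terms scored -5..+5) released by Finn (2011).
AFINN = {"awesome": 4, "congratulations": 2, "success": 2,
         "disgusting": -3, "enemy": -2, "cruel": -3}

def afinn_scores(text: str) -> tuple[int, int, int]:
    """Return (positive sum, negative sum, overall sum) for one tweet."""
    tokens = re.findall(r"[a-z']+", text.lower())
    values = [AFINN[t] for t in tokens if t in AFINN]
    pos = sum(v for v in values if v > 0)
    neg = sum(v for v in values if v < 0)
    return pos, neg, pos + neg

print(afinn_scores("Awesome, totally awesome! Congratulations!"))  # (10, 0, 10)
```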

Because we are also interested in the content of partisan tweets – especially how the topics discussed by partisans differ – we next employ a topic modeling strategy developed by Lu, Henchion, and Namee (2019). Topic models were initially developed to help researchers efficiently identify latent topics contained in large, text-based datasets. Lu, Henchion, and Namee (2019) build on the traditional Latent Dirichlet Allocation (LDA) model by adding a second processing stage designed to overcome topic-level comparison difficulties, thereby making it possible to identify topics that are most dominated by documents (tweets) from a single source (Republicans or Democrats).[7] This allows us to first fit a single LDA model for each nominee (rather than fitting separate models by party) and then determine which topics are particularly contributed by one party or the other.[8] In plain language, this method of analysis enables us to investigate differences in what partisans tweet about without having to hand-classify individual tweets.

To ensure we have ample text for our LDAs, we began this endeavor by collecting additional tweets from our partisan users. We did this using the same query terms and time periods as our initial TAGS collection, but now utilizing Twitter’s API v2 with an academic account that allows us to search back in time. Using this augmented corpus of partisan tweets, we follow the two-stage comparative process detailed above. Topic modeling was done in Python 3 using gensim’s latent Dirichlet allocation module. For more information about LDAs, the comparative method, how we trained our models, and how we gathered additional tweets, see the online appendix.
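
As a rough illustration of the two-stage approach, the sketch below fits a single gensim LDA over a toy corpus and then scores each topic by the share of its probability mass contributed by each party. The attribution share is a simplified proxy for Lu, Henchion, and Namee’s (2019) second-stage statistic, and the corpus, labels, and hyperparameters are placeholders.

```python
from gensim import corpora
from gensim.models import LdaModel

# Toy stand-ins for the augmented corpus: tokenized tweets plus a parallel
# list of hand-coded party labels for each tweet's author.
docs = [["mcconnell", "seat", "garland", "obama"],
        ["confirmation", "hearing", "qualified", "judge"],
        ["gun", "lgbtq", "rights", "fear"],
        ["confirmation", "hearing", "congratulations", "justice"]]
party = ["D", "R", "D", "R"]

dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]
# K (num_topics) would be tuned in practice; 2 keeps the toy example readable.
lda = LdaModel(bow, num_topics=2, id2word=dictionary, passes=20, random_state=7)

# Second stage (simplified): attribute each tweet's topic weights to its
# author's party, then inspect how lopsided each topic's attribution is.
mass = {"D": [0.0] * lda.num_topics, "R": [0.0] * lda.num_topics}
for doc, src in zip(bow, party):
    for topic_id, weight in lda.get_document_topics(doc, minimum_probability=0.0):
        mass[src][topic_id] += weight

for k in range(lda.num_topics):
    total = mass["D"][k] + mass["R"][k]
    share_d = mass["D"][k] / total if total else 0.5
    print(f"topic {k}: Democrat share = {share_d:.2f}", lda.show_topic(k, topn=4))
```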

Finally, we classify each reported topic as “partisan” or “non-partisan” by looking for the explicit use of party names (including “GOP”) or representatives of a party (including phrases like “Obamacare” or “Bidenomics”); incorporation of any of these terms means a topic is “partisan.” These coding rules and definitions were developed to identify partisan floor speeches (Morris 2001, 107) and have recently been used to research partisan rhetoric in the Twitter data of U.S. Senators (Russell 2021).[9]
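
In code, this coding rule reduces to a simple keyword check over a topic's top words. The sketch below is our gloss on that rule; the representative list shown is illustrative rather than the complete set of terms we coded.

```python
# Minimal sketch of the Morris (2001) / Russell (2021) coding rule as we
# apply it: a topic is "partisan" if its top words contain a party name or
# a party representative. Both lists here are illustrative, not exhaustive.
PARTY_NAMES = {"democrat", "democrats", "republican", "republicans", "gop", "dems"}
PARTY_REPS = {"obamacare", "bidenomics", "trump", "obama", "biden", "mcconnell"}

def is_partisan_topic(top_words: list[str]) -> bool:
    words = {w.lower() for w in top_words}
    return bool(words & (PARTY_NAMES | PARTY_REPS))

print(is_partisan_topic(["seat", "mcconnell", "obama", "garland"]))  # True
print(is_partisan_topic(["believe", "sexual", "assault"]))           # False
```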

Results

Partisan user sentiment

To explore our first hypothesis, we look at each confirmation process in our dataset separately: Kavanaugh, Barrett, and Jackson. Figure 1 contains a series of density plots of sentiment for Kavanaugh (left panels), Barrett (center), and Jackson (right) separated by partisanship. Here we present disaggregated positive and negative sentiment by party of user, rather than overall mean sentiment. Democrat (Republican) users are shown in blue (light red) with overlapping density depicted in magenta. Sentiment to the right (left) of zero depicts the sum of positive (negative) sentiment words per tweet, and solid red (dashed blue) lines mark means for positive and negative words by Republicans and Democrats, respectively. We begin with Kavanaugh (left column) and note that while the densities of all plots peak near zero (or little sentiment), partisan differences emerge once we move away from zero. This demonstrates the need for us to utilize disaggregated positive and negative sentiment to tease apart these differences.[10]

Figure 1. Density plots of sentiment for Kavanaugh (left panels), Barrett (center), and Jackson (right) separated by partisanship. Democrat (Republican) users shown in blue (light red) with overlapping density depicted in magenta. Sentiment to right (left) of zero depicts sum of positive (negative) sentiment words per tweet, and solid red (dashed blue) lines mark means for positive and negative words by Republicans and Democrats, respectively.
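
For readers who want to reproduce this style of figure, the following sketch draws party-specific densities of the disaggregated positive and negative sums. The toy data stand in for the per-tweet AFINN scores, and the colors and dashed mean lines only approximate Figure 1's conventions.

```python
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

# Toy stand-in for per-tweet AFINN sums; "neg" is stored as a negative sum,
# so it already falls left of zero on the shared axis.
scored = pd.DataFrame({
    "party": ["D", "D", "D", "R", "R", "R"],
    "pos":   [2, 0, 1, 4, 1, 3],
    "neg":   [-3, -1, -4, 0, -2, -1]})

fig, ax = plt.subplots()
for pty, color in [("D", "blue"), ("R", "red")]:
    sub = scored[scored["party"] == pty]
    sns.kdeplot(x=sub["neg"], ax=ax, color=color, alpha=0.5)  # negative side
    sns.kdeplot(x=sub["pos"], ax=ax, color=color, alpha=0.5)  # positive side
    ax.axvline(sub["neg"].mean(), color=color, linestyle="--")
    ax.axvline(sub["pos"].mean(), color=color, linestyle="--")
ax.set_xlabel("negative sums <- sentiment words per tweet -> positive sums")
plt.show()
```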

Consider how partisan users tweeted about President Trump’s announcement of Kavanaugh as the nominee. There is more red on the right-hand side of the x-axis, suggesting that more Republican tweets contained positive language than Democrat tweets. To illustrate this point, a Republican user wrote, “It’s an abundant of success for @realDonaldTrump It’s like GOD open a box of blessings and is giving him and his love for America and Americans, all the best!!” suggesting that it was a blessing that President Trump was able to nominate a judge like Kavanaugh. Similarly, another Republican user said, “When the people of the LORD get down to PRAY!” to emphasize that prayer led to the vacancy and ultimately to Kavanaugh as the next pick for the High Court. At the same time, Democrats were not as enthused with Trump’s pick. A Democrat tweeted, “@Scotus SCOTUS now the enemy of Democracy and the American people…Kavanaugh if nominated will support Trump and America will die.” Similar patterns emerge at each of the other stages of Kavanaugh’s confirmation process. These patterns align with our first hypothesis – that is, Republican tweets contained more positivity about Trump’s announcement of Kavanaugh because putting a conservative on the bench was viewed as a benefit to the Republican Party. Democrats, on the other hand, viewed this announcement by Trump negatively because another conservative on the bench might negatively affect liberal policy outcomes.

We continue this exploration by investigating positive and negative sentiment about Amy Coney Barrett at the various stages we capture in our dataset. The middle column of Figure 1 shows the same kinds of graphs as for Kavanaugh, except now for tweets about Barrett from partisan users. These partisan differences are especially evident during the confirmation stage. Disappointed with the results of the Senate’s vote, a Democrat tweeted, “Trump and Mitch McConnell didn’t pick Brett Kavanaugh and Amy Coney Barrett bc they’re smart! They’re picked bc they are Nasty, Cruel Racist Puppets!” A Republican, on the other hand, took time to reflect on the Senate’s choice: “Judge Amy Coney Barrett is exceptionally well-qualified to serve on the #SCOTUS.” The differences between partisans are less distinguishable in terms of sentiment at Barrett’s hearings stage, for example. This is because Republicans often had disparaging remarks for the Democrats during her entire confirmation process. For instance, a Republican was angry at the hearings stage about how Democratic senators were engaging with her during questioning, but also praised the nominee in the same tweet: “@NPR Amy Coney Barrett will be the best choice this country has had in years, upholding the constitution not destroying it like you want to do‥ Stop playing politics, you’re disgusting!!” Another Republican user spoke positively about Barrett but negatively about liberals in the same tweet: “Liberals are worried they might lose the ‘right’ to kill their baby with a conservative #SCOTUS. Abortion is as big a stain on America as slavery was. Millions of babies have been killed. Think about that. How many presidents, Mozarts, doctors have been killed. #AmyConeyBarrett.” In a similar vein, at a glance we also see that Democrats had more positive language in their tweets than Republicans during her confirmation stage. These tweets were largely about Democrats discussing the idea that “Biden and the DEMs should expand the lower federal courts ASAP” and, more specifically, pack the Supreme Court. Overall, these tweets highlight the idea that partisans talk about the out-party negatively while also talking about their own party positively. These tweets do not contradict our hypothesis, but they speak to the complexity of how partisanship frames conversation about the Court. While partisans may write tweets with more positive language about a nominee who aligns with them politically overall, they disparage the “other side” at the same time; separating out positive and negative valences of tweets helps reveal this trend.

Finally, we turn to Ketanji Brown Jackson and sentiment from partisans across each stage. This comparison is instructive because now we are looking at how partisans feel about a liberal nominee from a Democratic president. Despite this switch, Figure 1 displays graphs broadly similar to those for Kavanaugh and Barrett. That is, Democrat tweets contained more positive language about Jackson at all four stages. This is evident from the dashed blue lines, which show that mean positive sentiment is greater for Democrats than Republicans at each of these stages. During her confirmation stage, for example, a Democrat said, “Happy KBJ Day! Absolutely wondrous day. I DVR’d the moment so I can savor the memories again and again. My heart is full. The tears of joy, hope, pride, honor and happiness for KBJ and her loved ones, mentors continue to flow. America won this day. Our children won this day. Hope wins.” During Jackson’s hearings, on the other hand, a Republican tweeted, “So Judge Jackson is the only Black ‘Woman’ Judge Biden Admin could find out of 300mil ppl. One who is soft on criminals to further victimize victims by giving child predators lighter sentences to prey on more victims just like criminals who have been released to kill more victims.” But a Democrat also used negative language at this stage, criticizing the media’s role in the hearings: “The media in particular fails to convey the visual image of angry White men screaming and interrupting a Black woman, who dares not show anger for fear of being labeled unprofessional.” Again, while Figure 1 broadly highlights the idea that partisans use more positive (negative) language when discussing a nominee who aligns (does not align) with them ideologically, partisans can and do use both positive and negative sentiment for nominees.

Partisan-based differences in tweet topics

Recall our second hypothesis predicts that presumptive losers will employ partisan topics more centrally than presumptive winners, even at the vacancy stage. The results of our topic modeling comparison partially support this hypothesis. Specifically, while we find that presumptive policy losers do often invoke partisan themes in their most unique topics, policy winners also consistently utilize partisan themes – where, again, we identify “partisan” language by following Morris (2001) and Russell (2021) and seeking out party names and party representatives. To help highlight these trends, we’ve underlined words that meet our definition of “partisan,” while shaded cells in Table 1 depict the top topics attributed to de facto policy losers. We begin our analysis by focusing on the top panels of Table 1, as these panels list the two most divergent topics per party per nominee, where a divergent topic is one that identifies information most unique to a single source, Democrats or Republicans – in other words, topics that Democrats are interested in and Republicans aren’t, and vice versa.

Table 1. All Combined Stages

Note: The top two most divergent topics by partisan users and nominees for all combined stages (top panels) and for only the vacancy stage (bottom panels) of the nomination and confirmation process. Gray cells represent de facto policy losers, with partisan terms underlined.

During proceedings for both Brett Kavanaugh and Amy Coney Barrett, Democrats’ (the policy losers) most unique topics focus on issues such as allegations of sexual assault (“believe,” “sexual,” “assault”), Senator McConnell’s blocking of Merrick Garland (“seat,” “mcconnell,” “obama,” “garland”), and the future of policies concerning gun regulations (“gun”), LGBTQ+ issues (“lgbtq,” “human,” “fear”), and health care and reproductive rights (“aca,” “healthcare,” “roevwade”). Similarly, during Ketanji Brown Jackson’s proceedings, Republicans (now presumptive policy losers) highlighted issues around the race and gender of President Biden’s nominee, with special attention paid to the fact that he said he’d nominate a black woman and then did. This was cast in the frame of Jackson having been an affirmative action choice, implying that she wasn’t qualified for the job (“woman,” “nominate black,” “action,” as in affirmative action). In addition, Republicans also focused on Jackson’s responses to two highly politicized lines of questioning from Republicans on the Judiciary Committee – trans rights (“answer,” “define woman”) and sentencing related to child pornography cases (“child porn,” “record”).

Striking and unexpected in these results are the two topics – both dominated by Democrats-as-losers – that do not include any partisan terms.[11] For example, when discussing Kavanaugh’s confirmation hearings, Democrats’ most unique topic focused on allegations that Kavanaugh had sexually assaulted Dr. Blasey Ford at a high school party; despite this issue splitting neatly along partisan lines (Newport 2018), use of explicitly partisan language wasn’t common enough – along with discussion of the sexual assault hearing – to show up in the top 15 words of the topic. Similarly, Democrats’ top topic during Barrett’s proceedings focused on the future of policies that are central to partisan agendas (gun regulations and LGBTQ+ issues) but did so without a prevalence of partisan terminology. Even Democrats’ second most unique topic for Barrett’s confirmation includes only one partisan term, “obamacare,” where it appears within a broader conversation about health care and reproductive rights (“aca,” “healthcare,” “roevwade”) – another policy area with major partisan differences.

We next examine the bottom panels of Table 1 to explore whether partisan politics comes into play, even at the vacancy stage. If partisan Twitter users see the Court as an apolitical and non-partisan institution, then – in line with Rogowski and Stone (2021) – this is the stage where we should be most likely to see non-partisan topics because there isn’t yet a particular nominee to criticize or praise. Results from the vacancy stage, however, show partisan themes being employed in five of six topics, where the only apolitical topic comes during the vacancy caused by Ruth Bader Ginsburg’s passing (Barrett vacancy). Here Democrats’ most distinct topic expressed condolences, but also contained hope that a new Supreme Court wouldn’t erode the rights she fought for along with predictions that it would: “ACA…GONE 1964 Civil Rights Act…GONE Roe v Wade…GONE Equal Housing…GONE Affirmative Action…GONE Segregation…WELCOME BACK. Think it can’t happen? Don’t f*cking kid yourself with a 6-3 SCOTUS.” As happened twice before, however, Democrats did not rely upon partisan terms. Some of this tracks as simply thanking a justice for their service or hoping they rest in peace and is non-partisan by nature. The call to action in the wake of her death, however, is surprisingly non-partisan (at least by our chosen operationalization of “partisan”): “We need your voice right now! You have to be on the front lines of this fight to keep the Supreme Court seat. You also need to be behind the scenes working on this. We need all hands on deck!” Meanwhile, Republicans urged President Trump to replace Justice Ginsburg immediately (“trump,” “replace justiceginsburg”).

Similar partisan-based trends appear during the vacancies created by Justices Kennedy’s and Breyer’s retirements. For instance, the most unique topic for Democrats once Justice Kennedy announced his retirement (Kavanaugh vacancy) predicted that President Trump would nominate someone who’d overturn Roe v. Wade: “The same people saying Roe vs Wade won’t be overturned are the same people who said Trump wouldn’t win. It can be. It will be. Fight! No Vote on #SCOTUSnominee.” On the other side, Republicans focused on Democrats and the left, discussing then-Senate majority leader Harry Reid’s suspension of the filibuster for Court of Appeals nominees as well as liberals in general, with phrases like “liberal meltdown,” “liberal tears,” and “liberal weenies.”[12] Similarly, when Justice Breyer announced his retirement (Jackson vacancy), Republicans lamented President Biden’s campaign promise to nominate a black woman (“Biden,” “pick,” “race”), calling it identity politics and race-based profiling (“Apparently the only qualifications for being a Supreme Court justice is [sic] race and gender.”), and Democrats, now presumptive winners, invoked past partisan grievances by mainly focusing on the Republican effort to block Merrick Garland in 2016 (“McConnell,” “republican,” “seat”).

Finally, we note that – despite the prevalence of partisan terminology used – presumptive winners often discuss their nominee in less partisan (if not non-partisan) terms, placing emphasis on procedural elements of the process and the nominee’s qualifications. The bigram “confirmation hearing,” a non-partisan term, appears in the top two topics of the presumptive winners for each nominee, but not for any presumptive losers. In addition, supportive partisans often tweet headlines or news links containing factual information (e.g., “Kavanaugh sworn in as 114th justice, hours after Senate votes to confirm”) or simple congratulations (“Congratulations justice Kavanaugh!”). Certainly, partisan themes are present for presumptive winners, evident in things like “hillary,” “trump,” “dems,” and the slate of tweets about then-Senator Kamala Harris’s questioning of Amy Coney Barrett. However, when “winners” do use partisan topics, they often focus on past or otherwise related grievances. Take, for example, the most unique topic contributed by Democrats during Jackson’s process – this topic is dominated by discussions of Ginni Thomas’s involvement in trying to overturn the 2020 presidential election (“trump,” “clarence thomas,” “wife,” “ginni,” “election”). Similarly, the top topic for Republicans during Kavanaugh’s proceedings includes references to Hillary Clinton, who lost to Donald Trump two years earlier: “Trump’s victory stopped Hillary from appointing 2 or 3 justices and 167 to the federal judiciary.” This finding falls in line with survey and survey-experimental work positing that policy wins and losses are strategically framed differently, where wins highlight “procedural appropriateness” and losses are attributed to some type of inappropriateness, whether political or procedural (Bartels and Johnston 2020, 28).

Discussion

We find that conversation on Twitter during nomination and confirmation processes is generally negative in sentiment and partisan, even at the vacancy stage. Although confirmation processes give people a rare chance to assess potential future justices, and perhaps increase support for the Court (Gibson and Caldeira 2009), our findings add to a recent line of literature showing that the public sees nominees – and, along with them, the Court – through a partisan lens. Indeed, we find that partisan Twitter users see the nominees and the process largely through that same lens. Demonstrating that the political behavior of partisan Twitter users largely reflects partisan behavior in non-social-media contexts is a significant contribution to the literature because scholars are seldom able to study how people view Supreme Court confirmations throughout their often-lengthy processes.

Now, however, we observe how users view each stage of the process, and, at each stage – even at a vacancy when there isn’t yet a nominee or even a process to politicize – partisanship rears its ugly head. This is particularly noteworthy and carries significant implications. Traditionally, one might expect the vacancy stage to be relatively free from overtly partisan discourse, given the absence of a specific nominee or formal confirmation process. However, our findings challenge this assumption and highlight the pervasiveness of partisanship in contemporary political discourse surrounding the judiciary. The fact that individuals are already engaging in partisan discussions and expressing sentiments related to confirmation processes during the vacancy stage suggests a deeper entrenchment of partisan politics within the judicial appointment process. This early engagement underscores the politicization of the Supreme Court and the extent to which it has become intertwined with broader political agendas and narratives. It also reflects a broader trend of heightened political polarization, where even the anticipation of a future nomination prompts partisan reactions and discussions. Moreover, the prevalence of partisan discourse during the vacancy stage has implications for the perceived legitimacy of the nomination and confirmation process. The public’s early engagement in partisan discussions may shape perceptions of the eventual nominee and influence the tone and tenor of the confirmation process itself. This early polarization could potentially impact the nominee’s ability to garner bipartisan support and contribute to a contentious confirmation battle once a nomination is made.

While we did recover a handful of topics that do not include political language, as defined by our coding scheme, we wonder whether this operationalization captures the nuances of political discourse. Developed by Morris (2001) to identify partisan Congressional floor speeches, it offers a simple and straightforward set of rules but doesn’t account for the changing nature of politics – which, of course, it can’t. In an age where rainbow flags and Second Amendment emblems work as shorthand for Democratic or Republican values, static and simplified measures provide a base but incomplete picture of political dynamics. We therefore believe that identifying overt political language is valuable, but that examining the topics themselves – as we do above – is an important step in helping make sense of the Twittersphere’s response to political events.

These findings add to the literature in several important ways. First, we offer a novel way to study how people respond to Supreme Court confirmations. While much of the prior literature has used surveys and polls, we offer another approach for doing so. Our hope is that other scholars will take advantage of social media to study confirmations because these data have the specific advantage of being collected in real time, and so are not constrained to particular polling periods (Clark et al. 2018). Our data, therefore, provide important insights about the entirety of the process, from vacancy to the confirmation vote itself. Because Gallup asks whether respondents are in favor of the nominee being confirmed, it always conducts its last poll prior to the Senate’s vote. Gallup notes, “Greater opposition over the course of a confirmation process is consistent with the historical trends for past Supreme Court nominees, even for those who had relatively smooth confirmations.”[13] While we recover a downward trend in sentiment between nominee announcement and the confirmation hearings, data for two of our three nominees uniquely show a positive shift in sentiment during the confirmation vote (see our appendix for details). Although we cannot say so with certainty, perhaps this lapse in time highlights why it is important to use an event-monitoring approach when studying responses to prolonged events – such as confirmation processes – where prior literature has identified opinion change across the event.

Second, our findings reinforce existing scholarship that suggests that people discuss political events using partisan topics and sentiment that favors their party and disfavors the opposition. While there has been academic discussion about whether, and to what extent, confirmation processes are a political process, perhaps this matters less than what people seem to believe about the process. To this end, our findings also add to a newer line of scholarship that argues that people increasingly view the Court through political and ideological frames (Bartels and Johnston 2020; Ansolabehere and White 2020). Our work additionally identifies a trend of presumptive winners sometimes using partisan topics that legitimize the process and the nominee. Bartels and Johnston (2020) argue that motivated reasoning pushes partisans to “bolster the procedural appropriateness” of an agreeable Court decision (27-28). It’s possible our partisan-based topic modeling is picking up on this very dynamic, where partisans in the winning position sometimes use process and legitimation-based reasoning. Whether these kinds of tweets are motivated by sincere beliefs or by partisanship is unknown, however, and is therefore ripe for future study.

All in all, our study provides a unique glimpse into how partisan social media users discuss confirmation processes across their various stages, and our findings should inform discussions of how people view the Court during this transparent part of its operations. By shedding light on the nuanced dynamics of partisanship throughout the confirmation process, our research offers a resource for policymakers, scholars, and the public, prompting them to recognize the influence of partisan behavior on perceptions of the Court's operations and underscoring the importance of addressing these dynamics to maintain the Court's legitimacy and effectiveness.

Supplementary material

The supplementary material for this article can be found at http://doi.org/10.1017/jlc.2024.13.

Data availability statement

The replication materials are available at the Journal’s Dataverse archive.

Funding

No funding was received for this project.

Competing interest

The authors declare no conflicts of interest.

Footnotes

1 While we acknowledge that the platform is currently referred to as X, it is crucial to note that the tweets analyzed in this study were collected during a time period when the platform was known as Twitter. For clarity and consistency, we continue to refer to the platform as Twitter throughout our analysis.

2 The identities of these Twitter users are kept anonymous for privacy considerations.

3 Bartels and Johnston's (2020) work analyzes seven separate surveys. The most recent, a Qualtrics survey conducted in 2017, models only self-reported political interest (196). Two other surveys utilize indices of political engagement that include self-reported interest in politics or the Court. These indices report alphas of 0.64 (2005 ASCS, 98) and 0.78 (June 2012 TAPS, 115).

4 This aligns with polling data from Pew Research, which finds that around 82 percent of tweets are replies and retweets rather than original posts, with retweets being the dominant type of tweet. https://www.pewresearch.org/internet/2021/11/15/2-comparing-highly-active-and-less-active-tweeters/

5 We exclude a handful of false-positive and false-negative terms from the sentiment lexicon. For more information on that process, see our online appendix.
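
For illustration, a minimal sketch of lexicon-based scoring with an exclusion list appears below; the scores and the excluded term are invented stand-ins, not entries from our actual lexicon or the appendix-documented exclusions.

```python
# A minimal sketch of lexicon-based sentiment scoring with an exclusion
# list. Scores and excluded terms are illustrative stand-ins only.
LEXICON = {"disgusting": -3, "support": 2, "rat": -2, "justice": 2}

# In SCOTUS tweets, "justice" is usually a title rather than praise,
# so a term like this is a plausible false positive to exclude.
EXCLUDED = {"justice"}

def tweet_sentiment(tweet: str) -> int:
    """Sum lexicon scores over tokens, skipping excluded terms."""
    tokens = tweet.lower().split()
    return sum(LEXICON[t] for t in tokens
               if t in LEXICON and t not in EXCLUDED)

print(tweet_sentiment("disgusting vote for justice kavanaugh"))  # -3, not -1
```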

6 We acknowledge that social media algorithms, which prioritize engagement, incentivize inflammatory messaging. The presence of such general toxicity could bias our results if the intensity of messaging systematically varies with partisanship across all contexts. We note, however, that research demonstrates partisan messaging employed by Democrats and Republicans is equally toxic (Mamakos and Finkel 2023, 5), meaning we should not expect the use of heightened messaging to bias our results when testing H1. It is certainly possible that overly inflammatory language could show up in topics generated from the LDA model used for H2; however, our results do not bear this out.

7 For a detailed discussion of prior comparison problems, including the shortcomings of fitting and then comparing separate topic models by party, see pp. 1–3 of Lu, Henchion, and Namee (2019).

8 While we acknowledge that there are newer methods developed specifically for short texts (Laureate and Buntine 2023), we use the LDA for a few reasons. Critically, Lu, Henchion, and Namee's (2019) comparative method was developed using the LDA and is not readily transportable to other topic modeling techniques. This would be an inadequate reason and an inappropriate choice if LDAs were unsuitable for studying sociopolitical Twitter data. LDAs, however, have been used extensively in the social sciences (Jelodar et al. 2019) and with microblog and social media data. Additionally, LDAs became the dominant choice in topic modeling due to their ability to consistently produce topics that humans recognize as meaningful (Jelodar et al. 2019). Indeed, while the LDA is often referred to as a "vanilla" or "basic" model, we believe this general approach, paired with Lu, Henchion, and Namee's (2019) comparative method, allows us to utilize a solid base method (LDA) along with an important comparison extension. Finally, we take care to ensure our model optimization (detailed below and in our appendix) avoids common pre-processing and evaluation errors, such as "stemming" initial tweets or relying upon model perplexity when choosing K, the number of topics (Laureate and Buntine 2023, 14245–46). In sum, while there are arguably better topic models for Twitter data, the LDA is not an inappropriate choice, and we have taken steps, in line with best practices, to ensure our topics are as meaningful as possible.
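
A minimal sketch of this kind of workflow – assuming Python's gensim library (the software is not named here) and an invented toy corpus – looks roughly as follows: tokenize without stemming, then choose K by topic coherence rather than perplexity.

```python
# A minimal sketch of an LDA workflow consistent with the choices
# described above: no stemming, and K chosen by topic coherence rather
# than perplexity. gensim and the tweets below are our own assumptions.
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel

tweets = [
    "confirmation hearing starts today",
    "senate vote on the nominee tonight",
    "hearing questions focus on precedent",
    "the nominee dodged every question",
]
texts = [t.lower().split() for t in tweets]  # tokenize, no stemming

dictionary = Dictionary(texts)
bow = [dictionary.doc2bow(doc) for doc in texts]

# Fit models over a grid of K and keep the most coherent one.
coherence_by_k = {}
for k in range(2, 5):
    lda = LdaModel(corpus=bow, num_topics=k, id2word=dictionary,
                   random_state=42, passes=10)
    cm = CoherenceModel(model=lda, texts=texts,
                        dictionary=dictionary, coherence="c_v")
    coherence_by_k[k] = cm.get_coherence()

best_k = max(coherence_by_k, key=coherence_by_k.get)
print(best_k, coherence_by_k)
```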

9 We extracted information via topic modeling, rather than analyzing individual tweets, for several reasons. First, topic modeling provides a panoramic view of the prevalent themes and subjects across all tweets (Murakami et al. 2017). While a frequency count quantifies the number of tweets mentioning confirmation processes in a partisan way, providing a basic measure of engagement, it lacks depth in understanding the content and themes of the conversations. In contrast, topic modeling goes beyond counting to uncover underlying themes and topics within a body of tweets, identifying specific aspects of the confirmation process being discussed and how these topics vary across partisan lines. Further, frequency counts treat all mentions equally without distinguishing between different types of partisan messaging, whereas topic modeling can identify and categorize multiple topics within the dataset. This allows researchers to see which themes are most prevalent among partisan users and how these themes differ between Democrats and Republicans. This broad perspective enables us to discern overarching narratives or patterns that might remain obscured when scrutinizing individual tweets and classifying them as partisan or non-partisan. Second, frequency counts lack the ability to provide context for why or how users are discussing the confirmation process. Topic modeling addresses this by grouping words and phrases into coherent topics, offering insights into the motivations and concerns of users. It is particularly useful for large datasets, as it can process and summarize vast amounts of text data into meaningful topics, making it easier to analyze and interpret complex discussions. Unlike frequency counts, which offer a static and one-dimensional view of the data, topic modeling provides a dynamic and nuanced understanding by showing how topics evolve over time and how different events during the confirmation process influence the discourse. Third, and finally, individual tweets often contain noise, such as spam or advertisements that utilize trending hashtags. Topic modeling effectively filters out this noise by concentrating on the core themes, ensuring a more focused and meaningful analysis.

10 Beyond these figures, we also performed a series of difference-of-means tests, comparing sentiment across partisanship for all combined stages of these nomination and confirmation processes. Across all three types of sentiment (overall, positive, and negative), we found statistically significant differences where ideological “winners” tweet more positively than ideological “losers.” Detailed results are found in our online appendix.
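
For readers who want the shape of such a test, a minimal sketch follows; the sentiment scores are invented stand-ins for per-tweet sums, and Welch's variant shown here is one reasonable implementation choice (the appendix documents our actual results).

```python
# A minimal sketch of a difference-of-means comparison like the one
# described in this note. The arrays are invented stand-ins for
# per-tweet sentiment scores; Welch's t-test is our assumption.
import numpy as np
from scipy import stats

winners = np.array([1.2, 0.8, 2.0, 0.5, 1.1])     # nominee's co-partisans
losers = np.array([-0.4, -1.0, 0.2, -0.7, -0.3])  # out-partisans

t_stat, p_value = stats.ttest_ind(winners, losers, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```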

11 Note that words that make up each topic are not mutually exclusive in LDA models, meaning duplicates can appear across topics by partisan group.
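
Continuing the hypothetical gensim sketch from note 8 (and assuming its fitted `lda` object), this overlap is easy to see by printing the top words per topic: because LDA topics are distributions over the full vocabulary, the same word can rank highly in several topics.

```python
# Illustrative check that top words can repeat across LDA topics;
# assumes the fitted `lda` model from the note 8 sketch.
for topic_id in range(lda.num_topics):
    top_words = [word for word, _ in lda.show_topic(topic_id, topn=5)]
    print(topic_id, top_words)
```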

12 We note that "liberal" is not a term in the Morris (2001) and Russell (2021) coding scheme – neither is "conservative," "left," "right," or "maga," for that matter.

References

Adams-Cohen, Nicholas Joseph. 2020. "Policy Change and Public Opinion: Measuring Shifting Political Sentiment with Social Media Data." American Politics Research 48(5): 612–621.
Ansolabehere, Stephen D., and White, Ariel. 2020. "Policy, Politics, and Public Attitudes Toward the Supreme Court." American Politics Research 48(3): 365–376.
Armaly, Miles T. 2018. "Politicized Nominations and Public Attitudes toward the Supreme Court in the Polarization Era." Justice System Journal 39(3): 193–209.
Badas, Alex, and Stauffer, Katelyn E. 2018. "Someone like Me: Descriptive Representation and Support for Supreme Court Nominees." Political Research Quarterly 71(1): 127–142.
Bartels, Brandon L., and Johnston, Christopher D. 2013. "On the Ideological Foundations of Supreme Court Legitimacy in the American Public." American Journal of Political Science 57(1): 184–199.
Bartels, Brandon L., and Johnston, Christopher D. 2020. Curbing the Court: Why the Public Constrains Judicial Independence. Cambridge: Cambridge University Press.
Carter, Stephen L. 1993. "The Confirmation Mess, Continued." University of Cincinnati Law Review 62: 75.
Chen, Philip G., and Bryan, Amanda C. 2018. "Judging the 'Vapid and Hollow Charade': Citizen Evaluations and the Candor of US Supreme Court Nominees." Political Behavior 40: 495–520.
Christenson, Dino P., and Glick, David M. 2015. "Chief Justice Roberts's Health Care Decision Disrobed: The Microfoundations of the Supreme Court's Legitimacy." American Journal of Political Science 59(2): 403–418.
Clark, Tom S., Staton, Jeffrey K., Wang, Yu, and Agichtein, Eugene. 2018. "Using Twitter to Study Public Discourse in the Wake of Judicial Decisions: Public Reactions to the Supreme Court's Same-Sex-Marriage Cases." Journal of Law and Courts 6(1): 93–126.
Darwish, Kareem. 2019. "Quantifying Polarization on Twitter: The Kavanaugh Nomination." In International Conference on Social Informatics. Springer.
Davis, Matthew A., Zheng, Kai, Liu, Yang, and Levy, Helen. 2017. "Public Response to Obamacare on Twitter." Journal of Medical Internet Research 19(5): e167.
Gibson, James L., and Caldeira, Gregory A. 2009. Citizens, Courts, and Confirmations: Positivity Theory and the Judgments of the American People. Princeton, NJ: Princeton University Press.
Glick, David. 2023. "Is the Supreme Court's Legitimacy Vulnerable to Intense Appointment Politics? Democrats' Changed Views Around Justice Ginsburg's Death." Journal of Law and Courts 11(1): 104–115.
Hemphill, Libby, Otterbacher, Jahna, and Shapiro, Matthew. 2013. "What's Congress Doing on Twitter?" In 2013 Conference on Computer Supported Cooperative Work. San Antonio, TX.
Hoekstra, Valerie, and LaRowe, Nicholas. 2013. "Judging Nominees: An Experimental Test of the Impact of Qualifications and Divisiveness on Public Support for Nominees to the Federal Courts." Justice System Journal 34(1): 38–61.
Jelodar, Hamed, Wang, Yongli, Yuan, Chi, Feng, Xia, Jiang, Xiahui, Li, Yanchao, and Zhao, Liang. 2019. "Latent Dirichlet Allocation (LDA) and Topic Modeling: Models, Applications, a Survey." Multimedia Tools and Applications 78: 15169–15211.
Krewson, Christopher N. 2023. "Political Hearings Reinforce Legal Norms: Confirmation Hearings and Views of the United States Supreme Court." Political Research Quarterly 76(1): 418–431.
Krewson, Christopher N., and Schroedel, Jean R. 2020. "Public Views of the U.S. Supreme Court in the Aftermath of the Kavanaugh Confirmation." Social Science Quarterly 101(4): 1430–1441.
Laureate, Caitlin Doogan Poet, Buntine, Wray, and Linger, Henry. 2023. "A Systematic Review of the Use of Topic Models for Short Text Social Media Analysis." Artificial Intelligence Review 56(12): 14223–14255.
Lu, Jinghui, Henchion, Maeve M., and Namee, Brian Mac. 2019. "A Topic-Based Approach to Multiple Corpus Comparison." In Irish Conference on Artificial Intelligence and Cognitive Science.
Mamakos, Michalis, and Finkel, Eli J. 2023. "The Social Media Discourse of Engaged Partisans is Toxic Even When Politics are Irrelevant." PNAS Nexus 2(10): 1–8.
Mazoyer, Béatrice, Cagé, Julia, Hervé, Nicolas, and Hudelot, Céline. 2020. "A French Corpus for Event Detection on Twitter." In 12th Language Resources and Evaluation Conference, 6220–6227.
Mislove, Alan, Lehmann, Sune, Ahn, Yong-Yeol, Onnela, Jukka-Pekka, and Rosenquist, James. 2011. "Understanding the Demographics of Twitter Users." International AAAI Conference on Web and Social Media 5(1): 554–557.
Morris, Jonathan S. 2001. "Reexamining the Politics of Talk: Partisan Rhetoric in the 104th House." Legislative Studies Quarterly 26(1): 101–121.
Murakami, Akira, Thompson, Paul, Hunston, Susan, and Vajn, Dominik. 2017. "'What is This Corpus About?': Using Topic Modelling to Explore a Specialised Corpus." Corpora 12(2): 243–277.
Newport, Frank. 2018. "Americans Closely Divided on Kavanaugh Confirmation." Gallup. https://news.gallup.com/poll/243377/americans-closely-divided-kavanaugh-confirmation.aspx (Last accessed October 30, 2024).
Nicholson, Stephen P., and Hansford, Thomas G. 2014. "Partisans in Robes: Party Cues and Public Acceptance of Supreme Court Decisions." American Journal of Political Science 58(3): 620–636.
Nielsen, Finn Årup. 2011. "A New ANEW: Evaluation of a Word List for Sentiment Analysis in Microblogs." In ESWC 2011 Workshop on "Making Sense of Microposts." CEUR Workshop Proceedings.
O'Connor, Brendan, Balasubramanyan, Ramnath, Routledge, Bryan, and Smith, Noah. 2010. "From Tweets to Polls: Linking Text Sentiment to Public Opinion Time Series." In International AAAI Conference on Web and Social Media, 122–129.
Prior, Markus. 2007. Post-broadcast Democracy: How Media Choice Increases Inequality in Political Involvement and Polarizes Elections. Cambridge: Cambridge University Press.
Rogowski, Jon C., and Stone, Andrew R. 2021. "How Political Contestation Over Judicial Nominations Polarizes Americans' Attitudes Toward the Supreme Court." British Journal of Political Science 51(3): 1251–1269.
Russell, Annelise. 2021. "Minority Opposition and Asymmetric Parties? Senators' Partisan Rhetoric on Twitter." Political Research Quarterly 74(3): 615–627.
Sandhu, Mannila, Vinson, C. Danielle, Mago, Vijay K., and Giabbanelli, Philippe J. 2019. "From Associations to Sarcasm: Mining the Shift of Opinions Regarding the Supreme Court on Twitter." Online Social Networks and Media 14: 100054.
Sen, Maya. 2017. "How Political Signals Affect Public Support for Judicial Nominations: Evidence from a Conjoint Experiment." Political Research Quarterly 70(2): 374–393.
Shin, Jieun, and Thorson, Kjerstin. 2017. "Partisan Selective Sharing: The Biased Diffusion of Fact-checking Messages on Social Media." Journal of Communication 67(2): 233–255.
Truscott, Jake S. 2023. "Analyzing the Rhetoric of Supreme Court Confirmation Hearings." Journal of Law and Courts 12(1): 1–22.
Tumasjan, Andranik, Sprenger, Timm, Sandner, Philipp, and Welpe, Isabell. 2010. "Predicting Elections with Twitter: What 140 Characters Reveal about Political Sentiment." In International AAAI Conference on Web and Social Media.
Twitter. 2021. "Q4 and Fiscal Year 2020 Letter to Shareholders."
Wang, Hao, Can, Dogan, Kazemzadeh, Abe, Bar, François, and Narayanan, Shrikanth. 2012. "A System for Real-time Twitter Sentiment Analysis of 2012 U.S. Presidential Election Cycle." In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, 115–120.
Wojcik, Stefan, and Hughes, Adam. 2019. "Sizing Up Twitter Users." Pew Research Center.
Yaqub, Ussama, Chun, Soon, Atluri, Vijayalakshmi, and Vaidya, Jaideep. 2017. "Analysis of Political Discourse on Twitter in the Context of the 2016 U.S. Presidential Elections." Government Information Quarterly 34(4): 613–626.
Zaller, John R. 1992. The Nature and Origins of Mass Opinion. Cambridge: Cambridge University Press.
Zilis, Michael, and Blandau, Rachael. 2021. "Judicial Legitimacy, Political Polarization, and How the Public Views the Supreme Court." In Oxford Research Encyclopedia of Politics.
Figure 1. Density plots of sentiment for Kavanaugh (left panels), Barrett (center), and Jackson (right) separated by partisanship. Democrat (Republican) users shown in blue (light red) with overlapping density depicted in magenta. Sentiment to right (left) of zero depicts sum of positive (negative) sentiment words per tweet, and solid red (dashed blue) lines mark means for positive and negative words by Republicans and Democrats, respectively.

Table 1. All Combined Stages
