
7 - Litigation Outcome Prediction, Access to Justice, and Legal Endogeneity

from Part II - Legal Tech, Litigation, and the Adversarial System

Published online by Cambridge University Press:  02 February 2023

Edited by David Freeman Engstrom, Stanford University, California

Summary

The United States has a serious and persistent civil justice gap. Computationally driven litigation outcome prediction tools might offer a solution by reducing uncertainty and lowering the cost of legal services. Yet the field remains in its infancy: this chapter identifies the data, methodological, and financial limits that have impeded development in general and the potential to expand access to justice in particular. The chapter also raises a note of caution about unintended consequences. As outcome prediction reaches maturity, such tools might reify existing case outcome patterns and lock out litigants whose claims are novel or boundary-pushing. This legal endogeneity may reduce access to justice for some categories of would-be litigants and diminish the flexibility and adaptability that characterize common law reasoning. Empirical questions remain about the way(s) that outcome prediction might affect access to justice. Yet if developments continue, policymakers and practitioners should be ready to exploit the tools’ substantial potential to fill the civil justice gap while also guarding against the harms they might cause.

Publisher: Cambridge University Press
Print publication year: 2023

This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC BY-NC-ND 4.0 (https://creativecommons.org/cclicenses/).

The United States has a serious and persistent civil justice gap. In 1994, an American Bar Association study found that half of low- and moderate-income households had faced at least one recent civil legal problem, but only one-quarter to one-third turned to the justice system.Footnote 1 Twenty-three years later, a 2017 study by the country’s largest civil legal aid funder found that 71 percent of low-income households surveyed had experienced a civil legal need in the past year, but 86 percent of those problems received “inadequate or no legal help.”Footnote 2 Studies in individual states tell a similar story.Footnote 3

Unmet civil legal needs include a variety of high-stakes case types that affect basic safety, stability, and well-being: domestic violence restraining orders; health insurance coverage disputes; debt collection and relief actions; evictions and foreclosures; child support and custody cases; and education- and disability-related claims.Footnote 4 There is generally no legal right to counsel in these cases, and there are too few lawyers willing and able to offer representation at prices that low- and middle-income clients can afford.Footnote 5 In my home state of Georgia, for example, five or six rural counties – depending on the year – have no resident attorneys, and eighteen counties have only one or two.Footnote 6 These counties’ upper-income residents travel to the state’s urban centers for legal representation. Lower-income residents seek help from rotating legal aid lawyers who “ride circuit,” meeting clients for, say, two hours at the public library on the first Wednesday of the month.Footnote 7 Or they go without.

Can computationally driven litigation outcome prediction tools fill the civil justice gap? Maybe.

This chapter reviews the current state of outcome prediction tools and maps the ways they might affect the civil justice system. In Section 7.1, I define “computationally driven litigation outcome prediction tools” and explain how they work to forecast outcomes in civil cases. Section 7.2 outlines the theory: the potential for such tools to reduce uncertainty, thereby reducing the cost of civil legal services and helping to address unmet legal needs. Section 7.3 surveys the work that has been done thus far by academics, in commercial applications, and in the specific context of civil legal services for low- and middle-income litigants. Litigation outcome prediction has not reached maturity as a field, and Section 7.4 catalogs the data, methodological, and financial limits that have impeded development in general and the potential to expand access to justice in particular.

Section 7.5 steps back and confronts the deeper effects and the possible unintended consequences of the tools’ continued proliferation. In particular, I suggest that, even if all the problems identified in Section 7.4 can be solved and litigation outcome prediction tools can be made to work perfectly, their use raises important endogeneity concerns. Computationally driven tools might reify previous patterns, lock out litigants whose claims are novel or boundary-pushing, and shut down the innovative and flexible nature of common law reasoning. Section 7.6 closes by offering a set of proposals to stave off these risks.

Admittedly, the field of litigation prediction is not yet revolutionizing civil justice, whether for good or ill. Empirical questions remain about the way(s) that outcome prediction might affect access to justice. Yet if developments continue, policy makers and practitioners should be ready to exploit the tools’ substantial potential to fill the civil justice gap while also guarding against the harms they might cause.

7.1 Litigation Outcome Prediction Defined

I define “computationally driven litigation outcome prediction tools” as statistical or machine learning methods used to forecast the outcome of a civil litigation event, claim, or case. A litigation event may be a motion filed by either party; the relevant predicted outcome would be the judge’s decision to grant or deny, in full or in part. A claim or case outcome, on the other hand, refers to the disposition of a lawsuit, again in full or in part. My scope is civil only, though much of the analysis that follows could apply equally to criminal proceedings.

“Computationally driven” here refers to the use of statistical or machine learning models to detect patterns in past civil litigation data and exploit those patterns to predict, and to some extent explain, future outcomes. Just as actuaries compute the future risk of loss for insurance companies based on past claims data, so do outcome prediction tools attempt to compute the likelihood of future litigation events based on data gleaned from past court records.

In broad strokes, such tools take as their inputs a set of characteristics, also known as predictors, independent variables, or features, that describe the facts, legal claims, arguments, and authority, the people (judge, lawyers, litigants, expert witnesses), and the setting (location, court) of a case. Features might also come from external sources or be “engineered” by combining data. For example, the judge’s gender and years on the bench might be features, as well as the number of times the lawyers in the case had previously appeared before the same judge, the judge’s caseload, and local economic or crime data. Such information might be manually or computationally extracted from the unstructured text of legal documents and other sources – necessitating upstream text mining or natural language processing tasks – or might already be available in structured form.

These various features or case characteristics then become the inputs into one of many types of statistical or predictive models; the particular litigation outcome of interest is the target variable to be predicted.Footnote 8 When using such a tool, a lawyer would plug in the requested case characteristics and would receive an outcome prediction along with some measurement of error.
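To make the abstraction concrete, here is a minimal sketch, in Python with scikit-learn, of the kind of pipeline just described: structured case characteristics (features) are fed into a model trained to predict a target variable, here whether a motion is granted. Every feature name, the input file, and the model choice are hypothetical illustrations, not a description of any particular tool.

```python
# A minimal, hypothetical sketch of the pipeline described above, not a
# description of any actual commercial product. All feature names, the input
# file, and the model choice are assumptions for illustration only.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Structured case characteristics extracted from past dockets (hypothetical schema).
cases = pd.read_csv("past_motions.csv")
categorical = ["court", "motion_type", "claim_type", "judge_gender"]
numeric = ["judge_years_on_bench", "judge_caseload", "prior_appearances_before_judge"]
target = "motion_granted"  # 1 if the judge granted the motion, 0 otherwise

X, y = cases[categorical + numeric], cases[target]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = Pipeline([
    # One-hot encode categorical features; pass numeric features through unchanged.
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), categorical)],
        remainder="passthrough")),
    ("clf", GradientBoostingClassifier()),
])
model.fit(X_train, y_train)

# A lawyer "plugs in" a new case's characteristics and receives a predicted
# probability that the motion will be granted, plus a held-out error estimate.
new_case = X_test.iloc[[0]]
print("P(grant):", model.predict_proba(new_case)[0, 1])
print("Held-out accuracy:", model.score(X_test, y_test))
```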

7.2 Theory: Access-to-Justice Potential

In theory, computationally driven outcome prediction, if good enough, can supplement, stretch, and reduce the cost of legal services by reducing outcome uncertainty. As Gillian Hadfield summarizes, uncertainty comes from several sources.Footnote 9 Sometimes the law is simply unclear. Other times, actors, whether police officers, prosecutors, regulators, or courts, have deliberately been given discretion. Further, an individual may subjectively discount or increase the probability of liability due to “mistakes in the determination of factual issues, and errors in the identification of the applicable legal rule.”Footnote 10 One way to resolve these uncertainties is to pay a lawyer for advice – in particular, a liability estimate.

Given a large enough training set, a predictive model may detect patterns in how courts have previously resolved vagueness and how officials have previously exercised discretion. Further, such a tool could correct the information deficits and asymmetries that may produce mistaken liability estimates. Outcome prediction tools might also obviate the need for legal representation entirely, allowing potential and actual litigants to estimate their own chances of success and proceed pro se. This could be a substantial boon for access to justice. Of course, even an outcome-informed pro se litigant may fail to navigate complex court procedures and norms successfully.Footnote 11 Fully opening the courthouse doors to self-represented litigants might also require simplification of court procedures. Still, outcome prediction tools might go a long way toward expanding access to justice, whether by serving litigants directly or by acting as a kind of force multiplier for lawyers and legal organizations, particularly those squaring off against better-resourced adversaries.Footnote 12

A second way outcome prediction tools could, in theory, open up access to justice is by enhancing the ability of legal services providers to quantify, and manage, risk. Profit-driven lawyers, as distinguished from government-funded legal services lawyers, build portfolios of cases with an eye toward managing risk.Footnote 13 Outcome prediction tools may allow lawyers to allocate their resources more efficiently, wasting less money on losing cases and freeing up lawyer time and attention for more meritorious cases, or by constructing portfolios that balance lower- and higher-risk cases.

In addition, enterprising lawyers with a higher-risk appetite might use such tools to discover new areas of practice or potential claim types that folk wisdom would advise against.Footnote 14 To draw an example from my previous work, I studied the boom in wage-and-hour lawsuits in the early 2000s and identified as one driver of the litigation spike an influx of enterprising personal injury attorneys into wage-and-hour law.Footnote 15 One early mover was a South Florida personal injury attorney named Gregg Shavitz, who discovered his clients’ unpaid wage claims by accident, became an overtime specialist, and converted his firm into one of the highest-volume wage-and-hour shops in the country. This was before the wide usage of litigation outcome prediction tools. However, one might imagine that more discoveries like Gregg Shavitz’s could be enabled by computationally driven systems, rather than by happenstance, opening up representation for more clients with previously overlooked or under-resourced claim types.Footnote 16

I return to, and complicate, this possibility in Section 7.5, where I raise concerns about outcome prediction tools’ conservatism in defining winning and losing cases, which may reduce, rather than increase, access to justice – empirical questions that remain to be resolved.

7.3 Practice: Where Are We Now?

From theory, I now turn to practice, tracing the evolution and present state of litigation outcome prediction in scholarship, commercial applications, and tools developed specifically to serve low- and middle-income litigants. This Section also begins to introduce these tools’ limitations in their present form, a topic that I explore more fully in Section 7.4.

7.3.1 Scholarship

Litigation outcome prediction is an active scholarly research area, characterized by experimentation with an array of different data sets, modeling approaches, and performance measures. Thus far, no single dominant approach has emerged.

In a useful article, Kevin Ashley traces the history of the field to the work of two academics who used a machine learning algorithm called k-nearest neighbors in the 1970s to forecast the outcome of Canadian real estate tax disputes.Footnote 17 Since then, academic work has flourished. In the United States, academic interest has focused, variously, on decisions by the US Supreme Court,Footnote 18 federal appellate courts,Footnote 19 federal district courts,Footnote 20 immigration court,Footnote 21 state trial courts,Footnote 22 and administrative agencies.Footnote 23 Case types studied include employment,Footnote 24 asylum,Footnote 25 tort and vehicular,Footnote 26 and trade secret misappropriation.Footnote 27 Other scholars outside the United States have, in turn, developed outcome prediction tools focused on the European Court of Human Rights,Footnote 28 the International Criminal Court,Footnote 29 French appeals courts,Footnote 30 the Supreme Court of the Philippines,Footnote 31 lending cases in China,Footnote 32 labor cases in Brazil,Footnote 33 public morality and freedom of expression cases in Turkey’s Constitutional Court,Footnote 34 and Canadian employment and tax cases.Footnote 35 Some of this research has spun off into commercial products, discussed in the next section.

This scholarly work reflects all the strengths and weaknesses of the wider field. Though direct comparison among studies can be difficult given different datasets and performance measures, predictive performance has ranged from relatively modest marginal classification accuracyFootnote 36 to a very high F1 score of 98 percent in one study.Footnote 37
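For readers unfamiliar with these metrics, a brief gloss (mine, not the cited studies’ own exposition): classification accuracy is the share of cases whose outcome the model calls correctly, while the F1 score is the harmonic mean of precision (the share of predicted wins that were in fact wins) and recall (the share of actual wins the model identified):

$$
\mathrm{F1} = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}
$$

A 98 percent F1 score therefore implies near-perfect precision and recall on the evaluation data, which is part of why the research design concerns discussed next matter.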

That said, some high-performing academic approaches may suffer from research design flaws, as they appear to use the text of a court’s description of the facts of a case and the laws cited to predict the court’s ruling.Footnote 38 This is problematic, as judges or their clerks often write case descriptions and choose legal citations with pre-existing knowledge of the ruling they will issue. It is no surprise that these case descriptions predict outcomes. Further, much academic work is limited in its generalizability by the narrow band of cases used to train and test predictive models. This is due to inaccessible or missing court data, especially in the United States, a problem discussed further in Section 7.4. Finally, some researchers give short shrift to explanation, in favor of prediction.Footnote 39 Though a model may perform well in forecasting results, its practical and tactical utility may be limited if lawyers seeking to make representation decisions do not know what drives the predictions and cannot square them with their mental models of the world. As discussed further in Section 7.4, explainable predictions are becoming the new norm, as interpretations are now available for even the most “black box” predictive models. For the moment, however, explainability remains a sticking point.

7.3.2 Commercial Applications

The commercial lay of the land is similar to the academic landscape, with substantial activity and disparate approaches focused on particular case types or litigation events.

The Big Three legal research companies – LexisNexis, Westlaw, and Bloomberg Law – have all developed outcome prediction tools that sit within their existing suites of research and analysis tools. LexisNexis offers what it labels “judge and court analytics” as well as “attorney and law firm analytics.” In both spaces, the offerings are more descriptive than predictive – showing, for example, “a tally of total cases for a judge or court for a specific area of law to approximate experience on a motion like yours.”Footnote 40 The predictive jump is left to the user, who decides whether to adopt the approximation as a prediction or to distinguish it from the case at hand. LexisNexis provides further predictive firepower in the form of an acquired start-up, Lex Machina, which provides, among other output, estimates of judges’ likelihood of granting or denying certain motions in certain case types.Footnote 41 Westlaw offers similar options in its litigation and precedent analytics tools,Footnote 42 as does Bloomberg Law in its litigation analytics suite.Footnote 43 Fastcase, a newer entrant into the space, offers a different approach, allowing subscribers to build their own bespoke predictive and descriptive analyses, using tools and methodologies drawn from a host of partner companies.Footnote 44

A collection of smaller companies offers litigation outcome prediction focused on particular practice areas or litigation events. Docket Alarm, now owned by Fastcase, offers patent litigation analytics that produce “the likelihood of winning given a particular judge, technology area, law firm or party.”Footnote 45 In Canada, Blue J Tax builds on the scholarly work described above to offer outcome prediction in tax disputes,Footnote 46 while in the United Kingdom companies like CourtQuant “predict [case] outcome and settlement probability.”Footnote 47

A final segment of the industry consists of law firms’ and other players’Footnote 48 homegrown, proprietary tools. On the plaintiffs’ side, giant personal injury firm Morgan & Morgan has developed “a ‘Google-style’ operation” in which the firm “evaluate[s] ‘actionable data points’ about personal injury settlements or court proceedings” and uses the insight to “work up a case accordingly – and … do that at scale.”Footnote 49 Defense-side firms are doing the same. Dentons, the world’s largest firm, even spun off an independent analytics lab and venture firm to fund development in outcome prediction and other AI-enabled approaches to law.Footnote 50

It is difficult to assess how well any of these tools performs, as access is expensive or unavailable, the feature sets used as inputs are not always clear, and the algorithms that power the predictions are hidden. I raise some concerns about commercial model design in Section 7.4 – in particular, reliance on lawyer identity as a predictor – and, as above, return to the perpetual problem of inaccessible and missing court data.

7.3.3 Outcome Prediction for Low- and Middle-Income Litigants

For reasons explored further below, examples are scarce of computationally driven litigation outcome prediction tools engineered specifically for the kinds of cases noted in this chapter’s opening. Philadelphia’s civil legal services provider, Community Legal Services, uses a tool called Expungement Generator (EG) to determine whether criminal record expungement is possible and assist in completing the paperwork.Footnote 51 The EG does not predict outcomes, but its automated approach enables efficiency gains for an organization that prepares thousands of expungement petitions per year.Footnote 52 Similarly, an application developed in the Family Law Clinic at Duquesne University School of Law prompts litigants in child support cases to answer a set of questions, which the tool then evaluates to determine “if there is a meritorious claim for appeal to be raised” under Pennsylvania law.Footnote 53 As with the EG, the Duquesne system does not appear to use machine learning techniques, but rather to apply a set of mechanical rules. The clinic plans prediction as a next step, however, and is developing a tool that analyzes winning arguments in appellate cases in order to guide users’ own arguments.Footnote 54

7.4 Present Limits

Having surveyed the state of the outcome prediction field, I now step back and assess its limits. As David Freeman Engstrom and Jonah Gelbach rightly concluded in earlier work: “[L]egal tech tools will arrive sooner, and advance most rapidly, in legal areas where data is abundant, regulated conduct takes repetitive and stereotypical forms, legal rules are inherently stable, and case volumes are such that a repeat player stands to gain financially by investing.”Footnote 55 Many of the commercial tools highlighted above fit this profile. Tax-oriented products exploit relatively stable rules; Morgan & Morgan’s internal case evaluation system exploits the firm’s extraordinarily high case volumes.

Yet, as noted above, data’s “abundance” is an open question, as is data quality. Methodological problems may also hinder these tools’ development. In the access to justice domain, the questions of investment incentives and financial gains loom large as well. The remainder of this Section addresses these limitations.

7.4.1 Data Limitations

Predictive algorithms require access to large amounts of data from previous court cases for model training, but such bulk data is not widely or freely available in the United States from the state or federal courts or from administrative agencies that have an adjudicatory function.Footnote 56 The Big Three have invested substantial funds in compiling private troves of court documents and judicial decisions, and jealously guard those resources with high user fees, restrictive terms and conditions, and threatened and actual litigation.Footnote 57

Data inaccessibility creates serious problems for outcome prediction tools designed to meet the legal needs of low- and middle-income litigants.Footnote 58 Much of this litigation occurs in state courts, where data is sometimes poorly managed and siloed in multiple systems.Footnote 59 Moreover, there is little money in practice areas like eviction defense and public benefits appeals, in which clients, by definition, are poor. Thus, data costs are high, and financial incentives for investment in research and development are low.

Even the products offered by the monied Big Three, however, suffer from data problems. With large companies separately assembling their own private data repositories, coverage varies widely, producing remarkable disagreement about basic facts. A recent study revealed that the answers supplied to the question “How many opinions on motions for summary judgment has Judge Barbara Lynn (N.D. Tex.) issued in patent cases?” ranged from nine to thirty-two, depending on the legal research product used.Footnote 60 This is an existential problem for the future of litigation outcome prediction, as predictions are only as good as the data on which they are built.Footnote 61

A final data limitation centers on the challenges of causal explanation. Even if explainable modeling approaches are used, the case characteristics that appear to be the strongest predictors of outcomes may not, in fact, be actionable. For instance, when a predictive tool relies on attorney identity as a feature, the model’s prediction may actually be free-riding on the attorney’s own screening and selection decisions. In other words, if the presence of Lawyer A in a case is strongly predictive of a win for her clients, Lawyer A’s skills as a litigator may not be the true cause. The omitted, more predictive variable is likely the strength of the merits, and Lawyer A’s skill at assessing those merits up-front. Better data could enable better model construction, avoiding these kinds of proxy variable traps.

7.4.2 Methodological Limitations

Sitting atop these data limitations are two important methodological limitations. First, as noted above, even if predictive tools do a good job of forecasting the probable outcome of a litigation event, they may only poorly explain why the predicted outcome is likely to occur. Explanation is important for a number of related reasons, among them engendering confidence in predictions, enabling bias and error detection, and respecting the dignity of people affected by prediction.Footnote 62 Indeed, the European Union’s General Data Protection Regulation (GDPR) has established what some scholars have labeled a “right to an explanation,” consisting of a right “not to be subject to a decision based solely on automated processing” and various rights to notice of data collection.Footnote 63 Though researchers are actively developing explainable AI that can identify features’ specific importance to a prediction and generate counterfactual predictions if features change value,Footnote 64 the field has yet to converge on a single set of explainability practices, and commercial approaches vary widely.
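The tree-based attribution methods cited in note 64 are implemented in the open-source shap package; the sketch below, which assumes a tree-based classifier already trained on a numeric feature matrix (all names hypothetical), shows the kind of per-prediction feature attribution such techniques produce.

```python
# A hedged sketch of post hoc explanation using the shap package (the Lundberg
# et al. line of work cited above). Assumes `clf` is a fitted tree-based
# classifier and `X_test` is a pandas DataFrame of numeric features; the
# feature names printed are whatever columns that DataFrame contains.
import shap

explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_test)  # one attribution per feature, per case

# For a single case, rank the characteristics by how strongly each pushed the
# prediction toward grant (+) or deny (-). (Output shape can vary by model type.)
case_idx = 0
contributions = sorted(
    zip(X_test.columns, shap_values[case_idx]),
    key=lambda pair: abs(pair[1]),
    reverse=True,
)
for feature, value in contributions:
    print(f"{feature}: {value:+.3f}")
```

Counterfactual explanation tools, such as those surveyed in the Alibi documentation cited above, take a different tack: they search for the smallest change to a case’s features that would flip the prediction.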

Second, outcome prediction is limited by machine and deep learning algorithms’ inability to reason by analogy. Legal reasoning depends on analogical thinking: the ability to align one set of facts to another and guess at the likely application of the law, given the factual divergences. However, teaching AI to reason by analogy is a cutting-edge area of computer science research, and it is far from well established. As computer scientist Melanie Mitchell explains, “‘Today’s state-of-the-art neural networks are very good at certain tasks … but they’re very bad at taking what they’ve learned in one kind of situation and transferring it to another’ – the essence of analogy.”Footnote 65 There is a famous analogical example in text analytics, where a natural language processing technique known as word embedding, when trained on an enormous corpus of real-world text, is able to produce the answer “queen” when presented with the formula “king minus man plus woman.”Footnote 66 The jump from this parlor trick to full-blown legal reasoning, though, is substantial.
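In code, the word-embedding example reads something like the following sketch; the specific pretrained model name is an assumption, and any comparable publicly distributed embedding would behave similarly.

```python
# A sketch of the word-embedding analogy "parlor trick" described above, using
# the gensim library's downloader and a pretrained GloVe model (the model name
# is an assumption; comparable pretrained embeddings give similar results).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # downloads pretrained word vectors

# king - man + woman ≈ ?
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
# For common pretrained embeddings, the top answer is typically "queen".
```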

In short, scaling up computationally driven litigation outcome prediction tools in a way that would fill the civil justice gap would require access to more and better data and methodological advances. Making bulk federal and state court and administrative agency data and records freely and easily accessible would be a very good step.Footnote 67 Marshaling resources to support methods and tool development would be another. Foundation funding is already a common ingredient in efforts to fill the civil justice gap. I propose that large law firms pitch in as well. All firms on the AmLaw 100 could pledge a portion of their pro bono budgets toward the development of litigation outcome prediction tools to be used in pro bono and low bono settings. The ABA Foundation might play a coordinating and convening role, as it is already committed to access-to-justice initiatives. Such an effort could have a much broader impact than firms’ existing pro bono activities, which tend to focus on representation in single cases. It might also jump-start additional interest from the Big Three and other commercial competitors, who might invest more money in improving algorithms’ predictive performance and spin off free or low-cost versions of their existing suites of tools.

7.5 Unintended Consequences

Time will tell whether, when, and how the data and methodological problems identified in Section 7.4 will be solved. Assuming that they are, and that litigation outcome prediction tools can reliably generate accurate forecasts, there still may be reason for caution.

This Section identifies two possible unintended consequences of outcome prediction tools, which could develop alongside the salutary access to justice effects described in Section 7.2: harm to would-be litigants who are denied representation because their claims are novel or score as less viable according to predictive tools, and harm to the common law system as a whole.

Here, the assumption is that such tools have access to ample data, account for all relevant variables, and are transparent and explainable – in other words, the tools work as intended to learn from existing patterns in civil litigation outcomes and reproduce those patterns as outcome predictions. Yet it is this very reproductive nature that is cause for concern.

7.5.1 Harms to Would-Be Litigants

Consider the facts of Elisa B. v. Superior Court,Footnote 68 a case decided by the California Supreme Court in 2005. Emily B. sought child support from her estranged partner, Elisa B., for twins whom Emily had conceived via artificial insemination of her eggs during her relationship with Elisa. If Emily walked into a lawyer’s office seeking help with her child support action, the lawyer might be interested in the case’s viability: How often have similar fact patterns come before California courts, and what was their outcome? The answers might inform the lawyer’s decision about whether to offer representation.

In real life, this case was one of first impression in California. The governing law, the Uniform Parentage Act, referred to “mother” and “father” as the potential parents.Footnote 69 Searching for relevant precedent, the Elisa B. court reasoned by analogy from previous cases that involved, variously, three potential parents (one man and two women), non-biological fathers, non-biological mothers, and a woman who raised her half-brother as her son.Footnote 70 From this and other precedent, the court cobbled together a new legal rule that required Elisa B. to pay child support for her and Emily B.’s children.

I am doubtful that an outcome prediction tool would have reached this same conclusion. The number of analogical jumps that the court made would seem to be outside the capabilities of machine and deep learning, even assuming methodological advancement.Footnote 71 Further, judges’ decisions about what prior caselaw to draw upon and how many analogical leaps to make may be influenced by factors like ideology and public opinion, which could be difficult to model well. Emily B.’s claim would likely receive a very low viability score.Footnote 72

A similar cautionary tale comes from my own previous work with Camille Gear Rich and Zev Eigen on attorneys’ non-computational assessments of claim viability. We documented plaintiffs’ employment attorneys’ dim view of the likelihood of success for employment discrimination claims and their shifting of case selection decisions away from discrimination and toward easier-to-prove wage-and-hour claims.Footnote 73 One result of this shift, we observed, was that even litigants with meritorious discrimination claims were unable to find legal representation. That work happened in 2014 and 2015, before litigation outcome prediction tools were widely available, and I am not aware of subsequent empirical studies on the effect of such tools on lawyers’ intake decisions. Yet if lawyers were already using their intuition to learn from past cases and predict future outcomes, pre-AI, machine and deep learning tools could just cement these same patterns in place.

Thus, in this view, as civil litigation outcomes become more predictable, claims become commoditized. Outlier claims and clients like Emily B. may become less representable, much like high-loss risks become less insurable. While access to justice on the whole may increase, the courthouse doors may be effectively closed to some classes of potential clients who seek representation for novel or disfavored legal claims or defenses.Footnote 74

Further, to the extent that representation is denied to would-be litigants because of their own negative personal histories, ingested by a model as data points, litigation outcome prediction tools can reduce people to their worst past acts and prevent them from changing course. Take as an example a tenant with an old criminal record trying to fight an eviction, whose past conviction reduces her chance of winning according to an algorithmic viability assessment. This may be factually accurate – her criminal record may actually make eviction defense more challenging – but a creative lawyer might see other aspects of her case that an algorithmic assessment might miss. By reducing people to feature sets and exploiting the features that are most predictive of outcomes, but perhaps least representative of people’s full selves, computational tools enact dignitary harm. In the context of low-income litigants facing serious and potentially destabilizing court proceedings, and who are algorithmically denied legal representation, such tools can also cause substantial economic and social harm, reducing social mobility and locking people into place.

Indeed, machine and deep learning methods are inherently prone to what some researchers have called “value lock-in.”Footnote 75 All data is historical in the sense that it captures points in time that have passed; all machine and deep learning algorithms find patterns in historical data as a way to predict the future. This methodological design reifies past practices and locks in past patterns. As machine learning researcher Abeba Birhane and her collaborators point out, then, machine learning is not “value-neutral.”Footnote 76 And as AI pioneer Joseph Weizenbaum observed, “the computer has from the beginning been a fundamentally conservative force which solidified existing power: in place of fundamental social changes … the computer renders technical solutions that allow existing power hierarchies to remain intact.”Footnote 77 It is no accident that the anecdotes above involve a lesbian couple, employment discrimination claimants, and a tenant with a criminal record: the fear is that would-be litigants like these with the least power historically become further disempowered at the hands of computational methods.

Yet as Section 7.2 suggested, a different story might also be possible: More accurate predictions might enable lawyers to fill their case portfolios with low-risk sure winners as hedges when taking on riskier cases like Elisa B., or might help them discover and invest in previously under-resourced practice areas. At this stage, whether predictive tools would increase or decrease representation for outlier claims and clients is an open empirical question, which researchers and policy makers should work to answer as data and methods improve and outcome prediction tools become more widely used.

7.5.2 Harms to the System

I turn now to the second potential harm caused by computationally driven litigation outcome prediction: harm to the common law system itself.Footnote 78 As Charles Barzun explains, common-law reasoning “contains seeds of radicalism [in that] the case-by-case process by which the law develops means it is always open to revision. And even though its official position is one of incremental change … doctrine [is] constantly vulnerable to being upended.”Footnote 79 Barzun points to Catharine MacKinnon’s invention of sexual harassment doctrine out of Title VII’s cloth as an example of a “two-way process of interaction” between litigants, representing their real-world experience, and the courts, interpreting the law, in a shared creative process “in which the meaning and scope of application of the statute changes over time.”Footnote 80

If lawyers rely too heavily on litigation outcome prediction tools, which reproduce past patterns, the stream of new fact presentations and legal arguments flowing into the courts dries up. Litigation outcome prediction tools may produce a sort of super stare decisis by narrowing lawyers’ case selection preferences to only those case, claim, and client types that have previously appeared and been successful in court. Yet stare decisis is only one aspect of our common law system. Another competing characteristic is flexibility: A regular influx of new cases with new fact patterns and legal arguments enables the law to innovate and adapt. In other words, noise – as differentiated from signal – is a feature of the common law, not a bug. Outcome prediction tools that are too good at picking up signals and ignoring noise eliminate the structural benefits of the noise, and privilege stare decisis over flexibility by shaping the flow of cases that make their way to court.

Others, particularly Engstrom and Gelbach, have made this point, suggesting that prediction

comes at a steep cost, draining the law of its capacity to adapt to new developments or to ventilate legal rules in formal, public interpretive exercises …. The system also loses its legitimacy as a way to manage social conflict when the process of enforcing collective value judgments plays out in server farms rather than a messy deliberative and adjudicatory process, even where machine predictions prove perfectly accurate.Footnote 81

The danger is that law becomes endogenous and ossified. “Endogenous,” to repurpose a concept introduced by Lauren Edelman, means that the law’s inputs become the same as its outputs and “the content and meaning of law is determined within the social field that it is designed to regulate.”Footnote 82 “Ossified,” to borrow from Cynthia Estlund, means that the law becomes “essentially sealed off … both from democratic revision and renewal from local experimentation and innovation.”Footnote 83

7.6 Next Steps

As noted above, whether any of the unintended consequences just outlined will come to pass – and, indeed, whether access to justice improvements will come to pass as well – turns on empirical questions. Given the problems and limitations identified in Section 7.4, will litigation outcome prediction tools actually work well enough either to achieve their potential benefits or to cause their potential harms? My assessment of the present state of the field suggests there is a long way to go before we reach either set of outcomes. But as the field matures, we can build in safeguards against the endogeneity risks and harms I identify above through technical, organizational, and policy interventions.

First, on the technical side, computer and data scientists, and the funders who make their work possible, should invest heavily in improving algorithmic analogical reasoning. Without the ability to reason by analogy, outcome predictors not only will miss an array of possible positive predictions, but they will also be systematically biased against fact patterns like Emily B.’s, which present issues of first impression.

Further on the technical front, developers could purposefully over-train predictive algorithms on novel, but successful, fact patterns and legal arguments in order to nudge the system off its path and make positive predictions possible even for cases that fall outside the norm. This idea is adapted from OpenAI’s work in nudging its state-of-the-art language model, GPT-3, away from its “harmful biases, such as outputting discriminatory racial text” learned from its training corpus, by over-exposing it to counter texts.Footnote 84
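One way this over-training idea might be operationalized, purely as an illustration, is by up-weighting novel-but-successful past cases during training so the model does not treat them as ignorable noise; the novelty flag and the weight multiplier in the sketch below are assumptions, not features of any existing system.

```python
# An illustrative sketch (not a proposal of any existing system): up-weight
# cases flagged as novel-but-successful so the learner does not discount them
# as noise. `X` is a numeric feature matrix, `y` the outcomes, and `is_novel`
# a hypothetical boolean flag (e.g., assigned by lawyers or a novelty detector).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def fit_with_novelty_boost(X, y, is_novel, boost=5.0):
    weights = np.where(is_novel, boost, 1.0)  # novel cases count 5x in training
    clf = GradientBoostingClassifier()
    clf.fit(X, y, sample_weight=weights)
    return clf
```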

Technical fixes focus on outcome prediction tools’ production side. Organizational fixes target the tools’ consumers: the lawyers, law firms, and other legal organizations that might use them to influence case selection. I propose here that no decision should be made exclusively on the basis of algorithmic output. This guards against the dignitary and other real harms described above, as would-be litigants are treated as full people rather than feature sets. This also parallels the GDPR’s explanation mandate, though I suggest it here as an organizational practice that is baked into legal organizations’ decision-making processes.Footnote 85

Finally, I turn to policy. The story above assumes a profit-driven lawyer as the user of outcome prediction tools. Of course, there are other possible motivations for a lawyer’s case selection decisions, such as seeking affirmatively to establish a new interpretation of the law or right a historic wrong. These cause lawyers, from all points on the ideological spectrum, may be particularly likely to take on seemingly high-risk claim or party types, which receive low computationally determined viability scores. Government lawyers, too, may function as cause lawyers, pushing legal arguments, in accordance with administration position, that diverge from courts’ past practices. Government agencies should study trends in private attorneys’ use of litigation outcome prediction tools in the areas in which they regulate, and should make their own case selection decisions to fill gaps in representation.Footnote 86

7.7 Conclusion

This chapter has explored the consequences of computationally driven litigation outcome prediction tools for the civil justice system, with a focus on increasing access to justice. It has mapped the current state of the outcome prediction field in academic work and commercial applications, as well as in pro bono and low bono practice settings. It has also raised concerns about unintended consequences for litigants and for our legal system as a whole.

I conclude that there is plenty of reason for “techno-optimism,” to use Tanina Rostain’s term, about the potential for computationally driven litigation outcome prediction tools to close the civil justice gap.Footnote 87 However, reaching that optimistic future, while also guarding against potential harms, will require substantially more money and data, continued methodological improvement, careful organizational implementation, and strategic deployment of government resources.

Footnotes

Thanks to Madison Gibbs for her excellent research assistance and to Albert Yoon, Dan Linna, David Freeman Engstrom, Peter Molnar, and Sanjay Srivastava for insightful conversations on these topics.

1 Am. Bar Ass’n, Consortium on Legal Services and the Public, Legal Needs and Civil Justice: A Survey of Americans, Major Findings from the Comprehensive Legal Needs Study 27 (1994), https://legalaidresearch.org/2020/03/03/legal-needs-and-civil-justice-a-survey-of-americans-major-findings-from-the-comprehensive-legal-needs-study/.

2 Legal Servs. Corp., The Justice Gap: Measuring the Unmet Civil Legal Needs of Low-Income Americans 6 (2017), https://www.lsc.gov/our-impact/publications/other-publications-and-reports/justice-gap-report; see also generally Rebecca L. Sandefur & James Teufel, Assessing America’s Access to Civil Justice Crisis, 11 U.C. Irvine L. Rev. 753 (2021).

3 See, e.g., N.C. Equal Access to Justice Comm’n & NC Equal Justice All., In Pursuit of Justice, An Assessment of the Civil Legal Needs of North Carolina 4 (2021), https://ncequaljusticealliance.org/assessment/; Victor D. Quintanilla & Rachel Thelin, Indiana Civil Legal Needs Study and Legal Aid System Scan 6 (2019), https://www.repository.law.indiana.edu/facbooks/206/; Legal Servs. Corp., The Justice Gap 53 n.6 (collecting additional state studies).

4 Legal Servs. Corp., The Justice Gap, at 7.

5 Kathryn A. Sabbeth, Housing Defense as the New Gideon, 41 Harv. J.L. & Gender 55, 56–57 (2018); see also Pamela Bookman & Colleen F. Shanahan, A Tale of Two Civil Procedures, 122 Colum. L. Rev. (forthcoming 2022) (describing state courts as “lawyerless”).

6 Legal Profession, New Ga. Encyclopedia (Aug. 11, 2020), https://www.georgiaencyclopedia.org/articles/government-politics/legal-profession/; Katheryn Hayes Tucker, Here Are the Six Georgia Counties That Have No Lawyers, The Daily Report (Jan. 8, 2015), https://www.law.com/dailyreportonline/almID/1202714378330/Here-Are-the-Six-Georgia-Counties-That-Have-No-Lawyers/?/.

7 Tucker, Six Georgia Counties.

8 See Chapter 3 in this volume.

9 Gillian K. Hadfield, Weighing the Value of Vagueness: An Economic Perspective on Precision in the Law, 81 Cal. L. Rev. 541 (1994).

10 Id.

11 Bookman & Shanahan, A Tale of Two Civil Procedures, at 1617; Rebecca L. Sandefur, Elements of Professional Expertise: Understanding Relational and Substantive Expertise through Lawyers’ Impact, 80 Am. Soc. Rev. 909, 915–16 (2015).

12 David Freeman Engstrom & Jonah B. Gelbach, Legal Tech, Civil Procedure, and the Future of Adversarialism, 169 U. Pa. L. Rev. 1001, 1072 (2020); John O. McGinnis & Russell G. Pearce, The Great Disruption: How Machine Intelligence Will Transform the Role of Lawyers in the Delivery of Legal Services, 82 Fordham L. Rev. 3041, 3049 (2014).

13 Herbert M. Kritzer, Contingency Fee Lawyers as Gatekeepers in the Civil Justice System, 81 Judicature 22, 23 (1997). While Kritzer focuses on contingency-fee practice, his core insight extends to cases brought under fee-shifting statutes or flat-fee arrangements as well, where lawyers are likewise balancing outlay of resources against probable recovery.

14 Thanks to David Freeman Engstrom for suggesting this possibility.

15 Charlotte S. Alexander, Litigation Migrants, 56 Am. Bus. L.J. 235 (2019).

16 To carry the thought exercise further, perhaps third-party litigation financing firms could fund these sorts of risky case-selection strategies, which solo lawyers or small firms might otherwise be hesitant to adopt. Center on the Legal Profession, Harvard Law School, The Practice, Investing in Legal Futures, Litig. Fin., Sept.–Oct. 2019.

17 Kevin Ashley, A Brief History of the Changing Roles of Case Prediction in AI and Law, 36 Law in Context: A Socio-Legal Journal 93, 96 (2019) (citing Ejan Mackaay & Pierre Robillard, Predicting Judicial Decisions: The Nearest Neighbor Rule and Visual Representation of Case Patterns, 3 Datenverarbeitung im Recht 302 (1974)).

18 See, e.g., Daniel Martin Katz, Michael J. Bommarito II & Josh Blackman, A General Approach for Predicting the Behavior of the Supreme Court of the United States, 12 PLoS ONE (2017).

19 See, e.g., Sergio Galletta, Elliott Ash & Daniel L. Chen, Measuring Judicial Sentiment: Methods and Application to U.S. Circuit Courts (Aug. 19, 2021) (unpublished manuscript), https://ssrn.com/abstract=3415393.

20 Elizabeth C. Tippett et al., Does Lawyering Matter? Predicting Judicial Decisions from Legal Briefs, and What That Means for Access to Justice, 101 Texas L. Rev. (forthcoming 2022).

21 See, e.g., Matthew Dunn et al., Early Predictability of Asylum Court Decisions, 2017 Proc. ACM Conf. on AI & Law.

22 See, e.g., Devin J. McConnell et al., Case-Level Prediction of Motion Outcomes in Civil Litigation, 18 Proc. Int’l Conf. on A.I. & Law 99 (2021).

23 See, e.g., Karl Branting et al., Semi-Supervised Methods for Explainable Legal Prediction, 17 Proc. Int’l Conf. on A.I. & Law 22 (2019).

24 Tippett et al., Does Lawyering Matter?.

25 Dunn et al., Early Predictability of Asylum Court Decisions.

26 McConnell et al., Case-Level Prediction of Motion Outcomes.

27 Kevin D. Ashley & Stefanie Brüninghaus, Automatically Classifying Case Texts and Predicting Outcomes, 17 A.I. L. 125 (2009).

28 See, e.g., Nikolaos Aletras et al., Predicting Judicial Decisions of the European Court of Human Rights: A Natural Language Processing Perspective, 2(93) PeerJ Comput. Sci. (2016).

29 Fabien Tarissan & Raphaëlle Nollez-Goldbach, Analysing the First Case of the International Criminal Court from a Network-Science Perspective, 4 J. Complex Networks 616 (2016).

30 Paul Boniol et al., Performance in the Courtroom: Automated Processing and Visualization of Appeal Court Decisions in France, ArXiv (2020), https://arxiv.org/abs/2006.06251.

31 Michael Benedict L. Virtucio et al., Predicting Decisions of the Philippine Supreme Court Using Natural Language Processing and Machine Learning, 42 IEEE Int’l Conf. on Comput. Software & Applications 130 (2018).

32 Luyao Ma et al., Legal Judgment Prediction with Multi-Stage Case Representation Learning in the Real Court Setting, 44 Proc. Int’l ACM SIGIR Conf. on Rsch. & Dev. in Info. Retrieval 993 (2021).

33 Andre Lage-Freitas et al., Predicting Brazilian Court Decisions, arXiv (2019), https://arxiv.org/abs/1905.10348.

34 Mehmet Fatih Sert, Engin Yıldırım & İrfan Haşlak, Using Artificial Intelligence to Predict Decisions of the Turkish Constitutional Court, 2021 Soc. Sci. Comp. Rev. 1.

35 Maxime C. Cohen et al., The Use of AI in Legal Systems: Determining Independent Contractor vs. Employee Status (Jan. 28, 2022) (unpublished manuscript) https://ssrn.com/abstract=4013823; Yifei Yin, Farhana Zulkernine & Samuel Dahan, Determining Worker Type from Legal Text Data Using Machine Learning, 2020 IEEE Intl. Conf. on Dependable, Autonomic & Secure Computing; Benjamin Alarie et al., Using Machine Learning to Predict Outcomes in Tax Law, 58 Can. Bus. L.J. 231 (2016).

36 See, e.g., McConnell et al., Case-Level Prediction of Motion Outcomes, at 104 (reporting “maximum classification accuracy of 0.644” as compared to a naïve baseline of 0.501 using AdaBoost, a decision-tree-based classification method, and a variety of preprocessing steps applied to the input text).

37 See, e.g., Octavia-Maria Şulea et al., Exploring the Use of Text Classification in the Legal Domain, arXiv (2017), https://arxiv.org/abs/1710.09306 (reporting “results of 98% average F1 score in predicting a case ruling” of the French Supreme Court).

38 See id.

39 Compare, e.g., Katz et al., A General Approach for Predicting the Behavior of the Supreme Court (focusing exclusively on predictive performance), with Ma et al., Legal Judgment Prediction, at 89 (presenting interpretability strategy for “black box” neural network predictions). This may be an unfair critique, as prediction and explanation can be two entirely separate goals. A classic example illustrates the difference: A ruler who wants to know whether to spend money for a rain dance to break a drought cares about causation. The same ruler who wants to know whether it will rain tomorrow so she can bring an umbrella cares only about prediction. Will it rain or not? For more discussion, see Jon Kleinberg et al., Prediction Policy Problems, 105 Am. Econ. Rev. 491 (2015).

43 Legal Analytics, Bloomberg L., https://pro.bloomberglaw.com/legal-analytics/.

44 AI Sandbox, Fastcase, https://www.fastcase.com/sandbox/.

48 Here, legal operations service providers like Ernst & Young and other accounting and consulting firms, third-party litigation finance companies, and insurance companies that insure against litigation costs are all players invested in tech- and often AI-fueled outcome prediction. See, e.g., Apex Litig. Fin., https://www.apexlitigation.com/.

49 Christine Schiffner, Inside the “Google-Style” Tech Hub Driving Plaintiffs Firms’ Growth, Nat’l L.J. (Nov. 15, 2021), https://www.law.com/nationallawjournal/2021/11/15/inside-the-google-style-tech-hub-driving-plaintiffs-firms-growth/.

50 Dentons Launches Nextlaw Labs and Creates Legal Business Accelerator, Dentons (May 19, 2015) https://www.dentons.com/en/about-dentons/news-events-and-awards/news/2015/may/dentons-launches-nextlaw-labs-creates-legal-business-accelerator.

51 NateV, Expungement-Generator, GitHub, https://github.com/NateV/Expungement-Generator.

52 Id.; Rana Fayez, Meet the Disruptor: Michael Holland, Phila. Citizen (May 3, 2016), https://thephiladelphiacitizen.org/disruptor-michael-hollander-expungement-generator/.

53 Katherine L. Norton, Mind the Gap: Technology as a Lifeline for Pro Se Child Custody Appeals, 58 Duq. L. Rev. 82, 91 (2020).

54 Id.

55 Engstrom & Gelbach, Legal Tech, at 1029.

56 Charlotte S. Alexander & Mohammad Javad Feizollahi, On Dragons, Caves, Teeth, and Claws: Legal Analytics and the Problem of Court Data Access, in Computational Legal Studies: The Promise and Challenge of Data-Driven Legal Research (Ryan Whalen ed., 2020).

57 Alaina Lancaster, Judge Rejects ROSS Intelligence’s Dismissal Attempt of Thomson Reuters Suit over Westlaw Content, Law.com (Mar. 29, 2021), https://www.law.com/therecorder/2021/03/29/judge-rejects-ross-intelligences-dismissal-attempt-of-thomson-reuters-suit-over-westlaw-content/.

58 See Chapters 6, 14, and 15 in this volume.

59 See Chapter 13 in this volume.

61 Engstrom and Engstrom’s contribution to this volume identifies yet another data limitation: the absence of reliable data on cases and claims that are settled, where the contents of the settlement are unavailable. See Chapter 6 in this volume.

62 Margot E. Kaminski, The Right to Explanation, Explained, 34 Berkeley Tech. L.J. 189 (2019).

63 Id.

64 See, e.g., Scott M. Lundberg et al., From Local Explanations to Global Understanding with Explainable AI for Trees, 2 Nature Mach. Intelligence 56 (2020); What Is Explainability? Alibi, https://docs.seldon.io/projects/alibi/en/stable/overview/high_level.html#what-is-explainability.

65 John Pavlus, The Computer Scientist Training AI to Think with Analogies, Quanta Mag. (July 14, 2021), https://www.quantamagazine.org/melanie-mitchell-trains-ai-to-think-with-analogies-20210714/; see also Katie Atkinson & Trevor Bench-Capon, Reasoning with Legal Cases: Analogy or Rule Application? 17 Proc. Int’l Conf. on Artificial Intelligence & L. 12 (June 2019).

66 Emerging Technology from the arXiv, King – Man + Woman = Queen: The Marvelous Mathematics of Computational Linguistics, MIT Tech. Review (Sept. 17, 2015), https://www.technologyreview.com/2015/09/17/166211/king-man-woman-queen-the-marvelous-mathematics-of-computational-linguistics/.

67 Adam R. Pah et al., How to Build a More Open Justice System, 369(6500) Science (2020).

68 Elisa B. v. Sup. Ct., 117 P.3d 660 (Cal. 2005).

69 Id. at 664 (“The UPA defines the ‘[p]arent and child relationship’ as ‘the legal relationship existing between a child and the child’s natural or adoptive parents’ …. The term includes the mother and child relationship and the father and child relationship.”).

70 Id. at 667.

71 Atkinson & Bench-Capon, Reasoning with Legal Cases.

72 Prediction tools become like Oliver Wendell Holmes’ Vermont justice: “There is a story of a Vermont justice of the peace before whom a suit was brought by one farmer against another for breaking a churn. The justice took time to consider, and then said that he had looked through the statutes and could find nothing about churns, and gave judgment for the defendant.” Oliver Wendell Holmes, Jr., The Path of the Law, 10 Harv. L. Rev. 457 (1897).

73 Charlotte S. Alexander, Zev Eigen & Camille Gear Rich, Post-Racial Hydraulics: The Hidden Dangers of the Universal Turn, 91 N.Y.U. L. Rev. 1 (2016).

74 A future of “legal singularity,” in which all outcomes are perfectly predictable, is not necessary for my argument here. Benjamin Alarie, The Path of the Law: Towards Legal Singularity, 66 U. Toronto L.J. 443 (2016). Even prediction that works well for some subclass of cases will change lawyers’ preferences for those cases over other, less certain cases. This has consequences for those clients’ civil legal needs.

75 See, e.g., Laura Weidinger et al., Ethical and Social Risks of Harm from Language Models, arXiv (2021), https://arxiv.org/abs/2112.04359.

76 Abeba Birhane et al., The Values Encoded in Machine Learning Research, arXiv (2021), https://arxiv.org/abs/2106.15590.

77 Id. (citing Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation (1976)).

78 David Freeman Engstrom, Private Litigation’s Pathways: Lessons from Qui Tam Enforcement, 114 Colum. L. Rev. 1913, 1934 (2014) (“[P]rivate enforcers will tend to push into statutory and regulatory interstices.”).

79 Charles L. Barzun, The Common Law and Critical Theory, 92 Colo. L. Rev. 1, 13 (2021).

80 Id. at 8.

81 Engstrom & Gelbach, Legal Tech, at 1036–37.

82 Lauren B. Edelman, Christopher Uggen & Howard S. Erlanger, The Endogeneity of Legal Regulation: Grievance Procedures as Rational Myth, 105 Am. J. Socio. 406 (1999).

83 Cynthia L. Estlund, The Ossification of American Labor Law, 102 Colum. L. Rev. 1527, 1530 (2002). The same points have been made in connection with grant funding for scientific research, where the fear is that innovation is stifled because researchers hew too closely to the example of previous successfully funded proposals. See, e.g., Scott O. Lilienfeld, Psychology’s Replication Crisis and the Grant Culture: Righting the Ship, 12 Perspectives on Psych. Sci. 660 (2017).

84 Irene Solaiman & Christy Dennison, Process for Adapting Language Models to Society (PALMS) with Values-Targeted Datasets, arXiv (2021), https://arxiv.org/abs/2106.10328.

85 Kaminski, The Right to Explanation, Explained; see also Cathy O’Neill, Weapons of Math Destruction 205 (2016) (proposing a Hippocratic Oath for data scientists).

86 For an analogous use of government resources to fill private enforcement gaps, see David Weil, Improving Workplace Conditions through Strategic Enforcement, Russell Sage Found. (2010), https://www.russellsage.org/research/report/strategic-enforcement.

87 Tanina Rostain, Techno-Optimism and Access to the Legal System, 148 Daedalus 93 (2019).
