1. INTRODUCTION
This article explores the theme of the paths to digital justice, developing a line of research that was pioneered by Hazel Genn with a particular focus on the challenges of contemporary societies and the potential demand for automated decision-making through judicial robots.Footnote 1 On the one hand, information technology opens new avenues for conflict resolution, especially due to the increased capacity to process massive amounts of information and to provide automated responses for a large number of claims that would otherwise be too costly to pursue and would probably be left unresolved. On the other hand, artificial intelligence provides innovative opportunities for case management within the setting of complex litigation, as software may be programmed and trained to identify identical, similar, and analogous cases in a way that reduces the volume of litigation and provides speedy decisions.Footnote 2
The paths to digital justice provide various elements for our reflection, especially for our institutional imagination on the possibilities brought by information technology, big data, and algorithmic decision-making. Not surprisingly, contemporary scholarship invites us to speculate and consider the establishment of judicial robots—that is, artificial intelligence producing decisions that are currently made by human judges, like judgments, sentences, and interlocutory decisions, among others.Footnote 3 In this context, one research question that emerges is the following: How may algorithms support juridical decision-making? Inevitably, this reflection also invites us to think about whether algorithmic decision-making could eventually substitute for judicial decision-making. Nowadays, we already imagine and speculate that algorithms may eventually replace human beings in deciding juridical controversies.
This is more than just a simple science-fiction story: criminal judges already use software to evaluate the potential recidivism of criminal defendants. The Correctional Offender Management Profiling for Alternative Sanctions (“COMPAS”) is a risk-assessment tool widely used in the US and has supported judicial decisions in large numbers of concrete cases.Footnote 4 The algorithm was developed to assess potential recidivism and to support judicial decision-making related to the imprisonment or the release of a criminal defendant based on information technology.Footnote 5 Additionally, contemporary societies have developed various technological tools that are functionally equivalent to judicial decisions and may substitute for judges not by placing robots in robes, but by providing a low-cost, speedy, and informal alternative through online dispute resolution, for instance.Footnote 6 In this case, Internet users already seek new pathways to digital justice through online platforms that may reduce the demand for the traditional justice system. Instead of filing a claim at a small-claims court, individuals simply file their complaints on these digital platforms, which may eventually be more efficient than traditional courts due to lower costs, speedy procedures, and technologically informed outcomes.Footnote 7
This essay explores the potential of judicial robots and algorithmic decision-making based on these recent experimental pathways to digital justice and the quest for due process of law. Algorithms are powerful tools and it is difficult to imagine our future without them,Footnote 8 but if crucial decisions “were entirely delegated to an algorithm, we would be entitled to feel uneasy, even if presented with compelling evidence that, on average, the machines make better decisions than the humans.”Footnote 9 Therefore, their incorporation into decision-making processes requires careful analysis and algorithmic auditing for control of procedure and fairness.Footnote 10 Instead of demonizing algorithmic decision-making and proposing a ban, this essay investigates its potential for improving the public good and for satisfying our expectations of explainability and fairness in adjudication. Importantly, this essay addresses themes that are still under development, being exploratory in the sense that it provides more questions than answers, navigating through somewhat unknown waters.
In addition to this introduction, Section 2 investigates COMPAS, explaining the role of this new technology for risk assessment, the critique of discriminatory bias made by ProPublica, and the defence of the precision and objectivity of this tool. Section 3 discusses the important judicial precedent of State v. Loomis Footnote 11 and points of concern related to due process of law when algorithms support decision-making regarding asymmetry of information, individuation, and fairness in adjudication. Section 4 explores the possibilities and limitations of the pathways to digital justice, including the potential application of algorithms to specific tasks of a repetitive character, the strong limitations on explainability, and the essential role of institutions in setting the relevant rules of the game. Section 5 offers inconclusive remarks.
2. JUDICIAL ROBOTS? LESSONS FROM COMPAS
Our point of departure for imagining judicial robots is inevitably the experience of COMPAS in the contemporary US. Software developers defined COMPAS as “an automated decision-support software package that integrates risk and needs assessment with several other domains, including sentencing decisions, treatment and case management, and recidivism outcomes.”Footnote 12 Presented as fourth-generation (4G) correctional assessment technology, COMPAS incorporated insights from a series of different explanatory theories of criminality, such as “low self-control theory, strain theory or social exclusion, social control theory (bonding), routine activities-opportunity theory, sub-cultural or social learning theories, and a strengths or good lives perspective.”Footnote 13 As part of this comprehensive approach, the software requires information related to these theoretically relevant factors and eight criminogenic predictive factors, including professional history and educational skills; safe housing and financial conditions; and emotional, social, and familial support.Footnote 14 The information is gathered through the databases of the criminal justice system, by integrating sentencing decisions, institutional processing, case management, treatments, and outcomes as support for correctional authorities.Footnote 15
Interestingly, COMPAS is a prodigious example of the mathematical turn in legal analysis,Footnote 16 as the probability of recidivism is measured through a risk scale developed as part of a regression model trained to predict new offences in a probation sample.Footnote 17 The system calculates a recidivism risk decile score, by translating into numbers information related to criminal involvement, non-compliance, violence, criminal association, substance abuse, financial difficulties, vocational or educational problems, family criminality, social environment, leisure, residential instability, social isolation, criminal attitudes, and criminal personality.Footnote 18 Importantly, one of the original goals of the software developers was precisely to reduce subjectivity, inconsistency, bias, stereotyping, and vulnerability.Footnote 19 In this context, COMPAS emerged as a technological tool with strong internal consistency and predictive validity in comparison with other similar risk-predictive instruments.Footnote 20
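To make this mathematical turn concrete, the sketch below illustrates—with entirely hypothetical data and features, not the actual COMPAS model or its inputs—how a regression-based risk score of this general kind can be translated into a decile scale.

```python
# Minimal sketch (not the actual COMPAS model): a regression-based risk score
# translated into a 1-10 decile scale, using entirely hypothetical features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: rows are past defendants, columns are numeric
# translations of factors such as prior offences or residential instability.
X_train = rng.normal(size=(1000, 4))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# Score a new cohort and convert predicted probabilities into decile scores
# (1 = lowest risk, 10 = highest), mirroring a norm-referenced decile scale.
X_new = rng.normal(size=(200, 4))
probs = model.predict_proba(X_new)[:, 1]
cutpoints = np.quantile(probs, np.linspace(0.1, 0.9, 9))
deciles = np.digitize(probs, cutpoints) + 1
print(deciles[:10])  # ten scores between 1 and 10
```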
Therefore, we may discuss what we stand to gain and lose as part of this transformation of traditional decision-making into algorithmically supported decision-making. Initially, there is a problem concerning how human actors deal with these new technologies. Our perception of these new technologies is strongly influenced by our contact with anthropomorphic robots in popular culture.Footnote 21 Because of images of the personification of robots in science fiction, we may imagine that new technologies are analogous to human decision-making processes. In this sense, unsurprisingly, our society may consider that algorithms reason like the human mind and are ultimately superior because of their stronger processing power. However, contemporary algorithms are designed and programmed to pursue specific tasks and are not capable of general intelligence.Footnote 22 In other words, artificial intelligence remains task-oriented, and technological tools are designed for their particular and specific objects rather than to think, reflect, and decide in general terms.Footnote 23 Therefore, the typical mistake of imagining that artificial intelligence is simply a superior manifestation of human intelligence should be avoided. Not only should we not blindly trust algorithmic decisions, but we should also critically assess and evaluate data processing, systemic calibration, and legitimacy regarding transparency/opacity and the justification for decisions.
COMPAS was also recently challenged in the court of public opinion, as ProPublica published a critical piece entitled “Machine Bias,” in which it accused COMPAS of being “biased against blacks.”Footnote 24 According to ProPublica, there is empirical evidence that the algorithm hidden behind COMPAS leads to significant racial disparities by making mistakes with Black and White defendants in very different ways.Footnote 25 First, ProPublica provides anecdotal evidence on how Black defendants were considered to pose a high risk of recidivism in comparison to White defendants, even when their personal record and the characteristics of the case did not seem to indicate dangerousness or a clear probability of committing another crime. For instance, one 18-year-old Black girl who decided to ride a small child’s bicycle and was arrested for theft was considered high-risk—Brisha Borden scored 8 on the risk-assessment scale—whereas a 41-year-old White man previously convicted of armed robbery who was also arrested for petty theft was considered low-risk—Vernon Prater scored 3 on the risk-assessment scale.Footnote 26 Second, ProPublica considers that these risk-assessment tools are remarkably unreliable in forecasting future criminal behaviour. In concrete terms, only 20% of the people predicted to commit violent crimes actually went on to do so.Footnote 27 In terms of general crimes, the algorithm was accurate in 61% of the cases, but ProPublica criticized this percentage as being just “somewhat more accurate than a coin flip.”Footnote 28 Third, in terms of racial disparities, the mathematical formula was considered biased because it produced significantly more false positives for Black defendants and more false negatives for White defendants: Black defendants were wrongly labelled as future criminals at almost twice the rate of White defendants, and White defendants were mislabelled as low-risk more often than Black defendants.Footnote 29 Fourth, the company responsible for the software does not disclose the mathematical formula and the calculations used for the risk scores, so defendants and the public are unable to understand the reasons for the disparities.Footnote 30 Therefore, only the results are shared with a defendant’s attorney, and defendants rarely have an opportunity to challenge their assessments.Footnote 31
In response to ProPublica, the software developers wrote an article in which they criticized ProPublica’s piece and strongly rejected the conclusion that the software discriminated against Black defendants.Footnote 32 In their review of the empirical evidence used by ProPublica related to a sample of pre-trial defendants in Broward County, Florida, the software developers pointed to several statistical and technical errors, especially because the different base rates of recidivism for Blacks and Whites were not taken into account.Footnote 33 As Hannah Fry puts it, their explanation reveals that the algorithm leads to biased outcomes because reality is biased:
unless the fraction of people who commit crimes is the same in every group of defendants, it is mathematically impossible to create a test which is equally accurate at prediction across the board and makes false positives and false negative mistakes at the same rate for every group of defendants.Footnote 34
In other words, the reason for the racial disparity comes from the fact that rates of arrest are not equivalent across racial groups and the algorithm simply reproduces the predictable consequences of a deeply unbalanced society: “until all groups are arrested at the same rate, this kind of bias is a mathematical certainty.”Footnote 35
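A toy calculation illustrates Fry’s point. Assuming, purely for illustration, two groups of 1,000 defendants with reoffending base rates of 50% and 30%, a score that is equally precise and equally sensitive in both groups necessarily produces very different false-positive rates:

```python
# Toy numbers (not the Broward County data): with different base rates, a score
# that is equally precise and equally sensitive in both groups cannot also have
# equal false-positive rates.
def false_positive_rate(n, base_rate, recall=0.7, precision=0.6):
    reoffenders = n * base_rate
    true_positives = reoffenders * recall          # reoffenders correctly flagged
    flagged_total = true_positives / precision     # same precision in both groups
    false_positives = flagged_total - true_positives
    non_reoffenders = n - reoffenders
    return false_positives / non_reoffenders

print(f"Group with 50% base rate: FPR = {false_positive_rate(1000, 0.5):.1%}")  # ~46.7%
print(f"Group with 30% base rate: FPR = {false_positive_rate(1000, 0.3):.1%}")  # 20.0%
```

Under these assumed numbers, the higher-base-rate group is wrongly flagged at more than twice the rate of the other group, even though the score treats both groups “equally” on the chosen accuracy metrics.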
Another important argument in defence of the risk-assessment tool is that COMPAS consists simply of software to support judicial decision-making regarding the probabilities of reoffending and risks of recidivism, and was not designed “to make absolute predictions about success or failure.”Footnote 36 This is an important point, because the critique of ProPublica seems to suggest that COMPAS should emulate the accurate prediction of magical oracles, as in the film Minority Report, for instance.Footnote 37 These risk-assessment tools follow a mathematical technique developed by Ernest Burgess, a professor at the University of Chicago who in 1928 built a tool for measuring the probability of criminal behaviour that outperformed human intuition.Footnote 38 Nowadays, the best algorithms use the technique of random forests based on decision trees, but predictions are based on patterns from data and are often only marginally more accurate than random guessing.Footnote 39 In the end, COMPAS should not be compared to magical oracles or mechanisms for perfect prediction, but to the concrete alternative of human judgment without the support of this risk-assessment tool. In this context, COMPAS may have two advantages over this alternative: the consistency of always giving exactly the same answer for the same set of circumstances; and the efficiency of processing the data better and making better predictions.Footnote 40
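The sketch below is a minimal, hypothetical illustration of the random-forest technique mentioned above, fitted to noisy synthetic data and compared against a no-information baseline; it is not a reconstruction of any deployed risk tool, and the dataset parameters are assumptions chosen to mimic a weak predictive signal.

```python
# Hypothetical sketch of a random-forest classifier on noisy synthetic data,
# compared with a no-information baseline; not a reconstruction of any real tool.
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, n_informative=3,
                           flip_y=0.3, random_state=0)   # weak, noisy signal
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
baseline = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)

print(f"Random forest accuracy: {forest.score(X_te, y_te):.2f}")
print(f"Baseline accuracy:      {baseline.score(X_te, y_te):.2f}")
```

On data this noisy, the ensemble beats the baseline only modestly, which is the sense in which such predictions may be “somewhat more accurate than a coin flip.”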
This debate is relevant for our reflection on the capacity of algorithms to support judicial decision-making. A related debate involves algorithmic decision-making and due process of law. One point of concern related to due process is the idealization of algorithms as either god-like or devil-like artefacts; a realistic perspective on these technological tools is necessary. Likewise, algorithmic decision-making has been famously labelled a “black box,” and we should consider the capacity for justification, explanation, and transparency of these complex processes. Moreover, due process of law also implies a discussion of the potential of general artificial intelligence and the existence of judicial robots as substitutes for human judges. These points are discussed in the next section.
3. ALGORITHMIC DECISION-MAKING AND DUE PROCESS OF LAW
Not only was COMPAS criticized in public opinion, but it was also challenged in court. In State v. Loomis, Eric Loomis challenged the state of Wisconsin’s use of proprietary, closed-source risk-assessment software as part of his sentencing to six years in prison, alleging that it violated his right to due process of law.Footnote 41 Basically, Loomis presented three arguments against the use of COMPAS during sentencing:
(1) it violates a defendant’s right to be sentenced based upon accurate information, in part because the proprietary nature of COMPAS prevents him from assessing its accuracy; (2) it violates a defendant’s right to an individualized sentence; and (3) it improperly uses gendered assessments in sentencing.Footnote 42
This judicial challenge echoed a concern voiced by the then-Attorney General Eric Holder in his speech at the 57th Annual Meeting of the National Association of Criminal Defense Lawyers in 2014: although he acknowledged the best intentions of the software programmers who developed these risk-assessment algorithms, he warned that these tools could undermine the quest for individualized and equal justice.Footnote 43 In his own words,
By basing sentencing decisions on static factors and immutable characteristics—like the defendant’s education level, socioeconomic background, or neighborhood—they may exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society.Footnote 44
Importantly, these algorithms were originally developed for decisions on the probation and release of criminal defendants, and the extrapolation of algorithmic decision-making into sentencing requires a much more careful process.Footnote 45
In the judgment, the State Supreme Court of Wisconsin decided that the use of COMPAS in sentencing did not violate due process of law or the defendant’s right to individualized sentencing, the use of accurate information, and impartiality (absence of discriminatory bias).Footnote 46 According to the court, there is no violation of due process of law because the defendant had access to the COMPAS score and the respective report, and had an opportunity to refute, supplement, and explain the COMPAS risk-assessment score.Footnote 47 Some criticized the court for failing to account for the fact that this software was developed by Northpointe—a for-profit company with a multimillion-dollar contract with the state of Wisconsin and a “biased party that cannot be relied upon to determine the accuracy of the risk assessment score.”Footnote 48 According to this opinion, the company has a strong conflict of interest and refuses to explain the value given and the breakdown for each factor, hiding details of the algorithm by alleging that it is proprietary and that the secrecy of the code is a core part of its business.Footnote 49 Therefore, access to the score and the respective report would be insufficient for the protection of the defendant’s due-process rights, as access to the source code would be a necessary means to investigate any potential misinformation or miscalculation of risk-assessment scores.Footnote 50
Even the examination of the algorithmic source code may not be sufficient for the constitutional analysis of the judicial use of the risk-assessment tool. Lawrence Lessig popularized the notion that code is law and that we should legally examine the normativity embedded in the algorithm and the commands derived from the mathematical formulas behind software.Footnote 51 However, a fixation on the unconstitutionality of the source code may reflect a misunderstanding of how contemporary algorithms function, as the normative analysis of the code “is unlikely to reveal any explicit discrimination.”Footnote 52 With the advent of a new generation of artificial intelligence and machine learning in which algorithmic decision-making depends on the training data,Footnote 53 the examination of fairness depends on the inputs given to the algorithm, and a criminal defendant “should be asking to see the data used to train the algorithm and the weights assigned to each input factor” instead of only the source code.Footnote 54 In this context, for instance, the racial discrimination attributed to COMPAS by ProPublica could turn out to be the result of geographical discrimination or an implicit and unintentional bias due to the use of a ZIP code—which may be a proxy for race, especially in racially segregated areas.Footnote 55
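A hypothetical sketch can make the ZIP-code point concrete: even when the protected attribute is excluded from the inputs, a model trained on a correlated feature and on biased arrest histories can reproduce the disparity. The group labels, features, and correlations below are entirely synthetic assumptions.

```python
# Synthetic illustration of proxy discrimination: race is excluded from the
# inputs, but a correlated feature (a stand-in for ZIP code in a segregated
# area) lets the model reproduce disparate predictions. All data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
race = rng.integers(0, 2, n)                               # synthetic group label
zip_area = np.where(rng.random(n) < 0.9, race, 1 - race)   # ZIP tracks race 90% of the time
prior_arrests = rng.poisson(1.5 + race)                    # arrest history reflecting a biased world
reoffend = (prior_arrests + rng.normal(0, 1, n) > 2).astype(int)

X = np.column_stack([zip_area, prior_arrests])             # "race-blind" inputs only
model = LogisticRegression().fit(X, reoffend)
scores = model.predict_proba(X)[:, 1]

print(f"Mean predicted risk, group 0: {scores[race == 0].mean():.2f}")
print(f"Mean predicted risk, group 1: {scores[race == 1].mean():.2f}")
```

Nothing in the source code of such a model mentions race, which is why examining the training data and the weights, rather than the code alone, is what can expose this kind of bias.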
The State Supreme Court of Wisconsin did not strike down the use of COMPAS in sentencing, but accepted that judicial application of these algorithmic risk-assessment tools may be problematic and issued warning labels to other judges with the following cautionary notes:
(1) the proprietary nature of COMPAS has been invoked to prevent disclosure of information relating to how factors are weighted or how risk scores are to be determined; (2) risk assessment compares defendants to a national sample, but no cross-validation study for a Wisconsin population has yet been completed; (3) some studies of COMPAS risk assessment scores have raised questions about whether they disproportionately classify minority offenders as having a higher risk of recidivism; (4) risk assessment tools must be constantly monitored and re-normed for accuracy due to changing populations and subpopulations.Footnote 56
These procedural safeguards to inform and alert judges are, however, normally ineffective means of transforming judicial behaviour regarding these technological tools, because warning labels ignore the judge’s inability to evaluate these tools, the weight of the concerns behind the criticisms, and the professional pressures to use COMPAS with deference while sentencing criminal defendants.Footnote 57
Even if the court’s opinion indicates prudence about the enthusiasm for algorithmic risk assessment in sentencing, the problems are not limited to questions that can simply be answered by careful judicial reflection.Footnote 58 Without adequate information on algorithmic processing, judges are unable to properly calibrate their interpretations of COMPAS and to modulate their consideration of risk-assessment tools.Footnote 59 In practical terms, defying algorithmic recommendations may be challenging and unusual for individual judges, especially because of the role that “heuristics” and “anchoring” play in supporting judicial decision-making.Footnote 60 Even if supporters of risk-assessment tools claim that these evaluations improve the quality of sentencing by making it more transparent and rational, these warning labels indicate the potential problems of algorithmic-based sentencing and suggest “considerable caution in assessing the qualitative value of these new technologies.”Footnote 61
Particularly in the case of COMPAS, we may also be dealing with a special case of the “black box” effect—that is, a case in which legal rules and/or judicial decisions keep the algorithmic process secret—a legal black box—and in which perhaps even the code and/or the type of artificial intelligence keeps the algorithmic process unknown even to the software developers—a technical black box.Footnote 62 The opacity of the COMPAS algorithm comes from the proprietary characteristics of legally protected source codes that remain unknown to the defendant, her defence attorney, and the criminal judge.Footnote 63 Additionally, even if the court demanded transparency and the publication of the algorithmic formula as open source to everyone as a prerequisite for the use of risk-assessment tools in criminal sentencing, there is the possibility that decisional rules would emerge in ways that no one—not even the software developers—may be able to explain in terms of why and how certain algorithmic decisions are made.Footnote 64 For instance, machine-learning algorithms of an artificial neural network (ANN) learn through a complex layered structure; their decisional rules are not programmed a priori and are usually unintelligible to humans.Footnote 65 Even if we may imagine that these algorithms are less biased than human judges, machine-learning algorithms operate according to the data used in their training and reproduce the discrimination present in input data representative of our biased world.Footnote 66 A prodigious example of this problem comes from Microsoft’s bot Tay—launched on Twitter to behave like a regular young woman—which learned to express obscene vulgarities and hateful abuse against minorities in less than one day.Footnote 67 Therefore, more than just being facially neutral, algorithms may need to be constantly retrained and affirmatively corrected against biases, so that they incorporate equal opportunity in their design and do not align with some unfair societal tendencies.Footnote 68 In the case of COMPAS, the parties could investigate whether the algorithm “has affirmatively been trained against the racism of the world.”Footnote 69 Without this sort of “algorithmic affirmative action,” algorithms may arguably be more biased than human judges.Footnote 70
Likewise, COMPAS should be evaluated according to the same standards as expected from the other actors in the criminal justice system. One of the complexities of the mathematical turn in legal analysis is the magic spell of the translation of words into numbers. Consequently, we fail to critically assess the normativity embedded in algorithmic commands because of a belief in the power of science, technology, and the objectivity of mathematical formulas.Footnote 71 However, criminal sentencing is not an easy task and may not be reduced to the output of a risk-assessment tool.Footnote 72 If we analogize COMPAS with an expert witness, the expected standard in the criminal justice system involves cross-examination. In this context, a defendant should have an equivalent right to interrogate the algorithm, especially due to the potential discrimination hidden in the data.Footnote 73 In this case, one relevant safeguard for criminal defendants could be to demand transparency and publicity of the algorithmic process, so that defendants and their attorneys may challenge algorithmic decision-making in courts.Footnote 74 Additionally, the General Data Protection Regulation (GDPR) already protects individuals against unfair algorithmic decision-making by establishing a right to information and a right to opt out of automated decision-making by demanding human intervention.Footnote 75 However, State v. Loomis and the controversy over COMPAS show how difficult it is to strike the right balance in terms of the level of information and freedom from automation.
The next section discusses the possibilities and limitations of the application of robots in judgments, algorithmic decision-making, and the pathways of digital justice.
4. PATHWAYS TO DIGITAL JUSTICE: POSSIBILITIES AND LIMITATIONS
The case-study of COMPAS invites a reflection on the future pathways of digital justice, the positioning of algorithms, the search for general artificial intelligence, data processing as reproduction, and the “black box” problem. Statistically, technology may be very efficient, as shown by the extremely low accident rate of autonomous cars.Footnote 76 However, the current state of computer science reveals that machine-learning algorithms may not learn to develop patience and planning skills. For instance, one comprehensive piece of research on an ANN algorithm playing Atari games found that it could play 29 out of 49 games at the human level, beating professional players at most of them.Footnote 77 However, human players are far better than machine-learning algorithms in the other 20 tested games.Footnote 78 In Ms Pac-Man, for instance, the ANN algorithm achieves only 12% of the score of a professional player, especially because the game requires patience and caution, and planning strategically how to protect Ms Pac-Man from the ghost attacks is beyond the capacity of this machine-learning algorithm.Footnote 79 This comprehensive study reveals that the algorithm fails on Atari games that require planning.Footnote 80 One important lesson for our speculative reflection on the development of judicial robots is that contemporary artificial intelligence may not produce its decisions with prudence, which seems an essential quality for adjudication.
Another important discussion emerging from the Atari game-playing study is how much a computer may learn from scratch, which is a central question in understanding how far we are from creating general artificial intelligence.Footnote 81 In contrast to the human brain and its spontaneous understanding of different contexts, the ANN algorithm is trained for the performance of specific tasks, developing specialized artificial intelligence based on that particular training and being unable to perform tasks for which it did not receive specific training.Footnote 82 There is a lot of debate about whether general artificial intelligence will be possible in the future, and perhaps ANN algorithms will remain confined to the performance of specific tasks and will not develop the general capacity for intelligence like that of humans.Footnote 83 Because, today, nobody seems to know how to create a robot with general intelligence, smart technology is confined to its specific domains, now and for the foreseeable future.Footnote 84 Nowadays, there is a lot of speculation about the possibility of artificial intelligence replacing human judgment, but the stronger potential seems to lie in support for repetitive judicial activities rather than in a judicial robot fully replacing a human judge. For instance, reported cases of robot lawyers consist of automated systems that support repetitive tasks, like Ross—a tool that researches documents and cases to assist lawyers through natural language processing—and Donotpay—a bot that functions as a digital assistant for appeals against parking tickets through document-assembly production.Footnote 85 Therefore, the path of digital justice seems to point more towards technological support for decision-making than to robots in robes making automated decisions without human intervention.
One important point for reflection comes from the fact that algorithmic decision-making normally reproduces patterns from the past. The case-study of COMPAS demonstrates that the outputs of the risk-assessment tool are based on the historical experience gathered through big data and representative of a large set of past decisions taken in real cases. Because algorithms are designed to find and recreate the pattern in the data sets that they were trained on, they learn to reproduce the status quo bias.Footnote 86 In addition to the potential for machine bias revealed by ProPublica, there is an even deeper problem related to path dependency in adjudication.Footnote 87 In other words, ANN algorithms are trained to produce outputs based on existing input and judicial robots would arguably produce sentences based only on existing precedent. Therefore, in this context, judicial robots would probably be unable to produce fair counter-hegemonic decisions that depart from precedent. The current state of computer science suggests that an algorithm would probably not be able to produce a watershed decision like that in Brown v. Board of Education, for instance.Footnote 88 In constitutional terms, therefore, algorithmic decision-making may contain a hidden conservative bias and a tendency to reproduce the status quo that would be problematic in terms of the counter-majoritarian role of courts and rights protection according to the democratic rule of law.Footnote 89
Another important point of concern related to artificial intelligence and judicial decision-making consists of the necessary explainability of the outcomes as part of rights protection in the democratic rule of law. Nowadays, understanding the reasons for a particular application of code that resulted in an injustice may be very difficult.Footnote 90 Our societies should monitor and prevent algorithmic injustice, by demanding more responsibility from those who collect data, build systems, and apply these rules.Footnote 91 Lack of transparency is part of the problem due to the fact that tech companies keep their algorithms secret.Footnote 92 One potential response for the normative control of algorithms consists of auditing to ensure the safety and fairness of algorithmic decision-making.Footnote 93 In comparison to the call for full publicity and open-source artificial intelligence, an algorithmic audit may be performed by a controlled and discrete professional group of experts who may intervene to correct the decisional rules without revealing the code and other proprietary information to the public at large.Footnote 94 Auditing poses a series of challenges, because sophisticated algorithms may not reveal their true effects during testing, they may be able to circumvent the recommendations of auditors by building links in data sets, and public authorities may not be able to oversee their development and keep pace with the tech industry.Footnote 95 Especially difficult is the quest for transparency and explanation in the case of machine learning, in which algorithms are programmed to reprogram their own code and even software designers may not be aware of the processing, which also varies according to the data used to train the algorithm.Footnote 96 Because judicial decisions are supposed to contain justifications and logical explanations of their rationale, software developers need to create explanatory technology that can account for the reasoning behind algorithmic decision-making before these tools are applied in sentencing.Footnote 97 The ethical and legal requirements for transparency are not limited to publicity, but are more related to potential independent scrutiny, a shared system of justification that is comprehensible to others, and a system of accountability with checks and balances for correcting errors.Footnote 98
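One modest form that such explanatory technology could take, at least for simple models, is the itemization of each factor’s contribution to an individual score. The sketch below assumes a purely hypothetical linear risk model, with invented feature names and coefficients, and is not a description of how COMPAS or any deployed tool explains its outputs.

```python
# Minimal sketch of one explainability idea: for a simple linear risk model,
# each factor's contribution to an individual score can be itemized into a
# human-readable justification. Model, coefficients, and names are invented.
import numpy as np

feature_names = ["prior_arrests", "age_at_first_arrest", "unstable_housing"]
weights = np.array([0.8, -0.05, 0.6])      # assumed, illustrative coefficients
intercept = -1.0
defendant = np.array([3.0, 19.0, 1.0])     # one hypothetical defendant

contributions = weights * defendant
logit = intercept + contributions.sum()
risk = 1 / (1 + np.exp(-logit))            # logistic transformation into a probability

print(f"Predicted risk: {risk:.2f}")
for name, c in zip(feature_names, contributions):
    print(f"  {name:<22} contributes {c:+.2f} to the risk score")
```

For deep neural networks, no decomposition this simple exists, which is precisely why the explainability demand is so much harder to satisfy there.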
A crucial point for reflection is our personal standpointFootnote 99 on whether we would accept being judged by a judicial robot or would prefer to be judged by a human being. Some commentators sympathize with the idea of having an algorithm working with judges to support their work and help them to overcome their cognitive limitations, systematic bias, and random error.Footnote 100 On the other hand, others consider that there are some activities that are essentially human and that digital systems should not perform, even if the outcome may be technically better than the product of a human mind.Footnote 101 Outsourcing the activity of judging to a robot may be problematic also from a constitutional perspective in terms of the impermissible delegation of powers.Footnote 102 When imagining the future of digital justice and the potential for judicial robots and for algorithmic decision-making, we should think empirically and carefully collect the necessary data for assessing the potential pathways to justice.Footnote 103 We need to understand the demands of human beings and how artificial intelligence may reduce the asymmetries of power and information that they experience with the traditional judicial system.
There is a large potential for information technology as an enabler of access to justice, facilitating the aggregation of repetitive claims and enabling collective actors to protect relevant social interests more efficiently.Footnote 104 However, electronic gatekeepers may also create obstacles and limitations for individuals to protect their rights, such as mandatory electronic mediation as a prerequisite for accessing courts.Footnote 105 Importantly, we should consider empirically the potential avenues for digital justice and how human actors will interact with the multiple doors of the judicial system to design the pathways to digital justice and evaluate possibilities and limitations. In this sense, it seems really difficult to imagine a judicial robot replacing a Supreme Court Justice, but it will soon be equally difficult to imagine a Supreme Court Justice not working with the support of electronic clerks that assist with legal research and document-assembly production. At the other extreme, artificial intelligence with human supervision may be adopted to initiate dialogues towards mediation and other forms of alternative dispute resolution. Especially in the case of small claims of reduced complexity, low costs, and repetitive application of the law, there is potential for electronic arbitration requested by a defendant and binding only on the defendant and not on the plaintiff.Footnote 106 In these cases, an important distinction may come from the different uses of technology. In COMPAS, the probability of recidivism supports deliberation in a criminal judgment. In contrast, information technology normally supports civil-liability judgments by sorting similar cases and aggregating them in preparation for a comprehensive judicial decision, as illustrated in the sketch below. Most tort cases are decided under objective (strict) liability, without the detailed examination of fault and subjective responsibility typical of criminal judgments.
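As a hedged illustration of this sorting-and-aggregating function, the sketch below groups short, invented complaint texts by textual similarity so that near-identical claims could be bundled for a single comprehensive decision; the texts, the vectorization method, and the cluster count are all assumptions, not a description of any existing platform.

```python
# Hypothetical sketch of sorting and aggregating repetitive claims: short
# complaint texts are vectorized and clustered so that near-identical cases
# can be bundled for one comprehensive decision. All texts are invented.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

complaints = [
    "Flight cancelled without refund",
    "Airline cancelled my flight and refuses to refund the ticket",
    "Phone company charged undue roaming fees",
    "Undue roaming charges on my mobile bill",
]

vectors = TfidfVectorizer().fit_transform(complaints)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for text, label in zip(complaints, labels):
    print(f"cluster {label}: {text}")
```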
Finally, the role of institutions is essential for enabling organizations and setting the relevant rules of the game. The COMPAS case-study reveals the immediate need for innovative legal education and judicial training for the use of technological tools in adjudication. The adoption of algorithmic decision-making also requires the development of a code of ethics for artificial intelligence. Software developers emerge as the new “Philosopher Kings” and their way of handling ethics, explaining themselves, and managing accountability and dialogue is critical for ethics in digital societies.Footnote 107 However, the university education of artificial-intelligence tribes normally excludes learning about the human condition and no mandatory courses “teach students how to detect bias in data sets, how to apply philosophy to decision-making, or the ethics of inclusivity.”Footnote 108 Importantly, the relationship between ethics and law shapes the development of new technologies, as legal judgments are useful for ethical considerations and vice versa.Footnote 109 In this sense, the scrutiny of COMPAS by courts is welcome and necessary, because artificial intelligence may potentially transform fundamental tenets of our legal system.Footnote 110 The application of a code of ethics depends on the institution behind it,Footnote 111 making it necessary for the judiciary to establish its own institutional guidelines and code of ethics for using artificial intelligence according to the concrete demands for digital justice. Additionally, a set of constitutional rules—like a Bill of Rights for artificial intelligence—should be incorporated into our legal system. An algorithmic Bill of Rights seemed like science fiction in the Three Laws of Robotics by Isaac Asimov, but the idea of a set of basic norms and fundamental laws for artificial intelligence has inspired contemporary scholars to propose a constitutional regime for the regulation of the relationship between humans and robots.Footnote 112 Nowadays, the definition of rules for algorithmic decision-making, judicial robots, and due process of law emerges as an inevitable part of the constitutional rules for artificial intelligence and the path to digital justice.
5. AN UNFINISHED STORY: INCONCLUSIVE REMARKS
The story of the pathways to digital justice is still unfinished. Platforms must provide fair and efficient channels for dispute resolution to gain trust and survive in the online environment.Footnote 113 Once they do so through their resolution centres, citizens will start to ask why the small-claims court is so inconvenient and inefficient in comparison.Footnote 114 The evolution of digital justice indicates the replacement of physical settings by virtual ones, the emergence of models based on sharing data instead of confidentiality, and the potential shift from human intervention to automated processes of decision-making.Footnote 115 Access to justice may be enhanced through algorithms that can provide responses to a large number of disputes.Footnote 116 In these legal borderlands of law and technology, the role of judicial robots, the scope of algorithmic decision-making, and the protective safeguards of due process of law are still open and will depend on the social demands for pathways to justice.Footnote 117
This essay explores the possibilities and limits of an unfinished story and closes with inconclusive remarks. One crucial question is whether to replace human judges with judicial robots, and the response depends on concrete social demands, the stage of development of artificial intelligence, and the decisional rules of the game. Perhaps an interesting analogy comes from navigation: the US had abandoned military training in celestial navigation after the advent of GPS, but brought it back to the Naval Academy in 2015 to help navy sailors navigate the high seas.Footnote 118 Therefore, even if we rely on information technology as a support for decision-making, we should not abandon our core competency for human judgment, as we would be left “without a rich sense of where we are and where we’re going.”Footnote 119 Interestingly, some tech giants rely on human judgment for their own internal dispute resolution. For instance, Facebook decided to curate its trending topics with a team of human journalists. After the company was accused of a progressive bias for excluding some conservative stories,Footnote 120 robots replaced humans as judges of trending topics, and consequently fake news may now enter the newsfeed more easily without human supervision.Footnote 121 In the end, the path to justice in digital societies will come not only from the mathematical logic of algorithms, but also from our social experiences and how we reconcile the efficiency and precision of algorithmic decision-making with constitutional safeguards of due process of law.