3.1 Introduction
Automation is influencing ever more fields of law. The dream of disruption has permeated the US and British legal academies and is making inroads in Australia and Canada, as well as in civil law jurisdictions. The ideal here is law as a product, simultaneously mass producible and customizable, accessible to all and personalized, openly deprofessionalized.Footnote 1 This is the language of idealism, so common in discussions of legal technology – the Dr. Jekyll of legal automation.
But the shadow side of legal tech also lurks behind many initiatives. Legal disruption’s Mr. Hyde advances the cold economic imperative to shrink the state and its aid to the vulnerable. In Australia, the Robodebt system of automated benefit overpayment adjudication clawed back funds from beneficiaries on the basis of flawed data, false factual premises, and misguided assumptions about the law. In Michigan, in the United States, a similar program (aptly named “MIDAS,” for Michigan Integrated Data Automated System) “charged more than 40,000 people, billing them about five times the original benefits” – and it was later discovered that 93 percent of the charges were erroneous.Footnote 2 Meanwhile, global corporations are finding the automation of dispute settlement a convenient way to cut labor costs. This strategy is particularly tempting on platforms, which may facilitate millions of transactions each day.
When long-standing appeals to austerity and business necessity are behind “access to justice” initiatives to promote online dispute resolution, some skepticism is in order. At the limit, jurisdictions may be able to sell off their downtown real estate, setting up trusts to support a rump judicial system.Footnote 3 To be sure, even online courts require some staffing. But perhaps an avant-garde of legal cost cutters will find some inspiration from US corporations, which routinely decide buyer versus seller disputes in entirely opaque fashion.Footnote 4 In China, a large platform has charged “citizen juries” (who do not even earn money for their labor but, rather, reputation points) with deciding such disputes. Build up a large enough catalog of such encounters, and a machine learning system may even be entrusted with deciding disputes based on past markers of success.Footnote 5 A complainant may lose credibility points for nervous behavior, for example, or gain points on the basis of long-standing status as someone who buys a great deal of merchandise or pays taxes in a timely manner.
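To make the underlying mechanics concrete, consider the following deliberately simplified sketch of such a dispute-scoring heuristic. It is a hypothetical illustration only: the feature names, weights, and decision rule are my own assumptions, not a description of any actual platform’s model.

```python
# Hypothetical sketch of a platform "credibility score" used to decide
# buyer-seller disputes. All features and weights are invented for
# illustration; no real platform's system is described here.

def credibility_score(party: dict) -> float:
    """Crude linear score over behavioral and status markers."""
    score = 50.0
    # Behavioral markers inferred from past interactions (e.g., "nervousness").
    score -= 10.0 * party.get("nervous_behavior_flags", 0)
    # Status markers rewarding "good customers" and "good citizens."
    score += 2.0 * party.get("years_as_buyer", 0)
    score += 15.0 if party.get("taxes_paid_on_time", False) else 0.0
    return score

def decide_dispute(complainant: dict, respondent: dict) -> str:
    # The higher score "wins" -- no reasons given, no appeal offered.
    if credibility_score(complainant) >= credibility_score(respondent):
        return "complainant prevails"
    return "respondent prevails"

buyer = {"nervous_behavior_flags": 2, "years_as_buyer": 1, "taxes_paid_on_time": True}
seller = {"nervous_behavior_flags": 0, "years_as_buyer": 6, "taxes_paid_on_time": True}
print(decide_dispute(buyer, seller))  # -> "respondent prevails"
```

Note that nothing in such a score tracks the merits of the underlying transaction; the outcome turns entirely on markers about the parties themselves.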
As these informal mechanisms become more common, they will test the limits of due process law. As anyone familiar with the diversity of administrative processes will realize, there is enormous variation at present in the opportunity a person has to state their case, to demand a written explanation for a final (or intermediate) result, and to appeal. A black lung benefits case differs from a traffic violation, which in turn differs from an immigration case. Courts permit agencies a fair amount of flexibility to structure their own affairs. Agencies will, in all likelihood, continue to pursue an agenda of what Julie Cohen has called “neoliberal managerialism” as they reorder their processes of investigation, case development, and decision-making.Footnote 6 That will, in turn, bring in more automated and “streamlined” processes, which courts will be called upon to accommodate.
While judicial accommodations of new agency forms are common, they are not automatic. At some point, agencies will adopt automated processes that courts can only recognize as simulacra of justice. Think, for instance, of an anti-trespassing robot equipped with facial recognition, which could instantly identify and “adjudicate” a person overstepping a boundary and text that person a notice of a fine. Or a rail ticket monitoring system that would instantly convert notice of a judgment against a person into a yearlong ban on the person buying train tickets. Other examples might be less dramatic but also worrisome. For example, consider the possibility of “mass claims rejection” for private health care providers seeking government payment for services rendered to persons with government-sponsored health insurance. Such claims processing programs may simply compare a set of claims to a corpus of past denied claims, sort new claimants’ documents into categories, and then reject them without human review.
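A minimal sketch suggests how little deliberation such a “mass claims rejection” pipeline may involve. The corpus, similarity threshold, and vectorization choices below are illustrative assumptions, not a description of any deployed system.

```python
# Hypothetical "mass claims rejection" pipeline: a new claim is rejected when
# it resembles previously denied claims, with no human review. The corpus,
# threshold, and model choices here are assumptions for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_denied_claims = [
    "claim for procedure X lacking prior authorization",
    "duplicate billing for service rendered on the same date",
]

vectorizer = TfidfVectorizer()
denied_matrix = vectorizer.fit_transform(past_denied_claims)

def auto_adjudicate(claim_text: str, threshold: float = 0.35) -> str:
    """Reject any claim sufficiently similar to a past denied claim."""
    vec = vectorizer.transform([claim_text])
    similarity = cosine_similarity(vec, denied_matrix).max()
    # No category of doubt, no referral to a human examiner:
    return "REJECT" if similarity >= threshold else "PAY"

print(auto_adjudicate("claim for procedure X without prior authorization"))
```

The point is not that similarity search is inherently illegitimate, but that a threshold comparison against past denials substitutes pattern-matching for the individualized review that due process has traditionally required.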
In past work, I have explained why legislators and courts should reject most of these systems, and should always be wary of claims that justice can be automated.Footnote 7 And some initial jurisprudential stirrings are confirming that normative recommendation. For example, there has been a backlash against red-light cameras, which automatically cite drivers for failing to obey traffic laws. And even some of those who have developed natural language processing for legal settings have cautioned that such tools should not be used in anything like a trial setting. These concessions are encouraging.
And yet there is another danger lurking on the horizon. Imagine a disability payment scheme that offered something like the following “contractual addendum” to beneficiaries immediately before they began receiving benefits:
The state has a duty to husband resources and to avoid inappropriate payments. By signing below, you agree to the following exchange. You will receive $20 per month extra in benefits, in addition to what you are statutorily eligible for. In exchange, you agree to permit the state (and any contractor it may choose to employ) to review all your social media accounts, in order to detect behavior indicating you are fit for work. If you are determined to be fit for work, your benefits will cease. This determination will be made by a machine learning program, and there will be no appeal.Footnote 8
There are two diametrically opposed ways of parsing such a contract. For many libertarians, the right to give up one’s rights (here, to a certain level of privacy and appeals) is effectively the most important right, since it enables contracting parties to eliminate certain forms of interference from their relationship. By contrast, for those who value legal regularity and due process, this “addendum” is anathema. Even if it is possible for the claimant to reapply after a machine learning system has stripped her of benefits, the process offends the dignity of the claimant. A human being must pass on whether such a grave step is to be taken.
These divergent approaches are mirrored in two lines of US Supreme Court jurisprudence. On the libertarian side, the Court has handed down a number of rulings affirming the “right” of workers to sign away certain rights at work, or at least the ability to contest their denial in court.Footnote 9 Partisans of “disruptive innovation” may argue that startups need to be able to impose one-sided terms of service on customers, so that investors will not be deterred from financing them. Exculpatory clauses have spread like kudzu, beckoning employers with the jurisprudential equivalent of a neutron bomb: the ability to leave laws and regulations standing, without any person capable of enforcing them.
On the other side, the Supreme Court has also made clear that the state must be limited in the degree to which it can structure entitlements when it is seeking to avoid due process obligations. A state cannot simply define an entitlement to, say, disability benefits, by folding into the entitlement itself an understanding that it can be revoked for any reason, or no reason at all. On this dignity-centered approach, the “contractual addendum” posited above is not merely one innocuous add-on, a bit of a risk the claimant must endure in order to engage in an arm’s-length exchange for $20. Rather, it undoes the basic structure of the entitlement, which included the ability to make one’s case to another person and to appeal an adverse decision.
If states begin to impose such contractual bargains for automated administrative determinations, the “immoveable object” of inalienable due process rights will clash with the “irresistible force” of legal automation and libertarian conceptions of contractual “freedom.” This chapter explains why legal values must cabin (and often trump) efforts to “fast track” cases via statistical methods, machine learning (ML), or artificial intelligence. Section 3.2 explains how due process rights, while flexible, should include four core features in all but the most trivial or routine cases: the ability to explain one’s case, a judgment by a human decision maker, an explanation for that judgment, and the ability to appeal. Section 3.3 demonstrates why legal automation often threatens those rights. Section 3.4 critiques potential bargains for legal automation and concludes that the courts should not accept them. Vulnerable and marginalized persons should not be induced to give up basic human rights, even if some capacious and abstract versions of utilitarianism project they would be “better off” by doing so.
3.2 Four Core Features of Due Process
Like the rule of law, “due process” is a multifaceted, complex, and perhaps even essentially contested concept.Footnote 10 As J. Roland Pennock has observed, the “roots of due process grow out of a blend of history and philosophy.”Footnote 11 While the term itself is a cornerstone of the US and UK legal systems, it has analogs in both common law and civil law systems around the world.
While many rights and immunities have been evoked as part of due process, it is important to identify a “core” conception of it that should be inalienable in all significant disputes between persons and governments. We can see this grasping for a “core” of due process in some US cases, where the interest at stake was relatively insignificant but the court still decided that the person affected by government action had to have some opportunity to explain him or herself and to contest the imposition of a punishment. For example, in Goss v. Lopez, students who were accused of misbehavior were suspended from school for ten days. The students claimed they were due some kind of hearing before suspension, and the Supreme Court agreed:
We do not believe that school authorities must be totally free from notice and hearing requirements if their schools are to operate with acceptable efficiency. Students facing temporary suspension have interests qualifying for protection of the Due Process Clause, and due process requires, in connection with a suspension of 10 days or less, that the student be given oral or written notice of the charges against him and, if he denies them, an explanation of the evidence the authorities have and an opportunity to present his side of the story.Footnote 12
This is a fair encapsulation of some core practices of due process, which may (as the stakes rise) become supplemented by all manner of additional procedures.Footnote 13
One of the great questions raised by the current age of artificial intelligence (AI) is whether the notice and explanation of the charges (as well as the opportunity to be heard) must be discharged by a human being. So far as I can discern, no ultimate judicial authority has addressed this particular issue in the due process context. However, given that the entire line of case law arises in the context of humans confronting other humans, it takes no great stretch of the imagination to find such a requirement immanent in the enterprise of due process.
Moreover, legal scholars Kiel Brennan-Marquez and Stephen Henderson argue that “in a liberal democracy, there must be an aspect of ‘role-reversibility’ to judgment. Those who exercise judgment should be vulnerable, reciprocally, to its processes and effects.”Footnote 14 The problem with robot or AI judges is that they cannot experience punishment the way that a human being would. Role-reversibility is necessary for “decision-makers to take the process seriously, respecting the gravity of decision-making from the perspective of affected parties.” Brennan-Marquez and Henderson derive this principle from basic principles of self-governance:
In a democracy, citizens do not stand outside the process of judgment, as if responding, in awe or trepidation, to the proclamations of an oracle. Rather, we are collectively responsible for judgment. Thus, the party charged with exercising judgment – who could, after all, have been any of us – ought to be able to say: This decision reflects constraints that we have decided to impose on ourselves, and in this case, it just so happens that another person, rather than I, must answer to them. And the judged party – who could likewise have been any of us – ought to be able to say: This decision-making process is one that we exercise ourselves, and in this case, it just so happens that another person, rather than I, is executing it.
Thus, for Brennan-Marquez and Henderson, “even assuming role-reversibility will not improve the accuracy of decision-making, it still has intrinsic value.”
Brennan-Marquez and Henderson are building on a long tradition of scholarship that focuses on the intrinsic value of legal and deliberative processes, rather than their instrumental value. For example, applications of the US Supreme Court’s famous Mathews v. Eldridge calculus have frequently failed to take into account the effects of abbreviated procedures on claimants’ dignity.Footnote 15 Bureaucracies, including the judiciary, have enormous power. They owe litigants a chance to plead their case to someone who can understand and experience, on a visceral level, the boredom and violence portended by a prison stay, the “brutal need” resulting from the loss of benefits (as put in Goldberg v. Kelly), the sense of shame that liability for drunk driving or pollution can give rise to. And as the classic Morgan v. United States held, even in complex administrative processes, the one who hears must be the one who decides. It is not adequate for persons to play mere functionary roles in an automated judiciary, gathering data for more authoritative machines. Rather, humans must take responsibility for critical decisions made by the legal system.
This argument is consistent with other important research on the dangers of giving robots legal powers and responsibilities. For example, Joanna Bryson, Mihailis Diamantis, and Thomas D. Grant have warned that granting robots legal personality raises the disturbing possibility of corporations deploying “robots as liability shields.”Footnote 16 A “responsible robot” may deflect blame or liability from the business that sent it into the world. This is dangerous because the robot cannot truly be punished: it lacks human sensations of regret or dismay at loss of liberty or assets. It may be programmed to look as if it is remorseful upon being hauled into jail, or to frown when any assets under its control are seized. But these are simulations of human emotion, not the thing itself. Emotional response is one of many fundamental aspects of human experience that is embodied. And what is true of the robot as an object of legal judgment is also true of robots or AI as potential producers of such judgments.
3.3 How Legal Automation and Contractual Surrender of Rights Threaten Core Due Process Values
There is increasing evidence that many functions of the legal system, as it exists now, are very difficult to automate.Footnote 17 However, as Cashwell and I warned in 2015, the legal system is far from a stable and defined set of tasks to complete. As various interest groups jostle to “reform” legal systems, the range of procedures needed to finalize legal determinations may shrink or expand.Footnote 18 There are many ways to limit existing legal processes, or simplify them, in order to make it easier for computation to replace or simulate them. The clauses mentioned previously – forswearing appeals of judgments generated or informed by machine learning or AI – would make non-explainable AI far easier to implement in legal systems.
This type of “moving the goalposts” may be accelerated by extant trends toward neoliberal managerialism in public administration.Footnote 19 This approach to public administration is focused on throughput, speed, case management, and efficiency. Neoliberal managerialists urge the public sector to learn from the successes of the private sector in limiting spending on disputes. One option here is simply to outsource determinations to private actors – a move widely criticized elsewhere.Footnote 20 I am more concerned here with a contractual option: to offer beneficiaries of government programs an opportunity for more or quicker benefits, in exchange for an agreement not to pursue appeals of termination decisions, or an agreement to accept their automated resolution.
I focus on the inducement of quicker or more benefits, because it appears to be settled law (at least in the US) that such restrictions of due process cannot be embedded into benefits themselves. A failed line of US Supreme Court decisions once attempted to restrict claimants’ due process rights by insisting that the government can create property entitlements with no due process rights attached. On this reasoning, a county might grant someone benefits with the explicit understanding that they could be terminated at any time without explanation: the “sweet” of the benefits could include the “bitter” of sudden, unreasoned denial of them. In Cleveland Board of Education v. Loudermill (1985), the Court finally discarded this line of reasoning, forcing some modicum of reasoned explanation and process for termination of property rights.
What is less clear now is whether side deals might undermine the delicate balance of rights struck by Loudermill. In the private sector, companies have successfully routed disputes with employees out of process-rich Article III courts, and into stripped-down arbitral forums, where one might even be skeptical of the impartiality of decision-makers.Footnote 21 Will the public sector follow suit? Given some current trends in the foreshortening of procedure and judgment occasioned by public sector automation, the temptation will be great.
These concerns are a logical outgrowth of a venerable literature critiquing rushed, shoddy, and otherwise improper automation of legal decision-making. In 2008, Danielle Keats Citron warned that states were cutting corners by deciding certain benefits (and other) claims automatically, on the basis of computer code that did not adequately reflect the complexity of the legal code it claimed to have reduced to computation.Footnote 22 Virginia Eubanks’s Automating Inequality has identified profound problems in governmental use of algorithmic sorting systems. Eubanks tells the stories of individuals who lose benefits, opportunities, and even custody of their children, thanks to algorithmic assessments that are inaccurate or biased. Eubanks argues that complex benefits determinations are not something well-meaning tech experts can “fix.” Instead, the system itself is deeply problematic, constantly shifting the goal line (in all too many states) to throw up barriers to access to care.
A growing movement for algorithmic accountability is both exposing and responding to these problems. For example, Citron and I coauthored work setting forth some basic procedural protections for those affected by governmental scoring systems.Footnote 23 The AI Now Institute has analyzed cases of improper algorithmic determinations of rights and opportunities.Footnote 24 And there is a growing body of scholarship internationally exploring the ramifications of computational dispute resolution.Footnote 25 As this work influences more agencies around the world, it is increasingly likely that responsible leadership will ensure that a certain baseline of due process values applies to automated decision-making.
Though they are generally optimistic about the role of automation and algorithms in agency decision-making, Coglianese and Lehr concede that one “due process question presented by automated adjudication stems from how such a system would affect an aggrieved party’s right to cross-examination. … Probably the only meaningful way to identify errors would be to conduct a proceeding in which an algorithm and its data are fully explored.”Footnote 26 This type of examination is at the core of Citron’s concept of technological due process. It would require something like a right to an explanation of the automated profiling at the core of the decision.Footnote 27
3.4 Due Process, Deals, and Unraveling
However, all such protections could be undone. The ability to explain oneself, and to hear reasoned explanations in turn, is often framed as being needlessly expensive. This expense of legal process (or administrative determinations) has helped fuel a turn to quantification, scoring, and algorithmic decision procedures.Footnote 28 A written evaluation of a person (or comprehensive analysis of future scenarios) often requires subtle judgment, exactitude in wording, and ongoing revision in response to challenges and evolving situations. A pre-set formula based on limited, easily observable variables is far easier to calculate.Footnote 29 Moreover, even if individuals are due certain explanations and hearings as part of law, they may forgo them in some contexts.
This type of rights waiver has already been deployed in some contexts. Several states in the United States allow unions to waive the due process rights of public employees.Footnote 30 We can also interpret some Employee Retirement Income Security Act (ERISA) jurisprudence as an endorsement of a relatively common situation in the United States: employees effectively signing away the right to a more substantive and searching review of adverse benefit scope and insurance coverage determinations via an agreement to participate in an employer-sponsored benefit plan. The US Supreme Court has gradually interpreted ERISA to require federal courts to defer to plan administrators, echoing the deference due to agency administrators, and sometimes going beyond it.Footnote 31
True, Loudermill casts doubt on arrangements for government benefits premised on the beneficiary’s sacrificing due process protections. However, a particularly innovative and disruptive state may decide that the opinion is silent as to the baseline of what constitutes the benefit in question, and leverage that ambiguity. Consider a state that guaranteed health care to a certain category of individuals, as a “health care benefit.” Enlightened legislators further propose that the disabled, or those without robust transport options, should also receive assistance with respect to transportation to care. Austerity-minded legislators counter with a proviso: to receive transport assistance in addition to health assistance, beneficiaries need to agree to automatic adjudication of a broad class of disputes that might arise out of their beneficiary status.
The automation “deal” may also arise out of long-standing delays in receiving benefits. For example, in the United States, there have been many complaints by disability rights groups about the delays encountered by applicants for Social Security Disability Benefits, even when they are clearly entitled to them. On the other side of the political spectrum, some complain that persons who are adjudicated as disabled, and later regain the capacity to work, are able to keep benefits for too long. This concern (and perhaps some mix of cruelty and indifference) motivated British policy makers who promoted “fit for work” reviews by private contractors.Footnote 32
It is not hard to see how the “baseline” of benefits might be defined narrowly, and all future benefits conditioned in this way. Nor are procedures the only constitution-level interest that may be “traded away” for faster access to more benefits. Privacy rights may be on the chopping block as well. In the United States, the Trump administration proposed reviews of the social media of persons receiving benefits.Footnote 33 The presumption of such review is that a picture of, say, a self-proclaimed depressed person smiling, or a self-proclaimed wheelchair-bound person walking, could alert authorities to potential benefits fraud. And such invasive surveillance could in turn feed automated review, triggered by “suspicious activity” in much the way that “suspicious activity reports” activate investigations at US fusion centers.
What is even more troubling about these dynamics is the way in which “preferences” to avoid surveillance or preserve procedural rights might themselves become new data points for suspicion or investigation. A policymaker may wonder about the persons who refuse to accept the new due-process-lite “deal” offered by the state: What have they got to hide? Why are they so eager to preserve access to a judge and the lengthy process that may entail? Do they know some discrediting fact about their own status that we do not, and are they acting accordingly? Known in the economics of information as an “adverse selection” problem, this kind of speculative suspicion may become widespread. It may also arise as a byproduct of machine learning: those who refuse to relinquish privacy or procedural rights may, empirically, turn out to be more likely to pose problems for the system, or to have their benefits not renewed, than those who trade away those rights. Black-boxed flagging systems may silently incorporate such data points into their own calculations.
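A schematic example shows how silently this can happen in a standard training pipeline. Everything here, from the feature names to the toy data, is hypothetical:

```python
# Hypothetical illustration: a benefits fraud-flagging model trained on data
# that includes whether a claimant declined the due-process waiver. All
# feature names and figures are invented.
from sklearn.linear_model import LogisticRegression

# Each row: [claim_amount, years_on_benefits, declined_waiver (0/1)]
X = [
    [1200, 3, 0],
    [4300, 1, 1],
    [800,  5, 0],
    [3900, 2, 1],
]
y = [0, 1, 0, 1]  # 1 = previously flagged for investigation

model = LogisticRegression().fit(X, y)

# If past flagging correlates with declining the waiver in the training data,
# the learned weight on the third feature will be positive: retaining one's
# rights is now, in effect, treated as evidence of suspiciousness.
print(model.coef_)
```

No designer need ever decide that rights-retention should count against a claimant; the correlations in the training data do that work on their own.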
The “what have you got to hide” rationale leads to a phenomenon deemed “unraveling” by economists of information. This dynamic has been extensively analyzed by the legal scholar Scott Peppet. The bottom line of Peppet’s analysis is that every individual decision to reveal something about oneself may create social circumstances that pressure others to disclose as well. For example, if only a few persons tout their grade point average (GPA) on their resumes, that disclosure may merely be an advantage for them in the job-seeking process. However, once 30 percent, 40 percent, 50 percent, or more of job-seekers include their GPAs, human resources personnel reviewing the applications may wonder about the motives of those who do not. If they assume the worst about non-revealers, it becomes rational for all but the very lowest GPA holders to reveal their GPA. Those at, say, the thirtieth percentile reveal their GPA to avoid being confused with those in the twentieth or tenth percentile, and so on.
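The dynamic can be simulated directly. In the toy model below (the GPA distribution and inference rule are my own illustrative assumptions, not drawn from Peppet), employers presume that anyone still hiding has the average GPA of the remaining non-disclosers; each round, everyone above that presumed average reveals:

```python
# Toy simulation of informational "unraveling": each round, non-disclosers
# are presumed to have the average GPA of those still hiding, so anyone above
# that average discloses. Parameters are illustrative, not drawn from Peppet.

gpas = [round(2.0 + 0.2 * i, 1) for i in range(11)]  # 2.0, 2.2, ..., 4.0
hidden = set(gpas)

round_num = 0
while True:
    round_num += 1
    presumed = sum(hidden) / len(hidden)  # employers' inference about hiders
    revealers = {g for g in hidden if g > presumed}
    if not revealers:
        break  # stable point: no one left gains by revealing
    hidden -= revealers
    print(f"round {round_num}: presumed GPA {presumed:.2f}, still hiding {sorted(hidden)}")

print(f"holdouts at the end: {sorted(hidden)}")  # only the lowest GPA remains
```

Run to completion, the simulation leaves only the very lowest GPA undisclosed, which is exactly the pressure Peppet describes: silence itself becomes a damaging signal.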
This model of unraveling parallels similar insights in feminist theory. For example, Catharine MacKinnon insisted that the “personal is political,” in part because any particular family’s division of labor helps either reinforce or challenge dominant patterns.Footnote 34 A mother may choose to quit work and stay home to raise her children, while her husband works fifty hours a week, and that may be an entirely ethical choice for her family. However, it also helps reinforce patterns of caregiving and expectations in that society which track women into unpaid work and men into paid work. The choice does not merely accommodate but also promotes gendered patterns of labor.Footnote 35 Like a path through a forest trod ever clearer of debris, it becomes the natural default.
This inevitably social dimension of personal choice also highlights the limits of liberalism in addressing due process trade-offs. Civil libertarians may fight the direct imposition of limitations of procedural or privacy rights by the state. However, “freedom of contract” may itself be framed as a civil liberties issue. If a person in great need wants immediate access to benefits, in exchange for letting the state monitor his social network feed (and automatically terminate benefits if suspect pictures are posted), the bare rhetoric of “freedom” also pulls in favor of permitting this deal. We need a more robust and durable theory of constitutionalism to preempt the problems that may arise here.
3.5 Backstopping the Slippery Slope toward Automated Justice
As the spread of plea bargaining in the United States shows, there is a clear and present danger of the state using its power to make an end-run around protections established in the Constitution and guarded by courts. When a prosecutor offers a defendant a choice between a potential hundred-year sentence at trial and a plea deal of five to eight years, the coercion is obvious. By comparison, given the sclerotic slowness of much of the US administrative state, giving up rights in order to accelerate receipt of benefits is likely to seem to many liberals a humane (if tough) compromise.
Nevertheless, scholars should resist this “deal” by further developing and expanding the “unconstitutional conditions” doctrine. Daniel Farber deftly explicates the basis and purpose of the doctrine:
[One] recondite area of legal doctrine [concerns] the constitutionality of requiring waiver of a constitutional right as a condition of receiving some governmental benefit. Under the unconstitutional conditions doctrine, the government is sometimes, but by no means always, blocked from imposing such conditions on grants. This doctrine has long been considered an intellectual and doctrinal swamp. As one recent author has said, “[t]he Supreme Court’s failure to provide coherent guidance on the subject is, alas, legendary.”Footnote 36
Farber gives several concrete examples of the types of waivers that have been allowed over time. “[I]n return for government funding, family planning clinics may lose their right to engage in abortion referrals”; a criminal defendant can trade away the right to a jury trial for a lighter sentence. Farber is generally open to the exercise of this right to trade one’s rights away.Footnote 37 However, even he acknowledges that courts need to block particularly oppressive or manipulative exchanges of rights for other benefits. He offers several rationales for such blockages, including one internal to contract theory and another based on public law grounds.Footnote 38 Each is applicable to many instances of “automated justice.”
Farber’s first normative ground for unconstitutional conditions challenges to waivers of constitutional rights is the classic behavioral economics concern about situations “where asymmetrical information, imperfect rationality, or other flaws make it likely that the bargain will not be in the interests of both parties.”Footnote 39 This rationale applies particularly well to scenarios where black-box algorithms (or secret data) are used.Footnote 40 No one should be permitted to accede to an abbreviated process when the foundations of its decision-making are not available for inspection. The problem of hyperbolic discounting also looms large. A benefits applicant in brutal need of help may not be capable of fully thinking through the implications of trading away due process rights. Bare concern for survival occludes such calculations.
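A rough numeric illustration (with invented figures and a standard hyperbolic discount function, not data about any actual benefits program) shows how present bias can make such a trade look attractive:

```python
# Hypothetical illustration of hyperbolic discounting: V = A / (1 + k * d),
# where A is an amount, d is the delay in months, and k is the discount
# parameter. All figures are invented for illustration.

def present_value(amount: float, delay_months: float, k: float) -> float:
    return amount / (1 + k * delay_months)

k = 2.0  # a claimant in "brutal need" discounts the future steeply

bonus_now = present_value(20 * 12, delay_months=0, k=k)   # $240 in immediate extra benefits
appeal_later = present_value(5000, delay_months=24, k=k)  # benefits a future appeal might preserve

print(bonus_now, appeal_later)  # 240.0 vs. ~102.0: the immediate bonus "wins"
```

Under steep discounting, $240 of immediate benefits can dominate thousands of dollars of protection that an appeal right might preserve two years out, even though the trade is plainly against the claimant’s long-run interest.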
The second normative foundation concerns the larger social impact of the rights-waiver bargain. For example, Farber observes, “when the agreement would adversely affect the interests of third parties in some tangible way,” courts should be wary of it. The unraveling dynamic described above offers one example of this type of adverse impact on third parties from rights sacrifices. Though the harm may not be immediately “tangible,” unraveling has occurred in so many other scenarios that it is critical for courts to consider whether particular bargains may pave the way to a future where the “choice” to trade away a right is effectively no choice at all, because the cost of retaining the right is the high level of suspicion generated by exercising it (or merely retaining the ability to exercise it).
Under this second ground, Farber also mentions that we may “block exchanges that adversely affect the social meaning of constitutional rights, degrading society’s sense of its connection with personhood.” Here again, a drift toward automated determination of legal rights and duties seems particularly apt for targeting. The right of due process at its core means something more than a bare redetermination by automated systems. Rather, it requires some ability to identify a true human face of the state, as Brennan-Marquez and Henderson’s work (discussed previously) suggests. Soldiers at war may hide their faces, but police do not. We are not at war with the state; rather, it is supposed to be serving us in a humanly recognizable way. The same is true a fortiori of agencies dispensing benefits and other forms of support.
3.6 Conclusion: Writing, Thinking, and Automation in Administrative Processes
Claimants worried about the pressure to sign away rights to due process may have an ally within the administrative state: persons who now hear and decide cases. AI and ML may ease their workload, but could also be a prelude to full automation. Two contrasting cases help illuminate this possibility. In Albathani v. INS (2003), the First Circuit affirmed the Board of Immigration Appeals’ policy of “affirmance without opinion” (AWO) of certain rulings by immigration judges.Footnote 41 Though “the record of the hearing itself could not be reviewed” in the ten minutes which the Board member, on average, took to review each of more than fifty cases on the day in question, the court found it imperative to recognize “workload management devices that acknowledge the reality of high caseloads.” However, in a similar Australian administrative context, a judge ruled against a Minister in part due to the rapid disposition of two cases involving more than seven hundred pages of material. According to the judge, “43 minutes represents an insufficient time for the Minister to have engaged in the active intellectual process which the law required of him.”Footnote 42
In the short run, decision-makers at an agency may prefer the Albathani approach. As Chad Oldfather has observed in his article “Writing, Cognition, and the Nature of the Judicial Function,” unwritten, and even visceral, snap decisions have a place in our legal system.Footnote 43 They are far less tiring to generate than a written record and reasoned elaboration of how the decision-maker applied the law to the facts. However, in the long run, as thought and responsibility for review shrink toward a vanishing point, it becomes difficult for decision-makers to justify their own interposition in the legal process. A “cyberdelegation” to cheaper software may then seem proper.Footnote 44
We must connect current debates on the proper role of automation in agencies to requirements for reasoned decision-making. It is probably in administrators’ best interests for courts to actively ensure thoughtful decisions by responsible persons. Otherwise, administrators may ultimately be replaced by the types of software and AI now poised to take over so many other roles performed by humans. The temptation to accelerate, abbreviate, and automate human processes is, all too often, a prelude to destroying them.Footnote 45