In 2022, after Jake Moffatt’s grandmother died in Ontario, Mr. Moffatt booked a roundtrip flight on Air Canada to travel to Toronto for the funeral. While researching flights, Mr. Moffatt encountered a chatbot (an automated information system) on Air Canada’s website. Mr. Moffatt asked the chatbot whether it was possible to obtain a bereavement fare for the trip, to which the chatbot provided this reply:
Air Canada offers reduced bereavement fares if you need to travel because of an imminent death or a death in your immediate family… If you need to travel immediately or have already travelled and would like to submit your ticket for a reduced bereavement rate, kindly do so within 90 days of the date your ticket was issued by completing our Ticket Refund Application form.
Mr. Moffatt took a screenshot of the chatbot’s answer, booked the ticket, and took the flight. Later, he tried to apply for the reduced bereavement fare, just as the chatbot had instructed. Unfortunately for Mr. Moffatt, it turns out that the chatbot had “hallucinated” the possibility of receiving a retroactive bereavement rate. While the actual Air Canada policy provided for bereavement rates, it did not permit retroactive claims for them. Had Mr. Moffatt clicked the underlined words “bereavement fares” in the chatbot’s answer, a hyperlink would have directed him to a more detailed, and accurate, statement of Air Canada’s policy on the airline’s website. Understandably, however, Mr. Moffatt did not think it necessary to do this. The chatbot, which the airline had made available on its own website, had provided Mr. Moffatt with a clear answer to his question: a retroactive bereavement rate was possible. Further, this answer was being provided by Air Canada itself, not by some less reliable source on the internet. Like most people would have, Mr. Moffatt accepted the answer as true and reliable.
Having been misled by the company’s own chatbot, Mr. Moffatt filed a legal claim against Air Canada in the British Columbia Civil Resolution Tribunal. In the case, Air Canada argued that it could not be held liable for the information provided by its chatbot because the chatbot was responsible for its own actions. The tribunal that heard the case rejected Air Canada’s argument, calling it a “remarkable submission.” Mr. Moffatt, according to the tribunal’s holding, relied on the chatbot to provide accurate information, and Air Canada had a duty to ensure that its chatbot was providing accurate information. Accordingly, Air Canada was liable to Mr. Moffatt for the harm that the chatbot’s inaccurate information had caused him.1
It is easy to sympathize with Mr. Moffatt. Air Canada created and offered a chatbot on the company’s own website. Through its chatbot, the airline offered customers clear and straightforward answers to the questions they posed about their specific situations. Air Canada presumably did this to make it easier for its customers to book flights. It was reasonable for Mr. Moffatt to think that the answers the chatbot offered would be accurate. But Air Canada was not willing to be bound by the statements made by its own chatbot, instead expecting that customers would somehow know that these statements may not reflect more detailed, underlying policies. Thus, Air Canada was both inviting the public to use the chatbot as an easy way to understand the company’s rules and, at the same time, disclaiming the public’s ability to rely on the very explanations the chatbot offered. The airline’s attempt to have it both ways does seem inherently and problematically inconsistent, or, as the tribunal judiciously put it, “remarkable.”2
Yet, the United States federal government has done something very similar to Air Canada. Administrative agencies that are responsible for running federal government operations have increasingly turned to chatbots, virtual assistants, and other automated tools to offer guidance to the public about the law. Together, these tools can be thought of as “automated legal guidance.”
This automated legal guidance takes a variety of forms. Visitors to the Internal Revenue Service’s (IRS) website encounter an Interactive Tax Assistant that “provides answers to several tax law questions specific to your individual circumstances.” If you enter information in response to prompts, the Interactive Tax Assistant will provide personalized answers to particular tax questions, such as whether you have to file a return, if you can claim a dependent, and whether you can take various deductions.3 “Emma,” a virtual assistant on the website of the US Citizenship and Immigration Services, will “help you find the immigration information you need.” Emma explains that she will “answe[r] questions based on your own words,” so that “you don’t need to know ‘government speak.’”4 The US Department of Education’s Federal Student Aid office, meanwhile, features Aidan, a “virtual assistant that can answer questions about federal student aid.”5
All of these automated legal guidance tools, and many more in various emerging and developmental stages across federal government agencies, advertise that they can help the public understand and comply with legal requirements. Similar to Air Canada’s chatbot, these tools promise to be an easy-to-use source of information about underlying rules.
But, as Mr. Moffatt experienced with Air Canada’s chatbot, there is an unexpected catch. To make federal law accessible, the tools often flatten out complexities in the underlying legal system. Further, the federal laws that these tools try to explain to the public in a straightforward way are inordinately more complicated than Air Canada’s ticketing policies. As a result, when automated legal guidance tools offer “answers,” or explanations, in response to questions, they may not, in fact, be accurate representations of the underlying law. As was the case with Air Canada, moreover, federal government agencies, perhaps somewhat surprisingly, do not allow the public to rely on the answers and explanations provided by their own chatbots, virtual assistants, and other tools. Rather, federal agencies disclaim the very tools that they make available as merely offering “information about the law,”6 rather than providing the actual federal law itself. This attempt by federal agencies to distance themselves from what their own automated legal guidance tools say is not too far from Air Canada’s protestations that the chatbot offered on the airline’s website was somehow responsible for its own actions.
Alarmingly, members of the public who rely on automated legal guidance tools have even less redress available to them than Mr. Moffatt did in the case of the Air Canada chatbot. Under current law, if an agency later challenges a position that a member of the public took in reliance on the agency’s own automated legal guidance tools, the agency would not be legally bound by what the tool said. Further, that member of the public would have a hard time relying on the statements the tool made to avoid legal penalties. Therefore, while these tools may be making the federal law appear accessible, this might take place at the cost of misrepresenting what the law is, or how it might apply in each situation. Unfortunately, members of the public may end up relying upon these tools, unknowingly, at their peril.
Why do agencies use such tools? Why does the public turn to them? While the answers to those questions are important to understand, they still do not explain how it can be that administrative agencies’ own automated legal guidance tools could be misrepresenting the underlying law. Nor do they shed light on why members of the public may be even more vulnerable if they rely on these tools than Mr. Moffatt was when he acted based on the answers he received from Air Canada’s chatbot. In the end, perhaps the most critical question is as follows: What does the existence of these tools, and the rules surrounding them, tell us about the US legal system?
To address these questions, this book builds on years of research we have undertaken regarding how agencies communicate the law to the public. It explores the roots of automated legal guidance systems, explaining why agencies and the public are increasingly turning to them. It also sheds light on why such tools systematically offer easy-to-understand explanations and applications of the law that may deviate from the underlying law.
In conducting our research, we interviewed federal agency officials who are responsible for creating and managing automated legal guidance tools. Their answers to the questions we posed offer insights into what agencies think about these tools and the roles they are serving. Based on these insights, we explore not only the benefits of such tools but also their costs, and examine the ways that some of these costs tie in with broader democracy deficits in the application of administrative law to the public. At its foundation, this book explores not only automated legal guidance systems but also the deep-seated problem of having the government try to explain an extremely complex system of laws to a population that often cannot fully understand it. In writing this book, our goal is to offer a guide toward ways to best resolve this problem, especially considering technological innovation that seems to promise only more automation of the law going forward.
Understanding federal agencies’ automated legal guidance systems must start with an understanding of federal government agencies’ duty to explain the law to the public. For instance, the IRS is bound not only to enforce the law through tax audits but also to provide important services to the public, which include explaining the law and helping the public apply it. Indeed, federal law even requires agencies to use plain writing in their public communications. These communications make clear how federal law applies to countless situations, such as the application of labor law to various working conditions, the entitlement to public benefits such as Social Security, the reach of environmental laws in many different types of building projects, and the application of tax law to personal and business transactions.
The problem is that, in all these situations, and so many more, applicable federal law is extraordinarily complex. Further, most people cannot access and understand the sources of federal law, including statutes, regulations, case law, and administrative decisions. For example, to even begin to use the formal tax law, you must find a way to access it, either through print books in a library or through a reliable online version. After accessing the tax law, you also have to understand the relative authority of its different sources, which include (1) the Internal Revenue Code, which is the governing law; (2) the Treasury Regulations, which are a binding application of the Code; and (3) case law and administrative guidance, each of which has varying levels of authority, based on how relevant they are, what jurisdiction you are in, and other factors.7
Even if you know how to find and use these sources of formal tax law authority, you still must be able to search these dense texts, beginning with the Internal Revenue Code, all the way down to administrative guidance, to find the governing law and any related provisions. This is all before embarking on what is probably the hardest task – reading the extremely complicated governing tax law and determining how it applies to your specific circumstances. Most people do not have the ability to engage in this kind of difficult legal analysis themselves, even though the federal law applies to so many aspects of their daily lives.
Administrative agencies fill the gap between the ubiquitous application of federal law to the public and the inability of many to access the formal law. They do this by offering extensive “guidance” to the public, which includes statements by administrative agencies that explain or interpret the law. Because guidance is not itself supposed to be binding law, it is generally not subject to certain procedures, such as notice-and-comment rulemaking, that agencies must use to promulgate administrative regulations. Rather, its purpose is merely to advise the public about the law. That said, administrative law scholars have recognized that, notwithstanding its relatively informal legal status, agency guidance can be highly influential on the public. In this way, guidance can have a quasi-binding function, even while it is exempt from legal procedures that are supposed to apply to the promulgation of binding law.
In response to the use of technological innovations in the private sector, like Air Canada’s website chatbot, agencies increasingly are attempting to provide guidance through the use of chatbots, virtual assistants, and the like. These tools seem to offer the government a simple and affordable way to respond to pressures on agencies to explain the law, quickly and clearly, in line with private sector standards. The rise of large language models (LLMs), built on generative or conversational artificial intelligence (AI), such as ChatGPT, has only increased pressure on federal agencies to automate their legal guidance to the public. For this reason, the federal government is already using automated legal guidance tools, like the Interactive Tax Assistant, Emma, and Aidan, to respond to tens of millions of inquiries from the public about the law every year.
As these systems are built and expanded, they have a vast capacity to influence public perceptions of law. This expansion has the potential to change the public’s relationship with federal law in profound ways. In many circumstances, a precise application of the underlying law may be replaced with automated legal guidance tools’ approximation of that law. Without further study and critique, the government’s use of these tools may expose the public to a significant risk: People may not be able to reasonably rely on the very explanations or applications of the law that federal agencies are themselves offering to the public.
As technology makes agency guidance more widely accessible online, deviations in such guidance from the underlying law can become more influential as well. When automated legal guidance tools try to answer and explain legal questions that implicate a complex, and frequently ambiguous, underlying legal system, they often present the law as simpler than it is. In our research, we have identified this phenomenon as “simplexity.”
Simplexity occurs when the government presents clear and simple explanations of law that is, in fact, ambiguous or complex. Automated legal guidance relies on simplexity because of the tension between two competing forces: (1) the inherent complexity of the law and (2) the expectation that agencies should explain the law as simply as possible in ways the public can understand. Moreover, even relative to older forms of guidance, such as printed IRS publications, the use of automated legal guidance exacerbates the existence and effects of simplexity. Agencies believe that people looking to automated tools for answers to questions about law expect explanations that are “super concise.”8 This puts even more pressure on these tools to provide easy-to-understand answers, which, in turn, causes greater deviations. Further, website visitors who receive information in this way may also be highly influenced by it, especially considering that automated legal guidance provides instantaneous, often seemingly personalized, feedback.
Imagine, for instance, that you graduated from college several months ago. You were lucky enough to land a job right when you graduated, in operations management at a large manufacturing company. You like the job, but you are not sure how much your career will progress, and whether you will be able to take on a more managerial role, without a more advanced degree. A more senior (and higher-paid) colleague told you that getting a master’s degree in business administration (MBA) was critical to her own career. After looking into it, you learn that your company even offers higher pay for employees with MBAs, and the company will allow you to work while studying for the degree. You would have to pay for the MBA yourself, though, and you are not sure whether you can afford it. But if you could claim a tax deduction for the cost of the degree, you think you might be able to make it work.
To decide what to do, you first need to figure out whether you can deduct the cost of tuition for an MBA. To get an answer to this question, you visit the IRS’s website, through which you access the Interactive Tax Assistant. It asks you a short series of questions, to which you provide answers, indicated in italics below.
Question 1: Were the expenses attributable to a trade or business or employment already established at the time the education was undertaken?
Your Answer: Yes (You answer yes because you are already working at your job before you get the MBA.)

Question 2: Was the education necessary in order to meet the minimum educational requirements of your trade or business or your employer’s trade or business?
Your Answer: No (You answer no because you already have the job, and didn’t need the MBA to get it.)

Question 3: Was the education part of a study program that may qualify you for a new trade or business?
Your Answer: No (You answer no because you will be staying at the same job, not going to a new one, after getting the MBA.)

Question 4: Did your employer or a law or regulation require the education for you to keep your present salary, status, or job?
Your Answer: No (You answer no because no one is requiring you to get the MBA.)

Question 5: Did the education maintain or improve skills needed in your present work?
Your Answer: Yes (You answer yes because you will be more skilled after getting the MBA, which is why you will be getting paid more after you get it.)

Question 6: Did your employer reimburse your educational expenses under an accountable plan?
Your Answer: No (You answer no because you will be paying for the degree yourself.)
After answering these questions, the Interactive Tax Assistant provides you with a definitive response: “Your work related education expenses are deductible.”9
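The question-and-answer flow above behaves like a simple decision tree: each yes-or-no answer prunes a branch until a single verdict remains. The Interactive Tax Assistant’s internal logic is not public, so the sketch below is only our illustrative assumption of how such a tool might map the six questions onto a verdict; the function name and rule ordering are hypothetical, not the IRS’s actual implementation.

```python
def ita_education_deduction(
    established: bool,       # Q1: trade/business already established when education began?
    min_requirements: bool,  # Q2: education needed to meet minimum requirements of the job?
    new_trade: bool,         # Q3: program may qualify you for a new trade or business?
    required: bool,          # Q4: employer or law required the education?
    improves_skills: bool,   # Q5: education maintains or improves present-work skills?
    reimbursed: bool,        # Q6: employer reimbursed under an accountable plan?
) -> str:
    # Hypothetical rule ordering: disqualifiers first.
    if not established:
        return "Not deductible"
    if min_requirements or new_trade:
        return "Not deductible"
    if reimbursed:
        return "Not deductible"
    # Qualifying path: education required, or it maintains/improves skills.
    if required or improves_skills:
        return "Deductible"
    return "Not deductible"

# The answers from the vignette (Yes, No, No, No, Yes, No):
print(ita_education_deduction(True, False, False, False, True, False))  # Deductible
```

A tree of boolean branches like this is precisely what makes the tool’s answer feel definitive: the legal nuance, such as whether a few months of employment “establishes” a trade or business, has nowhere to live in a yes-or-no input.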
While this straightforward answer can help you decide whether or not to pursue the MBA, unfortunately, the tax guidance it provides may not be correct. Not only is the underlying tax law quite complex, but it is also not settled, despite the Interactive Tax Assistant’s representation of it as clear.
There is a critical question about whether you will be engaged in a “trade or business” prior to enrolling in the MBA program, which is a legal requirement for deductibility.10 The Interactive Tax Assistant’s first question does, in fact, ask about whether the expenses are “attributable to a trade or business or employment already established at the time the education was undertaken.” But this seemingly simple question (to which you reasonably responded “yes” in this case) glosses over a much more complicated reality. For instance, whether a trade or business or employment is “already established” may depend on how long the taxpayer has been engaged in it prior to beginning the educational program. This is not determinable by reading the statute alone, however. In the case of Link v. Commissioner, the Tax Court decided that a taxpayer who graduated from college and was employed for three months as a market research analyst was not “established in a trade or business prior to enrolling in the MBA program,” and, therefore, his MBA expenses were not deductible.11 This case suggests that you may not, in fact, be able to deduct the cost of the MBA, as it might be determined that you do not have a “trade or business” prior to entering the MBA program.
This is further complicated by the fact that cases in this area have reached different results that are sometimes difficult to reconcile.12 As the Link court itself stressed, no case in this area “is dispositive of the issue in this case, as a legal or factual precedent.”13 While the Interactive Tax Assistant gave you fast and seemingly clear guidance about the deductibility of the cost of getting an MBA, its advice may not, in fact, be accurate in your particular circumstances. If you pay for the MBA and claim a corresponding deduction, you may ultimately face disallowance of the deduction on audit, and even the possibility of a penalty.
This example illustrates one of the many ways automated legal guidance can get things wrong. Members of the public may receive what appears to be tailored, clear guidance that, although it may be easy to follow, possibly also glosses over more complex, and binding, points of law. Moreover, this use of simplexity is not unique to the IRS’s Interactive Tax Assistant, but rather is inherent to federal agencies’ use of automated legal guidance, as we also discovered when we analyzed the advice provided by the US Citizenship and Immigration Services’ virtual assistant, Emma, and the Federal Student Aid’s virtual assistant, Aidan.
Worse yet, this book will show how the simplexity inherent in automated legal guidance can be particularly costly to people who are already vulnerable. In Chapter 8, for instance, we will explore the example of a chronically ill individual who hires a home health aide because she is not able to take care of her daily needs such as bathing, cooking, basic house cleaning, and administering daily medication. While a close examination of the relevant statute and legislative history reveals there is a good case that this individual can claim a tax deduction for the home health aide expense, the IRS’s automated legal guidance system says otherwise. Indeed, in our research, we found that, even if this individual correctly answered all the questions asked by the Interactive Tax Assistant, the tool would inform the taxpayer, “The Household Help Expenses are not a deductible expense. Your Household Help Expenses are not a qualified medical expense.”14 As we will explain, this statement likely reflects the IRS’s decision that the automated legal guidance system should offer responses that are correct for most taxpayers, even at the cost of offering incorrect responses for some. This decision has the benefit of offering answers that seem simple, straightforward, and easy to understand. But it also may cause some people, like the chronically ill individual in this case, to lose out on valuable tax deductions. As in this example, federal agencies’ use of automated legal guidance systems can thus have significant distributive costs. Moreover, the fact that automated legal guidance can make the law seem so simple and straightforward can mask these thorny distributive questions, thereby shielding them from critique.
In addition to these case studies, this book also offers a fuller picture of the federal government’s adoption of automated legal guidance by describing interviews we conducted with federal agency officials who developed or worked closely with the Interactive Tax Assistant, Emma, and Aidan. Under the auspices of the Administrative Conference of the United States (ACUS), an independent federal agency that convenes expert public- and private-sector representatives to recommend improvements to administrative process and procedure, we conducted these interviews as part of a study of federal agencies’ use of automated legal guidance in 2021 and 2022. These interviews yielded three significant findings:
1. Agency officials believed that legal guidance offered by automated tools must be extremely simplified, in part because users are unwilling or unable to read complex legal rules and regulations.
2. At the same time, because agency officials did not think that users could or should rely on information from automated tools as a source of law, they were not concerned about how that information deviated from the underlying formal law. Officials held this belief despite the widespread use of such tools, which often carry no disclaimer regarding the risks of relying on them.
3. There is also little to no outside review of federal agencies’ use of such tools or of the tools’ reliability.
To be sure, despite its simplexity-related drawbacks, automated legal guidance has some important benefits. In some ways, the vignette about the deductibility of the MBA illustrates the real dilemma federal agencies face in advising people about the law and the useful role that automation can play in responding to it. There are no easy answers to whether you can deduct the costs of the MBA, and merely telling taxpayers that the law is very complex and, frankly, not clear, is not particularly helpful. Automated legal guidance seems to offer the government a way to skirt some of these difficulties by offering a clear answer to a specific question, even when the formal law falls short of being optimally informative. Automated legal guidance also is a useful way to reveal agencies’ views, or interpretations, of the formal law. By making agency positions regarding unsettled legal issues more transparent, it can also help ensure that agencies administer the law consistently.
However, the use of automated legal guidance also comes with important costs. Critically, we show how, precisely because of its perceived strengths, especially when compared to the often messy, or ambiguous, formal law, automated legal guidance can obscure what the binding legal provisions actually are. Additionally, when it deviates from the formal law, automated legal guidance can undermine essential features of democracy: the public’s ability to inform itself about what the laws truly are and hold the government accountable for the legal provisions it enacts. Paradoxically, these deviations can also diminish the public’s understanding of and respect for the law’s complexity, in part by presenting the law in an unrealistically simple way.
In the end, these and other features of automated legal guidance create underappreciated inequities. As a practical matter, individuals who lack access to legal counsel will tend to follow the guidance that government chatbots, virtual assistants, and other automated tools provide, even if doing so is contrary to their own financial and other interests. By contrast, where the formal law is ambiguous, wealthy individuals and businesses that have access to sophisticated advisors are far less likely to follow guidance that is favorable to the government position. Further, during audits, challenges, and litigation, federal agencies are not bound to take positions that are consistent with statements expressed by virtual assistants, chatbots, and other online tools. Finally, the informal nature of automated legal guidance means that it is of limited use in supporting defenses against penalties for noncompliance. Individuals who can access formal law with the assistance of counsel, however, may use both statements from formal law and the advice of counsel to establish penalty defenses.
Our legal system relies on principles and provisions of administrative law to help ensure that administrative agencies treat members of the public fairly. However, administrative law fails to respond to some of the problems of automated legal guidance. This is due to administrative law’s broader inattention to the ways that agencies influence the public through informal explanations of the law. As we explore, the administrative law regime centers and privileges sophisticated parties and focuses on the ways agencies create binding law to govern those parties. This leaves the many ways that agencies influence the public with explanations of the law largely unconsidered, which, perversely, creates a legal system that provides the most protections for those with the most resources. While it would be a mistake to bog down basic explanations of the law to the public with exhaustive procedural formulation requirements, it is also important to acknowledge the ways that these explanations can affect people’s beliefs about rights and duties in ways that do not always align with the law.
By providing a careful, and sometimes critical, look at automated legal guidance, our goal in this book is not to convince readers that the government should reject automation in how it offers explanations of the law to people. Automated legal guidance has benefits for both the government and the public at large, as well as potential costs; and, whether we like it or not, automation is changing the way we all relate to information about the world. In light of these realities, our goal is to offer a clear-eyed assessment of automated legal guidance, in hopes of improving it.
To this end, we argue that agencies should adopt multiple policy interventions. We begin by offering detailed policy recommendations for federal agencies that have introduced, or may introduce, chatbots, virtual assistants, and other automated tools to communicate the law to the public. Our recommendations are organized into five general categories: (1) transparency; (2) reliance; (3) disclaimers; (4) process; and (5) accessibility, inclusion, and equity. We then describe how policymakers should increase participation by members of the public in the process of designing automated legal guidance tools and should also give users the ability to formally challenge agencies when automated tools deviate from the formal law. Finally, we consider how policymakers and government officials should prepare for a future in which technology evolves such that agencies shift from deploying automated legal guidance tools on their websites to automating individuals’ compliance actions.
With these recommendations, together with their underlying analysis, we hope that this book can help set a solid foundation for the transformation of government guidance that is happening now, and that will take place in the future. In our complex world, we expect the public to be able to understand and apply complex law, even if the reality is that most people cannot do so. The combination of this reality with the rise of automation means that automated legal guidance will continue to play an important, and increasing, role in our legal landscape. This book provides guidance about how to make this landscape as transparent, legitimate, and equitable as possible.