I. Introduction
The concept of “discretion” refers to the broad scope for decision-making that the legislator grants to the public administration. However, this extensive authority for decision-making does not equate to arbitrariness. The theory of discretion seeks to regulate this human executive power within the realm of administrative decision-making in order to avoid arbitrariness.Footnote 1
With technological advancements such as artificial intelligence, certain tasks traditionally carried out by bureaucrats can now be executed by machines. These advances have shown that in specific cases discretion does not necessitate exclusive implementation by human beings; when properly programmed by humans, algorithms can produce outcomes similar to those achieved by human decision-makers.
Legal literature often analyses the use of artificial intelligence (AI) in the decision-making process. Some suggest that it should be sanctioned, while others argue it ought to be avoided. The former often explain the cases of usage, establish certain conditions, and discuss the potential impact this may have on administrative law.Footnote 2 The latter discuss the shortcomings of algorithms compared to the human mind when exercising discretionary power.Footnote 3
This paper proposes the use of AI in single-case discretionary powers. Given space limitations, it provides an overview of both the adaptations required to the theory of discretion and the limits needed to control administrative discretion and avoid arbitrariness. This paper’s contribution lies in suggesting the use of AI systems in certain cases of discretionary decision-making which involve the exercise of discretion through correlations and predictions, such as those developed by machine learning systems (a subset of AI systems). The approach taken in this paper illustrates how automated systems could potentially transform the theory of discretion and establish limits to avoid arbitrariness.
Scrutiny of the concept of administrative discretion and automated systems may lead to a paradigm shift. Humans frequently contribute to discretionary decisions by performing tasks that machine learning algorithms can now perform. Examples include using correlations and predictions to determine areas for the oversight of fruit vendors, or allocating personnel within transportation systems taking into consideration hours of work and system requirements. Traditionally, a human assesses the context in which the decision is taken, but in specific cases the context can be evaluated by an algorithm through pattern recognition.
This paper adopts the definition of artificial intelligence as articulated by the Organization for Economic Cooperation and Development (OECD) in 2023,Footnote 4 used for the EU Artificial Intelligence Act (EU AI Act),Footnote 5 and in Executive Order 14110 signed by US President Biden.Footnote 6 It focuses on AI systems that can autonomously infer an output (predictions, recommendations, or decisions) from objectives, whether or not set by humans, and may adapt or evolve through use. Thus, the definition of AI used herein includes both expert systems and machine learning, as well as subsets of the latter such as large language models and natural language processing. Any specific annotations needed for the concepts discussed will be provided where relevant.
This article will proceed as follows. First, it will illustrate diverse regulatory approaches to the use of automated systems within discretionary powers. Afterwards, different administrative law theories about discretion will be examined as will how AI may be used in the decision-making process. Next, adaptations to the theory of discretion applied when AI is used will be addressed. Finally, limitations on the use of AI for discretionary powers to avoid arbitrariness will be analysed.
II. Different regulatory approaches to discretionary administrative decision-making by automated systems
Legal systems have adopted multiple approaches to mechanisms for addressing discretionary administrative decision-making by automated systems. These include complete prohibitions, permissions in specific cases and the use of hard law or soft law frameworks.Footnote 7
In the Spanish legal system, automated administrative actions are regulated under article 41 of Law 40/2015, which establishes the legal framework of the public sector.Footnote 8 It outlines specific conditions and safeguards for the adoption of such actions. There is, however, no explicit reference to discretionary administrative acts. Consequently, discretionary decisions can be made using automated systems. The Spanish government has addressed this matter through soft law by adopting a Digital Rights Charter,Footnote 9 requiring statutory authorisation and specific safeguards for the use of automated systems in discretionary decision-making. Academic literature in Spain ranges from advocating for total prohibitionFootnote 10 to endorsing usage in specific cases.Footnote 11
Estonia considered a draft amendment to the Administrative Procedures Act,Footnote 12 which would allow the use of expert systems for discretionary administrative decision-making. The proposal included internal guidelines specifying alternatives that the algorithm could consider in taking a decision. Legal literature has criticised this proposal, however, due to its cautious approach (excluding such newer alternatives as machine learning), its lack of consideration for current technology, and its failure to account for scenarios where human judgment is essential for discretionary decisions.Footnote 13
Under Article 35a of the German Administrative Procedures Act,Footnote 14 an administrative measure can be wholly based upon an automated process provided it is statutorily authorised and involves no exercise of discretion or margin of assessment. Legal scholars have criticised this provision for its broad applicability. Their concerns include the prior admission of decisions without human intervention (administrative silence or tacit consent), and they argue that eliminating discretion even in cases where it may be useful for analysing complex facts at the investigation stage is problematic. Additionally, the provision has been criticised for its limited application in German jurisprudence, which narrowly interprets the exercise of judgment in relation to specified prerequisites set by a provision that are not indeterminate.Footnote 15
At the European Union level, the AI Act neither specifically regulates nor prohibits discretionary decision-making by automated systems. This regulation contains safeguards, such as human agency, transparency, and the assessment of impact on fundamental rights, for what are known as high-risk systems, established in Annex III.Footnote 16
In the case of the General Data Protection Regulation (GDPR), article 22 stipulates that when individual decision-making that deals with personal data is fully automated, authorisation from the Member State legislator and specific safeguards are required. The Court of Justice of the European Union (CJEU), in the “Schufa” case, emphasised the importance of safeguards, such as human intervention when requested and transparency in algorithms, in order to minimise risks, but permitted decision-making processes involving discretion to take place.Footnote 17
In the United States, there is to date no comprehensive legislation governing AI, but the Biden administration issued Executive Order (EO) 14110 on October 30, 2023. Under this EO, AI may be used in Federal agencies on condition that its use is assessed, secure, and monitored.Footnote 18 Section 10 specifically addresses the use of AI by Federal Agencies, and Section 10.1 (b) encourages the Director of the OMB to issue guidance to “strengthen the effective and appropriate use of AI, advance AI innovation, and manage risks from AI in the Federal Government.” The Director is mandated to adopt safeguards and impact assessments, but the EO neither explicitly addresses nor prohibits AI use in discretionary decisions.
However, Section 7.2 (i) (b), which deals with government benefits and programmes, encourages the adoption of a plan, informed by the guidance of Section 10.1 (b), to promote “processes to retain appropriate levels of discretion of expert agency staff.” Even if there is no explicit prohibition in the EO on the use of AI systems in discretionary decision-making, certain levels of discretion will be retained by humans in such cases. Prior to the enactment of the EO, legal literature had approached the matter through two lenses: some favoured the use of AI in discretionary decisions while others expressed criticism.Footnote 19
There are three main reasons for the variation in the approaches to dealing with AI in discretionary decision-making. Firstly, the rapidly evolving nature of technology may hinder legislators from reaching a consensus on a rule of law approach to AI use. So far, simply implementing safeguards, assessment, monitoring, and evaluation has appeared adequate to permit a use of AI which balances the evolution of technology with avoiding a negative effect on fundamental rights. This is a more cautious approach to regulation, which permits AI to be implemented in the taking of discretionary decisions.
Secondly, recognising AI’s potential value in specific cases allows for flexibility in its application within discretionary decision-making settings. The EU’s AI Act and the US EO do not prohibit its use and employ a risk-management approach to specific cases of usage, which leads to innovation in the types of automated system required.
Thirdly, there is a tendency within the regulatory frameworks examined not to prohibit the use of AI in the exercise of discretionary power. This could be interpreted as these legal systems having an interest in allowing it under specific kinds of controls based on safeguards or statutory authorisation. This has already been seen in cases where discretion was limited to avoid human executive excesses leading to arbitrariness. The exception is the German legal system, where the prohibition received significant critiques from legal scholarship, as seen above.
III. Embedding AI in human-centered administrative discretion
1. Theories of administrative discretion
The concept of discretion in administrative law has seen important developments depending upon the legal system in question. Here we provide brief explanations of different approaches observed in the Spanish, German, European Union, and United States legal systems.Footnote 20
Before elaborating upon the Spanish and German legal systems, it should be noted that the German system has influenced Spanish legal scholars in the development of administrative discretion.Footnote 21 Additionally, to set out a clearer approach to the matter in these two legal systems, it is worth addressing in more detail the issue of where discretion may be allocated: prerequisites and legal status (or consequences).
The prerequisites and legal status established in a given provision bind decision-making, leaving the official limited discretion. However, administrative decision-making is partially conditioned by the specification of the scope of the decision-maker’s power through discretionary reference points. These reference points are often outlined in the content of the preconditions or legal status specified in legislation or regulations. “Preconditions” are the factual circumstances described in the act that must be met in order to take a determined course of action. Legal “statuses” are the different courses of administrative action that may be adopted once the legal conditions are fulfilled.
By way of example, imagine hypothetically that there is a municipal provision stating that the Local Health Authority (LHA) must oversee fruit vendors. In the event some rotten fruit is discovered, the LHA must select the best from among several options, including closing the shop, imposing a fine, or publicising the finding in the local newspaper. In this example, the prerequisites or preconditions are the local authority’s oversight of the conditions of the fruit vendors and the finding of rotten fruit in one outlet. The legal statuses or consequences are closing the shop, imposing the fine, or publicity in the local newspaper.
In the above example, discretion is allocated through both the prerequisites and the legal consequences. In the prerequisites, the authority must exercise its judgment to determine that specific fruit is rotten, since that is not a concept defined by law. The official must also choose the one of the three alternatives that best fulfils the goal of the legislation.
In the exercise of judgment in relation to specified concepts that may be indeterminate, such as the concept of “rotten fruit,” the official must decide what that concept means in each context. Furthermore, some concepts may involve technical assessment. In addition, discretion may be found in legal consequences when the choice is between alternatives that fulfil the policy objective.
In the Spanish legal system, both examples of discretion are found. Nevertheless, the exercise of judgment has been the focus in technical assessment, commonly known as “technical discretion” (a theory adapted from the Italian legal system but embedded in the German scheme).Footnote 22 Although judicial review grants the authority deference to choose freely among the options in legal consequences when rightly argued, there is debate over whether the exercise of judgment in relation to the satisfaction of specified prerequisites which may be indeterminate should admit one or multiple responses that can legally be adopted by the agent.Footnote 23
The German legal system follows a similar approach. It emphasises administrative discretion based on legal status and the exercise of judgment as measured by compliance with specific prerequisites. A notable feature of this system is the application of the principle of proportionality, which requires that discretionary decisions not only comport with legal standards but also balance the impacts of the decision.Footnote 24 Unlike in Spain, the exercise of judgment in prerequisites is confined to specific cases, while administrative discretion grounded in legal consequences is widely accepted.
In EU law, the concept of discretion is addressed in three circumstances: firstly, when a provision allows the EU Commission to take certain actions upon the fulfilment of specified conditions; secondly, when a decision requires a prior technical assessment in a specific context; and thirdly, when the legal consequence is established in general terms and may involve different possible outcomes. Judicial review of administrative discretion distinguishes between the executive body’s discretion in formulating policy choices and its discretion in conducting technical assessments. EU courts refrain from replacing decisions taken by the executive unless there is clear evidence of a manifest error.Footnote 25
The US legal system has adopted a distinctive approach to discretion, focusing on the statutory delegation of authority to officials at different stages of the decision-making process: the power to adopt the decision, consideration of relevant facts, and the interpretation of legal provisions.Footnote 26 Discretion was acknowledged particularly in the last stage. US courts review decisions for the absence of manifest errors, ensuring consideration of all pertinent aspects in what has been called the “hard look” review. This approach highlights the US emphasis on reviewing the procedural correctness and factual basis of discretionary decisions rather than substituting judgment. Nevertheless, in Loper Bright v Raimondo, the US Supreme Court overruled the long-standing Chevron doctrine, ending automatic deference to agencies’ interpretations of unclear statutory language; courts must now exercise their own independent judgment when interpreting ambiguous statutes rather than deferring to the agency’s reading.Footnote 27
Common to all these administrative law theories is the absence of statutes specifying the precise outcome that agents should pursue in a given situation. Instead, statutes delegate the outcome to the official, emphasising the need to consider the context in which the decision is made. Officials must weigh the competing public interests involved in the decisions they have to take.
While all systems recognise the role of discretion, the extent and nature of judicial review, the application of proportionality, and the handling of the specific exercise of judgment in satisfaction of individual cases vary. This paper suggests the application of AI systems to discretion in technical assessment and legal consequences, which are common to all the legal systems reviewed, subject to some particularities.
Also common to all these administrative law theories is that judges are respectful of legislators’ decisions to delegate certain powers to the public administration; therefore, the standard of judicial review of discretionary powers tends to be less stringent. Case law occasionally delineates how human judgment should contribute to discretionary decisions.Footnote 28
If a judge reviews a discretionary decision based on AI, the application of proportionality must still focus on whether the outcome adheres to legal standards and has a fair impact. This is complemented by the importance of transparency and accountability in AI systems. This approach is consistent with the need for judicial review to include scrutiny of algorithmic development and implementation.
Judicial review should also respect administrative discretion unless there is clear evidence of a manifest error, a standard that is analogous across the different legal systems. As with traditional discretion, judicial review of AI systems should step in only when a manifest error is evident.
In this article, however, the EU legal system’s theory of discretion will be taken as the reference. Thus, elements such as the principles of duty of care, reason-giving and the rules of judicial review will be analysed. Even though the first two are not specifically intended for discretionary decision-making, they play an important role and must be considered by human officials. In fact, both the principles of duty of care and reason-giving align with the principle of good administration.Footnote 29
2. Human contribution to discretionary decision-making
Delegation of discretionary powers is essential due to the impracticality of regulating all activities performed by public authorities. This leaves room for them to decide what the best outcome would be in a given situation. When discretionary powers are delegated by statute, public authorities are mandated to perform a duty of care and to provide reasons as guarantees of a circumscribed discretion. The official’s experience and knowledge become crucial, which fosters greater deference from the judiciary towards administrators.
Case law defines the duty of care, also known as diligence, as a guarantee to the recipient of a decision that it was founded upon a sufficient basis and relevant information. In some cases, the duty of care is connected to the information: the application of discretionary power must result from facts consistent with the public interest pursued by the legislation. Moreover, this concept encompasses the right to a fair hearing.Footnote 30
The duty of giving reasons is rooted in article 296 of the Treaty on the Functioning of the European Union. It seeks a more accurate and efficient decision-making process, allowing affected persons to obtain sufficient information to understand why the authority made the decision it did from among the options available under the legislation. This duty is also important for challenging decisions.Footnote 31
In the realm of discretionary decisions, the official’s key assets are knowledge and experience. In carrying out decision-making processes, these elements combine technical resources, education, and experience within the official’s specialised field.Footnote 32 The bureaucrat understands the context in which decisions are made, considers the specificities of each case, and provides the reasons for the decision adopted.Footnote 33
3. Uses of AI in discretionary decision-making
Addressing the effectiveness of AI systems in discretionary decision-making is important to understanding their potential and limitations. AI, particularly machine learning systems,Footnote 34 excels in identifying patterns, making predictions, drafting documents, and performing complex calculations. These capabilities are valuable in reducing costs and improving response times within administrative functions. However, the application of AI in discretionary decision-making must be approached with caution, given its inherent complexities.
AI systems, including expert systems and large language models, have been utilised in public administrations across EuropeFootnote 35 and the United StatesFootnote 36 for various purposes over the past decade, and administrations are showing growing interest in their use.Footnote 37 These systems have proven useful for monitoring, predicting, and responding to situations, such as resource allocation and pattern detection. For instance, an emergency response authority can use AI to predict weather changes and allocate resources effectively, while an education department may use AI to optimise teacher allocation based on performance data or to allocate funding for social benefits.Footnote 38
Nevertheless, while AI can be a powerful tool, its role in discretionary decision-making requires careful consideration. The formulation of decisions, which involves drafting and generating documents, can be efficiently managed by AI systems. Generative large language models, which are advanced machine learning systems, are useful tools in this area of discretionary decision-making. They excel in tasks such as drafting documents and providing data-driven insights, which can aid human decision-makers by handling routine aspects of the decision-making process. While these models can generate preliminary drafts and offer various perspectives, they should be used to complement human judgment rather than replace it. The true decision-making process involves human analysis and contextual understanding, which AI alone cannot fully replicate.
The actual making of discretionary decisions involves a deeper level of analysis – reading and understanding requests, evaluating evidence, engaging with involved parties, and balancing public interests. This nuanced process is where human expertise remains essential.
An illustrative example of AI’s capability in discretionary decision-making is the supervision of fruit vendors in cities. The Local Health Authority oversees food hygiene and sanitation compliance by fruit vendors, ensuring through regular inspections that they comply with health regulations. The decision as to which fruit vendor to inspect is a discretionary one, since there are many fruit vendors in the city. The choice can be based on hunch, judgment, or on patterns and experience. For instance, the deciding official may choose a determined fruit vendor because of previous cases of bad fruit handling by that specific vendor.
In this scenario, AI can handle the decision-making process autonomously, as it involves selecting from among predefined options based on criteria. Unlike more complex discretionary decisions, such as those requiring detailed contextual analysis and human judgment, this example demonstrates how AI can be effectively employed without requiring human involvement.
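To make this example concrete, the following is a minimal sketch of how such a selection could be automated. The vendor features, the training data, and the model choice are all hypothetical assumptions for illustration, not a description of any deployed system.

```python
# Minimal, illustrative sketch: ranking fruit vendors for inspection by
# predicted risk, learned from past inspection outcomes. All column names,
# the data, and the model choice are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historical records: one row per past inspection.
history = pd.DataFrame({
    "days_since_last_inspection": [30, 400, 15, 200, 90],
    "prior_violations":           [0,   3,  0,   1,  2],
    "complaints_last_year":       [1,   5,  0,   2,  4],
    "violation_found":            [0,   1,  0,   0,  1],  # past outcome
})

features = ["days_since_last_inspection", "prior_violations", "complaints_last_year"]
model = RandomForestClassifier(random_state=0)
model.fit(history[features], history["violation_found"])

# Current vendors awaiting a decision on who to inspect next.
vendors = pd.DataFrame({
    "vendor_id":                  ["A", "B", "C"],
    "days_since_last_inspection": [10, 350, 120],
    "prior_violations":           [0,   2,   1],
    "complaints_last_year":       [0,   6,   1],
})

# The "discretionary" choice becomes a ranking by predicted risk.
vendors["risk"] = model.predict_proba(vendors[features])[:, 1]
print(vendors.sort_values("risk", ascending=False)[["vendor_id", "risk"]])
```

On this reading, the discretionary element – which vendor to inspect – becomes a correlation-based ranking over lawful, predefined options, which is precisely the class of decision this paper argues is amenable to automation.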
In conclusion, AI systems offer substantial benefits for administrative decision-making but, within discretionary contexts, must be used judiciously. AI should be applied to specific cases where it can perform technical assessments without infringing on individual rights. By integrating AI’s capabilities into decision-making processes, we can enhance efficiency while ensuring that human expertise and contextual understanding remain central to discretionary decisions.
As stated, it is commonly accepted that regulation of the decision-making process allows AI to be used in taking discretionary decisions. Regulation can therefore establish conditions under which AI can be implemented, which would address the adaptations and limits described below. Its use should be authorised only in specific cases and where the decision would not affect fundamental rights.
4. Adaptations to the theory of discretion due to AI usage
The cases in which AI is used in the exercise of discretionary powers raise significant questions. This is particularly so regarding the need for adaptations to the theory of discretion as algorithms take over tasks traditionally performed by officials. While technological advancements must align with the rule of law, this does not prevent necessary changes in administrative law institutions. Administrative law has a history of evolving.Footnote 39
The most challenging part of allowing the usage of AI in discretionary powers is implementing its use without breaching long-recognised fundamental individual rights. The key insights this paper contributes to the ongoing discourse concern three perspectives: duty of care, reason-giving, and judicial review, specifically when machine learning systems are used.Footnote 40
a. Duty of care
There is no evidence that machine learning systems can review and analyse factual situations and legal consequences in the same way humans do, because they rely on correlations. This paper suggests that the decision-making process requires a properly trained algorithm that can generate outcomes similar to those of humans through statistical analysis.
Criticism has arisen regarding the reliance of machine learning on correlations rather than following the causal reasoning employed by the human mind.Footnote 41 However, the exercise of discretionary powers does not always require a causal link.
It is true that administrative discretion often entails evaluating the context and devising a tailored solution. This incorporates creativity where necessary, but it does not uniformly involve creativity or contextual consideration. At times, it may involve examination of precedent, or the application of a mathematical formula based on patterns found in documents (e.g., overseeing fruit vendors because previous cases failed to comply with the rules established in the area). In other cases, even correlation can reach an outcome similar to one produced by causal thinking; a human, after all, may select a different fruit vendor at random only because they recalled passing by it.
The adaptation of the theory involves mandating the human responsible for resolving the administrative remedy to evaluate the decision’s outcome and its consistency with the relevant facts, both qualitatively and quantitatively,Footnote 42 even if those facts were not considered by the algorithm, or if there were manifest errors. Additionally, scholarship suggests checking whether the result of the decision complies with the public interest involved,Footnote 43 something that can be reached by the algorithm without checking the context if the correct outcome is chosen.
Performing one’s duty of care correctly entails adopting decisions according to the context and avoiding self-binding or fettering one’s discretion. Legal scholarship has noted that self-binding can occur both in expert systemsFootnote 44 – due to initial programming – and in machine learning systems where algorithms create their own rules, thus supposedly fettering their own decisions. Self-binding criteria for public administrators could be permitted in specific cases, especially for internal decisions.Footnote 45 Although an algorithm is not properly an internal decision, its effect from this perspective is similar. If the algorithm receives authorisation from the responsible official, its content is valid for use in the administrative decision-making process. Further, the fact that machine learning systems create and update their models mirrors the adaptive nature of human intelligence: by updating its training and experience, the system can give different answers.
This adaptation of the duty of care within the theory of discretion can be accomplished by judicial reinterpretation, because the creation and development of this duty is, as stated above, through case law.
b. Reason-giving
Reason-giving in the automated state must be revisited. Two aspects will be analysed: whether the reasons provided by the algorithm can be considered as proper reason-giving and the role played by explanation.
First, it can be argued that algorithms may provide reasons that are inconsistent with what the addressee requires. From a different perspective, however, reliance on decisions made by AI is reasonable due to its substantial capability of processing large quantities of data and generating calculations and predictions which, for humans, may be complex to understand at first glance. When restricted to specific cases, such as discretionary decisions involving correlations, machine learning systems can process evidence, address requests, and deliver appropriate outcomes. The correlational process itself – if properly explained – can serve as a form of reason-giving, though offering a different interpretation of what reason-giving entails.
However, a draft elaborated with natural language processing creates wording based on patterns and predictions rather than the underlying rationale for the decision. According to this argument, the formulation of a decision with consistent wording that aligns with the addressee’s requirements could not be considered proper reason-giving because it lacks an adequate decision-making process. Unlike humans, who read the request, analyse evidence, consider all relevant interests, and apply their knowledge and experience to reach an outcome that best aligns with public policy, algorithms may fall short in this regard.
Nevertheless, EU courts have previously come to a nuanced interpretation of reason-giving, distinguishing between routine decisions based on well-founded case law and exceptional measures.Footnote 46 Administrative decisions taken without explicit reason-giving, such as tacit consent or administrative silence, illustrate a different approach to the matter. In the context of decision-making with AI systems, there is a transformation in how reason-giving is understood.
Indeed, this represents a new generation of administrative decision-making processes which overlays previous ones, evolving from artisanal and rudimentary processes to sophisticated and mass-produced ones.Footnote 47 Rather than identifying all the particularities of each case and evaluating all evidence as was traditionally done, the focus shifts to technological aspects and the massification of information, where common facts are relevant for decision-making, without compromising human rights.
An argument supporting a nuanced interpretation is that humans sometimes act unexpectedly in reason-giving. The reasons stated for a decision may not always reflect the true rationale, as political considerations or selective legal precedents may influence the arguments presented. Nonetheless, the reasons provided must remain internally consistent and directly related to the decision.
One of the most complex issues is the “fair hearing,” as responses must be based on arguments presented by the parties involved – a challenging task for algorithms due to how they function. A potential solution is to introduce flexibility at this step in discretionary decision-making cases. For instance, the addressee could be asked to highlight the aspects on which they wish to provide input, and the algorithm could be trained accordingly. If the algorithm is not programmed to handle a given condition raised, a human could intervene to review the specific variables involved, as sketched below.
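A minimal sketch of this escalation safeguard follows, assuming a hypothetical, predefined set of aspects the algorithm was trained to handle; the aspect names and the routing rule are illustrative only.

```python
# Illustrative "fair hearing" routing rule: automate only when every aspect
# raised by the addressee falls within the algorithm's trained scope;
# otherwise escalate the case to a human official.
TRAINED_ASPECTS = {"inspection_frequency", "prior_violations", "complaint_history"}

def route_submission(raised_aspects: set[str]) -> str:
    """Return 'automated' when the algorithm can handle all raised aspects,
    otherwise flag the case for human review."""
    unhandled = raised_aspects - TRAINED_ASPECTS
    if unhandled:
        return f"human_review (unhandled aspects: {sorted(unhandled)})"
    return "automated"

print(route_submission({"prior_violations"}))                   # automated
print(route_submission({"prior_violations", "force_majeure"}))  # human_review
```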
Identifying specific cases of discretionary decisions can nuance the way reason-giving is provided in automated decisions. For instance, the example of overseeing fruit vendors indicated that no individual rights were affected. This suggests such decisions may not need to adhere strictly to EU case law requirements for reason-giving. Compliance with Article 296 TFEU and Article 41 of the EU Charter of Fundamental Rights may be met if the capability to explain the correlations used is deemed a sufficient argument.
Regarding the second aspect, there is no consensus in legal scholarship that the explainability of the algorithm can serve as a cornerstone of AI-based administrative discretion.Footnote 48 “Explainability” means the extent to which the internal workings of the algorithm can be interpreted by humans.
Although explainability would not replace the reason-giving requirement in discretionary decisions, it is a relevant tool for understanding the steps that the algorithm followed to reach the outcome.
The EU AI Act makes a significant effort in this direction in article 13. It requires high-risk AI systems to be accompanied by instructions for use setting out their characteristics – specifically their accuracy and technical capabilities – so as to provide relevant information that explains their output, among other things. Understanding the different types of AI will be important in regulating them according to their diverse characteristics.Footnote 49 Standardisation will play an important role in defining these concepts in order to comply with traceability and explainability aligned with EU reason-giving standards.
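As an illustration of what explainability could look like in practice, the sketch below uses permutation importance – one common technique among many, not a compliance recipe for article 13 – to report how much each input drove a model’s output. The dataset and feature names are hypothetical.

```python
# Illustrative explainability check: permutation importance measures how much
# the model's predictions degrade when each input feature is shuffled,
# yielding a human-readable account of which inputs drove the output.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # hypothetical vendor features
y = (X[:, 0] + 0.2 * rng.normal(size=200) > 0).astype(int)  # outcome mostly driven by the first feature
features = ["days_since_last_inspection", "prior_violations", "complaints_last_year"]

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Report the features in descending order of influence on the output.
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda item: -item[1]):
    print(f"{name}: {score:.3f}")
```

Such a report would not replace reason-giving, as noted above, but it gives the reviewing official – or a court-appointed expert – a concrete handle on the steps the algorithm followed.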
c. Judicial review
Judicial review will be affected by the automation of administrative discretionary decision-making under the principle of proportionality, commonly employed as a tool to – among other things – control administrative discretion. The proportionality test must adapt to evaluate not only the balance between means and ends but also the algorithm’s alignment with statutory and regulatory requirements.
The judge must instead review whether the algorithm was correctly developed and employed, and whether the result applied one of the various public interests in play, from an ex-post analysis. The judge must not only rely upon the control performed by the public agencies that certify the way the model was developed, trained, and employed by the public authority, but should also consult a court-appointed expert. From this viewpoint, what is relevant is how the addressee is affected by the decision and how the decision was adopted by the algorithm, in order to assess whether the public interest chosen was appropriate considering the context and the public policy embodied. This approach comports with the practice of the EU courts when they review the Commission’s technical and economic assessments, where the context of the facts and conclusions are significant elements of the review.Footnote 50
The revised approach to judicial review suggests that traditional methods – such as the proportionality test – may require adaptation to incorporate the specifics of algorithmic decision-making.
To analyse a discretionary decision taken by an AI system, the judge must assess not only whether the algorithm was correctly developed but also several critical factors that define the intensity and method of the judicial review performed. Specifically, this means checking whether the algorithm took into account all relevant factors mandated by law and the various public interests involved, and whether the outcome specifically addresses what was solicited by the decision’s recipient.
Judicial review must be proportionate to the level of discretion exercised by the algorithm. For instance, if an algorithm makes decisions based on highly subjective criteria (e.g. loan approval), the review should be more intensive. It must scrutinise the algorithm’s reasoning and training data. For decisions based on clear, objective criteria, such as administrative fines for parking violations, the review may be less intensive, focusing primarily on procedural correctness.
Additionally, the judge must assess whether the algorithm disregarded relevant matters, such as the legal competence to adopt the decision. These aspects need not be specified in a list of factors; case law can intervene to define their role.
For instance, the algorithm allocating teachers to a public schoolFootnote 51 may not need to provide reasons (aside from explaining its functionality), but the judge must review whether the decision was the correct alternative among the various options – in an approach similar to the proportionality test – without, however, analysing the weight of a determined principle.
The method of the judicial review must involve the help of a court-appointed expert to check the explainability and transparency of the algorithm, the specific outcome of the decision and how it impacted the addressee of the decision.
Furthermore, significant deference has been granted to human agents when reviewing the exercise of judgment in relation to specified prerequisites in the German and Spanish legal systems. The EU courts use manifest error as a standard of review. The same approach must be applied when AI intervenes in a discretionary decision; the judge steps in only when the algorithm’s outcome contains a manifest error or there is another significant failure in the main aspects described above (legal competence and the explainability of the AI system).
Nevertheless, the judge should consider the difference between tailor-made or in-house-produced algorithms and off-the-shelf ones when applying deference. For the former, which are developed with specific objectives and contexts in mind, judges may afford greater deference and assume a higher level of alignment with the relevant legal and regulatory standards and with what the public administration itself expects to decide in each specific case. This is similar to what happens with internal directives, where each public administration decides beforehand how it will address specific aspects, with a specific design for particular tasks and compliance with internal standards.
However, for off-the-shelf algorithms, which are designed for general use and may not account for specific regulatory requirements, judges may apply a stricter level of scrutiny to ensure compliance with legal standards due to the lack of customisation for a specific administrative environment.
AI proves particularly useful when public administration must exercise a judgment to satisfy specified prerequisites that are indeterminate and based on patterns or statistics, such as authorising operations in the electricity sector, where statistical analyses could be efficiently performed by an algorithm.
This section of the paper concludes with a suggestion of an adaptation of the theory of administrative discretion that deals with the compatibility of AI advances and the rule of law. Light has been shed on the explainability of the model, the training data and how decisions affect the addressee, which should be the focus of administrative law.
An important safeguard to achieving this goal is to maintain human involvement in resolving remedies against the administrative decision and in the judicial review of the algorithm’s outcomes. This is because AI has not yet been proven to understand complex legal concepts, such as public interests. The final version of the EU AI Act does not include the obligation for a human to resolve remedies regarding decisions taken with AI systems, although this was proposed by the Parliament in amendments 71 and 738. Yet, national law could include this extra level of protection for citizens when discretionary decisions are adopted, and legislative bodies might consider enacting laws that mandate human oversight in critical algorithmic decisions.
IV. Limits to the use of automated decision-making systems in the discretionary power
Legal literature criticising the use of automated systems in the exercise of discretionary powers has highlighted notable examples of algorithmic malfunction adversely affecting individual rights.Footnote 52 To avoid affecting individual rights and to restrict the abuse of power by public authorities, some limits must be set.
These limits are twofold. First, limits on the employment of AI in the discretionary decision-making process due to the advancement of technology. Second, legal limits establishing safeguards for decision recipients to avoid or mitigate risks that might affect individual rights. Hard law and standardisation must play a significant role in setting the first category of limits and prohibitions on AI usage in discretionary decision-making, whereas the second kind of limits must be set in hard law complemented by soft law.
1. Limits of AI usage in discretionary decision-making due to the advancement of technology
Limits regarding the advancement of technology seek to avoid the use of AI for defined discretionary decisions due to the lack of evidence that AI systems can perform as well as humans conducting similar activities in specific contexts. This means not only taking into account what kinds of cases might put the addressee of the decision at risk – an approach taken by the EU AI Act – but also understanding the current state of AI and how it could be employed in administrative decision-making, which could be addressed by soft law in each State or public authority.
The interplay between legislators, public administrators, and standardisation agencies is important in identifying the limits and prohibitions that must be set in this regard. The last sets limits on how the model can be employed, and the first two define the sectors and types of decisions in which AI is to be employed.
Semi-automated discretionary decision-making should have a wider realm of usage, as it does today. Many stages of this kind of decision-making involve no discretion and can be performed by AI. An automated system may assist in grant processes by identifying and reviewing the required steps. It can help draft the motivation for a call for grants while justifying compliance by beneficiaries. It can articulate factual circumstances where a technical assessment or a judgment concerning a specified prerequisite is absent.
On the other hand, AI usage in fully automated discretionary decision-making must be reduced to specific cases. The Spanish Digital Rights Charter establishes that legislators are required to set these specific cases, but the topic could also be addressed by directives defined by soft law, given the diversity of sectors and the myriad possibilities of AI usage. Limits should focus on three aspects: potential harm to the addressee’s rights, the achievement of public policy goals, and technical assessments.
First, AI systems should not be used when the potential result of the decision would harm the addressee’s rights. For example, the Local Health Authority’s procedure for monitoring fruit vendors could be effectively executed by a machine learning system provided with appropriate training data. Choosing one shop over another does not affect the owner’s rights, but adopting a sanction may, so AI should not be used in the latter case.
Second, AI systems should be avoided or restricted when the outcome of the decision may undermine significant public policy goals, even if technical assessments are involved. Discretionary decision-making allows officials to balance various public interests and select outcomes that align with public policy goals. AI systems may not always meet these policy objectives.
Third, technical assessments. As mentioned in the discussion of the different theories of administrative discretion, one manifestation of discretion is that legislation allows technical assessments when adopting the decision. AI demonstrates superiority in technical contexts due to its proficiency in statistical settings, determining legal concepts stated in the law on the basis of correlations. AI system usage must be limited to technical-assessment types of discretionary decisions, not extended to policy discretion.
An example can be found in public procurement. Elements involving margins of discretion, such as defining award criteria and evaluation rules in structuring tender documents, can be determined by algorithms based on prior cases and specific procurement conditions, as an aid to human decision-making. However, the definition of the contracting object and its amount, aspects involving a high degree of policy discretion, cannot be delegated to automated systems. These decisions, encompassing considerations of opportunity, convenience, and economic factors, go beyond the technical aspects of the procurement process.
In general, discretionary decisions that should not be automated involve processes where a causal mode of thinking is essential, requiring an assessment of the particularities of the context to make informed decisions. Instances demanding a causal mode of thinking are not suitable for full automation.
2. Legal limits of AI usage in discretionary powers
Limits on the discretionary decision-making process must incorporate the generally accepted theory of discretion of the legal system, with the adaptations described above. The EU has set significant limits. First, the EU AI Act introduces specific limits for high-risk systems, such as the specifications outlined in articles 8 to 14, whether for automated or semi-automated decision-making processes. These include human agency, transparency, information on the rights of the person affected, and non-discrimination. Furthermore, it mandates impact assessments and registration of the software used. One of the limits that matches the adaptations proposed previously is that remedies must be decided by humans, which requires that they analyse the decision adopted by the algorithm. Second, Article 22 of the GDPR remains in force concerning automated decisions involving personal data.
One safeguard could involve an AI supervisory authority when discretion is allocated in decision-making. While the EU currently mandates only a national agency,Footnote 53 if the US example is followed, each authority using AI for decision-making could oversee specific cases of discretionary decision-making.
V. Conclusions
The discretionary decision-making process entails a complex and diverse set of tasks depending on the sector and public authority that performs it. The use of AI in part or all of the decision-making process could bring benefits for public administration and for citizens by swiftly performing calculations and identifying patterns that would boost routine assignments. However, understanding the advancements of AI and the way it can be embedded in discretionary decision-making should lead to a careful adaptation of the theory of discretion that comports with the rule of law.