
6 - A New ‘Machinery of Government’?

The Automation of Administrative Decision-Making

from Part II - Automated States

Published online by Cambridge University Press: 16 November 2023

Zofia Bednarz, University of Sydney
Monika Zalnieriute, University of New South Wales, Sydney

Summary

Governments are increasingly adopting artificial intelligence (AI) tools to assist, augment, and even replace human administrators. In this chapter, Paul Miller, the NSW Ombudsman, discusses how the well-established principles of administrative law and good decision-making apply, or may be extended, to control the use of AI and other automated decision-making (ADM) tools in administrative decision-making. The chapter highlights the importance of careful design, implementation and ongoing monitoring to mitigate the risk that ADM in the public sector could be unlawful or otherwise contravene principles of good decision-making – including consideration of whether express legislative authorisation for the use of ADM technologies may be necessary or desirable.

Type: Chapter

Money, Power, and AI: Automated Banks and Automated States, pp. 116–135

Publisher: Cambridge University Press

Print publication year: 2023
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

6.1 Introduction: ADM and the Machinery of Government

The machinery of government comprises those structures, processes, and people that make up departments and agencies, and through which governments perform their functions. The term is perhaps best known in the context of ‘MoG changes’ – the frequent adjustments made to the way departments and agencies are structured, responsibilities and staff are grouped and managed,Footnote 1 and agencies are named.Footnote 2 For at least the last half century, the defining characteristic of the machinery of government has been public officials (the ‘bureaucrats’), structured into branches, divisions, and departments, operating pursuant to delegations, policies, and procedures, and providing advice, making and implementing decisions, and delivering services for and on behalf of the government. Characterising government as a ‘machine’ is a metaphor and, like the term ‘bureaucracy’, it can carry a somewhat pejorative connotation: machines (even ‘well-oiled machines’) are cold, unfeeling, mechanical things that operate according to the dictates of their fixed internal rules and logic.

This chapter examines a change to the machinery of government that is increasingly permeating government structures and processes – the adoption of automated decision-making (ADM) tools to assist, augment, and, in some cases, replace human decision-makers. The ‘machinery of government’ metaphor has been extended to frame the discussion of this topic for three reasons. First, it focuses attention on the entire system that underpins any government administrative decision, and in which digital technology may play some role. Second, rather than assuming that new technologies must – because they are new – be unregulated, it considers the role of new technology within the machinery of government, and therefore (at least as a starting point) analyses the well-established laws and principles that already control and regulate that machinery. Finally, this chapter considers whether there might be lessons to be learnt from the past, when other significant changes have taken place in the machinery of government. For example, do the changes now taking place with the increasing digitisation of government decision-making suggest that we should consider a deeper examination and reform of our mechanisms of administrative review, similar to what happened in Australia in the 1970s and 1980s in response to the upheavals then taking place?

This chapter outlines some of the key themes addressed in detail in the NSW Ombudsman’s 2021 special report to the NSW Parliament, titled ‘The new machinery of government: using machine technology in administrative decision-making’Footnote 3 (Machine Technology Report). It briefly sets out the context, including the need for visibility of government use of ADM tools and the role of the Ombudsman; key issues at the intersection of automation and administrative law and practice; and broad considerations for agencies when designing and implementing ADM tools to support the exercise of statutory functions. The chapter concludes by asking whether the rise of ADM tools may also warrant a reconsideration of existing legal frameworks and institutional arrangements.

6.2 Context

6.2.1 The New Digital Age

We have entered a digital age, and it is widely accepted that governments must transform themselves accordingly.Footnote 4 In this context, government digital strategies often refer to a ‘digital transformation’ and the need for government to become ‘digital by design’ and ‘digital by default’.Footnote 5 It is unsurprising then that digital innovation has also begun to permeate the machinery of government, changing the ways public officials make decisions and exercise powers granted to them by Parliament through legislation.Footnote 6 ADM involves a broad cluster of current and future systems and processes that, once developed, run with limited or no human involvement, and whose output can be used to assist or even displace human administrative decision-making.Footnote 7 The technology ranges in complexity from relatively rudimentary to extremely sophisticated.

6.2.2 Government Use of ADM Tools

The use of simpler forms of ADM tools in public sector decision-making is not new. However, what is changing is the power, complexity, scale, and prevalence of ADM tools, and the extent to which they are increasingly replacing processes that have, up to now, been the exclusive domain of human decision-making. The Machine Technology Report includes case studies of New South Wales (NSW) government agencies using AI and other ADM tools in administrative decision-making functions, including fines enforcement, child protection, and driver licence suspensions. Such tools are also used in areas such as policing (at NSW State level) and taxation, social services, and immigration (at Australian Commonwealth level). This rise of automation in government decision-making and service delivery is a global phenomenon.Footnote 8 Internationally, it has been observed that ADM tools are disproportionately used in areas that affect ‘the most vulnerable in society’ – such as policing, healthcare, welfare eligibility, predictive risk scoring (e.g., in areas such as recidivism, domestic violence, and child protection), and fraud detection.Footnote 9

As noted by the NSW Parliamentary Research Service, while there has been some international progress on increased transparency of ADM, no Australian jurisdiction appears to be working on creating a registry of ADM systems.Footnote 10 Additionally, in no Australian jurisdiction do government agencies currently have any general obligation to notify or report on their use of ADM tools. Nor does it appear that they routinely tell people if decisions are being made by or with the assistance of ADM tools. This lack of visibility means that currently it is not known how many government agencies are using, or developing, ADM tools to assist them in the exercise of their statutory functions, or which cohorts they impact. This is a substantial barrier to external scrutiny of government use of ADM tools.
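To make the idea of visibility concrete: a public register of the kind mooted internationally could be as simple as a structured record per tool. The following Python sketch is purely illustrative – no such register or schema is prescribed in any Australian jurisdiction, and every field name and value here is a hypothetical assumption.

```python
from dataclasses import dataclass, field

@dataclass
class ADMRegisterEntry:
    """A hypothetical entry in a public register of government ADM tools."""
    agency: str               # agency operating the tool
    tool_name: str
    statutory_function: str   # the function the tool supports (e.g., fines enforcement)
    decision_role: str        # whether it "assists", "augments" or "replaces" a human
    affected_cohorts: list = field(default_factory=list)
    last_audit: str = "not yet audited"

# Example (invented) entry of the kind an agency might be required to publish:
entry = ADMRegisterEntry(
    agency="Example NSW Agency",
    tool_name="Eligibility Pre-Screener",
    statutory_function="Assessing eligibility for a benefit",
    decision_role="assists",
    affected_cohorts=["benefit applicants"],
)
print(entry)
```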

6.2.3 The Risks of ‘Maladministration’

Clearly, there are many situations in which government agencies can use appropriately designed ADM tools to assist in the exercise of their functions in ways that are compatible with lawful and appropriate conduct. Indeed, in some instances automation may improve aspects of good administrative conduct – such as accuracy and consistency in decision-making – as well as mitigating the risk of individual human bias. However, if ADM tools are not designed and used in accordance with administrative law and associated principles of good administrative practice, then their use could constitute or involve ‘maladministration’ (for example, unlawful, unreasonable, or unjust conduct).Footnote 11 This is where an agency’s conduct may attract the attention of the Ombudsman, whose role generally is to oversee government agencies and officials to ensure that they are conducting themselves lawfully, making decisions reasonably, and treating all individuals equitably and fairly. Maladministration can, of course, also result in legal challenges, including a risk that administrative decisions or actions may later be held by a court to have been unlawful or invalid.Footnote 12

6.3 Administrative Law and ADM Technologies

There is an important ongoing discussion about the promises and potential pitfalls of the most highly sophisticated forms of AI technology in the public sector. However, maladministration as described above can arise when utilising technology that is substantially less ‘intelligent’ than many might expect. The case studies in the Machine Technology Report illustrate a range of issues relating to administrative conduct, for example, the automation of statutory discretion, the translation of legislation into code, and ADM governance. Only some aspects of the technologies used in those case studies would be described as AI. In any case, the focus from an administrative law and good conduct perspective is not so much on what the technology is, but what it does, and the risks involved in its use in the public sector.Footnote 13

Mistakes made when translating law into a form capable of execution by a machine will likely continue to be the most common source of unlawful conduct and maladministration in public sector use of ADM tools. While of course unaided human decision-makers can and do also make mistakes, the ramifications of automation errors may be far more significant. The likelihood of error may be higher, as the natural language of law does not lend itself easily to translation into machine code. The scale of error is likely to be magnified. The detection of error can be more difficult, as error will not necessarily be obvious to any particular person affected, and even where error is suspected, identifying its source and nature may be challenging even for the public authority itself. A machine itself is, of course, incapable of ever doubting the correctness of its own outputs. Rectifying errors may be more cumbersome, costly, and time-consuming, particularly if it requires a substantial rewriting of machine code, and especially where a third party vendor may be involved.
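A deliberately trivial sketch (in Python, not drawn from the report’s case studies) of how a single-character translation error can produce unlawful outcomes at scale: suppose a statute confers a benefit on a person ‘who has attained the age of 18’, and the rule is coded with a strict rather than an inclusive comparison.

```python
AGE_THRESHOLD = 18

def eligible_buggy(age: int) -> bool:
    # Bug: the statute covers a person who *has attained* 18,
    # but '>' silently excludes everyone who is exactly 18.
    return age > AGE_THRESHOLD

def eligible_correct(age: int) -> bool:
    return age >= AGE_THRESHOLD

# Run at scale, the faulty rule refuses every 18-year-old applicant,
# and no individual refusal looks obviously wrong on its face.
for age in (17, 18, 19):
    print(age, eligible_buggy(age), eligible_correct(age))
# 17 False False
# 18 False True   <- the error
# 19 True  True
```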

6.3.1 The Centrality of Administrative Law and Principles of Good Administrative Practice

Some of the broader concerns about the use of ADM tools by the private sector, in terms of privacy, human rights, ethics, and so on, also apply (in some cases with greater relevance) to the public sector.Footnote 14 However, the powers, decisions, and actions of government agencies and officials are constitutionally different from those of the general private sector.

Public authorities exercise powers that impact virtually all aspects of an individual’s life – there is ‘scarcely any field of human activity which is not in some way open to aid or hindrance by the exercise of power by some public authority’.Footnote 15 The inherently ‘public’ nature of such functions (such as health, education, and transport), and the specific focus of some government service provision on groups of people likely to experience vulnerability, mean that the government’s use of ADM tools will necessarily, and often significantly, impact most of society. Recipients of government services – unlike customers of private sector businesses – are also typically unable to access alternative providers or to opt out entirely if they do not like the way decisions are made and services are provided. Most importantly, governments do not just provide services – they also regulate the activity of citizens and exercise a monopoly over the use of public power and coercive force – for example, taxation, licensing, law enforcement, punishment, and forms of detention. It is in the exercise of functions like these, which can affect people’s legal status, rights, and interests, that administrative decision-making principles raise issues unique to the public sector. Because governments have a monopoly over public administrative power, the exercise of that power is controlled through public administrative law. Any use of ADM tools by government agencies must therefore be considered from an administrative law perspective – which is not to disregard or diminish the importance of other perspectives, such as broader ethicalFootnote 16 and human rightsFootnote 17 concerns.

This administrative law – the legal framework that controls government action – does not necessarily stand in the way of adopting ADM tools, but it will significantly control the purposes to which they can be put and the ways in which they can operate in any particular context. The ultimate aim of administrative law is good government according to law.Footnote 18 Administrative law is essentially principles-based and can be considered, conceptually at least, to be ‘technology agnostic’. This means that, while the technology used in government decision-making may change, the norms that underpin administrative law remain constant. The essential requirements of administrative law for good decision-making can be grouped into four categories: proper authorisation, appropriate procedures, appropriate assessment, and adequate documentation. Administrative law is more complex than this simple list may suggest, and there are more technically rigorous ways of classifying its requirements.Footnote 19 There are, of course, also myriad ways in which administrative decision-making can go wrong – some of the more obvious considerations and risks when ADM tools are used are highlighted below.

6.3.2 Proper Authorisation

When Parliament creates a statutory function, it gives someone (or more than one person) power to exercise that function. This person must be a ‘legal person’, which can be a natural person (a human being) or a legally recognised entity, such as a statutory corporation, legally capable of exercising powers and being held accountable for obligations.Footnote 20 Proper authorisation means there must be legal power to make the relevant decision, that the person making the decision has the legal authority to do so, and that the decision is within the scope of the decision-making power (including, in particular, within the bounds of any discretion conferred by the power). The requirement for proper authorisation means that statutory functions are not, and cannot be, granted or delegated to ADM systems,Footnote 21 but only to a legal subject (a someone) and not a legal object (a something).Footnote 22

However, a person who has been conferred (or delegated) the function may be able to obtain assistance in performing their statutory functions, at least to some extent.Footnote 23 This is recognised by the Carltona principle.Footnote 24 In conferring a statutory function on an administrator, Parliament does not necessarily intend that the administrator personally undertake every detailed component or step of the function. As a matter of ‘administrative necessity’, some elements of a function might need to be shared with others who are taken to be acting on the administrator’s behalf. The reasoning underlying the Carltona principle appears to be sufficiently general that it could extend to permit at least some uses of ADM tools. However, the principle is based on a necessity imperative,Footnote 25 and cannot be relied upon to authorise the shared performance of a function merely on the basis that it might be more efficient or otherwise desirable to do so.Footnote 26 While the Carltona principle may be extended in the future,Footnote 27 whether and how that might happen is not clear and will depend on the particular statutory function.Footnote 28

The Carltona principle is not the only means by which administrators may obtain assistance, whether from other people or other things, to help them better perform their functions. For example, depending on the particular function, administrators can (and in some cases should, or even must) draw upon others’ scientific, medical, or other technical expertise. Sometimes, this input can even be adopted as a component of the administrator’s decision for certain purposes.Footnote 29 It can be expected that, like the obtaining of expert advice and the use of traditional forms of technology, there will be at least some forms and uses of sophisticated ADM tools that will come to be recognised as legitimate tools administrators can use to assist them to perform their functions, within the implicit authority conferred on them by the statute. However, whether and the extent to which this is so will need to be carefully considered on a case-by-case basis, taking into account the particular statutory function, the proposed technology, and the broader decision-making context in which the technology will be used.

Additionally, if the function is discretionary, ADM tools must not be used in a way that would result in that discretion being fettered or effectively abandoned. By giving an administrator a discretion, Parliament has relinquished some element of control over individual outcomes, recognising that those outcomes cannot be prescribed or pre-ordained in advance by fixed rules. But at the same time, Parliament is also prohibiting the administrator from setting and resorting to its own rigid and pre-determined rules that Parliament has chosen not to fix.Footnote 30 This means that exercising a discretion that Parliament has given to an administrator is just as important as complying with any fixed rules Parliament has prescribed. Over time, administrative law has developed specific rules concerning the exercise of statutory discretions. These include the so-called rule against dictation and rules governing (and limiting) the use of policies and other guidance material to regulate the use of discretion. Such rules are best viewed as applications of the more general principle described above – that where a statute gives discretion to an administrator, the administrator must retain and exercise that discretion. Those given a discretionary statutory function must, at the very least, ‘keep their minds open for the exceptional case’.Footnote 31 Given this principle, some uses of ADM tools in the exercise of discretionary functions may be legally risky. This was the view of the Australian Administrative Review Council, which concluded that, while ‘expert systems’ might be used to assist an administrator to exercise a discretionary function, the exercise of the discretion should not be automated, and any expert systems designed to assist in the exercise of discretionary functions should not fetter the administrator’s exercise of that function.Footnote 32 At least on current Australian authorities, ADM tools cannot be used in the exercise of discretionary functions if (and to the extent that) doing so would result in the discretion being effectively disregarded or fettered.Footnote 33 If the introduction of automation into a discretionary decision-making system has the effect that the administrator is no longer able to – or does not in practice – continue to exercise genuine discretion, that system will be inconsistent with the statute that granted the discretion, and its outputs will be unlawful.Footnote 34 In practice, this suggests that discretionary decisions cannot be fully automated by ADM tools.Footnote 35

6.3.3 Appropriate Procedures

Good administrative decision-making requires a fair process. Appropriate procedures means that the decision has followed a procedurally fair process, that the procedures comply with other obligations including under privacy, freedom of information, and anti-discrimination laws, and that reasons are given for the decision (particularly where it significantly affects the rights or interests of individuals). Generally, a fair process requires decisions to be made without bias on the part of the decision-maker (the ‘no-bias rule’) and following a fair hearing of the person affected (the ‘hearing rule’). ADM tools can introduce the possibility of a different form of bias known as ‘algorithmic bias’,Footnote 36 which arises when a machine produces results that are systemically prejudiced or unfair to certain groups of people. Although it is unclear whether the presence of algorithmic bias would necessarily constitute a breach of the no-bias rule, it may still lead to unlawful decisions (based on irrelevant considerations or contravening anti-discrimination laws) or other maladministration (involving or resulting in unjust or improperly discriminatory conduct). Having appropriate procedures also means providing, where required, accurate, meaningful, and understandable reasons to those who are affected by a decision, which can be challenging when ADM tools have made or contributed to the making of that decision.
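One routine, if crude, check for algorithmic bias is to compare outcome rates across cohorts and flag large disparities for human investigation. The sketch below uses hypothetical data and an assumed threshold; a disparity is a signal for further inquiry, not of itself proof of a breach of the no-bias rule or of anti-discrimination laws.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (cohort_label, approved: bool) pairs."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for cohort, ok in decisions:
        totals[cohort] += 1
        approved[cohort] += ok
    return {cohort: approved[cohort] / totals[cohort] for cohort in totals}

# Hypothetical audit sample: cohort B is approved far less often than cohort A.
sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 55 + [("B", False)] * 45)
rates = approval_rates(sample)
print(rates)  # {'A': 0.8, 'B': 0.55}

DISPARITY_THRESHOLD = 0.1  # assumed tolerance; a policy choice, not a legal rule
if max(rates.values()) - min(rates.values()) > DISPARITY_THRESHOLD:
    print("Disparity flagged: refer outputs for human review")
```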

6.3.4 Appropriate Assessment

Appropriate assessment means that the decision answers the right question, is based on a proper analysis of relevant material and on the merits, and is reasonable in all the circumstances. Using ADM tools in the exercise of statutory functions means translating legislation and other guidance material (such as policy) into machine-readable code. A key risk is the potential for errors in this translation process, and consequently for unlawful decisions being made at scale. Any errors may mean that, even in circumstances where technology can otherwise be used consistently with principles of administrative law, doubts will arise about the legality and reliability of any decisions and actions of the public agency relying upon the automation process.Footnote 37 When designing and implementing ADM tools, it is also essential to ensure that their use does not result in any obligatory considerations being overlooked or extraneous considerations coming into play. While the use of automation may enhance the consistency of outcomes, agencies with discretionary functions must also be conscious of the duty to treat individual cases on their own merits.

6.3.5 Adequate Documentation

Agencies are required to properly document and keep records of decision-making. In the context of ADM tools, this means keeping sufficient records to enable comprehensive review and audit of decisions. Documentation relating to different ‘versions’Footnote 38 of the technology, and details of any updates or changes to the system, may be particularly important.
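By way of a sketch only (the chapter prescribes no particular format), one way to satisfy this is to log, for every automated output, the tool version, the inputs, the output, and the responsible officer, so that a reviewer can later reconstruct which ‘version’ of the system produced a given decision. All names and fields here are assumptions.

```python
import json
from datetime import datetime, timezone

def record_decision(tool_name, tool_version, inputs, output, officer_id,
                    log_path="adm_audit_log.jsonl"):
    """Append an audit record sufficient to reconstruct the decision later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool_name,
        "tool_version": tool_version,  # ties the output to a specific system version
        "inputs": inputs,
        "output": output,
        "responsible_officer": officer_id,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

record_decision("Eligibility Pre-Screener", "2.3.1",
                {"age": 18}, "eligible", officer_id="A123")
```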

6.4 Designing ADM Tools to Comply with the Law and Fundamental Principles of Good Government

To better manage the risks of maladministration in the use of ADM tools, there are at least five broad considerations that government agencies must address when designing and implementing ADM systems to support the exercise of an existing statutory function.Footnote 39 Dealing with those comprehensively will assist compliance with the principles of administrative law and good decision-making practice.

6.4.1 Putting in Place the Right Team

Adopting ADM tools to support a government function should not be viewed as simply, or even primarily, an information technology project. Legislative interpretation requires specialist skills, and the challenge is likely to be especially pronounced when seeking to translate law into what amounts to a different language – that is, a form capable of being executed by a machine.Footnote 40 Agencies need to establish a multidisciplinary design team that involves lawyers, policymakers, and operational experts, as well as technicians, with clearly defined roles and responsibilities.Footnote 41 It is clearly better for all parties (including for the efficiency and reputation of the agency itself) if ADM tools are designed with the involvement of those best placed to know whether the system is delivering demonstrably lawful and fair decisions, rather than having to ‘retrofit’ that expertise into the system later, when it is challenged in court proceedings or during an Ombudsman investigation.Footnote 42 Interpreting a statute to arrive at its correct meaning can be complex, and can challenge even highly experienced administrative officials and lawyers.Footnote 43 Even legal rules that appear to be straightforwardly ‘black and white’, and therefore appropriate candidates for ADM use, can nonetheless have a nuanced scope and meaning. They may also be subject to administrative law principles – such as underlying assumptions (for example, the principle of legality)Footnote 44 and procedural fairness obligations – which would not be apparent on the face of the legislation.

6.4.2 Determining the Necessary Degree of Human Involvement

Government agencies using ADM tools need to assess the appropriate degree of human involvement in the decision-making processes – discretionary and otherwise – having regard to the nature of the particular function and the statute in question. What level of human involvement is necessary? This is not a straightforward question to answer. As noted earlier, any statutory discretion will require that a person (to whom the discretion has been given or delegated) makes a decision – including whether and how to exercise that discretion. Given that ADM tools do not have a subjective mental capacity, their ‘decisions’ may not be recognised by law as decisions.Footnote 45 Merely placing a ‘human-on-top’ of a process will not, of itself, validate the use of ADM tools in the exercise of a discretionary function.Footnote 46 The need for a function to be exercised by the person to whom it is given (or delegated) has also been emphasised in Australian Federal Court decisions concerning the exercise of immigration discretions, which have referred to the need for ‘active intellectual consideration’,Footnote 47 an ‘active intellectual process’,Footnote 48 or ‘the reality of consideration’Footnote 49 by an administrator when making a discretionary decision.Footnote 50 The ‘reality of consideration’ may look different in different administrative contexts, in proportion to the nature of the function being exercised and the consequences it has for those it may affect. However, the principle remains relevant to the exercise of all discretionary functions – some level of genuine and active decision-making by a particular person is required. In a 2022 Federal Court matter, it was held that a minister had failed to personally exercise a statutory power as required. As the NSW Crown Solicitors Office noted, ‘The decision emphasises that, whilst departmental officers can assist with preparing draft reasons, a personal exercise of power requires a minister or relevant decision-maker to undertake the deliberate task by personally considering all relevant material and forming a personal state of satisfaction.’Footnote 51 What matters is not just that there is the required degree of human involvement on paper – there must be that human involvement in practice.

When designing and implementing ADM tools, government agencies also need to consider how the system will work in practice and over time, taking into account issues such as natural human biases and behaviour, and organisational culture. They must also recognise that those who will be making decisions supported by ADM tools in the future will not necessarily be the people who were involved in the tools’ original conception, design, and implementation. The controls and mitigations needed to avoid ‘creeping control’ by ADM tools will need to be fully documented so they can be rigorously applied going forward.

There are several factors that are likely to be relevant to consider in determining whether there is an appropriate degree of human involvement in an ADM system. One is time – does the process afford the administrator sufficient time to properly consider the outputs of the tool and any other relevant individual circumstances of the case(s) in respect of which the function is being exercised? Does the administrator take this time in practice? Cultural acceptance is also important, particularly as it can change over time. Are there systems in place to overcome or mitigate automation-related complacency or technology bias, to scrutinise and raise queries about the output of the ADM tool, and to undertake further inquiries? If the administrator considers it appropriate, can they reject the output of the ADM tool? Is the authority of the administrator to question and reject the outputs respected and encouraged? Does it happen in practice?

Some other factorsFootnote 52 relevant to active human involvement include: the administrator’s access to the source material used by the ADM tool and to other material relevant to their decision; the seniority and experience of the administrator in relation to the type of decision being made; whether the administrator is considered responsible for the decisions they make; and whether the administrator can make, or require to be made, changes to the ADM tool to better support their decision-making. Finally, an appreciation of the impact of the decision – a genuine understanding of what their decision (and what a different decision) would mean in reality, including for the individuals who may be affected – is also likely to be relevant.Footnote 53 It is particularly important that the relevant administrator, and others responsible for analysing or working with the outputs of the technology, have a sufficient understanding of the technology and what its outputs actually mean in order to be able to use them appropriately.Footnote 54 This is likely to mean that comprehensive training, both formal and on-the-job, will be required on an ongoing basis.
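Several of these factors can be evidenced, although never guaranteed, by how the decision workflow itself is built. The sketch below is a hypothetical wrapper around the human step that records the time the officer spent and whether they overrode the tool’s output; records of this kind can help demonstrate, but cannot substitute for, ‘active intellectual consideration’ in practice.

```python
import time
from dataclasses import dataclass

@dataclass
class ReviewOutcome:
    tool_output: str
    officer_decision: str
    overridden: bool
    seconds_spent: float

def reviewed_decision(tool_output, decide):
    """Wrap the human step so that time spent and any override are on the record."""
    start = time.monotonic()
    officer_decision = decide(tool_output)  # the officer may accept or reject the output
    return ReviewOutcome(
        tool_output=tool_output,
        officer_decision=officer_decision,
        overridden=(officer_decision != tool_output),
        seconds_spent=time.monotonic() - start,
    )

# Hypothetical use: the officer rejects the tool's suggestion.
outcome = reviewed_decision("suspend licence", decide=lambda s: "do not suspend")
print(outcome.overridden)  # True
```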

6.4.3 Ensuring Transparency Including Giving Reasons

In traditional administrative decision-making, a properly prepared statement of reasons will promote accountability in at least two ways, which can be referred to as explainability and reviewability. The former enables the person who is affected by the decision to understand it, and provides a meaningful justification for the decision. The latter refers to the manner and extent to which the decision, and the process that led to the decision, can be reviewed. A review may be by the affected persons themselves, or by another person or body, such as an Ombudsman or a court, to verify that it was lawful, reasonable, and otherwise complied with norms of good decision-making. With ADM, these two aspects of accountability tend to become more distinct.

Agencies need to ensure appropriate transparency of their ADM tools, including by deciding what can and should be disclosed about their use to those whose interests may be affected. An explanation of an automated decision might include information about the ADM tool’s objectives and the data used, its accuracy or success rate, and an explanation, meaningful and intelligible to an ordinary person, of how the technology works. When a human makes a decision, the reasons given do not refer to brain chemistry or the intricate process that commences with a particular set of synapses firing and culminates in a movement of the physical body giving rise to vocalised or written words. Likewise, explaining how an ADM tool works in a technical way, even if that explanation is fully comprehensive and accurate, will not necessarily satisfy the requirement to provide ‘reasons’ for its outputs. Reasons must be more than merely accurate – they should provide a meaningful and intelligible ‘explanation’Footnote 55 to the person who is to receive them. Generally, this means they should be in plain English, and provide information that would be intelligible to a person with no legal or technical training. Of course, the statement of reasons should also include the usual requirements for decision notices, including details of how the decision may be challenged or reviewed, and by whom. If a review is requested or required, then further ‘reasons’ may be needed – more technical ones that enable the reviewer to ‘get under the hood’ of the ADM tool to identify any possible error.
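As a sketch of the distinction (format and wording entirely hypothetical), a plain-English reasons statement can be assembled from the factors actually relied on, rather than from the tool’s internal mechanics:

```python
def plain_english_reasons(recipient, outcome, key_factors, review_contact):
    """Assemble a reasons statement aimed at a person with no technical training."""
    lines = [
        f"Dear {recipient},",
        f"Your application was {outcome}. The main factors taken into account were:",
    ]
    lines += [f"  - {factor}" for factor in key_factors]
    lines.append(f"You may seek a review of this decision by contacting {review_contact}.")
    return "\n".join(lines)

print(plain_english_reasons(
    "A. Example", "refused",
    ["your licence was suspended on two prior occasions",
     "no exceptional circumstances were identified"],
    "the agency's internal review unit"))
```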

Although provision of computer source code may not be necessary or sufficient as a statement of reasons, there should be (at least) a presumption in favour of proactively publishing specifications and source code of ADM technology used in decision-making. A challenge here may arise when government engages an external provider for ADM expertise.Footnote 56 Trade secrets and commercial-in-confidence arrangements should not be more important than the value of transparency and the requirement, where it exists, to provide reasons. Contractual confidentiality obligations negotiated between parties must also be read as being subject to legislation that compels the production of information to a court, tribunal, or regulatory or integrity body.Footnote 57 As a minimum, agencies should ensure that the terms of any commercial contracts they enter in respect of ADM technology will not preclude them from providing comprehensive details (including the source code and data sets) to the Ombudsman, courts, or other review bodies to enable them to review the agency’s conduct for maladministration or legal error.

6.4.4 Verification, Testing, and Ongoing Monitoring

It is imperative both to test ADM tools before operationalising them and to establish ongoing monitoring, audit, and review processes. Systems and processes need to be established up front to safeguard against inaccuracy and unintended consequences, such as algorithmic bias.Footnote 58 Agencies need to identify ways of testing that go beyond whether the ADM tool is performing according to its programming, to consider whether its outputs are legal, fair, and reasonable. This means that the costs of ongoing testing, governance processes, system maintenance, and staff training need to be factored in from the outset when evaluating the costs and benefits of moving to an automated system. Ignoring or underestimating these future costs and focusing only on apparent up-front cost savings (by simplistically comparing an ADM tool’s build and running costs against the expenses, usually wages, of existing manual processes) will present an inflated picture of the financial benefits of automation. It also ignores other qualitative considerations, such as decision-making quality and legal risk.
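A minimal sketch of output-level testing, under the assumption that a reference set of cases already decided through the manual process exists: the tool is run over those cases and release is blocked if disagreement with the human decisions exceeds an agreed tolerance. The function and threshold are illustrative, not a prescribed methodology.

```python
def test_against_reference(tool, reference_cases, tolerance=0.0):
    """Compare tool outputs with decisions reached through the manual process.

    reference_cases: (inputs, expected_decision) pairs decided by experienced
    officers. Disagreement above `tolerance` blocks release of the tool.
    """
    mismatches = []
    for inputs, expected in reference_cases:
        actual = tool(inputs)
        if actual != expected:
            mismatches.append((inputs, expected, actual))
    rate = len(mismatches) / len(reference_cases)
    assert rate <= tolerance, (
        f"{len(mismatches)} of {len(reference_cases)} outputs ({rate:.0%}) "
        "disagree with manual decisions; investigate before go-live"
    )
    return mismatches

# Hypothetical use: a trivial rule and three manually decided cases.
test_against_reference(lambda age: age >= 18, [(17, False), (18, True), (19, True)])
```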

6.4.5 The Role of Parliament in Authorising ADM Tools

If the implementation of ADM tools would be potentially unlawful or legally risky, this raises the question: can and should the relevant statute be amended to expressly authorise the use of ADM tools? Seeking express legislative authorisation for the use of ADM tools not only reduces the risks for agencies, but gives Parliament and the public visibility of what is being proposed, and an opportunity to consider what other regulation of the technology may be required. There is a growing practice, particularly in the Australian Commonwealth Parliament, of enacting provisions that simply authorise, in very general terms, the use of computer programs for the purpose of certain statutory decisions. A potential risk of this approach is complacency, if agencies mistakenly believe that such a provision, of itself, means that the other risks and considerations related to administrative law and good practice (see Section 6.3) do not need to be considered. Perhaps more importantly, this approach of legislating only to ‘authorise’ the use of ADM tools in simple terms seems to be a missed opportunity. If legislation is going to be introduced to enable the use of ADM tools for a particular statutory process, that also presents an opportunity for public and Parliamentary debate on the properties that the process should be required to exhibit to meet legal, Parliamentary, and community expectations of good administrative practice. Whether or not these properties are ultimately prescribed as mandatory requirements in the legislation itself (or some other overarching statutory framework), they can guide comprehensive questions that should be asked of government agencies seeking legislative authorisation of ADM tools, as illustrated below.

Is It Visible?

What information does the public, and especially those directly affected, need to be told regarding the involvement of the ADM tool, how it works, its assessed accuracy, its testing schedule, and so on? Are the design specifications and source code publicly available – for example, as ‘open access information’ under freedom of information legislation? Is an impact assessment required to be prepared and published?Footnote 59

Is It Avoidable?

Can an individual ‘opt out’ of the automation-led process and choose to have their case decided through a manual (human) process?

Is It Subject to Testing?

What testing regime must be undertaken prior to operation, and at scheduled times thereafter? What are the purposes of testing (e.g., compliance with specifications, accuracy, identification of algorithmic bias)? Who is to undertake that testing? What standards are to apply (e.g., randomised control trials)? Are the results to be made public?

Is It Explainable?

What rights do those affected by the automated outputs have to be given reasons for those outcomes? Are reasons to be provided routinely or on request? In what form must those reasons be given and what information must they contain?

Is It Accurate?

To what extent must the predictions or inferences of the ADM tool be demonstrated to be accurate? For example, is ‘better than chance’ sufficient, or is the tolerance for inaccuracy lower? How and when will accuracy be evaluated?

Is It Subject to Audit?

What audit records must the ADM tool maintain? What audits are to be conducted (internally and externally), by whom and for what purpose?

Is It Replicable?

Must the output of the ADM tool be replicable, in the sense that, if exactly the same inputs were re-entered, the tool would consistently produce the same output? Or can the ADM tool improve or change over time? If the latter, must the ADM tool be able to identify why its output now differs from what it was previously?

Is It Internally Reviewable?

Are the outputs of the ADM tool subject to internal review by a human decision-maker? What is the nature of that review (e.g., full merits review)? Who has standing to seek such a review? Who has the ability to conduct that review, and are they sufficiently senior and qualified to do so?

Is It Externally Reviewable?

Are the outputs of the ADM tool subject to external review by, or complaint to, a human decision-maker? What is the nature of that review (e.g., merits review or review for error only)? Who has standing to seek such a review? If reviewable for error, what records are available to the review body to enable it to inspect the decision thoroughly and detect error?

Is It Compensable?

Are those who suffer detriment from an erroneous action of the ADM tool entitled to compensation, and how is that determined?

Is It Privacy Protective and Data Secure?

What privacy and data security measures and standards are required to be adhered to? Is a privacy impact assessment required to be undertaken and published? Are there particular rules limiting the collection, use, and retention of personal information?

The properties suggested above are not exhaustive and the strength of any required properties may differ for different technologies and in different contexts. For example, in some situations, a process with a very strong property of reviewability may mean that a relatively weaker property of explainability will be acceptable.

6.5 Conclusion

Appropriate government use of ADM tools starts with transparency. The current lack of visibility means that it is not well known how many government agencies in NSW are using or developing ADM tools to assist in the exercise of administrative functions or what they are being used for. Nor is it possible to know who is impacted by the use of ADM tools, what validation and testing is being undertaken, whether there is ongoing monitoring for accuracy and bias, and what legal advice is being obtained to certify conformance with the requirements of administrative law.

Much of this chapter has focussed on how existing laws and norms of public sector administrative decision-making may control the use of ADM tools in that context. However, there are likely to be, at least initially, significant uncertainties and potentially significant gaps in the existing legal framework, given the rapid and revolutionary changes to the way government conducts itself that are likely in the coming years. Government use of ADM tools in administrative decision-making may warrant a reconsideration of the legal frameworks, institutional arrangements, and rules that apply. It may be, for example, that existing administrative law mechanisms of redress, such as judicial review or complaint to the Ombudsman, will be considered too slow or too individualised to provide an appropriate response to concerns about systemic injustices arising from algorithmic bias.Footnote 60 Modified frameworks may be needed – for example, to require the proactive external testing and auditing of systems, rather than merely reactive individual case review. If a statute is to be amended to specifically authorise particular uses of ADM tools, this creates an opportunity for Parliament to consider scaffolding a governance framework around that technology. That could include stipulating certain properties the system must exhibit in terms of transparency, accuracy, auditability, reviewability, and so on.

However, an open question is whether there is a need to consider more generally applicable legal or institutional reform, particularly to ensure that ADM tools are subject to appropriate governance, oversight, and review when used in a government context.Footnote 61 There may be precedent for this approach. The machinery of Australia’s modern administrative law – the administrative decisions tribunals, Ombudsman institutions, privacy commissions, and (in some jurisdictions) codified judicial review legislation – was largely installed in a short period of intense legislative reform, responding to what was then the new technology of modern government.Footnote 62 Ombudsman institutions (and other bodies performing similar and potentially more specialised roles, including, for example, human rights commissions, anti-discrimination bodies, and freedom of information (FOI) and privacy commissions) have proven useful in many areas where traditional regulation and judicial enforcement are inadequate or inefficient. Ombudsman institutions can not only respond reactively to individual complaints but also proactively inquire into potential systemic issues, and make public reports and recommendations to improve practices, policies, and legislation.Footnote 63 This ability to act proactively using ‘own motion’ powers may become increasingly relevant in the context of government use of ADM tools, partly because it seems less likely that complaints will be made about the technology itself – not least because complainants may be unaware of the role played by technology in the relevant decision. Rather, when people complain to bodies like the Ombudsman, the complaint is usually framed in terms of the outcome and its impact on the individual. It must also be recognised that, if Ombudsman institutions are to perform this oversight role, their capability will need to grow. At present, they likely lack the in-house depth of technical skills and resources needed for any sophisticated deconstruction and interrogation of data quality and modelling, which may, at least in some cases, be required for effective scrutiny and investigation of ADM tools.Footnote 64

Footnotes

* NSW Ombudsman. This chapter and the presentation given to the ‘Money, Power and AI: From Automated Banks to Automated States’ conference are edited versions of a report the Ombudsman tabled in the NSW Parliament in 2021 titled ‘The New Machinery of Government: Using Machine Technology in Administrative Decision-Making’. With appreciation to all officers of the NSW Ombudsman who contributed to the preparation of that report, including in particular Christie Allan, principal project officer, and Megan Smith, legal counsel.

1 For this reason, machinery of government or ‘MoG’ has taken on the character of a verb for public servants – to be ‘mogged’ is to find oneself, through executive order, suddenly working in a different department, or unluckier still, perhaps out of a role altogether.

2 Machinery of government changes provide an opportunity for government to express its priorities and values, or at least how it wishes those to be perceived – abolishing a department or merging it as a mere ‘branch’ into another may signal that it is no longer seen as a priority; re-naming a department (like re-naming a ministerial portfolio) provides an opportunity to highlight an issue of importance or proposed focus (e.g., a Department of Customer Service).

3 NSW Ombudsman, The New Machinery of Government: Using Machine Technology in Administrative Decision-Making (Report, 29 November 2021) <The new machinery of government: using machine technology in administrative decision-making - NSW Ombudsman>.

4 See for example Australian Government, Digital Government Strategy 2018–2025 (Strategy, December 2021) <www.dta.gov.au/sites/default/files/2021-11/Digital%20Government%20Strategy_acc.pdf>.

5 See for example the first NSW Government Digital Strategy, NSW Digital Government Strategy (Strategy, May 2017) <www.digital.nsw.gov.au/sites/default/files/DigitalStrategy.pdf>; that Strategy has been revised and replaced by NSW Government, Beyond Digital (Strategy, November 2019) <www.digital.nsw.gov.au/sites/default/files/Beyond_Digital.pdf>.

6 See Andrew Le Sueur, ‘Robot Government: Automated Decision-Making and Its Implications for Parliament’ in Alexander Horne and Andrew Le Sueur (eds), Parliament: Legislation and Accountability (Oxford: Hart, 2016) 181.

7 See Commonwealth Ombudsman, Automated Decision-Making Better Practice Guide (Guide, 2019) 5 <www.ombudsman.gov.au/__data/assets/pdf_file/0030/109596/OMB1188-Automated-Decision-Making-Report_Final-A1898885.pdf>.

8 Including health, criminal justice and education settings. A 2019 survey of US federal agency use of AI found that many agencies have experimented with AI and machine learning: David Freeman Engstrom et al, Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies (Report, February 2020) <www-cdn.law.stanford.edu/wp-content/uploads/2020/02/ACUS-AI-Report.pdf>.

9 See Jennifer Cobbe et al, ‘Centering the Rule of Law in the Digital State’ (2020) 53(10) IEEE Computer 4; Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (New York: St. Martin’s Press, 2018).

10 Daniel Montoya and Alice Rummery, ‘The Use of Artificial Intelligence by Government: Parliamentary and Legal Issues’ (e-brief, NSW Parliamentary Research Service, September 2020) 20.

11 For example, the NSW Ombudsman can generally investigate complaints if conduct falls within any of the following categories set out in section 26 of the Ombudsman Act 1974:

  (a) contrary to law,

  (b) unreasonable, unjust, oppressive, or improperly discriminatory,

  (c) in accordance with any law or established practice but the law or practice is, or may be, unreasonable, unjust, oppressive, or improperly discriminatory,

  (d) based wholly or partly on improper motives, irrelevant grounds, or irrelevant consideration,

  (e) based wholly or partly on a mistake of law or fact,

  (f) conduct for which reasons should be given but are not given,

  (g) otherwise wrong.

Conduct of the kinds set out above may be said to constitute ‘maladministration’ (although the NSW Act does not actually use that term).

12 See example ‘Services Australia Centrelink’s automated income compliance program (Robodebt)’ in NSW Ombudsman, Machine Technology Report, 27.

13 See further chapters 5–10 of NSW Ombudsman, Machine Technology Report; Marion Oswald, ‘Algorithm-Assisted Decision-Making in the Public Sector: Framing the Issues Using Administrative Law Rules Governing Discretionary Powers’ (2018) 376(2128) Philosophical Transactions of the Royal Society A 1 for a discussion of how administrative law or ‘old law – interpreted in a new context – can help guide our algorithmic-assisted future’.

14 Many of these are discussed in Australian Human Rights Commission, Human Rights and Technology (Final Report, 1 March 2021).

15 New South Wales Law Reform Commission, Appeals in Administration (Report 16, December 1972) 6.

16 See Madeleine Waller and Paul Waller, ‘Why Predictive Algorithms Are So Risky for Public Sector Bodies’ (Article, October 2020) <https://ssrn.com/abstract=3716166> who argue that consideration of ethics may be ‘superfluous’:

The understanding of ‘ethical behaviour’ depends on social context: time, place and social norms. Hence we suggest that in the context of public administration, laws on human rights, statutory administrative functions, and data protection provide the basis for appraising the use of algorithms: maladministration is the primary concern rather than a breach of ‘ethics’: at 4–5, 11.

17 Of course, although not explicitly couched in ‘human rights’ terms, a core preoccupation of administrative law and good administrative practice is the protection of fundamental human rights: see Australian Human Rights Commission, Human Rights and Technology, 55.

18 Corporation of the City of Enfield v Development Assessment Commission [2000] HCA 5; (2000) 199 CLR 135, 157 at 56.

19 For example, requirements can be grouped according to whether a failure to comply with them gives rise to a right to challenge the decision in the courts by way of judicial review, and if they do the various individual ‘grounds’ of such review. They can also be grouped broadly by considering whether a failure to comply with them would mean: (a) the decision is invalid (jurisdictional error); (b) there has been some other breach of law (other legal error); or (c) the decision, or its processes, is otherwise wrong (for example, in a way that could result in an adverse finding under section 26 of the Ombudsman Act 1974 (NSW)).

20 There have separately been questions raised as to whether the constitutionally entrenched rights of judicial review (Commonwealth of Australia Constitution Act s 75(v)) may be affected by a move towards the automation of administrative decision-making, as those rights refer to relevant orders being ‘sought against an officer of the Commonwealth’: Yee-Fui Ng and Maria O’Sullivan, ‘Deliberation and Automation – When Is a Decision a “Decision”?’ (2019) 26 Australian Journal of Administrative Law 31–32. On the other hand, it might be that this constitutional provision could ultimately come to limit the ability of the government to adopt fully autonomous machines. In particular, might it be inconsistent with this provision – and therefore constitutionally impermissible – for an agency to put in place autonomous mechanisms in such a way that would result in there being no ‘officer of the Commonwealth’ against whom orders could be sought for legal (jurisdictional) errors? See Will Bateman and Julia Powles, Submission to the Australian Human Rights Commission, Response to the Commission’s Discussion Paper (2020) (‘Any liability rules which sought to circumvent that constitutional rule (section 75(v)) would be invalid …’).

21 Currently, the law recognises as ‘legal persons’ both individuals and certain artificial persons, such as companies and other legally incorporated bodies. Despite suggestions that AI may one day develop to such a degree that the law might recognise such a system as having legal personality, this is clearly not the case today. See Will Bateman, ‘Algorithmic Decision-Making and Legality: Public Law Dimensions’ (2020) 94 Australian Law Journal 529–30.

22 Of course, it is conceivable that legislation could be amended so that something that is now required or permitted to be done by a human administrator is instead to be done in practice by a machine. However, depending on how the legislation is drafted, the proper legal characterisation will not be that the statutory function has moved (from the human administrator to the machine) but rather that the statutory function itself has changed. For example, a legislative amendment may result in an administrator, whose original statutory function is to perform a certain decision-making task, instead being conferred a statutory function to design, install, maintain, etc. a machine that will perform that task.

23 However, an administrator cannot abdicate to others those elements of a function where the administrator must form their own opinion: see New South Wales Aboriginal Land Council v Minister Administering the Crown Lands Act (the Nelson Bay Claim) [2014] NSWCA 377.

24 Carltona Ltd v Commissioner of Works [1943] 2 All ER 560.

25 ‘Practical necessity’ in O’Reilly v Commissioners of State Bank of Victoria [1983] HCA 47; (1983) 153 CLR 1 at 12.

26 New South Wales Aboriginal Land Council v Minister Administering the Crown Lands Act [2014] NSWCA 377 at 38.

27 See Katie Miller, ‘The Application of Administrative Law Principles to Technology-Assisted Decision-Making’ (2016) 86 Australian Institute of Administrative Law Forum 20 at 22. Miller argues that ‘[t]he need to avoid administrative “black boxes” which are immune from review or accountability may provide a basis for extending the Carltona principle to public servants in the context of technology-assisted decision-making to ensure that actions of technology assistants are attributable to a human decision-maker who can be held accountable’.

28 Given uncertainty around the application of the Carltona principle (which is based on an inference as to Parliament’s intent), the Commonwealth Ombudsman has suggested that the authority to use machine technology ‘will only be beyond doubt if specifically enabled by legislation’: Commonwealth Ombudsman, ‘Automated Decision-Making Guide’, 9. That is, rather than inferring that Parliament must have intended that administrators be able to seek the assistance of machines, Parliament could expressly state that intention.

There are already some rudimentary examples of such legislative provisions, but they are not without their own problems. See further chapter 15 of NSW Ombudsman, Machine Technology Report.

29 See, for example, Commissioner of Victims Rights v Dobbie [2019] NSWCA 183, which involved legislation requiring a decision-maker to obtain and have regard to a report written by a relevantly qualified person but not being legally bound to accept and act on that assessment.

30 NEAT Domestic Trading Pty Limited v AWB Limited [2003] HCA 35; (2003) 216 CLR 277 at 138.

31 Ibid at 150, citing, among other authorities, R v Port of London Authority; Ex parte Kynoch Ltd [1919] 1 KB 176 at 184; Green v Daniels [1977] HCA 18; (1977) 51 ALJR 463 at 467; and Kioa v West [1985] HCA 81; (1985) 159 CLR 550 at 632–33.

32 Administrative Review Council, Automated Assistance in Administrative Decision Making (Report No 46, 1 January 2004) <www.ag.gov.au/sites/default/files/2020-03/report-46.pdf> 15–16.

33 James Emmett SC and Myles Pulsford, Legality of Automated Decision-Making Procedures for the Making of Garnishee Orders (Joint Opinion, 29 October 2020) 11 [35] from ‘Annexure A – Revenue NSW case study’ in NSW Ombudsman, Machine Technology Report: ‘Subject to consideration of issues like agency (see Carltona Ltd v Commissioner of Works [1943] 2 All ER 560) and delegation, to be validly exercised a discretionary power must be exercised by the repository of that power’.

34 Of course, machines themselves are inherently incapable of exercising discretion. Even if machines could exercise discretion, their doing so would not be consistent with the legislation, which has conferred the discretion on a particular (human) administrator.

35 See ‘Annexure A – Revenue NSW case study’ in NSW Ombudsman, Machine Technology Report for a detailed case study relating to a NSW Ombudsman investigation where proper authorisation, discretionary decision-making, and the need for a decision-maker to engage in an active intellectual process were key issues.

36 Algorithmic bias may arise without any intention to discriminate, without any awareness that it is occurring, and despite the best intentions of designers to exclude data fields that record any sensitive attributes or any obvious (to humans) proxies. See examples under ‘Algorithmic bias’ in NSW Ombudsman, Machine Technology Report, 35.
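
To make the proxy problem concrete, the following minimal Python sketch shows how a rule derived from historical decisions can reproduce bias against a protected group even though the protected attribute is never consulted: a correlated field – here, a hypothetical postcode – carries the signal instead. All field names, rates, and data below are invented for illustration and do not describe any real government system.

import random

random.seed(0)

def make_record():
    # The protected attribute exists in the world but is never stored.
    protected = random.random() < 0.5
    # Postcode correlates strongly with the protected attribute.
    postcode = "2999" if (protected and random.random() < 0.8) else "2000"
    # Historical human decisions were (hypothetically) biased.
    approval_rate = 0.3 if protected else 0.7
    return {"postcode": postcode, "approved": random.random() < approval_rate}

records = [make_record() for _ in range(10_000)]

# A naive 'learned' rule: approve applicants from postcodes whose
# historical approval rate exceeds 50%. The protected attribute is
# never used, yet the rule tracks it through the postcode proxy.
stats = {}
for record in records:
    approved, total = stats.get(record["postcode"], (0, 0))
    stats[record["postcode"]] = (approved + record["approved"], total + 1)

for postcode, (approved, total) in sorted(stats.items()):
    rate = approved / total
    decision = "approve" if rate > 0.5 else "refuse"
    print(f"postcode {postcode}: historical rate {rate:.0%} -> rule says {decision}")

On this synthetic data, applicants from the postcode dominated by the protected group are systematically refused – precisely the kind of unintended and unobserved bias this footnote describes.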

37 See example ‘Lost in translation – a simple error converting legislation into code’ in NSW Ombudsman, Machine Technology Report, 43.
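
The kind of error this example refers to can be vanishingly small in the code itself. As a purely hypothetical Python sketch (the statutory rule and age threshold are invented here, not drawn from the report cited above), a single comparison operator is enough to exclude an entire class of eligible people:

def eligible_as_legislated(age: int) -> bool:
    # Suppose a statute grants an entitlement to persons
    # "aged 18 years or over": that includes a person aged exactly 18.
    return age >= 18

def eligible_as_coded(age: int) -> bool:
    # A strict '>' in the implementation silently excludes
    # everyone aged exactly 18.
    return age > 18

# The two rules agree at every age except the statutory boundary.
assert eligible_as_legislated(18) and not eligible_as_coded(18)

Because the two functions agree everywhere except at the boundary, testing that does not deliberately probe the statutory threshold may never surface the defect.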

38 See Miller, ‘Application of Administrative Law Principles’, 26.

39 See further chapters 11–15 of NSW Ombudsman, Machine Technology Report.

40 See Bernard McCabe, ‘Automated Decision-Making in (Good) Government’ (2020) 100 Australian Institute of Administrative Law Forum 118.

41 As far back as 2004, the Administrative Review Council emphasised the need for lawyers to be actively involved in the design of machine technology for government: Administrative Review Council, Automated Assistance in Administrative Decision Making.

42 See Miller, ‘Application of Administrative Law Principles’, 31.

43 Anna Huggins, ‘Executive Power in the Digital Age: Automation, Statutory Interpretation and Administrative Law’ in J Boughey and L Burton Crawford (eds), Interpreting Executive Power (Alexandria: The Federation Press, 2020) 117; McCabe, ‘Automated Decision-Making’, 118.

44 See the reversal of the onus of proof of the existence of a debt in the initial implementation of the Commonwealth ‘Robodebt’ system: Huggins, ‘Executive Power in the Digital Age’, 125.

45 Pintarich v Federal Commissioner of Taxation [2018] FCAFC 79; (2018) 262 FCR 41. The situation is complicated where legislation purports to deem the output of a machine to be a decision by a relevant human administrator (see chapter 15 in NSW Ombudsman, Machine Technology Report).

46 See for example ‘Annexure A – Revenue NSW case study’ in NSW Ombudsman, Machine Technology Report.

47 Navoto v Minister for Home Affairs [2019] FCAFC 135 at [89].

48 Carrascalao v Minister for Immigration and Border Protection [2017] FCAFC 107; (2017) 252 FCR 352 at [46]; Chetcuti v Minister for Immigration and Border Protection [2019] FCAFC 112 at [65].

49 Minister for Immigration and Border Protection v Maioha [2018] FCAFC 216; (2018) 267 FCR 643 at [45]. In Hands v Minister for Immigration and Border Protection [2018] FCAFC 225 at [3], Allsop CJ described this, in the context of decisions made under the Migration Act 1958 (Cth), as the need for an ‘honest confrontation’ with the human consequences of administrative decision-making.

50 Among other things, these cases looked at the amount of time an administrator had between receiving the relevant material and making their decision. In some cases, that period was shown to have been too short for the administrator even to have read the material before them. The court concluded that there could not have been any ‘active intellectual consideration’ in the exercise of the function, and therefore overturned the decisions on the basis that there had been no valid exercise of discretion. Carrascalao v Minister for Immigration and Border Protection [2017] FCAFC 107; (2017) 252 FCR 352; Chetcuti v Minister for Immigration and Border Protection [2019] FCAFC 112.

51 NSW Crown Solicitor’s Office, Administrative Law Alert: ‘Sign here’: A Word of Warning about Briefs to Ministers Exercising Statutory Power Personally to Make Administrative Decisions (Web Page, April 2022) <www.cso.nsw.gov.au/Pages/cso_resources/cso-alert-ministers-statutory-power-administrative-decisions.aspx> citing McQueen v Minister for Immigration, Citizenship, Migrant Services and Multicultural Affairs (No 3) [2022] FCA 258.

52 See further chapter 13 in NSW Ombudsman, Machine Technology Report for a more comprehensive list of considerations. See also ‘What Does the GDPR Say about Automated Decision-Making and Profiling?’, Information Commissioner’s Office (UK) (Web Page) <https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/automated-decision-making-and-profiling/what-does-the-gdpr-say-about-automated-decision-making-and-profiling/#id2>.

53 Hands v Minister for Immigration and Border Protection [2018] FCAFC 225; (2018) 267 FCR 628 at [3].

54 See further Counsel’s advice at ‘Annexure A – Revenue NSW case study’ in NSW Ombudsman, Machine Technology Report and refer to Michael Guihot and Lyria Bennett Moses, Artificial Intelligence, Robots and the Law (Toronto: LexisNexis, 2020), 160.

55 Guihot and Bennett Moses, Artificial Intelligence, Robots and the Law, 151–59.

56 See, eg, O’Brien v Secretary, Department of Communities and Justice [2022] NSWCATAD 100. In that case, a social housing tenant had applied for information about how government rental subsidies were calculated. The information sought included confidential developer algorithms and source code for an application created for the relevant government department by an external ADM tool provider. The Tribunal held that the information was not held by the department (and was therefore not required to be made available to the applicant).

57 Smorgon v Australia and New Zealand Banking Group Limited [1976] HCA 53; (1976) 134 CLR 475 at 489.

58 There are various examples that demonstrate the need to verify and validate machine technology at the outset and periodically after implementation. See further chapter 14 in NSW Ombudsman, Machine Technology Report.

59 A number of commentators have proposed that ‘algorithmic impact assessment’ processes be undertaken, similar to environmental or privacy impact assessments: see, for example, Michele Loi, Algorithm Watch, Automated Decision Making in the Public Sector: An Impact Assessment Tool for Public Authorities (Report, 2021); Nicol Turner Lee, Paul Resnick, and Genie Barton, Brookings, Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms (Report, 22 May 2019).

60 See Jennifer Raso, ‘AI and Administrative Law’ in Florian Martin-Bariteau and Teresa Scassa (eds), Artificial Intelligence and the Law in Canada (Toronto: LexisNexis Canada, 2021); Joel Townsend, ‘Better Decisions? Robodebt and the Failings of Merits Review’ in Janina Boughey and Katie Miller (eds), The Automated State: Implications, Challenges and Opportunities (Alexandria: The Federation Press, 2021), 52, 56 (discussing the limits of existing merits review systems to address high-volume, technology-assisted decision-making).

61 See, for example, Cobbe et al, ‘Centering the Rule of Law’, 15 (‘Given the limitations of existing laws and oversight mechanisms, … as well as the potential impact on vulnerable members of society, we argue for a comprehensive statutory framework to address public sector automation.’); Bateman, ‘Public Law Dimensions’, 530 (‘Attaining the efficiency gains promised by public sector automation in a way that minimizes legal risk is best achieved by developing a legislative framework that governs the exercise and review of automated statutory powers in a way which protects the substantive values of public law. Other jurisdictions have made steps in that direction, and there is no reason Australia could not follow suit.’); see also Terry Carney, ‘Robo-debt Illegality: The Seven Veils of Failed Guarantees of the Rule of Law?’ (2019) 44(1) Alternative Law Journal 4.

62 Robin Creyke, ‘Administrative Justice – Towards Integrity in Government’ (2007) 31(3) Melbourne University Law Review 705.

63 Cf Simon Chesterman, We, the Robots? Regulating Artificial Intelligence and the Limits of the Law (Cambridge: Cambridge University Press, 2021), 220–22 (suggesting the establishment of ‘an AI Ombudsperson’).

64 Cf Cary Coglianese and David Lehr, ‘Regulating by Robot: Administrative Decision Making in the Machine-Learning Era’ (2017) 105 The Georgetown Law Journal 1190 (suggesting oversight approaches including ‘the establishment of a body of neutral and independent statistical experts to provide oversight and review, or more likely a prior rule making process informed by an expert advisory committee or subjected to a peer review process’).
