
Data Protection Impact Assessments as rule of law governance mechanisms

Published online by Cambridge University Press:  30 March 2020

Swee Leng Harris*
Affiliation:
The Policy Institute, King’s College London, London, United Kingdom; Policy and Public Affairs, The Legal Education Foundation, London, United Kingdom
*Corresponding author. Email: [email protected]

Abstract

Rule of law principles are essential for a fair and just society and apply to government activities regardless of whether those activities are undertaken by a human or automated data processing. This article explores how Data Protection Impact Assessments (DPIAs) could provide a mechanism for improved rule of law governance of data processing systems developed and used by government for public purposes in civil and administrative areas. Applying rule of law principles to two case studies provides a sketch of the issues and concerns that this article’s proposals for DPIAs seek to address. The article undertakes comparative analysis to find relevant principles and concepts for governance of data processing systems, looking at human rights impact assessments, administrative law, and process rights in environmental law. Drawing on this comparative analysis to identify specific recommendations for DPIAs, the article offers guidance on how DPIAs could be used to strengthen the governance of data processing by government in rule of law terms.

Type
Translational Article
Creative Commons
Published by Cambridge University Press in association with Data for Policy. This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s) 2020

Policy Significance Statement

Data Protection Impact Assessments (DPIAs) are a relatively new development in the governance of data processing. The recommendations in this article are a tool for policy-makers and civil servants developing data processing systems for government and undertaking DPIAs, to assist government to meet rule of law standards when implementing new data processing systems. The article also adds to analysis of the governance of data processing systems for all stakeholders, including civil society.

Overview

The focus of this article is on data processing by government in the United Kingdom and rule of law principles, because the law defines what government can do: the exercise of public power by government must be within the scope defined by law. Whereas for the private sector it could be said that law defines what private actors may not do, for the public sector there is no authority or lawful ability to act without a basis in law. Government use of data processing when exercising public power can give rise to concerns in rule of law terms, and risks failing to comply with existing principles and standards for public power. The aim of this article is not to argue against the use of automated data processing systems by government, but to highlight rule of law principles and standards that such systems should meet. This article explores the potential to use Data Protection Impact Assessments (DPIAs) as a mechanism through which to put into operation abstract rule of law principles such as equality before the law, to help ensure that government data processing systems meet the rule of law principles and standards that apply to all government decision-making and activities.

The EU General Data Protection Regulation (GDPR) introduced a new requirement in Article 35 for DPIAs to be conducted when data processing is likely to result in a “high risk to the rights and freedoms of natural persons”. The UK Information Commissioner’s Office (ICO), the regulator responsible for information rights, explains that DPIAs are “a process designed to help you systematically analyse, identify and minimise the data protection risks of a project or plan” and are “a key part of … accountability obligations under the GDPR.”Footnote 1 The ICO previously encouraged the use of Privacy Impact Assessments as a means of ensuring compliance with the Data Protection Act 1998, but those assessments were limited in scope to privacy. By contrast, DPIAs have a broader scope, encompassing all rights and freedoms.

The analysis in this article is framed with reference to rule of law principles. That public power must be exercised in accordance with the law is one of the rule of law principles identified by the former Lord Chief Justice Lord Bingham (Bingham, 2010). There are different views on the scope and content of the rule of law, but for the purposes of the UK context, Lord Bingham’s approach has significant authority. The eight rule of law principles identified by Lord Bingham can be summarized as:

  1. Accessibility: The law must be accessible and so far as possible intelligible, clear, and predictable.

  2. Law not discretion: Questions of legal right and liability should ordinarily be resolved by the application of law and not the exercise of discretion.

  3. Equality before the law: The laws of the land should apply equally to all, save to the extent that objective differences justify differentiation.

  4. The exercise of power: Ministers and public officers at all levels must exercise the powers conferred on them in good faith, fairly, for the purpose for which the powers were conferred, without exceeding the limits of such powers and not unreasonably.

  5. Human rights: The law must afford adequate protection of fundamental human rights.

  6. Dispute resolution: Means must be provided for resolving, without prohibitive cost or inordinate delay, bona fide civil disputes which the parties themselves are unable to resolve.

  7. A fair trial: Adjudicative procedures provided by the state should be fair.

  8. The rule of law in the international legal order: The rule of law requires compliance by the state with its obligations in international law as in national law. (Harris and McNamara, 2016)

These principles are essential for a fair and just society, and apply to government activities regardless of whether those activities are undertaken by a human or automated data processing.

The rule of law analysis in this article has grown out of the author’s work at The Legal Education Foundation and the Bingham Centre for the Rule of Law.Footnote 2 The Legal Education Foundation aims to help people understand and use the law, and the implementation of these rule of law principles is a necessary precondition to that mission. When the law is implemented by government using data processing systems, there is a risk of reduced transparency and accountability in the operation of the law. For people to understand and use the law, government systems implementing the law must be transparent so that people can understand and navigate those government processes. Organizations to which The Foundation has made grants are increasingly addressing preventable legal problems caused by government data processing systems producing unlawful or incorrect decisions. There is thus a need to look for systematic ways in which these data processing systems could be improved in rule of law terms.

The focus of this article is law and governance of data in the United Kingdom in relation to government use of data processing in the exercise of public power. This article sets out two case studies of automated data processing used by government in the exercise of public power, and the rule of law concerns that arise in relation to those systems. Applying rule of law principles to these case studies provides a sketch of the issues and concerns that this article’s proposals for DPIAs seek to address. The following section of the article outlines the law on DPIAs in the GDPR, and provides international context in terms of proposals for impact assessments of data processing in other jurisdictions. Although DPIAs are also required under EU Directive 2016/680 concerning protection of personal data in the context of criminal law enforcement, the scope of this article is limited to civil and administrative law and does not include DPIAs in the criminal law context.

The article then explores the possibility of using DPIAs to improve the compliance of data processing systems used by government with rule of law principles. These sections offer some guidance on how DPIAs could be used to strengthen the governance of data processing by government in rule of law terms, drawing on comparative analysis of other governance schemes to identify specific recommendations for DPIAs. The main areas analyzed to develop the governance recommendations for DPIAs are human rights, administrative law, and environmental law. Broadly speaking, the different aspects of governance are categorized in terms of process versus substance, although that is in some ways a false distinction: failure to undertake process steps such as public participation will affect substance, and failure to ensure that the substance informs the process renders the process meaningless. Finally, the article notes the shortcomings of current governance and standards for data processing by government in achieving the principle of transparency in practice.

In light of this legal analysis and comparisons to other governance schemes, this article identifies a number of recommendations for good practice in the substance and process of DPIAs by government in the United Kingdom. The article generally refers to “government” without distinguishing among UK central government, devolved government, or local government—the conclusions and recommendations are relevant to all of these levels of government.

Case Study 1: Housing Benefit and Council Tax Benefit Applications

Housing Benefit and Council Tax Benefit are welfare benefits that are administered by local authorities rather than by the central government department responsible for welfare, the UK’s Department for Work and Pensions (DWP). Since 2012, the DWP has allowed local authorities to voluntarily adopt so-called “Risk-Based Verification” systems as part of the application processes for these benefits. As explained in a DWP Local Authority Insight Survey:

Risk Based Verification (RBV) assigns a risk rating to each Housing Benefit (HB)/Council Tax Benefit (CTB) claim which determines the level of verification required. It allows more intense verification activity to be targeted at those claims which are deemed to be at highest risk of involving fraud and/or error. (DWP, 2013)

The pieces of legislation that establish and govern HB are the Social Security Contributions and Benefits Act 1992, Housing Benefit Regulations 2006, and Housing Benefit (persons who have attained the qualifying age for state pension credit) Regulations 2006. CTB was replaced by Council Tax Reduction in 2013, but for consistency, this article refers to CTB because the DWP policy documents refer to CTB. The Welfare Reform Act 2012 repealed CTB, and the Local Government Finance Act 2012 provides for Council Tax Reduction, which is further governed in England by the Council Tax Reduction Schemes (Prescribed Requirements) (England) Regulations 2012 and Council Tax Reduction Schemes (Prescribed Requirements and Default Scheme) (England) Regulations 2012.

In the circular setting out guidance on RBV of HB/CTB claims, the DWP gives the following examples of the kinds of risk ratings or categories that RBV assigns:

  1. Low-risk claims: Only essential checks are made, such as proof of identity. Consequently, these claims are processed much faster than before and with significantly reduced effort from Benefit Officers without increasing the risk of fraud or error.

  2. Medium-risk claims: These are verified in the same way as all claims currently, with evidence of original documents required. As now, current arrangements may differ from [Local Authority (LA)] to LA and it is up to LAs to ensure that they are minimizing the risk to fraud and error through the approach taken.

  3. High-risk claims: Enhanced stringency is applied to verification. Individual LAs apply a variety of checking methods depending on local circumstances. This could include Credit Reference Agency checks, visits, increased documentation requirements, and so forth. Resource that has been freed up from the streamlined approach to low-risk claims can be focused on these high-risk claims. (DWP, 2011)

By way of illustration of how RBV works in practice, a table in the Chichester District Council’s Risk-Based Verification Policy 2017 sets out the evidence requirements based on the category of risk assigned to an applicant (Chichester District Council, 2017). [Table extract not reproduced here.]

The DWP also explains (citations omitted):

Some [Information Technology (IT)] tools use a propensity model which assesses against a number of components based on millions of claim assessments to classify the claim into one of the three categories above. Any IT system must also ensure that the risk profiles include “blind cases” where a sample of low or medium risk cases are allocated to a higher risk group, thus requiring heightened verification. This is done in order to test and refine the software assumptions.

Once the category is identified, individual claims cannot be downgraded by the benefit processor to a lower risk group. They can however, exceptionally, be upgraded if the processor has reasons to think this is appropriate. (DWP, 2011)

There appears to be no public information on which data points these RBV systems use to make their assessments of risk.
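To make the governance questions concrete, the following minimal sketch in Python illustrates the kind of three-tier triage described in the DWP guidance, including “blind case” sampling and the rule that benefit processors may upgrade but never downgrade a claim. It is purely hypothetical: the vendors’ propensity models are proprietary, so the scoring inputs, thresholds, and sampling rate below are invented assumptions rather than any actual system’s parameters.

```python
import random

# Hypothetical sketch of RBV triage as described in the DWP guidance.
# The thresholds and blind-case rate are invented assumptions; the real
# vendor systems' propensity models and parameters are not public.

BLIND_CASE_RATE = 0.02  # assumed sampling rate; the DWP circular gives no figure

def classify(propensity_score: float) -> str:
    """Map a hypothetical propensity score to the three DWP risk categories."""
    if propensity_score < 0.3:
        return "low"
    if propensity_score < 0.7:
        return "medium"
    return "high"

def assign_risk(propensity_score: float) -> str:
    """Assign a category, escalating a random sample as 'blind cases'."""
    category = classify(propensity_score)
    # Blind cases: a sample of low/medium risk claims is allocated to a
    # higher risk group to test and refine the model's assumptions.
    if category != "high" and random.random() < BLIND_CASE_RATE:
        return "high"
    return category

def apply_override(assigned: str, requested: str) -> str:
    """Processors may exceptionally upgrade a claim's category, never downgrade."""
    order = {"low": 0, "medium": 1, "high": 2}
    return requested if order[requested] > order[assigned] else assigned
```

A DPIA for such a system would need to surface exactly the elements this sketch has to invent: which data points feed the score, how they are weighted, and where the category thresholds sit.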

Furthermore, the DWP guidance on governance leaves rule of law questions unresolved. Local authorities are required to produce an RBV Policy and a baseline against which to assess the impact of an RBV, and to undertake monthly monitoring of RBV performance. However, rule of law questions remain: for example, there are no criteria for monitoring impact in relation to protected characteristics under equality law. Nor are there requirements for transparency in the governance arrangements; in fact, the DWP advises that the RBV Policy should not be made public “due to the sensitivity of its contents.” The underlying concern is that information on the system would allow applicants to game the system because they would know the risk profiles (DWP, 2011). However, it is not clear how the proper exercise of public power can be verified when people are told neither that they have been subject to an RBV system nor the basis for its assessment of them, which leads to different process thresholds for applicants. Furthermore, there would not be the same individual fraud risk in relation to the disclosure of aggregate baseline information, nor the aggregate findings of performance monitoring.

Case Study 2: Settled Status Application Process for EEA Nationals

The settled status scheme has been established by the UK Home Office in the context of Brexit to regularize the immigration status of European Economic Area (EEA) nationals and their families living in the United Kingdom. Difficulties with the identity verification aspect of the application process have been relatively high profile because it relied on a mobile phone app that did not work on Apple devices for the first 6 months of the scheme. The automated checks of welfare and tax data to verify residence have received less attention, but are a significant automated part of the application process, albeit the decision-making process is partially, not fully, automated.

Part of the settled status scheme involves sharing data between government departments and algorithmic assessment of those data. Under the UK’s Immigration Rules Appendix EU, EEA nationals who have lived in the United Kingdom for at least 5 years are entitled to “settled status.” Those who have lived in the United Kingdom for less than 5 years are entitled to “pre-settled status,” and will need to apply for settled status when they reach the 5-year threshold. The Home Office application process uses automated data processing to analyze data from the DWP and from Her Majesty’s Revenue and Customs (HMRC), the UK central government department responsible for taxation, to verify how long applicants have been in the United Kingdom. Where the application process finds a “partial match,” the applicant is granted pre-settled status unless they challenge that decision.
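The Home Office has not published the “business logic” of these checks, but the structure described above (full match, partial match, no match) can be sketched as follows. This is a hypothetical reconstruction for illustration only: the 60-month window, the month-counting approach, and the labels are assumptions, not the actual design.

```python
# Hypothetical sketch of the automated residence checks; the window,
# thresholds, and labels are assumptions, not the Home Office's design.

REQUIRED_MONTHS = 60  # 5 years of continuous residence

def residence_check(matched_months: int) -> str:
    """Classify an applicant by months of DWP/HMRC footprint in the window."""
    if matched_months >= REQUIRED_MONTHS:
        return "pass"     # consistent with settled status (5+ years)
    if matched_months > 0:
        return "partial"  # pre-settled status offered unless the applicant
                          # uploads further evidence of residence
    return "fail"         # no match: documentary evidence required

# An applicant with 6 years' residence whose only welfare record is Child
# Benefit (excluded from the checks) may show few matched months:
print(residence_check(30))  # -> "partial"
```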

The most recent statistics on the EU Settlement Scheme (2019) published by the Home Office on November 14, 2019 stated that 2,450,500 applications had been received up to the end of October, of which 1,925,300 had been concluded. Of the applications that had been concluded, 40% were granted pre-settled status (approximately 770,120) and 60% were granted settled status (approximately 1,155,180) (Home Office, 2019g).

According to evidence given by then Immigration Minister Caroline Nokes on July 16, 2019, by May 31, 2019, when 788,200 applications had been received (Home Office, 2019d), 253 administrative review applications challenging grants of pre-settled rather than settled status had been received and processed since the review mechanism was established in November 2018. Administrative review is carried out by the Home Office when an applicant challenges the initial decision. Of the 253 administrative reviews challenging a grant of pre-settled status:

  1. 22 of these grants of pre-settled status were upheld following the administrative review; and

  2. 231 were overturned and the applicants were instead granted settled status following administrative review. “In those cases, the applicant had generally accepted a grant of pre-settled status when making their application and then given additional evidence of their eligibility for settled status alongside their application for administrative review.”Footnote 3

It is notable that around 91% (231 of 253) of the administrative reviews were successful and the initial decision of the Home Office was overturned. It is possible that there could be many other applicants who were granted pre-settled status unlawfully when they were entitled to settled status, but who did not apply for administrative review.

As outlined above, 40% of the decisions made in the scheme so far have granted pre-settled status, but there were no measures in the system as initially designed to check whether those applicants had been granted the correct status. It should now be possible to monitor this approximately across the scheme, because the government has introduced a question, absent from the scheme’s original design, on whether applicants have been in the United Kingdom for less or more than 5 years.Footnote 4 This change to the design of the application system is an improvement for monitoring purposes, but given the figures from administrative reviews there remains a concern as to how the Home Office will avoid and mitigate the risk of unlawful initial decisions that wrongly grant pre-settled status rather than settled status. Furthermore, the monthly monitoring statistics do not include data from this new question, which undermines the ability of external stakeholders to monitor the scheme.

There was a proposal for manual checks to mitigate the risk of errors in the automated checks resulting in applicants being wrongly granted pre-settled status instead of settled status, to which the Minister responded:

Informing an applicant of why data has not matched is likely to increase the risk of fraud and identity abuse. The new clause would change the focus of the scheme from granting status to investigating the data quality of employers or of the DWP and HMRC. We consider that a distraction that would cause unnecessary delays for applicants… In most cases, it would be far simpler and more straightforward for applicants to submit other evidence to prove residence, rather than seeking to resolve why data has not matched. Of course, the applicant can take up that issue with HMRC or the DWP if they wish.Footnote 5

Many problems with the automated checks have been reported by applicants, including employed and self-employed applicants, and there are particular concerns for applicants who are low-income or vulnerable (Dunt, 2019). Coram Children’s Legal Centre, an NGO with specialist expertise in children’s rights and immigration, is concerned that:

a number of vulnerable groups will be negatively impacted by the current functioning of the automated data checks, used to verify length of residence:

  1. A number of benefits are not included in the automated data checks, including Child Benefit (which can only be paid to one person—the person considered to have the main responsibility for caring for a child) and Child Tax Credit. This disproportionately impacts on women who are more likely to be receiving these benefits.

  2. Disabled people and their carers who rely on welfare benefits will need to provide additional proof of residence. This places an additional burden on these groups who may struggle to provide relevant documentation.

  3. Currently, Universal Credit can only be used as proof of residence for the main recipient. This impacts on women who are less likely to be in receipt of it, and particularly those who are in abusive or controlling relationships. (Coram Children’s Legal Centre, 2019)

Although the Home Office has consulted with user groups, there has been a lack of transparency and information on the data processing used in the settled status scheme. The memoranda of understanding for data sharing between the Home Office and DWP, and between the Home Office and HMRC, were published at the end of March 2019 in response to a proposed amendment from Stuart McDonald MP.Footnote 6 DPIA(s) for the settled status application process have not been published, nor has an Equality Impact Assessment.Footnote 7

Rule of Law Analysis of These Case Studies

The case studies illustrate some of the rule of law concerns that arise when government uses data processing systems for decision-making. The following rule of law analysis applies the rule of law principles articulated by Lord Bingham (set out in the opening of this article) to the case studies.

It is difficult to be confident that public officers have exercised their powers in accordance with the law when their decision-making is informed by RBV or automated checks of data for residency. This difficulty arises because there is no publicly available analysis of how the RBV systems implement the law, nor any clear non-technical explanation of how the systems operate. The results of automated checks in the settled status scheme do not inform applicants which categories of data were used in the checks (e.g., which DWP benefits were taken into account in assessing residence). Furthermore, the administrative review results discussed above demonstrate that many applicants entitled to settled status accepted the pre-settled status that the automated checks had wrongly assigned them. This suggests that the application process did not provide those applicants with sufficient information to make an informed and correct decision on whether to accept that status. The lack of information on these data processing systems and how they work thus reduces the accessibility of the law: the law is made less accessible by the use of opaque systems to implement it.

Furthermore, the design and operation of data processing systems could produce systematic indirect discrimination. The basis on which RBV sorts claims into “risk categories” is unknown and could be indirectly discriminatory if it relies on data points that act as proxies for characteristics such as race. Similarly, the design of the automated checks in the settled status scheme means that vulnerable groups, such as children and recipients of Child Tax Credit (predominantly women), will have a more difficult process to navigate to secure their immigration status. Can such unequal processes be said to be consistent with the principle of equality before the law?

The consequences of the use of the systems in the case studies produce further rule of law concerns in terms of impact on human rights and access to justice. There are many legal instruments that give rise to the UK’s human rights obligations under international law, including the European Convention on Human Rights (ECHR), International Covenant on Civil and Political Rights (ICCPR), and International Covenant on Economic, Social and Cultural Rights (ICESCR). Where people are wrongly granted pre-settled status, there is a risk that they will be denied settled status at the end of their pre-settled status period, which could affect many of their rights, such as the right to family life (Article 10 ICESCR; Articles 17 and 23 ICCPR; Article 8 ECHR) and the right to work (Article 6 ICESCR). Similarly, the delay or denial of people’s applications for HB or CTB because of the additional checks required when RBV classifies them as “high risk” could affect their right to an adequate standard of living (Article 11 ICESCR). Wrongly denying people such rights could be in breach of the UK’s international human rights obligations. Moreover, the opacity of the systems results in difficulties for people in understanding the decisions made about them, which in turn could affect their right of access to justice (Zalnieriute et al., 2019). It will be more difficult for people to challenge an unlawful decision if they are not informed of the basis for that decision.

Given the rule of law concerns illustrated by these case studies, there is a need for laws and processes that could be used to improve the fidelity of government data processing systems to rule of law principles.

When considering the legal governance of automated decision-making, the limits of the provisions on automated decision-making in GDPR Article 22 should be noted (Cobbe, 2018). Article 22 provides that “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” Both of the case studies probably fall outside of the scope of Article 22, because the RBV for HB or CTB claims would not have legal effects, and the automated checks for settled status applications are followed by an option to upload documents, with human decision-makers involved in the process. In order for human involvement in decision-making to be meaningful, the automated data processing must be interpretable by the human decision-maker, as discussed further below (Binns and Gallo, 2019).

DPIAs provide a mechanism to govern data processing used in government decision-making regardless of whether such data processing falls within the scope of Article 22. Article 22 provides only a partial and limited legal framework to regulate automated decision-making that is ill-suited to address the kinds of partially automated decision-making processes discussed in the case studies. The rule of law questions and concerns raised in relation to the data processing in the case studies are not answered by finding that they fall outside of the Article 22 prohibition on automated decision-making. Government use of automated data processing can be lawful in relation to Article 22, and yet fail to comply with the rule of law principles articulated by Lord Bingham.

Data Protection Impact Assessments

Rather than proposing a new set of governance mechanisms to address the rule of law concerns identified above, the following sections focus on DPIAs because they are the current governance process for many data processing systems, including partially and fully automated government decision-making systems.

Article 35 of the GDPR provides:

1. Where a type of processing in particular using new technologies, and taking into account the nature, scope, context and purposes of the processing, is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall, prior to the processing, carry out an assessment of the impact of the envisaged processing operations on the protection of personal data. A single assessment may address a set of similar processing operations that present similar high risks.

7. The assessment shall contain at least:

  (a) a systematic description of the envisaged processing operations and the purposes of the processing, including, where applicable, the legitimate interest pursued by the controller;

  (b) an assessment of the necessity and proportionality of the processing operations in relation to the purposes;

  (c) an assessment of the risks to the rights and freedoms of data subjects referred to in Paragraph 1; and

  (d) the measures envisaged to address the risks, including safeguards, security measures and mechanisms to ensure the protection of personal data and to demonstrate compliance with this Regulation taking into account the rights and legitimate interests of data subjects and other persons concerned.

The scope of impact to be considered in DPIAs under Article 35 encompasses all rights and freedoms, and is not limited to privacy. The ICO’s Code of Practice for Privacy Impact Assessments focused on identifying and managing privacy risks in compliance with the Data Protection Act 1998, but not other human rights (ICO, 2014). As Janssen has argued, the references to “rights and freedoms” in Article 35’s provisions for DPIAs should be read in the context of EU law as referring to the rights set out in the European Union Charter of Fundamental Rights (Janssen, 2018). The Charter includes freedom of expression (Article 11), right to education (Article 14), right to engage in work (Article 15), equality before the law and nondiscrimination (Articles 20 and 21), social security (Article 34), and health care (Article 35). This understanding of the scope of Article 35’s reference to “rights and freedoms” is consistent with the views of the Article 29 Data Protection Working Party, an independent body on data protection for the EU (Article 29 Working Party, 2017).Footnote 8

There are no GDPR obligations for transparency or public engagement and consultation on DPIAs as such, although Article 35(9) provides that the views of data subjects on the intended processing be sought where appropriate. In the United Kingdom, the ICO has responsibility for monitoring the application of the GDPR as the relevant national independent public authority required under the GDPR. The ICO’s guidance on DPIAs states only that you “should” seek and document the views of individuals, and does not make clear that there is a legal obligation under Article 35(9) to seek the views of data subjects.Footnote 9 The enforceability of consultation under Article 35(9) is hampered by the provision’s being restricted to data subjects, which relies on the data controller identifying affected subjects and seeking their views. Without accompanying rights and obligations of public transparency, it is difficult for stakeholders to identify that they are data subjects whose views should have been sought. There are no legal requirements for public disclosure of DPIAs. There is an obligation of prior consultation with the ICO, before processing begins, where a DPIA indicates that the proposed data processing “would result in a high risk.”

Although the focus of this article is law and governance of data in the United Kingdom, this article’s consideration of DPIAs is relevant across EU member states by reason of the GDPR, and there are proposals for law and policy to provide for impact assessments of data processing in other jurisdictions. Furthermore, of relevance for all Council of Europe member states (which include all EU member states), the Council’s Commissioner for Human Rights has issued a 10-point Recommendation on AI and human rights. The first point from the Commissioner recommends that “Member states should establish a legal framework that sets out a procedure for public authorities to carry out human rights impact assessments (HRIAs) on AI systems acquired, developed and/or deployed by those authorities” (Council of Europe Commissioner for Human Rights, 2019).

In the United States, the proposed Algorithmic Accountability Bill would require “entities that use, store, or share personal information to conduct automated decision system impact assessments and data protection impact assessments.” The Bill is sponsored by Democratic Senators Cory Booker and Ron Wyden, with an equivalent in the House of Representatives sponsored by Democratic Representative Yvette Clarke (Robertson, 2019). Putting to one side the political question of whether the Bill might be passed into law, it illustrates the currency of the idea of impact assessments in jurisdictions outside of the EU. In Australia, the federal government’s Department of Industry, Innovation and Science recently consulted on Artificial Intelligence: Australia’s Ethics Framework (n.d.), which includes impact assessments as part of a “toolkit for ethical AI” (Dawson et al., 2019).

Of direct relevance to this article’s discussion of data processing by government, the Canadian government now has a Directive on Automated Decision-Making (2019), which took effect on April 1, 2019. The Directive applies to the federal Canadian government’s use of automated decision-making—technology “used to recommend or make an administrative decision” about an individual or business, excluding National Security Systems (per Section 5). The Directive requires that an algorithmic impact assessment be produced prior to the production of an automated decision system. Rather than applying a one-size-fits-all approach to automated decisions, the Directive sets out a risk-based framework whereby the proposed automated decision system is classified at one of four levels depending on the impact of the decision on environmental sustainability or the rights, health, or economic interests of individuals or communities. The Directive’s framework then sets out a sliding scale of requirements for impact assessment depending on the level at which the impact has been classified (per Section 6.1 and Appendices B and C).

For the purposes of discussion of DPIAs in the UK context, it is notable that a key objective of the Directive is that decisions made by federal government departments comply with procedural fairness and due process (per Section 4.2.1), which in UK law are crystallized in administrative law. The Directive does not provide for public participation or consultation on impact assessments prior to production of an automated decision system. However, it does require that results from Algorithmic Impact Assessments be released in a publicly accessible format, and sets transparency requirements for use of systems after production (per Section 6.2 and Appendices B and C).

What Does a DPIA Process Include in the United Kingdom?

The ICO identifies the following key elements of a DPIA process:

  1. Step 1: identify the need for a DPIA.

  2. Step 2: describe the processing.

  3. Step 3: consider consultation.

  4. Step 4: assess necessity and proportionality.

  5. Step 5: identify and assess risks.

  6. Step 6: identify measures to mitigate the risks.

  7. Step 7: sign off and record outcomes.Footnote 10

This process as described by the ICO means that DPIAs should explain the function, purpose, and anticipated consequences of proposed data processing systems for the rights and interests of individuals. Under Step 2, the description of the processing must include the nature, scope, context, and purposes of the processing. Under Step 3, the ICO recommends consulting with individuals, unless there is good reason not to. Under Step 5, identifying and assessing risks, the ICO advises that you “look at whether the processing could contribute to:

  1. inability to exercise rights (including but not limited to privacy rights);

  2. inability to access services or opportunities;

  3. loss of control over the use of personal data;

  4. discrimination;

  5. identity theft or fraud;

  6. financial loss;

  7. reputational damage;

  8. physical harm;

  9. loss of confidentiality;

  10. reidentification of pseudonymized data; or

  11. any other significant economic or social disadvantage”Footnote 11

Another relevant aspect of the regulatory landscape in Great Britain is the public sector equality duty. Under Section 149 of the Equality Act 2010, all public authorities must, in the exercise of their functions, have due regard to the need to eliminate discrimination, harassment, and victimization in relation to protected characteristics such as age, sex, pregnancy and maternity, and race. To assist compliance with equality duties, public authorities often carry out an Equality Impact Assessment for proposed policies to look at whether the policy would have a disproportionate impact on persons with protected characteristics.

Given that DPIAs should look at the potential for discrimination, there would be value in government integrating DPIAs with measures to comply with the public sector equality duty. Thus, proper equality analysis of the potential for direct and indirect discrimination could be informed by technical information on the data processing, and assessment of the impact of the data processing on rights and freedoms could be informed by expertise in equality.

What Might a “Good” DPIA Look Like in Substance?

This section looks primarily at human rights and administrative law for guidance on what the substance of DPIAs should include.

Janssen suggests a framework of three stages in the operation of data processing systems across which DPIAs should consider human rights risks when designing a data processing system: the data capture stage, the analytics stage, and the distribution stage. This framework highlights the different risks that the different stages of the operation of data processing systems can raise (Janssen, 2018). For the data capture stage, attention should be directed to the data used in the process: what is the quality of those data, given the likelihood of decision-making errors resulting from errors in data, and what categories of data will be used, since special categories of data such as race or sexual orientation have higher levels of protection under the GDPR. At the analytics stage, risk to human rights arises from the design of the automated decision-making system’s weightings and the inferences drawn from analytics. Finally, the distribution stage creates risks of data being shared with other entities or for purposes that are not authorized.

If this suggested framework is applied to the case study examples, it helps to structure the risks relating to each system.

  1. For HB/CTB RBV systems:

    (a) At the data capture stage, there are questions as to the source of the “statistical information and risk propensity data gathered over many years” (Central Bedfordshire Council, 2018) used to profile claims;

    (b) At the analytics stage, how is analysis performed on claims, for example, what weightings are used and is machine learning analysis applied; and

    (c) At the distribution stage, how are risk scores used once a claim has been processed—for example, is the risk score kept on the claimant’s file such that it can be seen by local authority staff at a later time?

  2. For the settled status scheme:

    (a) At the data capture stage, there is a risk of errors in DWP or HMRC data;

    (b) At the analytics stage, the focus is on the “business logic” applied to those data to determine whether there is a pass, partial pass, or fail; and

    (c) At the distribution stage, the question is for what purposes information produced through the scheme may be accessed in the future. The memoranda of understanding on data sharing for the scheme state that DWP, HMRC, and Home Office personnel with appropriate security clearance can access information under the memorandum where there is a “genuine business need.”Footnote 12 In answer to a parliamentary question on the HMRC and Home Office data sharing, the Minister has said that “A genuine business need means only staff at the Home Office and Her Majesty’s Revenue and Customs who require access to the data to carry out their duties will be granted access.”Footnote 13

To assess the risk to rights and freedoms potentially affected by data processing, DPIAs need to look at all human rights, not just privacy. For example, data processing in the settled status scheme could affect applicants’ rights to family life, to housing, to work, and to access healthcare because of the immigration law restrictions on the right to work, the right to rent, and access to health services in the context of UK immigration policy. The Human Rights, Big Data, and Technology Project (HRBDT) has proposed a human rights based approach to the design, development, and implementation of big data and AI projects. HRBDT’s report shows that these technologies can affect all of the rights set out in the Universal Declaration of Human Rights—not only equality and privacy, but others in areas such as education, healthcare, social care of the elderly, and law enforcement. HRBDT’s proposed human rights based approach would include undertaking full human rights impact assessments against all human rights (McGregor et al., 2018). The systems described in the case studies potentially engage a wide range of rights, including the right to work, the right to housing, the right to an adequate standard of living, and freedom from discrimination.

There is existing analysis and guidance for human rights impact assessments that has been produced by the business and human rights sector, which has relevance to DPIAs regardless of whether they are conducted by a public or private entity.Footnote 14 The Human Rights Impact Assessment Guidance and Toolbox by The Danish Institute for Human Rights (DIHR) is a particularly extensive guide (Götzmann et al., 2016). The Guidance recommends scoping an impact assessment by considering the type of project, the human rights context, and who the relevant stakeholders are, then collecting data to establish a baseline to better understand the human rights identified in the scoping. Where government introduces automated processes to replace existing human decision-making, the baseline could be based on data from the human decision-making process and its outcomes. Per the DIHR Guidance, indicators for which baseline data have been collected could then be measured to assess change and impact from automated data processing. The impact of a system on all human rights could thus be considered and analyzed, including assessing the severity of the impact in terms of the human rights consequences.

The DPIA process is an opportunity not only to identify risks and potential impact from data processing but also to include safeguards to manage and minimize the risks as part of the development of the data processing system. The DIHR Guidance provides for planning effective management of the impact as part of the human rights impact assessment process, including what steps will be taken to avoid and minimize impacts. Partially automated decision-making systems are likely to rely on the involvement of human decision-makers as a key safeguard. This means that the information, training, policies, and guidance given to human decision-makers will be important for the safeguarding of human rights in such data processing systems.

In contrast with the DIHR’s approach of assessing impact on human rights by collecting data and measuring human rights indicators, current monitoring of the settled status scheme as reported does not have regard to some potential human rights impacts. For example, there are data on application results disaggregated by pre-settled versus settled status, and application data disaggregated by nationality and by age group, but there are no disaggregated data relating to the sex of applicants (Home Office, 2019f). From a rule of law perspective, in relation to the proper implementation of the law through the application process, the important question for the 40% of applicants granted pre-settled status is whether this was the correct status for them or whether they were legally entitled to settled status. Including in the Home Office’s monthly Official Statistics reports the data on whether applicants report having been continuously resident for less or more than 5 years—which is not presently included in published reports—would help to monitor the legal accuracy of the application process if those data were disaggregated by the kind of status (if any) applicants were granted.
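A minimal sketch of what such monitoring could look like, assuming access to per-application records, is set out below. The field names and records are invented for illustration; no such disaggregated data are currently published.

```python
from collections import Counter

# Invented per-application records: self-reported residence versus the
# status granted. No such disaggregated data are currently published.
applications = [
    {"reports_5_plus_years": True, "status_granted": "pre-settled"},
    {"reports_5_plus_years": True, "status_granted": "settled"},
    {"reports_5_plus_years": False, "status_granted": "pre-settled"},
    # ... one record per concluded application
]

# Cross-tabulate the self-reported answer against the status granted.
table = Counter(
    (app["reports_5_plus_years"], app["status_granted"]) for app in applications
)

# Applicants reporting 5+ years' residence who were nonetheless granted
# pre-settled status are the cases where the initial decision may have
# been wrong in law, and so merit scrutiny in monitoring reports.
flagged = table[(True, "pre-settled")]
print(f"Reported 5+ years but granted pre-settled status: {flagged}")
```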

Specific to the area of government use of automated decision-making systems, New York University’s AI Now Institute has proposed mandatory Algorithmic Impact Assessments (AIAs). The first step in an AIA would be to define the scope of the automated decision system, then provide public notice of existing and proposed systems with information on the “purpose, reach, internal use policies, and potential impacts” (Reisman et al., 2018) of the systems and have a public consultation process. Public participation through consultation is discussed further below in relation to environmental governance. For a public agency’s self-assessment, AI Now notes the importance of agencies being “experts on their own automated decision systems… to ensure public trust” and emphasizes that AIAs are an opportunity for agencies to build internal capacity (Reisman et al., 2018). The DIHR Guidance also emphasizes the importance of the right expertise and capacity to properly carry out a human rights impact assessment. Finally, AI Now recommends allowing researchers and auditors meaningful access to automated decision systems to review the systems once they are deployed.

In line with the DIHR Guidance and AI Now’s recommendations, DPIAs will operate best if they are complemented by follow-up measurement of indicators and monitoring, including by external researchers and auditors.

DPIAs should include analysis based on testing of data processing systems. AI Now argues for agencies to work with vendors and researchers to conduct testing and research on automated decision-making systems. Janssen has recommended that automated decision-making systems be run in trials, giving them a dry run “in a controlled setting, prior to release to the public at large” in order to identify and address risks to human rights before a system goes live (Janssen, 2018). Unlike proposed projects for which an environmental impact assessment is produced, it is possible to test data processing. Where a system already exists, for example, the Xantura, Callcredit, and Capita RBV systems (Data Justice Lab, 2018), it should be possible to test those systems for a DPIA to assess whether they profile risk in a discriminatory fashion. Unlike this proposal for dry-run testing of data processing systems, the settled status scheme was tested on individuals in pilot phases, rather than in a sandbox or with synthetic data, and the cohorts of applicants that participated in the pilot stages may not have been representative of the population.Footnote 15
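A minimal sketch of such a dry-run test is set out below, assuming the RBV system can be invoked as a function on synthetic claims. The rbv_assign_risk interface is hypothetical, and a real audit would cover the full range of protected characteristics and apply statistical significance testing rather than a raw ratio.

```python
# Hypothetical dry-run disparity test; rbv_assign_risk stands in for a
# callable interface to the system under test, which may not exist.

def high_risk_rate(claims, rbv_assign_risk) -> float:
    """Proportion of a cohort of synthetic claims classified as high risk."""
    results = [rbv_assign_risk(claim) for claim in claims]
    return results.count("high") / len(results)

def disparity_ratio(group_a, group_b, rbv_assign_risk) -> float:
    """Ratio of high-risk rates between two synthetic cohorts that differ
    only in a protected characteristic or a proxy for one (e.g., postcode)."""
    rate_a = high_risk_rate(group_a, rbv_assign_risk)
    rate_b = high_risk_rate(group_b, rbv_assign_risk)
    if rate_b == 0:
        raise ValueError("comparison group has no high-risk classifications")
    return rate_a / rate_b

# A ratio far from 1.0 would suggest that the system profiles risk in a
# way that disproportionately burdens one group, indicating possible
# indirect discrimination to be examined in the DPIA.
```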

Finally, assessment in DPIAs against administrative law principles will encourage compliance with rule of law principles, since administrative law defines what constitutes the lawful and proper exercise of government power. Administrative law is part of the common law of England and Wales that has been developed by the courts through case law. It contains a number of principles that public authorities must meet when they exercise public power. For example, in Padfield v Minister of Agriculture, Fisheries and Food [1968] UKHL 1, the House of Lords (then the UK’s highest court) held that a Minister’s decision-making discretion under a piece of legislation may not be exercised to frustrate the overall policy and objects of that legislation. Cobbe has already undertaken insightful analysis of the application of administrative law principles to automated decision processes (Cobbe, 2018), so there is no need for detailed discussion here. DPIAs for government data processing systems will enhance rule of law governance if they include analysis of the system against administrative law principles. A key administrative law principle to which all government data processing systems must adhere is the principle of legality: government data processing systems must correctly implement the law and not exceed the scope of the power granted under law. Including analysis of the legality of a data processing system in DPIAs will mitigate the risk of the system being found unlawful on judicial review.

In order to make the law more accessible, data processing systems should incorporate explanations of how the law is being applied and implemented, and DPIAs could include assessment of such explanations in administrative law terms. Explanations could be used where there is a duty to provide reasons for a decision, which tends to be the case for more serious decisions such as the refusal to grant a passport (Cobbe, 2018). Providing explanations for decisions by default would enhance the rule of law by increasing the clarity of the law and how it has been implemented. Explanations for data processing systems used by government will also help to demonstrate compliance with another administrative law principle, which requires that decision-makers take all relevant considerations into account and not take into account irrelevant considerations.

Partially automated systems that are developed with transparency and explanation by design will help human decision-makers to understand the nature of and basis for the result of the automated data processing, and therefore to properly take that automated processing into account in their decision (Zalnieriute et al., 2019). This includes enabling human decision-makers to disregard the result of automated data processing where the explanation for the result indicates that irrelevant factors were taken into account in the system. As Binns and Gallo have explained:

If the inputs and outputs of AI systems are not easily interpretable, and other explanation tools are not available or reliable, there is a risk a human will not be able to meaningfully review the output of an AI system.

If meaningful reviews are not possible, the reviewer may start to just agree with the system’s recommendations without judgement or challenge. This would mean the decision was “solely automated.”

DPIAs should assess how human decision-makers in a partially automated process will interact with the automated data processing, including whether the automated data processing will be interpretable.
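As a sketch of what an interpretable hand-off from the automated processing to the human reviewer might look like, the example below pairs the automated suggestion with plain-language factors. The fields and factor names are illustrative assumptions, not any department’s actual format.

```python
from dataclasses import dataclass, field

@dataclass
class AutomatedResult:
    outcome: str  # e.g., "partial" residence match
    factors: dict = field(default_factory=dict)  # factor -> plain-language note

# Invented example: the reviewer sees why the suggestion was "partial".
result = AutomatedResult(
    outcome="partial",
    factors={
        "HMRC PAYE records": "36 of 60 months matched",
        "DWP benefit records": "0 months matched (Child Benefit not checked)",
    },
)

# A reviewer who can see that the only gap stems from a benefit category
# excluded from the checks can meaningfully depart from the suggestion,
# rather than rubber-stamping a bare score.
for factor, note in result.factors.items():
    print(f"{factor}: {note}")
print(f"Automated suggestion: {result.outcome}")
```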

Lessons from Environmental Law—Good Governance through Good Process

Having considered what the substance of DPIAs should include with regard to human rights and administrative law, this section turns to environmental law for lessons on good processes that promote good governance.

The efficacy and legitimacy of the governance of data processing would be enhanced by following the example of environmental governance. As noted above, the GDPR lacks effective governance requirements for transparency or public participation in DPIAs, although the GDPR stipulates consultation with affected data subjects. By comparison, the 1998 Convention on Access to Information, Public Participation in Decision-making and Access to Justice in Environmental Matters (Aarhus Convention) provides a framework of transparency, participation, and access to justice in order to improve environmental governance. As Article 1 of the Aarhus Convention sets out:

In order to contribute to the protection of the right of every person of present and future generations to live in an environment adequate to his or her health and well-being, each Party shall guarantee the rights of access to information, public participation in decision-making, and access to justice in environmental matters in accordance with the provisions of this Convention.

The principle of public participation under the Aarhus Convention is strengthened and reinforced by information rights that help to overcome the information asymmetry between the public and those proposing a project. The public participation principle includes providing the public with information on the proposed project and the approval process for the proposal, including opportunities for public participation in the decision (Article 6(2)). Another aspect of the principle is providing the public with access to relevant information on the proposal (Article 6(6)). Access to such information enables public participation to be informed and therefore more meaningful because, as a meeting of the Aarhus Convention members has observed, “Access to information is an essential prerequisite for effective public participation.”Footnote 16

The process for DPIAs provides an opportunity to systematically implement the principles of transparency and public participation in relation to data processing proposed by government. As noted above, AI Now’s analysis highlights the relevance of the principles of transparency and public participation in the context of algorithmic systems. If the kind of meaningful consultation and public participation described in the Aarhus Convention were applied in the processes of DPIAs, it could improve governance of data processing by government through informed public participation.

By contrast, little information was made available to the public on how data processing in the settled status scheme would operate prior to the scheme’s establishment, although some information was published when the scheme was established (Home Office, 2019b). For example, the memoranda of understanding on data sharing between the Home Office and the DWP, and between the Home Office and HMRC, were not disclosed until the scheme was already live, following questions in Parliament. Those memoranda refer to DPIAs having been conducted, but the DPIAs have not been published. As such, although the Home Office established stakeholder groups to consult on the settled status scheme, the participation of those groups was limited by the information available to them.

One of the reasons it is useful to compare public participation on data processing as part of a DPIA with public participation on environmental impacts is that both contexts concern public engagement with information produced through technical expertise. UK government guidance on environmental impact assessments states that “The Environmental Statement must include at least the information reasonably required to assess the likely significant environmental effects of the development” (Ministry of Housing, Communities, & Local Government, 2014). The specialist technical expertise needed to identify and assess environmental impacts, such as environmental science expertise, is analogous to the expertise needed to properly conduct a DPIA, which includes data science and human rights expertise. Public participation under the Aarhus Convention requires the translation of technical information into a “non-technical summary” (Article 6(6)(d)), recognizing that without such a summary the technical nature of the information will pose a barrier to meaningful public participation.

DPIAs have not yet been published on RBV systems for HB/CTB applications, but some Equality Impact Assessments have been published. Consideration of some examples of publicly available Equality Impact Assessments of RBV systems for HB/CTB applications suggests that there may be a failure to understand how the RBV systems determine risk. In two examples, the assessment was that the proposed RBV policies would not have an impact on people with protected characteristics because all applications would be subject to the RBV. The risk that there might be disproportionate impacts on certain groups and hence indirect discrimination (for example, the risk of all people of non-White ethnicity being classified as high risk) appears not to have been considered (Ealing Council, 2018; Rochdale Borough Council, 2015). Interestingly, another example of an equality assessment, undertaken to determine whether an RBV should continue, contained some useful analysis of data disaggregated by protected characteristics, with consideration of whether there was disproportionate impact on such groups (Waltham Forest Council, 2015). However, there still appear to be some shortcomings in the analysis; for example, in relation to race, it is not clear why the impact of RBV on non-White groups was not considered. DPIAs would need to include much better information than these Equality Impact Assessments on the nature and functioning of RBVs in order to meaningfully assess the risk they pose to rights and interests.

Failure of Existing Policies to Secure Transparency

This section highlights the need to uphold and fulfil the principle of transparency in practice. The principle of transparency, along with the other principles discussed in this article, is well recognized by the United Kingdom but not well implemented in relation to government data processing systems. This gap between recognized principles and the actual operation of government data processing systems is one of the reasons for this article’s proposal of DPIAs as governance mechanisms through which these principles could be implemented.

Transparency and openness are encouraged by UK government policy, yet seem not to be implemented in practice. In addition to the legally binding provisions of equality and data protection law, there are UK government policies relevant to the design and implementation of data processing systems. For example, the Department for Digital, Culture, Media & Sport’s Data Ethics Framework guides the design of data use by government and the public sector, and is aimed at all those in the public sector working with data. Principle 6 of the Framework is “Make your work transparent and be accountable.” The Guidance for this principle states:

Your work must be accountable, which is only possible if people are aware of and can understand your work.

Being open about your work is critical to helping to make better use of data across government. When discussing your work openly, be transparent about the tools, data, algorithms and the user need (unless there are reasons not to such as fraud or counter-terrorism). Provide your explanations in plain English. (Department for Digital, Culture, Media & Sport, 2018)

Similarly, the seven principles of public life (the “Nolan principles”) are reflected in the Ministerial Code, and apply to all who work in the civil service, public bodies, and public services. Those principles include:

4. Accountability

Holders of public office are accountable to the public for their decisions and actions and must submit themselves to the scrutiny necessary to ensure this.

5. Openness

Holders of public office should act and take decisions in an open and transparent manner. Information should not be withheld from the public unless there are clear and lawful reasons for so doing. (Committee on Standards in Public Life, 1995)

Notably, the Committee on Standards in Public Life is undertaking a review into “artificial intelligence and its impact on standards across the public sector” (see Footnote 17). This suggests that the Committee recognizes that data processing tools are affecting the established and accepted standards for the public sector.

Despite the emphasis on transparency and openness in UK government policy, the use of data processing by UK public authorities in their exercise of public power often lacks transparency. The UN Special Rapporteur on Extreme Poverty and Human Rights made the following observations after his visit to the United Kingdom in 2018 (citations omitted):

A major issue with the development of new technologies by the UK government is a lack of transparency. Even the existence of the automated systems developed by DWP’s “Analysis & Intelligence Hub” and “Risk Intelligent Service” is almost unknown. The existence, purpose and basic functioning of these automated government systems remains a mystery in many cases, fueling [sic] misconceptions and anxiety about them. Advocacy organizations and media must rely on Freedom of Information requests to clarify the scope of automated systems used by government, but such requests often fail. (Alston, 2018)

A recent report by The Bureau of Investigative Journalism identified significant concerns about the transparency of government procurement of data systems and data processing. The authors conducted an extensive investigation, using government procurement data and freedom of information requests, into government purchases related to data processing. The report concluded that:

  1. Many authorities were unwilling or unable to specify how and why they purchased these services, however, or what their precise specifications were.

  2. Public authorities—national and local—are supposed to keep transparent and accessible records of the services they purchase (in part to comply with the Public Contracts Regulations 2015). We found that this was rarely the case.

  3. Government transparency datasets are an inadequate tool for understanding purchases, particularly in the case of highly diverse large companies which offer a multiplicity of services … (Black and Safak, 2019)

It can therefore be concluded that the use and operation of data processing systems by government is not currently transparent. This opacity is illustrated by the systems in this article’s case studies. There is almost no information available on the data used in RBV assessments for HB and CTB, nor on how those data are processed. There is more information on the automated checks in the settled status scheme, in terms of the data gathered for processing and the way residence is automatically calculated, although questions remain as to the precise business logic that the Home Office applies. There is also a question as to the purposes for which data shared under the memoranda of understanding between the Home Office and the DWP and HMRC can be accessed by Home Office, DWP, and HMRC officials (see Footnote 18).

Requiring that government DPIAs be published would help to address the current lack of transparency around government data processing systems. The DIHR Guidance on human rights impact assessments explains the importance of reporting on the assessment and providing stakeholders with access to the report. The Aarhus Convention also emphasizes the importance of making information publicly available. This principle of transparency through public information is relevant for the written reports from DPIAs. In the same way that all UK legislation is published on legislation.gov.uk, access to the information in DPIAs would be enhanced by publishing them in one online location (not on each department’s website), in a searchable database developed and maintained by an independent public authority.
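As a purely illustrative sketch of what one entry in such a central register might record, the following defines a minimal schema; every field name is an assumption chosen for illustration, not a proposed standard, and the example values are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DPIARecord:
    """One entry in a hypothetical searchable register of government DPIAs."""
    department: str          # body responsible for the processing
    system_name: str         # the data processing system assessed
    legal_basis: str         # statutory power the processing relies on
    dpia_published_on: date  # publication by default, as proposed in this article
    report_url: str          # link to the full DPIA report
    monitoring_report_urls: list = field(default_factory=list)  # follow-up audits

# Hypothetical example entry (the URL is a placeholder).
example = DPIARecord(
    department="Home Office",
    system_name="EU Settlement Scheme automated residence checks",
    legal_basis="Immigration Rules, Appendix EU",
    dpia_published_on=date(2019, 3, 29),
    report_url="https://dpia-register.example/settlement-scheme",
)
```

Keeping the register in one machine-readable location, rather than scattered across departmental websites, is what would allow civil society and researchers to search and monitor government data processing systematically.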

Conclusion and Recommendations

Rather than arguing against government use of data processing systems, this article translates between data-focused, legal, and policy epistemologies to provide some shared understanding of the rule of law risks posed by government use of data processing systems. There could be advantages in rule of law terms to using such systems for public purposes, such as ensuring that only legally relevant considerations are taken into account in a decision as required by administrative law (Zalnieriute et al., 2019). Interdisciplinary approaches are needed for good design and good governance of such systems, incorporating expertise across a range of areas including data science, digital technology, design, human rights and administrative law, and policy. In order for the recommendations identified in this article to uphold the rule of law effectively, it is essential that government agencies and departments have the necessary resources and expertise across these areas to carry out DPIAs and ongoing monitoring.

The purpose of this article’s recommendations for the substance and process of DPIAs is to begin a practical discussion on how to improve the conformity of government data processing systems with rule of law principles. Although the scope of this article has not included the lex specialis for data processing in the context of criminal law under EU Directive 2016/680, some of the conclusions from this analysis in relation to civil and administrative law may also be relevant to the criminal law context. This article’s analysis of human rights and administrative law identifies the following components for the substance of a DPIA for data processing by government to enhance fidelity to rule of law principles:

  1. assessment of whether the data processing is within the scope of the power granted under the law (per administrative law);

  2. assessment of whether the data processing system will correctly implement the law (per administrative law);

  3. analysis of any risk of discrimination, including through testing and dry-runs to assess indirect discrimination—this analysis should inform or be integrated with measures to comply with the public sector equality duty (per the Equality Act in Great Britain);

  4. impact assessment across all human rights, including economic, social, and cultural rights, as well as civil and political rights such as freedom of association and freedom of expression (per the Human Rights Act, ICESCR, and ICCPR);

  5. analysis of how decision-makers and individuals subject to the system will interact with the system, including in particular the features of the system that will provide transparency and explanations as to the data that the system uses, the way in which it processes those data, and how the system’s data will be distributed (per administrative law);

  6. identification of the indicators or data points that will be measured and collected for ongoing regular monitoring and auditing of the system, including to monitor impact on human rights and whether the system is producing just and lawful outcomes (a minimal monitoring sketch follows this list); and

  7. explanation of the access to justice mechanisms for those subject to the data processing system with a goal of ensuring accountability and enforcing proper implementation of the law by government—such mechanisms could include enabling individuals subject to the system to know what data were used in the making of a decision and to correct those data if they are inaccurate (per rule of law principles and administrative law).
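As a minimal sketch of component 6 above, the following illustrates how pre-specified indicators could be logged at a regular interval and checked against a baseline, so that drift in outcomes triggers human review. The indicator names, baseline values, and tolerance are all hypothetical assumptions; in practice they would be set during the DPIA and justified in its report.

```python
# All indicator names, baselines, and the tolerance are hypothetical.
BASELINE = {"refusal_rate": 0.05, "manual_override_rate": 0.10}
TOLERANCE = 0.5  # flag an indicator that drifts more than 50% from baseline


def check_indicators(observed: dict) -> list:
    """Return the names of indicators whose observed values drift beyond tolerance."""
    flagged = []
    for name, baseline_value in BASELINE.items():
        drift = abs(observed[name] - baseline_value) / baseline_value
        if drift > TOLERANCE:
            flagged.append(name)
    return flagged


# Example: a month in which refusals spike is flagged for audit and review.
print(check_indicators({"refusal_rate": 0.09, "manual_override_rate": 0.11}))
# -> ['refusal_rate']
```

The point of pre-specifying indicators in the DPIA is that monitoring is then an obligation with a fixed yardstick, rather than an ad hoc exercise conducted only when problems surface.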

Learning the lessons from the principles of transparency and public participation in environmental governance, DPIA processes should also include the following elements of transparency, public consultation, and testing. These elements may not be strictly required by law, but they will help to ensure that the DPIA and the proposed data processing meet the legal standards and requirements identified in the list above and in this article generally.

  8. public notice of the proposed data processing, including a non-technical summary and explanation of how the processing system would work in terms of its purpose, reach/scope, internal use policies, and potential impacts;

  9. public consultation and participation in both the design and development of the data processing system; and

  10. testing of the data processing system before it goes live, with information on the results of that testing informing the public participation on the development of the system.

Finally, transparency and accountability would be enhanced if government DPIAs were published by default, creating a publicly accessible database of government DPIAs. DPIAs should be followed by regular monitoring and auditing of government data processing systems, the reports from which should also be publicly available.

Acknowledgments

The author wishes to acknowledge and thank Dr. Natalie Byrom, Dr. Jennifer Cobbe, and Sam Smith for their thoughts, which informed this article. The author wrote this article as part of her role at The Legal Education Foundation, and acknowledges The Foundation’s support. A draft of this article was presented at the 2019 Data for Policy conference in London, and the author is grateful to the organizers of the conference for that opportunity. The author is also grateful to the reviewers for their comments.

Funding Statement

This work received no specific grant from any funding agency, commercial, or not-for-profit sectors.

Competing Interests

The author declares no competing interests exist.

Authorship Contributions

Writing-original draft, S.L.H.; Writing-review & editing, S.L.H.

Data Availability Statement

Data availability is not applicable to this article as no new data were created or analyzed; only publicly available data cited in the article, such as parliamentary questions, were used.

Footnotes

2 An early draft of some of the analysis was set out in a briefing by the author for a meeting of the All-Party Parliamentary Group on the Rule of Law: Briefing for the May 13, 2019 closed roundtable meeting on Data Processing and the Rule of Law. Bingham Centre for the Rule of Law: APPG on the Rule of Law. Available at https://www.biicl.org/documents/2101_data_processing_appg_briefing_-_may_2019_002.pdf.

3 Select Committee on the European Union Justice Sub-Committee (2019), Uncorrected oral evidence—Brexit: Citizens' rights on July 16, 2019 with Rt Hon Caroline Nokes MP.

4 Letter from Home Secretary Rt Hon Sajid Javid MP to Home Affairs Committee Chair Rt Hon Yvette Cooper MP (2019); dated May 1, 2019. Available at https://www.parliament.uk/documents/commons-committees/home-affairs/Correspondence-17-19/19-05-01-Letter-from-the-Home-Secretary-relating-to-EU-Settlement-Scheme.pdf.

5 Public Bill Committee (2019): Immigration and Social Security Co-ordination (EU Withdrawal) Bill (March 5, 2019), col. 376.

6 Public Bill Committee (2019): Immigration and Social Security Co-ordination (EU Withdrawal) Bill (March 5, 2019), col. 375.

7 There is inconsistent information available as to the Home Office’s fulfilment of its equality duty. The memorandum of understanding between the Home Office and DWP concerning data sharing refers to an Equality Impact Assessment having been conducted at p. 6 (Process Level Memorandum of Understanding (PMoU) (2019b) between The Home Office and Department for Work and Pensions, available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/790668/Home_Office_-_DWP_API_EU_Exit_MoU.PDF). However, an answer from the Immigration Minister to a parliamentary question by Paul Blomfield MP states only that “In accordance with the public sector equality duty under section 149 of the Equality Act 2010, the Government has had due regard to the impacts of the EU Settlement Scheme on those who share a protected characteristic” (Immigration: EU Nationals: Written question—252534, question asked by Paul Blomfield on May 9, 2019, answered by Caroline Nokes on May 14, 2019).

8 By contrast, see the views of Roger Clarke who argues that PIAs encompass a broader understanding of privacy than DPIAs, for example, Roger Clarke, “The Distinction between a PIA and a Data Protection Impact Assessment (DPIA) under the EU GDPR,” For a Panel at CPDP, Brussels, January 27, 2017, notes of January 19, 2017. Available at http://www.rogerclarke.com/DV/PIAvsDPIA.html.

12 Process Level Memorandum of Understanding (PMoU) (2019b) between The Home Office and Department for Work and Pensions. Available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/790668/Home_Office_-_DWP_API_EU_Exit_MoU.PDF, p. 12; Process Level Memorandum of Understanding (PMoU) (2019a) between The Home Office and Her Majesty’s Revenue and Customs. Available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/790661/Home_Office_-_HMRC_API_EU_Exit_MoU.PDF, p. 23.

13 Immigrants: EU Nationals: Written question—254812, question asked by Paul Blomfield on May 15, 2019, answered by Caroline Nokes on May 22, 2019.

14 See, for example, GBI (n.d.). Identifying human rights impacts. Available at https://gbihr.org/business-practice-portal/identifying-human-rights-impacts.

15 EU Settlement Scheme: Written statement—HCWS1387, made by Caroline Nokes (The Minister of State for Immigration) on March 7, 2019; Home Office (2019c), EU Settlement Scheme: Public Beta Testing Phase Report (May 2, 2019); see also Home Office (2019a). EU Settlement Scheme private beta testing phase 2 report, which states “Findings in this phase cannot be extrapolated to identify the likely applicant experience for all 3.5 million resident EU citizens and their family members. The PB2 cohort is not reflective of all individuals who will be eligible to apply to the EU Settlement Scheme, since it was selected in part to support the testing of specific aspects of the system, for example, the identity verification app and automated checks of HMRC and DWP data.”

16 Meeting of the Parties to the Convention on Access to Information, Public Participation in Decision-Making and Access to Justice in Environmental Matters (2014). Maastricht Recommendations on Promoting Effective Public Participation in Decision-Making in Environmental Matters (June 30 and July 1, 2014).

17 Committee on Standards in Public Life (2019). AI and Public Standards. Available at https://www.gov.uk/government/collections/ai-and-public-standards.

18 Immigrants: EU Nationals: Written question—254812. Question asked by Paul Blomfield on May 15, 2019, answered by Caroline Nokes on May 22, 2019.

References

Alston, P (2018) United Nations Special Rapporteur on Extreme Poverty and Human Rights: Statement on Visit to the United Kingdom. Available at https://www.ohchr.org/documents/issues/poverty/eom_gb_16nov2018.pdf.
Article 29 Data Protection Working Party (2017) Guidelines on Data Protection Impact Assessment (DPIA) and Determining Whether Processing is “Likely to Result in a High Risk” for the Purposes of Regulation 2016/679.
Artificial Intelligence: Australia's Ethics Framework (2019) Australian Department of Industry, Innovation and Science. Available at https://consult.industry.gov.au/strategic-policy/artificial-intelligence-ethics-framework/.
Bingham, T (2010) The Rule of Law. London, UK: Penguin Books.
Binns, R and Gallo, V (2019) Automated Decision Making: The Role of Meaningful Human Reviews. Information Commissioner’s Office. Available at https://ai-auditingframework.blogspot.com/2019/04/automated-decision-making-role-of.html.
Black, C and Safak, C (2019) Government Data Systems: The Bureau Investigates. The Bureau of Investigative Journalism.
Central Bedfordshire Council (2018) Annual Review of Risk Based Verification (RBV) Policy for Housing Benefit and Local Council Tax Support Assessments. Available at https://centralbeds.moderngov.co.uk/documents/s77223/08%20Annual%20Review%20of%20Risk%20Based%20Verification%20RBV%20Policy%20for%20Housing%20Benefit%20and%20Local%20Council%20Tax%20S.pdf.
Cobbe, J (2018) Administrative Law and the Machines of Government: Judicial Review of Automated Public-Sector Decision-Making. Pre-review version of a paper in Legal Studies. Available at https://ssrn.com/abstract=3226913 or http://dx.doi.org/10.2139/ssrn.3226913.
Committee on Standards in Public Life (1995) Guidance: The 7 Principles of Public Life. Available at https://www.gov.uk/government/publications/the-7-principles-of-public-life/the-7-principles-of-public-life--2.
Committee on Standards in Public Life (2019) AI and Public Standards. Available at https://www.gov.uk/government/collections/ai-and-public-standards.
Coram Children’s Legal Centre (2019) Uncertain Futures: The EU Settlement Scheme and Children and Young Peoples’ Right to Remain in the UK.
Council of Europe Commissioner for Human Rights (2019) Unboxing Artificial Intelligence: 10 Steps to Protect Human Rights.
Data Justice Lab (2018) Data Scores as Governance: Investigating Uses of Citizen Scoring in Public Services.
Dawson, D, Schleiger, E, Horton, J, McLaughlin, J, Robinson, C, Quezada, G and Hajkowicz, S (2019) Artificial Intelligence: Australia’s Ethics Framework. Australia: Data61 CSIRO.
Department for Digital, Culture, Media & Sport (2018) Guidance: 6. Make Your Work Transparent and be Accountable. Available at https://www.gov.uk/guidance/6-make-your-work-transparent-and-be-accountable.
Directive on Automated Decision-Making (2019) Government of Canada. Available at https://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=32592.
Dunt, I (2019) Warning Lights Flashing over EU Settled Status App. politics.co.uk. Available at https://www.politics.co.uk/blogs/2019/02/06/warning-lights-flashing-over-eu-settled-status-app.
DWP (2011) Housing Benefit and Council Tax Benefit Circular HB/CTB S11/2011: Risk-Based Verification of HB/CTB Claims Guidance. Available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/633018/s11-2011.pdf.
DWP (2013) Local Authority Insight Survey—Wave 24.
Ealing Council (2018) Officer’s Decision: RBV Policy April 2018, Attaching Full Equalities Analysis Assessment. Available at https://ealing.cmis.uk.com/ealing/Document.ashx?czJKcaeAi5tUFL1DTL2UE4zNRBcoShgo=195YowuP8la3ZeGBVU2bAkxQ2iVpofBRpzm%2FXRC%2BMEjgYSxJ52Gbmw%3D%3D&rUzwRPf%2BZ3zd4E7Ikn8Lyw%3D%3D=pwRE6AGJFLDNlh225F5QMaQWCtPHwdhUfCZ%2FLUQzgA2uL5jNRG4jdQ%3D%3D&mCTIbCubSFfXsDGW9IXnlg%3D%3D=hFflUdN3100%3D&kCx1AnS9%2FpWZQ40DXFvdEw%3D%3D=hFflUdN3100%3D&uJovDxwdjMPoYv%2BAJvYtyA%3D%3D=ctNJFf55vVA%3D&FgPlIEJYlotS%2BYGoBi5olA%3D%3D=NHdURQburHA%3D&d9Qjj0ag1Pd993jsyOJqFvmyB7X0CSQK=ctNJFf55vVA%3D&WGewmoAfeNR9xqBux0r1Q8Za60lavYmz=ctNJFf55vVA%3D&WGewmoAfeNQ16B2MHuCpMRKZMwaG1PaO=ctNJFf55vVA%3D.
EU Settlement Scheme (2019) Written Statement—HCWS1387 made by Caroline Nokes (The Minister of State for Immigration) on March 7, 2019.
Götzmann, N, Bansal, T, Wrzoncki, E, Poulsen-Hansen, C, Tedaldi, J and Høvsgaard, R (2016) Human Rights Impact Assessment Guidance and Toolbox. The Danish Institute for Human Rights.
Harris, SL and McNamara, L (2016) The Rule of Law in Parliament: A Review of Sessions 2013–14 and 2014–15. Bingham Centre for the Rule of Law.
Home Office (2019a) EU Settlement Scheme Private Beta Testing Phase 2 Report (January 21, 2019).
Home Office (2019b) EU Settlement Scheme: UK Tax and Benefits Records Automated Check (March 29, 2019).
Home Office (2019c) EU Settlement Scheme Statistics, April 2019: Experimental Statistics (May 30, 2019).
Home Office (2019d) EU Settlement Scheme Statistics, May 2019: Experimental Statistics, 2nd Edn. (June 20, 2019).
Home Office (2019f) EU Settlement Scheme Quarterly Statistics (28 August 2018 to 30 September 2019; November 7, 2019).
Home Office (2019g) EU Settlement Scheme Statistics, October 2019: Experimental Statistics (November 14, 2019).
ICO (2014) Code of Practice for Privacy Impact Assessments.
Immigrants: EU Nationals: Written question—254812 (2019) Question asked by Paul Blomfield on May 15, 2019, answered by Caroline Nokes on May 22, 2019.
Immigration: EU Nationals: Written question—252534 (2019) Question asked by Paul Blomfield on May 9, 2019, answered by Caroline Nokes on May 14, 2019.
Janssen, H (2018) Detecting New Approaches for a Fundamental Rights Impact Assessment to Automated Decision-Making. Available at https://ssrn.com/abstract=3302839 or http://dx.doi.org/10.2139/ssrn.3302839.
Letter from Home Secretary Rt Hon Sajid Javid MP to Home Affairs Committee Chair Rt Hon Yvette Cooper MP (2019) Dated May 1, 2019. Available at https://www.parliament.uk/documents/commons-committees/home-affairs/Correspondence-17-19/19-05-01-Letter-from-the-Home-Secretary-relating-to-EU-Settlement-Scheme.pdf.
McGregor, L, Ng, V and Shaheed, A (2018) The Universal Declaration of Human Rights at 70: Putting Human Rights at the Heart of the Design, Development and Deployment of Artificial Intelligence. Human Rights, Big Data and Technology Project. Available at https://48ba3m4eh2bf2sksp43rq8kk-wpengine.netdna-ssl.com/wp-content/uploads/2018/12/UDHR70_AI.pdf.
Meeting of the Parties to the Convention on Access to Information, Public Participation in Decision-making and Access to Justice in Environmental Matters (2014) Maastricht Recommendations on Promoting Effective Public Participation in Decision-making in Environmental Matters.
Ministry of Housing, Communities & Local Government (2014) Guidance: Environmental Impact Assessment: Explains Requirements of the Town and Country Planning (Environmental Impact Assessment) Regulations 2017. Available at https://www.gov.uk/guidance/environmental-impact-assessment.
Process Level Memorandum of Understanding (PMoU) (2019a) Between The Home Office and Her Majesty’s Revenue and Customs. Available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/790661/Home_Office_-_HMRC_API_EU_Exit_MoU.PDF.
Process Level Memorandum of Understanding (PMoU) (2019b) Between The Home Office and Department for Work and Pensions. Available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/790668/Home_Office_-_DWP_API_EU_Exit_MoU.PDF.
Public Bill Committee (2019) Immigration and Social Security Co-ordination (EU Withdrawal) Bill.
Reisman, D, Schultz, J, Crawford, K and Whittaker, M (2018) Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability. AI Now.
Robertson, A (2019) A New Bill Would Force Companies to Check their Algorithms for Bias. The Verge. Available at https://www.theverge.com/2019/4/10/18304960/congress-algorithmic-accountability-act-wyden-clarke-booker-bill-introduced-house-senate.
Rochdale Borough Council (2015) Appendix: Equality Impact Assessment of Risk Based Verification Policy. Available at http://democracy.rochdale.gov.uk/documents/s38489/Append.%202%20for%20Risk%20Based%20Verification%20Policy.pdf.
Select Committee on the European Union Justice Sub-Committee (2019) Uncorrected Oral Evidence—Brexit: Citizens' Rights on July 16, 2019 with Rt Hon Caroline Nokes MP.
Waltham Forest Council (2015) Decision: Equality Analysis—Risk Based Verification Policy. Available at https://democracy.walthamforest.gov.uk/documents/s47042/Equality%20Analysis%20RBV%20Policy%20V1%20APPENDIX%202.pdf.
Zalnieriute, M, Bennett Moses, L and Williams, G (2019) The Rule of Law and Automation of Government Decision-Making. Forthcoming in Modern Law Review; UNSW Law Research Paper No. 19-14. Available at https://ssrn.com/abstract=3348831 or http://dx.doi.org/10.2139/ssrn.3348831.