
Part II - Facial Recognition Technology across the Globe

Jurisdictional Perspectives

Published online by Cambridge University Press:  28 March 2024

Rita Matulionyte (Macquarie University, Sydney)
Monika Zalnieriute (University of New South Wales, Sydney)

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2024
Creative Commons: This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC BY-NC-ND 4.0 (https://creativecommons.org/licenses/by-nc-nd/4.0/).

9 Government Use of Facial Recognition Technologies under European Law

Simone Kuhlmann

State actors in Europe, in particular security authorities, are increasingly deploying biometric methods such as facial recognition for different purposes, especially in law enforcement, despite a lack of independent validation of the promised benefits to public safety and security. Although some rules, such as the General Data Protection Regulation (GDPR) and the Law Enforcement Directive (LED), are in force, no concrete legal framework addressing the use of facial recognition technologies (FRTs) in the EU exists so far. Given that the technology processes extremely sensitive personal data, does not always work reliably, and is associated with risks of unfair discrimination, a general ban on any use of artificial intelligence (AI) for automated recognition of human features, at least in publicly accessible spaces, has been demanded.Footnote 1 Against this background, this chapter adopts a fundamental rights perspective and examines whether and to what extent government use of FRT can be accepted under European law.

9.1 Government Use of Facial Recognition Technologies within the EU

Government use of FRT in the EU has so far been limited. Eleven countries – Austria, Finland, France, Germany, Greece, Hungary, Italy, Latvia, Lithuania, Slovenia, and the Netherlands – are already using FRT, with an upward trend, but deployments have so far been primarily experimental and localised.Footnote 2 FRT is mainly used by security authorities for the prevention, investigation, detection, and prosecution of criminal offences, as well as the prevention of threats to public security. Apart from the most controversial use of FRT for (mass) surveillance, especially in publicly accessible spaces, law enforcement agencies in the EU have so far used it primarily for forensic authentication.Footnote 3 The typical scenario is to match the photograph of a suspect (e.g., extracted from previous records or closed-circuit television footage) against an existing database of known individuals (e.g., a national biometric database, a driver’s licence database) for ex-post identification in a criminal investigation. Finally, on grounds of efficiency, FRT is also increasingly used by law enforcement agencies as a tool for analysing large amounts of video footage, for instance to search for a specific person or to track a person across multiple videos, since manual analysis can be very time and resource consuming.Footnote 4

9.2 Suitability Despite Accuracy Concerns

A ban on the use of FRT for law enforcement purposes is still being discussed, under the recurring argument that the performance of such systems is not yet adequate: sufficient accuracy rates cannot be achieved in real-life settings, errors are unequally distributed across the reference population, and minorities are discriminated against.Footnote 5

If a method or system is not reliable and mistakes occur when it is implemented in practice, its use by state authorities may be unlawful. Under European law, the exercise of recognised rights and freedoms can be limited only if such limiting measures are appropriate and necessary to achieve their objectives (see Art. 52 para. 1 EU Charter of Fundamental Rights (CFR)). If this is lacking, the measures are disproportionate and thus unlawful. However, the Court of Justice of the European Union (CJEU) allows the legislator a wide margin in assessing the suitability of a measure. Only if, having regard to its objective, a measure is manifestly inappropriate can its legality be affected. As long as the objective is promoted in any way, the measure is presumed to be appropriate, even if the employed method is not wholly reliable. Hence, the CJEU assessed the storage of biometric fingerprints in passports and travel documents for the purpose of preventing illegal entry to the EU as generally suitable, despite a not inconsiderable error rate.Footnote 6 The court came to the same conclusion in relation to the automated analysis of passenger name records (PNR) for the purposes of preventing, detecting, investigating, and prosecuting terrorist offences and serious crime.Footnote 7 With regard to the automated matching of PNR data against patterns, which is comparable in its functioning to facial recognition systems, the CJEU stated that the possibility of ‘false negatives’ and the fairly substantial number of ‘false positives’ resulting from the use of the system may limit its appropriateness. However, since automated processing has indeed already made it possible to identify air passengers presenting a risk in the context of the fight against terrorist offences and serious crime, the system is not inappropriate.Footnote 8 Moreover, according to the CJEU, the appropriateness of the system essentially depends on the proper functioning of the subsequent verification, by non-automated means, of the results obtained under those processing operations.Footnote 9

FRT has made real progress on accuracy in recent years, owing to the use of convolutional neural networks. Even so, the accuracy and error rates of FRT systems depend strongly on the task and the conditions under which the technology is used, as well as on the quality of the training and comparison data.Footnote 10 The one-to-one variant has become extremely accurate.Footnote 11 It is used to confirm a person’s identity on the basis of clear reference images, such as recognising the rightful owner of a passport or smartphone (verification/authentication). On standard assessments such as the Face Recognition Vendor Test (FRVT) of the National Institute of Standards and Technology (NIST), accuracy scores can be as high as 99.97 per cent.Footnote 12 This remains true, with some reductions in accuracy, even if the face is partially covered by a mask.Footnote 13 The reason is that the one-to-one variant is comparatively simple: one typically deals with standardised images, often produced under ideal conditions (e.g., consistency in lighting and positioning), which correspondingly leads to fewer inaccuracies.
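To make the mechanics concrete, one-to-one verification reduces to a thresholded similarity comparison between two face embeddings. The following is a minimal sketch: the embedding model is assumed to exist, and the function names and threshold value are illustrative assumptions, not drawn from any particular FRT product or from the FRVT.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, reference: np.ndarray, threshold: float = 0.6) -> bool:
    """One-to-one verification: accept an identity claim only if the probe
    embedding is close enough to the single reference embedding (e.g., the
    passport photo). The threshold is a policy choice trading the false
    match rate (FMR) against the false non-match rate (FNMR)."""
    return cosine_similarity(probe, reference) >= threshold
```

A single, standardised reference image and a single comparison are what keep this setting simple and its error rates low.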

The situation is quite different when FRT is used in the one-to-many variant,Footnote 14 which receives most of the attention in the debate. This variant serves to determine an unknown person’s identity by comparing a facial image against a large volume of known faces (identification). For example, it can be used to identify specific offenders or suspects, or to track down missing persons or victims of kidnapping.Footnote 15 In contrast to verification systems, the pictures of individuals used for identification purposes are usually captured remotely and in real-life settings (‘in the wild’), where the subjects do not know they are being scanned. They may not be looking directly at the camera and/or may be obscured by objects or shadows. Accordingly, accuracy rates tend to be far lower than in controlled settings. For example, NIST’s FRVT found that, when using footage of passengers entering through boarding gates (a relatively controlled setting), the best FRT system had an accuracy rate of 94.4 per cent.Footnote 16 In contrast, leading algorithms identifying individuals walking through a sporting venue – a much more challenging environment – had accuracies ranging between 36 per cent and 87 per cent, depending on camera placement.Footnote 17 These different uses, with their broad range of accuracy, raise fundamental rights concerns.
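Why identification is so much harder than verification can be shown with a back-of-the-envelope calculation; the numbers below are assumptions for illustration, not FRVT results. Even a per-comparison false match rate that looks excellent in one-to-one testing produces many false alerts once a probe is searched against a large gallery:

```python
def expected_false_matches(gallery_size: int, fmr: float) -> float:
    """Expected number of false matches when one probe face is compared
    against every gallery entry, assuming independent comparisons with a
    fixed per-comparison false match rate (fmr)."""
    return gallery_size * fmr

# Illustrative only: a 0.01 per cent per-comparison false match rate still
# yields about 100 expected false hits against a one-million-face gallery.
print(expected_false_matches(1_000_000, 0.0001))  # -> 100.0
```

This scaling effect, combined with uncontrolled ‘in the wild’ capture conditions, is why one-to-many deployments sit at the centre of the accuracy debate.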

9.3 Fundamental Rights Concerns

Government use of FRT interferes with European fundamental rights guarantees. First of all, the initial video recording, the subsequent retention of the footage, and the comparison of the footage with database records for the purpose of identification (matching) constitute interferences with the right to data protection, as set out in Article 8 CFR and Article 16 of the Treaty on the Functioning of the European Union (TFEU).Footnote 18 Both provisions ensure identical protection of personal data against processing, including in particular the image of a person recorded or, rather, the unique facial features extracted into a template. In addition, the right to private life enshrined in Article 7 CFR and Article 8 of the European Convention on Human Rights (ECHR) might also be violated, depending on how and for what purpose the technology is used. Article 7 CFR protects privacy in order to ensure the development, without outside interference, of the personality of each individual in his relations with other human beings.Footnote 19 The protection guaranteed by Article 7 CFR and Article 8 ECHR extends primarily to private zones (a person’s home or private premises). However, interaction in a public context may also fall within the scope of private life when the person can reasonably expect to be in private (e.g., a private conversation in a screened area).Footnote 20 Such an expectation cannot exist in a public space where everyone is visible to any member of the public.Footnote 21

Accordingly, the use of FRT by authorities is not necessarily inconsistent with Article 7 CFR and the right to private life, as long as the video recording is made in a public space where one cannot expect to be in private and is used solely for the purpose of identification. This applies at least as long as the recording is not stored systematically and permanently.Footnote 22 If FRT is used to gain inferences about the person and their personality, for example, their behaviour, whereabouts, movement patterns, contacts, or personal characteristics such as sexual or political orientation, Article 7 CFR might be violated, as the respect of private life includes the protection of private information and free development of personality.Footnote 23

Furthermore, depending on the task for which the technology is used, FRT may affect other fundamental rights. For instance, if authorities deploy facial recognition systems in the context of public protests, during the protest or ex-post,Footnote 24 to identify participants or locate individuals suspected of offending, an interference with the freedom of assembly under Article 12 CFR and Article 11 ECHR comes into consideration.Footnote 25 In addition, there might be a violation of a person’s freedom of opinion and expression as guaranteed by Article 11 CFR. Assemblies and protests are legally protected as spaces for the collective expression of opinions. The use of FRT, and the consequent possibility of identification and traceability, may discourage individuals from exercising their right to freedom of peaceful assembly as well as their right to freedom of expression.Footnote 26

9.4 Lawfulness under the European Rights Framework

These interferences with European fundamental rights do not make the government use of FRT generally inadmissible. Fundamental rights enshrined in the CFR and, in particular, the rights of data protection and private life, are not absolute rights; they must be considered in relation to their function in society and can be limited under certain circumstances.

9.4.1 Specific Legal Bases

First of all, Article 52(1) CFR requires a specific legal basis for any limitation of fundamental rights.Footnote 27 Thus, a specific legal basis is required that authorises the deployment of FRT systems. The EU currently has no competence to comprehensively and conclusively regulate the powers of member states’ public authorities to intervene in the processing of personal data.Footnote 28 It is therefore up to the member states to create regulations precisely describing the applications of FRT and the conditions for its use.Footnote 29 Recourse to the GDPR is not possible, as police data processing for the purpose of preventing and prosecuting criminal offences – as described earlier, currently the main field of application in the EU – is not covered by its scope (see Art. 2(2) lit. d GDPR).

When adopting a legal basis for the use of FRT in law enforcement, the member states must observe the general requirements of the LED.Footnote 30 It regulates the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection, or prosecution of criminal offences including the prevention of threats to public security. The LED imposes some restrictions on the processing of special categories of personal data such as biometric data. For instance, the Directive permits the processing and saving of biometric data for the purpose of uniquely identifying a natural person only where ‘strictly necessary’ and, inter alia, subject to appropriate safeguards for the rights and freedoms of the data subject (see Art. 10 LED). Automated decisions based on biometric data are completely prohibited unless suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests are in place (see Art. 11(2) LED).

9.4.2 Specific, Explicit, and Legitimate Purpose

Secondly, according to Article 52(1) and the first sentence of Article 8(2) CFR, as well as the requirements of the LED, the legal basis must specify in detail the purposes for which facial biometric data may be processed and by whom.Footnote 31 It must lay down clear and precise rules governing the scope and application of the measure in question.Footnote 32 In particular, the conditions and circumstances in which authorities are empowered to resort to any measures of secret surveillance and collection of data must be sufficiently clearly defined.Footnote 33 The reason for this is twofold: on the one hand, it should be possible for an affected person to foresee the scope and application of the measures in question; on the other hand, the legal basis should define and restrict the authorities’ scope of action. General clauses that allow the processing of personal data for public interest purposes are therefore insufficient for the use of FRT. For instance, in a UK case, the Court of Appeal overturned a first instance decision and concluded that the legal framework – a Surveillance Camera Code – did not qualify as a legal basis, because it was imprecise and afforded individual police officers too much discretion.Footnote 34 The court considered that it was not clear who could be placed on the watchlist, nor was it clear that there were any criteria for determining where FRT could be deployed.Footnote 35

The specified purpose for which the data processing is authorised in the legal basis must, according to the second sentence of Article 52(1) CFR, genuinely meet the objectives of the general interest recognised by the EU or meet the need to protect the rights and freedoms of others.Footnote 36 The fight against crime in order to ensure public security, which is the main purpose of FRT in the EU so far, in principle constitutes such an objective of general interest according to the case-law of the CJEU,Footnote 37 as well as European law. In several passages in the European Treaties, the European legislator expresses the role of the EU as an ‘area of freedom, security and justice’ (see Art. 67(1) TFEU), in which the ‘prevention and combating of crime’ (see Art. 3(2) TEU, Art. 67(3) TFEU) constitutes an objective of general interest. The ECHR also recognises the legitimacy of such purposes. According to Article 8(2) ECHR, an interference by a public authority can inter alia be accepted in the ‘interests of national security, public safety or the prevention of disorder or crime’, which includes the detection of crimes that have already been committed.Footnote 38

The purpose and extent to which government use of FRT can be permitted under the European legal framework depend on the degree of interference with fundamental rights. Depending on the application and task, the deployment of FRT by authorities can affect these fundamental rights to different degrees. If authorities use the technology solely ex-post to identify a person who committed a crime and, for this purpose, match the image of the suspect against an existing database, the infringement of fundamental rights is rather limited.Footnote 39 In this case, the conduct of the affected person causes the data processing, and the data used for matching are sometimes already available in the authorities’ databases. The situation is quite different when FRT is used by safety authorities in publicly accessible spaces for the purpose of prevention, detection, and prosecution of crime, as well as for the detection of persons of interest.Footnote 40 In this case, images are captured of the face of anyone who passes within the range of the camera, without any indication that their conduct might have a link, even an indirect or remote one, with crime, and without any differentiation, limitation, or exception. Consequently, this can be described as general, indiscriminate (mass) surveillance, where the interference with fundamental rights and freedoms is wide-ranging and must be considered particularly serious.Footnote 41 From a fundamental rights perspective, it must be borne in mind that the deployment of FRT is generally more sensitive than conventional video surveillance. Unlike the latter, FRT is capable (for some applications in near real-time) of associating the footage with a specific person and with the information already available about them, which enables the collection of further sensitive data and conclusions about the person’s behaviour. In addition, an official announcement of the use of FRT in a certain area reduces the intensity of the infringement of fundamental rights but does not abrogate it.Footnote 42

The extent of interference with fundamental rights caused by the deployment of FRT does not depend solely on the number of affected people, but also on the design of the system. If the recorded facial images from a public space are deleted automatically and immediately once the comparison with the database fails to find a match, the interference is not so severe.Footnote 43 However, if the authorities store the facial images, including information on when and where each image was taken, systematically (e.g., in a biometric reference database) and for a longer period of time (e.g., to verify the identity of the affected person, to hold them for later matching, or to draw inferences regarding behaviour or personality), then the interference must be regarded as considerably more intrusive. Further, the crucial factor is not only the length of data storage, but also the amount and type of data additionally collected when the facial image is taken. Finally, the secrecy of FRT use is also significant for the degree of interference, as it does not allow the person concerned to evade the technology or seek legal protection. This applies both to the deployment of FRT in certain areas and to the storage of facial images (for instance, from social media) in a reference database for later matching attempts.
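The design point about immediate deletion can be sketched in code. The following is a minimal illustration under assumed names, not a reference implementation; the matcher service and the event record are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class MatchEvent:
    person_id: str    # identity from the watchlist
    timestamp: float  # when the image was captured
    location: str     # where the image was captured

def process_capture(template: bytes,
                    matcher: Callable[[bytes], Optional[str]],
                    timestamp: float, location: str) -> Optional[MatchEvent]:
    """Compare a freshly extracted face template against a watchlist.
    On no match, return None without persisting the template anywhere,
    so data on uninvolved passers-by is discarded immediately; on a
    match, retain only a minimal event record for subsequent review."""
    person_id = matcher(template)
    if person_id is None:
        return None  # template goes out of scope; nothing is stored
    return MatchEvent(person_id, timestamp, location)
```

Under the reasoning above, a system built this way interferes far less with fundamental rights than one that systematically writes every capture, with time and place, into a reference database.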

9.4.3 Lessons Learned from the Case Law Concerning Data Retention

From the numerous CJEU decisions on data retention, such as those dealing with the storage of data for the purpose of the prevention and prosecution of serious crime arising in connection with the use of telecommunication services, we know that the general preventive and indiscriminate retention of traffic and location data is not compatible with the fundamental rights under Articles 7, 8, and 11 CFR. This is the case even when such data retention is conducted for the purposes of combating serious crime, preventing serious threats to public security, or equally safeguarding national security.Footnote 44 The CJEU considers the collection of data for these purposes permissible only if it is based on certain personal, geographical, and temporal criteria that limit the data processing to what is strictly necessary.Footnote 45 These limits may, in particular, be determined according to the categories of persons concerned. For instance, such measures may only target people whose data are likely to reveal a link with serious criminal offences. Alternatively, the limits may be set using a geographical criterion, where the competent national authorities consider that there exists, in one or more geographical areas, a situation characterised by a high risk of preparation for or commission of serious criminal offences.Footnote 46 According to the CJEU, those areas may include places with a high incidence of serious crime and places that are particularly vulnerable to the commission of serious criminal offences, such as places or infrastructure that regularly receive a very high volume of visitors, or strategic locations such as airports, stations, or tollbooth areas.Footnote 47

Moreover, in one of its recent decisions on data retention, the CJEU has clarified that the processing of data relating to the civil identity of a user solely for the purpose of identifying the user concerned can be justified by the objective of preventing, investigating, detecting, and prosecuting criminal offences in general.Footnote 48 This is assuming that the data does not provide information other than that necessary for identification purposes, such as contact details of the user or information on the communication sent. Hence, if the data provides further information that allows precise conclusions concerning the private lives of the persons concerned, only the objectives of combating serious crime or preventing serious threats to public security are capable of justifying public authorities having access to a set of such traffic or location data.Footnote 49

Finally, according to the CJEU, there have to be minimum safeguards ensuring that the persons whose data have been retained have sufficient guarantees to effectively protect their personal data against the risk of abuse and against any unlawful access and use of that data.Footnote 50 The need for such safeguards is greater when personal data is subject to automatic processing and when there is a significant risk of unlawful access to the data.Footnote 51 Therefore, in addition to technical and organisational measures to ensure the protection and security of the data and their full integrity and confidentiality, there is a need for substantive and procedural rules regulating access to the data and their subsequent use by authorities.Footnote 52 The legal rules that authorise the data processing must restrict the purposes for which authorities are allowed to use the data to what is strictly necessary. The safeguard accepted against the risks of automatic processing – apparently also by the CJEU – is the individual review of the results by non-automated means, often called a ‘human in the loop’.Footnote 53 It should be noted, if only in passing, that this kind of safeguard is a dubious idea for many reasons, especially given the extensive literature and studies on deficits in human decision-making.
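In system terms, the safeguard the CJEU describes amounts to a gate between the automated matcher and any operational consequence. A minimal sketch, with hypothetical function names assumed for illustration:

```python
from typing import Callable, Optional

def act_on_hit(candidate: Optional[str],
               human_review: Callable[[str], bool]) -> bool:
    """'Human in the loop' gate: an automated hit never triggers a measure
    directly; it becomes actionable only once a reviewer has confirmed it
    by non-automated means. Returns True if the measure may proceed."""
    if candidate is None:
        return False                 # no automated hit, nothing to act on
    return human_review(candidate)   # reviewer confirms or rejects the hit
```

Whether such review actually compensates for automation errors is, as noted, doubtful in light of the research on human decision-making deficits.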

9.5 Consequences for the Government Use of FRT

When applying these guidelines from the European fundamental rights framework, and the associated case-law concerning data retention, to the use of FRT, it must be considered that facial recognition systems process data of similar or even higher sensitivity than traffic or location data. The processing of facial biometric data does not only enable the identification and verification of individuals. The systematic collection and evaluation of such data might – as described earlier – lead to conclusions about a person’s behaviour and whereabouts; quite apart from the fact that increasing attempts are being made to draw inferences about individual personal attributes from facial appearance, such as sexual or political orientation or violent tendencies.Footnote 54 Moreover, the face is a highly personal feature that cannot simply be changed, and FRT even works if the face is partially covered.Footnote 55 Accordingly, the existing rules addressing the processing of biometric data – the GDPR and the LED – impose particularly high requirements on the processing of such data and only permit it for the prevention of threats to high-priority legal interests, including the prosecution of serious criminal offences.

The analysis shows that, despite the interference with fundamental rights such as privacy and data protection, as well as possibly high error rates, the European fundamental rights framework does not preclude government deployment of FRT in principle. However, a specific legal basis is required, defining clearly and precisely the purposes for which and by whom FRT can be used, who has access to the generated data, and how to proceed with the data once collected (e.g., retention and deletion periods). The law must not only consider the various applications and sectors where FRT can be used, but also address the different phases of its use, including the creation of a reference dataset and its deployment.Footnote 56 Furthermore, safeguards against abuse and against any external (unauthorised) use are needed.

Government use of FRT for general preventive and indiscriminate mass surveillance purposes, in which individuals are recorded without reasonable suspicion, would not be compatible with the European fundamental rights framework. In particular, establishing a state-owned biometric reference database with facial images of persons collected without any specific reason (e.g., in order to be able to identify individuals easily in the future) would be contrary to fundamental rights: it would be nothing other than general and indiscriminate data retention. Hence, only individuals who have given the authorities a reason to record them – because they are dangerous or are suspected of having committed a crime, for example – may be included in the reference database. Deployment of FRT in publicly accessible spaces can only be allowed if it serves to avert threats to high-priority legal interests or to prosecute serious criminal offences. Such deployment should be geographically limited to high-risk areas or to areas with a high probability of locating wanted persons.Footnote 57 This is likely to apply even if the facial image is deleted automatically and immediately after the comparison with the database is completed and no match is found. The use of FRT systems is therefore conceivable, for instance, when tracking terrorists or serious criminals in highly frequented areas or strategic locations, such as airports, stations, or tollbooth areas. It could also be used for the surveillance of events or places where the risk of serious criminal offences is high. Moreover, the deployment of FRT may also be compatible with fundamental rights if it is used for the ex-post identification of criminals, terrorists, or other persons of interest, or as a tool for effective image and video evaluation (e.g., to recognise or track individuals in a video recording). Most importantly, the decisive factor for fundamental rights-compliant use of FRT is that the incoming data are not stored longer than necessary for the intended purposes and cannot be used for other purposes.

9.6 Conclusion

The analysis here leads to the conclusion that government use of FRT can be permissible under the European fundamental rights framework if subjected to specific and strict conditions. In order to allow FRT use, a legislator should provide a specific legal basis regulating the deployment of FRT that is compatible with fundamental rights. In light of this, the EU AI Act,Footnote 58 which provides general limitations on FRT use,Footnote 59 will not be sufficient as a legal basis, especially for the present main application of FRT by authorities: the prevention and prosecution of crime. There should be a legal basis legislated directly by the member states,Footnote 60 as the protection of national security as well as law enforcement fall under the legislative competence of the states, not the EU. In addition, given the interferences with fundamental rights, an empirical study of the real effectiveness of FRT before its widespread use would be sensible and desirable. So far, the advocates of this technology have failed to provide enough evidence that it can genuinely ensure public safety and security.

10 European Biometric Surveillance, Concrete Rules, and Uniform Enforcement Beyond Regulatory Abstraction and Local Enforcement

Paul De Hert and Georgios Bouchagiar
10.1 Introduction

In the era of biometric mass surveillance, novel technological implementations have led to unprecedented monitoring of sensitive data. Among other purposes, this data has been used to discriminate on the basis of certain characteristics (from sex to ethnic or social origin), contrary to multiple protective declarations, or to draw insights into people’s emotions. Such applications call for concrete regulatory intervention expressly targeted at practices that may interfere with fundamental human rights, including the rights to privacy and personal data protection.

Despite promising initiatives, such as the European Citizens’ Initiative ‘Civil society initiative for a ban on biometric mass surveillance practices’, registered by the European Commission in 2021,Footnote 1 regulators have failed to intervene readily (before the materialisation of the harm) with a view to banning, halting, or sanctioning certain intrusive practices. Although this failure might to some extent be justified by lengthy law-making procedures, there is an acute social need to protect people’s facial and other biometric data from constant watching by public or private actors, including for-profit firms, whose surveillance activities appear unregulated or under-regulated.

After discussing new challenging trends in the technological arena, this chapter emphasises the need for concrete rules surrounding specific technological uses and their possible harms. Technological uses (and misuses) can have a global reach, meaning they pose a global risk, with a potential for global harm that may affect numerous citizens simultaneously. Hence, there is a need for precise law-making and uniform enforcement – via joint-intervention and collaboration between regulatory entities around the globe – with a view to halting, banning, and sanctioning targeted practices interfering with fundamental human rights.

Section 10.2 discusses trends such as remote biometric surveillance, biometric monitoring targeted at classifying people on legally protected grounds, biometric processing that draws inferences about emotions or intentions, and traditional practices, such as closed-circuit television (CCTV) surveillance, whose regulation appears to require updating. It then argues that these four trends must serve as a warning for regulators, because they have given rise to new needs among citizens.

Section 10.3 summarises findings of our comparative study of US initiatives that regulate facial recognition or biometric data processing. Relying on these initiatives, we highlight three regulatory building blocks for the EU. First, concreteness and precision of the law: US legal texts appear clear and expressly targeted at technological uses, vulnerable groups, or coercive state powers. Second, bright-line bans: the US prohibition-agenda includes moratoria and other techniques that may, in some instances, reach the level of unconditionality. Third, practical organisation of remedies: it is not only the civil/administrative route that citizens can follow; rather, many areas, from competition and market to criminal law, are combined to enhance effectiveness of protection.

Since the surveillance effect appears ubiquitous and the technological reach seems transnational, the solution may lie not only in concrete law-making, but also in uniform or global enforcement. Section 10.4 discusses the 2021 Clearview case to demonstrate that, in this targeted case, joint scrutiny by different national entities and joint regulatory intervention (via rigorous investigations) had a positive effect and led to considerably enhanced protection for those affected by the firm’s mass surveillance practices. Section 10.5 summarises, offers comments, and makes more concrete recommendations.

10.2 Biometric Surveillance: Four Critical Trends

New technological implementations have allowed for an unprecedented regime of observation, rendering people and their biometric data particularly vulnerable to unregulated or under-regulated state and business practices.

First, remote biometric surveillance may be aimed at matching citizens to reference datasets without their knowledge.Footnote 2 In the absence of concrete laws targeted at such practices, states can hardly guarantee their citizens that firms – whose for-profit activities may be exercised around the globe and operate without enhanced checks and balances (known from public law) – will not collect this data unnoticed. Neither can it be guaranteed that firms will not share collected biometric data with law enforcement, who may subsequently exploit such data and inferences in the name of national security or the need to effectively fight against crime. In the Clearview case (discussed in Section 10.4), citizens became explicitly exposed to a giant firm’s mass processing and excessive sharing of sensitive data with law enforcement agencies around the world.

Second, biometric monitoring can be targeted at classifying people based on specific attributes, ranging from gender and age to political views.Footnote 3 With no specific regulation, citizens are unaware of how they may be protected against these unfairly discriminative practices – as discrimination on such bases is expressly prohibited under the Charter of Fundamental Rights of the European Union and the European Convention on Human Rights (ECHR).Footnote 4 Such protections are particularly important in an era when sensitive data is processed in an uncontrollable data-tsunami-fashion that becomes sharable with various state entities, and given that the European Court of Human Rights has held the view (and emphasised) for more than a decade that mere retention/collection of personal data may raise serious privacy-concerns.Footnote 5

Third, biometric watching can today be directed at processing with the further objective of drawing inferences about emotions or even intentions.Footnote 6 Orwellian fears become relevant where citizens could suffer detriment or mistreatment on the basis of ideas, feelings, or thoughts that, as regulators would agree, must remain untouched by any law or practice.

Fourth, old-school surveillance, for instance via CCTV systems, is no longer old-school. With new applications and improvements of old technologies, citizens have come to realise that legal regimes introduced to regulate old technological implementations have failed to evolve and are apparently lagging behind rapidly developing tech-trends.Footnote 7 Gone are the days of a simple CCTV camera announced by an information notice that a location is under surveillance. Such notices are hardly effective against powerful cameras capable of capturing detailed images from miles away.

These developments, leading to ubiquitous monitoring of all earth-citizens, must become a three-prong warning for regulators. First, although surveillance practices are very well targeted at citizens and their sensitive data, laws are not. Especially at the EU level, laws have remained untargeted, general, abstract, and neutral. Technologies such as cameras or drones are unmentioned in the 2016 General Data Protection Regulation (GDPR) or the 2016 Law Enforcement Directive (LED).Footnote 8 Much criticism has also surrounded recent efforts in the proposed AI Act to address more expressly certain emerging or materialised harms,Footnote 9 (potentially) caused by biometric and other un(der)regulated technologies.Footnote 10 Second, regulatory responses and checks, such as proportionality assessments performed by courts, must focus on and properly balance what is actually at stake, without fearing that they might look political or too activist.Footnote 11 This risk is only heightened when a regulatory framework is lacking or too vague. Third, fundamental human rights demand priority and enforcement – an argument closely linked to the second point. While the risk-based, cost/benefit rationale already underlying many fields, from environment to data protection,Footnote 12 could entertain utilitarianism-advocates, it cannot and should not replace the logic of the ‘fundamental’. There are certain sensitive areas where financial interests and security must not be over-prioritised; where fundamental human rights cannot be outweighed by being attributed numerical values in a mathematical fashion.Footnote 13

These technological trends and regulatory challenges must catch the eye of the regulator; for the watching of anyone anywhere, their sorting into whatever classes on whatever bases and for whatever purposes, the foreseeing of people’s thoughts and feelings, and the rebirth of old-school technologies escaping old-school laws have given birth to new citizens’ needs.

10.3 Regulatory Strategy: Focus on Concrete Technological Uses and their Possible Harm

The need for bright-line rules directed at concrete technological uses and their possible harms has long been identified and stressed in privacy-related contexts;Footnote 14 and, in recent publications, we have resorted to the US legal regime and its piecemeal approach to make concrete recommendations that might be useful for EU audiences.Footnote 15 More concretely, we have digested about fifteen US initiatives at federal, state, and local level. These initiatives refer either to biometrics or to face recognition.Footnote 16 On biometrics, there is the federal 2020 National Biometric Information Privacy Act, which aims to tackle biometric data exploitation by private entities. What caught our attention was its setting out of concrete bans on specific ways of obtaining, exploiting, and sharing biometric data:

A private entity may not collect, capture, purchase, receive through trade, or otherwise obtain a person’s or a customer’s biometric identifier or biometric information […] may not sell, lease, trade, use for advertising purposes, or otherwise profit from a person’s or a customer’s biometric identifier or biometric information […] may not disclose, redisclose, sell, lease, trade, use for advertising purposes, otherwise disseminate, or profit from such biometric identifier or biometric information […].Footnote 17

In the same vein, the 2008 Illinois Biometric Information Privacy Act sets out a number of targeted prohibitions on the processing (again, mainly obtaining, profiting, and disseminating) of biometrics by private entities (prohibitions that will play a crucial bright-line-rule role in the Clearview case discussed in Section 10.4).Footnote 18 We also appreciated the imposition of a standard of care (regarding storing, communicating, and securing) that ensures biometrics are treated in a similar way to, or are more shielded than, other confidential and sensitive information in that industry:

No private entity may collect, capture, purchase, receive through trade, or otherwise obtain a person’s or a customer’s biometric identifier or biometric information […] No private entity […] may sell, lease, trade, or otherwise profit from a person’s or a customer’s biometric identifier or biometric information […] No private entity […] may disclose, redisclose, or otherwise disseminate a person’s or a customer’s biometric identifier or biometric information […] A private entity […] shall […] store, transmit, and protect from disclosure all biometric identifiers and biometric information using the reasonable standard of care within the private entity’s industry […] store, transmit, and protect from disclosure all biometric identifiers and biometric information in a manner that is the same as or more protective than the manner in which the private entity stores, transmits, and protects other confidential and sensitive information […].Footnote 19

Similar is the 2009 Texas Business and Commerce Code Sec. 503.001, ‘Capture or Use of Biometric Identifier’ (obviously influenced by the Illinois Act), which forbids the capturing, disclosing, or exploiting of biometrics in commercial contexts, save for exceptional circumstances. It further requires that ‘reasonable care’ be shown when securing biometrics, and that any measures taken provide the same level of protection as (or more protection than) the measures taken to store the entity’s own confidential data.

California’s 2019 Assembly Bill No. 1215 is expressly aimed at forbidding biometric surveillance by law enforcement through officer cameras. There is not much to say about such a clear-cut provision targeted at avoiding abuse of law enforcement powers: ‘A law enforcement agency or law enforcement officer shall not install, activate, or use any biometric surveillance system in connection with an officer camera or data collected by an officer camera […].’Footnote 20

The 2020 California Privacy Rights Act is an EU-like tool targeted at businesses and the protection of consumers. Not only does it use GDPR-like terminology, but it also grants consumers various GDPR-like rights (including the right to correct inaccurate data or opt out of automated decision making), imposes on businesses GDPR-like obligations (such as the duty to conduct audits or risk assessments), and includes GDPR-like principles (such as data minimisation, purpose limitation, and storage limitation).

The 2020 Indiana House Bill 1238 imposes on law enforcement actors a duty to prepare a ‘surveillance technology impact and use policy’, to make that policy available to the public, and to update it prior to altering the technology’s function or purpose. Interestingly, these duties are set out in brief and simple phrasing:

Requires a state or local law enforcement agency […] that uses surveillance technology to prepare a surveillance technology impact and use policy […] and post the policy on the agency’s Internet web site […] Specifies the information that must be included in the policy […] Requires an agency to post an amended policy before implementing any enhancements to surveillance technology or using the technology in a purpose or manner not previously disclosed through the existing policy […].Footnote 21

New York’s 2020 Assembly Bill A6787D aims to protect children by suspending the use of biometric technologies (including face recognition) in public and private schools. It does so through a moratorium on purchases and uses of such technologies for a specified period of time or until they are proven safe: ‘Public and nonpublic elementary and secondary schools […] shall be prohibited from purchasing or utilizing biometric identifying technology for any purpose, including school security, until July first, two thousand twenty-two or until the commissioner authorizes such purchase or utilization […] whichever occurs later […].’Footnote 22

Virginia’s proposed 2021 Senate Bill 1392 focusses on private for-profit entities that process significant amounts of personal data, including biometrics. The Bill offers clear rules protecting biometric data as sensitive personal information, whose processing is in principle prohibited. What we found novel, compared with the GDPR regime, is the prohibition on discriminating against consumers: ‘A controller shall not discriminate against a consumer for exercising any of the consumer rights […] including denying goods or services, charging different prices or rates for goods or services, or providing a different level of quality of goods and services to the consumer […].’Footnote 23

Moving on to the US initiatives on face recognition, the proposed federal 2019 Commercial Facial Recognition Privacy Act bans the use of face recognition technology (FRT) by private actors (save where there is consent and, where possible, notification) for the purposes of facial recognition data collection, discrimination, purposes other than those of the initial processing, and the sharing of facial recognition data. Though conditional, the ban on discrimination is, again, a novelty compared with the EU regime: ‘[I]t shall be unlawful for a controller to knowingly […] use the facial recognition technology to discriminate against an end user in violation of applicable Federal or State law […].’Footnote 24

The federal 2020 Facial Recognition and Biometric Technology Moratorium Act forbids the federal government from using face recognition or other biometric technology until expressly allowed by the law: ‘[I]t shall be unlawful for any Federal agency or Federal official […] to acquire, possess, access, or use in the United States (1) any biometric surveillance system; or (2) information derived from a biometric surveillance system operated by another entity […] The prohibition […] does not apply to activities explicitly authorized by an Act of Congress […].’Footnote 25

Washington’s 2020 Engrossed Substitute Senate Bill 6280 is targeted at state and local authorities using facial recognition services and imposes several concrete duties (such as the production of accountability reports reviewable by the public), as well as restrictions (such as preventing the application of the technology to persons on concrete discriminatory grounds). What appeared interesting to us (in addition to the regulator’s concern about discrimination) were the clear bans on relying on a facial recognition service as the sole basis for establishing ‘probable cause’ in criminal contexts and on image-tampering in face recognition contexts. Nothing similar or even close to this exists in the LED:

A state or local law enforcement agency may not use the results of a facial recognition service as the sole basis to establish probable cause in a criminal investigation […] may not substantively manipulate an image for use in a facial recognition service in a manner not consistent with the facial recognition service provider’s intended use and training […].Footnote 26

New Jersey’s 2020 Assembly Bill 989 is targeted at subjecting facial recognition technologies to accuracy and bias checking; again, the focus is placed on avoiding discrimination on concrete grounds: ‘The testing and auditing is required to determine whether there is a statistically significant variation in the accuracy of the facial recognition systems on the basis of race, skin tone, ethnicity, gender, or age of the individuals portrayed in the images, whether or not those categories are applied individually or in combination […].’Footnote 27

Portland’s ordinances (2020) ban the use of face recognition in public spaces by private entities, as well as the use of FRTs by the city’s public actors (‘bureaus’). Portland clearly says ‘no’ to both state and private entities.

Baltimore’s ordinance (2021) prohibits, first, the city of Baltimore from obtaining a face recognition system and from contracting with other entities with a view to using such systems (some biometric security systems are exempted) and, second, private actors from obtaining, retaining, accessing, or using a face recognition system or information gathered from such a system (certain biometric security systems and Maryland’s Image Repository System are exempted). Remarkably, in case of violation of the provisions banning private actors’ use, the ordinance provides not only civil but also criminal remedies: ‘§ 18-3. Penalties […] Any person who violates any provision of this subtitle is guilty of a misdemeanor and, on conviction, is subject to a fine of not more than $1,000 or imprisonment for not more than 12 months or both fine and imprisonment […] Each day that a violation continues is a separate offense […].’Footnote 28

After analysing these US texts, we detected three key ideas that encapsulate the overall approach followed by the US regulators:Footnote 29

Concreteness and precision: We appreciated the unambiguous clarity of the US initiatives, which appear to have clear objectives and target concrete and intrusive technological uses. Compared with the EU regime, US provisions are more demanding with respect to various requirements. First, although some bans are conditional upon consent, the latter goes beyond the EU model – demanding not only that consent be ‘informed’, ‘specific’, and so forth (terms also present in the GDPR), but also focussing on the independent, genuine will of the person concerned, who must be free from outside control. These demands make the US prohibition stronger and more honest than the EU’s ban, which is accompanied by a long list of exceptions.Footnote 30 Second, some duties and prohibitions concretely set out in the US texts are completely absent in the EU. These include the prohibition on discrimination, the prohibition on profiting, the application of standards of care, and the treatment of biometric data as particularly sensitive and confidential information.

Bright-line bans: We saw explicit prohibitions on certain technologies or surveillance practices, often reaching the level of unconditionality. In this regard, Portland and its ordinances very well illustrate how both private and public actors can be prohibited from using FRTs. Remarkably, the US prohibitions aim to protect vulnerable groups (such as children) and anticipate, or probably avoid, possible abuses of coercive powers (for instance, by prohibiting law enforcement from using surveillance cameras). Even where ban-techniques, such as moratoria, can end upon the (future) introduction of laws that would allow for relevant uses, the United States demands that such laws be particularly detailed in various terms, ranging from lists of authorised entities to operation-standards, auditing duties and compliance-mechanisms. Probably, the best example is given by section 3(a)–(b) of the Federal 2020 Facial Recognition and Biometric Technology Moratorium Act quoted earlier.Footnote 31

Practical organisation of remedies: We noted the United States’ supremacy in combining several legal fields (e.g., market, competition, or criminal law/procedure) with a view to enhancing the effectiveness of its remedy-scheme. Good examples can be found in the 2019 Commercial Facial Recognition Privacy Act (section 4(a)),Footnote 32 and in the Ordinance ‘Surveillance Technology in Baltimore’.Footnote 33

One could argue that the EU’s general approach allows for an always-present regime covering any technological implementation; and, in our recent EU–United States comparative analysis, we addressed pros and cons of both general and concrete law-making, finding persuasive arguments for both approaches.Footnote 34 However, in our opinion, what makes bright-line regulation more desirable (and more protective) is the very principle of legality.Footnote 35 If laws are general and abstract by-design, then they risk becoming human rights-incompatible by default. If law enforcement and other state actors are not told by the lawmaker in simple, clear, and detailed language what they can and cannot do, not only are citizens under-protected, but also regulators are confused. Experience has indeed shown that lack of bright-line-rule-setting has confused and puzzled regulators, who may not be able to fully foresee or tell the legal grounds upon which proposed bans can be introduced.Footnote 36

Today, with the tremendous challenges posed by the global reach of any anywhere-based tech-firm,Footnote 37 as well as the mass adoption of latest technologies and pilot programmes in both private and public arenas,Footnote 38 we encounter concrete risks from concrete uses (from school-areas involving vulnerable children to work environments obliging employees to be surveilled) that appear to demand concrete rule-setting.Footnote 39 And, in our view, effectiveness of such precise rule-making can be enhanced by uniform enforcement aimed at scrutinising, banning, or sanctioning specific surveillance practices. At least one case, namely Clearview (discussed in Section 10.4), can support the claim that the ideal solution can include both precise rule-making and uniform enforcement.

10.4 Regulatory Strategy: Uniform Enforcement

In May 2021, several national data protection authorities and organisations submitted complaints against Clearview, an American face recognition-tech firm. The firm held the (allegedly) largest known database of its kind (more than 3 billion facial images). With its AI technology, it searches the web for photographs of human faces, stores them in its proprietary database, and sells access to other firms or law enforcement authorities.Footnote 40

Elsewhere, we have critically approached the Clearview case, questioning the legal grounds for data collection and further processing, as well as doubting the lawfulness of its sharing practices – particularly in relation to EU law enforcement authorities.Footnote 41 These concerns were recently shared by two national authorities.

In a joint investigation initiated in July 2020, the United Kingdom’s Information Commissioner’s Office (ICO) and the Office of the Australian Information Commissioner (OAIC) gathered evidence from the web and separately examined uses of the relevant data by their law enforcement entities.Footnote 42 After stressing the global nature of the digital space and the resulting need for a global regulatory approach, they highlighted new challenges posed by Clearview’s practices.Footnote 43 According to the ICO’s preliminary opinion, the firm had probably failed to comply with data protection laws in various respects (including unfair processing, lack of mechanisms to avoid forever-storage, no legal basis, and opaque processing).Footnote 44 After expressing its intent to impose a provisional fine on the firm and issuing a provisional notice to halt processing and erase the relevant data,Footnote 45 the ICO imposed a fine of £7.5 million and ordered deletion.Footnote 46 While it was clarified that the firm’s services are no longer offered in the United Kingdom, the ICO stated that, in light of Clearview’s opaque practices, there is no guarantee that it will stop processing the data of UK citizens.Footnote 47

What the Clearview case reveals is that uniform enforcement, collaboration (in the sense of looking for ways to make different approaches work), and co-ordination can successfully tackle the transnational, global reach, risk, and potential harm of surveillance practices. The success is not the imposition of the huge fine; rather, it is the determination of the regulators (ICO and OAIC), expressed and materialised via rigorous investigations and the targeted application of the law to a concrete technological use: Clearview’s risky, opaque, and harmful practice, exercised at a global level and potentially affecting each individual citizen.

Such a global exercise can very well be halted and sanctioned by collaborating regulators at national level. One could claim that Clearview’s fine and the order to delete data may fail to ‘frighten’ gigantic firms; however, if collaboration between national authorities were embraced by various states, then analogous fines and orders imposed and issued by various domestic entities could have a considerable impact on the financial status of Clearview and similar big firms. Indeed, state authorities, finding an absence of a legal basis, have taken steps in that direction against Clearview: Italy, for example, imposed a fine of EUR 20 million,Footnote 48 and France ordered the firm to halt processing.Footnote 49 For a further discussion of the Clearview case, we refer to the analysis by Orla Lynskey, who insists on the limits of a European human rights approach.Footnote 50 Judges and data protection authorities are inclined to avoid general statements about facial recognition and to limit their intervention to the cases involving facial recognition brought before them. The UK and French data protection authorities demand ‘settled evidence’ about the negative impact of the technology. Rather than banning a technology, they opt for prohibiting a certain processing activity. The Greek and Italian data protection authorities did indeed ban the Clearview processing activity, but only for future collection and processing of data through the company’s facial recognition system. The Italians, moreover, only ordered the company to erase the data relating to individuals in Italy. The United Kingdom’s ICO only ‘banned’ the web scraping by Clearview, but did not put a ban on Clearview’s facial recognition activities.

While in the EU Clearview's abuses were sanctioned with fines and halting orders, in Illinois the firm was subjected to a clear, quasi-permanent, and almost erga omnes ban. More concretely, the American Civil Liberties Union (ACLU), a US-based organisation fighting for human rights and freedoms, brought a case against the firm, claiming violation of the Illinois Biometric Information Privacy Act. On 11 May 2022, the court accepted a settlement under which Clearview is permanently prohibited from offering its services to numerous private entities in the entire United States, as well as to all entities (including the police) of the state of Illinois (the latter ban for the following five years).Footnote 51 The result is a settlement with compromises.Footnote 52 Clearview AI settled the lawsuit without admission of liability. There is a nationwide 'Private Entity Ban',Footnote 53 supplemented with an 'Illinois State Ban' (no facial recognition services for state or local government entities, including Illinois law enforcement),Footnote 54 but for law enforcement services outside Illinois there is also a law-enforcement-friendly 'Savings Clause',Footnote 55 a shaky system to prevent further web scraping without consent for Illinois residents, and no obligation to delete previously collected data.Footnote 56 It is not simple to compare the outcomes of this settlement with the preceding outcomes in the EU. Within the state of Illinois, the Illinois Biometric Information Privacy Act has delivered some of its promises and even more: Clearview is permanently banned, nationwide, from making its faceprint database available to most businesses and other private entities. The company also has to cease selling access to its database to any entity in Illinois, including state and local police, for five years. The Illinois Act had already been used successfully to settle facial recognition claims against Facebook,Footnote 57 and IBM,Footnote 58 and has clearly brought home the message in the United States that, even for publicly available data, a citizen may claim that processing personal data without consent violates the law.Footnote 59

Two remarks before concluding. First, in the EU, national authorities successfully defended citizens' rights and freedoms by jointly investigating the firm's practices and, after seeing the harm done, enforcing the law and proceeding to various sanctions, including halting orders and fines. Second, in the United States, there was a permanent – and almost erga omnes – ban prohibiting Clearview from selling its technology. Clearly enough, if the United States' clear law-making were combined with the EU's uniform enforcement, citizens would be better and more effectively protected against surveillance practices.

10.5 Conclusion: Precise Rule-Making and Uniform Enforcement as a Twofold Solution against Undesired Surveillance Practices

This analysis has shown that new technological trends, from the monitoring of emotions to attempts to predict feelings, can pose novel, serious challenges that existing laws have failed to adequately tackle. This has in turn created new needs for global citizens: in particular, enhanced protection against increasing tech-interference. Looking to other jurisdictions for insights into how their targeted and precise regulations may better address new threats can offer useful lessons. Indeed, the US approach could offer insights into how specific uses and concrete harms could be more effectively avoided. Our argument for the supremacy of the US initiatives is intended neither to dignify nor to deify the United States. Rather, it is to support the view that targeted and precise law-making is a matter of legality; in its absence, laws risk violating human rights simply by being abstractly designed. This is a claim we have already raised in previous publications;Footnote 60 in this chapter, we have engaged in a meta-analysis to further argue that the effectiveness of bright-line rules can be enhanced by uniform enforcement. The Clearview section exemplifies how collaboration in enforcing the rules can work.

In our opinion, precise laws banning, halting, and sanctioning certain practices are not to be seen as vengeance or revenge, as fighting back against firms and their mass- and over-surveilling technologies. Rather, they are to be seen as sincere manifestations of legality. And, when uniformly enforced, they are to be seen as honest manifestations of fairness. If numerous firms are bringing technologies into the market, the court, the law enforcement area, the school, the employment arena, and any other domain one might imagine, those technologies could be abused by strong entities such as the state and used against weak parties such as the individual citizen; it would therefore make sense to demand that multiple actors (from investigating entities to administrative supervisory authorities) jointly enforce precise rules from various areas, such as competition or criminal law.

With these recommendations, we do not suggest that all tech-pioneers be treated as possible criminals who should be chased by the entire enforcement mechanism for designing technologies that might then be abused by the state. Such a far-reaching scenario, an erga omnes regime attacking any tech-developer, would probably not be desirable. What is desirable, in our opinion, is a targeted, clear, and rigorous scheme applicable to those disrespecting legality and fairness to the detriment of anyone – from our children to our neighbours, ethnic or other minorities. If, for instance, a law bans our kids from being watched in classrooms or when they play in the schoolyard, because such monitoring would have a hostile impact on their personality development, their freedom of expression, their privacy, or their very dignity, then perhaps the tech-developer that violated that law by selling surveillance cameras to schools should have its criminal record permanently marked to remind society of the harm suffered by those kids. Even though, in this example, no blood was spilled and no kid died of the camera-watching, citizens may want to remember the detriment this for-profit designer caused to our kids, their personality, their freedom of expression, their privacy, and their dignity – things any citizen would die and spill blood for.

11 Lawfulness and Police Use of Facial Recognition in the United Kingdom: Article 8 ECHR and Bridges v. South Wales Police

Nora Ni Loideain
11.1 Introduction

Police use of facial recognition is on the rise across Europe and beyond.Footnote 1 Public authorities state that these powerful algorithmic systems could play a major role in assisting them to prevent terrorism, reduce crime, and more quickly locate and safeguard vulnerable persons (online and offline).Footnote 2 There is also an international consensus among policymakers, industry, academia, and civil society that these systems pose serious risks to the rule of law and to several human rights integral to the existence of a democratic society.Footnote 3 These include the rights to private life, freedom of expression, freedom of assembly and association, and equality, as guaranteed under the European Convention on Human Rights (ECHR).Footnote 4

In response to these ‘profound challenges’, policymakers and researchers have called for law reform that would provide greater clarity on the limits, lawfulness, and proportionality of facial recognition and other emerging AI-based biometric systems (such as gait and emotion recognition).Footnote 5 Consequently, some local and state governments in the United States have placed legal restrictions or banned law enforcement use of facial recognition technologies.Footnote 6 During the pre-legislative stages of the proposed EU AI Act, the European Parliament has also issued calls for a ban on the use of private facial recognition databases in law enforcement.Footnote 7 The world’s first case examining the legality of a facial recognition system deployed by police, Bridges v. South Wales Police, thus remains an important precedent for policymakers, courts, and scholars worldwide.Footnote 8

This chapter focusses on the role and influence of the right to private life, as enshrined in Article 8 ECHR and the relevant case law of the European Court of Human Rights (ECtHR), in the 'lawfulness' assessment of the police use of live facial recognition (LFR) in Bridges, a framework that the Court of Appeal for England and Wales ultimately held was 'not in accordance with the law' for the purposes of Article 8(2) and therefore in breach of Article 8 ECHR.Footnote 9 The analysis also considers the emerging policy discourse prompted by Bridges in the United Kingdom (UK) surrounding the need for new legislation.Footnote 10 This marks a significant shift away from the current AI governance approach of combining new ethical standards with existing law.Footnote 11

11.2 Facial Recognition Systems in Law Enforcement: Legal and Technical Issues

Within a legal context, the use by public authorities of any preventive measures that will indiscriminately capture and analyse the biometric data of a vast number of innocent individuals raises significant questions. These include whether rule of law requirements developed for ensuring adequate limits, safeguards, and oversight for police surveillance systems in the pre-Internet era, such as closed-circuit television (CCTV), remain adequate and relevant to facial recognition systems and other modern internet-enabled and automated monitoring systems; furthermore, whether such novel and powerful AI-based technologies are strictly necessary in a democratic society and respect the presumption of innocence.Footnote 12 Privacy and data protection concerns have also been raised regarding the transparency and oversight challenges posed by the increasing role of the private sector within the areas of law enforcement and public security. These developments range from law enforcement use of data-driven tools and systems developed by industry, including commercial facial recognition software,Footnote 13 to tasking industry itself with law enforcement functions.Footnote 14

11.2.1 Facial Recognition Systems in Law Enforcement: Issues of Accountability and Bias

The use of AI-based biometric systems for law enforcement purposes raises several legal and technical issues. First, there are transparency and accountability challenges that may hinder adequate independent auditing and oversight of their overall efficiency and societal impacts. These stem from the opaque design and operation of commercial facial recognition systems, including intellectual property issues, what training datasets are used, the risk of those datasets being unfairly biased, and how exactly these automated decisions and recommendations are being made (and fairly assessed) by public authorities.Footnote 15 The second concerns scientific evidence that facial recognition software currently designed and developed by industry, and subsequently used for law enforcement, is biased with a greater risk of false identifications (‘false positives’) for women and people from black, Asian, and other minority ethnic backgrounds.Footnote 16 There is then a risk that such groups may be disproportionately affected by this technology. This is particularly problematic given the need for public trust in police powers being used lawfully and responsibly and existing evidence of racial bias across the UK justice system (and indeed in other jurisdictions), with resulting harms including false arrests and over-policing of already vulnerable communities.Footnote 17

11.2.2 Facial Recognition Systems in Law Enforcement: ‘Real-Time’ and Historical

Police trials of industry-developed facial recognition systems have been taking place in the UK since 2014.Footnote 18 Automated facial recognition (AFR) implies that a machine-based system performs the recognition, either for the entire process or with assistance provided by a human being. Live automated one-to-many matching compares near real-time video images of individuals against a curated watchlist of facial images. In a law enforcement context, this is typically used to assist the recognition of persons of interest on a watchlist, which means that police are required to verify or over-ride a possible match identified by the system (a system alert) and decide what actions to take (if any).Footnote 19 However, as regulators and scholars highlight, much uncertainty in UK law (and in laws across Europe) surrounds the complex legal framework governing police use of real-time and historical (retrospective/post-event) LFR and other biometric identification systems.Footnote 20
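
For readers less familiar with how such systems operate computationally, the following minimal Python sketch illustrates the one-to-many, alert-and-verify workflow just described. It is purely illustrative: the helper names (embed, similarity), the data structures, and the threshold value are assumptions chosen for exposition, not any vendor's actual implementation.

from dataclasses import dataclass
from typing import Callable, Dict, Iterable, Iterator, Sequence

@dataclass
class Alert:
    """A 'system alert': a candidate match awaiting human verification."""
    frame_id: int
    watchlist_entry: str
    score: float

def live_one_to_many(
    frames: Iterable,                       # faces detected in a live video feed
    watchlist: Dict[str, Sequence[float]],  # name -> stored face template
    embed: Callable,                        # face image -> numeric template
    similarity: Callable,                   # (template, template) -> score
    threshold: float = 0.85,                # set by the operating force
) -> Iterator[Alert]:
    """Scan a live feed against a curated watchlist (one-to-many matching).

    Every yielded Alert is only a possible match: under the deployment
    model described above, a police officer must verify or over-ride it
    before deciding what action, if any, to take.
    """
    for frame_id, face in enumerate(frames):
        probe = embed(face)
        best_name, best_score = max(
            ((name, similarity(probe, ref)) for name, ref in watchlist.items()),
            key=lambda pair: pair[1],
        )
        if best_score >= threshold:
            yield Alert(frame_id, best_name, best_score)  # routed to an operator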

In the case of historical (post-event) facial recognition systems, an individual's facial data are compared, in searches carried out by public authorities after the event, against images previously collected through various sources, and the individual is identified on that basis. These sources include custody photographs and video footage from CCTV, body-worn police cameras, or other private devices. Both the ECtHR and the Court of Justice of the EU (CJEU) view real-time access to these automated biometric systems, as opposed to searching through previously collected facial images, as inherently more invasive.Footnote 21 Yet these judgments do not explain why tracking a person's movements or attendance at certain events (such as public protests) over months or years should be viewed as less invasive of their privacy than one instance of real-time identification, particularly given the capacity of these automated systems to identify thousands of individuals using facial images in only a few hours.Footnote 22

Such legal uncertainty would be less likely if these issues had already been addressed in a clear legislative framework regulating law enforcement use of facial recognition systems. As the UK House of Lords rightly points out, while the courts play 'an essential role in addressing breaches of the law, we cannot expect the Courts to set the framework for the deployment of new technologies'.Footnote 23 In other words, it is not the function of courts to provide a detailed and comprehensive legal framework for police powers, though they may provide careful scrutiny of the current law and its application in specific circumstances. This brings us to Article 8 ECHR and its relevance to the landmark case of Bridges, where police use of LFR was (ultimately) held not to have met the legality requirements of Article 8(2).

11.3 Justifying an Interference with Article 8 ECHR
11.3.1 Police Collection of Biometric Data: An Interference with Article 8(1)

The ECtHR has described the negative obligation to protect against arbitrary interference by a public authority with a person's private life 'as the essential object' of Article 8 ECHR.Footnote 24 It is also well-established case law that the mere storage of data 'relating to the private life of an individual' for the prevention of crime constitutes an interference with the right to respect for private life.Footnote 25 The ECtHR Grand Chamber has further held that it is irrelevant whether information collected by interception or other secret measures has subsequently been accessed, used, or disclosed.Footnote 26 Public information has also been held to fall within the scope of private life when it is systematically collected and stored by public authorities.Footnote 27

In determining whether the retention of this personal data involves any 'private-life' aspects, the ECtHR will have due regard to the specific context in which the information has been recorded and retained, the nature of the records, the way in which these records are used and processed, and the results that may be obtained.Footnote 28 These standards all derive from the long-established principle in ECHR case law that 'private life' is 'a broad term not susceptible to exhaustive definition'.Footnote 29 As a result, this concept has been interpreted broadly by the Strasbourg Court in cases involving Article 8 ECHR and any data collection, retention, or use by public authorities in a law enforcement context. Even if no physical intrusion into a private place occurs, surveillance can still interfere with physical and psychological integrity and the right to respect for private life. For instance, in Zakharov v. Russia, the ECtHR Grand Chamber held that Russian laws providing security agencies and police with remote direct access to the databases of mobile phone providers in order to track users contained several 'defects' owing to a lack of adequate safeguards against abuse, thereby constituting a breach of Article 8 ECHR.Footnote 30

The ECtHR has also shown itself to be particularly sensitive to the 'automated processing' of personal data and the unique level of intrusiveness on the right to private life posed by the retention and analysis of biometric data for law enforcement purposes, particularly DNA.Footnote 31 Biometric data (DNA, fingerprints, facial images) are a highly sensitive source of personal data because they are unique in identifying an individual and may also be used to reveal other sensitive information about an individual, their relatives, or related communities, including their health or ethnicity. Consequently, the ECtHR has held that even the capacity of DNA profiles to provide a means of 'identifying genetic relationships between individuals' for policing purposes amounts to a privacy interference of a 'highly sensitive nature' and requires 'very strict controls'.Footnote 32

At the time of writing, there has been no judgment in which the ECtHR has been required to specifically review the compatibility of police use of an LFR system with Article 8 ECHR. This is surely, however, an important question on the horizon for the Strasbourg Court, particularly as the technology has already featured in the legal analysis of related case law. In Gaughran v. United Kingdom, in determining that the taking and retention of a custody photograph on the applicant's arrest clearly amounted to an interference with Article 8(1), the ECtHR highlighted as a factor the possibility that the police 'may also apply facial recognition and facial mapping techniques' to the photograph.Footnote 33 Current jurisprudence therefore leaves little doubt that the collection, retention, or analysis of an individual's facial image for the prevention of crime (irrespective of where or how it was acquired) amounts to an interference with the right to private life, as guaranteed under Article 8 ECHR.

11.3.2 The Legality Requirements under Article 8(2): The Traditional Approach

Under the traditional approach of the ECtHR in its assessment of whether an interference with Article 8(1) is justified, there is a two-stage test. First, as noted earlier, the ECtHR assesses whether the complaint falls within the scope of Article 8(1) and whether the alleged interference by the contracting state (such as the UK) has engaged Article 8(1). If so, the ECtHR will then examine whether the interference with one of the protected interests in Article 8(1) (in this instance, ‘private life’) meets the conditions of Article 8(2). The three conditions examined during this second stage concern whether the interference is ‘in accordance with the law’ (legality), pursues one of the broadly framed legitimate aims under Article 8(2) (including the prevention of crime), and whether it is ‘necessary in a democratic society’ (proportionality). If a measure is determined not to have satisfied the requirements of the legality condition, the ECtHR will not proceed to examine the proportionality condition.Footnote 34

The traditional approach of the ECtHR, when determining if an interference meets the legality condition under Article 8(2), requires that the contested measure satisfy two principles. The measure must have ‘some basis in domestic law’ and, secondly, must also comply with the rule of law.Footnote 35 In its early jurisprudence, the ECtHR established that the principle of having some basis in domestic law comprises legislation and judgments.Footnote 36 The second principle focusses on the ‘quality’ of the domestic law, which involves meeting the tests of ‘accessibility’ and ‘foreseeability’.Footnote 37 As police operation and use of surveillance measures by their very nature are not open to full scrutiny by those affected or the wider public, the ECtHR has stated that it would be ‘contrary to the rule of law for the legal discretion granted to the executive or to a judge to be expressed in terms of an unfettered power’.Footnote 38

Thus, as part of the Article 8(2) foreseeability test, the ECtHR developed six ‘minimum’ safeguards the basis in domestic law should address to avoid abuses of power in the use of secret surveillance. These comprise: the nature of the offences where the measure may be applied; a definition of the categories of people that may be subjected to this measure; a limit on the duration of the measure; the procedures to be followed for the examination, use, storage of the obtained data; precautions to be taken if data is shared with other parties; and the circumstances in which obtained data should be erased or destroyed.Footnote 39 With regard to police use of emerging technologies, the ECtHR has consistently held that such measures ‘must be based on a law that is particularly precise … especially as the technology available for use is continually becoming more sophisticated’.Footnote 40 The ECtHR has further stressed, in cases where biometrics have been retained for policing purposes, that the need for data protection safeguards is ‘all the greater’ where ‘automatic processing’ is concerned.Footnote 41

This traditional approach, and the resulting legality standards developed and applied therein in landmark Article 8 ECHR judgments, have shaped and brought about notable legal reforms in domestic laws governing data retention and secret surveillance by public authorities across Europe.Footnote 42 Scholars have long recognised this impact by highlighting the major role played by this Article 8 ECHR jurisprudence in entrenching and ratcheting up data privacy standards in EU countries and within the legal system of the EU.Footnote 43 Based on these Article 8 ECHR standards, these minimum legality requirements seem no less than essential to ensuring adequate accountability and oversight of police surveillance powers. Indeed, as the ECtHR points out, this is an area ‘where abuse is potentially so easy in individual cases and could have such harmful consequences for democratic society as a whole’.Footnote 44 However, more recent case law dealing with Article 8 ECHR and the legality of police powers has diverged from this lauded approach.

11.3.3 The Legality Requirements under Article 8(2): The à la carte Approach

Two key developments in its jurisprudence have contributed to the departure of the ECtHR from its previously lauded role in setting minimum standards for the review of laws governing government surveillance and police investigatory powers across Europe.

11.3.3.1 The Hierarchy of Intrusiveness

First, the ECtHR has established that the scope of the safeguards required to meet the legality requirements under Article 8(2) will depend on the nature and extent of the interference with the right to private life.Footnote 45 This means that the ECtHR will not apply the same strict-scrutiny approach regarding what requirements must be met by interferences it considers to be less intrusive and thus to affect an individual's rights under Article 8(1) less seriously.Footnote 46 Accordingly, the ECtHR may assess a measure to be a justified interference with Article 8 ECHR even if the domestic legal basis does not incorporate the six minimum foreseeability safeguards.Footnote 47 Application of this 'hierarchy of intrusiveness' principle is clearly evident in the general legality assessments of the High Court and Court of Appeal in Bridges discussed in Section 11.4.

11.3.3.2 The Joint Analysis of Legality and Proportionality

Secondly, and perhaps more importantly, scholars have raised concerns regarding a shift away from the traditional approach of the ECtHR in its Article 8 ECHR case law dealing with data retention and state surveillance. This often takes the form of an assessment that combines the legality and proportionality conditions under Article 8(2) and conflates separate principles and requirements under the distinct conditions of legality and proportionality.Footnote 48 From a rule of law perspective, this shift away from the traditional approach to the Article 8(2) stage of assessment is highly problematic as it makes less systematic and clear what is already a case-by-case analysis by the ECtHR. The resulting assessment of the domestic law is often ad hoc, patchy, and invariably less detailed regarding what specific standards contracting states should be satisfying if a contested measure is to be considered compatible with Article 8 ECHR.

Thus, as Murphy rightly notes, this joint analysis has resulted in the ECtHR applying less scrutiny of the accessibility and foreseeability legality tests, thereby serving to weaken the substantive protection of the right to respect for private life provided under Article 8 ECHR.Footnote 49 Indeed, the ECtHR may also determine (without any detailed reasoning) that no rule of law assessment at all be undertaken and that the Article 8(2) stage assessment proceed directly to an examination of the proportionality condition. Catt v. United Kingdom illustrates the application of this à la carte approach to the requirements of Article 8(2), where the legality condition assessment is entirely omitted despite being the core issue before the ECtHR.

11.3.3.3 Catt v. United Kingdom: The Danger of Ambiguous Common Law Police Powers

The main facts in Catt involve the overt collection and subsequent retention of more than sixty records (including a photograph) on an ‘Extremism database’ concerning the applicant’s attendance at protests between 2005 and 2009. The applicant was never charged or accused of any violent conduct as part of these protests.Footnote 50 An instrumental factor in Catt and Bridges is the broad scope of the ‘common law’ in England and Wales, which allowed for the police collection and storage of information in both cases.Footnote 51 Based on the undefined scope of these police powers, and the lack of clarity regarding what fell within the concept of ‘domestic extremism’, the ECtHR in Catt states that there was ‘significant ambiguity over the criteria being used by the police to govern the collection of the data in question’.Footnote 52 A year later, in Bridges, the Court of Appeal would also criticise the same lack of clarity surrounding the criteria and limits underpinning the use of LFR by South Wales Police (SWP).

The Article 8(2) assessment in Catt then takes a curious turn. Following a bald statement that the question of whether the collection, retention, and use of the applicant’s personal data is in accordance with the law is ‘closely related to the broader issue of whether the interference was necessary in a democratic society’, the ECtHR observes that it is not necessary for the legality condition to be examined.Footnote 53 The ECtHR proceeds to then hold that the retention of the applicant’s personal data on this police database, and the fact that this retention occurred based on no ‘particular inquiry’, constituted a disproportionate interference with Article 8 ECHR.Footnote 54 The ECtHR was particularly critical that the applicant’s personal data in Catt could potentially have been retained indefinitely owing to ‘the absence of any rules setting a definitive maximum time limit on the retention of such data’.Footnote 55 The ECtHR further observes that the applicant was ‘entirely reliant’ on the application of ‘highly flexible safeguards’ in non-legally binding guidance to ensure the proportionate retention of his data.Footnote 56 In other words, as Woods rightly points out, this is ‘hardly a ringing endorsement of broad common law powers’.Footnote 57

However, despite its recognition of the ‘danger’ posed by the ambiguous approach to the scope of data collection under common law police powers,Footnote 58 the ECtHR sidesteps dealing with the lack of any clear legal basis or any assessment of the six minimum foreseeability safeguards. By departing from the traditional approach in its assessment of Article 8 ECHR, the ECtHR stops short of any detailed scrutiny of these requirements under the legality condition of Article 8(2). This allows the ECtHR to avoid addressing whether the ‘common law’ basis for police collection of personal data in the UK provides the ‘minimum degree of legal protection’ to which citizens are entitled under the rule of law in a democratic society.Footnote 59 Indeed, the curious decision of the ECtHR not to deal with these clear legality issues, and the resulting lax approach, is subject to strong criticism from members of the Strasbourg Court itself in Catt.Footnote 60 The latter stressed that the unresolved ‘quality of law’ questions posed by the contested common law police powers is actually ‘where the crux of the case lies’.Footnote 61 This à la carte approach to the rule of law requirements in Catt is also clearly evident in the assessment of the LFR system by the national courts in Bridges, examined in Section 11.4.

11.4 Bridges v. South Wales Police: The ‘Lawfulness’ of AFR Locate
11.4.1 Background and Claimant’s Arguments

This landmark case involves two rulings, the most significant being the Court of Appeal judgment delivered in 2020.Footnote 62 The claimant/appellant was Edward Bridges, a civil liberties campaigner who lived in Cardiff. His claim was supported by Liberty, an independent civil liberties organisation. The defendant was the Chief Constable of SWP. SWP is the national lead on the use of AFR in policing in the UK and has been conducting trials of the technology since 2017.Footnote 63 The software used by SWP for LFR in public places was developed by NEC (now Northgate Public Services (UK) Ltd).Footnote 64 In Bridges, AFR Locate was deployed by SWP via a live feed from CCTV cameras to match facial images and biometrics against watchlists compiled from existing custody photographs. SWP would be alerted to a possible match by the software (subject to a threshold level set by SWP) and the police would verify the match, determining whether any further action, such as making an arrest, was required if the match was confirmed.Footnote 65

Mr Bridges challenged the lawfulness of SWP’s use of the AFR Locate system in general, and made a specific complaint regarding two occasions when his image (he argued) was captured by the system. The first occasion was in a busy shopping area in December 2017, the second at a protest attended by the claimant in March 2018.Footnote 66 Regarding the legality requirements of Article 8 ECHR and use of this LFR system by SWP, the claimant submitted two main arguments. First, there is ‘no legal basis’ for the use of AFR Locate and thus SWP did not, as a matter of law, have power to deploy it (or any other use of AFR technology). Secondly, even if it was determined that some domestic basis in law existed, it was not ‘sufficient’ to be capable of constituting a justified interference under Article 8(2).Footnote 67 This contrasts with legal provisions under the Police and Criminal Evidence Act 1984 and its related Code of Practice, which specifically state the circumstances that apply to police collection and use of DNA and fingerprints.Footnote 68

The claimant submitted that to satisfy the legality condition of Article 8(2) there must be a legal framework that specifies the following five safeguards. First, the law should specify the circumstances and limits by which AFR Locate may be deployed, such as only when there is ‘reasonable suspicion’ or a ‘real possibility’ that persons who are sought may be in the location where AFR Locate is deployed. Secondly, the law should place limits on where AFR Locate may be deployed. Thirdly, the law should specify the ‘classes of people’ that may be placed on a watchlist, further arguing that this be limited to ‘serious criminals at large’. Fourthly, the law should state the sources from where images included in watchlists may be obtained. Finally, the law should provide ‘clear rules relating to biometric data obtained through use of AFR Locate’. This should include how long it may be retained and the purposes for which such information may (or may not) be used.Footnote 69

The claimant thus challenged the absence of any accessible or foreseeable legal framework (in legislation or any related Code of Practice) that explicitly and clearly regulates the obtaining and use of AFR technology by SWP (or any police force) in England and Wales. In her role as an intervener before the High Court in Bridges, the then Information Commissioner (the statutory regulator of UK data protection law) made similar arguments. While she did not seek to limit the categories of persons who might be included on watchlists, her submission was that the ‘categories of who could be included on a watchlist needed to be specified by law’. She also submitted that the purposes for which AFR Locate could be used should be specified in law. Finally, she argued that any use of AFR Locate, and any decision as to who should be included on a watchlist, needed to be the subject of ‘independent authorisation’.Footnote 70

11.4.2 Police Collection, Use, Retention of a Facial Image: An Interference with Article 8(1)

Both the High Court and the Court of Appeal engage in detail, and at length, with the Article 8 ECHR case law of the ECtHR in their assessments that SWP's use of AFR Locate amounted to an infringement of the applicant's rights under Article 8(1). As the High Court states: 'Like fingerprints and DNA, AFR technology enables the extraction of unique information and identifiers about an individual allowing his or her identification with precision in a wide range of circumstances. Taken alone or together with other recorded metadata, AFR-derived biometric data is an important source of personal information.'Footnote 71 This determination is unsurprising for two reasons.

First, as noted earlier, the ECtHR has consistently held that the collection and use of biometric data using automated processing for police purposes constitutes an interference with Article 8(1). Secondly (and perhaps more importantly), none of the parties contested that use of the AFR Locate system by SWP constitutes an interference with Article 8(1).Footnote 72 Nevertheless, as the first judgment worldwide to hold that a police force’s use of LFR constituted an interference with Article 8 ECHR, this assessment in Bridges represents an important legal precedent in European human rights law and international human rights law.

11.4.3 Was SWP Deployment of AFR Locate ‘In Accordance with the Law’ under Article 8(2)?
11.4.3.1 High Court Finds Common Law Powers ‘Amply Sufficient’: No Breach of Article 8 ECHR

With respect to the lack of a specific statutory legal basis for SWP's use of LFR, SWP and the Secretary of State submitted to the High Court that the police's common law powers constituted 'sufficient authority for use of this equipment'.Footnote 73 The High Court accepted this argument. In its reasoning, the High Court cited at length previous case law in which the extent of the police's common law powers has generally been expressed in very broad terms. In particular, the High Court relied heavily on the controversial majority judgment in the UK Supreme Court case of Catt.Footnote 74 The High Court gave considerable weight to a specific passage by Lord Sumption JSC, who states in Catt that at 'common law the police have the power to obtain and store information for policing purposes … [provided such] powers do not authorise intrusive methods of obtaining information, such as entry onto private property or acts … which would constitute an assault'.Footnote 75

The High Court then observed that the 'only issue' for it to consider was whether using CCTV cameras fitted with AFR technology to obtain the biometric data of members of the public in public amounts to an 'intrusive method' of obtaining information as described by Lord Sumption JSC in Catt. Observing that the AFR Locate system's method of obtaining an image 'is no more intrusive than the use of CCTV in the streets', the High Court held that such data collection did not fall outside the scope of the powers available to the police at common law.Footnote 76 Regarding the use of watchlists within the AFR Locate system, the High Court swiftly concluded that, as the relevant images were acquired by way of police photography of arrested persons in custody, the police already have explicit statutory powers to acquire, retain, and use such imagery under the Police and Criminal Evidence Act 1984.Footnote 77 The High Court also took no issue with the ambiguity of the broadly framed scope for watchlists, which may cover any 'persons of interest' to the police. The grounds for this reasoning were that the compilation of any watchlists 'is well within the common law powers of the police … namely "all steps necessary for keeping the peace, for preventing crime or for protecting property"'.Footnote 78

The High Court briefly refers to the general requirements of accessibility and foreseeability, but there is no mention of (or any engagement with) the six minimum safeguards implicitly raised in the claimant's submission on legality. Instead, the court distinguishes the need for AFR Locate to have 'detailed rules' or any independent oversight governing the scope and application of police retention and use of biometrics (as set out in the ECtHR jurisprudence) on two grounds: first, that facial recognition is 'qualitatively different' from the police retention of DNA, which provides access to a very wide range of information about a person; and, secondly, that it is not a form of covert (or secret) surveillance akin to communications interception.Footnote 79 In addition to the common law, the High Court stresses that the legal framework comprises three further layers, namely existing primary legislation, codes of practice, and SWP's own local policies, which it considered to be 'sufficiently foreseeable and accessible'.Footnote 80

In dismissing the claimant’s judicial review on all grounds, the High Court held the legal regime was adequate ‘to ensure the appropriate and non-arbitrary use of AFR Locate’, and that SWP’s use to date of AFR Locate satisfied the requirements of the UK Human Rights Act 1998 and data protection legislation.Footnote 81

11.4.3.2 ‘Fundamental Deficiencies’ in the Law: Court of Appeal Holds Breach of Article 8 ECHR

In stark contrast to the High Court judgment, the Court of Appeal held the use of the AFR Locate system by SWP to have breached the right to respect for private life, as protected under Article 8 ECHR via the UK Human Rights Act 1998, because of 'two critical defects' in the legal framework that leave too much discretion to individual officers.Footnote 82 The Court of Appeal highlights that the (not legally binding) guidance in the Surveillance Camera Code of Practice 2013 did not contain any requirements as to the content of local police policies on who can be put on a watchlist. Nor did it contain any guidance as to what local policies should say 'as to where AFR can be deployed'.Footnote 83

The Court of Appeal further criticised the fact that SWP’s local policies did ‘not govern who could be put on a watchlist in the first place … [and] leave the question of the location simply to the discretion of individual police officers’.Footnote 84 Thus, the Court of Appeal took issue with ‘fundamental deficiencies’ of the legal framework relating to two areas of concern, namely two safeguards from the established ECtHR Article 8 ECHR case law on the six minimum foreseeability safeguards: ‘The first is what was called the “who question” at the hearing before us. The second is the “where question” … In relation to both of those questions too much discretion is currently left to individual police officers.’Footnote 85

11.4.4 Beyond Bridges: Moves towards Regulating Police Use of Facial Recognition?

The Court of Appeal judgment represents a clear departure from the legality assessment of the High Court, particularly in its determination that SWP's use of LFR does not satisfy the requirement under Article 8 ECHR (via the UK Human Rights Act 1998) of being 'in accordance with the law'. This assessment was long awaited by civil society and scholars, who had consistently raised concerns that police LFR trials in England and Wales risked being held unlawful if challenged before the courts. Two key issues were the lack of a specific legal basis authorising police use of AFR and a lack of clarity regarding the foreseeability of the applicable circumstances and safeguards under which police services across England and Wales are lawfully permitted to use these automated systems.Footnote 86 Indeed, the former Biometrics Commissioner observed in his 2017 Annual Report that the development and deployment of automated biometric systems in use by police at that time was already 'running ahead of legislation'.Footnote 87

The Court of Appeal judgment in Bridges thus provides some clarity regarding the ‘deficiencies’ to be addressed by the current legal framework applied specifically by SWP and its deployment of a specific LFR system. Critically, however, the Court of Appeal also states that Bridges is ‘not concerned with possible use of AFR in the future on a national basis’, only the local deployment of AFR within the area of SWP.Footnote 88 Thus, the legality of police use of facial recognition systems (real-time and post-event) across the UK remains a subject of intense debate. Over 80,000 people have signed a petition (organised by UK-based non-governmental organisation Liberty) calling on the UK Government to ban all use of facial recognition in public spaces.Footnote 89 In 2022, a House of Lords report and a review on the governance of biometrics in England and Wales (commissioned by the Ada Lovelace Institute) both called for legislation that would provide greater clarity on the use, limits, and safeguards governing facial recognition and other AI-based biometric systems.Footnote 90

In the wake of the Bridges case, the Biometrics and Surveillance Camera Commissioner (BSCC) for England and Wales and civil society have also highlighted concerns regarding wide-ranging guidance from the College of Policing,Footnote 91 which gives police services considerable discretion regarding the criteria for persons who may be placed on a watchlist for LFR use. The Commissioner has noted that the broad and general scope of such guidance means LFR is not limited to the identification of suspects but may even include potential victims on such watchlists, providing police with a level of discretion that has 'profound' implications for constitutional freedoms.Footnote 92 A Data Protection Impact Assessment (DPIA) published by SWP concerning their use of LFR confirms the broad criteria for those persons who may be placed on a watchlist, including witnesses and persons 'who are or may be victims of a criminal offence'.Footnote 93

It is also important to stress that the standards set out in the College of Policing's Authorised Professional Practice (APP) are not legally binding. Nor do they constitute a statutory code of practice. In direct reference to its legal context, the APP states that its function is to provide 'direction to [police] forces that will enable them to ensure that their deployment of overt LFR [complies] with applicable legal requirements'.Footnote 94 An important caveat for police services across England and Wales, however, immediately follows. The APP implicitly acknowledges that such guidance is insufficient in and of itself to ensure the lawfulness of LFR and proceeds to specifically advise police that they should obtain 'expert legal advice' to support their use of these systems.Footnote 95

In terms of developing the foreseeability safeguards as part of the legality requirements of Article 8(2) ECHR, it is submitted that legislation should require law enforcement authorities using facial recognition systems to make publicly available the 'threshold value' applied when operating these systems. Where the system has been acquired from the private sector, this information should also explain whether (and why) the public authority has chosen to depart from the default threshold value set by the company that provided the facial recognition system(s). Independent scientific research examining the facial recognition systems used by SWP and the Metropolitan Police ServiceFootnote 96 has specifically stated that false positive identifications 'increase at lower face-match thresholds and start to show a statistically significant imbalance between demographics with more Black subjects having a false positive than Asian or White subjects'.Footnote 97 Thus, using a system with a lower threshold value increases the number of matching results but also increases the risk of unfair bias against certain societal groups by law enforcement, and should consequently be accompanied by the necessary justification and safeguards. Such information should also be shared in Data Protection Impact Assessments in order to alert regulators (and other independent oversight bodies) to the increased risk of bias posed towards certain groups and the safeguards being adopted by public authorities to address and mitigate these risks.
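
To illustrate why the published threshold value matters so much, the short Python simulation below shows how lowering a face-match threshold inflates the number of false positives. The score distributions are invented purely for illustration and do not reflect any deployed system.

import numpy as np

rng = np.random.default_rng(seed=0)

# Invented similarity-score distributions: genuine matches tend to score
# high and non-matches ('impostors') low, but the distributions overlap.
genuine_scores = rng.normal(loc=0.80, scale=0.08, size=1_000)
impostor_scores = rng.normal(loc=0.55, scale=0.10, size=100_000)

for threshold in (0.60, 0.70, 0.80):
    hits = int((genuine_scores >= threshold).sum())
    false_positives = int((impostor_scores >= threshold).sum())
    print(f"threshold {threshold:.2f}: {hits} genuine alerts, "
          f"{false_positives} false positives")

# A lower threshold catches more genuine matches but sharply increases
# false positives -- and, as the research cited above notes, that extra
# error burden can fall unevenly across demographic groups.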

11.5 Conclusions

While some valuable guidance has been provided by the Court of Appeal in Bridges, which draws (albeit in a limited way) on the lauded legality case law of the ECtHR dealing with Article 8 ECHR and police investigatory powers, the current patchwork of law governing police use of facial recognition in the UK falls short of lawful and trustworthy public policy.

As the UK House of Lords rightly points out, it is not for the courts to set the framework for the deployment of new technologies. This chapter argues that the reasoning for this is threefold. First, court judgments are not systematic, comprehensive, or evidence based. Secondly, they represent ad hoc reviews of problematic public policymaking that only occur when (and if) a legal challenge is brought before them. Thirdly, the courts will assess only a narrow scope of issues relevant to that specific case. The implications posed by the lack of an accessible and foreseeable framework for police use of AFR in the UK are significant. This gap represents a source of confusion and legal uncertainty for policymakers, police, industry, courts, and citizens, thereby giving rise to gaps and patchy protection of affected rights and safeguards, including but not limited to the right to private life. These all serve to undermine adequate and effective compliance, oversight, evaluation, and thus public trust in the use of these novel and increasingly sophisticated police powers.

There is, however, a post-Bridges discourse on lawfulness that has moved towards enacting a law specifically tailored to regulating the use of facial recognition. Such reform could address the obscurity and uncertainty in the current patchwork of legal rules in England and Wales governing the limits and safeguards underpinning police use of facial recognition, particularly the compilation and application of watchlists. Such legislation could then meet the accessibility and foreseeability tests under the legality condition of Article 8 ECHR, the 'minimum degree of legal protection' to which citizens are entitled under the rule of law in a democratic society.Footnote 98 Such reform would also enable 'greater certainty and accountability' around police use of AI-based biometric surveillance systems and other emerging technologies.Footnote 99

12 Does Big Brother Exist? Facial Recognition Technology in the United Kingdom

Giulia Gentile
12.1 Introduction

Facial recognition technology (FRT) functions by analysing key facial features to generate a mathematical representation of them, and then comparing this representation against the mathematical representations of known faces in a database to determine possible matches. It operates on digital images, whether still or taken from live camera feeds. In a policing context, FRT is used to help verify the identities of persons 'of interest' to police. State-operated surveillance involving FRT is hardly a novel phenomenon in the United Kingdom (UK); indeed, the UK has been a cradle of FRT use. Initially deployed by public entities, the technology is now widespread in the private sector as well.Footnote 1 According to a recent study, there are more than 6 million closed-circuit television (CCTV) cameras in the UK, more per citizen than in any country apart from China.Footnote 2 These cameras can take images of the faces they film and compare them against a pre-defined database of images to determine whether there is a match. That means they can be used to quickly identify individuals even in crowded areas such as shopping centres, airports, railway stations, and city streets. Even when a face is partially covered – by a cap or glasses, for example – they can still usually match it up with a stored image.Footnote 3
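
As a rough sketch of the comparison of 'mathematical representations' just described (assuming, purely for illustration, fixed-length face templates and a cosine-similarity measure; real systems' models, template dimensions, and thresholds all vary), consider the following Python fragment.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Compare two face templates; scores near 1.0 suggest the same face."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in templates: in a real system these vectors would be produced by
# a face-analysis model from still images or live camera frames.
database = {name: np.random.rand(128) for name in ("known_1", "known_2")}
probe = np.random.rand(128)

possible_matches = {
    name: score
    for name, ref in database.items()
    if (score := cosine_similarity(probe, ref)) >= 0.9  # illustrative threshold
}
print(possible_matches or "no possible match")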

The extensive presence of FRT in the UK raises concerns from the angle of democracy and individual freedoms: is the UK becoming an 'Orwellian' society in which all individuals are monitored, identified, and potentially controlled? As observed in the literature, mass surveillance has immediate implications for privacy rights, but the knowledge gathered through monitoring can also be used to curtail other individual freedoms.Footnote 4 It follows that regulation of FRT should strive to minimise interferences with privacy, and thus with other individual rights, if the democratic values of human dignity and pluralism are to be truly achieved.

Several non-governmental organisations (NGOs) established in the UK to protect privacy rights have become the centre of important strategic litigation. For instance, in the Bridges case,Footnote 6 supported by the NGO Liberty, the Court of Appeal not only invalidated the use of facial recognition technology by South Wales Police (SWP), but also drew attention to unresolved issues regarding the use of FRT for law enforcement. As a matter of fact, notwithstanding the presence of multiple legal sources governing FRT, several legal and ethical issues remain unresolved. Clear legislation on FRT is missing.Footnote 7 In which circumstances should FRT not be used? What information duties should be discharged by those utilising FRT? What remedies should exist for individuals to address abuses of this technology? These are only some of the questions that should be addressed by legislators in order to prevent the emergence of an Orwellian society. What the future holds for FRT in the UK remains to be seen. Uncertainty is even higher in light of Brexit and the potential reforms to be introduced to the UK data protection framework.Footnote 8

This chapter outlines the framework on FRT in the UK and offers reflections on the future of this technology in that jurisdiction. It is structured as follows. First, it discusses the uses of FRT in the UK and the public perceptions surrounding this technology. Second, it explores the relevant UK legal framework and highlights its gaps. Third, the chapter discusses the Bridges saga and its implications. Fourth, the chapter highlights selected regulatory matters on FRT that are currently unsettled and on which legislative guidance appears necessary to prevent the establishment of an Orwellian society in the UK. Conclusions follow.

12.2 FRT in the UK: Between Public and Private

To assess the impact of FRT in the UK, we first need to explore its use in this jurisdiction. The first observation is the extensive use of this technology by both private and public entities. Starting with the public sector, the first CCTV system in the UK was set up in 1953 in London for the Queen's coronation.Footnote 9 By the 1960s, permanent CCTV cameras began to cover certain London streets. Since then, the reach of CCTV surveillance has expanded in sporadic bursts, with many cameras installed in response to the 1990s IRA attacks and then again after 9/11 and the London Underground bombings.Footnote 10 Currently, CCTV cameras embed FRT that allows the identification of individuals against the information included in databases managed by law enforcement bodies. Policy documents produced by the Metropolitan Police and the College of Policing indicate that FRT can be used to improve the fight against crime and make people's lives safer.Footnote 11 Moreover, the British government specifies that CCTV serves four purposes: the detection of crime and emergency incidents, the recording of events for investigations and evidence, the direct surveillance of suspects, and the deterrence of crime.Footnote 12 In the past, critics argued that there was little evidence to support the proposition that its use had reduced levels of crime. An internal report dated 2009 produced by London's Metropolitan Police revealed that only one camera out of every 1,000 had been involved in solving a crime.Footnote 13

However, recent documents produced by the Metropolitan Police indicate that the main advantage of using FRT is that of making manhunts more effective. It has been observed that many manhunts for offenders wanted for very serious offences such as murder involve hundreds of officer and staff hours. When aggregated together, manhunts cost many thousands of policing hours across London. By comparison, the four recent trial deployments of live facial recognition (LFR) resulted in eight arrests.Footnote 14 It was also reported that LFR deployments provide opportunities for police officers to engage with a person potentially wanted by the police and the courts. Another relevant comparative metric for LFR is the policing outcome resulting from 'stop and search'. According to a report published in February 2020 by the Metropolitan Police,Footnote 15 13.3 per cent of stops resulted in an arrest in 2019. By contrast, 30 per cent of engagements following an adjudicated alert from the LFR system resulted in the arrest of a wanted person.Footnote 16 While the enhancement of public security and safety via FRT is a valuable goal, we should not lose sight of the significant implications of this technology for individual freedoms.

Such implications are amplified by the substantial employment of FRT by private entities in the UK. For instance, Clearview AI has collected more than 20 billion images of people's faces, together with data from publicly available information on the internet and social media platforms all over the world, including in the UK, to create an online database. The Information Commissioner's Office (ICO) has recently sanctioned this company for violation of data protection rules.Footnote 17 Further examples are supermarkets such as Tesco, Budgens, and Sainsbury's, and start-ups such as Yoti and Facewatch. Such private entities utilise FRT in different ways. For instance, Yoti, an FRT software product, is used in UK cinemas to verify the age of customers,Footnote 18 while a growing number of businesses use Facewatch to share CCTV images with the police and identify suspected shoplifters entering their stores.Footnote 19 Such widespread use of FRT by private entities is likely to cause invasive interferences with individual entitlements. Consider, for instance, the employment of FRT in supermarkets and in the workplace. The data gathered through FRT used in supermarkets might increase the potential for profiling consumers and thus limit their choices based on selected biometric features.Footnote 20 Similarly, the use of FRT by employers could facilitate the profiling and monitoring of employees' behaviours and even emotional states. As a result, employees may be controlled and ultimately prevented from exercising their fundamental rights, such as freedom of expression. Constraining and regulating the use of this technology by private entities therefore becomes essential to prevent indiscriminate restrictions of fundamental rights.

Another peculiarity of the use of FRT in the UK is that, especially in the field of law enforcement, the deployment of this technology has occurred via partnerships between public entities and private entities providing digital services or infrastructure. For instance, the Japanese technology company NEC provides cameras to the Metropolitan Police and SWP.Footnote 21 There is no transparency on how NEC was identified as a supplier to SWP. The only publicly available information is contained in a series of statements published on NEC's and SWP's websites.Footnote 22 This example raises the question of how the selection of specific technologies provided by private entities may shape public services. A subsequent issue is what values, principles, and rules should guide public–private partnerships in the field of law enforcement, especially when dealing with the processing of sensitive personal data.

The diffusion and evolution of FRT in the UK have led to the development of a system in which civil society has been crucial in casting light on the issues attached to FRT and its impact on individuals’ rights. The establishment of numerous privacy-related NGOs appears to be a direct consequence of the spread of this technology across the UK. To name but a few, Privacy International, Liberty, Open Rights Group, and Big Brother Watch were all born out of the concerns surrounding mass surveillance in the UK.Footnote 23 These entities have contributed to many strategic litigation cases that have shaped the legal landscape of FRT regulation in the UK. The Bridges case, discussed in Section 12.4, is an instance of strategic litigation relating to FRT driven by the NGO Liberty. It is difficult to draw a clear connection between the work of civil society in the field of FRT and the impact of advocacy and strategic litigation on public awareness of the challenges and risks of FRT. However, recent studies have highlighted that the UK public has a contradictory stance towards this technology.

In a study conducted by Steinacker and his colleagues involving more than 6,000 respondents, it was observed that while, overall, 43 per cent of respondents supported the use of surveillance, 26 per cent opposed it.Footnote 24 In the same study, 39 per cent of the interviewees expressed the view that FRT increases privacy violations and 53 per cent were of the opinion that FRT enhances surveillance.Footnote 25 These findings were confirmed by a study conducted by the Ada Lovelace Institute in 2019. The Institute commissioned YouGov to conduct an online survey with over 4,000 responses from adults aged sixteen and above. The survey asked respondents to express their views on a range of uses of FRTs in a number of settings, including law enforcement, education, and the private sector.Footnote 26 The report found that support for the use of FRT depends on the purpose. Notably, the study found that 49 per cent of the respondents supported its use in policing practices with the presence of appropriate safeguards, but 67 per cent opposed it in schools, 61 per cent on public transport, and a majority of 55 per cent wanted restrictions placed on its use by police.Footnote 27

In light of these findings, it appears that the public perception of FRT in the UK depends on how that technology is used. While this research illustrates that individuals appreciate the potential of FRT in the field of security and law enforcement, the general impression emerging from these surveys is that there is still a lack of awareness regarding the full consequences and impact of FRT on individual rights beyond privacy. This conclusion is further strengthened when one considers the significant gaps existing in the UK regulatory approach to FRT. It is argued that, were the implications of FRT for the protection of individual rights fully appreciated, stronger social resistance against FRT would emerge in light of the current limited framework. The attention on safety and security as one of the advantages of FRT would most likely be reassessed against the worrisome implications that mass surveillance, and ultimately a police state, would have on individual freedoms. The following paragraphs outline the UK legal framework on FRT and its limits.

12.3 The Legal Framework

Until 2019, the Law Enforcement Facial Images and New Biometrics Oversight and Advisory Board oversaw the police use of automated facial recognition (AFR), LFR, custody images, and new biometrics. The last meeting of the Board took place in September 2019 and alternative governance arrangements are now in place.Footnote 28 Currently, two bodies supervise the use of FRT: the ICO and the Biometrics and Surveillance Camera Commissioner. The legal framework governing FRT in the UK is multi-layered. It is composed not only of human rights law, but also of data protection and law enforcement rules. As a result, the rights to privacy and data protection, being the entitlements most immediately affected by FRT, are to be balanced with public security and law enforcement objectives.

The starting point for analysing the UK FRT framework is the Human Rights Act, which gives effect to Article 8 ECHR, protecting the right to privacy, in the UK. In addition, the Data Protection Act (DPA) of 2018,Footnote 29 which implemented the EU’s General Data Protection Regulation (GDPR) in the UK, plays a crucial role in governing FRT. This Act sets out duties for controllers and processors and rights for data subjects. It grants enhanced protection for sensitive personal data,Footnote 30 and imposes specific requirements for personal data used in the context of law enforcement.Footnote 31 While under EU law data protection is a fundamental right, in the post-Brexit era data protection has lost this status since the EU Charter of Fundamental Rights is no longer binding in the UK.Footnote 32 Additionally, the UK GDPR framework may be subject to evolution in light of recent plans of the UK Government to depart from the EU legislation and case law.Footnote 33

The Protection of Freedoms Act 2012 (PoFA) is also of relevance, since it regulates the use of evidential material, including biometric material that may be gathered through FRT. Furthermore, mention should be made of the Surveillance Camera Code of Practice, originally published in 2013 and amended in November 2021. This code is an implementation of Section 29(6) of PoFA and is to be taken into account by a relevant authority in the exercise of its functions involving the operation or use of any surveillance camera systems, or the use or processing of images or other information obtained by virtue of such systems. The code sets out twelve guiding principles, such as that there should be effective review and audit mechanisms to ensure respect for legal requirements, policies, and standards. While this code applies to public authorities, private entities are not constrained by it. The ICO has issued guidance harmonising the Surveillance Camera Code of Practice with the GDPR requirements.Footnote 34 In this sense, the guidance has a broader scope than the code. In addition, we should mention that public authorities using FRT have produced policy and guidance documents. To name but one example, the Metropolitan Police have issued several LFR policy documents, including Data Protection Impact Assessments and the ‘Standard operating procedure’.Footnote 35 Similarly, SWP has produced multiple documents stating their approach to the deployment of FRT.Footnote 36 Finally, several guidance documents, such as those issued by the British Security Industry Association regarding the ethical and legal use of AFR,Footnote 37 or the Data Ethical Framework prepared by the UK Government, provide directives on the employment of FRT.Footnote 38 The effects and status of these guidance documents are unclear. While they may guide the action of public authorities, they are, unlike the law, arguably not binding.Footnote 39

Overall, the private use of FRT appears less regulated than its use by public authorities. However, the presence of a more developed legislative framework for the public sphere does not equate to effective FRT regulation in that sector. In 2019, the London Policing Ethics Panel advanced several recommendations concerning LFR, such as that there should be enhanced ethical governance of policing technology field research trials, and that regulation of new identification technologies should be simpler.Footnote 40 The ICO also issued an opinion on the use of LFR technology by law enforcement authorities in public places, which concluded that the use of that technology should meet the threshold of strict necessity.Footnote 41 For example, it was suggested that FRT could be used to locate a known terrorist but not indiscriminately in order to identify suspects of minor crimes.Footnote 42 The 2022 report of the Minderoo Centre for Technology and Democracy found that the use of FRT by the UK police did not meet fundamental rights standards.Footnote 43 Yet, as mentioned, private parties may also be extremely intrusive when utilising FRT. One may wonder whether this different treatment for private bodies, which are subject to less cumbersome duties when utilising FRT, is at all justified.

In the UK, the ICO, the former Biometrics Commissioner, and the former Surveillance Camera Commissioner have all argued that the law relating to biometric technologies is no longer fit for purpose.Footnote 44 The same point was advanced by the Court of Appeal of England and Wales in August 2020 in its judgment on the Bridges case, concluding that there were ‘fundamental deficiencies’ in the legal framework surrounding the police use of facial recognition.Footnote 45 The next paragraphs offer an overview of this case, which is pivotal in identifying existing regulatory gaps concerning FRT in the UK.

12.4 The Bridges Case

The case concerned the deployment of AFR Locate, a technology that involves capturing digital images of members of the public, which are then processed and compared with digital images of persons on a watchlist compiled by SWP. The claimant in the case, Edward Bridges, supported in his action by the NGO Liberty, raised complaints against the use of this technology against him on two occasions and against the use of AFR Locate in general. The watchlists used in the deployments contested by Mr Bridges included, among others, persons wanted on warrants, individuals who were unlawfully at large, having escaped from lawful custody, or persons simply of possible interest to SWP for intelligence purposes.

At first instance, the Divisional Court declared that Article 8 ECHR was not violated. This was because of ‘the common law powers of the police to obtain and store information for policing purposes, and [the fact] that the compilation of the watchlists is both authorised under the Police and Criminal Evidence Act 1984 and within the powers of the police at common law’.Footnote 46 The court also found that DPA 2018 and the Code of Practice on the Management of Police information provided a legal basis for the use of AFR Locate. Overall, the ‘accordance with the law’ requirement laid down in Article 8(2) ECHR was satisfied. Furthermore, the Divisional Court rejected the pleas based on data protection law. Of interest is the way in which the court delineated the scope of the margin of appreciation enjoyed by the ICO. Notably, it concluded that it was for the ICO to assess whether the documents adopted by the SWP complied with Section 42(2) of the DPA 2018, requiring the adoption of a policy document by public entities processing personal data for law enforcement purposes. The court also rejected the claim that SWP had failed to comply with the Equality Act 2010.

Mr Bridges challenged the Divisional Court’s judgment and was granted leave to appeal. In its judgment, the Court of Appeal began by considering whether the interference with privacy rights caused by the SWP was in accordance with the law, as demanded by Article 8(2) ECHR. While it found that the action of the SWP was carried out pursuant to a legal basis, it embraced a relativist approach: it advanced the view that ‘the more intrusive the act complained of, the more precise and specific must be the law said to justify it’.Footnote 47 After acknowledging that the technology involved in the case was different from that considered in previous judgments,Footnote 48 the court held that ‘the legal framework that the Divisional Court regarded as being sufficient to constitute the “law” for the purposes of Article 8(2) is on further analysis insufficient’.Footnote 49 In particular, the Court of Appeal argued that two issues remained open under the framework in place, the ‘who’ and the ‘where’ questions. As a matter of fact, the applicable law did not clarify who could be placed on the watchlist, nor was it clear that there were any criteria for determining where AFR Locate could be deployed. On this issue, the court advanced the view that the legislator should provide clearer guidance on the erasure of data of individuals who are captured by FRT but do not match the identity of any person included in the watchlist. Subsequently, the judgment moved on to the analysis of the Surveillance Camera Code of Practice. The court noted that ‘the guidance does not contain any requirements as to the content of local police policies as to who can be put on a watchlist. Nor does it contain any guidance as to what local policies should contain as to where AFR can be deployed.’Footnote 50 The court also assessed the documents issued by the SWP, and concluded that they too left unresolved the ‘who’ and ‘where’ questions. As a result, the first ground submitted by Mr Bridges concerning the violation of the legal basis requirement under Article 8 ECHR was well founded.

The court then tackled the second ground raised by Mr Bridges; that is, whether the SWP complied with the principle of proportionality in the deployment of AFR Locate. The judgment found that the Divisional Court did not err in the assessment of proportionality. While the appellant had suggested that the balancing under proportionality should consider not only the FRT’s impact on a single individual, but also on the public as a whole, the Court of Appeal held that the assessment of proportionality should occur as a matter of legal principle,Footnote 51 and therefore not in abstract terms. The second ground was thus dismissed.

However, the court allowed the appeal on the third ground submitted by Mr Bridges, notably that the data protection impact assessment (DPIA) carried out by the SWP did not comply with the DPA 2018 requirements. On this issue, the Court of Appeal ruled that, since SWP had failed to comply with Article 8 ECHR, and especially the ‘in accordance with the law’ requirement, the DPIA was not compliant with the DPA 2018.

Subsequently, the Court of Appeal evaluated whether the SWP had failed to respect Section 35 of the DPA 2018, detailing the first data protection principle. The combined reading of Sections 35, 42, and Schedule 8 DPA 2018 requires public entities processing personal data for law enforcement purposes to have appropriate policy documents in place. The Court of Appeal held that, since the ICO had found that the SWP documents contained sufficient information in compliance with Section 42(2) DPA, the Divisional Court did not err in law. The fact that the ICO had later revised the guidance on FRT and law enforcement could not change the validity of the ICO’s opinion on the policy documents. Put differently, the updated guidance of the ICO could not have retroactive effects and invalidate the policy documents adopted by SWP.

Finally, the court considered whether SWP had breached the Equality Act 2010. To address this plea, the court evaluated the robustness of the verifications carried out by SWP with reference to the potential biases entailed by the FRT. The court observed that ‘SWP have never sought to satisfy themselves, either directly or by way of independent verification, that the software program in this case does not have an unacceptable bias on grounds of race or sex. There is evidence […] that programs for AFR can sometimes have such a bias.’Footnote 52 As a result, the court concluded that the safeguards employed by the SWP were insufficient, and therefore this ground of appeal was allowed.

The Bridges saga prompts several observations. First, it demonstrates that the central provision in the reasoning of the parties as well as the court in drawing the boundaries for the use of FRT was the fundamental right to privacy protected under the ECHR. By contrast, the data protection framework was employed to ‘compensate’ and strengthen the fundamental right to privacy. Through the prism of Strasbourg case law, Article 8 ECHR appears to offer ample guidance to courts on how to achieve the protection of privacy even in the face of technological advancements such as FRT.Footnote 53 Second, the Bridges case showcases the intersectionality of FRT. This technology impacts not only privacy and data protection rights, but also other fundamental entitlements, such as the right not to be discriminated against. Yet additional fundamental rights could be found to intersect with the use of FRT, such as the freedom of expression or the right to liberty. Third, the case suggests that different understandings of the principle of proportionality and its interplay with fundamental rights can allow for stricter or laxer scrutiny over the employment of FRT. In Bridges, the Court of Appeal did not consider the ‘necessity’ requirement or the ‘stricto sensu’ proportionality; rather, it carried out only a soft scrutiny of the choices of the SWP. Hence, owing to the malleability of proportionality, one may wonder whether this is an effective principle for exercising precise scrutiny over the deployment of this technology and its implications. The answer to this matter depends on personal views on the very principle of proportionality. Fourth, one may wonder how much ‘law’ is needed to regulate FRT. While the Court of Appeal considered that it is not the place of judges to dictate what the law should look like, it at the same time cast light on selected drawbacks and limitations emerging from the current framework. The Court of Appeal invited the legislator to clarify the ‘who’ and ‘where’ questions and to detail rules on the deletion of personal data of individuals captured by FRT. One could think of additional questions and issues that require legislative action. Indeed, the Bridges saga highlighted only selected open questions concerning the use of FRT in the UK. The future of FRT regulation in the UK will depend on how the uncertainty surrounding these issues is tackled.

12.5 The Future of FRT in the UK

While the Bridges case has powerfully illustrated some of the crucial gaps in the current framework on FRT, there are further unsettled matters. To name but a few: For what purposes and in what contexts is it acceptable to use FRT to capture an individual’s image? What checks and balances should be in place to ensure fairness and transparency in the use of FRT? What accountability mechanisms should be established for different usages? The list could continue. Several NGOs have produced reports partially addressing these matters. Interestingly, there seems to be convergence towards the (at least partial) halting of FRT under the current rules. For instance, the Ada Lovelace Institute commissioned the Ryder Review,Footnote 54 published in June 2022, which recommended that the use of live FRT should be suspended until the adoption of a legally binding code of practice governing its use. The presence of binding rules identifying accountable entities and means of redress for individuals is considered crucial to enhance the protection of individuals against FRT.Footnote 55 The report specified that the code should not only address the public use of the technology, but also its deployment by private parties. Furthermore, the Minderoo report on the use of FRT,Footnote 56 published in October 2022, went as far as calling for a ban of FRT in the context of police activities. The report justified this recommendation in light of the blatant violations of fundamental rights through the police’s deployment of FRT.

Interestingly, these proposals are to a certain extent in line with the position of EU institutions. For instance, the European Data Protection Board called for a general ban on any use of AI for the automated recognition of human features in publicly available spaces, as well as for AI systems categorising individuals from biometrics into clusters.Footnote 57 Moreover, in the European Parliament there is growing consensus on banning the use of this technology.Footnote 58 Whether the UK legislator and authorities involved in the regulation of FRT will reach a similar conclusion requiring the suspension, if not the banning, of FRT remains to be seen. In July 2022, Liberty published a tweet indicating that the Metropolitan Police used FRT at Oxford Circus. As a result, thousands of people walking in that area were monitored and captured by cameras. Such overt and extensive use of FRT in the UK might signify that this jurisdiction is still far from undergoing a serious reconsideration of FRT’s limited benefits and high risks. However, several crucial changes to the current rules seem necessary. These reforms should involve increasing public awareness of the implications of FRT as well as enhancing transparency on the deployment of that technology. Another point that future legislation should tackle is how to ensure that public–private partnerships involving the digitisation of public services respect public goods and values. The opaque co-operation between NEC and SWP suggests that the public is unable to scrutinise how public entities build their co-operation with private digital providers, and therefore how much power private parties have in shaping the public sphere. Until the day such legislation is in place, it is legitimate to ask: ‘Does Big Brother exist in the UK?’

12.6 Conclusion

The UK has been a cradle for the development and deployment of FRT. Since the 1950s, this technology has been widely used in the public sphere, and especially for law enforcement purposes. However, FRT has rapidly expanded, and it is now omnipresent, having landed also in the private sector. As a result, the UK legal order offers a remarkable case study to reflect on the future of FRT regulation. The existing FRT framework in the UK is multi-layered but also fragmented and incomplete. The loopholes of the rules currently in place became evident in the Bridges saga. While the first instance court considered the use of FRT by SWP lawful, the Court of Appeal identified violations of Article 8 ECHR, data protection rules, and the Equality Act 2010. Accordingly, the UK judiciary has revealed the power of fundamental rights in regulating FRT and cast light on the limits of existing rules. In particular, the Court of Appeal observed that the legislator should clarify who can be placed on watchlists and where FRT can be employed. Yet additional questions remain open, beyond those identified by the Bridges case: For what purposes and in what contexts is it acceptable to use FRT to capture an individual’s image? What checks and balances should be in place to ensure fairness and transparency in the use of FRT? What accountability mechanisms should be established for different usages? The list could continue. Several NGOs have called for halting or even banning FRT in the UK. There is general consensus that the current UK framework is insufficient. Until the point when the UK legislator takes charge of enhancing regulation relating to FRT, it is legitimate to ask: ‘Does Big Brother exist?’

13 Facial Recognition Technologies in the Public Sector
Observations from Germany

Andreas Engel
13.1 Introduction

Facial recognition technologies (FRTs) have raised concerns in Germany,Footnote 1 and have not been put to use on a widespread basis. This may not be expected to change in the near future, as the current coalition treaty between the German government parties rejects comprehensive video surveillance and the use of biometric measurement for surveillance purposes.Footnote 2

This reluctance to put FRT to use may explain why, so far, the use of FRT has seldom come before German courts: Only fifty-three court decisions out of a total of 1.6 million decisions of German courts in the legal database juris include a textual reference to ‘Gesichtserkennung’, the German term for facial recognition.Footnote 3 A search for ‘Biometrie’, equivalent to ‘biometrics’, yields 991 decisions.Footnote 4 However, many of these latter decisions only have a tenuous link to FRT. These numbers suggest that FRT has rarely been the subject-matter of legal proceedings in Germany.

Nevertheless, there are individual instances in which FRT is already being employed – or has been employed – in the public sphere in Germany. Three prime examples of real-life use cases of FRT in the public sector in Germany will be discussed in further detail.

The first example concerns the pilot study involving the continuous use of FRT without specific cause, conducted at the Berlin Südkreuz train station (which has received a high degree of public attention). The second example is the use of FRT in the aftermath of the G20 riots in Hamburg. Here, FRT was employed to analyse video recordings from mass gatherings to identify suspects. As a third example, FRT cameras are being used in the city of Görlitz to combat serious border crime. In Görlitz, FRT is employed for a limited time and for specific cause. Hence, these examples illustrate different scenarios of the application of FRT. They will be discussed in turn to illustrate specific requirements and challenges, particularly with a view to the varying degree of detail of relevant legal provisions.

13.2 Constitutional Framework for FRT in the Public Sector in Germany

All cases of FRT use take place within the constitutional framework of the Grundgesetz (Basic Law – GG). FRT mainly raises concerns with regard to the right to informational self-determination (Art. 2 (1) in conjunction with Art. 1 (1) GG), which was first recognised in a decision by the Bundesverfassungsgericht (Federal Constitutional Court – BVerfG) on the 1983 Federal Census Act.Footnote 5 Additionally, and depending on the specific context, FRT may affect other fundamental rights, such as the right to assemble (Art. 8 (1) GG).Footnote 6 And, even more fundamentally, the BVerfG has acknowledged that the constitution entails a ban on total surveillance,Footnote 7 and underlines its importance as part of Germany’s constitutional identity: ‘It is an integral part of the constitutional identity of the Federal Republic of Germany that the state may not record and register the exercise of freedoms by citizens in its entirety.’Footnote 8 So far, the BVerfG has not decided a case that directly involved the use of FRT. Absent a pertinent judgment, a recent decision by the BVerfG on automatic licence plate recognition (ALPR) may provide orientation, and guidelines for FRT can be derived a fortiori from this decision.Footnote 9 As Martini points out, both ALPR and FRT aim at automated surveillance of the public sphere and would lend themselves as tools for permanent surveillance.Footnote 10 Personal data collected via ALPR or FRT can be used to draw inferences about the persons monitored.Footnote 11 While ALPR uses information that may indirectly relate to persons, FRT surveillance directly pertains to biometric data. Thus, even higher legal standards would apply to FRT than to ALPR.Footnote 12

Specifically, in its decision on ALPR, the BVerfG has first ascertained the broad scope of the right to informational self-determination (which would be relevant both for ALPR and FRT): ‘The right to informational self-determination covers threats and violations of personality that arise for the individual from information-related measures, especially under the conditions of modern data processing.’Footnote 13

The right to informational self-determination applies even in the public sphere, where individuals have an interest in ensuring that their personal information is not collected and stored without their consent (which, again, equally concerns ALPR and FRT): ‘Even when individuals go out in public, the right to informational self-determination protects their interest in ensuring that the associated personal information is not collected in the course of automated information collection for storage with the possibility of further exploitation.’Footnote 14

Different stages of data processing have to be distinguished and need respective justification, in particular the collection, the storage and the use of data: ‘Regulations that allow for the handling of personal data by government authorities generally justify various interventions that build on each other. In particular, a distinction must be made in this respect between the collection, storage and use of data.’Footnote 15

For all stages of data processing, the basic principles of proportionality, clarity of legal rules, and certainty apply:Footnote 16

As encroachments on the right to informational self-determination, authorizations for automated license plate checks must be measured against the principle of proportionality. Accordingly, they must pursue a legitimate purpose, be suitable for achieving the purpose, necessary and proportionate in the strict sense of the term. At the same time, they must comply with the principles of clarity of legal rules and certainty, particularly in the area of data processing.Footnote 17

Proportionality in this context is understood in a narrower sense, as a prohibition of excessiveness. The pursued purpose must be proportionate to the impact on the individuals’ right to informational self-determination (the comparatively deeper impact of FRT on individual rights would affect this analysis accordingly, and a more important purpose would be required): ‘The principle of proportionality in the narrower sense as a prohibition of excessiveness is only satisfied … if the purpose pursued is not disproportionate to the weight of the intervention entailed.’Footnote 18 To be justified, automated licence plate checks must be prompted by a sufficiently concrete, objectively determined reason. Furthermore, the conditions for a check must meet a certain threshold and allow for compliance review.Footnote 19 Checks cannot be carried out arbitrarily or without a valid reason.

Furthermore, the BVerfG stressed that surveillance measures must serve ‘to protect legal interests of at least considerable weight or a comparably weighty public interest’.Footnote 20 It is crucial to note that FRT raises additional concerns about privacy and the potential for misuse. Thus, the standard for the use of FRT would be higher than for automated licence plate checks.

Moreover, the general framework for surveillance measures must also be proportionate in a broader sense of the term; that is, in an overall assessment:

In this respect, the legislature must preserve the balance between the type and intensity of the impairments of fundamental rights on the one hand and the causes justifying the interference on the other hand, for instance by establishing requirements regarding the threshold for the exercise of powers, the necessary factual basis, or the weight of the protected legal interests.Footnote 21

From these considerations, the BVerfG derives more specific procedural requirements to protect individual rights: ‘In addition, the proportionality requirements include requirements relating to transparency, individual legal protection and supervisory control as well as regulations on data use and deletion for all individual acts.’Footnote 22

13.3 Application in Specific FRT Use Cases

This general framework sets the standard for the application of FRT in specific cases and its legal basis. In this section, the chapter discusses in turn the aforementioned instances in which FRT has already been applied: a pilot study that entailed the continuous use of FRT without specific cause at Berlin Südkreuz (Section 13.3.1), the use of FRT for analysis of video recordings from mass gatherings at the G20 summit in Hamburg (Section 13.3.2), and the use of FRT for a limited time and with specific cause in the city of Görlitz (Section 13.3.3).

13.3.1 Permanent Use of FRT without Specific Cause

From 2017 to 2018, the federal police (Bundespolizei) conducted a study at Berlin Südkreuz train station to test the feasibility of the permanent use of FRT in the public sector.Footnote 23 The study comprised two phases and was conducted with volunteer test subjects. A reference database was built with pictures of these subjects. During the study, the participants passed by the cameras at Berlin Südkreuz train station and were thus monitored.

As its main conclusion from the study, the federal police stated that the technology employed makes it possible to detect and identify people in crowds automatically.Footnote 24 The federal police considered the test scores of the systems as ‘excellent’ (‘ausgezeichnet’):Footnote 25 During phases 1 and 2 of the study, the average hit rate of the three individual systems employed was 68.5 per cent and 82.8 per cent, respectively. The average false hit rate was 0.12–0.25 per cent in phase 1 and 0.07 per cent in phase 2. The overall system – interconnecting the three individual systems – had an average hit rate of 84.9 per cent in phase 1 and 91.2 per cent in phase 2, with a false hit rate of 0.67 per cent and 0.34 per cent, respectively. These results indicate that the individual and overall systems had relatively high hit rates, with relatively low rates of false hits.
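To put these rates into perspective, a simple back-of-the-envelope calculation shows how even a low false-hit percentage translates into absolute numbers at a busy station. The following sketch uses the reported phase 2 figures of the interconnected overall system; the daily passenger throughput and the number of watchlisted persons passing through are purely hypothetical assumptions, not figures from the study.

```python
# Illustrative arithmetic only. The hit and false-hit rates are the reported
# phase 2 figures; the throughput and watchlist numbers are assumptions.

daily_passengers = 90_000   # hypothetical throughput of a large station
watchlisted_passing = 10    # hypothetical watchlisted persons passing per day
hit_rate = 0.912            # phase 2, interconnected overall system
false_hit_rate = 0.0034     # phase 2, interconnected overall system

true_alerts = watchlisted_passing * hit_rate
false_alerts = (daily_passengers - watchlisted_passing) * false_hit_rate

print(f"Expected true alerts per day:  {true_alerts:.1f}")   # ~9
print(f"Expected false alerts per day: {false_alerts:.0f}")  # ~306
```

Under these (hypothetical) assumptions, false alerts would far outnumber true alerts, which illustrates why absolute numbers, and not only percentage rates, matter when assessing such systems.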

Against this background, the federal police concluded that ‘the state of the art FRT systems … can make a valuable contribution to ensuring security in train stations’,Footnote 26 indicating a positive attitude towards a potential future use of FRT.

Notably, FRT measures at a train station, such as Berlin Südkreuz, would arguably not conflict with the ban on total surveillance.Footnote 27 Even if, at the specific venue, FRT is employed permanently and without specific cause, its application would not cover the exercise of freedoms by citizens in its entirety: As FRT is only used at a specific venue, it only serves to monitor citizens at this venue, but not their conduct elsewhere.Footnote 28

For the pilot study, the monitored individuals had consented beforehand to the use of FRT. However, without such consent, future permanent employment of FRT without specific cause would need a legal basis consistent with individuals’ constitutional rights and legal protections.

At first glance, an existing provision might cover such cases. According to Sec. 27, sentence 1 no. 2 Bundespolizeigesetz (Law on Federal Police – BPolG), which has a broad scope of application, ‘the federal police may use automatic image capture and image recording devices to … detect dangers to [certain specified objects, including train stations or airports], or to persons or property located there’.Footnote 29

However, Sec. 27, sentence 1 no. 2 BPolG does not address FRT specifically and does not meet the procedural requirements that the BVerfG outlined for ALPR, such as transparency, individual legal protection, and supervisory control or regulations on data use and deletion for all individual acts. While BPolG does contain procedural rules (see in particular Sec. 29 et seq. BPolG on data processing and data use), arguably more specific provisions would be required for FRT,Footnote 30 as the permanent use of FRT at specific venues would amount to a new level of intensity.Footnote 31

Similar problems arise for provisions in police laws of the Länder (German states) that allow video recordings in general but are not written for FRT specifically.Footnote 32

13.3.2 Use of FRT for Analysis of Video Recordings from Mass Gatherings

A second example, which has even been before a court, concerns the use of FRT for specific cause: the analysis of videos of riots. Hamburg police collected video and image files of the riots at the July 2017 G20 summit in Hamburg.Footnote 33 These videos and photos came partly from recordings made by the police themselves, partly from video surveillance at certain S-Bahn stations, partly from relevant recordings accessible on the internet, and partly from privately created image files.Footnote 34 The files collected were merged into one large collection of images. Using facial recognition software (which had been specially procured), the police created a reference database that contained digital (biometric) extracts of the faces (‘templates’) that had been identified by the software in the images of the basic file.Footnote 35 The number of templates in the database reportedly exceeded 100,000.Footnote 36 The database was not connected to or linked with other official databases.

For individual search sweeps, the police made the images of individual crime suspects file-compatible with this software and fed them into the reference database. Hits identified and flagged by the software were further confirmed by clerks. The individual search runs took place on the order of the public prosecutor’s office.Footnote 37
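The workflow just described – building a reference database of biometric templates from collected footage and probing it with images of individual suspects, with flagged hits passed to human reviewers – can be illustrated with a minimal, hypothetical sketch. The feature extractor is simulated here with random vectors; all names, dimensions, and the similarity threshold are illustrative assumptions, not details of the software actually procured by the Hamburg police.

```python
# A hypothetical sketch of a template database and an individual search sweep.
import numpy as np

def extract_template(image_id: str) -> np.ndarray:
    """Stand-in for the biometric feature extractor (simulated with random
    vectors; a real system would compute the template from the face image)."""
    seed = abs(hash(image_id)) % (2**32)
    v = np.random.default_rng(seed).normal(size=128)
    return v / np.linalg.norm(v)  # unit-length 'template'

# Reference database built from the faces found in the collected material.
database = {f"face_{i}": extract_template(f"face_{i}") for i in range(100_000)}

def search_sweep(probe_image_id: str, threshold: float = 0.8) -> list[str]:
    """Compare a suspect's template against every stored template; candidates
    above the threshold would be passed to a clerk for confirmation."""
    probe = extract_template(probe_image_id)
    return [face_id for face_id, template in database.items()
            if float(probe @ template) >= threshold]  # cosine similarity

candidates = search_sweep("suspect_42")
print(f"{len(candidates)} candidate hit(s) flagged for manual confirmation")
```

Note that in such a design the probe is compared against every stored template, which is precisely the scale consideration that, as discussed below, drove the court’s analysis of whether automated processing was ‘absolutely necessary’.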

The Hamburg Commissioner for Data Protection and Freedom of Information ordered the police to delete this database. Whether this order was lawful hinged upon, inter alia, whether the creation and use of this database conformed with data protection laws.

The Verwaltungsgericht Hamburg (Hamburg Administrative Court – VG Hamburg) held that the order was unlawful. The court saw Sec. 48 (1) Bundesdatenschutzgesetz (Federal Data Protection Act – BDSG) as a sufficient legal basis for the measures in question, even though the provision is written in very broad terms: ‘The processing of special categories of personal data [which includes biometrical data, Sec. 46 no. 14 BDSG] is only permitted if it is absolutely necessary for the performance of the task.’ According to the VG Hamburg, ‘Sec. 48 (1) BDSG is unquestionably a provision on data protection.’Footnote 38 According to the court, no more specific legal provision existed for the processing of personal data carried out by the police. Therefore, the court held the provision to be applicable.Footnote 39

Consequently, the VG had to assess whether the use of FRT in this context was ‘absolutely necessary’ in the sense of Sec. 48 (1) BDSG. The court concluded it was, as a similar review of the data collected by humans would require far too much time and hence would not be a feasible alternative:

[T]he plaintiff argues … that processing the image material contained in the basic file by human evaluators would far exceed the time frame for effective criminal prosecution and would take years. The defendant does not dispute the validity of this consideration … Thus, it is self-evident to consider the establishment and use of the reference database as indispensable.Footnote 40

In a second step, the VG considered the constitutionality of the relevant provision.Footnote 41 The court held it was decisive that the Hamburg Commissioner for Data Protection and Freedom of Information had not sufficiently engaged with Sec. 48 (1) BDSG and had not tried to come to an interpretation of the norm that would conform with the constitution.Footnote 42 The court pointed to a set of aspects that would have merited further analysis by the commissioner.

Among these, the following aspects are of particular interest in the present enquiry: The VG observed that closer scrutiny of the measure’s impact on constitutional rights would have been required and that the measure in question might be distinguishable from surveillance without a specific reason. In that context, the court remarked that in the individual search runs, the software would not further use, but rather suppress, the biometric data of the large number of unsuspected persons.Footnote 43 Moreover, the court contrasted the measures taken by the police with ALPR, which would amount to a structure that citizens might view as a system of general surveillance. The court pointed out that the analysis of a given set of videos from a specific event might not trigger the exact same concerns.Footnote 44

This decision drew heavy criticism, in particular from Mysegades.Footnote 45 Mysegades argues that even in view of the primacy of the law,Footnote 46 if Sec. 48 BDSG was an insufficient legal basis for the measure in question, the Hamburg Commissioner for Data Protection and Freedom of Information was able to deem the measure taken by the police illegal (for lack of a sufficient legal basis).Footnote 47

And, indeed, Mysegades puts forward reasons to doubt that Sec. 48 BDSG was a sufficient legal basis for the measures taken. He points out that the BVerfG in its decision on ALPR established criteria that would apply irrespective of whether an entire surveillance system is being established. Rather, specific aspects would have to be taken into consideration, such as the (high and indeterminate) number of uninvolved persons being surveilled, the covert nature of the measure, and a feeling of citizens that they were being monitored, which might flow from the broad ambit of the measure taken.Footnote 48 In particular, Mysegades contests an argument of the VG regarding the societal impact of the measures. While the VG argued that only those bystanders who had willingly gone to the places where riots took place were subjected to surveillance and further scrutiny, Mysegades points out that Article 8 (1) GG protects the freedom to assemble, and that this freedom is infringed upon if prospective participants in assemblies are deterred by the chilling effect of potentially being subjected to FRT.Footnote 49

Moreover, Mysegades emphasises that when drafting Sec. 48 BDSG, the legislator had no intention for it to apply to FRT. When the provision was passed, the pilot study at Berlin Südkreuz (see Section 13.3.1) was already under way. Thus, it stands to reason that the legislator could and would have created a more specific provision for the highly contentious and sensitive issue of FRT use.Footnote 50

Additionally, in its analysis of whether the measure was ‘absolutely necessary’, the VG refers to the necessity of automatic recognition measures for search sweeps within the data collected. Mysegades points out two issues with this approach.Footnote 51 The VG in its judgment does not provide a legal basis for all the individual steps of data processing and data collection. This also affects the court’s analysis of whether the measure was absolutely necessary. The court held that automatic data processing was absolutely necessary, as processing the data collected by humans would not have been possible within a reasonable time frame. With this approach, the necessity of data processing is being linked to the data collected in the first step. However, the judgment does not answer whether and on what legal basis the data collection itself was justified.Footnote 52 One might expect such discussion to be linked to an overarching goal of the measure, such as prosecution (or, in other scenarios, prevention) of crime.Footnote 53

Lastly, the court could also have considered and given more weight to further risks for citizens’ rights, such as potential abuse of the data collected, and, at the same time, the long period during which the data were potentially stored.Footnote 54

13.3.3 Use of FRT for a Limited Time and with Specific Cause in the City of Görlitz

The third and final example concerns the use of FRT for a specific purpose: FRT is being used in Görlitz, the easternmost city in Germany, located near the Polish and Czech borders. Since 2019, FRT cameras have been used there to combat serious border crime.Footnote 55 Görlitz is part of a corridor that is under video surveillance for a distance of 30 km. This use of FRT is aimed at addressing severe crimes and enhancing security in the area.

This application of FRT has its legal foundation in Section 59 Gesetz über die Aufgaben, Befugnisse, Datenverarbeitung und Organisation des Polizeivollzugsdienstes im Freistaat Sachsen (Law on the Tasks, Powers, Data Processing and Organisation of the Police Enforcement Service in the Free State of Saxony – SächsPVDG)Footnote 56:

Use of technical means to prevent severe cross-border crime

(1) The police may, in order to prevent cross-border crime [as enumerated] collect personal data by the open use of technical means to make image recordings of traffic on public roads and to record information on the place, time and direction of use in order to compare it automatically with other personal data. This applies to road sections in the border area with the Republic of Poland and the Czech Republic up to a depth of 30 kilometres, insofar as facts justify the assumption that the road section in question is of outstanding importance for cross-border crime because it is regularly used as a venue for the commission of criminal acts within the meaning of sentence 1 or for the transfer of property or assets resulting from these criminal acts. The outstanding importance for cross-border crime must be evident from facts documented by the police. Technical and organisational measures must be taken to ensure that such means are not used either individually or in combination on a widespread or continuous basis.

(2) Personal data in the sense of paragraph 1 may only be further processed to the extent that it is automatically compared with personal data of specific persons who are under police surveillance for the prevention of criminal offences within the meaning of paragraph 1 sentence 1.

The data collected has to be deleted by automated means after ninety-six hours at the latest, unless the automated comparison revealed a match, and the data are required for the prevention or prosecution of criminal offences within the meaning of paragraph 1 sentence 1.

(3) Measures pursuant to paragraph 1, including the determination of those persons whose data, as absolutely necessary for [their] identification, are to be processed for automated comparison, may only be ordered by the President of the State Criminal Police Office (Landeskriminalamt) or of a Police Directorate or by an official commissioned by them for this purpose. At the latest after the expiry of six months, the ordering police station shall check whether the conditions for the order still exist. The result of the examination has to be documented. The basis for the decision, including the findings in accordance with paragraph 1 sentence 3, which led to the respective operation, have to be documented for each measure.

(4) The necessity, practical application, and effects of the provisions of paragraphs 1 to 3 have to be examined by the state government. The state government has to report to the Landtag on the result of the evaluation three years after this Act comes into force.Footnote 57

This provision has been drafted specifically for FRT measures.Footnote 58 Section 59 SächsPVDG allows the collection and recording of data (image recordings) to compare them automatically with other personal data. This provision, therefore, addresses many of the issues raised by the BVerfG and in the discussion of the measures by the police after the G20 riots. In contrast with Section 48 (1) BDSG, this provision is written in more detail and thus appears less problematic as a basis for FRT measures.

In the first place, the provision clearly states its purpose – it is directed at the prevention of grave cross-border crime. Pertinent crimes are explicitly enumerated in paragraph 1 sentence 1 and include human trafficking, gang theft, robbery, and severe cases of drug trafficking.

To conform with the ban on total surveillance,Footnote 59 technical and organisational measures must be put in place to ensure that such means are not used either individually or in combination on a widespread or continuous basis (paragraph 1 sentence 4).

As regards proportionality in a narrower sense,Footnote 60 it appears particularly relevant that the provision clearly states and restricts its geographic scope to road sections in the border area, that is, with the Republic of Poland and the Czech Republic up to a depth of 30 kilometres.Footnote 61 Furthermore, concrete facts documented by the police must justify the assumption that the road section in question is of outstanding importance for cross-border crime, and according to paragraph 1 sentence 4, organisational measures must guarantee FRT is not applied on a widespread basis. However, Martini points out two respects in which this provision may not be sufficiently determinate in view of the requirement of legal certainty: it may not be sufficiently clear (1) when a road section is of outstanding importance for cross-border crime and (2) how far ‘road sections’ extend.Footnote 62

The use of FRT is also limited in further respects. There is a time limit on the storage of data, and personal data may only be further processed to the extent that it is automatically compared with the personal data of specific persons who are already under surveillance for enumerated crimes. The data collected shall be deleted by automated means after ninety-six hours at the latest. If FRT procedures can be completed in a shorter time, the wording ‘at the latest’ may be viewed as a further guarantee of proportionality, requiring an earlier deletion where possible.
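The deletion rule lends itself to automated enforcement. The following is a minimal, hypothetical sketch of such retention logic; the record structure and all names are illustrative, not drawn from any actual police system.

```python
# Hypothetical sketch of the 96-hour retention rule in Section 59 (2) SächsPVDG.
from dataclasses import dataclass
from datetime import datetime, timedelta

RETENTION_LIMIT = timedelta(hours=96)

@dataclass
class Record:
    collected_at: datetime
    matched: bool        # did the automated comparison produce a hit?
    still_needed: bool   # is the hit required for prevention/prosecution?

def purge(records: list[Record], now: datetime) -> list[Record]:
    """Keep a record only while it is within the 96-hour window, or if it
    matched and is still needed for an enumerated offence."""
    return [r for r in records
            if now - r.collected_at < RETENTION_LIMIT
            or (r.matched and r.still_needed)]

# Example: an unmatched five-day-old record is purged; a matched one is kept.
now = datetime(2023, 1, 6)
records = [Record(datetime(2023, 1, 1), matched=False, still_needed=False),
           Record(datetime(2023, 1, 1), matched=True, still_needed=True)]
print(len(purge(records, now)))  # -> 1
```

Since the statute says ‘at the latest’, nothing would prevent running such a purge with a shorter retention window.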

Section 59 (3) includes further procedural safeguards. Measures have to be based on a specific order and may only be ordered by the President of the State Criminal Police Office or of a police directorate or by an official commissioned for this purpose, and have to be re-assessed as to whether the conditions for the order still exist (sentence 2). As a further procedural safeguard, the factual basis for this decision and pertinent findings by the police shall be documented for each measure. Again, potential criticism might be directed at the maximum period of six months for a re-assessment of the measure, but the wording ‘at the latest’ allows for a shorter period. Martini argues that further safeguards might be needed concerning supervision and transparency, and in particular, given the gravity of FRT, a decision by a court (instead of a member of the executive) might be in order.Footnote 63

13.4 Conclusion

So far, FRT has only been employed in individual cases in Germany. The BVerfG has acknowledged that a ban on total surveillance is part of Germany’s constitutional identity. For individual measures, key constitutional considerations concern their proportionality, the clarity of legal rules, and certainty. Furthermore, specific procedural safeguards are required.

This arguably amounts to a high threshold for FRT measures, as the examples discussed show. The permanent use of FRT without specific cause cannot be based on the existing provision Section 27, sentence 1 no. 2 BPolG, as it is not sufficiently specific. The use of FRT for analysis of video recordings from mass gatherings (as after the G20 riots) based on Section 48 (1) BDSG was viewed in a positive light by an administrative court. However, commentators raise the issues of legal clarity and certainty, and point out that the administrative court did not sufficiently explain the proportionality of the measures in this instance. Finally, even a very specific provision on the use of FRT for a limited time and with specific cause has been criticised for a potential lack of proportionality and for not being sufficiently determinate in certain respects.

14 A Central-Eastern Europe Perspective on FRT Regulation
A Case Study of Lithuania

Eglė Kavoliūnaitė-Ragauskienė
14.1 Introduction

In Lithuania, rather than being determined by the intrinsic needs of society, legal regulation of face recognition technology (FRT) came merely as a part of the EU’s general data protection framework. Prior to this, the use of facial images of private persons was regulated mainly by the Civil Code of the Republic of Lithuania,Footnote 1 which provides that a photo (or a part thereof), portrait, or other image of a natural person may be reproduced, sold, displayed, or printed, and the person photographed, only with their consent – but consent is not required if these actions are related to the person’s social activities, their official position, a requirement of law enforcement authorities, or if the photograph is taken in a public place. However, a person’s photo (or part of it) taken in these cases may not be displayed, reproduced, or sold if this would degrade the person’s honour, dignity, or professional reputation.Footnote 2 In terms of the work of law enforcement institutions, as will be seen from the analysis presented in this chapter, the laws regulating the work of individual law enforcement institutions or laws regulating specific activities of law enforcement (as a general rule) provide that the law enforcement institutions may collect and process personal data, usually without specifying the regime applicable to the collection and processing of biometric data.

As in all of the EU member states, law enforcement institutions in Lithuania have to adhere to EU standards of FRT usage, especially those laid down in Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection, or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data (the Law Enforcement Directive).Footnote 3 However, each country also has national standards that add further requirements to the FRT framework and its practical use.

From the perspective of the effectiveness of legal regulation, it should be noted that a country might have a very definite and clear legal rule or a set of rules regulating a particular field of social relations; however, this regulation may nonetheless remain declarative and not be implemented in practice. There are some such examples in Lithuania, and one of the most prominent involves the legal regulation of lobbying activities. At the time of consideration of the Law on Lobbying Activities in the Parliament of Lithuania, one of the Members of Parliament noted that the relations that were going to be regulated were little known in Lithuanian society, so the law was not expected to be accepted in practice. He also said during the parliamentary session that it looked as if the law was aiming to ‘prepare cosmonaut suits and then see if there would be cosmonauts willing to try them on’.Footnote 4 Indeed, this law (adopted in 2000) was one of the worst examples of legislation in Lithuania, as until 2018 lobbying activities were practised in disregard of its provisions; in that year the law was amended significantly, this time following broad discussions with stakeholders and society. This and similar examples imply that in order for legislation to be applied in practice, it needs to fit both the legal culture and legal system of a country as well as fall in line with the views of wider society.

Keeping this in mind and recognising that society has an important role in controlling the implementation of legal acts, especially where they relate to human rights, the proper implementation of FRT regulations also relies on society and related interest groups deeming them necessary; otherwise, they may remain declarative and void. If public awareness and pressure to have a law implemented properly are high, the implementing institutions are forced to take action. Strong players in the exercise of such social control are usually non-governmental organisations (NGOs), especially where regulations or their improper implementation pose a threat to human rights. Therefore, it is of major importance that society and NGOs accept and understand the need for and the usage of FRT in law enforcement institutions.

This chapter analyses the regulation of FRT usage by Lithuanian law enforcement institutions, as well as the public discussion relating to FRT usage in the media, NGO involvement, and other types of social control. Finally, the chapter considers what changes may be brought to national regulation of FRT by the EU Artificial Intelligence Act.

14.2 Legal Framework for the Use of FRT in Law Enforcement in Lithuania

In general, the basis for the use of biometric data (including facial recognition data) in Lithuania is the Law Enforcement Directive,Footnote 5 which was transposed into Lithuanian law by the Law of the Republic of Lithuania on the Legal Protection of Personal Data Processed for the Prevention, Investigation, Disclosure or Prosecution of Criminal Offences, Execution of Sanctions or National Security or Defence and other legal acts.Footnote 6 Biometric data are classified among the special categories of personal data, which include data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, as well as genetic data and data concerning health or a person’s sex life or sexual orientation. Processing of these personal data categories is only allowed when strictly necessary, subject to appropriate safeguards for the rights and freedoms of the data subject, and only where it is authorised by the EU or Lithuanian law – for example, when it is needed to protect the vital interests of the data subject or another person; or when such processing relates to data that are manifestly made public by people themselves.Footnote 7

In Lithuania, the collection and use of facial images, on the one hand, and the processing of personal data such as biometric data, on the other, are regulated by different laws and other legal acts applicable to law enforcement institutions. For example, the Law on Police of the Republic of Lithuania provides that, with a person’s consent and/or in cases established by law, police officers are entitled to take photographs and make audio or video recordings. Without a person’s consent, a police officer may photograph unidentified persons, persons in a helpless condition, unidentified corpses, risk-group persons, and temporarily detained persons; these persons may be measured and their external features described, audio or video recordings may be made, fingerprints may be taken, samples may be taken for genetic testing to perform typification or for comparative research and identification, and all these data may be processed.Footnote 8 The law also states that the police may process personal data necessary for the implementation of police tasks, including the personal code, without the consent of the data subject, and that, when processing data, the police have the right to collect them using technical means.Footnote 9

The Penal Code provides that the Probation Service may receive data, documents, and other information necessary for the execution of public service sentences (or to get acquainted with this information) from the state, municipalities, and other institutions, bodies, or organisations with state information resources. The Probation Service is also entitled to process the personal data of convicted persons.Footnote 10

In criminal procedure there is a general requirement that the use of technical means, and the results of such use, are subject to the requirements of public information, personal data protection, the right to inviolability of private life, and the protection of personal honour and dignity established in other laws.Footnote 11 This means that all steps in criminal procedure involving the use of biometric data should be in line with the previously mentioned provisions of the Law of the Republic of Lithuania on the Legal Protection of Personal Data Processed for the Prevention, Investigation, Disclosure or Prosecution of Criminal Offences, Execution of Sanctions or National Security or Defence. It is therefore quite natural that, for example, the Law on Prosecution of the Republic of Lithuania,Footnote 12 and the Law on the Financial Crime Investigation Service,Footnote 13 do not mention the handling of any type of personal data at all.

However, a number of laws regulating the activities of law enforcement institutions do not provide clear wording on the possibility of collecting and using facial images. For example, the Law on Intelligence of the Republic of Lithuania provides only that state intelligence institutions have a right to process personal data, without clarifying what kinds of data these might be, and the provision on the performance of particular activities after receiving a court permit mentions only ‘access to a person’s home, other premises or vehicles, their inspection and recording’,Footnote 14 without any clear reference to the collection or use of facial images. Similarly, the Law on Criminal Intelligence does not provide clear grounds for collecting and using facial images; it does not address the handling of personal data at all. This law mentions only that criminal intelligence activities (meaning the activities of criminal intelligence officers in collecting, recording, evaluating, and using available information about criminal intelligence objects) must be carried out in accordance with the procedure established by the law, and that the methods of collecting criminal intelligence information are agency activity; survey; inspection; control inspection; controlled transportation; imitation of a criminal act; ambush; tracking; covert operation; and tasks of law enforcement authorities. However, the law also provides that human rights and freedoms cannot be violated during criminal intelligence activities: individual limitations of these rights and freedoms are temporary and may be applied only in accordance with the procedure established by law, in order to protect the rights and freedoms of another person, property, or public and state security.Footnote 15 The Code of Administrative Offences likewise states in general terms that investigative activities may include photography, audio and video recording, the taking of footprints and casts, the drawing up of plans and diagrams, and other recording techniques.

Regarding the activities of the Special Investigation Service, the handling of personal data is indirectly mentioned in the law regulating the institution’s analytical intelligence activities. Analytical anti-corruption intelligence is defined as analytical activity carried out by the Special Investigation Service, which includes collecting and processing information on corruption and related phenomena, comparing it with other public or classified information available to the Service, obtaining qualitatively new data as the result of this processing, and using such data and providing it to state or municipal institutions and officials authorised to make significant decisions in terms of reducing the prevalence of corruption. The possibility of using available biometric data is implied, since, in order to achieve its operational goal and implement the tasks assigned to it, the Special Investigation Service has the right to receive relevant documents from all public institutions.Footnote 16 Additional rules apply in respect of the collection and usage of facial images and the usage of biometric data in the process of issuing identity documents and in migration matters.Footnote 17

Thus, it can be stated that the legal rules on the collection and usage of facial images and on the generation and usage of biometric data in Lithuania are rather fragmented and vague. As may be seen, in most cases it is simply stated that law enforcement institutions may collect and process the personal data needed for the fulfilment of their duties, without any additional restrictions or criteria being specified. Given the sensitive nature of biometric data and the extensive collection and processing of facial images permitted under the laws discussed here, virtually every person may be affected: not only those who apply for personal identity documents or are involved in migration matters, or those in any way involved in criminal proceedings or other proceedings relating to national security and state interests, but also any other person who acts or appears in public places.

The second important issue is that the legal acts implementing the provisions of these laws stray even further from the requirements applied to data collection and processing. For example, based on the provision of the Law on Police that the police have the right to collect and process personal data necessary for the implementation of their tasks without the consent of the data subject, all municipalities in Lithuania have adopted separate rules on the use of video surveillance cameras and the data they record.Footnote 18 Video surveillance may be established with the aim ‘to identify persons who may have committed administrative offences and criminal acts’. Consequently, cameras can be installed in any public place and may collect video data on all persons appearing there.

A further important issue is the processing of video surveillance data and other facial images. According to the aforementioned and related legal acts, facial images can be stored in a number of databases (which are usually interlinked): the Police Information System and other police department registers and information systems; the Criminal Intelligence Information System; databases of detention facilities (prisons, probation offices, etc.); court and other authorities’ databases; databases of institutions issuing identity documents; databases of the Migration Department and the State Border Guard Service; databases of the state enterprise Regitra, which issues driving licences; databases of institutions issuing personal documents; and municipal databases.Footnote 19

In this context, it is important that, according to this law, as well as the Law Enforcement Directive and the General Data Protection Regulation (GDPR),Footnote 20 ‘biometric data’ means personal data resulting from specific technical processing relating to the physical, physiological, or behavioural characteristics of a person, which allow or confirm the unique identification of that person. The definition of biometric data in the GDPR (as well as in the Law Enforcement Directive) has generally been restricted to a technically defined digital representation of bodily traits that has been processed for machine or algorithmic analysis. This is suggested by the wording that the data have to be subject to ‘specific technical processing’. Put more broadly, data processing systems do not need all of the raw data; instead, they rely on extracting meaningful sub-parts from voice or image data, which can then easily be compared with existing ‘templates’ in a database. This implies that photographs and video images of faces as such are excluded from the definition of biometric data in both the GDPR and the Law Enforcement Directive.Footnote 21 There is therefore a difference between the regulatory rules applied to facial images (which may be regarded as personal data) and those applied to images processed with FRT (which are then regarded as biometric data).
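To make this legal distinction concrete, the following minimal sketch (illustrative only; `extract_template` is a hypothetical stand-in for a trained face-embedding model, and the 128-dimension template size is an assumption) shows how ‘specific technical processing’ turns a stored photograph – mere pixels, and thus ordinary personal data – into a compact numeric template that can be compared against database entries, at which point the biometric data regime is triggered.

```python
# A minimal sketch of the photograph-versus-template distinction discussed
# above. Nothing here reflects any authority's actual system; the embedding
# function is a hypothetical placeholder for a trained neural network.
import numpy as np

def extract_template(image: np.ndarray) -> np.ndarray:
    """Hypothetical 'specific technical processing': maps a face image to a
    fixed-length numeric template (real systems use trained models)."""
    seed = abs(hash(image.tobytes())) % (2**32)   # deterministic per image
    return np.random.default_rng(seed).standard_normal(128)

def match_score(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two templates; higher means more alike."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# A stored photograph is just pixels: personal data, but not yet biometric data.
photo = np.zeros((112, 112, 3), dtype=np.uint8)

# Only after processing does a template - biometric data - come into existence.
template = extract_template(photo)
print(round(match_score(template, template), 3))  # 1.0: identical templates
```

On this reading, the stricter biometric data rules attach at the moment the template is generated and matched, regardless of how the underlying photograph was obtained.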

To summarise, in Lithuania there is quite a significant gap between the regulatory rules that set requirements for FRT from a data protection perspective and the rules regulating the activities of individual law enforcement institutions and law enforcement activities. Although the standards of data protection in general seem sufficient, the specific laws on law enforcement institutions merely provide for the possibility of collecting and processing personal data, including facial images, without making it known whether FRT will be used to process such images. There is therefore a risk that the general data protection rules are merely declarative and not enforced in practice. Thus, in order to understand whether the legal regulation of FRT usage in Lithuanian law enforcement is sufficient, a deeper analysis of the practical implementation of the regulatory rules on FRT usage in Lithuania is needed.

14.3 FRT Usage in Practice

According to the respondents in the Government Use of Facial Recognition Technologies: Legal Challenges and Solutions project,Footnote 22 the volume of FRT usage in law enforcement institutions is not clear. In the course of the project, the team sought to interview representatives of the institutions that are responsible for (or directly participate in) the processing of personal data, including facial images and biometric data. However, only a few representatives were willing to participate, while others stated that they had insufficient knowledge of or competence in these issues. Moreover, according to a couple of respondents from the private sector who had sought to investigate the use of FRT in the context of human rights, representatives of law enforcement institutions had disclosed to them that they felt comfortable with the vague legal background, as it afforded them considerable latitude over when and how to use FRT.

As an example of an insufficient regulatory basis for handling biometric data, including facial data processed using FRT, the case of the Register of Habitoscopic Data of the Republic of Lithuania may be analysed. This register is a component of the Internal Affairs Information System – a general system for storing detailed personal identification data in a single database, covering convicted persons, persons who have served a sentence of arrest or fixed-term imprisonment in the Republic of Lithuania, temporarily detained persons suspected of having committed a criminal act, wanted persons, identification marks of unidentified dead bodies or unknown helpless persons, and other categories. These data are used by pre-trial investigation institutions, border protection, customs, the prosecutor’s office, and other law enforcement institutions in order to ensure the prevention of criminal acts and the fight against crime. The register processes personal data for the following purposes: (1) to investigate criminal acts and ensure their prevention, to organise and carry out searches for persons, and to identify unidentified corpses and unknown helpless persons according to personal identification marks; and (2) to determine the identity of a person in order to ensure control of the movement of foreigners who have been detained by the competent control authorities for illegally crossing the state border by sea, land, or air from a third country and who have not been returned to that country.Footnote 23 The Register of Habitoscopic Data contains data on the external characteristics of a person, obtained by photographing, measuring, and describing the person’s appearance. According to this definition and the list of processed data presented in the same Order, the Register of Habitoscopic Data processes personal data that do not fall under the definition of biometric personal data, meaning that no additional rules on the processing of biometric data should apply. The Order does not mention or otherwise provide grounds for the processing of FRT-related biometric data. However, there was a public announcement in the media about the reorganisation and improvement of the Register of Habitoscopic Data through the ‘Modernisation of Register of Habitoscopic Data using advanced technologies of face recognition and identification tag search’ project.Footnote 24 The project description states:

[I]n the course of project activities, the Personal Face Biometric Recognition subsystem of the Register of Habitoscopic Data was modernized; using advanced facial biometric recognition technologies, the accuracy, performance, and reliability of personal facial biometric recognition was improved. Facial biometric recognition functions of the Register of Habitoscopic Data were modernised using high-performance facial biometric recognition software (NeoFace Watch, manufactured by NEC Corporation), which enables software users to perform facial biometric recognition (1:1; 1:N) in indirect mode, using digital facial photographs, and facial biometric recognition (N:N) in live mode using real-time IP video cameras. The purchased facial biometric recognition software also includes a specially designed software component for smart devices. The ‘Face Recognition’ application of a smart device provides an opportunity for mobile face recognition of a person, that is, taking a picture of a person with a phone and performing a search (recognition) of the face image of such a person based on the captured face image data in the database of the Register of Habitoscopic Data.Footnote 25
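The matching modes named in the quoted description are standard terms of art: 1:1 verification asks whether a probe image matches one claimed identity, while 1:N identification searches an entire register for the closest match. The sketch below illustrates the difference under stated assumptions – the similarity threshold, the toy gallery, and the random ‘templates’ are invented for illustration and are not drawn from the Lithuanian system.

```python
# Hedged illustration of 1:1 verification versus 1:N identification.
# Templates, gallery, and threshold are all invented for this example.
import numpy as np

THRESHOLD = 0.6  # assumed decision threshold; real systems tune this value

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_1_to_1(probe: np.ndarray, enrolled: np.ndarray) -> bool:
    """1:1 mode: does the probe match one specific enrolled template?"""
    return similarity(probe, enrolled) >= THRESHOLD

def identify_1_to_n(probe: np.ndarray, gallery: dict):
    """1:N mode: search the whole register; return the best hit, if any."""
    best_id, best_score = None, THRESHOLD
    for person_id, enrolled in gallery.items():
        score = similarity(probe, enrolled)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id  # None means "no hit in the register"

rng = np.random.default_rng(0)
gallery = {f"record-{i}": rng.standard_normal(128) for i in range(3)}
probe = gallery["record-1"] + 0.05 * rng.standard_normal(128)  # noisy re-capture

print(verify_1_to_1(probe, gallery["record-0"]))  # False: wrong identity
print(identify_1_to_n(probe, gallery))            # "record-1": database hit
```

N:N live mode, as described in the project material, extends the 1:N search continuously over video streams, which is what makes it the most intrusive of the three.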

The Police Department website provides information about a related project. It is stated that:

[This aims to] create a uniform system for collecting personal identification marks and biometric data and submitting them to the Register of Habitoscopic Data of the Ministry of the Interior of the Republic of Lithuania. After the implementation of the project, sixteen specialised workstations for collecting personal identification marks and biometric data and submitting them to the Register of Habitoscopic Data were established in the main police commissariats and detention centres of the country’s counties. It became possible to capture images of unidentified persons, take biometric data, as well as other data of an event related to a person, process them in police custody and detention facilities, register them, and transfer them to be recorded in the Register of Habitoscopic Data. After arresting a person suspected of having committed a crime, it is possible to promptly compare the person’s biometric data with the data contained in the HDR – in this way, this data will be used to reveal criminal acts faster, determine the identity of the person, conduct investigations more efficiently, conduct forensic investigations faster, and, with better quality, ensure crime prevention, public order, and public safety.Footnote 26

However, as mentioned earlier, there is no legal ground for processing biometric personal data in the Register of Habitoscopic Data, nor are there security measures to be applied in order to ensure the protection of biometric personal data based on the criteria established in the Law Enforcement Directive and the Law of the Republic of Lithuania on the Legal Protection of Personal Data Processed for the Prevention, Investigation, Disclosure or Prosecution of Criminal Offences, Execution of Sanctions or National Security or Defence. Furthermore, no storage terms are set for biometric data (data processed by facial recognition technologies that allow the identification of a specific person) in the Register of Habitoscopic Data.

Moreover, the Order of the Minister of the Interior establishing the Register of Habitoscopic Data allows the linking of the Register of Habitoscopic Data with other state registers (Residents’ Register, Addresses’ Register, Register of Application of Preventive Measures, Official Register of Wanted Persons, Unidentified Corpses and Unknown Helpless Persons, Register of Suspected, Accused and Convicted Persons, Official Register of Criminal Acts, Register of Dactyloscopic Data, Register of DNA Data, Register of Foreigners, and Register of Events registered by the Police). However, in the description of the ‘Modernisation of Register of Habitoscopic Data using advanced technologies of face recognition and identification tag search’ project, it is stated that ‘three new integration interfaces have been created: with the Integrated Criminal Procedure Information System (IBPS), the Register of Administrative Offences (ANR), and the Lithuanian National Second Generation Schengen Information System (N.SIS)’. In other words, the Register of Habitoscopic Data has interconnections with other registers that are not found in the relevant regulatory document.

The Ministry of the Interior of the Republic of Lithuania was officially asked to provide an explanation of the differences between the current regulatory framework for the operation of the Register of Habitoscopic Data and the declared updates to the register, which are said to have been already implemented.Footnote 27 However, no response was received.

Such a situation implies not only that the collection of facial images (which falls outside the scope of the ‘biometric data’ definition) and the processing of such images to generate biometric data are not regulated properly, but also that current practices are likely to be in breach of the existing legal basis for such activities (assuming there are no unpublished legal regulations – the existence of which is unlikely, given the transparency requirements in the field of human rights and data protection). First, as already mentioned, the data and information regulated by the Order of the Minister of the Interior on the Register of Habitoscopic Data are limited to facial images and their description, with digital processing using FRT not being mentioned. The use of FRT brings the activities of the Register to a different level of legal requirement – that is, the obligation to conform with the rules applicable to biometric data processing. Second, the interconnections between the Register of Habitoscopic Data and other registers are not clear. It appears that in practice there are links to more registers than provided for in the relevant Order of the Minister of the Interior; however, it is not clear what data may be exchanged. It should also be noted that a special Order of the Commissioner General of the Police restricts the transfer of facial image data (received via public surveillance cameras) to state registers to situations where there is a need to verify or specify information on a particular criminal or administrative offence.Footnote 28 It nonetheless remains plausible that the facial images of both recognised and unrecognised persons – including persons captured by public surveillance video or photo cameras by accident and without any involvement in an offence – could be automatically processed into facial biometric data.

14.4 Public Acceptance of the Usage of FRT by Government Authorities in Lithuania

To begin with, the issues surrounding FRT usage by Lithuanian government authorities are not commonly raised in the media, by NGOs, or on social networks. Like other advances in artificial intelligence (AI), FRT is welcomed in Lithuania as a facilitator of everyday life. For example, the Strategy of Artificial Intelligence in Lithuania encourages integrating AI, including FRT, into all economic sectors. Specifically, regarding the public sector, it states that AI will be helpful in the field of crime control, in optimising the daily work of public institutions, and in improving the provision of public services.Footnote 29 The optimisation of work in particular is a rather attractive promise for most institutions – for example, the Kaunas Information Technology School carried out the ‘Attendance Marking Powered by Face Recognition’ project, which concluded that teachers would save significant time if student attendance were checked using FRT rather than manually.Footnote 30 Moreover, FRT was even suggested as a practical solution for simplifying the checking of persons who had been vaccinated against COVID-19, with proposals to use FRT instead of the official ‘opportunities passport’ system, which was declared ‘outdated’.Footnote 31 In Lithuania, case law on the application of FRT is rare. However, a recent court decision directly relating to FRT usage demonstrates how an argument about convenience may easily transform into an argument about the public interest. The State Data Protection Inspection challenged an order made by a university regarding the procedure for students’ remote examinations and the measures related to the processing of personal data intended to ensure fair behaviour during examinations. This document established that the following personal data would also be processed during the state-level emergency brought about by COVID-19: surveillance photographs, facial biometric data, and audio recordings of the exam. In this case the court declared that the rules of the university were legitimate, as they were necessitated by the public interest.Footnote 32

A strong argument with the public in favour of FRT use is its potential to increase public safety. Municipal institutions boast that they have introduced surveillance cameras that increase the safety of citizens. For example, the Mayor of Marijampolė municipality publicly announced that the network of sixty-four video surveillance cameras installed in 2020 had raised security in the city to a new level:

Let’s start with the fact that stationary cameras were placed at all entrances to the city, monitoring the flow of cars and scanning their licence plates. It is extremely useful for investigating various crimes, such as thefts, robberies from homes or shops. At the same time, it also has a preventive effect, since thieves try to bypass the monitored cities – they don’t want their vehicles or themselves to be captured.Footnote 33

Similarly, a local internet news portal in Mažeikiai district proudly reports:

Almost half a dozen stationary and another fifteen mobile video surveillance cameras in Mažeikiai help to ensure the safety and order of residents in the city. With them, surveillance is performed in the busiest streets and intersections of the city, in public spaces, and near waste management container sites, and transmitted in real time to the monitoring console.Footnote 34

Residents, it seems, are confident in and satisfied with such usage of FRT in public places. People have even complained that the video cameras do not adequately ensure safety, because when an incident occurs the recording proves too blurry or badly angled for all the persons captured to be identified:

It is declared that Vilnius is safe, we see advertisements, billboards, how many cameras are attached. Oh, it turns out that when there is an incident in the middle of the day, not at night, not in a corner, not somewhere behind the trees, when we start to investigate, it turns out that those cameras are of very poor quality, hung up high. Here, perhaps, is the question I would like to raise – why do we need cameras, if, as declared, safe Vilnius is not safe at all in Cathedral Square?Footnote 35

Moreover, FRT in public places is used not only for safety reasons, but also for fun: in Vilnius there was a two-year experiment in which researchers’ devices measured the facial temperature, breathing rate, heartbeat, and emotions of passers-by. The explanation was that the experiment was intended to substitute for a public poll on how people feel at a given moment in a given place, as it was a much more precise way of finding out.Footnote 36

On the other hand, certain aspects of FRT usage have also been criticised in the media. For example, it has been widely and critically discussed that Lithuanian institutions use video surveillance cameras made in China, which raises doubts about the security of the recorded data and potentially threatens state security.Footnote 37 Moreover, the potential for the misappropriation of FRT footage was revealed to the public in a well-known case concerning a policeman who had published online a video, recorded in a police car, in which a drunk woman took off her clothes.Footnote 38

Nonetheless, such examples of FRT usage being publicly criticised are rather rare, and public attention is paid only to cases that raise state security issues or involve a manifest infringement of professional duties. The overall attitude of Lithuanian society towards FRT usage seems to be positive – at least as far as can be seen from media sources. Priority appears to be given to the rapid development of FRT and other AI technologies because the public can benefit from increased convenience and safety, while human rights issues related to threats to privacy, discrimination, or false accusation are left aside. Indeed, no civil society organisation in Lithuania prioritises the threats posed by the usage of FRT and AI. It may therefore be assumed that public discourse is driven by the positions of state institutions and the interests of developers in this field – a critical standpoint is thus lacking.

14.5 What Impact on FRT Usage by Law Enforcement Institutions is Expected upon the Application of the EU Artificial Intelligence Act?

As has been noted, the fragmented regulatory basis and rather weak public control of FRT usage in Lithuania could lead to the uncontrolled usage of FRT in law enforcement. Hopefully, the application of the EU Artificial Intelligence Act may bring about some changes to this situation. In April 2021, the European Commission presented the draft Artificial Intelligence Act, which is intended to introduce high standards for an EU trustworthy AI paradigm. It sets out core horizontal rules for the development, trade, and use of AI-driven products, services, and systems across all industries within the territory of the EU. This proposal introduces a ‘product safety regime’ that is constructed around a set of four risk categories. It imposes requirements for market entrance and certification of high-risk AI systems through a mandatory CE-marking procedure. This pre-market conformity regime also applies to machine learning training, testing, and validation datasets. Thus, according to Mauritz Kop,Footnote 39 the draft AI Act combines a risk-based approach (based on the pyramid of criticality) with a modern, layered enforcement mechanism. This means that as risk increases, stricter rules apply.
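As a rough illustration of that layered structure, the toy encoding below maps the draft Act’s four tiers to their consequences. The tier names follow the draft Act’s pyramid of criticality; the obligation summaries are simplified paraphrases for illustration, not statutory text.

```python
# Illustrative only: a toy encoding of the draft AI Act's "pyramid of
# criticality". Obligation summaries are simplified paraphrases.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright (e.g., certain real-time remote biometric ID)"
    HIGH = "pre-market conformity assessment and CE marking before deployment"
    LIMITED = "transparency duties (e.g., disclose that an AI system is in use)"
    MINIMAL = "no new obligations; voluntary codes of conduct"

def obligations(tier: RiskTier) -> str:
    """As risk increases, stricter rules apply - the layered-enforcement idea."""
    return tier.value

# Most law enforcement FRT would sit in the top two tiers.
print(obligations(RiskTier.HIGH))
```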

Regarding the definition of ‘biometric data’ in the law enforcement area, the proposed AI Act refers to the Law Enforcement Directive.Footnote 40 However, the draft Act provides separate definitions for ‘remote biometric identification system’, ‘“real-time” remote biometric identification system’, ‘“post” remote biometric identification system’, and so on, with a specific regime applicable to each category. For example, the draft Act states that the use of ‘“real-time” remote biometric identification systems’ in publicly accessible spaces for the purpose of law enforcement is prohibited, unless and in so far as such use is strictly necessary for one of the following objectives:

(1) the targeted search for specific potential victims of crime, including missing children;

(2) the prevention of a specific, substantial, and imminent threat to the life or physical safety of individuals or of a terrorist attack;

(3) the detection, localisation, identification, or prosecution of a perpetrator or suspect of a criminal offence.Footnote 41

As stated in the Explanatory Memorandum to the proposed Artificial Intelligence Act, the choice of a regulation as the legal instrument is justified by the need for uniform application of the new rules, such as the definition of AI, the prohibition of certain harmful AI-enabled practices, and the classification of certain AI systems. The direct applicability of a regulation, in accordance with Article 288 TFEU, should reduce legal fragmentation and facilitate the development of a single market for lawful, safe, and trustworthy AI systems. The Act is expected to introduce a set of harmonised core requirements regarding ‘high-risk’ AI systems and to construct obligations for providers and users of those systems – improving the protection of fundamental rights and providing legal certainty for operators and consumers alike. At the same time, the provisions of the regulation must not be too prescriptive and should leave room for member states to take action at different levels regarding elements that do not undermine the objectives of the initiative, in particular the internal organisation of the market surveillance system and the uptake of measures to foster innovation.Footnote 42

To summarise, the adopted Artificial Intelligence Act should bring more precision to the types of FRT used in law enforcement activities and apply more controls to their use. However, the transparency of FRT usage, and the availability of information to the public and academics, may remain as restricted as it is now, unless rising social pressure forces such practice to change.

14.6 Conclusions

There is quite a significant gap between the regulatory rules that set requirements for FRT from a data protection perspective and the rules regulating the activities of individual law enforcement institutions and law enforcement activities. This may be because the usage of personal data, including facial images and their processing, was established in the specific laws regulating law enforcement well before 2016, when the general data protection framework was established. The Lithuanian legal framework thus clearly demonstrates that there are separate rules allowing the collection and processing of personal data (including biometric data) in law enforcement activities, alongside more general rules that require a specific protective regime to be applied to the collection and processing of personal, including biometric, data.

Notwithstanding that, in theory, the standards of data protection seem generally sufficient to protect against the rapid progression of technologies processing personal (and biometric) data and the evident threats they pose to privacy and other human rights, the specific requirements on the processing of personal data, especially biometric data, do not yet appear to be fully incorporated into law enforcement practice in Lithuania. Moreover, the practices used in the development and usage of the Register of Habitoscopic Data do not comply with the regulatory requirements, in particular with the rules regulating the establishment, structure, and use of habitoscopic data. The rules on processing and sharing the biometric data contained in this Register are not sufficient to ensure its proper protection, as required by data protection laws and EU documents.

Regarding public attitudes to the regulation and usage of FRT in law enforcement in Lithuania, it may be noticed that neither society nor NGOs working in the field of human rights show any particular interest in analysing or restricting the usage of FRT by law enforcement institutions. On the contrary, media sources indicate that society at large is satisfied with the increasing number of surveillance cameras in public places, and regards the growing possibility of recognising particular persons in public spaces as a good and acceptable development, since it brings a feeling of safety and order.

Although the adopted EU Artificial Intelligence Act should bring some discipline and clarity to the national regulation of FRT systems, as well as to the reasons for using FRT, there are still doubts as to whether the transparency of FRT usage will increase if societal and organisational attention to and interest in FRT remain at their current level.

15 An Overview of Facial Recognition Technology Regulation in the United States

Mailyn Fidler and Justin (Gus) Hurwitz
15.1 Introduction

The United States generally takes a light-touch approach to regulation, and notably so in the technology sector. This approach applies equally to facial recognition technology (FRT). But while the US regulatory touch may be light, this does not mean that it is either simple or non-existent. This chapter chronicles regulation of FRT in the United States at federal, state, and local levels, and considers potential regulatory issues on the horizon.

Every chapter in this volume discusses FRT, so the reader is assumed to have more than passing familiarity with its salient technological capabilities and limitations. Very briefly, for purposes of our analysis, FRT is a tool by which computers can identify individuals by an image of their face, generally using sophisticated algorithms that compare visual characteristics of an image to vast databases of other faces. We emphasise the role of image-based analysis, because there are other FRTs that may use other indicators. For instance, specialised systems may use infra-red cameras to recognise the structure of blood vessels underneath the skin and use these to identify individuals. This is certainly a form of FRT, but because high-resolution infra-red imaging is not a pervasive technology (yet?), technologies such as this are not currently a focus of this debate. Similarly, research into FRT benefits adjacent fields, such as autofocus technologies that dramatically improve the quality of devices such as cameras. Because such technologies are not used for identification purposes, they generally are not directly implicated by the discussion in this chapter. However, restrictions on the use of one application of FRT could well affect the development or use of other applications of related technology.Footnote 1

FRT presents unique challenges as an identification technology. The simple explanation for this is that faces are pervasive. We present them publicly every time we are outside. We cannot meaningfully obstruct them, for both practical and social reasons – and it bears noting that FRTs are increasingly able to identify individuals even from obstructed images of their faces. And like all biometric markers, faces cannot be altered in the way that an individual can, for instance, change a password or maintain multiple email addresses. Taken together, this makes faces both uniquely pervasive and uniquely persistent as identification tokens.

There is a wide range of potential use cases for FRT, ranging from contextually positive, to neutral, to potentially problematic. For instance, at the possibly innocuous end of the spectrum, FRT can be used to facilitate consumer convenience features, such as information kiosks that recognise individuals and automatically present relevant information to them. Or FRT could facilitate contactless payment or check-out features. Social media platforms can use FRT to automatically identify individuals in posted images. This information could be used for tracking purposes, or to learn information about those individuals that could be used for advertisement targeting. It could also be used to alert individuals when others post recognisable images of them, potentially giving them an opportunity to take privacy-protecting steps in response.

Perhaps the most potentially concerning class of FRT use cases stems from the widespread use of surveillance cameras in public settings – all of which can, in principle, incorporate FRT systems. In this setting, FRT can be used expansively by both private and public actors. For instance, venues such as shopping malls can use FRT-enabled security systems.Footnote 2 FRT can be used to help law enforcement identify or track fugitives or other wanted individuals in public places – or to help search for missing persons. In the most extreme setting, FRT effectively eliminates any sense of privacy or anonymity that individuals may have when in public spaces. Where before the advent of FRT there was ‘anonymity in crowds’, today there is none.

This chapter proceeds in four parts. US law presents a complex set of regulatory tools and institutions – institutions that are often overlapping and often competing with one another. Section 15.2 provides a brief overview of this myriad of institutions. Sections 15.3 and 15.4 then bifurcate the discussion along two sets of institutions: federal and state-level regulatory efforts. Section 15.3 looks at recent federal efforts relating to FRT; Section 15.4 looks at recent state and local efforts. Section 15.5 offers some observations on issues that are on the horizon for the regulation of FRT in the United States.

15.2 Setting the Stage: Technology Regulation in the United States

US law does not present a single, unified, legal system. The United States is a federation of more than fifty states and territories, each with unique constitutions and legal environments. The federal government has its own Constitution and enacts its own laws, which sometimes displace, sometimes co-exist with, and other times are secondary to state law. Beyond that, most ‘law’ in the United States actually comes in the form of regulations enacted by federal or state agencies. And in many settings – perhaps most notably those relating to the technology sector and privacy-related concerns – US law relies extensively on self-regulation and sectoral regulation.

Before turning to any specific US regulatory approaches to FRT, this chapter presents a brief overview of these interrelated regulatory institutions.

The starting point for understanding the US legal approach to FRT – as well as many related issues, such as privacy issues generally – is to understand US law’s emphasis on protecting individual autonomy from intrusion from the government. That is, US law is largely premised on negative rights, or rights to be free from interference from the government. This stands in stark contrast to many other legal systems that are premised on positive rights, or guarantees that the government provide or protect individual liberties.

Thus, for example, the US tradition of privacy law is largely anchored in the First and Fourth Amendments.Footnote 3 These amendments protect the freedom of speech and limit the government’s ability to encroach upon individuals’ other rights without due process of law; beyond the rights found in these amendments, Americans have limited fundamental privacy rights against the government. And the privacy rights facilitated by these amendments look very different from those anchored in other concepts, such as a right to dignity or self-determination.Footnote 4 The First Amendment guarantees that individuals cannot be compelled by the government to speak, including potentially by disclosing information about themselves. And similarly, the Fourth Amendment prohibits the government from searching and seizing individuals’ property – again preventing compelled disclosure of information to the government – absent a specific warrant obtained from a court subject to due process of law. Critically, both of these amendments run only against the government. Neither prohibits private entities from compelling speech or the disclosure of information, for instance as a condition of service. And neither prevents others from sharing or disclosing facts known about others, absent specific indicia of harm.Footnote 5

These principles give rise to a defining doctrine of US privacy law: the third-party doctrine. This doctrine says simply that one has no reasonable expectation of privacy in information disclosed or publicised to a third party. If a user shares information with another private entity, such as an online service, that entity is largely (though not entirely) free to do with that information as it pleases, and the Fourth Amendment provides the user with no protection against government efforts to obtain that information. And if an individual shares their information publicly, that information may be used generally. This includes merely being seen in public – with few exceptions, under US law an individual may have their public activities tracked, documented, and shared by other individuals and private entities. The Fourth Amendment may prohibit the government from tracking individuals in this way, but does not reach other private entities – indeed, government actors may even be able to acquire information about an individual from third parties, even where the government could not have created that information itself.

There is some limited ability for the government to impose narrowing laws. For instance, it can write laws that constrain its own conduct. Laws such as the Wiretap Act and the Stored Communications Act were adopted principally to limit conduct by law enforcement agencies that might otherwise have been considered permissible under the Fourth Amendment. In other cases, most notably where concrete and particular harms are identifiable from information disclosure, the government may be able to proscribe such disclosures. This most often happens in heavily regulated industries that transact in sensitive information, such as health and financial information. For instance, the Stored Communications Act prohibits electronic communications services from disclosing the contents of users’ communications except under specific circumstances. Even then, however, the First Amendment limits the extent of such regulations. For instance, a law that prohibited the exchange of consumer data for marketing or promoting prescription drugs was found to violate the First Amendment rights of pharmaceutical research companies and manufacturers who may have reason to use that data.Footnote 6

The discussion so far has focussed primarily on the federal government as a regulator. The federal government may regulate by writing laws (which happens through Congress); it also relies extensively on federal agencies to promulgate and enforce regulations. In the United States, these regulators are generally sector-specific. For instance, the Federal Aviation Administration regulates the airline industry; the Federal Communications Commission regulates the communications industry; and the Department of Health and Human Services regulates the healthcare sector. Regulators such as the Federal Trade Commission have more general regulatory authority – but the courts and Congress have generally been sceptical of efforts by such generalist regulators to use their authority to regulate pervasive or cross-industry practices.

In addition to the federal government, the states play an important role in regulating these issues. For instance, every state recognises various ‘privacy torts’. These cover harms such as intrusion upon seclusion, disclosure of private information, presenting someone in a false light, and appropriation of likeness. This means, among other things, that enforcement generally occurs in the courts of the state where a given injury occurred. There may also be substantive differences in these laws between the states, including both whether specific causes of action are even recognised and the damages available for violating them. Each state also has its own constitution and legal system – there are sometimes important differences between these constitutions, both between the states and between the states and the federal Constitution. For instance, the federal Constitution has more onerous standing requirements than many state constitutions, which means that federal courts may not be able to recognise certain types of harms as allowing judicial remedies, whereas state-level courts may be able to adjudicate those same claims.

More recently, many states have adopted specific statutes that may relate to FRTs. Illinois’s Biometric Information Privacy Act (BIPA), discussed in Section 15.4, for instance, directly affects the use of FRT and has caused firms such as Facebook to alter the services they offer to individuals located in that state. State-level laws create a complex set of implementation issues, including compliance costs and difficulties – especially where a law’s specific requirements are unclear at the time it is enacted – and the need (at times) to comply with contradictory requirements across state laws. The relationship between state and federal law can also be uncertain. In many cases, the existence of a related federal law will pre-empt state laws – it can even be the case that the non-existence of a federal law or regulation prevents the adoption of a state law. While these issues are foundational to the operation of any legal system, as modern technology has increasingly brought state-level regulations into tension with those of other states and the federal government, there has been a surprising amount of debate in the United States as to how they ought to play out.

Extra-regulatory tools play a significant role in governing technologies such as FRT in US law. Such tools include mechanisms such as self-regulation and self-regulatory organisations, and executive regulatory tools such as government procurement policies. Self-regulation comes up in many contexts. Legally, it is closely related to consumer protection law: self-regulation often requires firms or industries to publicly disclose governance principles and to implement them in a binding way. A failure to do so might be the basis for liability based upon unfair or deceptive practices claims. Self-regulation can also be based upon the threat of legislation or even mere investigation: Congressional hearings into a firm’s or industry’s business practices can be disruptive or costly.

The use of procurement policies as a regulatory tool draws from the government’s role as a large purchaser of goods and services, including of technology products and services. Government entities can decide which goods and services to purchase without the need for legislative or regulatory authority. For instance, the president or a local government entity can often issue a policy directive that prohibits law enforcement from using certain technologies (such as FRT) or that directs how they may be used (such as for locating missing persons but not for tracking criminal suspects). Because the federal government is one of the largest purchasers of goods or services in the country (even the world), these policies have the potential to shape entire industries. A decision to use, or to not use, certain types of technology can cause private industry to invest billions of research and development dollars to develop technologies that meet those needs.

The brief discussion here offers a capsule summary of many aspects of US legal institutions that are relevant to regulation of FRT. It is far from a comprehensive introduction to US law. But for readers unfamiliar with these institutions it introduces several important idiosyncrasies and provides context for the discussion that follows – and for all readers it begins to develop themes that will be seen in the remainder of this chapter.

15.3 Federal Regulation of Facial Recognition Technology

Federal FRT regulation is still nascent in the United States. Administrative agencies have played the biggest role so far, approaching the issue through standard-setting and existing consumer protection regulation. For instance, since 2017, the National Institute of Standards and Technology (NIST), a non-regulatory agency under the auspices of the Department of Commerce, has developed standards for the absolute and comparative accuracy of facial recognition algorithms and publishes results for software available through commercial vendors.Footnote 7 These federal standards inform state approaches. For instance, Virginia allows its police to use only facial recognition software that performs well according to NIST standards.Footnote 8

The Federal Trade Commission, the US consumer protection agency, published a set of ‘best practices’ regarding the use of FRT in 2012.Footnote 9 Publications such as these ‘best practices’ may inform future Commission activity, but do not constitute legally binding rules. More recently, the Commission has enforced general consumer protection principles against at least one software company for misleading consumers about when and how facial recognition software would be used on photos and videos uploaded to its app. In 2021, the Commission settled with a photo app company called Everalbum, which had allegedly represented that it would use facial recognition only after affirmative consent from users when, in reality, the company automatically activated the feature.Footnote 10

The US Congress has begun debating legislation that would regulate both the US government’s own use of facial recognition technology and private use of the technology. But the fate of each of these bills is far from certain. In June 2021, Senator Markey proposed a bill that would regulate the federal government’s own use of facial recognition.Footnote 11 The bill, which was not enacted into law but was reintroduced in 2023, would prohibit any federal agency from using FRT, or information obtained from any such technology, without specific approval from Congress.

Congress also commissioned a study of the federal government’s use of FRT by the Government Accountability Office, which was published in August 2021.Footnote 12 Most of the agencies surveyed used some form of facial recognition to help ensure the digital security of agency devices. Six reported using the tool for law enforcement purposes and five for security purposes, including live monitoring of locations.

The Internal Revenue Service (IRS), the United States’ taxation authority, came under bipartisan scrutiny in 2022 for using FRT to verify the identities of taxpayers online. Lawmakers criticised the agency’s use of the tool as intrusive and as requiring taxpayers to sacrifice privacy for data security. Advocates criticised the tool’s potential for bias.Footnote 13 The IRS eventually reversed its plans and now offers an identity verification tool that does not involve facial recognition software. But even after this controversy, other federal agencies, including the US Patent and Trademark Office, are still moving forward with plans to use the same software.Footnote 14

The US Congress has recently turned serious attention to a potential federal privacy bill that would cover many contexts, including facial recognition.Footnote 15 The bill’s political future is uncertain, but the proposed language would place restrictions on the purposes for which companies could collect certain data, including facial recognition data; require privacy policies; require consent from consumers; and prohibit forms of algorithmic bias. Again, this bill would apply to FRT, but also to much wider categories of data. Similarly, the Federal Trade Commission (FTC) has recently announced a potential proposed rulemaking relating to ‘Commercial Surveillance and Data Security’ in which the Commission is considering, among many other issues, ‘limiting commercial surveillance practices that use or facilitate the use of facial recognition, fingerprinting, or other biometric technologies’.Footnote 16 As with the legislative proposals, the future of this potential rulemaking is uncertain.

15.4 State and Local Regulation of Facial Recognition Technology

States and localities have a primary advantage over the federal government when regulating new technologies: They can usually get regulations on the books faster. And, indeed, states and localities have taken an interest in regulating FRTs, but these non-federal approaches have been varied and fluid. FRT has many uses, so regulatory approaches target a similarly broad span of conduct. Some states regulate government or law enforcement use of FRT.Footnote 17 Some regulate only a subset of government use, such as banning the use of facial recognition on drivers’ licences or on police body cameras.Footnote 18 Other states regulate the technology only as applied to vulnerable populations, although the efficacy of these laws varies.Footnote 19 Yet other states regulate commercial applications.Footnote 20

Illinois’s BIPA (2008) was one of the first state regulations of commercial use of FRT, although the bill encompasses more than just facial recognition. Before a private entity collects biometric information, the law requires (1) notice to the consumer, (2) informed consent from the consumer, (3) written policies about retention and destruction, and (4) limits retention of and profit-making from that information.Footnote 21 Suing under BIPA, consumers reached a landmark $650 million settlement against Facebook for using FRT on photos uploaded to the site without consumer consent.Footnote 22 In another major lawsuit, plaintiffs sued Clearview AI, a company that provided facial recognition software to law enforcement agencies and sector companies, under BIPA. Clearview attempted to argue that its activities – selecting and curating facial images – were protected under the First Amendment in the same way a search engine’s results might be.Footnote 23 But the suit settled, and the terms of the settlement prohibit Clearview AI from selling its database to most private companies.Footnote 24

Other states have comprehensive privacy laws that cover facial recognition data. California’s comprehensive privacy law, the California Consumer Privacy Act, for example, applies to facial recognition data, requiring companies to conform to certain obligations, including giving the consumer notice, access, and the right to have the data deleted.Footnote 25 Texas and Virginia also have their own state privacy laws that apply to biometric information.

State laws that allow consumers to sue companies face two possible hurdles as federal regulation catches up. The first is pre-emption, which occurs when a new federal law essentially displaces a state law on the same topic. Pre-emption could happen if federal legislation regulating FRT is passed. But the most recently proposed federal privacy legislation, the American Data Privacy and Protection Act, would not pre-empt state laws that solely cover FRTs.Footnote 26 BIPA would also remain un-pre-empted under a special carve-out.

State facial recognition laws that end up challenged in federal court – which could happen when a state resident sues a company that is based in another state – face a standing problem. In US law, standing refers to one’s legal ability to bring a suit. To have standing, a person must typically have suffered a concrete injury. Under BIPA and other state FRT laws, an injury might be defined as use of biometric data without consent. Federal courts have split on whether such an injury is concrete enough to satisfy the federal requirements for standing.Footnote 27

A state’s regulation of police or other government actors within its own borders will not face pre-emption or standing problems. Vermont’s law regulating police use of FRT, one of the most comprehensive such laws, provides an example at one end of the spectrum of regulation. The law is straightforward: With one exception (child sexual exploitation), until otherwise approved by the legislature, police may not use FRT or information derived from it.Footnote 28 But Vermont is an outlier: At the other end of the spectrum, some states, such as Oregon, regulate only certain police or government uses of FRT. For example, Oregon’s law prevents FRT from being used in conjunction with police body cameras.Footnote 29

State regulation of this technology continues to be fluid. Many state and local regulations have taken the form of moratoriums with sunset provisions, merely delaying an ultimate decision about regulation until a future date. Others have already added exceptions to their bans, as in the case of Vermont. The Vermont legislature added an exception to what had been a ban on police use of FRT in all circumstances.Footnote 30 Now, police can use the technology in cases involving sexual exploitation of children.Footnote 31 And Virginia repealed its de facto ban on police use of the technology a little more than a year after the ban was passed.Footnote 32 As of July 2022, police in Virginia can use facial recognition software in certain investigatory circumstances, with 'reasonable suspicion', and only if the software achieves an accuracy score of at least 98 per cent (measuring true positives) on NIST metrics, across all demographic groups.Footnote 33 The new Virginia law demonstrates the interplay between state and federal regulation: A state law uses a federal standard to guide the technology's use within its borders.
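
As an illustration only: if Virginia's '98 per cent (measuring true positives)' threshold is read as a true positive rate in the sense conventionally reported in evaluations such as NIST's Face Recognition Vendor Test – an interpretive assumption on our part, not statutory text – the software would have to satisfy, for every demographic group \(g\),

\[
\mathrm{TPR}_g \;=\; \frac{\mathrm{TP}_g}{\mathrm{TP}_g + \mathrm{FN}_g} \;\geq\; 0.98,
\]

where \(\mathrm{TP}_g\) counts searches in which an enrolled person is correctly identified and \(\mathrm{FN}_g\) counts searches in which an enrolled person is missed. Whether the statute's 'accuracy score' maps exactly onto this metric is itself a question of statutory interpretation.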

States, cities, and other localities have also enacted regulations on FRT use within their borders. These efforts are part of a broader trend of localities regulating the use of law enforcement technology. Advocates argue that the benefits of such governance include expanded democratic control over technology, improved responsiveness to changing technology, and more timely governance than post-hoc rules developed through challenges brought in criminal litigation.Footnote 34 At least sixteen localities throughout the United States had passed facial-recognition-specific regulations as of July 2022, with others having comprehensive police surveillance regulations that also apply to facial recognition.Footnote 35

Both states and localities can regulate technology such as facial recognition in ways and at speeds that the federal government cannot. And experimentation with regulation of such technology at state and local level demonstrates ways in which these governance units are the laboratories of democracy. At the same time, the impact of these laws is limited to the boundaries of states and localities, affecting fewer people than federal regulation would. And states and local governments have their own types of problems with interest capture, raising concerns that, for instance, large companies might be able to outgun local privacy advocates. Local and state regulations are also much more easily reversible than certain federal regulations, as we have already seen with facial recognition regulation in some states and cities.

15.5 Issues on the Horizon

Prediction is an always-fraught, if often necessary, endeavour. When it comes to predicting the future of FRT, and regulation of FRT, in the United States, it is more fraught than ever. US legal and political landscapes today are tempestuous, perhaps nowhere more so than where they relate to technology. There has been growing concern about technology in recent years on both the political left and the political right – albeit animated by very different concerns. At the same time, recent judicial decisions have made the prospects of regulation less, not more, likely. A number of issues relating to FRT that are on the horizon are identified and discussed here.

We start with topics that are most likely to be discussed but that also seem least likely to actually translate into action: Federal legislation or regulation intended to broadly limit or even prohibit the use of FRT.Footnote 36 Such legislation is unlikely to come to pass in the United States without a strong bipartisan coalition supporting its adoption. Given the potential for abuses of FRT by the government and the American tradition of scepticism of government power, an outside observer might think this coalition would readily manifest. But there is strong countervailing support for law enforcement and 'law and order' policies. Narratives about the use of FRT to find missing children and track dangerous criminals – whether substantively valid or not – are likely to carry great weight in policy discussions and make it unlikely that the necessary coalition will form.

The dynamics are somewhat different when it comes to the potential for administrative regulation of FRT. Regulatory agencies enjoy some insulation from the political process: Congress has already delegated authority that may empower agencies to adopt regulations. The political question is therefore more limited to whether an agency's leadership is interested in adopting given regulations. In the case of the current FTC, for instance, it is overwhelmingly clear that the agency is interested in adopting rules that could address the use of technologies such as FRT: As discussed previously, the Commission has recently issued an Advance Notice of Proposed Rulemaking relating to Commercial Surveillance and Data Security practices, which includes some consideration of rules that could limit, or at least affect, the use of FRT in the United States.

However, while the FTC may have the political will to adopt such regulations, it is less clear whether the courts will hold that it has the legal authority to do so. Recent trends in US administrative law have been hostile to expansive claims of authority by federal agencies, especially when adopting regulations that would affect entire industries or areas of commerce.Footnote 37 It does seem likely that the FTC would be able to adopt narrow rules that prescribe specific, and likely modest, requirements governing the use of FRT; it seems less likely, however, that the courts would uphold any broad regulatory moves that the FTC makes, especially were they so broad as to proscribe the use of FRT or similar technologies.

Looking beyond the borders of the United States, questions will likely arise about whether US law can be harmonised with, or otherwise show comity for, FRT regimes adopted in other countries. This will most likely come up in the context of European regulations. The relationship between US and European regulations is likely to follow much the same trajectory as we have seen in the context of privacy regulations – most notably the challenges to the Safe Harbor and Privacy Shield in the Schrems litigation. To the extent that European regulations are grounded in European conceptions of fundamental rights, while US regulations are grounded in the First and Fourth Amendments, the two regimes are likely to conflict.

The examples here are all likely to dominate discussion about FRT, but also seem unlikely to prove viable pathways to regulation. This does not mean, however, that FRT regulation is unlikely. Indeed, as discussed in Sections 15.2 and 15.3, we are already seeing FRT regulation at federal and state level. And these are also where we are likely to see substantive debates over the scope, impacts, and implementation of such regulations. Such regulations, and debates, are likely to focus on government use of FRT, government access to information collected through private FRT systems, specific uses of FRT or FRT-related practices, and, more generally, issues arising from the relationship between competing state laws and federal regulation.

We are likely to continue to see governmental entities at federal, state, and local level consider whether, and under what circumstances, to use FRT. It is unlikely that there will be significant uniformity in approaches adopted. While most of these efforts will result from legislatures acting to limit the scope of their executive’s authority – for instance, by prohibiting the use of FRT by law enforcement, school systems, or other public entities – it is also conceivable that some states could find their hands forced. State constitutions embody myriad conceptions of privacy. It is certainly possible that use of FRT by governmental entities may be deemed to violate constitutional privacy protections in some states, and it would not be surprising to see litigation pushing theories such as this in coming years.

To take an example, a federal judge in Ohio recently held that a public school's use of proctoring software that activates the video camera on a student's computer to 'scan' the room in which they are taking an exam may constitute a search of private property in violation of the Fourth Amendment.Footnote 38 Similar claims could potentially be levelled at FRT systems: if courts hold that they enable pervasive tracking of individuals on an automated basis, they might be deemed to violate a reasonable expectation of privacy. Litigation challenging the constitutionality of such systems is at least possible, and probably likely, at both the state and federal level. Such restrictions would be unlikely to apply on government property or in government facilities, but could easily apply in private facilities or even in public places.

Government access to private FRT systems or data, discussed earlier, will also continue to be an issue in coming years. Here we are already seeing moves to limit government access to these systems, including requirements for judicial oversight of the processes by which, and the circumstances in which, law enforcement requests these materials. Of all efforts to regulate FRT in the United States, this is likely the least controversial and the most likely to continue to develop apace.

Limitations on government access often operate in practice by forbidding private entities from disclosing information to law enforcement. These regulations might not directly prohibit government use of information unlawfully disclosed to law enforcement (although, increasingly, they do). In this sense, they illustrate a general approach to regulating the private use of FRT: Most regulation will not prohibit the general development or use of FRTs; rather, restrictions on private entities will likely focus on specific use cases (e.g., disclosure of information generated by an FRT to law enforcement). One can speculate on a range of use cases for FRT that could be subject to regulation. For instance, the use of FRT to help evaluate job candidates could conceivably be regulated – depending upon the circumstance, such uses could even already run afoul of existing anti-discrimination laws.

A final set of issues on the horizon relates to the interplay between potential FRT regulations at the state and federal level. Importantly, these issues may arise in a range of contexts adjacent to FRT regulations. For instance, state-level privacy regulations that require disclosure of information collected about individuals, or minimisation of such information, would easily affect the technological and business practices of firms using FRT. If multiple states adopt conflicting FRT-related regulations, there will be complex questions about how those regulations are applied in practice. And if the federal government also adopts other regulations – or if it deliberately decides not to adopt such regulations – there will be complex questions over whether the federal approaches to FRT pre-empt state-level regulations. On balance, this is all to say that regulation of FRT in the United States, to the extent that there are efforts to adopt such regulations, will remain fraught and unsettled for many years to come.

15.6 Conclusion

This chapter has considered the state of FRT regulation in the United States. The United States is not a monolith. It is a federation comprising a central federal authority along with more than fifty states and territories and hundreds of localities – all of which have legislative, executive, and administrative regulatory apparatuses. But they are also governed by the federal Constitution and share common foundational values. These values tend to limit the extent to which FRT can be regulated as a matter of law, and they carry with them a general disposition towards light-touch regulation.

This is not to say that FRT is, or will remain, entirely unregulated in the United States. For instance, the same disposition against government interference in private matters has already begun to result in regulations restricting the use of FRT by government actors. We are likely to see more of these regulations, including restrictions on private parties sharing access to their FRT systems with state actors (much in the same way that laws such as the Stored Communications Act prevent electronic communications services from sharing the content of communications with law enforcement without a court-issued warrant).

Outside limited circumstances, however, more expansive regulation of FRT in the United States is unlikely in the foreseeable future. While Congress and the Federal Trade Commission are both currently considering privacy regulations that might bear upon FRT to some extent, it is uncertain whether these efforts will be successful. Even if they are, those regulations will almost certainly face serious challenges under contemporary understandings of the United States Constitution, and so will be subject to extensive and lengthy litigation. The US approach to FRT regulation is ultimately governed by broader US conceptions of privacy and regulation generally, which remain contested and narrower than those of other jurisdictions.

16 Regulating Facial Recognition in Brazil: Legal and Policy Perspectives

Luca Belli, Walter Britto Gaspar, and Nicolo Zingales
16.1 Introduction

Facial recognition technology (FRT) has been in use by the Brazilian public administration for various purposes since at least 2011. It has seen an uptick in the 2018–2019 period, with noteworthy implementations in Rio de Janeiro and São Paulo, among others.Footnote 1 Nonetheless, there is no general legislation or sectoral regulation on the use of FRT – thus leaving unregulated both its general implementation and specific uses, such as for public security, public transportation systems, or identification.Footnote 2

This chapter aims to identify vulnerabilities and opportunities posed by the use of FRT in Brazil, focussing on the current legislative and regulatory landscape. It describes the evolving legislative framework and assesses its adequacy to deal with the risks to fundamental rights posed by such technologies.

To do so, we assume the reader's prior knowledge of the basic functioning of facial recognition. This allows us to dive deeper into the literature concerning the adoption of FRT in Brazil (Section 16.2), prior to reviewing the existing legislation relating to its deployment, especially in the context of law enforcement (Section 16.3). The final sections (Sections 16.4 and 16.5) conclude with a brief analysis of this normative framework and put forward a few suggestions for its improvement.

16.2 Implementations of FRT in Brazil

Information on the implementation of FRTs in Brazil is scattered. States, municipalities, and the federal government have all implemented projects utilising the technology, frequently without prior notice or consultation with civil society, which has hampered transparency and accountability. FRT is frequently introduced in the context of ‘Smart City’ programmes aiming at enhancing urban safety, in the absence of specific regulation and with no guidance from the National Data Protection Authority (ANPD) on how data protection impact assessments must be performed.Footnote 3 Alongside this, there are private implementations of FRT, which are even less transparent since there is no disclosure obligation of any kind.

Among several attempts at mapping FRT implementations in Brazil, the most recent one is Venturini and Faray,Footnote 4 which, drawing on access to information requests, search engines, and interviews with key actors, identifies six projects where facial recognition was being implemented. One is the emotion recognition contract for advertisement display purposes between Via Quatro, a private operator managing one of the subway lines in the city of São Paulo, and AdMobilize, an artificial intelligence (AI) analytics company headquartered in the United States.Footnote 5 Given the lack of notice and information over this contract, Idec, a civil society organisation active on consumer rights issues, obtained a blocking injunction pursuant to a civil public action to uphold the rights of the users of the São Paulo subway system, arguing that there was no consent for the collection and use of biometric data; no information on the functioning of the technology, the data processing, or its purposes; and no possibility to exercise data subject rights. Another project involved the subway administrator, Companhia do Metropolitano de São Paulo,Footnote 6 with the aim of installing FRT cameras for subway security in stations.Footnote 7

Two other projects involved surveillance of public spaces: one in the city of Campina Grande, in the state of Paraíba, and the other in Itacoatiara, in the state of Amazonas. The former involves FRT-enabled cameras running Facewatch installed during the city’s São João festival, beginning in 2019. The cameras were still being utilised in 2022, when they aided in the arrest of twenty-five people during that year’s festival and were expanded to two other cities in the same state (João Pessoa and Patos) via a command-and-control centre, totalling 1,600 cameras.Footnote 8 In Itacoatiara, a command centre was also created, with sixteen face recognition cameras for public security purposes.Footnote 9

Finally, the authors highlight the use of FRT by the Federal Data Processing Service (SERPRO), a public company, to confirm the identity of driver's licence holders; and by SERPRO and the Social Security information technology company (DATAPREV) to confirm identity and provide proof of life for social security beneficiaries.

A paper focussed on FRT application in public security and police work reports on the use of these technologies in the states of Bahia, Rio de Janeiro, Santa Catarina, and Paraíba from March to October 2019.Footnote 10 Although the specifics (contracting parties, public procurement format, etc.) are not disclosed, the article contains insightful information on the efficacy of such systems, which led to 151 arrests in total. Notably, out of forty-two cases where information on race was available, 90.5 per cent of suspects were black and 9.5 per cent were white.Footnote 11 The research also analyses one specific case where FRT was applied for four days during the Carnival at Feira de Santana, a city in the state of Bahia, with an efficacy rate of less than 4 per cent.Footnote 12

A more recent work by Nunes and colleagues goes into more detail about FRT in Rio de Janeiro.Footnote 13 The researchers scrutinise a pilot project involving the deployment of FRT in Copacabana during Carnival 2019, which was later expanded to two more areas of the city. The two-phase FRT programme for public security was managed by the State Military Police Office (SEPM) in a partnership with Oi, one of the major telecommunications operators in Brazil. First, thirty-four FRT-enabled cameras were installed in Copacabana for a ten-day period, coordinated by four military policemen trained by Oi and Huawei.Footnote 14 The programme was then extended for two more months in the same year in additional locations in the city, increasing the number of cameras to ninety-five.

The database against which matches were checked was fed by information from the state’s Civil Police Office (Sepol), the Department of Motor Vehicles (Detran), and the missing and wanted persons database. SEPM indicated that the data was encrypted, and information regarding persons identified via facial recognition was stored and made available to public security organs and criminal justice for purposes of planning, investigation, and enforcement, while false positives were immediately discarded by the system operator at the monitoring site.Footnote 15

During the first phase, 2,993,692 facial images were captured, with 2,465 face correlations being established between those and the database records. This amounts to a 0.082 per cent match rate. There are no specific numbers for the second phase alone, but in total, from March to October 2019, sixty-three people were arrested, two missing persons were located, and five vehicles were recovered thanks to the use of FRT.Footnote 16
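
As a quick arithmetic check on the reported figure:

\[
\frac{2{,}465}{2{,}993{,}692} \approx 0.00082 \approx 0.082\ \text{per cent}.
\]

The sixty-three arrests across both phases represent a still smaller yield relative to the volume of images captured, although, as noted, no separate image count is reported for the second phase.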

Another study by Instituto Igarapé identifies forty-seven use cases of FRT in Brazilian cities from 2011 to 2019, spanning sixteen states out of the twenty-seven federal units composing the Brazilian federation.Footnote 17 Most instances (twenty-one) were related to public transportation, namely fraud prevention in free-pass programmes. These were followed by public security (thirteen cases), education (five cases), and border control (four cases).Footnote 18 Critically, the researchers report that 'many of the publicly announced cases focus mainly on the expected efficiency and implementation and less so on informing results'.Footnote 19 This perception is shared by Nunes and colleagues in their analysis of the aforementioned Rio de Janeiro case, which points to a lack of metrics enabling performance reviews and stresses several instances where clarification is needed to evaluate the projects' objectives and results.

Traditionally, Brazilian municipalities have adopted poor data governance practices, with sensitivity to personal data protection only kicking in after sanctions under the General Data Protection Law (LGPD) became applicable in August 2021.Footnote 20 From this perspective, a central concern regarding FRT use is the possible re-purposing of the personal data that has been collected, notably the sharing of such information with the government. For instance, in the case of FRT usage to prevent abuse of gratuity programmes in public transportation, it was revealed that the processed data may also be shared with the security forces 'when requested'.Footnote 21 Similar concerns apply when FRT is used to monitor student attendance, as in the municipality of Itumbiara, in the state of Goiás. Questioned by researchers, the municipal education office gave assurances that the data were stored on the same device on which they were captured and that a prior Data Protection Impact Assessment had been done, although the assessment was not shared publicly or with the researchers.Footnote 22

Despite existing assurances from public bodies responsible for FRT implementation, the risk of surveillance creep remains significant – involving not only the possible transfer of biometric data to third parties, but also the receipt of such data from them. In July 2021, for instance, the governor of Bahia announced the expansion of an FRT project from Feira de Santana to seventy-six other cities in the state, for a total of 4,095 cameras, in a R$665 million partnership with a conglomerate formed by Oi and the security tech company Avantia. In making the announcement, the governor also revealed an ambition to have private security cameras connected to the system, allowing for 'banking agencies, shopping malls and condominiums […] to connect their cameras and deliver the movements and faces of passers-by to authorities'.Footnote 23

16.3 Current Legislation, Regulation, and Governance

There is currently no specific law regarding FRTs in Brazil, whether for public or private ends, and whether in security, transportation, or any other area. Furthermore, there is no specific law or regulation framing the usage of AI systems in Brazil, although legislative efforts are being made. There is, however, a set of laws that regulate specific areas of FRT and can be used to build the basis for a regulatory framework; they are briefly explained in this section.

16.3.1 General Data Protection Law (LGPD)

A first important port of call is the LGPD. Four key elements for FRT purposes are:

(1) its characterisation of biometric data, such as facial images, as 'sensitive personal data' (Art. 5º, II), which means that its processing can be grounded only on a more limited range of legal bases (Art. 11º);

(2) the overarching principles of data processing, which set the fundamental elements of all personal data processing, including those involved in FRT (Art. 6º);

(3) the right to revision regarding automated decision-making based on personal data affecting the data subject's interests (Art. 20 and Art. 20.§1); and

(4) the limited scope of the LGPD when it comes to security, prevention, and repression of criminal activities, and the obligation to perform a data protection impact assessment in such cases (Art. 4.§3).

The first point refers to the fact that, because it is categorised as sensitive, the processing of the codified data of every individual's facial print, used in face recognition to identify matches, must be based on the explicit and informed consent of the data subject or else be 'indispensable' to one of the seven legal bases set out in Article 11. One can imagine some of these alternative legal bases being in principle suitable to justify FRT for public interest purposes. For instance, 'prevention of fraud' can justify one-to-one authentication of an individual who needs to access a secure electronic system (an example being biometric authentication for one's own bank account). Moreover, 'compliance with legal or regulatory obligations' allows data controllers to conduct FRT operations when this is imposed as a legal or regulatory obligation; and 'execution of public policies' allows the shared use of information between public entities or between public and private entities, upon prior authorisation, for the execution by the public administration of public policies.

This latter provision is a peculiarity of the Brazilian framework, allowing the sharing of datasets between government departments, executive agencies, and private entities that have been involved in the execution of public policies. However, this can only be done under terms and conditions previously defined in legislation or equivalent legal sources (ordinances, resolutions, regulations, etc.), which provide a mechanism to ensure the transparency and accountability of such processing.

The second relevant aspect of the LGPD concerns its principles, which construct concrete obligations for data controllers and processors. Good faith (the duty to maintain honest and trustworthy conduct in the data processing relationship) opens the set of principles contained in Article 6 of the law,Footnote 24 followed by principles similar to those found in other data protection frameworks – for example, purpose limitation, data minimisation, and security. Of particular interest for FRT are the principles of non-discrimination and of responsibility and accountability, in conjunction with the transparency principle. Since LGPD principles must inform and shape the whole design and implementation of data processing, controllers must be able to demonstrate that specific measures have been taken to mitigate risks, such as biased and unfair processing, and that those measures have been communicated in a clear and intelligible manner.

Owing to the invasive nature of FRT, the correct implementation of the LGPD principles requires the performance of periodic data protection impact assessments. This is particularly relevant when FRT is deployed for security purposes by public organs and law enforcement agencies, as only auditable technologies can be legitimately used by state bodies without undermining constitutional guarantees. Unfortunately, this is far from being the case. Furthermore, a sound implementation of the transparency principle is key in the case of FRT. This not only demands an analysis and audit of FRT's impact, but also requires that the information resulting from such analysis be communicated transparently, in accessible language.

The third key element of the LGPD that is relevant for FRT concerns automated decision-making. According to the LGPD, these decisions should be structured in a way that allows for revision,Footnote 25 which, logically, also demands that data subjects be informed that they are subject to automated decision-making.Footnote 26 A hard question is what form of communication is suitable for giving notice: would this require a 'just-in-time' notification, or would consent to a generic statement in a controller's privacy policy be sufficient?

Lastly, it is important to mention that the LGPD creates a rather large exception within the data protection framework for any data processing aimed exclusively at fostering public security, national defence, the safety of the country, or crime investigation and repression (Art. 4). While this exception currently leaves the door open to a wide range of illegitimate uses by state organs, the LGPD also foresees that these exceptions 'shall be governed by a specific law, which shall contain proportional measures as strictly required to serve the public interest, subject to due process of law, general principles of protection and the rights of the data subjects set forth in this Law' (Art. 4 §1). Furthermore, paragraph 3 of the same LGPD article also provides a key element for the purposes of FRT regulation, specifying that the ANPD will issue technical opinions or recommendations regulating the exceptions mentioned earlier and shall request a data protection impact assessment from the persons in charge of data processing for such purposes. Hence, we may assume that whenever FRT is used for safety and security reasons it is necessary to undertake a data protection impact assessment. Moreover, the ANPD has general competence to regulate how data protection impact assessments should be conducted (Art. 55-J, XIII).

16.3.2 Additional Legislation

In addition to LGPD, some other normative references are relevant to FRT. First of all, the Brazilian Constitution contains provisions on intimacy (Art. 5, X), secrecy of communications (Art. 5, XII), habeas data (Art. 5, LXXII), and personal data protection (Art. 5, LXXIX).

Secondly, the Brazilian Consumer Code applies to business-to-consumer relations, potentially impacting the viability of FRT deployments in consumer-facing applications, products, and services. For instance, it contains provisions on databases, anticipating many of the rights that would later be afforded to data subjects by the LGPD (the Code precedes the LGPD by more than two decades). Importantly, it establishes strict liability in consumer relations (Art. 12 and Art. 14); an obligation to maintain correct and updated data; and the right of the consumer to be informed when a new record of their personal data is created (Art. 43).

Another relevant provision is Federal Decree no. 10.046/2019, which establishes guidelines for the sharing of data among the Federal Public Administration. This norm allowed the unification of fifty-one existing databases and created two new ones (including biometric and biographic data), and was criticised for laying out insufficient safeguards of compliance with the LGPD.Footnote 27 Two actions challenging the constitutionality of the Decree were filed before the Constitutional Court in 2021, due to its alleged clash with the fundamental rights to privacy and data protection. In a unanimous decision, the court interpreted the Decree in conformity with the Constitution, clarifying that data sharing must be conditioned on:

(1) the pursuit of legitimate, specific, and explicit purposes;

(2) compatibility with the stated purposes;

(3) compliance with the LGPD's public sector norms;

(4) transparency and publicity, including the control mechanisms for access to the database, the insertion of new data, and the security measures enabling the imposition of liability on the relevant public servant in case of abuse;

(5) respect for the norms established in specific legislation and case law in the operations of data sharing and intelligence;

(6) the existence of norms of civil responsibility of the state in case of illegality; and

(7) the existence of norms of responsibility for administrative impropriety of any agent acting on behalf of the state in case of intentional violation of the duty of publicity established by Article 23 of the LGPD.Footnote 28

At the same time, the ruling found unconstitutional the part of the Decree concerning the composition of the Central Committee for Data Governance (the entity that may formulate the concrete norms and standards for data sharing under the Decree). The court gave the government sixty days to open the Committee's composition to effective participation by other democratic institutions, with minimum guarantees against undue influence on its members. In other words, the ruling enshrined the importance of both transparency and multi-stakeholder participation in the formulation of policies regarding government use of data.

Finally, Ordinance no. 793/2019 of the Ministry of Justice and Public Security is directly concerned with the use of FRT for public security purposes. This norm establishes financial incentives for security-oriented actions aimed at implementing the National Public Security and Social Defence Policy. FRT is explicitly mentioned in Article 4, §1, III, b,Footnote 29 which allows the application of funds from the National Public Security Fund (which reached more than 1 billion reais in 2021 and almost 2 billion reais in 2022) in the implementation of technologies such as video monitoring systems with facial recognition solutions, optical character recognition, and AI.Footnote 30 Although the intent to increase such applications is expressed, no safeguards in terms of transparency and accountability are described.

16.4 Discussion: Is the Existing Framework Adequate?

To assess whether the existing (or proposed) legal framework regarding AI and FRT in Brazil is adequate to protect fundamental rights, it is first necessary to understand the risks associated with the application of these technologies. A brief discussion of these risks is presented here; based on that understanding, we then draw some conclusions.

16.4.1 The Probable Risks of FRT Deployment in Brazil

One of the most frequently cited risks is the discriminatory consequences that these technologies may have. In particular, systems trained on discriminatory datasets will likely reproduce the biases and discriminatory tendencies inferred from the data. Systems developed by unrepresentative teams may suffer from more subtle dysfunctionalities – resulting from issues such as limited selection criteria set by developers, the conceptualisation of the elements that will constitute inputs and outputs of the system, and a myopic view of the results in terms of their discriminatory impacts. Poorly designed systems and datasets might disproportionately target already marginalised populations, with the consequence that new, disproportionately skewed data are generated and fed back into the system.

Possa highlights how, in a country where black individuals made up 66.7 per cent of the national prisoner population in 2019 and where in 2015 the Supreme Court declared the general state of the carceral system an 'unconstitutional situation', adopting public security technologies that harm the presumption of innocence and exhibit biases against structurally discriminated peoples only reinforces that unconstitutionality.Footnote 31

All these issues result in systems that are inept at dealing with certain aspects of the social phenomena they are built to address – in the case of FRT, systems that recognise non-Caucasian and non-male faces far less reliably, resulting in undue targeting of these groups, as many studies and cases have previously shown.Footnote 32 This also seems to be the case with some of the previously discussed implementations of FRT in Brazil, as anecdotal evidence suggests.Footnote 33

Those problems are compounded by AI systems' opacity, which impairs accountability and public oversight.Footnote 34 This is further complicated by the information asymmetry between the private actors that supply these technologies and the public using them, or the public institutions that contract AI services. As stated by Mazzucato and colleagues: 'The proprietary nature of most AI applications means the public lacks insight as well as the ability to design proper oversight. Advancing technical capabilities without matching adjustments to governance, institutional and organisational models is leading to failure in effectively evaluating the risks of AI and managing its opportunities.'Footnote 35

On top of all this, there are issues particular to the Brazilian context. As systematically demonstrated by Reis and others, and reflected in anecdotal evidence from various other authors previously referenced, most of the FRT being implemented by the Public Administration in the country comes from foreign sources, especially China, Israel, the United States, and the United Kingdom. In many instances, contracting was based on aggressive negotiation tactics aimed at securing market dominance and locking in the contracting administrations.Footnote 36 This trend is particularly marked in Latin American countries.Footnote 37

This raises concerns about the strategic value of the technologies and the underlying personal data being collected – especially considering that data sharing terms with these private companies are not always publicly transparent.Footnote 38 Another concern is whether the state can incentivise the emergence of national AI and FRT capabilities, directing their development towards interests aligned with national societal goals or 'missions',Footnote 39 and strengthening the national innovation system.Footnote 40

Much has been said in public debate about the harms of algorithmic bias and the need to combat or fix it. Powles and Nissenbaum observe that the focus on solving bias reflects society's deference to technologists even in the fields of ethics, law, and the media, and warn that attention should not be diverted from more fundamental questions: which systems really deserve to be built, which problems most need to be tackled, who is best placed to build them, and who decides?Footnote 41 Souza and Zanatta add to this debate,Footnote 42 connecting the application of FRT to a broader neo-liberal tendency towards the decentralisation of state functions to technology firms, and the associated push from the market in the context of 'surveillance capitalism'.Footnote 43 This 'techno-solutionism' serves as a smokescreen over the deeper-seated issues of structural racism and the surveillance state,Footnote 44 forcing public debate into the question of how to make FRT fair and efficient rather than whether it is truly needed and proportional to the desired ends. An adequate regulatory framework should deal with these issues.

16.4.2 Moving from the Existing to the Ideal FRT Framework for Brazil

Based on the analysis conducted in the previous sections, we can argue that the current and proposed framework for FRT regulation adopts a rather lenient approach to the ex-ante regulation of risk, leaving the public administration a measure of discretion in controlling high-risk applications. Such a choice may be detrimental in terms of compliance with the LGPD principles, especially considering that the ANPD has demonstrated a remarkably timid stance in overseeing the implementation of the LGPD by public bodies and law enforcement agencies – de facto leaving the correct implementation of the existing framework to the good faith and goodwill of the bodies that deploy FRT.

Moreover, the existing framework does not foresee a differentiated approach that customises specific obligations and safeguards based on the purposes for which FRT is implemented. As we have emphasised, the purpose for which FRT is deployed – for instance, identification in the context of crime prosecution versus authentication – has a considerable impact not only on the legislation that will be applied, but also on the obligations of the data controller and the guarantees of the data subject. The complexity of this situation might be exacerbated further by the jurisdictional uncertainty over which administrative level is competent to regulate the use of FRT. Indeed, the regulation of security is a state matter, but data protection falls within exclusively federal competence.

In addition, we argue that more information needs to be pro-actively made available by public administrators and public service concessionaires on intended FRT implementations, adopting an accountability-first stance and a transparency-by-design approach. As we have emphasised, information should be communicated in a clear and intelligible manner and should at least specify: when, where, and why FRT is used; what databases are used to train the FRT systems; what data is collected; what measures are taken to guarantee information security; with which entities data are shared, if any; and what indicators will allow evaluation of the performance of the FRT deployment, such as how many investigations and criminal proceedings are carried out and how many crimes are solved based on the use of the FRT system under discussion.

Lastly, it seems necessary that the ANPD enact regulations and publish technical guidelines on specific aspects of the data processing pipeline, which are essential to make sure FRT systems are used in compliance with the LGPD. In fact, as long as critical elements such as data anonymisation, algorithmic accountability and auditing, data protection impact assessments, and data security measures remain undefined, the compliance of FRT systems with the LGPD will continue to be extraordinarily challenging.

In Brazil, the main legal reference concerning the use of FRT, owing to its intrinsic use of personal data, is the LGPD, which is enforced and detailed by the ANPD. There are, however, other concerned institutions that should be included in the discussion. One such institution is the Governance Committee of the Brazilian Artificial Intelligence Strategy, a multi-stakeholder body created in April 2021 and tasked with translating the strategy – which has been criticised for being overly general and more akin to a letter of intent than to an actual strategy – into concrete objectives and actions.Footnote 45 The ANPD, however, only started participating in the Committee at its fourth meeting, in December 2021, and no specific progress on these matters has yet been announced.Footnote 46

16.5 Conclusions

All in all, there are still substantial gaps in the regulation of AI and, consequently, FRT in Brazil, although a strong basis of principles is in place and there are important laws providing the necessary basis for the judicial protection of fundamental rights – as demonstrated by the Via Quatro case. A deeper issue with the implementation of these technologies is its scattered character, with deployments popping up in news announcements as sure-fire techno-solutions to issues such as efficiency and public security. As discussed, this scattered nature is equally observed in the legislative scenario, with federal, state, and municipal norms and proposed bills creating a cacophony that ultimately impairs the development of a strong position on the role that these technologies should play in society.

In this context, one major institution that might play an important role is the ANPD, which was given ample authority not only to control, but also to guide data processing activities in Brazil. The ANPD must embrace its role as a technical agency, providing market and public implementations of innovative data-based technologies with the guidelines necessary to build technological solutions that respect fundamental rights, and with the means to innovate within those limitations.

Another institutional actor that could play a bigger role in the future is the Governance Committee created to implement the National Artificial Intelligence Strategy. This multi-stakeholder body is seated within the Ministry for Science, Technology, and Innovation, a crucial actor in promoting the full enjoyment of the benefits that may arise from science and innovation, especially in promoting economic and social development. However, this must be guided by a strategic vision that recognises the position Brazil occupies in the process of recovering its industrial basis and catching up with advanced economies.

Overall, the debate on FRT in Brazil has been marked by two movements that appear contrary to each other. On the one hand, reliance on FRT as a solution to immediate issues brings about hasty implementations that do not provide the necessary degree of transparency, accountability, proportionality analysis, and sensitivity to the fundamental rights to privacy and data protection. On the other hand, as mentioned in the opening of Section 16.2, civil society has reacted with increasing degrees of rejection of these technologies, culminating in a generalised sentiment in favour of banning FRT in the surveillance of public spaces. In the midst of these movements, existing and proposed norms seem to tackle some of the problematic aspects of FRT use, but fall short of giving a systematic and unified answer.

17 FRT Regulation in China

Jyh-An Lee and Peng Zhou
17.1 Introduction

Facial recognition technology (FRT) applications enjoy a staggering level of penetration in China. Valuing the technology's function in facilitating social control and public security, the Chinese government has not only implemented it widely,Footnote 1 but also used it to build a national surveillance architecture together with other mechanisms, such as the social credit system.Footnote 2 When providing telecommunications, banking, transportation, and other services, an increasing number of state-owned enterprises record citizens' facial data for their FRT systems.Footnote 3 FRT-empowered applications are also commonly adopted in the private sector,Footnote 4 for functions such as online payment, residential security, and hospital check-in.Footnote 5 The rapid development and wide adoption of FRT has made China a global leader in this field. In a recent round of the 1:N section of the US National Institute of Standards and Technology's (NIST's) Face Recognition Vendor Test, where algorithm providers compete for accuracy, the Hong Kong-based industry giant SenseTime came out on top, together with another China-based service provider.Footnote 6 SenseTime, as Asia's largest artificial intelligence (AI) software company, has a 22 per cent share of China's computer-vision market.Footnote 7 Moreover, surveillance camera makers, such as Hangzhou Hikvision Digital Technology, Zhejiang Dahua Technology, and Megvii Technology, are also leaders in the industry and provide essential equipment for China's pervasive implementation of FRT.Footnote 8

FRT has triggered serious privacy concerns in many countries, and China is no exception. Although some commentators indicate that Chinese culture is more tolerant of privacy intrusions than that of Western countries and that many Chinese favour FRT because of increased security or convenience,Footnote 9 there have been extensive debates concerning the justification and proper scope of FRT adoption in the country. China has been working on developing a regulatory framework for FRT since 2020. Although this framework aimed to substantially enhance personal data protection, there have been increasing risks and challenges in protecting citizens' data in the FRT environment.

This chapter first introduces China’s legal framework regulating FRT and analyses the underlying problems. Although current laws and regulations have restricted the deployment of FRT under some circumstances, these restrictions may function poorly when the technology is installed by the government or when it is deployed for the purpose of protecting public security. We use two cases to illustrate this asymmetric regulatory model, which can be traced to systematic preferences that existed prior to recent legislative efforts advancing personal data protection. Based on these case studies and evaluation of relevant regulations, this chapter explains why China has developed this distinctive asymmetric regulatory model towards FRT specifically and personal data generally.

17.2 Regulating FRT in a Fishbowl Society

Given China’s over-arching national security drive built on a strong state-centric approach to data governance, its turn to strengthen personal information protection can be somewhat of a puzzle.Footnote 10 Heavy investment in FRT and the extensive use by the Chinese government in security applications often portray an invasively transparent ‘fishbowl society’ straight from Orwellian nightmares.Footnote 11 Although the move to more robust protection of personal information appears to conflict with this perception, China has provided an interesting example regarding how authoritarian states balance their digital surveillance and the protection of individuals’ personal data. The case of FRT regulations and their enforcement is a particular case to illustrate the challenges of maintaining this balance in China.

17.2.1 National Laws and Judicial Interpretations

As early as 2012, the Standing Committee of the Eleventh National People's Congress, which is China's top legislative authority, declared its determination to protect digital privacy and planned to legislate data protection principles, such as specific limitations on the collection of personal information and other necessary precautions to safeguard privacy.Footnote 12 The 2020 PRC Civil Code (the Civil Code) marked a major shift in the regulatory landscape for the protection of personal information, including biometric data.Footnote 13 Prior to the Civil Code, China had no laws regulating FRT. Piecemeal regulations on personal data protection were scattered mostly under laws addressing cyber-crime and cyber-security breaches.Footnote 14 The Civil Code dedicates a new chapter to Chinese privacy law and treats personal information as a basic civil right (the first clause declaring such a right appeared in the General Provisions of the Civil Law of 2017, an interim step towards the Civil Code).Footnote 15 Article 1035 of the Civil Code establishes general data protection principles, such as purpose and scope limitations as well as the requirement for informed consent by data subjects in processing personal information.Footnote 16

Following the Civil Code, the Supreme People's Court issued the Judicial Interpretation on the Regulation of FRT (the Judicial Interpretation) in 2021.Footnote 17 The Judicial Interpretation confirms that facial data falls within the scope of biometrically identifiable information, a type of personal information prescribed by Article 1034 of the Civil Code.Footnote 18 Article 2 of the Judicial Interpretation specifically forbids the use of the technology by 'information processors' in public spaces such as hotels, shopping malls, and airports, unless otherwise authorised by authorities.Footnote 19 Reflecting the widespread use of facial scanning for identity verification and authentication on residential and commercial properties, Article 10 forbids using FRT without individual consent.Footnote 20 The Judicial Interpretation also strengthened remedies for data subjects, including monetary damages and injunctive relief.Footnote 21 According to Article 5 of the Judicial Interpretation, liability can be exempted under some circumstances, such as on public security grounds.Footnote 22

Shortly afterwards, the Standing Committee of the National People's Congress passed the PRC Personal Information Protection Law (the PIPL), with a focus on the obligations and liabilities of 'personal information processors' (PIPs).Footnote 23 Article 33 stipulates that rules under the PIPL apply to state agencies as well.Footnote 24 Moreover, the PIPL treats biometric data as a type of 'sensitive personal information',Footnote 25 and the processing of such information is subject to a higher standard of protection. PIPs have to obtain independent 'opt-in' consent from data subjects to process such information and inform them of the necessity of the processing measures as well as the impact on their rights.Footnote 26 For individuals under the age of fourteen, such consent must be obtained from parents or statutory agents.Footnote 27 Notably, the law allows image collection and personal identification equipment in public places for the purpose of safeguarding public security.Footnote 28 This rule thus provides a legal basis for the security cameras widely deployed by the government.

Several local governments’ metropolises have since introduced regulations at provincial and municipal levels to target more narrowly defined scenarios of FRT applications, such as for identity verifications on residential properties.Footnote 29 The Municipal Government of Hangzhou, for example, amended its Regulation on Realty Management in 2020, limiting the compulsory collection and verification of biometric data such as facial information on residential and commercial properties.Footnote 30

17.2.2 Problems Underlying the Current Regulatory Framework

Although China has adopted many internationally recognised data protection principles in its domestic laws,Footnote 31 its laws, regulations, and practices regarding FRT and their impact on personal data protection are still controversial. While the consent of the data subject is required for another party's data collection, processing, and use, all these procedures can be omitted in the name of public security. A major challenge for personal data protection, in the context of deploying FRT for security purposes, is that the concept of public security does not seem to have any limit and can be interpreted quite expansively.

Take the hospitality industry, for example: although the Judicial Interpretation specifically forbids the deployment of FRT in places such as hotels, it allows 'laws and regulations' to override this rule for security reasons.Footnote 32 To enforce the real-name registration rules,Footnote 33 quite a few local governments have required hotels to verify the identity of their guests by deploying FRT systems connected to the police database and scanning guests' faces at check-in.Footnote 34 Although it is not clear whether the hotels have the legal right to process the facial data of their guests, local governments might take advantage of the vague language of the PIPL and infringe on personal data by interpreting the law in a less protective way. Article 13 of the PIPL allows data processing without the data subject's consent for the purpose of 'fulfilling legal responsibility or obligation'.Footnote 35 Local governments can easily argue that requiring hotels to implement FRT fulfils their 'legal responsibility or obligation' regarding real-name registration or sector-specific safety policies. This typical example demonstrates that many of the personal data protection mechanisms regarding FRT provided in the laws and the Judicial Interpretation could in reality function less effectively.

Another problem is the asymmetric regulation of FRT in the public and private sectors. Not only are government agencies more likely to be exempted from personal data liabilities on public security grounds, their liability for data breaches is also lighter than that of private parties. While a private party's data misuse would result in both civil and administrative liabilities,Footnote 36 Article 68 of the PIPL indicates that violation of personal data rights by the government only leads to administrative liabilities, which rely on self-correction measures conducted by state agencies.Footnote 37 Under this asymmetric framework, it is not surprising that administrative agencies may weigh their own convenience more heavily than personal data protection and thus use FRT in an unbalanced way. The technology has also been deployed to police individuals, including for minor misbehaviour such as jaywalking or wearing pyjamas in public places.Footnote 38 It is even reported that the government has used FRT on toilet paper dispensers installed in public toilets to fight off paper thieves.Footnote 39 During the COVID-19 pandemic, FRT was deployed comprehensively to verify identities and to monitor and control virus outbreaks on a regular basis.Footnote 40

17.3 Case Studies

In recent years, several FRT-related incidents have caught wide public attention and led to lively debates on the potential harm this technology brings to society.Footnote 41 The two most notable cases were both brought by law professors challenging the justification of FRT use in citizens' daily lives. Their outcomes, however, differed significantly. While one professor successfully convinced the court that enterprises could not unilaterally impose FRT on their consumers, the other failed to stop its pervasive use in Beijing metro stations.

17.3.1 The Hangzhou Safari Park

China had its first lawsuit concerning the commercial use of FRT in 2019.Footnote 42 Bing Guo, a law professor specialising in data protection law, sued Hangzhou Safari Park (HSP) for illegally imposing FRT-based access control after he purchased the annual pass.Footnote 43 The Fuyang District People’s Court in Hangzhou ruled that HSP breached its contract with Guo by unilaterally changing its entrance policy.Footnote 44 However, the court failed to find any data protection violation because the plaintiff agreed to take a photo when he purchased the pass.Footnote 45

In the second instance, the Hangzhou Intermediate People's Court took a view more favourable to the plaintiff regarding HSP's use of his facial data. The court explained that biometric information concerning facial characteristics was more sensitive than most other types of personal data.Footnote 46 Therefore, although there was no clear standard in the law regulating FRT at that time, the court held that HSP's use of this technology should be subject to greater scrutiny.Footnote 47 On this understanding, the court ruled on 9 April 2021 that HSP was liable for using the plaintiff's facial data in its FRT systems without his consent.Footnote 48

Some might believe that the political atmosphere was also favourable to Guo. While the Hangzhou Intermediate People's Court was hearing the case, the National People's Congress passed the Civil Code on 28 May 2020, with personal information protection as one of its salient points. China Central Television, the nation's largest state broadcaster, collaborated with China's Supreme People's Court and showcased this case as one of the ten benchmark cases of 2021.Footnote 49 Official publications by China's judiciary likewise praised the case as a sign of a progressive, more benevolent legal system.Footnote 50

Nevertheless, Guo himself was not satisfied with the judgment. He argued that the use of FRT by HSP was illegal per se,Footnote 51 but this viewpoint was not accepted by the court. Given how pervasive FRT is in China, agreeing with Guo may have been a step too far.

17.3.2 The Beijing Metro Station

In January 2022, Tsinghua law professor Dongyan Lao posted a long essay about China’s social and legal problems on Weibo – the Chinese equivalent of Twitter.Footnote 52 One thing Lao lamented was her failed attempt to prevent the use of FRT in Beijing’s subway stations.Footnote 53

When the Beijing Subway Limited Company proposed to implement FRT in its 'real-name-based passenger' system, Lao was among the first to object.Footnote 54 In 2019, Beijing's Rail Transit Control Centre, the administrative body responsible for underground transport in Beijing, announced a plan to enhance subway station security by building an FRT-based railway passenger classification system.Footnote 55 The Centre explained that this system would not only protect the public security of the Beijing subway, but also promote traffic efficiency.Footnote 56 The system was based on an AI-enabled facial image database that could push security alerts automatically to personnel on site and drastically lessen their workloads.Footnote 57

Shortly after the announcement, Lao openly expressed concerns regarding the over-intrusiveness of FRT in public venues and questioned the justification for this decision.Footnote 58 While China did not have any legislation regulating FRT at the time, Lao argued that the rail transit agency had no authority to make such a decision without conducting a public hearing.Footnote 59 In addition, Lao indicated that the system treated all passengers as potential criminals and therefore violated the presumption of innocence, which is fundamental to any modern criminal law system.Footnote 60 Shortly after this criticism, Lao's Weibo account was suspended and her posts were no longer available.Footnote 61

To Lao's dismay, although the Centre postponed the implementation of FRT for nearly two years, it began introducing the system in several stations in 2022.Footnote 62 The Centre compromised by adopting the FRT-based system on a voluntary basis: passengers could obtain an express pass by completing real-name registration and uploading their facial data.Footnote 63 The Beijing municipal government explained that the facial data was also linked to vaccination and testing results for the purpose of pandemic control, and announced in May 2022 that the system would be further linked to China's 'health code' – the mobile application used by Chinese people for mandatory checks on location data as well as COVID-19 testing reports.Footnote 64 Linking facial data to other types of sensitive personal information, such as records of one's geo-location, could constitute a form of highly aggregated data profiling. Information that does not seem to pose immediate harm may become far less innocuous once a person's social relationships and patterns of behaviour are revealed through an extended period of data collection and aggregation. This aggregation problem can lead to highly intrusive portrayals of an individual's intimate life, posing a unique threat to privacy. Lao's case reveals that the authorities can easily justify the use of FRT on public security grounds and that challenging the government's use of FRT faces insurmountable difficulties.

17.4 FRT in the Surveillance State

Although the Civil Code and PIPL have advanced personal data protection in China, Sections 17.2 and 17.3 have revealed that FRT used by the public sector has not been subject to much limitation. The government can always justify such use for the purpose of public security. This asymmetric regulatory model is rooted in China’s unique political economy and regulatory philosophy.

First, the asymmetric regulatory model has been hugely influenced by China's distinctive human rights values. The fundamentals of China's approach to human rights differ from those of the Western world, where human rights were designed from the beginning to protect individuals from state power.Footnote 65 China, by contrast, has viewed human rights as derived from the state, which reigns supreme over the individual.Footnote 66 Consequently, China's approach to human rights has been largely state-centric and emphasises individual responsibilities over individual rights.Footnote 67 Privacy is no exception. China's data protection philosophy is built on the view that data collection and analysis should be actively cultivated to boost state capacity to achieve a wide range of social governance objectives.Footnote 68 Although the law provides citizens with considerable protection for their data privacy, it also creates numerous opportunities for the government to infringe upon that privacy. This understanding explains why the public security interest, usually represented by the government, is always superior to personal data rights.

Second, Chinese law's tolerance of FRT is closely related to its real-name registration policy. While in many countries anonymity is an important instrument for promoting citizens' free speech and protecting them against government retribution,Footnote 69 the Chinese government has strictly enforced a nationwide 'real-name registration' policy to maintain social and political stability by eliminating digital anonymity.Footnote 70 Under this policy, Chinese authorities have since the early 2000s required users to register their real identities with internet and telecommunications service providers when using their services, through various authentication mechanisms, for easy traceability.Footnote 71 The wide adoption of FRT has been a natural development to streamline the enforcement of the real-name registration policy, because the technology has become the most efficient and effective identity verification technique.Footnote 72 Mobile users, for example, are required to register through facial scanning when buying new SIM cards.Footnote 73

Third, China is an unparalleled surveillance state that uses digital technologies extensively to maintain its regime. Personal data, including facial data, is a key resource for the Chinese government in implementing its ambitious national plans towards an algorithmically governed socialist state.Footnote 74 The collection and processing of facial data has become increasingly essential for the government to build an effective surveillance system and to carry out economic plans, such as the ambitious 'smart city' initiative.Footnote 75 According to a recent report analysing more than 100,000 government bidding documents from China, one FRT-based project in Fujian Province alone could produce more than 2.5 billion images to be stored by the police in the cloud at any given time.Footnote 76 Given the extensive integration of FRT in public infrastructures, it is unlikely that the Chinese judiciary and government would easily declare such use illegal or unjustified. Similarly, it would be too costly for legislators to roll back FRT deployments mandated by other branches of government.Footnote 77

17.5 Conclusion

With the enactment of the Civil Code and PIPL, China has substantially enhanced its personal data protection. Under these two laws and the Judicial Interpretation on FRT, facial data is defined as sensitive personal information, and the deployment of FRT is more tightly restricted. The HSP case represents the country's determination to prevent the over-use of facial data in the private sector. However, China still faces serious challenges regarding FRT-related personal data protection under its asymmetric regulatory framework. While the use of FRT is increasingly regulated in the country, the regulatory restrictions can invariably be lifted on public security grounds, and government agencies have claimed this exemption for their massive FRT deployments. Moreover, the liability for the government's abuse or misuse of personal data is quite insignificant compared with that for private parties. This asymmetric framework has resulted from China's unique human rights philosophy, its endeavour to enforce a real-name registration policy, and, more importantly, its determination to sustain a digital surveillance state.

18 Principled Regulation of Facial Recognition Technology
A View from Australia and New Zealand

Nessa Lynch and Liz Campbell
18.1 Introduction

Scholarly treatment of facial recognition technology (FRT) has focussed on human rights impacts,Footnote 1 with frequent calls for the prohibition of the technology.Footnote 2 While acknowledging the potentially detrimental and discriminatory ways in which the state may use FRT, this chapter seeks to advance discussion on what principled regulation of FRT might look like. It should be possible to prohibit or regulate unacceptable uses while retaining less hazardous ones.Footnote 3 In this chapter, we reflect on the principled use and regulation of FRT in the public sector, with a focus on Australia and Aotearoa New Zealand. We draw on our experiences as researchers in this area and from our professional involvement in oversight and regulatory mechanisms in these jurisdictions and elsewhere. Both countries have seen significant growth in the use of FRT, but regulation remains patchwork. In comparison with other jurisdictions, human rights protections and avenues for individual citizens to complain and seek redress remain insufficient in Australia and New Zealand.

A note on scope and terminology. In this chapter we concentrate on FRT use by the state or public sector – by which we mean government, police, and security use. Regulation of private sector use is a wider issue that is outside the scope of this chapter.

18.2 Context
18.2.1 What Is FRT?

FRT is a term used to describe a range of technologies involving the processing of a person's facial image.Footnote 4 A facial image is a biometric: a biological measurement or characteristic that can be used to identify an individual. Though a facial image may be collected from a distance, in public, and without the person's knowledge or consent, its collection remains an intrusion on the individual's privacy.Footnote 5 FRT may enhance and speed up existing human capabilities (such as finding an individual in video footage) or create new capabilities (such as purporting to detect the emotional states of people in crowds).

18.2.2 Contemporary Usage in the Public Sector in Australia and New Zealand Jurisdictions

FRT is a fast-growing technology, and it has many uses and potential uses in the public sector. In previous joint work we have canvassed the many usages of FRT across various sectors in New Zealand,Footnote 6 and discussed uses and potential uses in policing internationally and in New Zealand.Footnote 7 It is not possible here to review these uses in detail, but the main use-cases will be discussed briefly now.

First, the use of FRT is established in border security and immigration, exemplified by the SmartGate system widely in use at the Australian and New Zealand borders. The Australian Electronic Travel Authority may now be obtained by means of an app, using FRT. These use-cases fall principally into the 'verification' (one-to-one) category – comparing an individual's biometric template with another – but 'identification' (one-to-many) use-cases are also apparent.Footnote 8 Biometric data (including facial images) may be used to make or guide decisions.Footnote 9 Detection of identity fraud is the principal use-case.
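The one-to-one verification function can be made concrete with a short sketch. The Python fragment below is purely illustrative – the embedding size, similarity threshold, and function names are our own assumptions, not a description of SmartGate or any other deployed system. It shows the core operation: a single live capture is compared against a single enrolled template and accepted only if their similarity clears a threshold.

    # Illustrative sketch of one-to-one 'verification' (hypothetical values).
    # A live face embedding is compared against one enrolled template (e.g.,
    # read from a passport chip), not searched against a database.
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Cosine similarity between two face-embedding vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def verify(live: np.ndarray, enrolled: np.ndarray,
               threshold: float = 0.6) -> bool:
        """One-to-one check: is the traveller who the document says they are?"""
        return cosine_similarity(live, enrolled) >= threshold

    # Stand-in vectors; real systems use embeddings produced by a trained
    # face-recognition model.
    rng = np.random.default_rng(0)
    enrolled = rng.normal(size=128)
    live = enrolled + rng.normal(scale=0.1, size=128)  # same person, noisy capture
    print(verify(live, enrolled))  # True

The choice of threshold trades false accepts against false rejects, which is one reason why accuracy claims about FRT are meaningful only relative to a specified operating point.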

Second, there is security use by central government, local government, and policing authorities through camera networks in public spaces. For instance, police and councils in Perth and Melbourne use FRT to identify particular individuals,Footnote 10 and Adelaide is proposing to use FRT through its closed-circuit television (CCTV) network.Footnote 11

Third, FRT may be used in policing. Lynch and Chen's independent review of New Zealand Police's use and potential use of FRT found that current or imminently planned use was limited and relatively low risk, comprising authentication for access to devices such as iPhones, identity matching, and retrospective analysis of lawfully acquired footage in limited situations. There was no evidence that the police are using, or formally planning to use, live automated FRT. By contrast, police forces across Australia use live FRT as a means of preventing and investigating crime.Footnote 12 Facial images may also be submitted manually by a specified list of law enforcement, anti-corruption, and security agencies to the federal Identity Matching Services for a 'Face Identification Service matching request'. This service does not connect to live video feeds, such as CCTV, and is not available to private sector or local government authorities.Footnote 13

Fourth, facial recognition can be used for digital identity purposes to access certain government services online.Footnote 14 For instance, in Australia, signing in to a myGov account to access government services can be done through FRT.

18.2.3 A Spectrum of Impact on Individual and Collective Rights

The variety of use-cases for FRT means a spectrum of impact on individual and societal rights and interests. As we show through case studies below, FRT can impact rights and interests such as privacy (both individual and collective), freedom of association, lawful protest, freedom from discrimination, and fair trial rights.Footnote 15

As discussed earlier, it is vital to note that FRT has a range of use-cases, from consensual one-to-one identity verification (e.g., at the border) to widespread and intrusive live biometric tracking in public spaces. FRT can have many legitimate and socially acceptable uses, including speed and scale improvements in processing evidential footage, identity matching, security and entry controls, and digital identity.Footnote 16 Factors such as who is operating the system, what the purposes are, whether there is independent authorisation or oversight, whether the person has consented to the collection and processing of their facial image, and whether the benefits are proportionate to the impacts are all relevant in considering the appropriate uses of FRT.Footnote 17

18.2.4 Case Studies of Human Rights Impact

An example of the rights and interests engaged by live automated FRT (AFR) in a largely unregulated environment is the legal challenge to police use in Wales. AFR is being deployed by police forces across England and Wales, with the Metropolitan Police and South Wales Police (SWP) among others trialling AFR for both live surveillance and identity verification.Footnote 18 As in Australia and New Zealand, the Westminster Parliament has not introduced any specific laws relating to AFR; rather, the police maintain that common law and human rights principles, the Data Protection Act 2018, and the Surveillance Camera Code of Practice provide a valid legal basis.

In the first ever legal challenge to the use of AFR, a Mr Bridges (described as a civil liberties campaigner) challenged the legality of SWP's general use and two particular deployments of AFR, on the grounds that these were contrary to the Human Rights Act 1998 and data protection legislation, and that the decision to implement AFR was not taken in accordance with the Equality Act 2010.Footnote 19 The Divisional Court rejected this application.

On appeal, the Court of Appeal ruled that the Divisional Court had erred in finding that the measures were 'in accordance with the law'. The court engaged in a holistic analysis of whether the framework governing the SWP's use of live AFR was reasonably accessible and predictable in its application,Footnote 20 and sufficient to guard against 'overbroad discretion resulting in arbitrary, and thus disproportionate, interference with Convention rights'.Footnote 21 While the Court of Appeal rejected the argument that statutory authorisation was needed, it accepted that AFR requires greater safeguards than overt photography.Footnote 22 The legal framework gave too much discretion to individual officers to determine who was on the watchlist and where AFR could be deployed.Footnote 23 Moreover, the Court of Appeal held that the SWP had never had due regard to the need to eliminate discrimination on the basis of sex and race.Footnote 24

That said, the Court of Appeal held that the SWP's use of AFR was a proportionate interference with the right to privacy and family life under Article 8 of the European Convention on Human Rights, and as such was 'necessary' and 'in pursuit of a legitimate aim' under Article 8(2).

South Wales Police indicated that it would not appeal the Court of Appeal’s decision: ‘There is nothing in the Court of Appeal judgment that fundamentally undermines the use of facial recognition to protect the public. This judgment will only strengthen the work which is already underway to ensure that the operational policies we have in place can withstand robust legal challenge and public scrutiny.’Footnote 25

In this region, a key illustration of the privacy impacts is the use by Australian police of Clearview AI's facial recognition software.Footnote 26 Though there has not been a legal challenge in the courts here, the Office of the Australian Information Commissioner (OAIC) has investigated and made findings as to the use of this software. Clearview AI's technology operates by harvesting images from publicly available web sources and offering its technologies to government and law enforcement agencies.Footnote 27 From October 2019 until March 2020, Clearview AI offered free trials to the Australian Federal Police, Victoria Police, Queensland Police Service, and South Australia Police.Footnote 28 The revelation of this use came despite initial police denials.Footnote 29

In November 2021, following a joint investigation with the United Kingdom's Information Commissioner's Office, the OAIC found that Clearview AI had breached Australia's privacy laws through its practice of harvesting biometric information from the web and disclosing it through a facial recognition tool. In a summary released with its formal determination, the OAIC found that Clearview AI breached the Privacy Act 1988 (Cth) by:

  • collecting Australians’ sensitive information without consent;

  • collecting personal information by unfair means;

  • not taking reasonable steps to notify individuals of the collection of personal information;

  • not taking reasonable steps to ensure that personal information it disclosed was accurate, having regard to the purpose of disclosure;

  • not taking reasonable steps to implement practices, procedures, and systems to ensure compliance with the Australian Privacy Principles.Footnote 30

Following the investigation, Clearview AI blocked all requests for user accounts from Australia, and there is no evidence of Australian users of the technology since March 2020.Footnote 31 Further, the OAIC required that all scraped images and related content be destroyed as they breached the Privacy Act.Footnote 32 Subsequently, the OAIC determined that the Australian Federal Police failed to comply with its privacy obligations in using the Clearview AI facial recognition tool, and instructed the AFP to review and improve its practices, procedures, systems, and training in relation to privacy assessments.Footnote 33

18.3 Options for Principled Regulation

Despite the considerable impact on individual and collective rights and interests, there is no discrete law governing the use of FRT in either Australia or New Zealand. Patently, FRT can be subject to existing legislative regimes, such as privacy law and search and surveillance law, but unlike other forms of biometrics, such as fingerprints and DNA, the collection and processing of facial images remains largely unregulated.

In this section we canvass various options for principled regulation of FRT, at state and international level, with different degrees of specificity and latitude. These include proposals for domestic legislation, a case study of cross-national regulation, state-level principles, and self-governance.

18.3.1 Domestic Legislation

We favour the introduction of specific and tailored legislative provisions with an associated code of conduct to regulate the use of FRT by public entities. In March 2021, the Australian Human Rights Commission (AHRC) released its report Human Rights and Technology, which assesses the impact of FRT and biometric technology and makes the case for regulation.Footnote 34 The report recognises the potential human rights impacts arising from the use of these technologies, most obviously on the right to privacy.Footnote 35 To guard against this, the AHRC recommends that Commonwealth, state, and territory governments should:

Introduce legislation that regulates the use of facial recognition and other biometric technology. The legislation should:

  (a) expressly protect human rights

  (b) apply to the use of this technology in decision making that has a legal, or similarly significant, effect for individuals, or where there is a high risk to human rights, such as in policing and law enforcement

  (c) be developed through in-depth consultation with the community, industry, and expert bodies such as the Australian Human Rights Commission and the Office of the Australian Information Commissioner.Footnote 36

Until such reforms can be enacted, the AHRC recommends a moratorium on the use of facial recognition and biometric technologies that would fit within para. (a) above.Footnote 37

In September 2022, the newly formed Human Technology Institute, based at the University of Technology Sydney, released a report.Footnote 38 The report proposes reform to existing regulation around FRT and outlines a Model Law 'to foster innovation and enable the responsible use of FRT, while protecting against the risks posed to human rights'.Footnote 39 While the report recognises that FRT can be used consistently with international human rights law, 'FRT necessarily also engages, and often limits or restricts, a range of human rights'.Footnote 40

Reform of existing law dealing indirectly with FRT in Australia is needed because of the rapid development and deployment of FRT, which can extract, store, and process vast amounts of information. Australia has existing laws that apply to the deployment and use of FRT, including privacy laws that regulate the handling of biometric information, but 'on the whole, these existing laws are inadequate in addressing many of the risks associated with FRT'.Footnote 41

The report sets out the following purposes of the Model Law:

  • Uphold human rights

  • Apply a risk-based approach

  • Support compliance

  • Transparency in the use of FRT

  • Effective oversight and regulation

  • Accountability and redress

  • Jurisdictional compatibility.Footnote 42

The report discusses the human rights risks of FRT, including infringements on the right to privacy and intrusion into private life. Other concerns are raised in relation to the rights to equality and non-discrimination; here the report's authors note the Bridges case and the acknowledged discriminatory impact of FRT through inherently discriminatory algorithms. The potential of FRT to interfere with the right not to be subject to arbitrary arrest or detention and with the rights to equality before the law and to a fair trial are also considered.

The Model Law includes specific legal requirements for the deployment of FRT, including compliance with specific technical standardsFootnote 43 and specific privacy law requirements.Footnote 44 Importantly, the Model Law also contemplates assigning regulatory oversight to a body that has human rights expertise, specifically expertise in privacy rights. The report suggests that potential regulators could be the OAIC or the AHRC, but notes that whichever body is given regulatory responsibility must be provided with the financial and other resources necessary to fulfil its role adequately and sustainably over the long term.Footnote 45

The risks of a legislative gap are clear. Indeed, ClubsNSW (the representative body for registered clubs in New South Wales, NSW) announced its intention to proceed with the roll-out of FRT in all NSW pubs and clubs (it is already being used at about a hundred licensed venues) after the NSW government announced that it would not proceed with law reform on the regulation of FRT.Footnote 46

18.3.2 State-Level Principles and Guidance

In the absence of legislation, many jurisdictions worldwide have established state-level principles and guidance to regulate algorithm- and data-driven technologies such as FRT. New Zealand was the first country to establish standards for algorithm usage by government and public sector agencies.Footnote 47 The Algorithm Charter sets principles for public sector agencies that use algorithms to make or guide decisions, and agencies can commit to these principles publicly. The term 'algorithm' is undefined, with the focus on the impact of the decision made using the algorithm rather than on the complexity of the algorithm itself.

The Algorithm Charter requires transparency in algorithm use, respect for the Treaty partnership (with the Indigenous people of Aotearoa New Zealand), a focus on people, use of data that is fit for purpose, safeguarding of privacy, human rights, and ethics, and retention of oversight by human operators.Footnote 48 Also in New Zealand, the Government Chief Data Steward and the Privacy Commissioner have jointly issued guidelines for public sector use of data and analytics, with a similar emphasis on transparency, societal benefit, retaining human oversight, and focussing on people.Footnote 49

Principles and guidance of this nature are useful in setting high-level expectations and entrenching fundamental values, but they lack any regulatory enforcement mechanism. Unlike legislation, they cannot be used to respond to individual breaches of rights or to provide an objective mechanism for redress.

18.3.3 Cross-National Standards

The Artificial Intelligence Act (AI Act) is a nearly finalised European Union law that will introduce a common regulatory and legal framework for AI across all sectors (excluding the military) and all types of AI.Footnote 50 This is important because, like the General Data Protection Regulation (GDPR), the AI Act will have extra-territorial effect and immense influence on national laws, given the extent of the EU market. Technology suppliers are likely to align product design with these regulations even in non-EU countries. The Act seeks to do so through 'a balanced and proportionate horizontal regulatory approach to AI that is limited to the minimum necessary requirements to address the risks and problems linked to AI, without unduly constraining or hindering technological development or otherwise disproportionately increasing the cost of placing AI solutions on the market'.Footnote 51

AI is defined in the proposed AI Act in a two-stage model. First, it is defined in Article 3 somewhat generally by reference to the concept ‘artificial intelligence system’, which is ‘software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with’. Annex I lists the techniques as:

  • machine learning approaches, including supervised, unsupervised, and reinforcement learning, using a wide variety of methods, including deep learning;

  • logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deduction engines, (symbolic) reasoning systems, and expert systems; and

  • statistical approaches, Bayesian estimation, and search and optimisation methods.

Regulation of AI technologies under the proposed Act is based on a risk assessment model. This model is complex. Article 5(1)(d) bans 'real-time remote biometric identification systems in publicly accessible spaces for law-enforcement purposes' (and so would cover a Bridges-type scenario). However, the ban does not cover FRT used by law enforcement that is not real-time, or FRT used by other public or private entities, even though these uses equally pose a threat to fundamental human rights.Footnote 52 Nevertheless, the majority of FRT is classified as high-risk AI (save for emotion recognition systems), a classification that is updated in accordance with technological advances and that takes into account not only the technology itself, but also the use to which that technology may be put.Footnote 53

In a similar way to the GDPR, the proposed AI Act has a presumption prohibiting high-risk AI systems unless their use is subject to various requirements including a control and monitoring procedure and requirements to report serious incidents and malfunctions of these high-risk AI systems (Art. 6, Annex III). Conversely, those systems designated as being low-risk may be used without being subject to these requirements (Art. 52(2)).

A concern about the proposed AI Act in the EU is ‘its silence on the right to take legal action against suppliers or users of AI systems for non-compliance with its rules’.Footnote 54 Other concerns have been raised about the potential for conflicts between bodies and institutions set up to regulate AI under the proposed law.Footnote 55 Concerns have also been raised about the broadness of the definition of AI in the proposed law, such that it does not account for combinations of algorithms and data and potentially covers software not generally considered AI.Footnote 56 These are fair criticisms.

Notwithstanding these concerns about the proposed AI Act, it has been argued that the Act will have international significance. Indeed, Dan Svantesson argues that the Act will first have an impact in Australia in the same way that the GDPR impacts cross-border data flows, with the likelihood being that it will become the default international setting for dealing with AI, given the size of the EU market.Footnote 57 Second, and perhaps more substantially, the AI Act may also apply indirectly to Australian actors who operate within the EU market, such as by providing AI systems.Footnote 58 Also important is the potential for the AI Act to be utilised in law reform in Australia and New Zealand as the basis for progressing towards a regional approach to the regulation of AI.Footnote 59

At the time of writing, the AI Act has been voted on in the EU Parliament, and lawmakers are now negotiating to finalise the provisions of the new legislation, which could include revising definitions, the list of prohibited systems, and the parameters of obligations on suppliers.Footnote 60

On 12 May 2022, the European Data Protection Board adopted Guidelines 05/2022 on the use of FRT in the area of law enforcement (Guidelines 05/2022).Footnote 61 The Guidelines recognise that FRT 'may be used to automatically recognise individuals based on his/her face' and is 'often based on artificial intelligence such as machine learning technologies'.Footnote 62 For law enforcement agencies, Guidelines 05/2022 recognise that such technologies promise 'solutions to relatively new challenges such as investigations of big data, but also to known problems, in particular with regard to under-staffing and observation and search measures'.Footnote 63 The Guidelines recognise that the application of such technology by law enforcement agencies engages a number of human rights, including the right to respect for private and family life under Article 8 of the European Convention on Human Rights.Footnote 64 More broadly, the application of FRT by law enforcement will – and to some extent already does – have significant implications for individuals and groups of people, including minorities. The application of FRT is considerably prone to interfere with fundamental rights beyond the right to protection of personal data.Footnote 65

Turning to the technology, the Guidelines differentiate FRT from other biometric technology because FRT can fulfil two distinct functions, namely: (1) the verification of a person's identity, confirming that the person is who they claim to be (one-to-one verification); and (2) the identification of a person among a group of individuals, in a specific area, image, or database (one-to-many identification).Footnote 66 It is these unique functions to which FRT can be put, and the potential consequences of its use, that justify special regulation.
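The operational difference between the two functions can likewise be sketched in a few lines of code. The Python fragment below is a minimal illustration under assumed names and values – the threshold and the randomly generated stand-in templates are hypothetical, not drawn from the Guidelines. In one-to-many identification, a probe embedding is scored against every entry in a gallery (e.g., a watchlist), and the best match is returned only if it clears a threshold.

    # Illustrative sketch of one-to-many 'identification' (hypothetical values).
    import numpy as np

    def identify(probe, gallery, labels, threshold=0.6):
        """Return the best-matching identity, or None if no score clears the threshold."""
        # Normalise rows so that dot products equal cosine similarities.
        gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
        probe = probe / np.linalg.norm(probe)
        scores = gallery @ probe             # one similarity score per enrolled person
        best = int(np.argmax(scores))
        return labels[best] if scores[best] >= threshold else None

    rng = np.random.default_rng(1)
    gallery = rng.normal(size=(1000, 128))   # 1,000 enrolled face templates
    labels = [f"person_{i}" for i in range(1000)]
    probe = gallery[42] + rng.normal(scale=0.1, size=128)
    print(identify(probe, gallery, labels))  # person_42

Because every enrolled template is an additional opportunity for a false match, the expected number of false matches grows with the size of the database – one technical reason why one-to-many identification is generally regarded as the more intrusive and error-prone function.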

The Guidelines next summarise the applicable legal framework as a guide ‘for consideration when assessing future legislative and administrative measures as well as implementing existing legislation on a case-by-case basis that involve FRT’.Footnote 67

The remainder of the Guidelines contains a number of annexes; these include Annex II (practical guidance for managing FRT projects in law enforcement agencies) and Annex III (practical examples). These form a potential starting point for the development of law enforcement agency guidelines, including of the kind contemplated by the English and Welsh Court of Appeal in Bridges.

18.3.4 Self-Governance

In the absence of legislative or robust state-level regulation, some state actors have moved to establish self-regulation. In New Zealand, trials of an FRT application (Clearview AI) by a section of New Zealand Police in 2020 sparked a review of the use of such technology, owing to the adverse publicity generated and the lack of any firm legislative or regulatory regime to govern its use.

Initial Guidelines for the trial of emerging technology were published in September 2020, and the Police Manual Chapter was published in July 2022.Footnote 68 New Zealand Police are now required to seek advice from senior management even when responding to an offer from a technology company, and even when the new technology would only be explored in a non-operational test setting. Approval for any trial must go through a formal governance and risk assurance process. Submissions for approval are expected to address ethical and legal considerations, including public expectations and legal obligations surrounding the right to privacy.

However, the guidelines make no reference to human rights principles (such as freedom from discrimination, freedom of expression, and the right to peaceful protest).

In April 2023, New Zealand Police publicly released a stocktake of its technology capabilities. This extensive list details all such capabilities – from routine business procedures to state-of-the-art technologies.Footnote 69

Further, an independent review of FRT (carried out by one of the present authors with a co-author) investigated and reported on the use and potential use of FRT within New Zealand Police and made ten recommendations, which were accepted by the leadership.Footnote 70 These included commitments to continue the pause on any consideration of live automated FRT, ensure continuous governance and oversight of FRT deployment, implement guidelines for access to third-party systems, embed a culture of ethical data use in the organisation, and implement a system for ongoing horizon scanning.

Again, in the absence of a state-level regulatory mechanism, New Zealand Police has established an expert panel composed of members with expertise in technology, governance, assurance, criminal law, and Te Ao Māori. This panel's role is 'to provide advice and oversight from an ethical and policy perspective of emergent technologies'.Footnote 71

In another example of self-regulation, Scotland has a moratorium on live AFR in policing. While Police Scotland’s strategy document Policing 2026 included a proposal to introduce AFR,Footnote 72 a Scottish parliamentary committee was critical of this owing to its discriminatory implications, lack of justification for its need, and its radical departure from the principle of policing by consent.Footnote 73 Police Scotland responded that the force was not using live FRT currently and that it would ensure safeguards were in place prior to doing so; it was agreed that the impact of its use should be fully understood before it was introduced.Footnote 74

These decisions by police organisations to self-regulate the use of technology are probably driven as much by perceptions of social licence and public attitudes as by principle. They demonstrate again that state-level regulation is required to provide an objective and transparent standard, with mechanisms for redress.

18.3.5 A Robust Regulator

Any regulation of FRT must be accompanied by a robust regulator.

A case study of a regulator in a comparable jurisdiction is the Scottish Biometrics Commissioner, who has established a Code of Practice for the use of biometric data (encompassing facial images) in policing. Scottish law defines biometric data as 'information about an individual's physical, biological, physiological or behavioural characteristics which is capable of being used, on its own or in combination with other information … to establish the identity of an individual'.Footnote 75

The purposes of the Scottish Biometrics Commissioner are to review law, policy, and practice relating to the collection, retention, use, and disposal of biometric data by Police Scotland; to keep the public informed and aware of powers and duties related to biometric data (e.g., how the powers are used and monitored, and how the public can challenge the exercise of these powers); and to monitor the impact of the Code of Practice and raise awareness of the Code.

As another example, the AHRC report cited earlier argues that the rise of AI technology (including FRT) provides an important moment to develop standards and apply regulation in a way that supports innovation while also addressing the risk of human rights harm.Footnote 76 To this end, the AHRC recommends the establishment of an AI Safety Commissioner in Australia 'to support regulators, policy makers, government and business [to] apply laws and other standards in respect of AI-informed decision making'.Footnote 77

18.4 Conclusion

While biometric technologies such as FRT have become more prevalent and more complex, and are being utilised in increasingly diverse situations, the legislation, regulation, and frameworks needed to guide their ethical use are less well developed.

This chapter has demonstrated how state agencies, particularly in policing and security services in New Zealand and Australia, have a broad discretion as to their use of FRT.

We suggest that FRT should be used only when predicated upon explicit statutory authorisation and following appropriate ethical review.Footnote 78

Principled regulation should comprise a national statutory framework with a concomitant code of practice. Moreover, we recommend independent approval and oversight of the proportionality and necessity of operations. Jurisdictions should have a robust regulator, with the Scottish Biometrics Commissioner being a good example.

19 Morocco's Governance of Cities and Borders
AI-Enhanced Surveillance, Facial Recognition, and Human Rights

Sylvia I. Bergh, Issam Cherrat, Francesco Colin, Katharina Natter, and Ben Wagner
19.1 IntroductionFootnote *

Owing to advances in artificial intelligence (AI), such as computer vision and facial recognition, digital surveillance technologies are becoming cheaper and easier to use as everyday tools of governance worldwide.Footnote 1 Typically developed by companies and governments in the Global North and tested in the Global South or on the 'periphery' of powerful actors,Footnote 2 they are becoming key tools of governance in both democratic and authoritarian contexts.Footnote 3 As the AI Global Surveillance Index shows,Footnote 4 countries with authoritarian systems and low levels of political rights are investing particularly heavily in AI surveillance techniques such as advanced analytic systems, facial recognition cameras, and sophisticated monitoring capabilities.Footnote 5

AI surveillance offers governments two major capabilities. First, it allows regimes to automate many tracking and monitoring functions formerly delegated to human operators. This brings cost efficiencies, decreases reliance on security forces, and overrides potential principal–agent loyalty problems.Footnote 6 Second, as AI systems never tire, AI technology can cast a much wider surveillance net than traditional control methods. As Feldstein points out, 'this creates a substantial "chilling effect" even without resorting to physical violence as citizens never know if an automated bot is monitoring their text messages, reading their social media posts, or geotracking their movements around town'.Footnote 7

Some scholars have observed the radical interdependence of the global AI development ecosystem, as only a few countries can afford to build their own local AI ecosystems.Footnote 8 For example, China is a major supplier of AI surveillance, with Huawei alone providing technology to at least fifty countries. France, Germany, Japan, and the United States are also major players in this sector.Footnote 9 As a consequence, the governance challenges around AI-enhanced technologies are inherently trans-national.

Indeed, the rise of AI accentuates several existing challenges for human rights law around (digital) technology. For example, such technology obscures the identity of the violator and makes violations themselves less visible. This makes it much harder for citizens to hold duty bearers accountable.Footnote 10 It is also becoming much less clear whom citizens should try to hold to account in the first place. The current framework for addressing human rights harms inflicted by business entities is built on the distinction between public authority exercised by the state (which gives rise to a binding obligation to respect and protect rights) and private authority exercised by a company (which gives rise to a moral responsibility to respect rights). However, the distinction between the public and private spheres is becoming increasingly blurred, and as a result, it is less clear how human rights law applies.Footnote 11 Instead, citizens must rely on states to take seriously their duty to protect individuals from harms by non-state actors, for example by requiring private companies to institutionalise the practice of technology risk and impact assessments.Footnote 12 This is clearly a formidable challenge in liberal democratic countries, let alone in authoritarian ones such as Morocco.

In this chapter, we focus on the role played by AI-enhanced surveillance tools in Morocco’s governance of cities and borders. We ask to what extent AI technologies are deployed in Morocco, and how they could reshape existing modes of public governance. We address these questions in two areas: urban surveillance and the control of migration at the Moroccan–Spanish border. We focus on the use of facial recognition technologies (FRT) in AI-enhanced cameras in particular, but we also address other technologies and other uses of AI, following a pragmatic approach that investigated where it was possible to access data. Indeed, AI surveillance is not a stand-alone instrument of repression, but complements existing forms of repression. As Feldstein observes, ‘it forms part of a suite of digital repression tools – information and communications technologies used to surveil, intimidate, coerce, and harass opponents in order to inflict a penalty on a target and deter specific activities or beliefs that challenge the state’.Footnote 13

The chapter is structured as follows. First, we outline the legal framework and governance context around Morocco’s use of AI technologies for urban and border surveillance. We then discuss our methodological approach, including some of the key limitations we faced during the research, before sharing our findings with respect to the use of FRT in the governance of cities and borders, respectively. Subsequently, we discuss AI-enhanced surveillance as an intrinsically transnational challenge in which private interests of economic gain and public interests of national security collide with citizens’ human rights across the Global North/Global South divide. We also reflect on the challenges and opportunities of monitoring human rights in the face of increasing deployment of AI-enhanced technologies in authoritarian governance.

19.2 The Legal Framework and Governance Context in Morocco

The Moroccan governance system has been described as ‘an entrenched neo-authoritarian system’.Footnote 14 Over the past decades, the monarchy has repeatedly weakened the political opposition by co-opting major parties into government. Human rights violations, lack of press freedom, and the harassment of human rights non-governmental organisations (NGOs) persist. However, while these deficiencies have attracted the attention of human rights organisations and press freedom watchdogs, they have not been properly taken up by inter-governmental actors. Quite the contrary: in the wider regional context, Morocco’s political stability has been viewed as an asset and is likely to become even more valuable (to the EU and United States), further insulating the regime from critiques of its civil and human rights records.Footnote 15

At the same time, Morocco is one of the highest performers in e-governance in Africa.Footnote 16 Morocco has more than 27 million internet users, or 75 per cent of its population,Footnote 17 and ranks high in the UN's 2016 E-Government Survey in terms of e-participation, e-consultation, and online service delivery, as well as in its E-Government Development Index, a composite indicator used to measure the willingness and capacity of national administrations to use information and communications technologies to deliver public services. Indeed, Morocco's new development model focusses on consolidating technological added value, and public administrations are increasingly making use of algorithms in online public services.Footnote 18 The combination of authoritarian rule and advanced use of e-government makes Morocco a particularly interesting case for studying the role of AI technologies in public governance.

At first glance, Morocco's legal framework around privacy seems robust. The constitution contains an explicit protection of the right to privacy (Art. 24), and there is a data protection law (Law no 09-08, promulgated in February 2009) and a data protection agency, the Commission nationale de contrôle de la protection des données à caractère personnel (CNDP). In addition, Morocco is a signatory of a number of treaties with privacy implications, including the Council of Europe's Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data and its additional Protocols.Footnote 19 Furthermore, in 2018, Morocco joined the Open Government Partnership, an inter-governmental organisation promoting government transparency and citizen participation, and in 2021 it submitted its second two-year action plan, including twenty-two commitments that span a wide range of participatory, accountable, and transparent governance areas.Footnote 20 Since 2010, the security sector has also been put on a clearer legal footing: private security was regulated, and state security services were given statutes and mandated to better respect human rights, thanks in no small measure to the efforts of human rights organisations.Footnote 21

Yet, as HagmannFootnote 22 points out, some laws still lack implementation decrees. In addition, clientelist and political interests continue to influence whether and how penal provisions are implemented, human rights abuses investigated and reprimanded, or demonstrations and NGOs authorised and banned. Indeed, Privacy International notes that ‘there remains vast grey areas regarding the discretionary powers offered to judges and intelligence agencies’ when it comes to rules around legitimate breaches of individual privacy.Footnote 23 This situation is worsened by the fact that the judiciary is not independent and that public scrutiny and democratic oversight over the work of intelligence services is lacking. In addition, the law on the protection of personal data (09–08) does not cover those data collected in the interest of national defence or the interior or exterior security of the state.Footnote 24

Against the backdrop of this legal framework, technological tools are already integrated into everyday authoritarian governance and surveillance, especially at the urban level. Traditionally based on a wide network of informants (car guards, local shop owners, informal vendors and beggars, etc.), mass surveillance is evolving through the use of technology. Phone tapping is commonly used to listen in on conversations, and more refined surveillance tools have also been employed. For example, during the regional and local elections of September 2015, 30,000 mobile phone lines of candidates and regional or provincial party officials, in addition to local government officials and others, were reportedly tapped at the request of the Ministry of Interior.Footnote 25 Another example is the Moroccan government's use, at least since October 2017, of Pegasus spyware produced by the Israeli firm NSO Group to surveil and attack human rights defenders.Footnote 26 The general impression is that these technologies have enabled a more 'surgical' approach to the repression of dissent, one that systematically targets key figuresFootnote 27 instead of the population as a whole.

Israeli companies are not the only providers of surveillance technology to Moroccan authorities. In 2015, a leak confirmed that Morocco had bought the technology of the Italian spyware company Hacking Team.Footnote 28 In June 2017, an investigation by BBC Arabic and the Danish newspaper Dagbladet revealed that the UK defence firm BAE Systems had sold mass surveillance technologies – called Evident – through its Danish subsidiary ETI to six Middle Eastern governments, including Morocco.Footnote 29 There are also concerns that the European Neighbourhood Instrument may have been used to fund the training of Moroccan authorities in 'telephone tapping and video recordings' and 'special investigation techniques for electronic surveillance'.Footnote 30 More recently, there have been plausible but unconfirmed reports that the Moroccan police used COVID-19 mobile passport application check-ins to track the movements of citizens and identify those who disobeyed the rules of the COVID-19 state of emergency.Footnote 31

In terms of surveillance in public spaces, protests still see a heavy deployment of security forces. Violence and arrests of demonstrators are still common and recent research shows how the strategic use of violence to clamp down on protest events can serve as a tool for regime survival.Footnote 32 However, whether and to what extent AI-enhanced technologies are used to respond to and control these events remains unclear. So far, most information on the use of AI-enhanced surveillance of public spaces has come from news reporting on the business arrangements and calls for tenders – either leaked or public – concerning the development of these technologies. This includes a series of articles detailing the deployment of video-surveillance technologies in Al Hoceïma, Agadir, Casablanca, Marrakech, and Meknes.Footnote 33 In July 2022, the weekly magazine TelQuel also published an interview with a high-level police officer about the potential improvement achieved by the use of drones and AI in Casablanca, which is the only example of a public statement by a government official on the matter.Footnote 34

With regard to border control, AI-enhanced technologies have been introduced by countries around the world not only to deter or stop irregular migration by surveilling borders, but also to serve as systems for tracking, controlling, and accelerating cross-border mobility more generally. In Morocco, for instance, this has resulted in the mounting of facial recognition cameras and the procurement of thermal imaging cameras at its borders with the two Spanish enclave cities of Ceuta and Melilla, largely funded by the EU.Footnote 35 These borders are regularly mediatised as the main entry points for Sub-Saharan African migrants into Spain (and thus the EU), but they also experience a high daily flow of visitors and cross-border workers throughout the year. For Spain, controlling and preventing irregular migration into Ceuta and Melilla has been a hot topic since the 1990s, while Morocco considers Ceuta and Melilla to be cities colonised by Spain, and they are therefore a regular cause of diplomatic crisis between the Moroccan and Spanish governments.Footnote 36 Spanish media regularly accuse Morocco of attempting to put pressure on Spain by not effectively controlling the borders, while Morocco invokes its unwillingness to play a gendarme role and calls for an integrated and participative approach to the issue, including financial support from the EU.Footnote 37

Managing the Spanish–Moroccan border is certainly big business, given the considerable budgets allocated by the EU to 'fight' irregular migration and to 'protect' Ceuta and Melilla.Footnote 38 For example, Indra Sistemas, a Spanish information technology and defence systems company, received at least 26.6 million euros across forty public contracts (twenty-eight without public tender), mostly from the Spanish Ministries of the Interior and Defence, for migration control tasks including the maintenance of the Integrated External Surveillance System (SIVE) of the Civil Guard, the installation of radars on the southern border, facial recognition at border posts, and the integration of the new 'intelligent borders' system.Footnote 39 The French technology giant Atos also obtained at least twenty-six contracts from the Spanish Ministry of the Interior from 2014 to 2019, totalling more than 18.7 million euros, to repair and supply equipment for the SIVE. Similarly, from 2014 to 2019, the Government of Spain awarded the French company Thales at least eleven migration control contracts (3.8 million euros in total), most of them to supply night vision systems and their respective maintenance services.Footnote 40 The most recent deal, worth 4.8 million euros and concerning the procurement of thermal surveillance cameras for the Moroccan Ministry of Interior, was concluded between the Spanish defence equipment company Etel 88 and the Spanish development agency FIIAPP.Footnote 41

In co-operation with mostly European tech companies, both Moroccan cities and Morocco’s northern border with Spain have thus seen the increasing deployment of and reliance on AI-enhanced technologies as governance tools. While Morocco’s legal framework around privacy and data protection has been upgraded over the last decade, its limited implementation and the vast leverage of security and legal actors in interpreting the law raise a host of challenges at the intersection of AI technologies and human rights. In the next section, we outline how we methodologically approached our research on urban and border surveillance, as well as some of the key limitations we faced.

19.3 Methodology

Francesco Colin and Issam Cherrat conducted the fieldwork for the urban and border surveillance cases, respectively, during the period March to August 2022. The fieldwork relied on extensive desk reviews covering published and unpublished materials on the use of technology in urban and border governance: academic studies, grey literature from NGOs and state institutions, and the available press coverage in French and English.

The other main component of the fieldwork was a set of nine semi-structured interviews on urban surveillance and twenty interviews on border surveillance with key stakeholders. Given the scarcity of information on the topic of the research, as well as the difficulty in accessing knowledgeable actors, we relied on snowball sampling. The interviewees included scholars, journalists, public officials, police officers, and civil society actors. Great care was taken during the research process to ensure the safety and anonymity of the interviewees and researchers. All interviewees were informed orally about the scope of the research and gave their consent prior to the interviews. Only two interviewees in the urban use case granted their permission to be recorded, while all of them asked to be quoted anonymously in the research outputs. Six interviewees contacted for the case study on border governance declined to have their interview mentioned in the study.

The study was severely limited by the broader security context in which the research was carried out. The sensitive nature of the topic – related to matters of national security and territorial integrity – as well as its relatively novel application in the Moroccan context made it complicated to access information and interviewees. All data revolving around surveillance practices is perceived to be the exclusive competence of Morocco’s security apparatus, and thus represents a subject on which one simply cannot ask too many questions. The limited information on the use of these technologies was extracted from calls for tender (CFT), to the extent that they were in the public domain. Such documents become accessible only when they are leaked to the press or obtained through privileged sources. In addition, many of these calls were still ongoing (or had been re-issued) at the time of writing. Finally, it was not possible to acquire first-hand information on the actual functioning of these technologies. Our attempts to establish direct contact with the staff of the intelligence services were unsuccessful, and none of the interviewees had direct knowledge of how these technologies are employed on the ground.

In addition to the issue of access, the general climate of repression (and associated fears of reprisals for speaking out) limited the fieldwork on both urban and border surveillance. Despite the precautions taken by the researchers, such as the exclusive use of secure platforms for communication, interviewees measured their words carefully when speaking about surveillance technologies. Representatives of private companies engaged in the deployment of these technologies were unwilling to participate in the research, saying that they did not want to jeopardise their relationship with the General Directorate for National Security (DGSN), the national police force, in the event of future tenders. In the case of border controls, several stakeholders refused to be interviewed once they learned about the topic, and access to information from government institutions was denied.

19.4 AI-Enhanced Technologies in Moroccan Cities

Across Moroccan cities, pilot projects with AI-enhanced technologies are being developed for a plethora of applications – such as traffic management, monitoring of air quality, and energy efficiency, but also irrigation and waste collection.Footnote 42 However, generally speaking, in Morocco ‘there is still very little actual AI in smart cities’.Footnote 43 As we noted earlier, most of the available data on the procurement of AI-enhanced technology such as facial recognition cameras is based on CFT documents, but information on its actual deployment, functioning, and use is extremely scarce.

Based on the desk review, it was possible to develop Table 19.1, which provides a schematic summary of the technology deployed in the urban context in Morocco.

Table 19.1 Review of video-surveillance technologies in Moroccan citiesFootnote a

City: Al Hoceïma
Technology deployment: Existing installation: no cameras installed. Future projects: 60 cameras in ‘strategic areas’ of the city.
Main stakeholders involved: Tender managed by the Agence pour la promotion et développement du Nord (SDL).

City: Agadir
Technology deployment: Existing installation: no cameras installed. Future projects: 220 video-surveillance cameras to be installed and a new HQ to manage them.
Main stakeholders involved: Tender launched by the Agadir Souss Massa Aménagement (SDL) and awarded to TPF Ingénierie (France). The separate tender for the HQ has yet to be awarded.

City: Casablanca
Technology deployment: Existing installation: 60 cameras deployed in 2015; 500 cameras deployed in 2016; 150 cameras deployed in 2017. Future projects: 577 new cameras currently under CFT; 2 drones; new HQ to control operations.
Main stakeholders involved: Tender co-ordinated by CasaTransport (SDL), on behalf of the DGSN. The companies Tactys and CeRyX (France) developed the technical elements; the tender was unsuccessful in early 2022.

City: Fez
Technology deployment: Existing installation: exact number unclear; sources report ‘a hundred cameras in the main arteries of the city’.Footnote b Future projects: no information on future projects.
Main stakeholders involved: Project co-ordinated by the DGSN. Cameras installed by Sphinx Electric (Morocco) in 2018.

City: Marrakech
Technology deployment: Existing installation: no information available. Future projects: 223 new cameras in the old medina and a new data centre to be installed in Jamaâ el Fna square.
Main stakeholders involved: Tender co-ordinated by Al Omrane and awarded to Sphinx Electric (Morocco) in February 2022.

City: Meknes
Technology deployment: Existing installation: no cameras installed. Future projects: new video surveillance system (2021).
Main stakeholders involved: Project launched by the municipality.

City: Rabat
Technology deployment: Existing installation: no information available. Future projects: video-surveillance of the forest ring road surrounding the city (‘Ceinture verte’).
Main stakeholders involved: Tenders managed by Rabat Région Aménagement. The new project has yet to be launched officially.

City: Tangier
Technology deployment: Existing installation: 200 cameras installed. Future projects: no information on future projects.
Main stakeholders involved: Project co-ordinated by the DGSN. Cameras deployed by Cires Technology (France).

a Last updated: 19 August 2022.

b Anaïs Lefébure and Mehdi Mahmoud, ‘Casablanca, Marrakech, Dakhla … Nos villes sous haute surveillance?’ (2 July 2021), Tel Quel, https://telquel.ma/2021/07/02/casablanca-marrakech-dakhla-nos-villes-bientot-sous-haute-surveillance_1727723.

Source: Compilation by Francesco Colin from multiple sources.

Although Table 19.1 shows an impressive deployment of technology, especially for the city of Casablanca, these numbers still pale in comparison with other countries: while Casablanca has ‘only’ 0.74 cameras per 1,000 people, in the ten most surveilled cities of the world this ratio ranges from 8.77 to 62.52 cameras per 1,000 people.Footnote 44 Moreover, although the absolute lack of transparency surrounding these projects does not allow us to trace a clear timeline, we know that the deployment of high-tech surveillance technology accelerated after the 2011 bombing at the Café Argana in Marrakech, as the attacked area was supposed to be covered by video surveillance but the cameras were apparently not working.Footnote 45 Furthermore, Table 19.1 shows that the current wave of projects represents an extension of past deployments in some cities (Casablanca, Marrakech, Rabat) and the creation of new installations in others (Al Hoceïma, Agadir, Meknes). In any case, interviewees generally agree that these projects only represent the beginning, rather than the end, of such endeavours.Footnote 46
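For clarity, the camera-density metric used here is a simple ratio: cameras per 1,000 people equals the number of cameras divided by the population, multiplied by 1,000. The short sketch below merely illustrates the computation; its figures are round-number placeholders, not sourced data about any particular city.

```python
def cameras_per_1000(cameras: int, population: int) -> float:
    """Camera density expressed as cameras per 1,000 inhabitants."""
    return cameras / population * 1000

# Illustrative figures only: 740 cameras in a city of one million
# inhabitants yield a density of 0.74 cameras per 1,000 people.
print(cameras_per_1000(740, 1_000_000))  # 0.74
```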

However, there is no information available on the concrete way in which AI is employed in the analysis of images captured through these cameras. As one interviewee put it, ‘we know there is a computer at the central police station in Rabat, but god knows what goes on there’.Footnote 47 CFT documents provide some useful information here: the CFT for the expansion of the surveillance system in the city of Casablanca runs to 389 pages and outlines the type of technology that needs to be provided, as well as its potential. It specifies that the cameras must be able to perform facial recognition tasks, that is, to identify a target against a picture or a recorded photo, both in real time and on recorded footage (p. 12 of the CFT’s Annex).Footnote 48 The CFT for the new video surveillance project in the city of Al Hoceïma provides more details in terms of the cameras’ (desired) capabilities: to compare the data collected via video surveillance against existing databases of pictures, to easily add pictures to databases based on a live video feed, and to search for a specific person through an image added to the system.Footnote 49 The capacity to identify a target on the basis of an image is the central function of face recognition systems in these contexts.Footnote 50 However, the role of human security agents on the ground remains substantial: they would need to perform dynamic search functions (based on video metadata and on visual inputs), compile reports based on different data types and sources, and ensure co-ordination with police intervention on the ground.
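To make the workflow described in these tender documents more concrete, the following is a minimal illustrative sketch of how a one-to-many (‘1:N’) face identification search generally operates: a probe image is reduced to a numerical embedding and compared against a database of enrolled embeddings. Everything in the sketch – the names, the similarity threshold, and the commented-out embed() encoder – is a hypothetical placeholder; the systems actually tendered in Casablanca and Al Hoceïma are proprietary, and their internal design is not publicly documented.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in the range [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, database: dict, threshold: float = 0.6) -> list:
    """1:N search: rank enrolled identities by similarity to the probe.

    Returns the candidates whose similarity meets the threshold, best
    match first; an empty list means 'no match in the database'.
    """
    scores = [(identity, cosine_similarity(probe, embedding))
              for identity, embedding in database.items()]
    return sorted((s for s in scores if s[1] >= threshold),
                  key=lambda s: s[1], reverse=True)

# Enrolment from a live feed, as described in the Al Hoceïma CFT, would
# amount to adding a new embedding to the database:
# database['new_target'] = embed(frame)  # embed() is a hypothetical face encoder
```

In such a design, the similarity threshold is the critical operational parameter: set it too low and the system produces false matches; set it too high and it misses genuine ones. This trade-off is central to the broader debates on FRT accuracy.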

In line with discursive shifts in the global surveillance trade,Footnote 51 the massive investment by Moroccan cities in AI-enhanced surveillance technology is presented as a shift from the ‘smart’ to the ‘safe’ city. Official discourse stresses the physical security purposes of the systems, such as catching accidents and thefts on camera. However, a small but increasing number of Moroccan civil society actors are raising concerns about the consequences of mass transmission of personal data to government entities in terms of privacy and individual liberties.Footnote 52 In addition, our study found that the legal framework attributes all control of local security issues to the local representatives of the Ministry of the Interior, rather than to elected local governments, limiting public oversight and accountability.

19.5 AI-Enhanced Technologies at Moroccan Borders

In contrast with the opaque situation surrounding urban surveillance, it is known that the borders between Morocco and the two Spanish enclaves Ceuta and Melilla are progressively being transformed into ‘smart’ borders through the increasing deployment of AI-enhanced cameras and FRT. Yet border control management is surrounded by secrecy in Morocco. Topics related to national security are treated with suspicion, and only fragmentary pieces of information are leaked to the press. It is almost impossible to know which company has won a tender to install cameras or similar equipment on the borders between Morocco and Spain, or to supply the FRT cameras used in Moroccan airports.Footnote 53 In the words of an interviewee who did not want to be listed, ‘as a police officer, we do not ask about these things, I guess we do not have the right even, we are simply trained to use new technologies when they are deployed’. Therefore, for this section on the use of FRT at the Moroccan–Spanish border, we are relying on information from the Spanish side.

According to the Spanish Minister of the Interior’s declaration in March 2022, Spain has modernised its entire technological systems at the border posts in Beni Enzar, Melilla, and in El Tarajal, Ceuta.Footnote 54 This modernisation consists of the implementation of a fast-track system for cross-border workers, the installation of fifty-two posts for greater agility in the passage of people, and sixteen registration kiosks for the control and collection of biometric data. Furthermore, up to thirty-five cameras equipped with facial recognition systems are currently being installed between the entry and exit points of the borders of Ceuta and Melilla. The project is based on an entry control system with FRT in which, in addition to the thirty-five cameras, there are four micro-domes,Footnote 55 and a software platform to host the Live Face Identification System for the control of the closed-circuit television system. It is implemented by the company Gunnebo Iberia, a subsidiary of the Swedish world leader in security products, and Thales Spain, a subsidiary of the French technological multinational dedicated to the development of information systems for the aerospace, defence, and security markets.

Overall, the use of AI-enhanced cameras at the Ceuta and Melilla borders aims to shorten border control processing, enhance security at the crossings, and increase control over people and goods entering and exiting the border. The main problem at the Tarajal entry gate (Ceuta), for example, was that the poor existing infrastructure made it impossible to control the waiting line and to systematically track who enters and leaves through this passage. It also allowed for the smuggling of illegal goods and made it difficult to track whether minors entered Ceuta irregularly. The main objective of the deployment of new technologies is thus to monitor the number of people who enter and leave and to detect the number of people who do not return after a period of time. The technology used allows for flexible mobile facial scanning, that is, the inspection of people inside cars, trucks, buses, and on motorcycles or bicycles.Footnote 56 It will also allow the implementation of ‘black lists’ at border control, displaying the personal information of an individual transiting through the border if they are registered on such a list. With this technology, it is expected that some 40,000 facial readings per day can be carried out in Ceuta and 85,000 in Melilla.
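The stated objectives – logging who enters and leaves, flagging listed individuals, and detecting people who do not return after a period of time – amount, in data-structure terms, to an entry/exit ledger keyed on identity. The sketch below illustrates only this underlying logic; the thirty-day window, the identifiers, and the watch-list mechanics are hypothetical placeholders, since the actual rules of the deployed system have not been made public.

```python
from datetime import datetime, timedelta

OVERSTAY_LIMIT = timedelta(days=30)  # hypothetical review window

entries: dict = {}        # open crossings: identity -> entry timestamp
watch_list: set = set()   # identities flagged for an alert at the border

def record_entry(identity: str, when: datetime) -> bool:
    """Log an entry and report whether the person is on the watch list."""
    entries[identity] = when
    return identity in watch_list

def record_exit(identity: str) -> None:
    """Close the crossing when the person leaves again."""
    entries.pop(identity, None)

def overstays(now: datetime) -> list:
    """Identities that entered but have not left within the limit."""
    return [identity for identity, when in entries.items()
            if now - when > OVERSTAY_LIMIT]
```

Even this toy version shows that such a system must retain personal records for as long as a person remains inside the territory, a retention question that sits at the heart of the proportionality concerns raised below.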

Civil society actors have already drawn attention to the risks inherent in the use of these technologies at the Ceuta and Melilla borders: more than forty Spanish organisations and associations signed a statement rejecting the ‘smart borders’ project.Footnote 57 They emphasised that the project’s ambition to ‘exercise greater security control through the use of artificial intelligence, by collecting biometrics, such as facial recognition, fingerprints, […] poses a risk of violating human rights’.Footnote 58 They particularly highlighted that ‘the collection of biometric data for people who do not have a European passport is not in accordance with the principle of proportionality’.Footnote 59 Indeed, as another civil society association highlighted, the Spanish–Moroccan borders risk being turned ‘into a laboratory for security practices’. They argue that, with regard to the right to data privacy, ‘this will not happen at other borders such as Barajas-Madrid airport and will not happen with European citizens, but will happen at the borders where migrants cross in a state of extreme poverty, and it will happen with populations suffering from racism’.Footnote 60

In terms of migration control, the deployment of FRT at the Moroccan–Spanish border is probably effective in controlling regular migration, for instance by facilitating and speeding up the circulation of individuals and cars, but less so when it comes to attempts at irregular migration. While FRT will inevitably make it harder for those migrants who need to reach Spanish territory in order to claim their rights to protection – that is, asylum seekers and unaccompanied minors – it cannot predict when migrants will attempt to pass the fences of Ceuta and Melilla, and it is ineffective when thousands gather and decide to climb the fence simultaneously, as during the 2022 attempts in Melilla that caused the deaths of thirty-seven migrants.Footnote 61 Moreover, migrants adapt their border crossing strategies to the technologies in place, for example in Fnideq,Footnote 62 where irregular migrants reach Ceuta’s shore by swimming when it is foggy and the cameras cannot detect them. Lastly, the installation of ‘smart borders’ in the north of Morocco has (once again) redirected irregular migrants towards the longer, more costly, and deadlier migratory routes in the south of Morocco, where one can reach the (Spanish) Canary Islands through a perilous journey by boat.Footnote 63

Despite Spain’s massive investment in these technologies, it remains unclear what actual effects FRT use has had on Moroccan–Spanish border dynamics, since no official reports have been released yet. The impression from the field is that the new ‘smart’ border has slightly improved the quality of daily tasks, but that border crossings are still overwhelmed during periods of intense flux (summer and national holidays). Furthermore, irregular migration dynamics seem not to have been affected by the use of new technologies, as migrants have adapted their strategies to cross the border.

19.6 Discussion

Despite the rapidly increasing use of FRT in Moroccan urban and border surveillance, public debate around these issues is still lacking in Morocco. In both cases, authorities justify the use of AI-enhanced technologies by the will to improve users’ experience and security.Footnote 64 Given the high sensitivity of the data captured through these technologies and the generalised opacity with which they are handled, there are grounds for concern about the respect of citizens’ right to privacy.

From the two cases analysed, two cross-cutting issues emerge. The first is the involvement of external actors, such as international donors, multi-national companies, and foreign states, which makes AI-enhanced surveillance an inherently trans-national issue. External actors play a key role in the development and financing of FRTs and in their installation on the ground, but also in the (limited) monitoring of human rights protection frameworks. International donors, for instance, provide funding for urban and border surveillance projects, and could play a more active role in enforcing transparency mechanisms around the use of such surveillance infrastructures. Yet this is not always the case. For Casablanca’s first video surveillance projects (2015 and 2017), part of the funding came from a World Bank loan through a project to improve urban transportation. Although the World Bank raised concerns about the use of its funds, and demanded an audit that concluded the video surveillance system was not eligible for funding in the framework of its project, the project was still financed.Footnote 65 Recently, the World Bank even approved an increase of 100 million dollars (in addition to the already committed 200 million dollars) to finance further development projects by the city of Casablanca.Footnote 66

Similarly, the EU is extensively funding border control and surveillance technologies in Morocco, with little transparency concerning their use and few requirements in terms of associated human rights protection. For instance, the Moroccan DGSN acquired spying software from the Swedish firm MSAB and the US company Oxygen Forensic with funding from the Africa Emergency Fund, set up by the EU in 2015 for its ‘fight against irregular migration’.Footnote 67 While this technology transfer project was implemented in the context of migration co-operation, the EU has no effective mechanism in place to prevent the misuse of such technologies for other repressive activities. More generally, although the EU has timidly tried to regulate the export of high-risk surveillance,Footnote 68 it faces resistance from Member States.Footnote 69 Additional rules were put in place in the revision of the EU’s Export Control Framework under EU Regulation 2021/821. But although export controls in the EU are becoming increasingly strict, EU Member States still often find ways to export technologies deemed relevant for reasons of national security.Footnote 70

The second cross-cutting issue that emerges from the analysis is that Morocco’s existing legal framework, through which these projects are launched and implemented, poses significant obstacles to any kind of public oversight. Most of the tenders that accompany the development of these projects are circulated behind closed doors and not made public. Occasional leaks to the press are the main way in which these projects come to public knowledge. However, when tenders are unsuccessful, Moroccan law authorises the contracting authority to proceed through ‘over-the-counter’ contracts – which do not require any kind of publicity.Footnote 71 In other words, the companies that will implement these projects are selected directly by the contracting authority without a public tendering process, raising important questions in terms of transparency in the use of public funds. This will be the case for the 720 million Moroccan dirham project that will set up the new video surveillance system in the city of Casablanca.Footnote 72 Some interviewees also noted that when these tenders escape public scrutiny, they tend to be attributed to companies that have close ties to the regime.Footnote 73

While the leaked CFTs provide some insights into which cameras are installed and how many, they leave Moroccan citizens and civil society in the dark as to how they will actually be used. Companies deploying these technologies argue that they have no control over their end-use. For instance, a source working for Huawei in Morocco highlighted: ‘if Huawei sells video surveillance products, it does not have access to what the final clients do with them, and does not participate in their installation’.Footnote 74 Similarly, European companies seem impervious to ethical concerns about the potential misuse of the technology they provide. In the framework of these projects, they ‘do what they are asked to without too much resistance’.Footnote 75

Other state institutions that should monitor the ethical implications of the use of AI-enabled surveillance technologies are not raising any concerns either. In its position paper on the digital transition, the Moroccan Conseil Économique, Social et Environnemental defines AI development as a ‘national priority’, but it does not touch upon the use of AI in urban video surveillance. Similarly, the Conseil National des Droits Humains recently organised an international colloquium to discuss the ethical implications of uses of AI, but it dealt with this topic from a purely academic perspective and avoided raising the issue of FRT-based surveillance by state authorities.Footnote 76 Lastly, while raising the issue of the storage and analysis of personal data through facial recognition by private actors, the Commission nationale de contrôle de la protection des données à caractère personnel seemed untroubled by the use of the same technologies by security services and the exponential increase in the collection of personal data. In short, video surveillance is treated as the sole prerogative of the security apparatus, and so far public monitoring actors have avoided engaging directly with the topic. It seems that, implicitly and explicitly, public security should not be the public’s concern.

19.7 Conclusion

Our analysis shows that the umbrella argument of public security is applied not only to the use of AI-enhanced technologies in Moroccan urban spaces and at the Moroccan–Spanish border, but also to their deployment, oversight, and monitoring. As a result, information on whether (and eventually how) high-tech surveillance technology is used is confidential, national security agencies are seemingly exempt from the monitoring of other state institutions, and independent actors are expected to trust that these institutions are acting in citizens’ best interest. This makes effective public oversight impossible, and amplifies the potential for these technologies to be used for ‘surgical’ repression.

The lack of oversight is also nurtured by the absence of a public debate – and ostensibly of public interest – on the matter. A telling anecdote is that many inhabitants of Casablanca think that the cameras around the city do not work, and have been put there only to bring about an improvement in public behaviour.Footnote 77 Kindling a public discussion on the securitisation of public spaces through high-tech surveillance was one of the ambitions of the TelQuel issue of July 2021, but so far this debate is still lacking.Footnote 78 If anything, interviewees perceived Moroccans as being quite ill-informed about related issues of personal data protection.Footnote 79

If the future plans inventoried in this chapter are indeed implemented, Morocco will be advancing rapidly towards the use of AI-enabled technologies in urban and border surveillance, including FRT. It is clear that state institutions plan to use these technologies extensively, and the lack of (trans)national institutional oversight and public debate on the matter should raise concerns about the extent to which such use will affect citizens’ rights. Until the topic is picked up in public debate and diplomatic relations, and until the way these technologies are purchased and governed is reformed, Moroccan authorities will continue to conduct widespread AI-enabled surveillance without any oversight or accountability.Footnote 80

Footnotes

9 Government Use of Facial Recognition Technologies under European Law

1 European Data Protection Board (EDPB), ‘EDPB & EDPS call for ban on use of AI for automated recognition of human features in publicly accessible spaces, and some other uses of AI that can lead to unfair discrimination’, Press Release (21 June 2021), https://edpb.europa.eu/news/news/2021/edpb-edps-call-ban-use-ai-automated-recognition-human-features-publicly-accessible_en; European Parliament resolution of 6 October 2021 on artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters.

2 ‘Biometric & behavioural mass surveillance in EU member states’ (2021), Report for the Greens/EFA in the European Parliament, p. 36 et seq.

3 Ibid., p. 38.

4 Stephan Schindler, ‘Biometrische Videoüberwachung – Zur Zulässigkeit biometrischer Gesichtserkennung in Verbindung mit Videoüberwachung zur Bekämpfung von Straftaten’ (2020), p. 211; Gerrit Hornung and Stephan Schindler, ‘Das biometrische Auge der Polizei’ (2017) 5 ZD 203.

5 European Parliament resolution of 6 October 2021 on artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters.

6 CJEU, Case 291/12, Michael Schwarz v. Stadt Bochum [2013], ECLI:EU:C:2013:670, para. 43.

7 CJEU, Case 817/19, Ligue des droits humains v. Conseil des ministres [2022], ECLI:EU:C:2022:491, paras. 123–124.

8 Ibid., para. 123.

9 Ibid., para. 124.

10 Davide Castelvecchi, ‘Beating biometric bias’ (2020) 587 Nature 348.

11 Footnote Ibid., p. 349; William Crumpler, ‘How accurate are facial recognition systems – And why does it matter?’ (14 April 2020), Center for Strategic and International Studies, www.csis.org/blogs/technology-policy-blog/how-accurate-are-facial-recognition-systems-%E2%80%93-and-why-does-it-matter.

12 Crumpler, ‘How accurate are facial recognition systems?’.

13 Mei Ngan, Patrick Grother, and Kayee Hanaoka, ‘Ongoing Face Recognition Vendor Test (FRVT), Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms’ (November 2020), NISTIR 8331, https://doi.org/10.6028/NIST.IR.8331.

14 Castelvecchi, ‘Beating biometric bias’, p. 349.

15 For more examples see EDPB, Guidelines 05/2022 on the use of facial recognition technology in the area of law enforcement, Version 1.0, p. 9.

16 Patrick Grother, George Quinn, and Mei Ngan, ‘Face in Video Evaluation (FIVE): Face recognition of non-cooperative subjects’ (March 2017), NISTIR 8173, https://doi.org/10.6028/NIST.IR.8173.

18 European Parliamentary Research Service (EPRS), ‘Regulating facial recognition in the EU’ (September 2021), p. 10.

19 ECtHR, Niemietz v. Germany, judgment of 16 December 1992, Series A no. 251-B, pp. 33–34, § 29; Botta v. Italy, judgment of 24 February 1998, Reports of Judgments and Decisions 1998-I, p. 422, § 32.

20 ECtHR, P.G. and J.H. v. the United Kingdom, no. 44787/98, § 56; Peck v. the United Kingdom, no. 44647/98, § 57.

21 See ECtHR, Uzun v. Germany, judgment, no. 35623/05, § 44.

23 See ECtHR, Niemietz v. Germany, judgment of 16 December 1992, Series A no. 251-B, pp. 33–34, § 29; Botta v. Italy, judgment of 24 February 1998, Reports of Judgments and Decisions 1998-I, p. 422, § 32.

24 For example, the police in Hamburg used FRT after the G20 summit in 2017 to identify offenders from private recordings and police videos as well as image and video material from S-Bahn stations and from the media.

25 Schindler, ‘Biometrische Videoüberwachung’, p. 384.

26 See ‘Impact of new technologies on the promotion and protection of human rights in the context of assemblies, including peaceful protests’ (2020), Report of the United Nations High Commissioner for Human Rights.

27 EDPB, Guidelines 05/2022 on the use of facial recognition technology in the area of law enforcement, Version 1.0, p. 13.

28 Art. 16(2) TFEU only empowers the EU to adopt rules relating to the protection of individuals with regard to the processing of personal data by the member states when carrying out activities that fall within the scope of EU law and on the free movement of such data.

29 Schindler, ‘Biometrische Videoüberwachung’, p. 404; see also Vera Lúcia Raposo, ‘The use of facial recognition technology by law enforcement in Europe: A non-Orwellian draft proposal’ (2022) European Journal on Criminal Policy and Research, https://doi.org/10.1007/s10610-022-09512-y.

30 See in detail EDPB, Guidelines 05/2022 on the use of facial recognition technology in the area of law enforcement, Version 1.0, p. 17 ff.

31 EDPB, Guidelines 05/2022, p. 13; EPRS, ‘Regulating facial recognition in the EU’, p. 14; Raposo, ‘The use of facial recognition technology’.

32 CJEU, Case 293/12 and 594/12, Digital Rights Ireland Ltd v. Minister for Communications, Marine and Natural Resources [2014], ECLI:EU:C:2014:238, 54; Case 203/15 and 698/15, Tele2 Sverige AB v. Post- och telestyrelsen [2016], ECLI:EU:C:2016:970, 109.

33 ECtHR, Shimovolos v. Russia, no. 30194/09, § 68.

34 Judgment in Case No. C1/2019/2670, Court of Appeal, 11 August 2020, 90–96.

36 See EDPB, Guidelines 05/2022, p. 14.

37 CJEU, Case 293/12 and 594/12, Digital Rights Ireland Ltd v. Minister for Communications, Marine and Natural Resources [2014], ECLI:EU:C:2014:238, 42; see also Case C-145/09, Land Baden-Württemberg v. Panagiotis Tsakouridis [2010], EU:C:2010:708, 45–47.

38 See ECtHR, S. and Marper v. The United Kingdom, nos. 30562/04 and 30566/04, 100.

39 See Schindler, ‘Biometrische Videoüberwachung’, p. 613.

40 Ibid., p. 608 et seq.

41 See in this regard CJEU, Case 293/12 and 594/12, Digital Rights Ireland Ltd v. Minister for Communications, Marine and Natural Resources [2014], ECLI:EU:C:2014:238, 57–65.

42 Mario Martini, ‘Gesichtserkennung im Spannungsfeld zwischen Sicherheit und Freiheit’ (2022), 1–2 NVwZ-Extra 6.

43 See BVerfG, Urt. v. 18.12.2018, Kfz-Kennzeichenkontrolle 2, ECLI:DE:BVerfG:2018:rs20181218.1bvr014215, 47–51.

44 CJEU, Case 293/12 and 594/12, Digital Rights Ireland Ltd v. Minister for Communications, Marine and Natural Resources [2014], ECLI:EU:C:2014:238, 51; Case 140/20, G.D. v. Commissioner of An Garda Síochána [2022], ECLI:EU:C:2022:258, 101.

45 See CJEU, Case 511/18, 512/18 and 520/18, La Quadrature du Net v. Premier minister [2020], ECLI:EU:C:2020:791, 147; C-203/15 and C-698/15, Tele2 Sverige AB v. Post- och telestyrelsen [2016], EU:C:2016:970, 108.

46 See CJEU, Case 511/18, 512/18 and 520/18, La Quadrature du Net v. Premier minister [2020], ECLI:EU:C:2020:791, 148–150; Case-203/15 and C-698/15, Tele2 Sverige AB v. Post- och telestyrelsen [2016], EU:C:2016:970, 111.

47 See CJEU, Case 511/18, 512/18 and 520/18, La Quadrature du Net v. Premier minister [2020], ECLI:EU:C:2020:791, 150.

48 Ibid., paras. 158–159; Case 746/18, H.K. v. Prokuratuur [2021], ECLI:EU:C:2021:152, 34.

49 CJEU, Case 746/18, H.K. v. Prokuratuur [2021], ECLI:EU:C:2021:152, 35.

50 CJEU, Case 293/12 and 594/12, Digital Rights Ireland Ltd v. Minister for Communications, Marine and Natural Resources [2014], ECLI:EU:C:2014:238, para. 54; Case 746/18, H.K. v. Prokuratuur [2021], ECLI:EU:C:2021:152, 48.

51 Case 293/12 and 594/12, Digital Rights Ireland Ltd v. Minister for Communications, Marine and Natural Resources [2014], ECLI:EU:C:2014:238, para. 55; Case 511/18, 512/18 and 520/18, La Quadrature du Net v. Premier minister [2020], ECLI:EU:C:2020:791, 132.

52 See Case 293/12 and 594/12, Digital Rights Ireland Ltd v. Minister for Communications, Marine and Natural Resources [2014], ECLI:EU:C:2014:238, 61–66.

53 Case 817/19, Ligue des droits humains v. Conseil des ministres [2022], ECLI:EU:C:2022:491, para. 203.

54 See Michal Kosinski, ‘Facial recognition technology can expose political orientation from naturalistic facial images’ (2021) 11 Scientific Reports 100, https://doi.org/10.1038/s41598-020-79310-1.

55 See Ngan, Grother, and Hanaoka, ‘Face recognition accuracy with face masks’.

56 See Council of Europe, ‘Consultative Committee of the Convention for the protection of individuals with regard to automatic processing of personal data, Convention 108’ (28 January 2021), Guidelines on facial recognition, p. 8.

57 A limitation to categories of persons is not technically possible when deployed in public spaces.

58 Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence and amending certain Union Legislative Acts, COM(2021) 206 final.

59 The AI-Act only contains limitations on the use of real-time remote biometric identification systems deployed in publicly accessible spaces for the purpose of law enforcement (see Art. 5(1) lit. d of the Commission’s AI-Act Proposal). However, the Commission’s original AI-Act Proposal is silent regarding other modalities of FRT for law enforcement purposes, such as when this technology is not used in real time or is not deployed in public spaces. Only the proposals made by the European Parliament address these deployment modalities of FRT, see European Parliament, P9_TA(2023)0236.

60 Recognised by Art. 5(4) of the AI-Act Proposal.

10 European Biometric Surveillance, Concrete Rules, and Uniform Enforcement: Beyond Regulatory Abstraction and Local Enforcement

1 The procedure of collecting signatures for the ‘Civil society initiative for a ban on biometric mass surveillance practices’ (initiated at the beginning of 2021) is ongoing. See: Commission Implementing Decision (EU) 2021/360 of 19 February 2021 on the extension of the periods for the collection of statements of support for certain European citizens’ initiatives pursuant to Regulation (EU) 2020/1042 of the European Parliament and of the Council (notified under document C(2021) 1121) (2021) OJ L69/9; Commission Implementing Decision (EU) 2021/944 of 3 June 2021 on the extension of the periods for the collection of statements of support for certain European citizens’ initiatives pursuant to Regulation (EU) 2020/1042 of the European Parliament and of the Council (notified under document C(2021) 3879).

2 European Parliamentary Research Service, ‘Person identification, human rights and ethical principles: Rethinking biometrics in the era of artificial intelligence’ (16 December 2021), European Union, p. I, refers to ‘remote biometric identification’ as ‘AI systems used for the purpose of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge of the user of the AI system whether the person will be present and can be identified’, www.europarl.europa.eu/stoa/en/document/EPRS_STU(2021)697191.

3 European Parliamentary Research Service, ‘Person identification’, p. I, defines ‘biometric categorisation’ as ‘AI systems used for the purpose of assigning natural persons to specific categories, such as sex, age, hair colour, eye colour, tattoos, ethnic origin or sexual or political orientation, on the basis of their biometric data’.

4 Charter of Fundamental Rights of the European Union (2012) OJ C326/391, Art. 21. ‘1. Any discrimination based on any ground such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation shall be prohibited. 2. Within the scope of application of the Treaties and without prejudice to any of their specific provisions, any discrimination on grounds of nationality shall be prohibited […].’ European Convention on Human Rights (as amended by Protocols Nos 11, 14 and 15 supplemented by Protocols Nos 1, 4, 6, 7, 12, 13 and 16), Art. 14. ‘The enjoyment of the rights and freedoms set forth in this Convention shall be secured without discrimination on any ground such as sex, race, colour, language, religion, political or other opinion, national or social origin, association with a national minority, property, birth or other status […].’

5 S and Marper v. the United Kingdom, Application Nos 30562/04 and 30566/04 (ECtHR, 4 December 2008) 121. ‘The Court […] reiterates that the mere retention and storing of personal data by public authorities, however obtained, are to be regarded as having direct impact on the private life interest of an individual concerned, irrespective of whether subsequent use is made of the data […].’

6 European Parliamentary Research Service, ‘Person identification, human rights and ethical principles’, p. I, sees ‘emotion recognition’ as ‘AI systems used for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data’.

7 On old (CCTV) modes of surveillance that keep being subjected to new soft law, in light of technological developments and further implementations, see: ICO, ‘Video surveillance (including guidance for organisations using CCTV)’ (n.d.): ‘Traditional closed circuit television (CCTV) also continues to evolve into more complex artificial intelligence (AI) based surveillance systems. These can process more sensitive categories of personal data […] The ways in which the technology is used also continue to develop. This includes connected databases utilising Automatic Number Plate Recognition (ANPR) or the use of Facial Recognition Technology (FRT) in public spaces […]’, https://ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/guidance-on-video-surveillance/.

8 Paul De Hert and Georgios Bouchagiar, ‘Visual and biometric surveillance in the EU. Saying “no” to mass surveillance practices?’ (2022) 27(2) Information Polity 193.

9 See, among many others, European Parliamentary Research Service, ‘Person identification, human rights and ethical principles’, 53ff, finding regulatory failures and gaps and suggesting, among others, more specific and targeted regulation and bans on certain uses.

10 Although some of these efforts and AI-proposals appear promising, it remains to be seen whether they will be effectively realised. See Maximilian Gahntz, Mark Surman, and Mozilla Insights, ‘How to make sure the EU’s AI Act delivers on its promise’ (25 April 2022), Mozilla Foundation, https://foundation.mozilla.org/en/blog/how-to-make-sure-the-eu-ai-act-delivers-on-its-promise/#:~:text=The%20draft%20AI%20Act%20includes,before%20they%20can%20be%20deployed. In our view, these efforts need to be taken seriously. Regulators simply must provide the citizen a response to materialised, detected, or emerging risks and harms. At least those states that see themselves as pioneers in a tech-field should make themselves analogously responsibilised towards those affected by their technological expertise and uses. Compare Els Kindt, ‘Biometric data processing: Is the legislator keeping up or just keeping up appearances?’ in Gloria González, Rosamunde Van Brakel and Paul De Hert (eds.), Research Handbook on Privacy and Data Protection Law (Edward Elgar, 2022), pp. 375, 396: ‘[T]he responsibility of the States to regulate the automated use of unique and other human characteristics cannot be underestimated: Any State claiming a pioneer role in the development of new technologies bears special responsibility for “striking the right balance” […].’

11 See, on the refusal of the judges in Bridges to test the proportionality of facial recognition systems, Nóra Ni Loideain, Chapter 11 in this volume. See also De Hert and Bouchagiar, ‘Visual and biometric surveillance in the EU’.

12 See, among others, Gabe Maldoff, ‘White Paper – The risk-based approach in the GDPR: Interpretation and implications’ (March 2016), IAPP, https://iapp.org/resources/article/the-risk-based-approach-in-the-gdpr-interpretation-and-implications/; Gianclaudio Malgieri, ‘Malgieri & Ienca on European Law Blog: “The EU regulates AI but forgets to protect our mind”’ (7 July 2021), Gianclaudio Malgieri, European Law Blog, www.gianclaudiomalgieri.eu/2021/07/07/malgieri-ienca-on-european-law-blog-the-eu-regulates-ai-but-forgets-to-protect-our-mind/; European Commission, ‘Environmental risks’ (n.d.), https://ec.europa.eu/environment/risks/index_en.htm.

13 Compare with Orla Lynskey on the possible role of law in this area: either shaping proportionate surveillance or banning facial recognition since it affects the core of individual and collective rights and interests. Orla Lynskey, ‘Keynote address in facial recognition in the modern state’ (15 September 2022), UNSW Allens Hub, https://allenshub.unsw.edu.au/events/facial-recognition-modern-state.

14 See, among others, McKay Cunningham, ‘Next generation privacy: The internet of things, data exhaust, and reforming regulation by risk of harm’ (2014) 2(2) Groningen Journal of International Law 115, 142, 144. ‘Privacy laws should focus on data use, not collection. Privacy laws should identify and address the specific harm or risk associated with the use of sensitive data in particular contexts […]’; Paul De Hert, ‘The future of privacy – Addressing singularities to identify bright-line rules that speak to us’ (2016) 2(4) European Data Protection Law Review 461.

15 Paul De Hert and Georgios Bouchagiar, ‘Facial recognition, visual and biometric data in the US. Recent, promising developments to regulate intrusive technologies’ (2021) 7(29) Brussels Privacy Hub https://brusselsprivacyhub.eu/publications/wp729; De Hert and Bouchagiar, ‘Visual and biometric surveillance in the EU’.

16 For full reference of these initiatives, see: De Hert and Bouchagiar, ‘Facial recognition, visual and biometric data in the US’; De Hert and Bouchagiar, ‘Visual and biometric surveillance in the EU’.

17 Federal 2020 National Biometric Information Privacy Act, section 3(b)–(d).

18 For a list of lawsuits, based on Illinois Biometric Information Privacy Act and revealing that some actors are becoming nervous and uneasy in light of risks connected with FRTs and machine learning-implementations, see Debra Bernard, Susan Fahringer, and Nicola Menaldo, ‘New biometrics lawsuits signal potential legal risks in AI’ (2 April 2020), Perkins Coie, www.perkinscoie.com/en/news-insights/new-biometrics-lawsuits-signal-potential-legal-risks-in-AI.html.

19 2008 Illinois Biometric Information Privacy Act, section 15(b)–(e).

20 2019 California’s Assembly Bill No. 1215, section 2(b).

21 House Bill 1238.

22 New York’s Assembly Bill, subdivision 2.

23 Senate Bill 1392, section 59.1-574, subsection A.

24 Federal 2019 Commercial Facial Recognition Privacy Act, section 3(a)(2–4).

25 Federal 2020 Facial Recognition and Biometric Technology Moratorium Act, section 3(a)–(b).

26 Engrossed Substitute Senate Bill 6280, section 11(5), (7).

27 Assembly Bill 989.

28 Ordinance ‘Surveillance Technology in Baltimore’, ‘Article 19. Police Ordinances’, ‘Subtitle 18. Surveillance’.

29 For full analysis of our conclusions, see: De Hert and Bouchagiar, ‘Facial recognition, visual and biometric data in the US’; De Hert and Bouchagiar, ‘Visual and biometric surveillance in the EU’.

30 A good example of this can be found in Federal 2020 National Biometric Information Privacy Act, section 2(4): ‘The term written release means specific, discrete, freely given, unambiguous, and informed written consent given by an individual who is not under any duress or undue influence of an entity or third party at the time such consent is given; or […] in the context of employment, a release executed by an employee as a condition of employment […].’

31 ‘[I]t shall be unlawful for any Federal agency or Federal official […] to acquire, possess, access, or use in the United States (1) any biometric surveillance system; or (2) information derived from a biometric surveillance system operated by another entity […] (t)he prohibition set forth in subsection (a) does not apply to activities explicitly authorized by an Act of Congress that describes, with particularity (1) the entities permitted to use the biometric surveillance system, the specific type of biometric authorized, the purposes for such use, and any prohibited uses; (2) standards for use and management of information derived from the biometric surveillance system, including data retention, sharing, access, and audit trails; (3) auditing requirements to ensure the accuracy of biometric surveillance system technologies, standards for minimum accuracy rates, and accuracy rates by gender, skin color, and age; (4) rigorous protections for due process, privacy, free speech and association, and racial, gender, and religious equity; and (5) mechanisms to ensure compliance with the provisions of the Act […].’

32 ‘A violation of section 3 shall be treated as a violation of a rule defining an unfair or deceptive act or practice […].’

33 ‘Any person who violates any provision of this subtitle is guilty of a misdemeanor and, on conviction, is subject to a fine of not more than $1,000 or imprisonment for not more than 12 months or both fine and imprisonment […].’

34 De Hert and Bouchagiar, ‘Facial recognition, visual and biometric data in the US’.

35 Paul De Hert and Gianclaudio Malgieri, ‘One European legal framework for surveillance: The ECtHR’s expanded legality testing copied by the CJEU’ in Valsamis Mitsilegas and Niovi Vavoula (eds.), Surveillance and Privacy in the Digital Age. European, Transatlantic and Global Perspectives (Hart, 2021), p. 255.

36 See, for instance, European Parliament, ‘Parliamentary questions’ (13 August 2021), European Parliament: ‘In a joint opinion, the European Data Protection Supervisor (EDPS) and the European Data Protection Board (EDPB) have called for a general ban on the use of AI for the automated recognition of human features – such as of faces, gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals – in publicly accessible spaces. The EDPS and EDPB recommend tightening the draft EU Artificial Intelligence Act, as they consider that the current proposal does not cover a wide enough scope. 1. To what extent does the Commission take the views of the EDPS, EDPB and the 175 civil society organisations mentioned in the article above into account? 2. Does the automated recognition of human features constitute interference with the fundamental rights of EU citizens? 3. Is the Commission aiming to ban the automated recognition of human features? If so, on what grounds? […]’, www.europarl.europa.eu/doceo/document/E-9-2021-003888_EN.html.

37 There is a discussion on serious challenges in the US in an interview of Helena Wootton and Stewart Dresner with Justin Antonipillai, Privacy Laws & Business, ‘Privacy Paths’ podcast, Episode 17: ‘US privacy laws most likely to be adopted and when’ (10 November 2021), www.privacylaws.com/podcasts/.

38 On smart-contracting-programmes in the EU agenda targeted at the public sphere, from voting to establishing digital identities, see, among others, EU Blockchain, ‘Observatory and forum’ (n.d.), www.eublockchainforum.eu/initiative-map.

39 On face recognition in schools, see Asress Adimi Gikay, ‘On facial recognition technology in schools, power imbalance and consent: European data protection authorities should re-examine their approach’ (20 December 2021), EU Law Analysis, http://eulawanalysis.blogspot.com/2021/12/on-facial-recognition-technology-in.html. On recent initiatives in the United States, introducing concrete duties to employers who use monitoring technologies, see Hunton Andrews Kurth, ‘New York State requires private employers to notify employees of electronic monitoring’ (12 November 2021), Hunton Privacy Blog, www.huntonprivacyblog.com/2021/11/12/new-york-state-requires-private-employers-to-notify-employees-of-electronic-monitoring/#more-20908. This refers to New York’s law A.430/S.2628, introduced in 2021 (effective from May 2022), demanding private employers to give employees prior written notice (before hiring) of their monitoring technologies.

40 Privacy International, ‘Challenge against Clearview AI in Europe’ (2 June 2021), EDRi, https://edri.org/our-work/challenge-against-clearview-ai-in-europe/.

41 De Hert and Bouchagiar, ‘Visual and biometric surveillance in the EU’.

42 OAIC, ‘OAIC and ICO conclude joint investigation into Clearview AI’ (3 November 2021), www.oaic.gov.au/updates/news-and-media/oaic-and-ico-conclude-joint-investigation-into-clearview-ai.

43 Ibid. ‘Our digital world is international and so our regulatory work must be international too, particularly where we are looking to anticipate, interpret and influence developments in tech for the global good […] The issues raised by Clearview AI’s business practices presented novel concerns in a number of jurisdictions. By partnering together, the OAIC and ICO have been able to contribute to an international position, and shape our global regulatory environment […].’

44 ICO, ‘ICO issues provisional view to fine Clearview AI Inc over £17 million’ (29 November 2021), https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/. ‘The ICO’s preliminary view is that Clearview AI Inc appears to have failed to comply with UK data protection laws in several ways including by […] failing to process the information of people in the UK in a way they are likely to expect or that is fair […] failing to have a process in place to stop the data being retained indefinitely […] failing to have a lawful reason for collecting the information […] failing to meet the higher data protection standards required for biometric data (classed as ‘special category data’ under the GDPR and UK GDPR) […] failing to inform people in the UK about what is happening to their data; and asking for additional personal information, including photos, which may have acted as a disincentive to individuals who wish to object to their data being processed […].’

46 ICO, ‘Clearview AI Inc.’ (26 May 2022), https://ico.org.uk/action-weve-taken/enforcement/clearview-ai-inc-mpn/.

47 ICO, ‘ICO issues provisional view’. ‘Clearview AI Inc’s services are no longer being offered in the UK. However, the evidence we’ve gathered and analysed suggests Clearview AI Inc were and may be continuing to process significant volumes of UK people’s information without their knowledge. We therefore want to assure the UK public that we are considering these alleged breaches and taking them very seriously […].’

48 Hermes Center and Reclaim Your Face, ‘Italian DPA fines Clearview AI for illegally monitoring and processing biometric data of Italian citizens’ (23 March 2022), EDRi, https://edri.org/our-work/italian-dpa-fines-clearview-ai-for-illegally-monitoring-and-processing-biometric-data-of-italian-citizens/.

49 CNIL, ‘Facial recognition: The CNIL orders CLEARVIEW AI to stop reusing photographs available on the internet’ (16 December 2021), www.cnil.fr/en/facial-recognition-cnil-orders-clearview-ai-stop-reusing-photographs-available-internet.

50 Lynskey, ‘Keynote address in facial recognition in the modern state’.

51 For the text of the settlement, see www.aclu.org/cases/aclu-v-clearview-ai. See also ACLU, ‘In big win, settlement ensures Clearview AI complies with groundbreaking Illinois biometric privacy law’ (9 May 2022), www.aclu.org/press-releases/big-win-settlement-ensures-clearview-ai-complies-with-groundbreaking-illinois. See also Security.nl, ‘Clearview AI beperkt gebruik van massale gezichtsherkenningsdatabase’ (10 May 2022), www.security.nl/posting/752955/Clearview+AI+beperkt+gebruik+van+massale+gezichtsherkenningsdatabase.

52 Compare Arti, ‘Clearview Ai vs ACLU lawsuit is nothing but a facade of fake hopes and claims’ (18 May 2022), Analytics Insight, www.analyticsinsight.net/clearview-ai-vs-aclu-lawsuit-is-nothing-but-a-facade-of-fake-hopes-and-claims/.

53 Clearview AI has agreed to a nationwide injunction barring access to the Clearview App by (1) any private entity or private individuals unless such access is compliant with BIPA; or (2) any governmental employee not acting in his or her official capacity.

54 Clearview has agreed to a five-year injunction against access to the Clearview App (1) by Illinois state and local agencies and their contractors; (2) by any private entity located in Illinois even if permissible under BIPA; and (3) by employees of Illinois state and local agencies and their contractors, whether in their individual or official capacities.

55 There will be no restrictions on Clearview’s ability to work with or contract with (1) third parties outside Illinois; (2) federal agencies whether in Illinois or outside Illinois; and (3) state or local government agencies outside Illinois.

56 This is the ‘Opt-Out Program’ for Illinois residents in the settlement, by which an Illinois resident will be allowed to submit a photo to Clearview and compel Clearview, on a best-efforts basis, to block search results and prevent any future collection of facial recognition data or images of such person. A last element of the settlement is ‘Illinois Photo Screening’, in which Clearview has agreed, on a best-efforts basis, not to access or use any of its existing ‘Illinois-based’ facial recognition data.

57 On the In re Facebook Biometric Information Privacy Litigation settlement of 2020, see J. Cleary, ‘Facial recognition: Clearview-ACLU settlement charts a new path for BIPA and the First Amendment’ (September 2022) The National Law Review 1.

58 On Vance v. IBM and Janecyk v. International Business Machines, see Debra Bernard, Susan Fahringer, and Nicola Menaldo, ‘New biometrics lawsuits signal potential legal risks in AI’ (2020) 3/5 The Journal of Robotics, Artificial Intelligence & Law 353–356.

60 De Hert and Bouchagiar, ‘Facial recognition, visual and biometric data in the US’; De Hert and Bouchagiar, ‘Visual and biometric surveillance in the EU’.

11 Lawfulness and Police Use of Facial Recognition in the United Kingdom: Article 8 ECHR and Bridges v. South Wales Police

1 Nessa Lynch et al., Facial Recognition Technology in New Zealand (The Law Foundation, 2020); European Digital Rights, The Rise and Rise of Biometric Mass Surveillance in the EU (EDRi, 2021); US Government Accountability Office, ‘Facial recognition technology, GAO-21-526’ (2021); House of Lords, ‘Technology rules? The advent of new technologies in the justice system’ (2022), HL Paper 180; Nicola Kelly, ‘Facial recognition smartwatches to be used to monitor foreign offenders in UK’ (5 August 2022), The Guardian; Laura Kayali, ‘French privacy chief warns against using facial recognition for 2024 Olympics’ (24 January 2023), Politico.

2 World Economic Forum, ‘A policy framework for responsible limits on facial recognition’ (3 November 2022), pp. 15–18.

3 Big Brother Watch, ‘Face off: The lawless growth of facial recognition in the UK’ (May 2018), pp. 9–19; Pete Fussey and Daragh Murray, ‘Independent report on the London Metropolitan Police Service’s trial of live facial recognition technology’ (July 2019), pp. 5–6; Information Commissioner’s Office, ‘The use of live facial recognition technology by law enforcement in public places’ (2019), ICO Opinion; Kate Crawford, ‘Regulate facial recognition technology’ (2019) 572 Nature 565; European Digital Rights, The Rise and Rise of Biometric Mass Surveillance, pp. 12–13; Biometrics, Forensics and Ethics Group, ‘Briefing note on the ethical issues arising from public–private collaboration in the use of live facial recognition technology’ (2021), UK Government; Sarah Bird, ‘Responsible AI investments and safeguards for facial recognition’ (21 June 2022), Microsoft Azure AI; Matthew Ryder KC, Independent Legal Review of the Governance of Biometric Data in England and Wales (Ada Lovelace Institute, 2022); Information Commissioner’s Office, ‘ICO fines facial recognition company Clearview AI Inc more than £7.5 m’ (May 2022); Clothilde Goujard, ‘Europe edges closer to a ban on facial recognition’ (20 September 2022), Politico.

4 On the risks of unlawful discrimination from AI-based systems used for predictive policing, see EU Agency for Fundamental Rights (FRA), ‘Bias in algorithms: AI and discrimination’ (2022), FRA Report, pp. 36–48; FRA, ‘Facial recognition technology: Fundamental rights considerations in law enforcement’ (2019), FRA Paper, pp. 27–28.

5 Bethan Davies, Martin Innes, and Andrew Dawson, ‘An evaluation of South Wales Police’s use of automated facial recognition’ (September 2018), Report, Crime & Security Research Institute, Cardiff University, p. 43; Crawford, ‘Regulate facial recognition technology’; Information Commissioner’s Office, ‘The use of live facial recognition technology’, pp. 21–22; House of Lords, ‘Technology rules?’, pp. 76–77. See generally European Data Protection Board and European Data Protection Supervisor, ‘EDPB-EDPS Joint Opinion 5/2021’ (18 June 2021).

6 Kashmir Hill, ‘How one state managed to actually write rules on facial recognition’ (27 February 2021), New York Times.

7 European Parliament, ‘Report on artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters’ (2021), Report-A9-0232/2021.

8 R (Bridges) v. South Wales Police [2019] EWHC 2341, High Court; [2020] EWCA Civ 1058, Court of Appeal.

9 [2020] EWCA Civ 1058 [210]. Other legal issues addressed in Bridges, concerning proportionality, data protection, and equality law, are beyond the scope of this chapter. On these areas, see Lorna Woods, ‘Automated facial recognition in the UK’ (2020) 6 European Data Protection Law Review 455; Monika Zalnieriute, ‘Burning bridges: The automated facial recognition technology and public space surveillance in the modern state’ (2021) 22 Columbia Science & Technology Law Review 284; Joe Purshouse and Liz Campbell, ‘Automated facial recognition and policing’ (2022) 42 Legal Studies 209.

10 Biometrics and Surveillance Camera Commissioner, ‘Annual report 2022’ (2023), UK Government, p. 6; House of Lords, ‘Technology rules?’, pp. 27–32; Ryder, Independent Legal Review, pp. 62–67.

11 See generally Julia Black and Andrew Murray, ‘Regulating AI and machine learning’ (2019) 10(3) European Journal of Law and Technology.

12 Deryck Beyleveld and Roger Brownsword, ‘Punitive and preventive justice in an era of profiling, smart prediction, and practical preclusion’ (2019) 15 International Journal of Law in Context 198. See generally Andrew Ashworth and Lucia Zedner, Preventive Justice (Oxford University Press, 2014).

13 For instance, multiple US public authorities have used Clearview commercial facial recognition software for law enforcement purposes since 2020: see US Government Accountability Office, ‘Facial recognition technology’, pp. 26–28.

14 See Nora Ni Loideain, ‘Cape Town as a smart and safe city: Implications for privacy and data protection’ (2017) 4 International Data Privacy Law 314; Nadezhda Purtova, ‘Between the GDPR and the police directive’ (2018) 8 International Data Privacy Law 52; Orla Lynskey, ‘Criminal justice profiling and EU data protection law’ (2019) 15 International Journal of Law in Context 162; Sarah Brayne, Predict and Surveil (Oxford University Press, 2020); Helen Warrell and Nic Fildes, ‘Amazon strikes deal with UK spy agencies to host top secret material’ (25 October 2021), Financial Times; Litska Strikwerda, ‘Predictive policing’ (2020) 94 The Police Journal 259; Stanisław Tosza, ‘Internet service providers as law enforcers and adjudicators’ (2021) 43 Computer Law & Security Review.

15 Linda Geddes, ‘Digital forensics experts prone to bias, study shows’ (31 May 2021), The Guardian; FRA, ‘Bias in Algorithms’, p. 19.

16 Joy Buolamwini and Timnit Gebru, ‘Gender shades: Intersectional accuracy disparities in commercial gender classification’ (2018), Proceedings of Machine Learning Research 81 Conference on Fairness, Accountability, and Transparency, pp. 1–15; US National Institute of Standards and Technology (NIST), ‘NIST study evaluates effects of race, age, sex on face recognition software’ (19 December 2019).

17 Dominic Casciani, ‘Sarah Everard’s murder and the questions the Met police now face’ (2 October 2021), BBC News; UK Government, ‘The Lammy Review: An independent review into the treatment of, and outcomes for, Black, Asian and minority ethnic individuals in the criminal justice system’ (2017); Kashmir Hill, ‘Wrongfully accused by an algorithm’ (24 June 2021), New York Times; Jane Bradley, ‘“Troubling” race disparity is found in UK prosecution decisions’ (8 February 2023), New York Times.

18 BBC News, ‘Leicestershire police trial facial recognition software’ (15 July 2014); Fussey and Murray, ‘Independent report’.

19 Biometrics, Forensics and Ethics Group, ‘Briefing note’, p. 4.

20 Woods, ‘Automated facial recognition’, p. 461; Zalnieriute, ‘Burning bridges’, p. 287; Nora Ni Loideain, ‘A trustworthy framework that respects fundamental rights? The draft EU AI Act and police use of biometrics’ (4 August 2021), Information Law & Policy Centre; European Data Protection Board and European Data Protection Supervisor, ‘EDPB-EDPS Joint Opinion 5/2021’; Biometrics and Surveillance Camera Commissioner, ‘Annual Report 2022’, pp. 59–60.

21 Ben Faiza v. France [2018] ECHR 153; Joined Cases C-511/18, C-512/18, C-520/18, La Quadrature du Net and Others, judgment of 6 October 2020 (ECLI:EU:C:2020:791) [187].

22 Ni Loideain, ‘A trustworthy framework’.

23 House of Lords, ‘Technology rules?’, p. 50. See generally Robert Baldwin, Martin Cave, and Martin Lodge, Understanding Regulation (Oxford University Press, 2011).

24 M.D. and Others v. Spain [2022] ECHR 527 [52].

25 Leander v. Sweden (1987) 9 EHRR 433 [48]; Catt v. United Kingdom [2019] ECHR 76 [93].

26 Amann v. Switzerland [2000] ECHR 87 [69].

27 M.M. v. United Kingdom [2012] ECHR 1906 [187].

28 Gaughran v. United Kingdom [2020] ECHR 144 [63]–[70]; M.D. and Others v. Spain [54].

29 S and Marper v. United Kingdom (2009) 48 EHRR 50 [66] (emphasis added).

30 Zakharov v. Russia (2016) 63 EHRR 17.

31 S and Marper v. United Kingdom [66]–[86]; Gaughran v. United Kingdom [63]–[70].

32 Gaughran v. United Kingdom [81]. On law enforcement use of this technique in the UK, see Biometrics and Forensics Ethics Group, Should We Be Making Use of Genetic Genealogy to Assist in Solving Crime? (UK Government, 2020).

33 Gaughran v. United Kingdom [68]–[70].

34 On the traditional approach of the ECtHR regarding the legality condition, see generally Geranne Lautenbach, The Concept of the Rule of Law and the European Court of Human Rights (Oxford University Press, 2013).

35 Malone v. United Kingdom (1985) 7 EHRR 14 [67].

36 Huvig v. France [1990] ECHR 9 [28].

37 Zakharov v. Russia [228].

38 Weber and Saravia v. Germany [2006] ECHR 1173 [94] (admissibility decision).

39 Huvig v. France [1990] ECHR 9 [34]; Big Brother Watch v. United Kingdom [2021] ECHR 439 [335]. Although these six foreseeability safeguards were established in the 1990s, some scholars refer to them as the ‘Weber criteria’ following Weber and Saravia.

40 Huvig v. France (1990) 12 EHRR 528 [32]; Zakharov v. Russia [229] (emphasis added).

41 S and Marper v. United Kingdom [103]; M.K. v. France [2013] ECHR 341 [35]; Aycaguer v. France [2017] ECHR 587 [38].

42 See, for instance, Lorena Winter, ‘Telephone tapping in the Spanish criminal procedure’ (2007) 13 Jura 7; John Spencer, ‘Telephone-tap evidence and administrative detention in the UK’ in Marianne Wade and Almir Maljevic (eds.), A War on Terror? (Springer, 2010); T. J. McIntyre and Ian O’Donnell, ‘Criminals, data protection, and the right to a second chance’ (2017) 58 The Irish Jurist 27.

43 David Feldman, ‘Secrecy, dignity or autonomy? Views of privacy as a civil liberty’ (1994) 47(4) Public Law 54–58; Aileen McHarg, ‘Reconciling human rights and the public interest’ (1999) 62 Modern Law Review 671; Lee Bygrave, Data Privacy Law: An International Perspective (Oxford University Press, 2014), p. 86; see generally Nora Ni Loideain, EU Data Privacy Law and Serious Crime (Oxford University Press, 2024).

44 Klass v. Germany [1978] ECHR 4 [56]; Zakharov v. Russia [233].

45 P.G. and J.H. v. United Kingdom [2001] ECHR 550 [46].

46 Janneke Gerards, General Principles of the European Convention on Human Rights (Cambridge University Press, 2019), p. 222.

47 See, for instance, Breyer v. Germany [2020] ECHR 95. The CJEU has also followed the Art. 8 ECHR jurisprudence of the ECtHR and applies what this author describes as the ‘hierarchy of intrusiveness’ in its landmark data retention judgments: Ni Loideain, EU Data Privacy Law.

48 Maria Helen Murphy, ‘A shift in the approach of the European Court of Human Rights in surveillance cases’ (2014) European Human Rights Law Review 507; Kirsty Hughes, ‘Mass surveillance and the European Court of Human Rights’ (2018) European Human Rights Law Review 589; Nora Ni Loideain, ‘Not so grand: The Big Brother Watch ECtHR Grand Chamber Judgment’ (28 May 2021), Information Law & Policy Centre.

49 Murphy, ‘A shift in the approach’, p. 513. See further Ni Loideain, EU Data Privacy Law.

50 Catt v. United Kingdom [8]. The applicant was twice arrested for being part of demonstrations that blocked a public highway.

51 Ibid., [34]. No specific case law is provided in Catt v. United Kingdom regarding the basis for these police powers.

52 Ibid., [97] (emphasis added).

53 Ibid., [106].

54 Ibid., [33], [124]–[128].

55 Ibid., [106], [124]–[128].

56 Ibid., [119]. Since 2013, this guidance has been issued by the College of Policing.

57 Woods, ‘Automated facial recognition’, p. 460.

58 Catt v. United Kingdom, [123].

59 As held in the leading case law on Art. 8 ECHR, lawfulness, and police powers: Malone v. United Kingdom (1985) 7 EHRR 14; Huvig v. France (1990) 12 EHRR 528; Valenzuela v. Spain (1999) 28 EHRR 483.

60 Catt v. United Kingdom. See Separate Opinion of Judge Koskelo joined by Judge Felici [12]–[15].

62 [2019] EWHC 2341; [2020] EWCA Civ 1058.

63 [2020] EWCA Civ 1058, paras 10–26. The Metropolitan Police also conducted ten trials of LFR technology between 2016 and 2019: Fussey and Murray, ‘Independent report’.

64 [2020] EWCA Civ 1058, para. 10. NEC has been awarded contracts for providing facial recognition systems to other police services since 2014, including the Metropolitan Police and Leicestershire Police.

65 Davies, Innes, and Dawson, ‘An evaluation of South Wales Police’s use’, p. 13.

66 [2020] EWCA Civ 1058, paras 27–30.

67 [2019] EWHC 2341 [63].

68 Ibid., [64].

71 Ibid., [62].

72 Ibid., [57].

73 Ibid., [68].

74 This judgment was subsequently reviewed by the ECtHR (see Section 11.3.3.3).

75 [2019] EWHC 2341 [71].

76 Ibid., [75].

77 Ibid., [76].

78 Ibid., [77].

79 Ibid., [82]–[83].

80 Ibid., [84]. The primary legislation is the Data Protection Act 2018, which does not specifically refer to facial recognition technology.

81 [2020] EWCA Civ 1058 [61].

82 Ibid., [120].

84 Ibid., [129]–[130].

85 Ibid., [91].

86 See, for instance, Fussey and Murray, ‘Independent report’, pp. 8–9. The former Biometrics Commissioner also raised issues regarding the lack of any specific legal basis for the use of LFR systems: Office of the Biometrics Commissioner, ‘Annual Report 2017’ (2018), UK Government, p. 86.

88 [2019] EWHC 2341, para. 159.

89 Liberty, ‘Resist facial recognition’ (n.d.), Online petition, https://action.libertyhumanrights.org.uk/page/50456/petition/1.

90 House of Lords, ‘Technology rules?’, p. 31; Ryder, Independent Legal Review, pp. 62–67.

91 College of Policing, ‘Authorised Professional Practice (APP) on live facial recognition’ (last updated 21 March 2022), www.college.police.uk/app/live-facial-recognition/live-facial-recognition.

92 Alexander Martin, ‘Police warned against “sinister” use of facial recognition to find potential witnesses and not just suspects’ (4 April 2022), Sky News.

93 SWP DPIA, 4–5 (emphasis added). The DPIA further states that: ‘It is possible that the personal data of individuals aged under 18 years, those under 13 years, a person with a disability or vulnerable adults will be processed where there is a policing need and it is deemed to be necessary and proportionate to locate and/or safeguard these individuals.’ See further www.south-wales.police.uk/police-forces/south-wales-police/areas/about-us/about-us/facial-recognition-technology/live-facial-recognition-documents/.

94 College of Policing, ‘Authorised professional practice’, p. 5.

96 The largest police force in England and Wales.

97 National Physical Laboratory, ‘Facial recognition technology in law enforcement equitability study’ (March 2023), NPL Report MS 43, para. 1.4.5. https://science.police.uk/site/assets/files/3396/frt-equitability-study_mar2023.pdf. See further Davies, Innes, and Dawson, ‘An evaluation of South Wales Police’s use’ and their findings and recommendations regarding false positives and threshold value settings by SWP in their trials of FRT in 2019.

98 As held by the ECtHR in its leading case law dealing with Art. 8 ECHR, lawfulness, and police powers: Malone v. United Kingdom (1985) 7 EHRR 14; Huvig v. France (1990) 12 EHRR 528; Valenzuela v. Spain (1999) 28 EHRR 483.

99 Biometrics and Surveillance Camera Commissioner, ‘Annual report 2022’.

12 Does Big Brother Exist? Facial Recognition Technology in the United Kingdom

1 See comments throughout this chapter.

2 Silkie Carlo, ‘Britain has more surveillance cameras per person than any country except China. That’s a massive risk to our free society’ (17 May 2019), Time, https://time.com/5590343/uk-facial-recognition-cameras-china/.

4 European Parliament, ‘The US surveillance programmes and their impact on EU citizens’ fundamental rights’ (2013), www.europarl.europa.eu/RegData/etudes/note/join/2013/474405/IPOL-LIBE_NT(2013)474405_EN.pdf; José R. Agustina and Gemma Galdon Clavell, ‘The impact of CCTV on fundamental rights and crime prevention strategies: The case of the Catalan Control Commission of Video Surveillance Devices’ (2011) 27(2) Computer Law and Security Review 168–174.

5 See further references in this chapter.

6 [2020] EWCA Civ 1058.

7 Kay L. Ritchie, Charlotte Cartledge, Bethany Growns, An Yan, Yuqing Wang, Kun Guo, Robin S. S. Kramer, Gary Edmond, Kristy A. Martire, Mehera San Roque, and David White, ‘Public attitudes towards the use of automatic facial recognition technology in criminal justice systems around the world’ (2021) 16(10) PLoS ONE.

8 UK Government, ‘Data: A new direction’ (23 June 2022), Department for Digital, Culture, Media & Sport, www.gov.uk/government/consultations/data-a-new-direction/outcome/data-a-new-direction-government-response-to-consultation.

9 Philipp Chertoff, ‘Facial recognition has its eye on the U.K.’ (7 February 2020), Lawfare, www.lawfareblog.com/facial-recognition-has-its-eye-uk.

11 Metropolitan Police Service, ‘Facial recognition’ (2022), www.met.police.uk/advice/advice-and-information/fr/facial-recognition. College of Policing, ‘Live facial recognition technology guidance published’ (22 March 2022), www.college.police.uk/article/live-facial-recognition-technology-guidance-published.

12 Parliamentary Office of Science and Technology, ‘Postnote: CCTV’, Number 175 (April 2002), www.parliament.uk/globalassets/documents/post/pn175.pdf.

13 Christopher Hope, ‘1,000 CCTV cameras to solve just one crime, Met Police admits’ (25 August 2009), The Telegraph, www.telegraph.co.uk/news/uknews/crime/6082530/1000-CCTV-cameras-to-solve-just-one-crime-Met-Police-admits.html.

14 National Physical Laboratory and Metropolitan Police Service, ‘Metropolitan Police Service live facial recognition trials’ (February 2020), www.met.police.uk/SysSiteAssets/media/downloads/central/services/accessing-information/facial-recognition/met-evaluation-report.pdf.

17 ICO, ‘ICO fines facial recognition database company Clearview AI Inc more than £7.5 m and orders UK data to be deleted’ (23 May 2022), https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2022/05/ico-fines-facial-recognition-database-company-clearview-ai-inc/.

18 Sofi Summers, ‘UK Cinema Association partners with digital identity provider Yoti to ease “proof of age” challenges at cinemas’ (27 May 2022), www.yoti.com/blog/uk-cinema-association-partners-yoti-proof-of-age/.

19 Chris Vallance, ‘Facial recognition “watch-list” trial in UK stores’ (16 December 2015), BBC, www.bbc.co.uk/programmes/p03c7srr.

20 Shoshana Zuboff, ‘Big other: Surveillance capitalism and the prospects of an information civilization’ (2015) 30 Journal of Information Technology 75–89.

21 NEC, ‘South Wales Police – Smarter recognition, safer community’ (n.d.), www.necsws.com/case-studies/public-safety/facial-recognition/facial-recognition-south-wales-police.

24 Léa Steinacker, Miriam Meckel, Genia Kostka, and Damian Borth, ‘Facial recognition: A cross-national survey on public acceptance, privacy and discrimination’ (2020), https://arxiv.org/pdf/2008.07275.pdf.

26 Ada Lovelace Institute, ‘Beyond face value: Public attitudes to facial recognition technology’ (September 2019), www.adalovelaceinstitute.org/wp-content/uploads/2019/09/Public-attitudes-to-facial-recognition-technology_v.FINAL.pdf.

27 Steinacker et al., ‘Facial recognition’.

28 UK Government, ‘Law enforcement facial images and new biometrics oversight and advisory board’ (n.d.), www.gov.uk/government/groups/law-enforcement-facial-images-and-new-biometrics-oversight-and-advisory-board.

30 See section 42.

31 See part 3 of the DPA 2018.

32 Marco Galimberti, ‘Farewell to the EU Charter: Brexit and fundamental rights protection’ (2021) (1) Nordic Journal of European Law 36–52.

33 UK Government, ‘Data: A new direction’, Department for Digital, Culture, Media & Sport (10 September 2021), www.gov.uk/government/consultations/data-a-new-direction.

35 Metropolitan Police, ‘Data protection impact assessment’ (n.d.), www.met.police.uk/SysSiteAssets/media/downloads/force-content/met/advice/lfr/impact-assessments/lfr-dpia.pdf; Metropolitan Police, ‘Standard Operating Procedure (SOP) for the overt deployment of Live Facial Recognition (LFR) technology’ (29 November 2022), www.met.police.uk/SysSiteAssets/media/downloads/force-content/met/advice/lfr/policy-documents/lfr-sop.pdf.

37 See BSIA, ‘Automated facial recognition: A guide to ethical use’ (1 January 2021), BSIA Artificial Intelligence Series, www.bsia.co.uk/zappfiles/bsia-front/public-guides/form_347_automated_facial%20recognition_a_guide_to_ethical_and_legal_use-compressed.pdf.

38 UK Government, ‘Data ethics framework’ (16 September 2020), Central Digital and Data Office, www.gov.uk/government/publications/data-ethics-framework/data-ethics-framework-2020.

39 Giulia Gentile, ‘“Verba volant, quoque (soft law) scripta?” An analysis of the legal effects of national soft law implementing EU soft law in France and the UK’ in M. Eliantonio, E. Korkea-Aho, and O. Stefan (eds.), EU Soft Law in the Member States: Theoretical Findings and Empirical Evidence (Hart Publishing, 2021), pp. 79–98.

40 Ritchie et al., ‘Public attitudes’.

41 ICO, ‘The use of live facial recognition technology by law enforcement in public places’ (31 October 2019), https://ico.org.uk/media/about-the-ico/documents/2616184/live-frt-law-enforcement-opinion-20191031.pdf.

42 Ritchie et al., ‘Public attitudes’.

43 See Evani Radiya-Dixit, ‘A sociotechnical audit: Assessing police use of facial recognition’ (October 2022), Minderoo Centre for Technology and Democracy, www.mctd.ac.uk/wp-content/uploads/2022/10/MCTD-FacialRecognition-Report-WEB-1.pdf; Vikram Dodd, ‘UK police use of live facial recognition unlawful and unethical, report finds’ (27 October 2022), The Guardian, www.theguardian.com/technology/2022/oct/27/live-facial-recognition-police-study-uk?CMP=share_btn_tw.

44 See Ada Lovelace Institute, ‘The Citizens’ Biometrics Council’ (March 2021), www.adalovelaceinstitute.org/wp-content/uploads/2021/03/Citizens_Biometrics_Council_final_report.pdf.

45 [2020] EWCA Civ 1058 (Bridges) para. 91.

46 Ibid., para. 38.

47 Ibid., para. 82, citing R (Wood) v. Metropolitan Police Commissioner [2009] EWCA Civ 414.

48 S v. UK Apps nos. 30562/04 and 30566/04 (ECHR, 4 December 2008) and R (Catt) v. Association of Chief Police Officers [2015] UKSC 9.

49 [2020] EWCA Civ 1058 (Bridges) para. 90.

50 Ibid., para. 120.

51 Ibid., para. 139.

52 Ibid., para. 199.

55 See Julia Black and Andrew D. Murray, ‘Regulating AI and machine learning: Setting the regulatory agenda’ (2019) 10(3) European Journal of Law and Technology, https://eprints.lse.ac.uk/102953/4/722_3282_1_PB.pdf.

56 See Radiya-Dixit, ‘A sociotechnical audit’.

57 See EDPB, ‘Joint opinion 5/2021 on the proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence’ (18 June 2021), EDPB and EDPS, https://edpb.europa.eu/system/files/2021-06/edpb-edps_joint_opinion_ai_regulation_en.pdf.

58 See Clothilde Goujard, ‘Europe edges closer to a ban on facial recognition’ (20 September 2022), Politico, www.politico.eu/article/europe-edges-closer-to-a-ban-on-facial-recognition/.

13 Facial Recognition Technologies in the Public Sector
Observations from Germany

1 See, e.g., Der Bundesbeauftragte für den Datenschutz und die Informationsfreiheit, ‘Bundesdatenschutzbeauftragter mahnt Zurückhaltung bei Gesichtserkennung an’ (2019), www.bfdi.bund.de/SharedDocs/Pressemitteilungen/DE/2019/02_Zur%C3%BCckhaltungbeiGesichtserkennung.html; Dirk Heckmann, ‘Gesichtserkennung muss streng reguliert werden’ (2020), jurisPR-ITR 16/2020 Anm. 1; see also Marie-Theres Tinnefeld, ‘… fertig ist das Gesicht – eine Betrachtung im Spiegel digitaler Gesichtserkennungssysteme’ (2018) MMR 777; Amélie P. Heldt, ‘Gesichtserkennung: Schlüssel oder Spitzel? Einsatz intelligenter Gesichtserfassungssysteme im öffentlichen Raum’ (2019) MMR 285. Note that the manuscript was, by and large, finalized in December 2022, with only minor edits being made in the subsequent publishing process.

2 Koalitionsvertrag 2021–2025 zwischen SPD, Bündnis 90/Die Grünen und FDP, pp. 15, 86, www.bundesregierung.de/breg-de/service/gesetzesvorhaben/koalitionsvertrag-2021-1990800.

3 The search via www.juris.de was conducted on 15 September 2022. For the relevance and limitations of such searches, see, e.g., Andreas Engel, ‘The ECHR in the German Legal System – A qualitative and quantitative introduction’ in Matteo Fornasier and Marella Stanzione (eds.), The European Convention on Human Rights and Its Impact on National Private Law: Italo-German Perspectives (Intersentia, 2023), parts 3.1 and 3.2.

4 The search via www.juris.de was conducted on 15 September 2022.

5 BVerfGE 65, 1; for a recent discussion see, e.g., Philipp Lassahn, ‘Datenschutz und Personenschutz’ (2022) 61 Der Staat 407.

6 See in particular Heldt, ‘Gesichtserkennung’, 285, 288. On issues of discrimination, see Stephan Schindler, Biometrische Videoüberwachung (Nomos, 2021), pp. 641–666.

7 See also Timo Rademacher, ‘Predictive Policing im deutschen Polizeirecht’ (2017) 142 AöR 366, 399 et seq. for an extensive discussion of the reasons for such a ban in the context of predictive policing.

8 BVerfGE 125, 260, 324, para. 218, translation provided by the BVerfG, www.bverfg.de/e/rs20100302_1bvr025608en.html; see Mario Martini, ‘Gesichtserkennung im Spannungsfeld zwischen Sicherheit und Freiheit’ (2022) 41(1–2) NVwZ-Extra 7 fn. 97 with further references to the jurisprudence of the BVerfG.

9 BVerfGE 150, 244; cf. Martini, ‘Gesichtserkennung im Spannungsfeld zwischen Freiheit und Sicherheit’, 7; Stephan Schindler, ‘Noch einmal: Pilotprojekt zur intelligenten Videoüberwachung am Bahnhof Berlin Südkreuz’ (2017) ZD-Aktuell 5799; for an in-depth-comparison, see Schindler, Biometrische Videoüberwachung, pp. 199–201.

10 Martini, ‘Gesichtserkennung im Spannungsfeld zwischen Freiheit und Sicherheit’, p. 7.

12 But see VG Hamburg, BeckRS 2019, 40195, paras 104–105, hinting at a possible distinction for cases where FRT is applied in the context of crime.

13 BVerfGE 150, 244, para. 37.

14 Ibid., para. 39.

15 Ibid., para. 42.

16 Regarding the basic principles of proportionality, see, e.g., Bernd Grzeszick, ‘Art. 20 GG’ in Rupert Scholz, Matthias Herdegen, and Hans H. Klein (eds.), Dürig/Herzog/Scholz, Grundgesetzkommentar (98th ed., CH Beck, 2022), paras 109 et seq. with further references. Regarding clarity of legal rules, see, e.g., Udo di Fabio, ‘Art. 2 GG’ in Dürig/Herzog/Scholz, Grundgesetzkommentar, para. 184, 186. Regarding certainty, see, e.g., Bernd Grzeszick, ‘Art. 20 GG’ in Dürig/Herzog/Scholz, Grundgesetzkommentar, paras 58 et seq. with further references, also on the relation between legal clarity and certainty.

17 BVerfGE 150, 244, para. 82.

18 Ibid., para. 90.

19 Ibid., paras 91, 112.

20 Ibid., para. 95.

21 Ibid., para. 100.

22 Ibid., para. 101.

23 See, generally, Bundespolizeipräsidium Potsdam, ‘Biometrische Gesichtserkennung’ des Bundespolizeipräsidiums im Rahmen der Erprobung von Systemen zur intelligenten Videoanalyse durch das Bundesministerium des Innern, für Bau und Heimat, das Bundespolizeipräsidium, das Bundeskriminalamt und die Deutsche Bahn AG am Bahnhof Berlin Südkreuz – Abschlussbericht – pp. 36 et seq., www.bundespolizei.de/Web/DE/04Aktuelles/01Meldungen/2018/10/181011_abschlussbericht_gesichtserkennung_down.pdf;jsessionid=37519C29A2E21493673F09F9BD416715.1_cid289?__blob=publicationFile&v=1; Schindler, ‘Pilotprojekt zur intelligenten Videoüberwachung’; Kai Wendt, ‘Einsatz von intelligenter Videoüberwachung: BMI plant Testlauf an Bahnhöfen’ (2017) ZD-Aktuell 5724; Schindler, Biometrische Videoüberwachung, pp. 195 et seq. (and pp. 190 et seq. for further examples).

24 Bundespolizeipräsidium Potsdam, ‘Biometrische Gesichtserkennung’, p. 7.

25 Ibid., pp. 7–8, 23 et seq.; for a more critical view, see the assessment by the Chaos Computer Club, Germany’s largest association of hackers, www.ccc.de/en/updates/2018/debakel-am-suedkreuz.

26 Bundespolizeipräsidium Potsdam, ‘Biometrische Gesichtserkennung’, p. 38.

27 Cf. BVerfGE 125, 260 (324, para. 218).

28 Cf. Martini, ‘Gesichtserkennung im Spannungsfeld zwischen Freiheit und Sicherheit’, pp. 7–8.

29 On this provision and FRT, see ibid., pp. 8–9 (with a discussion of further provisions at pp. 9–11).

30 Ibid., p. 8.

31 Ibid., p. 6; Schindler, Biometrische Videoüberwachung, pp. 608–613.

32 See the list of provisions at Michael W. Müller and Thomas Schwabenbauer, ‘G. Informationsverarbeitung im Polizei- und Strafverfahrensrecht’ in Matthias Bäcker, Erhard Denninger, and Kurt Graulich (eds.), Handbuch des Polizeirechts (7th ed., CH Beck, 2021), paras 662 (video surveillance in general) and 672 (video surveillance of objects in danger). See, in more detail, Martini, ‘Gesichtserkennung im Spannungsfeld zwischen Freiheit und Sicherheit’, p. 9.

34 VG Hamburg, BeckRS 2019, 40195, para. 3; see also Schindler, Biometrische Videoüberwachung, pp. 214–216.

35 VG Hamburg, BeckRS 2019, 40195, para. 4.

36 Jan Mysegades, ‘Keine staatliche Gesichtserkennung ohne Spezial-Rechtsgrundlage’ (2020) NVwZ 852.

37 VG Hamburg, BeckRS 2019, 40195, para. 5.

38 Ibid., para. 75.

39 Ibid., para. 75. Holger Greve, ‘§ 48 BDSG’ in Martin Eßer, Philipp Kramer and Kai von Lewinski (eds.), Auernhammer, DSGVO/BDSG (8th ed., Carl Heymanns, 2023), para. 5 explains in more detail that the use of general clauses is acceptable even in the law of data protection. Greve argues that in cases where fundamental rights are gravely affected by new technologies, Sec. 48 BDSG can apply only for an interim period until the legislator has had the time to draft a more specific provision: ibid., para. 8. For a broader and deeper analysis of the role of general clauses in data protection law, see Nikolaus Marsch and Timo Rademacher, ‘Generalklauseln im Datenschutzrecht’ (2021) 54 Die Verwaltung 1, with similar conclusions.

40 VG Hamburg, BeckRS 2019, 40195, para. 81.

41 Here, the specific procedural facts of the case were also discussed. As the Hamburg Commissioner for Data Protection and Freedom of Information had issued an administrative act, the principle of the primacy of law came into play. The VG – obiter – expressed grave doubts whether the administrative act – as a decision by the executive branch for an individual case – could apply at all, as it might override statutory law: VG Hamburg, BeckRS 2019, 40195, paras 93 et seq.

42 In this context, see Greve, ‘§ 48 BDSG’, para. 21 on how Sec. 48 BDSG conforms with the constitution.

43 VG Hamburg, BeckRS 2019, 40195, para. 101.

44 Ibid., paras 102–103.

45 Mysegades, ‘Keine staatliche Gesichtserkennung’, p. 852. On Sec. 48 (1) BDSG and FRT, see also Martini, ‘Gesichtserkennung im Spannungsfeld zwischen Freiheit und Sicherheit’, pp. 9–10; Marion Albers and Anna Schimke, ‘§ 48 BDSG’ in Heinrich Amadeus Wolff and Stefan Brink (eds.), BeckOK Datenschutzrecht (42nd ed., CH Beck, 2022), paras 11–12; Frank Braun, ‘§ 48 BDSG’ in Peter Gola and Dirk Heckmann (eds.), DS-GVO/BDSG (3rd ed., CH Beck, 2022), para. 10; Florian Albrecht, ‘§ 32 NPOG’ in Markus Möstl and Bernhard Weiner (eds.), BeckOK Polizei- und Ordnungsrecht Niedersachsen (25th ed., CH Beck, 2022), para. 14b; Moritz Votteler, ‘§ 48 BDSG’ in Andreas Decker, Johann Bader and Peter Kothe (eds.), BeckOK Migrations- und Integrationsrecht (13th ed., CH Beck, 2022), para. 5.

46 Cf. Greve, ‘§ 48 BDSG’, para. 21.

47 Mysegades, ‘Keine staatliche Gesichtserkennung’, pp. 852–853.

48 Ibid., p. 854.

51 Ibid., p. 855.

52 See also Braun, ‘§ 48 BDSG’, para. 10: the decision ‘shows how not to’ assess whether a measure was ‘absolutely necessary’.

53 On the background of the provision and for more details on the test whether a measure is absolutely necessary, see Greve, ‘§ 48 BDSG’, para. 15.

54 Mysegades, ‘Keine staatliche Gesichtserkennung’, p. 855.

55 Martini, ‘Gesichtserkennung im Spannungsfeld zwischen Freiheit und Sicherheit’, p. 11.

56 For a discussion of this provision, see in particular ibid., pp. 11–13, which also addresses concerns with regard to Art. 10 JHA Directive.

57 Translation by the author.

58 Sächsischer Landtag, Drucksache (LT-Drs) 6/14791, p. 186; Martini, ‘Gesichtserkennung im Spannungsfeld zwischen Freiheit und Sicherheit’, p. 11.

59 See BVerfGE 125, 260, 324, para. 218.

60 Cf. BVerfGE 150, 244, para. 90.

61 Cf. Martini, ‘Gesichtserkennung im Spannungsfeld zwischen Freiheit und Sicherheit’, p. 12.

63 Ibid., p. 12.

14 A Central-Eastern Europe Perspective on FRT Regulation
A Case Study of Lithuania

1 Civil Code of the Republic of Lithuania (Identification code 1001010ISTAIII-1864).

2 Art. 2.22 of the Civil Code of the Republic of Lithuania.

3 Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA. [2016] OJ L 119, pp. 89–131.

4 Alvidas Lukošaitis, ‘Lobizmas užsienio šalyse ir Lietuvoje: teisinio reguliavimo ir institucionalizacijos problemos’ 2(62) Politologija 3–42, at 34.

5 Directive (EU) 2016/680, pp. 89–131.

6 Law of the Republic of Lithuania on the Legal Protection of Personal Data Processed for the Prevention, Investigation, Disclosure or Prosecution of Criminal Offenses, Execution of Sanctions or National Security or Defence (Identification code 1111010ISTA0XI-1336).

7 Law of the Republic of Lithuania on the Legal Protection of Personal Data, Art. 8.

8 Law on Police of the Republic of Lithuania (Identification code 1001010ISTAIII-2048), Art. 22(1).

9 Law on Police of the Republic of Lithuania, Art. 9(1) and (2).

10 Penal Code of the Republic of Lithuania (Identification code 1021010ISTA00IX-994), Art. 43(1).

11 Code of Criminal Procedure of the Republic of Lithuania (Identification code 1021010ISTA00IX-785), Art. 260(4).

12 Law on the Prosecution of the Republic of Lithuania (Identification code 0941010ISTA000I-599).

13 Law on Financial Crime Investigation Service of the Republic of Lithuania (Identification code 1021010ISTA00IX-816).

14 Law on Intelligence of the Republic of Lithuania (Identification code 1001010ISTAIII-1861), Art. 9(2) and Art. 13(1).

15 Law on Criminal Intelligence of the Republic of Lithuania (Identification code 1121010ISTA0XI-2234), Art. 2(7) and (8) and Art. 5(1).

16 Law on Special Investigation Service of the Republic of Lithuania (Identification code 1001010ISTAIII-1649), Art. 8(1) and (9).

17 Law on Identity Card and Passport of the Republic of Lithuania (Identification code 2014-21281); Law on the Legal Status of Foreigners of the Republic of Lithuania (Identification code 1041010ISTA0IX-2206), Law on Service Passport of the Republic of Lithuania (Identification code 1001010ISTAIII-1527), Order of the Minister of the Interior of the Republic of Lithuania on the Approval of Rules on Issuing Driving Licences for Motor Vehicles (Identification code 1082310ISAK001V-328), Order of the Minister of the Interior of the Republic of Lithuania on the Approval of Requirements for Personal Document Photos (Identification code 1022310ISAK00000569), etc.

18 For example, see Order of Biržai District Municipal Council. On the approval of the description of the procedure for handling video surveillance cameras installed in the territory of the municipality of Biržai district and their fixed video data (Identification code 2022-05136); Order of Tauragė District Municipal Council. On the approval of the description of the procedure for the use of video surveillance cameras installed in public spaces of the Tauragė district municipality and their fixed data (Identification code 2022-07557), Order of Kaišiadorys District Municipal Council. Regarding the approval of the description of the procedure for the use of video surveillance cameras and their fixed data installed in the territory of the municipality of Kaišiadorys district (Identification code 2022-13200).

19 For more information, see TELEFI Project, ‘Towards the European level exchange of facial images’ (7 February 2020), Legal analysis for TELEFI project, www.telefi-project.eu/sites/default/files/TELEFI_LegalAnalysis.pdf.

20 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (2016) OJ L 119.

21 Amba Kak, ‘Regulating biometrics: Global approaches and urgent questions’ (1 September 2020), AI Now Institute, https://ainowinstitute.org/publication/regulating-biometrics-global-approaches-and-open-questions, p. 20.

22 Government use of facial recognition technologies: Legal challenges and solutions (Face-AI). Project funded by the Research Council of Lithuania, contract No. S-MIP-21–38.

23 Order of Minister of the Interior of the Republic of Lithuania: On the reorganisation of the departmental register of identification marks of persons who have served a sentence of arrest or fixed-term imprisonment into a Register of Habitoscopic data (Identification code 1132310ISAK001V-440), para. 4.

24 ‘As part of the project funded by the Internal Security Fund, the Habitoscopic Data Register was modernised to introduce advanced biometric recognition technologies for a person’s face.’ IRD, ‘Įgyvendinant Vidaus saugumo fondo lėšomis finansuojamą projektą modernizuotas Habitoskopinių duomenų registras – įdiegtos pažangios asmens veido biometrinio atpažinimo technologijos’ (5 April 2020), https://ird.lt/lt/naujienos/igyvendinant-vidaus-saugumo-fondo-lesomis-finansuojama-projekta-modernizuotas-habitoskopiniu-duomenu-registras-idiegtos-pazangios-asmens-veido-biometrinio-atpazinimo-technologijos.

25 IRD, ‘Įgyvendinant Vidaus saugumo fondo lėšomis finansuojamą projektą modernizuotas Habitoskopinių duomenų registras – įdiegtos pažangios asmens veido biometrinio atpažinimo technologijos’ (4 May 2020), https://ird.lt/lt/naujienos/igyvendinant-vidaus-saugumo-fondo-lesomis-finansuojama-projekta-modernizuotas-habitoskopiniu-duomenu-registras-idiegtos-pazangios-asmens-veido-biometrinio-atpazinimo-technologijos.

26 Lietuvos policija, ‘Sukurta vienoda asmens atpažinimo žymių ir biometrinių duomenų rinkimo Sistema’ (13 August 2020), Lietuvos policija, https://policija.lrv.lt/lt/naujienos/sukurta-vienoda-asmens-atpazinimo-zymiu-ir-biometriniu-duomenu-rinkimo-sistema.

27 On 14 June 2022, the Law Institute of the Lithuanian Centre for Social Sciences sent an official letter to the Ministry of the Interior kindly requesting that it: indicate the legal basis on which biometric personal data are processed in the Register of Habitoscopic Data, and what security measures are applied in order to ensure the protection of biometric personal data based on the criteria established in the law; specify the storage terms of biometric data in the Register of Habitoscopic Data (including the database archive), that is, data processed by facial recognition technologies that allow identification of a specific person, and the legal basis regulating those terms; indicate whether the Register of Habitoscopic Data is integrated with other databases/registers (e.g., the register of events registered by the police or the traffic accident information system), and submit the legal act(s) regulating any linking of registers/databases not mentioned in Order No. 1V-440 itself; and specify which data not mentioned in the Order are transferred between the registers. Finally, the Ministry was asked to provide the legal regulation (references to specific legal acts and their structural parts or, if these legal acts are not published publicly, copies of them) establishing restrictions on the processing with FRTs of personal images obtained from other registers, to describe how this is implemented in practice (e.g., if a person is suspected of having committed an administrative offence, will the image of the suspect from the available video/photo material be processed by facial recognition technology in all cases, and in which cases is this not done?), and to indicate the specific legal regulation.

28 Lietuvos Policijos Generalinis Komisaras, ‘Order of the Commissioner General of the Police Department under the Ministry of the Interior of the Republic of Lithuania on the approval of rules for processing data captured by video surveillance in police institutions’ (19 February 2020), paras 3 and 4, https://policija.lrv.lt/uploads/policija/documents/files/Vaizdo%20stebejimo%20duomenu%20tvarkymo%20taisykles.pdf.

29 Ministry of Economics and Innovation of the Republic of Lithuania, ‘Strategy of artificial intelligence in Lithuania’ (n.d.), https://eimin.lrv.lt/uploads/eimin/documents/files/DI_strategija_LT(1).pdf.

30 Paulius Briedis, ‘Attendance marking powered by Face Recognition’ (2022) (KA2 Strategic partnerships project, Introducing artificial intelligence to vocational schools in Europe No. 2020-1-LT01-KA202-078015), https://docs.google.com/presentation/d/1L6Gj5yI8mgR-V3g83OicVEbyd0IYJ5_KmKsVsvX3wKM/edit?fbclid=IwAR0M7z5PuhnE2qNz1K42p61tPquS4O8dHK-ievqNY7FRHbjoFNleFW8b6p0#slide=id.g12cc187cc22_2_842.

31 Dovydas Vitkauskas, ‘Galimybių pasą turėtų keisti veido atpažinimo Sistema’ (7 October 2021), Delfi, www.delfi.lt/verslas/nuomones/dovydas-vitkauskas-galimybiu-pasa-turetu-keisti-veido-atpazinimo-sistema.d?id=88361491.

32 LRT.lt, ‘Teismas: Vilniaus universitetas galėjo naudoti veido atpažinimo funkciją per atsiskaitymus’ (13 May 2022), www.lrt.lt/naujienos/mokslas-ir-it/11/1693760/teismas-vilniaus-universitetas-galejo-naudoti-veido-atpazinimo-funkcija-per-atsiskaitymus.

33 Telia, ‘Marijampolėje gyventojų saugumą užtikrina stiklinės akys: tokio poveikio nesitikėjo’ (13 April 2022), Delfi.lt, www.delfi.lt/uzsakomasis-turinys/premium/marijampoleje-gyventoju-sauguma-uztikrina-stiklines-akys-tokio-poveikio-nesitikejo.d?id=89958475.

34 Mažeikių rajono savivaldybė, ‘Vaizdo stebėjimo kameros mieste – daugiau saugumo ir tvarkos’ (14 January 2021), Budas.lt, www.budas.lt/regionu-naujienos/naujienos-mazeikiuose/41960-vaizdo-stebejimo-kameros-mieste-daugiau-saugumo-ir-tvarkos.

35 Živilė Kairytė, ‘16-metį vilnietės sūnų užpuolė Katedros aikštėje: skubiai prašo pagalbos’ (30 August 2022), TV3.lt, www.tv3.lt/naujiena/gyvenimas/16-meti-vilnietes-sunu-uzpuole-katedros-aiksteje-skubiai-praso-pagalbos-n1185568.

36 Made in Vilnius, ‘Mokslininkai Vilniaus gatvėse matuoja praeivių emocijas, temperatūrą bei kvėpavimo dažnį’ (24 December 2019), Delfi.lt, www.delfi.lt/miestai/vilnius/mokslininkai-vilniaus-gatvese-matuoja-praeiviu-emocijas-temperatura-bei-kvepavimo-dazni.d?id=83040699.

37 Paulius Vaitekėnas, ‘Kaune gyventojus stebi žmonių sekimu pagarsėjusios kinų kameros: fiksuos žmonių veidus ir KET pažeidimus’ (29 January 2020), LRT.lt, www.lrt.lt/naujienos/eismas/7/1137677/kaune-gyventojus-stebi-zmoniu-sekimu-pagarsejusios-kinu-kameros-fiksuos-zmoniu-veidus-ir-ket-pazeidimus?fbclid=IwAR1VKjHQEWAWLVo3d5IJJpvYCv09ZLlgovZtkGpfAJPiaLvFIgMxA23HFM0; Ignas Jačauskas, ‘NKSC: kiniškos vaizdo stebėjimo kameros turi saugumo spragų’ (29 May 2020), Diena.lt, www.diena.lt/naujienos/lietuva/salies-pulsas/nksc-kiniskos-vaizdo-stebejimo-kameros-turi-saugumo-spragu-969413; LRT tyrimai, ‘Lietuvos vadovus saugo kameros, kurių bijo amerikiečiai’ (29 January 2020), LRT.lt, www.lrt.lt/naujienos/lrt-tyrimai/5/1137518/lrt-tyrimas-lietuvos-vadovus-saugokameros-kuriu-bijo-amerikieciai?fbclid=IwAR2Y9BLDthGBeGX4RrNa9v0zrDww6E3myMXU0iJFwJELIPTbe8znM-mVaKY; Valdemaras Šukšta, ‘“Kiniška akis” Kaune: nors palaiminimo miesto gatvėse naudoti kameras dar negauta, policija tyliai jas jau išmėgina’ (19 November 2021), LRT.lt, www.lrt.lt/naujienos/lietuvoje/2/1541495/kiniska-akis-kaune-nors-palaiminimo-miesto-gatvese-naudoti-kameras-dar-negauta-policija-tyliaijas-jau-ismegina.

38 Andrius Vaitkevičius, ‘Į viešumą pateko Vilniaus policininkų darytas vaizdo įrašas – skandalas neišvengiamas’ (29 January 2020), Lrytas.lt, www.lrytas.lt/lietuvosdiena/kriminalai/2020/01/20/news/i-viesuma-pateko-vilniaus-policininku-darytas-vaizdo-irasas-skandalas-neisvengiamas-13326794.

39 Mauritz Kop, ‘EU Artificial Intelligence Act: The European approach to AI’ (2021) (2) Transatlantic Antitrust and IPR Developments, Stanford Law School, https://law.stanford.edu/publications/eu-artificial-intelligence-act-the-european-approach-to-ai.

40 Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (COM(2021) 206 final), Recital 7.

41 Proposal for the Artificial Intelligence Act, Art. 5(1)(d).

42 Explanatory Memorandum to the Proposal for the Artificial Intelligence Act (COM(2021) 206 final), para. 2.4.

15 An Overview of Facial Recognition Technology Regulation in the United States

1 While not discussed at length in this chapter, there are other potentially problematic uses of FRT beyond identification. For instance, firms are developing technologies that purport to use characteristics of facial expressions during interviews to help assess the suitability of candidates for jobs.

2 Joel Schipper, ‘Jefferson Mall adds new security system with facial recognition’ (2 August 2022), WDRB, www.wdrb.com/news/jefferson-mall-adds-new-security-system-with-facial-recognition/article_66714a42-128b-11ed-b95c-7fc634889bf9.html.

3 James Whitman, ‘The two Western cultures of privacy: Dignity versus liberty’ (2003–2004) 113 Yale Law Journal 1151.

5 Such ‘indicia of harm’ include, for instance, the so-called privacy torts. These are state-level (not federal) causes of action that typically include intrusion upon seclusion, disclosure of private information, false light, and appropriation of likeness. To be actionable, however, these torts generally require demonstration of concrete harm, such as monetary loss or conduct amounting to trespass. At the federal level, statutes such as the Wiretap Act and Stored Communications Act create liability for specific conduct that is akin to a violation of privacy.

6 Sorrell v. IMS Health (2011) 564 U.S. 552; see also Zauderer v. Office of Disc. Counsel (1985) 471 U.S. 626 (providing other protections for commercial speech).

7 Patrick Grother, Mei Ngan, Kayee Hanaoka, Joyce Yang, and Austin Hom, ‘Ongoing face recognition vendor test (FRVT)’ (28 July 2022), National Institute of Standards and Technology, https://pages.nist.gov/frvt/reports/11/frvt_11_report.pdf.

8 117th Congress American Data Privacy and Protection Act 2022.

9 Federal Trade Commission, ‘Facing facts: Best practices for common uses of facial recognition technologies’ (22 October 2012), www.ftc.gov/sites/default/files/documents/reports/facing-facts-best-practices-common-uses-facial-recognition-technologies/121022facialtechrpt.pdf.

10 Federal Trade Commission, ‘FTC finalizes settlement with photo app developer related to misuse of facial recognition technology’ (27 May 2021), www.ftc.gov/news-events/news/press-releases/2021/05/ftc-finalizes-settlement-photo-app-developer-related-misuse-facial-recognition-technology.

11 117th Congress Facial Recognition and Biometric Technology Moratorium Act of 2021, S.2052. This bill was not enacted into law during the 117th Congress. On March 7, 2023, the bill was re-introduced for consideration in the 118th Congress. 118th Congress Facial Recognition and Biometric Technology Moratorium Act of 2023, S.681.

12 US Government Accountability Office, ‘Facial recognition technology: Current and planned uses by federal agencies’ (24 August 2021), www.gao.gov/products/gao-21-526.

13 ACLU, ‘Coalition letter on government use of facial recognition identify verification services’ (14 February 2022), www.aclu.org/letter/coalition-letter-government-use-facial-recognition-identify-verification-services.

14 Alessandro Mascellino, ‘USPTO to start verifying identities, including with biometrics, for trademark submission’ (1 July 2022), BiometricUpdate.com, www.biometricupdate.com/202207/uspto-to-start-verifying-identities-including-with-biometrics-for-trademark-submission.

15 117th Congress American Data Privacy and Protection Act 2022.

16 Federal Trade Commission, ‘FTC explores rules cracking down on commercial surveillance and lax data security practices’ (11 August 2022), www.ftc.gov/news-events/news/press-releases/2022/08/ftc-explores-rules-cracking-down-commercial-surveillance-lax-data-security-practices.

17 Washington, Oregon, California, Colorado, and Alabama all have limited government actor or police use. Up to date information about state regulation of facial recognition technology can be found at www.banfacialrecognition.com/map/.

18 Oregon’s regulation encompasses only the technology applied to driver’s licenses. California’s applies to police body cameras. See City of Portland, Oregon, ‘City Council approves ordinances banning use of face recognition technologies by City of Portland bureaus and by private entities in public spaces’ (9 September 2020), Portland.gov, www.portland.gov/smart-city-pdx/news/2020/9/9/city-council-approves-ordinances-banning-use-face-recognition#:~:text=The%20second%20ordinance%20will%20go,and%20visitors%2C%20first%20and%20foremost; Jeffrey Dastin, ‘California legislature bars facial recognition for police body cameras’ (12 September 2019), Reuters, www.reuters.com/article/us-california-facial-recognition/california-legislature-bars-facial-recognition-for-police-body-cameras-idUSKCN1VX2ZP.

19 New York’s regulation bans use of the technology in schools. Colorado’s regulation includes a moratorium on new facial recognition technologies in schools for a period of time. See Chris Burt, ‘New York school districts plan facial recognition security despite ban’ (29 June 2022), BiometricUpdate.com, www.biometricupdate.com/202206/new-york-school-districts-plan-facial-recognition-security-despite-ban; Rachel Sandler, ‘New York issues first-in-nation moratorium on facial recognition in schools’ (22 December 2020), Forbes, www.forbes.com/sites/rachelsandler/2020/12/22/new-york-issues-first-in-nation-moratorium-on-facial-recognition-in-schools/; Linn F. Freedman, ‘Colorado law restricts use of facial recognition technology by government agencies’ (2022) XII National Law Review 12.

20 Illinois and Texas both require informed consent before private actors can deploy facial recognition technology. See also 740 Illinois Compiled Statutes 14 and what follows (2008); Texas Business & Commerce Code Annotated s 503.001 (West 2017).

21 Jason Binimow, ‘State statutes regulating collection or disclosure of consumer biometric or genetic information’ (originally published 2019), Volume 41 of the 7th series of American Law Reports, Article 4 Section 2, Annotation *2.

22 Taylor Hatmaker, ‘Facebook will pay $650 million to settle class action suit centered on Illinois privacy law’ (1 March 2021), TechCrunch, https://techcrunch.com/2021/03/01/facebook-illinois-class-action-bipa/.

23 Jameel Jaffer and Ramya Krishnan, ‘Clearview AI’s first amendment theory threatens privacy – And free speech, too’ (17 November 2020), Slate, https://slate.com/technology/2020/11/clearview-ai-first-amendment-illinois-lawsuit.html.

24 Cyrus Farivar, ‘Clearview AI settles facial recognition suit with ACLU, will alter some practices’ (9 May 2022), Forbes, www.forbes.com/sites/cyrusfarivar/2022/05/09/clearview-ai-facial-recognition-suit-with-aclu/; American Civil Liberties Union, et al. v. Clearview AI, Inc. (case documents available at www.aclu.org/cases/aclu-v-clearview-ai) (citation pending).

25 California Civil Code s 1798.100 and what follows.

26 117th Congress American Data Privacy and Protection Act 2022.

27 Carmen Sobczak, ‘BIPA and Article III standing: Are notice and consent more than “bare procedural” rights?’ (2020) 35 Berkeley Technical Law Journal 1391.

28 2020 Vermont Acts and Resolves 799 s 14.

29 2019 Oregon Revised Statutes s 133.741.

30 ACLU, ‘ACLU of Vermont statement on the enactment of S.124, the nation’s strongest statewide ban on law enforcement use of facial recognition technology’ (8 October 2020), www.acluvt.org/en/news/aclu-vermont-statement-enactment-s124-nations-strongest-statewide-ban-law-enforcement-use.

31 2020 Vermont Acts and Resolves 799.

32 Denise Lavoie, ‘Virginia lawmakers ban police use of facial recognition’ (29 March 2021), APNews, https://apnews.com/article/technology-legislature-police-law-enforcement-agencies-legislation-033d77787d4e28559f08e5e31a5cb8f7.

33 2020 Vermont Acts and Resolves 799.

34 See, e.g., Mailyn Fidler, ‘Local police surveillance and the administrative Fourth Amendment’ (2020) 36 Santa Clara High Technology Law Journal 481; Barry Friedman and Maria Ponomarenko, ‘Democratic policing’ (2015) 90 NYU Law Review 103; Vincent Southerland, ‘The master’s tools and a mission: Using community control and oversight laws to resist and abolish police surveillance technologies’ (2023) 70(2) UCLA Law Review.

35 See www.banfacialrecognition.com/map/ for an updated list of facial recognition local regulations; see also Mailyn Fidler, ‘Fourteen places have passed local surveillance laws. See how they’re doing’ (3 September 2020), Lawfare, www.lawfareblog.com/fourteen-places-have-passed-local-surveillance-laws-heres-how-theyre-doing.

36 For examples of calls for such regulations, see Evan Selinger and Woodrow Hartzog, ‘The inconsentability of facial surveillance’ (2019) 66 Loyola Law Review 101, 102; Woodrow Hartzog and Evan Selinger, ‘Facial recognition is the perfect tool for oppression’ (2 August 2018), Medium, https://medium.com/s/story/facial-recognition-is-the-perfect-tool-for-oppression-bc2a08f0fe66 – this proposes an outright ban on the use of facial recognition technology; Lindsey Barrett, ‘Ban facial recognition technologies for children – and for everyone else’ (2020) 26 Boston University Journal of Science & Technology Law 223.

37 See West Virginia v. EPA, [2022] 597 U.S. (Law Reports citation pending).

38 Ogletree v. Cleveland State University, ___ F.Supp.3d ___, 2022 WL 17826730 (N.D. Ohio, December 20, 2022).

16 Regulating Facial Recognition in Brazil
Legal and Policy Perspectives

1 Instituto Igarapé, ‘Reconhecimento facial no Brasil’ (2021), https://igarape.org.br/infografico-reconhecimento-facial-no-brasil/; Jonas Valente, ‘Tecnologias de reconhecimento facial são usadas em 37 cidades no país’ (19 September 2019), Agência Brasil, https://agenciabrasil.ebc.com.br/geral/noticia/2019-09/tecnologias-de-reconhecimento-facial-sao-usadas-em-37-cidades-no-pais.

2 FRA, ‘Facial recognition technology: Fundamental rights considerations in the context of law enforcement’ (2019), European Union Agency for Fundamental Rights, https://fra.europa.eu/sites/default/files/fra_uploads/fra-2019-facial-recognition-technology-focus-paper.pdf; Lucas Introna and David Wood, ‘Picturing algorithmic surveillance: The politics of facial recognition systems’ (2004) 2 Surveillance & Society 177.

3 Jess Reia and Luca Belli, ‘Smart cities no Brasil: regulação, tecnologia e direitos’ (2021), http://bibliotecadigital.fgv.br:80/dspace/handle/10438/31403; Luca Belli, ‘BRICS countries to build digital sovereignty’ in Luca Belli (ed.), CyberBRICS: Cybersecurity Regulations in the BRICS Countries (Springer International Publishing, 2021), https://doi.org/10.1007/978-3-030-56405-6_7; Luca Belli, ‘Como implementar a LGPD por meio da Avaliação de Impacto Sobre Privacidade e Ética de Dados (AIPED)’ in Laura Schertel Mendes, Danilo Doneda, Ingo Wolfgang Sarlet, Otavio Luiz Rodrigues Jr. and Bruno Bioni (eds.), Tratado de Proteção de Dados Pessoais (Forense, 2021).

4 Jamila Venturini and Vladimir Garay, ‘Reconhecimento Facial Na América Latina: Tendências Na Implementação de Uma Tecnologia Perversa’ (2021), Fundación Karisma, https://estudio.reconocimientofacial.info/.

5 Via Quatro announced in a press release that the technology would be implemented, without giving further details. Later news revealed that this was done through a partnership with LG and the pharmaceutical company Hyperapharma, which consisted of projecting their ads on the subway’s digital screens, equipped with cameras that would read and register viewers’ emotions in response to the ads.

6 This was an ‘empresa de economia mista’ – a mixed controllership company in which the state is the controlling shareholder, but the company is legally structured as a private entity.

7 The plans of both Via Quatro and Cia. do Metropolitano de São Paulo were challenged through public civil actions brought by civil society organisations, the state prosecutor’s office, and the public defender’s office. Via Quatro’s implementation was brought to a halt by judicial decree, but the case against Metropolitano de São Paulo is still ongoing (although an interim decision suspended the use of FRT).

8 Governo da Paraíba, ‘João Azevêdo Inaugura Centro Integrado de Comando e Controle e Sertão Ganha Equipamento Referência Para a Segurança Pública Do Nordeste’ (2022), https://paraiba.pb.gov.br/noticias/joao-azevedo-inaugura-centro-integrado-de-comando-e-controle-e-sertao-ganha-equipamento-referencia-para-a-seguranca-publica-do-nordeste; Portal Correio, ‘Reconhecimento facial pemite [sic] a prisão de 25 procurados da Justiça no São João de Campina Grande’ (11 July 2022), https://portalcorreio.com.br/reconhecimento-facial-pemite-a-prisao-de-25-procurados-da-justica-no-sao-joao-de-campina-grande/. It is not clear whether the 1,600 cameras in use in 2022 are a continuation of the 2019 implementation of Facewatch, since the public announcements on the state government’s website merely mention the use of ‘facial recognition’, without specifying the contractors and technology used.

9 Portal de Amazônia, ‘Itacoatiara Terá Centro Integrado de Câmeras Com Reconhecimento Facial e de Placas de Veículos’ (6 April 2021), https://deamazonia.com.br/?q=278-conteudo-196736-itacoatiara-tera-centro-integrado-de-cameras-com-reconhecimento-facial-e-de-placas-de-veiculos.

10 Pablo Nunes, ‘Novas Ferramentas, Velhas Práticas: Reconhecimento Facial e Policiamento No Brasil’ in Rede de Observatórios da Segurança & CESeC (eds.), Retratos da Violência: Cinco meses de monitoramento, análises e descobertas (Rede de Observatórios da Segurança/CESeC, 2019), pp. 67–70.

11 Footnote Ibid., p. 69.

12 Footnote Ibid., p. 68.

13 Pablo Nunes, Mariah Rafaela Silva, and Samuel R. de Oliveira, ‘Um Rio de câmeras com olhos seletivos: Uso do reconhecimento facial pela polícia fluminense’ (2022), O Panoptico, https://opanoptico.com.br/Caso/um-rio-de-cameras-com-olhos-seletivos-uso-do-reconhecimento-facial-pela-policia-fluminense/.

14 Although Oi was the contracting party, the technology utilised was developed and provided by Huawei.

15 Nunes, Silva, and de Oliveira, ‘Um Rio de câmeras’, p. 11.

16 Instituto Igarapé, ‘Videomonitoramento Webreport’ (2020), https://igarape.org.br/videomonitoramento-webreport/; Nunes, Silva, and de Oliveira, ‘Um Rio de câmeras’.

17 Instituto Igarapé, ‘Reconhecimento facial no Brasil’.

20 Luca Belli and Danilo Doneda, ‘Municipal data governance: An analysis of Brazilian and European practices/Governança de Dados Municipal: Uma Análise Das Práticas Brasileiras e Européias’ (2020) 12 Revista de Direito da Cidade 1588. For a non-official translation of the LGPD, see Luca Belli, Laila Lorenzon, Luã Fergus and Walter B. Gaspar, ‘The Brazilian General Data Protection Law (LGPD) – Unofficial English version’ (22 January 2020), CyberBRICS, https://cyberbrics.info/brazilian-general-data-protection-law-lgpd-unofficial-english-version/.

21 Leonardo Zvarick, ‘Reconhecimento Facial Bloqueia 331 Mil Bilhetes Únicos Em SP – 12/06/2019’ (12 June 2019), São Paulo Agora, https://agora.folha.uol.com.br/sao-paulo/2019/06/reconhecimento-facial-bloqueia-331-mil-bilhetes-unicos-em-sp.shtml.

22 Bárbara Simão, Blenda Santos, Carolina Reis, Eduarda Costa, Elora Fernandes, Enrico Roberto, Felipe Rocha and Rafaela de Alcântara, ‘Cidades Inteligentes e Dados Pessoais: Recomendações e boas práticas’ (2022), Internet Lab, ARTICLE 19, LAPIN, p. 47.

23 Cíntia Falcão, ‘A Bahia está virando um laboratório de reconhecimento facial’ (2021), The Intercept Brasil, https://theintercept.com/2021/09/20/rui-costa-esta-transformando-a-bahia-em-um-laboratorio-de-vigilancia-com-reconhecimento-facial/.

24 Good faith (boa fé) is divided in Brazilian legal doctrine into subjective and objective manifestations. In its use in Art. 6 of the LGPD, as in Art. 422 of the Brazilian Civil Code, it is meant in its objective form, that is, as a duty to behave according to the legitimate expectations of one another in a legal relationship. Bioni (2019) comments on this point, connecting the objective good faith contained in the LGPD to the concept of contextual privacy, based on the trust between the parties to a data-processing relationship that the information shared will not be used in ways that contradict the original context of its sharing. See B. R. Bioni, ‘Proteção de dados pessoais: a função e os limites do consentimento’ (Forense, 2019), http://bibliotecadigital.tse.jus.br/xmlui/handle/bdtse/5973.

25 Not necessarily human review, although one could argue that an automated review of automated decisions constitutes another instance of possible ‘review’ under the law.

26 This is an accessory obligation, since one cannot assert a right while unaware of the situation that gives rise to it. It can also be derived from the general transparency principle.

27 Estela Aranha, ‘Elaboração de parecer sobre a legalidade dos Decretos no 10.046/2019 e 10.047/2019 em face das normas que disciplinam os direitos fundamentais à proteção de dados e à privacidade no ordenamento jurídico brasileiro’ (12 February 2020), OABRJ, www.oabrj.org.br/noticias/comissao-protecao-dados-privacidade-lanca-parecer-sobre-decretos-federais-criam-grande.

28 Gilmar Mendes, Voto Conjunto ADI 6649 e ADPF 695.

29 Portaria no. 793/19, de 24 de outubro de 2019, Imprensa Nacional de 25 de outubro (Brazil), www.in.gov.br/en/web/dou/-/portaria-n-793-de-24-de-outubro-de-2019-223853575.

30 Portal da Transparência, ‘Fundo Nacional de Segurança Pública’ (n.d.), www.portaltransparencia.gov.br/orgaos/30911?ano=2022.

31 Alisson Possa, ‘O reconhecimento facial como instrumento de reforço do estado de coisas inconstitucionais no Brasil’ (2021) 1 IDP Law Review 134.

32 João Victor Archegas and Christian Perrone, ‘Don’t snoop on me’ (16 December 2021), Verfassungsblog: On Matters Constitutional, https://intr2dok.vifa-recht.de/receive/mir_mods_00011576; Moriah Daugherty, Katie Evans, Edward J. George, Sabrina McCubbin, Harrison Rudolph, Ilana Ullman, Sara Ainsworth, David Houck, Megan Iorio, Matthew Kahn, Eric Olson, Jaime Petenko and Kelly Singleton, ‘The perpetual line-up: Unregulated police face recognition in America’ (18 October 2016), Georgetown Law Center on Privacy and Technology, www.perpetuallineup.org; Karen Hao and Jonathan Stray, ‘Can you make AI fairer than a judge? Play our courtroom algorithm game’ (17 October 2019), MIT Technology Review, www.technologyreview.com/2019/10/17/75285/ai-fairer-than-judge-criminal-risk-assessment-algorithm/; Will Douglas Heaven, ‘Predictive policing algorithms are racist. They need to be dismantled’ (17 July 2020), MIT Technology Review, www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/; Jennifer Lynch, ‘Face off: Law enforcement use of face recognition technology’ (May 2019), Electronic Frontier Foundation, www.eff.org/files/2019/05/28/face-off-report.pdf

33 Carolina Reis, Eduarda Costa Almeida, Fernando Fellows Dourado and Felipe Rocha da Silva, ‘Vigilância automatizada: uso de reconhecimento facial pela Administração Pública no Brasil’ (7 July 2021), LAPIN, p. 51, https://lapin.org.br/2021/07/07/vigilancia-automatizada-uso-de-reconhecimento-facial-pela-administracao-publica-no-brasil/; Nunes, ‘Novas Ferramentas’.

34 Frank Pasquale, ‘Secret algorithms threaten the rule of law’ (1 June 2017), MIT Technology Review, www.technologyreview.com/2017/06/01/151447/secret-algorithms-threaten-the-rule-of-law/.

35 Mariana Mazzucato, Marietje Schaake, Seb Krier and Josh Entsminger, ‘Governing artificial intelligence in the public interest’ (28 July 2022), UCL Institute for Innovation and Public Purpose, Working Paper Series (IIPP WP 2022–12), www.ucl.ac.uk/bartlett/public-purpose/wp2022-12.

36 Reis et al., ‘Vigilância automatizada’.

37 Gaspar Pisanu and Verónica Arroyo, ‘Surveillance tech in Latin America: Made abroad, deployed at home’ (9 August 2021), Access Now, www.accessnow.org/surveillance-tech-in-latin-america-made-abroad-deployed-at-home/.

38 Nunes, Silva, and de Oliveira, ‘Um Rio de câmeras’; Reis et al., ‘Vigilância automatizada’; Reia and Belli, ‘Smart cities no Brasil’.

39 Mazzucato et al., ‘Governing artificial intelligence’; Mariana Mazzucato and Josh Ryan-Collins, ‘Putting value creation back into “public value”: From market-fixing to market-shaping’ (2022) 25(4) Journal of Economic Policy Reform 345–360.

40 Glauco Arbix, Mario Sergio Salerno, Guilherme Amaral, and Leonardo Melo Lins, ‘Avanços, equívocos e instabilidade das políticas de inovação no Brasil’ (2017) 36 Novos estudos CEBRAP 9; Chris Freeman, ‘The economics of technical change’ (1994) 18 Cambridge Journal of Economics 463; Chris Freeman, ‘The “national system of innovation” in historical perspective’ (1995) 19 Cambridge Journal of Economics 5.

41 Julia Powles and Helen Nissenbaum, ‘The seductive diversion of “solving” bias in artificial intelligence’ (7 December 2018), OneZero, https://onezero.medium.com/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53.

42 M. Souza and R. Zanatta, ‘The problem of automated facial recognition technologies in Brazil: Social countermovements and the new frontiers of fundamental rights’ (2021) 1 Latin American Human Rights Studies, https://revistas.ufg.br/lahrs/article/view/69423.

43 Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (Kindle ed., Profile Books, 2019).

44 Evgeny Morozov, Big Tech: A Ascensão Dos Dados e a Morte Da Política (Ubu Editora, 2019).

45 Walter Gaspar and Yasmin Curzi de Mendonca, ‘Artificial intelligence in Brazil still lacks a strategy’ (2021), Report by the Center for Technology and Society at FGV Law School, https://cyberbrics.info/wp-content/uploads/2021/05/EBIA-en-2.pdf; Ronaldo Lemos, ‘Estratégia de IA Brasileira é Patética’ (2021), Folha de São Paulo, www1.folha.uol.com.br/colunas/ronaldolemos/2021/04/estrategia-de-ia-brasileira-e-patetica.shtml; Eduardo Magrani, ‘Estratégia Brasileira de Inteligência Artificial: Comentários Sobre a Portaria 4.617/2021 Do MCTI’ (2021), https://secureservercdn.net/192.169.220.85/dxc.177.myftpupload.com/wp-content/uploads/2021/12/OPINION-Brasil-PORT-.pdf?time=1643260747; Francisco Saboya, ‘Existe Mesmo Uma Estratégia Brasileira de Inteligência Artificial?’ (13 April 2021), Canal MyNews, https://canalmynews.com.br/francisco-saboya/existe-mesmo-uma-estrategia-brasileira-de-inteligencia-artificial/.

46 MCTI, ‘Inteligência Artificial Estratégia – Repositório’, Ministério da Ciência, Tecnologia e Inovações (n.d.), www.gov.br/mcti/pt-br/acompanhe-o-mcti/transformacaodigital/inteligencia-artificial-estrategia-repositorio.

17 FRT Regulation in China

1 See, e.g., Seungha Lee, ‘Coming into focus: China’s facial recognition regulations’ (4 May 2020), Center for Strategic & International Studies, www.csis.org/blogs/trustee-china-hand/coming-focus-chinas-facial-recognition-regulations.

2 Qingxiu Bu, ‘The global governance on automated facial recognition (AFR): Ethical and legal opportunities and privacy challenges’ (2021) 2 Int’l Cybersecurity L. Rev. 113–145, at 130.

3 See Yan Luo and Rui Guo, ‘Facial recognition in China: Current status, comparative approach and the road ahead’ (2021) 25 U. Pa. J.L. & Soc. Change 153–179, at 160–162.

4 See Masha Borak, ‘Facial recognition is used in China for everything from refuse collection to toilet roll dispensers and its citizens are growing increasingly alarmed, survey shows’ (27 January 2021), South China Morning Post, www.scmp.com/tech/innovation/article/3119281/facial-recognition-used-china-everything-refuse-collection-toilet.

5 Tristan G. Brown, Alexander Statman, and Celine Sui, ‘Public debate on facial recognition technologies in China’ (Summer 2021), MIT Case Studies in Social and Ethical Responsibilities of Computing, https://doi.org/10.21428/2c646de5.37712c5c.

6 See Chris Burt, ‘Top performing developers steady in updated NIST facial recognition 1:N test results’ (4 May 2022), BiometricUpdate.com, www.biometricupdate.com/202205/top-performing-developers-steady-in-updated-nist-facial-recognition-1n-test-results.

7 See Daniel Ren, ‘AI, machine learning tech promises US$600 billion annually for China economy as it pervades industries, says McKinsey’ (25 July 2022), South China Morning Post, www.scmp.com/business/banking-finance/article/3186409/ai-machine-learning-tech-promises-us600-billion-annually.

10 Ngoc Son Bui and Jyh-An Lee, ‘Comparative cybersecurity law in socialist Asia’ (2022) 55 Vand. J. Transnat’l L. 631–680, at 660–662.

11 See Jonathan Turley, ‘Anonymity, obscurity, and technology: Reconsidering privacy in the age of biometrics’ (2020) 100 B.U. L. Rev. 2179–2261, at 2185–2186.

12 Quanguorenmin Daibiaodahui Changwuweiyuanhui Guanyu Jiaqiang Wangluoxinxibaohu de Jueding (《全国人民代表大会常务委员会关于加强网络信息保护的决定》) [Decision of the Standing Committee of the National People’s Congress on Strengthening Information Protection on Networks] (2012). Issued by the Standing Committee of the National People’s Congress on 28 December.

13 Zhonghua Renmin Gongheguo Minfadian (《中华人民共和国民法典》) [Civil Code of the People’s Republic of China (Civil Code)] (2020). Promulgated by the Standing Committee of the National People’s Congress on 28 May, effective on 1 January 2021 (hereafter Civil Code), Art. 1034.

14 See, e.g., Zhonghua Renmin Gongheguo Wangluo Anquan Fa (《中华人民共和国网络安全法》) [Cybersecurity Law of the People’s Republic of China] (2016). Promulgated by the Standing Committee of the National People’s Congress on 7 November, effective on 1 June 2017, Art. 41.

15 Civil Code, Chapter 6; Zhonghua Renmin Gongheguo Minfa Zongze (《中华人民共和国民法总则》) [General Provisions of the Civil Law of the People’s Republic of China] (2017). Promulgated by the Standing Committee of the National People’s Congress on 15 March, effective on 1 October 2017, Art. 111.

16 Civil Code, Art. 1035.

17 Zuigao Renmin Fayuan Guanyu Shenli Shiyong Renlian Shibie Jishu Chuli Geren Xinxi Xiangguan Minshi Anjian Shiyong Falu Ruogan Wenti De Guiding (《最高人民法院关于审理使用人脸识别技术处理个人信息相关民事案件适用法律若干问题的规定》) [Provisions of the Supreme People’s Court on Several Issues concerning the Application of Law in the Trial of Civil Cases Relating to Processing of Personal Information by Using the Facial Recognition Technology] (2021). Promulgated by the Judicial Committee of the Supreme People’s Court on 8 June, effective on 1 August 2021 (hereafter FRT Judicial Interpretation).

18 Footnote Ibid., Art. 1.

19 Footnote Ibid., Art. 2.

20 Footnote Ibid., Art. 10.

21 Footnote Ibid., Art. 8 and Art. 9.

22 Footnote Ibid., Art. 5.

23 Zhonghua Renmin Gongheguo Geren Xinxi Baohufa (《中华人民共和国个人信息保护法》) [Personal Information Protection Law of the People’s Republic of China (PIPL)]. Promulgated by the Standing Committee of the National People’s Congress on 20 Aug 2021, effective on 1 November 2021 (hereafter PIPL).

24 Footnote Ibid., Art. 33.

25 Footnote Ibid., Art. 28.

26 Footnote Ibid., Art. 29.

27 Footnote Ibid., Art. 31.

28 Footnote Ibid., Art. 26.

29 See, e.g., Hangzhoushi Wuye Guanli Tiaoli (《杭州市物业管理条例》) [Hangzhou Realty Management Regulation] (Hangzhou, China) (2021). Promulgated by the Standing Committee of People’s Congress in Hangzhou on 9 August, effective on 1 March 2022, Art. 50; Shanghai Shi Shuju Tiaoli (《上海市数据条例》) [Shanghai Data Regulation] (2021). Promulgated by the Standing Committee of People’s Congress in Shanghai on 25 November, effective on 1 January 2022, Art. 23; Shenzhen Jingji Tequ Shuju Tiaoli (《深圳经济特区数据条例》) [Data Regulations of Shenzhen Special Economic Zone] (2021). Promulgated by the Standing Committee of People’s Congress in Shenzhen on 29 June, effective on 1 January 2022, Art. 19.

31 See James Y. Wang, ‘The best data plan is to have a game plan: Obstacles and solutions to reaching international data privacy agreements’ (2022) 28 Mich. Tech. L. Rev. 385–419, at 401–444.

32 See FRT Judicial Interpretation, Art. 1 and Art. 5.

33 See Jyh-An Lee and Ching-Yi Liu, ‘Real-name registration rules and the fading digital anonymity in China’ (2016) 25 Wash. Int’l L.J. 1–34, at 11–15.

34 In Hunan Province, for example, provincial-level real-name registration measures require hotels to deploy police systems (the Lüguanye Zhian Guanli Xinxi Xitong, or Public Security Administration Information System) at check-in to collect facial data; guests who fail to comply may be denied accommodation. In Yushu City of Qinghai Province, local police started to upgrade the system with FRT-empowered capabilities in 2019. See Hunan Sheng Luguanye Luke Zhusu Shiming Dengji Guanli Guiding (《湖南省旅馆业旅游住宿实名登记管理规定》) [Provisions on the Administration of Real-Name Registration for the Hospitality Industry in Hunan Province] (2021). Promulgated by the Provincial Public Security Department of Hunan Province on 1 December, effective on 1 January 2022, Art. 4; The Paper Government Affairs, Lihaile! Yushushi Lüguan Ruzhu Jiang Kaiqi Shualian Shidai (《厉害了!玉树市旅馆入住将开启“刷脸”时代》) [Amazing! Yushu Hotels Now Use Facial Recognition to Check in Guests], The Paper (20 November 2019), www.thepaper.cn/newsDetail_forward_5017320.

35 See PIPL, Art. 13.

36 See FRT Judicial Interpretation, Art. 8; PIPL, Art. 66 and Art. 69.

37 See PIPL, Art. 68. A recent case might illustrate this point. In April 2022, a member of the Big Data Authority in Henan Province was identified in a scandal involving the illicit tampering with personal information from the ‘health code’ mobile application to wilfully prevent people from retrieving their money from banks involved in financial scams. After a public outcry, those deemed directly responsible, including the person from the Big Data Authority, were given administrative and intra-party sanctions, which cited the authority of both the PRC Law on Administrative Discipline for Public Officials (2020) and the party’s disciplinary regulations. See, e.g., Phoebe Zhang, ‘China officials who abused health codes to stop bank protests punished’ (23 June 2022), South China Morning Post, www.scmp.com/news/china/politics/article/3182742/china-officials-who-abused-health-codes-stop-bank-protests.

38 See, e.g., John Wagner Givens and Debra Lam, ‘Smarter cities or Bigger Brother? How the race for smart cities could determine the future of China, democracy, and privacy’ (2020) 47 Fordham Urb. L.J. 829–882, at 865.

39 Footnote Ibid., 865–866.

40 See, e.g., Jacques deLisle and Shen Kui, ‘China’s response to Covid-19’ (2021) 73 Admin. L. Rev. 19–51, at 47–48.

41 Brown, Statman, and Sui, ‘Public debate on facial recognition technologies’.

43 See Guobing Su Hangzhou Yesheng Dongwushijie Youxian Gongsi Fuwu Hetong Jiufen An (郭兵诉杭州野生动物世界有限公司服务合同纠纷案) [Guo Bing v. Hangzhou Safari Park Co., Ltd.], Hangzhou Fuyang District People’s Court Case No. (2019) Zhe 0111 Minchu 6971, 20 November 2020.

46 Guobing Su Hangzhou Yesheng Dongwushijie Youxian Gongsi Fuwu Hetong Jiufen An (郭兵诉杭州野生动物世界有限公司服务合同纠纷案) [Guo Bing v. Hangzhou Safari Park Co., Ltd.], Hangzhou Interm. People’s Ct. of Zhejiang Province Case No. (2020) Zhe 01 Minzhong 10940, 9 April 2021.

49 See, e.g., China Daily, ‘Xin Shidai Tuidong Fazhi Jincheng 2021 Niandu Shida Anjian Jiexiao’ (《“新时代推动法治进程2021年度十大案件”揭晓》) [Revealing ten cases of the year 2021 for the progress of the rule of law in the new era] (22 January 2022), https://cn.chinadaily.com.cn/a/202201/22/WS61ebd6caa3107be497a036f7.html.

50 See, e.g., China Court, ‘Renlian Shibie Jiufen Diyi An: Geren Xinxi Sifa Baohu De Dianfan’ (《人脸识别第一案:个人信息司法保护的典范》) [The first court case involving facial recognition technology: A judicial epitome for personal information protection] (8 March 2022), www.chinacourt.org/article/detail/2022/03/id/6562816.shtml.

51 See, e.g., Ye Yuan, ‘A professor, a zoo, and the future of facial recognition in China’ (26 April 2021), Sixth Tone, www.sixthtone.com/news/1007300/a-professor%2C-a-zoo%2C-and-the-future-of-facial-recognition-in-china.

52 See David Cowhig, ‘2022: Chinese law prof’s lament and encouragement’ (29 January 2022), David Cowhig’s Translation Blog, https://gaodawei.wordpress.com/2022/01/29/2022-chinese-law-profs-lament-and-encouragement/.

54 See Jeffrey Ding, ‘ChinAI #77: A strong argument against facial recognition in the Beijing subway’ (10 December 2019), ChinAI Newsletter, https://chinai.substack.com/p/chinai-77-a-strong-argument-against.

55 Masha Borak, ‘Beijing’s subway system will use facial recognition to single out people for different security measures’ (1 November 2019), South China Morning Post, www.scmp.com/abacus/tech/article/3035661/beijings-subway-system-will-use-facial-recognition-single-out-people.

56 See Jeffrey Ding’s translation of Lao’s post in Ding, ‘ChinAI #77’.

57 See Beijing News, ‘Beijing Ditie Anjian Youwang Yingyong Renlian Shibie Jishu’ (《北京地铁安检有望应用人脸识别技术》) [Beijing Metro security checks set to adopt facial recognition technology] (30 October 2019), http://epaper.bjnews.com.cn/html/2019-10/30/content_769638.htm?div=0.

58 See Jeffrey Ding’s blog: Ding, ‘ChinAI #77’.

59 Footnote Ibid. for Ding’s translation.

61 See, e.g., Stella Chen, ‘Weibo chairman backs Chinese censor’s crackdown and promises “ecologically sound” cyberspace’ (25 September 2022), South China Morning Post, www.scmp.com/news/china/politics/article/3193605/weibo-chairman-backs-chinese-censors-crackdown-and-promises.

62 See Cowhig’s translation of Lao’s essay: Cowhig, ‘Chinese law prof’s lament’.

63 See Southern Metropolis Daily, ‘Beijing Ditie Youjian Shualian Anjian, Yin Yinsi Xielu Danyou Zhuanjia: Yingxian Zhengqiu Yijian’ (《北京地铁又见刷脸安检,引隐私泄露担忧 专家:应先征求意见》) [Beijing Metro resorts to facial recognition for security checks, causing concerns for data leaks. Experts: should consult the public’s opinion] (29 December 2021), Southern Metropolis Daily, https://m.mp.oeeee.com/a/BAAFRD000020211229638893.html.

64 See, e.g., Coco Feng, ‘Coronavirus: Beijing, fighting Omicron, adds identity info to transport passes to speed up checks of Covid-19 status’ (18 May 2022), South China Morning Post, www.scmp.com/tech/article/3178195/coronavirus-beijing-fighting-omicron-adds-identity-info-transport-passes-speed.

65 Jyh-An Lee, ‘Hacking into China’s cybersecurity law’ (2018) 53 Wake Forest L. Rev. 57–104, at 99–100.

66 Footnote Ibid., 100.

68 William Chaskes, ‘The three laws: The Chinese Communist Party throws down the data regulation gauntlet’ (2022) 79 Wash. & Lee L. Rev. 1169–1224, at 1182–1184.

69 Christopher Slobogin, ‘Public privacy: Camera surveillance of public places and the right to anonymity’ (2002) 72 Miss. L.J. 213–315, at 240–243.

70 See Lee and Liu, ‘Real-name registration rules’, pp. 11–15.

72 Elizabeth A. Rowe, ‘Regulating facial recognition technology in the private sector’ (2020) 24 Stan. Tech. L. Rev. 1–54, at 23–24.

73 See Lily Kuo, ‘China brings in mandatory facial recognition for mobile phone users’ (2 December 2019), The Guardian, www.theguardian.com/world/2019/dec/02/china-brings-in-mandatory-facial-recognition-for-mobile-phone-users.

74 Ira S. Rubinstein, Gregory T. Nojeim, and Ronald D. Lee, ‘Systematic government access to personal data: A comparative analysis’ (2014) 4(2) International Data Privacy Law 96–119, at 98, https://doi.org/10.1093/idpl/ipu004; Kevin Werbach, ‘Orwell that ends well? Social credit as regulation for the algorithmic age’ (2022) U. Ill. L. Rev. 1417–1475, at 1427–1431.

75 Givens and Lam, ‘Smarter cities or Bigger Brother?’, 851–858.

76 Isabelle Qian, Muyi Xiao, Paul Mozur, and Alexander Cardia, ‘Four takeaways from a Times investigation into China’s expanding surveillance state’ (21 June 2022), New York Times, www.nytimes.com/2022/06/21/world/asia/china-surveillance-investigation.html.

77 See Luo and Guo, ‘Facial recognition in China’, 178.

18 Principled Regulation of Facial Recognition Technology: A View from Australia and New Zealand

1 Joe Purshouse and Liz Campbell, ‘Privacy, crime control and police use of automated facial recognition technology’ (2019) 3 Criminal Law Review 188–204.

2 Lindsey Barrett, ‘Ban facial recognition technologies for children – and for everyone else’ (2020) 26 BUJ Sci. & Tech. L. 223–286.

3 Nessa Lynch, ‘Beyond the ban – Principled regulation of facial recognition technology’ in Kelly Pendergast and Anna Pendergast (eds.), More Zeros and Ones: Digital Technology, Maintenance and Equity in Aotearoa New Zealand (Bridget Williams Books, 2022), pp. 121–182.

4 Nessa Lynch and Andrew Chen, ‘Facial recognition technology – Considerations for use in policing’ (December 2021), New Zealand Police.

5 Purshouse and Campbell, ‘Privacy, crime control and police use’.

6 Nessa Lynch, Liz Campbell, Joe Purshouse, and Marcin Betkier, ‘Facial recognition technology in New Zealand: Towards a legal and ethical framework’ (December 2020), The Law Foundation of New Zealand.

7 Lynch and Chen, ‘Facial recognition technology’.

8 For example, in passport fraud detection. See Lynch et al., ‘Facial recognition technology in New Zealand’.

9 Immigration Act 2009 (NZ) s. 30.

10 City of Melbourne, ‘Safe city cameras’ (n.d.), www.melbourne.vic.gov.au/community/safety-emergency/pages/safe-city-cameras.aspx; Elias Visontay, ‘Councils tracking our faces on the sly’ (29 August 2019), The Australian, www.theaustralian.com.au/nation/councils-tracking-our-faces-on-the-sly/news-story/eea2b51fa82b076796ad7e294e111d3e.

11 Erik Tlozek, ‘SA Police could use Adelaide City facial recognition technology, despite being asked not to’ (20 June 2022), ABC News, www.abc.net.au/news/2022-06-20/sa-police-could-use-adelaide-city-facial-recognition-technology/101166064.

12 See NT Police, Fire and Emergency Services, ‘Success for Northern Territory Police at IAwards’ (20 June 2016), Media release, https://pfes.nt.gov.au/newsroom/2016/success-northern-territory-police-iawards; NSW Government, ‘NSW Police Force and facial recognition’ (2022), www.police.nsw.gov.au/crime/terrorism/terrorism_categories/facial_recognition.

13 Australian Government, ‘ID match’ (2022), www.idmatch.gov.au.

14 Judy Skatssoon, ‘600k MyGov accounts now connected to digital ID’ (24 October 2021), Government News, www.governmentnews.com.au/600k-mygov-accounts-now-connected-to-digital-id/.

15 Bethan Davies, Martin Innes, and Andrew Dawson, ‘An evaluation of South Wales Police’s use of automated facial recognition’ (September 2018), Report, Universities’ Police Science Institute and Crime & Security Research Institute, Cardiff University; Suzanne Shale, Deborah Bowman, Priyah Singh, and Leif Wenar, ‘London Policing Ethics Panel: Final report on live facial recognition’ (May 2019), London Policing Ethics Panel, London.

16 Lynch and Chen, ‘Facial recognition technology’.

17 Lynch et al., ‘Facial recognition technology in New Zealand’; Lynch and Chen, ‘Facial recognition technology’.

18 Gareth Corfield, ‘Tech firm used by Met and MoD forced to delete billions of Facebook photos’ (23 May 2022), The Telegraph; Home Office UK, ‘Police transformation fund: Successful bids 2016 to 2017’ (4 September 2017), www.gov.uk/government/publications/police-transformation-fund-successful-bids-2016-to-2017.

19 R (Bridges) v. The Chief Constable of South Wales [2019] EWHC 2341 (Admin).

20 Here, R (Catt) v. Association of Chief Police Officers [2015] UKSC 9 at [11]–[14] per Lord Sumption was cited with approval.

21 Beghal v. Director of Public Prosecutions [2016] AC 88 at [31] and [32] per Lord Hughes.

22 R (Bridges) v. The Chief Constable of South Wales, [85]–[90].

23 Footnote Ibid., [96].

24 Footnote Ibid., [199]. See Joy Buolamwini and Timnit Gebru, ‘Gender shades: Intersectional accuracy disparities in commercial gender classification’, Conference on Fairness, Accountability, and Transparency, New York (February 2018); Joy Buolamwini, ‘Response: Racial and gender bias in Amazon Rekognition – Commercial AI system for analyzing faces’ (25 January 2019), Medium, https://medium.com/@Joy.Buolamwini/response-racial-and-gender-bias-in-amazon-rekognition-commercial-ai-system-for-analyzing-faces-a289222eeced.

25 South Wales Police, ‘Response to the Court of Appeal judgment on the use of facial recognition technology’, Media release (11 August 2020), www.south-wales.police.uk/en/newsroom/response-to-the-court-of-appeal-judgment-on-the-use-of-facial-recognition-technology/.

26 Stephanie Palmer-Derrien, ‘Aussie entrepreneur launches “disturbing and unethical” facial recognition tech in Silicon Valley’ (22 January 2020), Smart Company, www.smartcompany.com.au/startupsmart/news/aussie-clearview-ai/.

27 Hannah Ryan, ‘Australian Police have run hundreds of searches on Clearview AI’s facial recognition tool’ (28 February 2020), BuzzFeed News, www.buzzfeed.com/hannahryan/clearview-ai-australia-police.

28 Commissioner Initiated Investigation into Clearview AI, Inc. (Privacy) [2021] AICmr 54 (14 October 2021) [8].

29 Jake Goldenfein, ‘Australian police are using the Clearview AI facial recognition system with no accountability’ (4 March 2020), The Conversation, https://theconversation.com/australian-police-are-using-the-clearview-ai-facial-recognition-system-with-no-accountability-132667.

30 Office of the Australian Information Commission (OAIC), ‘Clearview AI breached Australians’ privacy’, OAIC Media Release (2 November 2021), www.oaic.gov.au/updates/news-and-media/clearview-ai-breached-australians-privacy. See further, Commissioner Initiated Investigation into Clearview AI, Inc.

31 Commissioner Initiated Investigation into Clearview AI, Inc., at [239].

32 Footnote Ibid., at [242].

33 Office of the Australian Information Commissioner (OAIC), ‘AFP ordered to strengthen privacy governance’ (16 December 2021), www.oaic.gov.au/updates/news-and-media/afp-ordered-to-strengthen-privacy-governance.

34 Australian Human Rights Commission (AHRC), ‘Human rights and technology’ (March 2021), Final report, p. 9.

35 Footnote Ibid., pp. 114–116.

36 Footnote Ibid., p. 116.

38 Nicholas Davis, Lauren Perry, and Edward Santow, ‘Facial recognition technology: Towards a model law’ (September 2022), Report, Human Technology Institute, University of Technology Sydney. One of the report’s authors, Professor Edward Santow, is the former Australian Human Rights Commissioner and worked on the AHRC report just discussed.

39 Footnote Ibid., p. 7.

41 Footnote Ibid., p. 8.

42 Footnote Ibid., p. 13.

43 Footnote Ibid., p. 65.

44 Footnote Ibid., pp. 67–68.

45 Footnote Ibid., p. 80.

46 Tamsin Rose, ‘Clubs likely to proceed with facial recognition after NSW government shelves reform bill’ (2 November 2022), The Guardian Australia, www.theguardian.com/australia-news/2022/nov/02/clubs-likely-to-proceed-with-facial-recognition-after-nsw-government-shelves-reform-bill.

47 Charlotte Graham-McLay, ‘New Zealand claims world first in setting standards for government use of algorithms’ (28 July 2020), The Guardian, www.theguardian.com/world/2020/jul/28/new-zealand-claims-world-first-in-setting-standards-for-government-use-of-algorithms.

49 Privacy Commissioner and the Government Chief Data Steward, ‘Principles for the safe and effective use of data and analytics’ (16 May 2018), www.privacy.org.nz/publications/guidance-resources/principles-for-the-safe-and-effective-use-of-data-and-analytics-guidance/.

50 Proposal for Artificial Intelligence Act (European Commission, 2021/0106 (COD)).

52 Vera Lúcia Raposo, ‘Ex machina: Preliminary critical assessment of the European Draft Act on Artificial Intelligence’ (2022) 30 International Journal of Law and Information Technology 88, 95.

53 Footnote Ibid., p. 96.

54 Footnote Ibid., p. 103.

55 Footnote Ibid., p. 107.

56 Footnote Ibid., p. 91.

57 Dan Svantesson, ‘The European Union Artificial Intelligence Act: Potential implications for Australia’ (2022) 47 Alternative Law Journal 4, 6.

58 Footnote Ibid., pp. 6–8.

59 Footnote Ibid., pp. 8–9.

60 Tambiama Madiega, Briefing: Artificial Intelligence Act (2nd ed., European Parliamentary Research Service, 2023).

61 Guidelines on the Use of Facial Recognition Technology in the Area of Law Enforcement, Guidelines No 05/2022 (European Data Protection Board, European Union, adopted 12 May 2022) (Guidelines 05/2022).

62 Footnote Ibid., p. 6.

63 Footnote Ibid., p. 6.

64 Footnote Ibid., p. 2.

66 Footnote Ibid., p. 7.

67 Footnote Ibid., p. 11.

68 New Zealand Police, ‘Trial or adoption of new policing technology – Police Manual chapter’ (July 2022), www.police.govt.nz/about-us/publication/trial-or-adoption-new-policing-technology-police-manual-chapter.

69 New Zealand Police, ‘NZ Police technology capabilities list’ (April 2023), www.police.govt.nz/sites/default/files/publications/technology-capabilites-list.pdf.

70 Lynch and Chen, ‘Facial recognition technology’.

71 New Zealand Police, ‘Advisory panel on emergent technologies’ (2022), www.police.govt.nz/about-us/programmes-and-initiatives/police-use-emergent-technologies/advisory-panel-emergent.

72 Police Scotland and Scottish Police Authority, ‘Policing 2026: Our 10-year strategy for policing in Scotland’ (2017), Report.

73 Justice Sub-Committee on Policing, ‘Facial recognition: How policing in Scotland makes use of this technology’ (11 February 2020), SP Paper 678, 1st Report, 2020 (Session 5).

74 Letter from Assistant Chief Constable Duncan Sloan to Justice Sub-Committee Convener (8 April 2020).

75 Scottish Biometrics Commissioner Act 2020, s. 23(1) and (2).

76 AHRC, ‘Human rights and technology’, p. 127.

78 Cf. Biometrics and Forensics Ethics Group, ‘Ethical issues arising from the police use of live facial recognition technology’ (February 2019), where the pilot project had begun already.

19 Morocco’s Governance of Cities and Borders: AI-Enhanced Surveillance, Facial Recognition, and Human Rights

a Last updated: 19 August 2022.

b Anaïs Lefébure and Mehdi Mahmoud, ‘Casablanca, Marrakech, Dakhla … Nos villes sous haute surveillance?’ (2 July 2021), Tel Quel, https://telquel.ma/2021/07/02/casablanca-marrakech-dakhla-nos-villes-bientot-sous-haute-surveillance_1727723.

* We are grateful to the former Centre of Expertise on Global Governance at The Hague University of Applied Sciences and the Institute of Security and Global Affairs at Leiden University for the seed grant that made this research possible.

1 Louise Eley and Ben Rampton, ‘Everyday surveillance, Goffman, and unfocused interaction’ (2020) 18 Surveillance & Society 199–215, https://ojs.library.queensu.ca/index.php/surveillance-and-society/article/view/13346; David Lyon, Theorizing Surveillance: The Panopticon and Beyond (Willan Publishing, 2006); David Lyon, Surveillance Society: Monitoring Everyday Life (Open University Press, 2001); Rocco Bellanova, Kristina Irion, Katja Lindskov Jacobsen, Francesco Ragazzi, Rune Saugmann, and Lucy Suchman, ‘Toward a critique of algorithmic violence’ (2021) 15 International Political Sociology 121–150, https://academic.oup.com/ips/article/15/1/121/6170592; Jesus Benito-Picazo, Enrique Domínguez, Esteban J. Palomo, and Ezequiel López-Rubio, ‘Deep learning-based video surveillance system managed by low cost hardware and panoramic cameras’ (2020) 27 Integrated Computer-Aided Engineering 373–387, www.medra.org/servlet/aliasResolver?alias=iospress&doi=10.3233/ICA-200632; Francesco Ragazzi, ‘Security Vision: The algorithmic security politics of computer vision’ (2021), www.securityvision.io/.

2 Jozef Andraško, Matúš Mesarčík, and Ondrej Hamul’ák, ‘The regulatory intersections between artificial intelligence, data protection and cyber security: Challenges and opportunities for the EU legal framework’ (2021) 36 AI & Society 623–636, https://link.springer.com/10.1007/s00146-020-01125-5; Steve Gold, ‘Military biometrics on the frontline’ (2010) 2010(10) Biometric Technology Today 7–9, https://linkinghub.elsevier.com/retrieve/pii/S0969476510702071; Josh Chin and Clément Bürge, ‘Twelve days in Xinjiang: How China’s surveillance state overwhelms daily life’ (20 December 2017), Wall Street Journal, www.wsj.com/articles/twelve-days-in-xinjiang-how-chinas-surveillance-state-overwhelms-daily-life-1513700355.

3 Taylor C. Boas, ‘Weaving the authoritarian web: The control of internet use in nondemocratic regimes’ in John Zysman and Abraham Newman (eds.), How Revolutionary Was the Digital Revolution? National Responses, Market Transitions, and Global Technology (Stanford University Press, 2006), pp. 373–390; Bert Hoffmann, ‘Civil society in the digital age: How the internet changes state–society relations in authoritarian regimes. The case of Cuba’ in Francesco Cavatorta (ed.), Civil Society Activism under Authoritarian Rule: A Comparative Perspective (Routledge, 2012), pp. 219–244; Lydia Khalil, ‘Digital authoritarianism, China and COVID’ (2 November 2020), Lowy Institute, www.lowyinstitute.org/publications/digital-authoritarianism-china-covid; Justin Sherman, ‘Digital authoritarianism and implications for US national security’ (2021) 6(1) The Cyber Defense Review 107–118, www.jstor.org/stable/2699411; Ben Wagner, ‘Whose politics? Whose rights? Transparency, capture and dual-use export controls’ (2021) 31 Security and Human Rights 35–46, https://brill.com/view/journals/shrs/31/1-4/article-p35_35.xml.

4 Steven Feldstein, ‘The global expansion of AI surveillance’ (17 September 2019), Carnegie Endowment for International Peace, Paper, https://carnegieendowment.org/2019/09/17/global-expansion-of-ai-surveillance-pub-79847.

5 See Footnote ibid., pp. 18–19, for details on the technologies themselves. Suffice it to state here that facial recognition ‘is a biometric technology that uses cameras – both video or still images – to match stored or live footage of individuals with images from a database. […] They can scan distinctive facial features in order to create detailed biometric maps of individuals without obtaining consent. Often facial recognition surveillance cameras are mobile and concealable.’ However, advanced video surveillance and facial recognition cameras could not function without cloud computing capabilities. If video surveillance is the ‘eyes’ then cloud services are the ‘brains’ that connect cameras and hardware to the cloud computing models via 5G networks. However, as cloud computing in isolation is not inherently oriented toward surveillance, these secondary technologies are categorised as ‘enabling technologies’: Footnote ibid., p. 21.

7 Footnote Ibid., p. 13.

8 Roxana Akhmetova and Erin Harris, ‘Politics of technology: The use of artificial intelligence by US and Canadian immigration agencies and their impacts on human rights’ in Emre E. Korkmaz (ed.), Digital Identity, Virtual Borders and Social Media (Edward Elgar Publishing, 2021), pp. 52–72, www.elgaronline.com/view/edcoll/9781789909142/9781789909142.00008.xml; Ausma Bernot, ‘Transnational state-corporate symbiosis of public security: China’s exports of surveillance technologies’ (2021) 10(2) International Journal for Crime, Justice and Social Democracy 159–173, www.crimejusticejournal.com/article/view/1908; Peter Dauvergne, ‘The globalization of artificial intelligence: Consequences for the politics of environmentalism’ (2021) 18 Globalizations 285–299, www.tandfonline.com/doi/full/10.1080/14747731.2020.1785670; Orabile Mudongo, ‘Africa’s expansion of AI surveillance – Regional gaps and key trends’ (26 February 2021), Briefing Paper, Africa Portal, www.africaportal.org/publications/africas-expansion-ai-surveillance-regional-gaps-and-key-trends/; Ben Wagner, ‘After the Arab spring: New paths for human rights and the internet in European foreign policy’ (July 2012), Briefing Paper, European Parliament: Directorate-General for External Policies, European Union, www.europarl.europa.eu/RegData/etudes/note/join/2012/457102/EXPO-DROI_NT%282012%29457102_EN.pdf; Ben Wagner, ‘Push-button-autocracy in Tunisia: Analysing the role of internet infrastructure, institutions and international markets in creating a Tunisian censorship regime’ (2012) 36(6) Telecommunications Policy 484–492, https://linkinghub.elsevier.com/retrieve/pii/S0308596112000675.

9 Feldstein, ‘The global expansion of AI surveillance’, p. 8.

10 Molly K. Land and Jay D. Aronson, ‘Human rights and technology: New challenges for justice and accountability’ (2020) 16(1) Annual Review of Law and Social Science 223–240, www.annualreviews.org/doi/10.1146/annurev-lawsocsci-060220-081955.

11 Footnote Ibid., p. 226.

12 Footnote Ibid., pp. 226, 235.

13 Feldstein, ‘The global expansion of AI surveillance’, p. 16.

14 Bertelsmann Stiftung, ‘BTI 2022 country report – Morocco’ (2022), p. 38.

16 Privacy International, ‘State of privacy Morocco’ (26 January 2019), https://privacyinternational.org/state-privacy/1007/state-privacy-morocco.

17 Mounir Bensalah, ‘Toward an ethical code of AI and human rights in Morocco’ (2021) 1(2) Arribat – International Journal of Human Rights 187–203.

19 Privacy International, ‘State of privacy Morocco’.

21 Jonas Hagmann, ‘Globalizing control research: The politics of urban security in and beyond the Alaouite kingdom of Morocco’ (2021) 6(4) Journal of Global Security Studies 1–23, https://academic.oup.com/jogss/article/doi/10.1093/jogss/ogab004/6208882; Privacy International, ‘State of privacy Morocco’.

22 Hagmann, ‘Globalizing control research’.

23 Privacy International, ‘State of privacy Morocco’.

24 Anaïs Lefébure and Mehdi Mahmoud, ‘De la “smart” à la “safe” city, au détriment de nos vies privées?’ (2 July 2021), Tel Quel, https://telquel.ma/2021/07/02/de-la-smart-a-la-safe-city-au-detriment-de-nos-vies-privees_1727757.

25 Privacy International, ‘State of privacy Morocco’.

26 Amnesty International, ‘Morocco: Human rights defenders targeted with NSO Group’s spyware’ (10 October 2019), www.amnesty.org/en/latest/research/2019/10/morocco-human-rights-defenders-targeted-with-nso-groups-spyware/; Bill Marczak, John Scott-Railton, Sarah McKune, Bahr Abdul Razzak, and Ron Deibert, ‘HIDE AND SEEK: Tracking NSO Group’s Pegasus spyware to operations in 45 countries’ (18 September 2018), The Citizen Lab, https://citizenlab.ca/2018/09/hide-and-seek-tracking-nso-groups-pegasus-spyware-to-operations-in-45-countries/; Bethan McKernan, ‘Emmanuel Macron “pushes for Israeli inquiry” into NSO spyware concerns’ (25 July 2021), The Guardian, www.theguardian.com/world/2021/jul/25/emmanuel-macron-pushes-for-israeli-inquiry-into-nso-spyware-concerns. ‘Once installed on a phone, the [Pegasus] software can extract all of the data that is already on the device, such as text messages, contacts, GPS location, email and browser history. It can additionally create new data by using the phone’s microphone and camera to record the user’s surroundings and ambient sounds.’ N. Hopkins and D. Sabbagh, ‘WhatsApp spyware attack was attempt to hack human rights data, says lawyer’ (14 May 2019), The Guardian, www.theguardian.com/technology/2019/may/14/whatsapp-spyware-vulnerability-targeted-lawyer-says-attempt-was-desperate, cited in Land and Aronson, ‘Human rights and technology’, p. 228.

27 Such as Mâati Monjib, Hicham Mansouri, Taoufik Bouachrine, Souleiman Raissouni, and Omar Radi. See Alfonso Casani and Beatriz Tomé-Alonso (eds.), Marruecos y el cambio de ciclo: en busca de un nuevo pacto social y de nuevas legitimidades (Fundación Alternativas, 2021), p. 11, https://fundacionalternativas.org/wp-content/uploads/2022/07/115f8026034f62907a4d1382c8788886.pdf.

28 Privacy International, ‘Eight things we know so far from the Hacking Team hack’ (9 July 2015), https://privacyinternational.org/news-analysis/1395/eight-things-we-know-so-far-hacking-team-hack.

29 Privacy International, ‘State of privacy Morocco’.

30 Parliamentary question dated 19 November 2019 (E-003890/2019/rev.1) to the Commission, Rule 138, Pierfrancesco Majorino (S&D), www.europarl.europa.eu/doceo/document/E-9-2019-003890_EN.html.

31 Antónia do Carmo Barriga, Ana Filipa Martins, Maria João Simões, and Délcio Faustino, ‘The COVID-19 pandemic: Yet another catalyst for governmental mass surveillance?’ (2020) 2(1) Social Sciences & Humanities Open 1–5, https://linkinghub.elsevier.com/retrieve/pii/S2590291120300851.

32 Chantal E. Berman, ‘Policing the organizational threat in Morocco: Protest and public violence in liberal autocracies’ (2021) 65(3) American Journal of Political Science 733–754.

33 Tel Quel dedicated an entire issue to the use of high-tech surveillance in Moroccan cities, available at https://telquel.ma/sommaire/securite-nos-villes-sous-haute-surveillance; Kenza Filali, ‘Le Maroc parmi les importateurs de matériel d’espionnage britannique’ (11 March 2018), Le Desk, https://ledesk.ma/encontinu/le-maroc-parmi-les-importateurs-de-materiel-despionnage-britannique/; Kenza Filali, ‘El Mahdi El Majidi s’allie au Français Cerbair, spécialiste des solutions anti-drones’ (9 June 2021), Le Desk, https://ledesk.ma/enoff/el-mahdi-el-majidi-sallie-au-francais-cerbair-specialiste-des-solutions-anti-drones/; Africa Intelligence, ‘National police to expand all-seeing eye on Casablanca’ (21 April 2021), www.africaintelligence.com/north-africa/2021/04/21/national-police-to-expand-all-seeing-eye-on-casablanca,109659676-art.

34 Yassine Majdi, ‘Fathi Hassan (DGSN): Nous planchons sur le recours aux drones et à l’intelligence artificielle’ (July 2022), Tel Quel, https://telquel.ma/sponsors/fathi-hassan-dgsn-nous-planchons-sur-le-recours-aux-drones-et-a-lintelligence-artificielle_1774312.

35 Africa Intelligence, ‘Spain and EU to supply border surveillance equipment to Morocco’ (14 June 2022), www.africaintelligence.com/north-africa/2022/06/14/spain-and-eu-to-supply-border-surveillance-equipment-to-morocco,109791869-art.

36 K. Natter, ‘The formation of Morocco’s policy towards irregular migration (2000–2007): Political rationale and policy processes’ (2014) 52 International Migration 15–28, https://doi.org/10.1111/imig.12114; R. Andersson, ‘Hardwiring the frontier? The politics of security technology in Europe’s “fight against illegal migration”’ (2016) 47(1) Security Dialogue 22–39, https://doi.org/10.1177/0967010615606044.

37 Público, ‘Diez multinacionales se embolsan el 65% del dinero que España destina a frenar la migración’ (1 July 2020), El control de la migración, un oscuro negocio, https://temas.publico.es/control-migracion-oscuro-negocio/2020/07/01/diez-multinacionales-se-embolsan-el-65-del-dinero-que-espana-destina-a-frenar-la-migracion/.

38 R. Andersson, Illegality, Inc.: Clandestine Migration and the Business of Bordering Europe (University of California Press, 2014); El Confidencial and Fundación PorCausa, ‘Fronteras SA: la industria del control migratorio’ (n.d.), www.elconfidencial.com/espana/2022-07-15/fronteras-industria-control-migratorio_3460287/.

39 Público, ‘Interior Implantará Un Sistema de Reconocimiento Facial En La Frontera de Melilla’ (27 July 2015), www.publico.es/politica/interior-implantara-sistema-reconocimiento-facial.html.

40 Fundación Por Causa, ‘Industria Del Control Migratorio 2: Quién Se Lleva El Dinero?’ (2020), https://porcausa.org/somos-lo-que-hacemos/industria-del-control-migratorio/.

41 Africa Intelligence, ‘Spain and EU to supply border surveillance equipment’.

42 Interviewee 8, 18 July 2022. Due to security and ethical concerns, it is not possible to provide further details about interviewees.

43 Interviewee 3, 10 June 2022.

45 Interviewee 6, 29 June 2022. See also France Media Agency, ‘À Marrakech, 38 caméras sur la place Jamaa el Fna’ (4 May 2012), La Presse, www.lapresse.ca/voyage/destinations/afrique/maroc/201205/04/01-4522102-a-marrakech-38-cameras-sur-la-place-jamaa-el-fna.php.

46 Interviewee 2, 31 May 2022; interviewee 5, 24 June 2022; interviewee 8, 18 July 2022.

47 Interviewee 5, 24 June 2022.

48 Casa Transports SA, ‘Cahier Des Clauses Techniques Particulières (CCTP) – Prestations de Réalisation de La 2ème Phase Du Poste Central de La Gestion de La Circulation et de La Vidéoprotection de Casablanca’ (2021) [unpublished].

49 The system deployed in Al Hoceïma also needs to allow the future integration of other ‘intelligent analytics’, such as intrusion detection, people counting, and the detection of gatherings (p. 43).

50 Interviewee 8, 18 July 2022.

51 Privacy International, ‘From smart cities to safe cities: Normalising the police state?’ (15 August 2018), https://privacyinternational.org/long-read/2231/smart-cities-safe-cities-normalising-police-state.

52 Lefébure and Mahmoud, ‘De la “smart” à la “safe” city’; Lefébure and Mahmoud, ‘Casablanca, Marrakech, Dakhla’; see also Hagmann, ‘Globalizing control research’, for a case study of the surveillance system in place in Marrakech. Interviewee 9, 22 July 2022.

53 Ayoub Khattabi, ‘La reconnaissance faciale bientôt à l’aéroport de Rabat-Salé’ (4 August 2022), le360, https://fr.le360.ma/economie/la-reconnaissance-faciale-bientot-a-laeroport-de-rabat-sale-264740.

54 Senado, ‘Diario de Sesiones Senado 22 Marzo 2022’ (2022), www.senado.es/legis14/publicaciones/pdf/senado/ds/DS_P_14_83.PDF.

55 Wide-angle dome cameras with a small form factor (i.e., built as small as possible).

56 Fundación por Causa, ‘Industria Del Control Migratorio 2: Quién Se Lleva El Dinero?’.

57 Amal Kennin, ‘Munẓmāt Isbānya Tantqad Mašrwaʿ “Ālḥudwd Ālḏakya” Fy Sbta Wa Mlylya [Spanish organizations criticize the “smart borders” project in Ceuta and Melilla]’ (14 January 2022), Hespress, www.hespress.com/منظمات-إسبانية-تنتقد-مشروع-الحدود-الذ930243-.html.

60 Mohammad Okba, ‘Paula Guerra Cáceres: “La Inteligencia Artificial Es Una Amenaza Para Los Migrantes y Es Una Forma de Control Migratorio”’ [Paula Guerra Cáceres: Artificial intelligence is a threat to migrants and a form of migration control] (14 June 2022), Baynana, https://baynana.es/es/paula-guerra-caceres-la-inteligencia-artificial-es-una-amenaza-para-los-migrantes-y-es-una-forma-de-control-migratorio/.

61 Aurélie Collas and Sandrine Morel, ‘Au Maroc, Dans l’enclave de Melilla, Une Tentative d’entrée de Migrants Tourne Au Drame’ (27 June 2022), Le Monde, www.lemonde.fr/afrique/article/2022/06/27/a-melilla-une-tentative-d-entree-de-migrants-tourne-au-drame_6132174_3212.html.

62 A Moroccan city neighbouring Ceuta.

63 Andersson, Illegality, Inc.; El Confidencial and Fundación PorCausa, ‘Fronteras SA: la industria del control migratorio’.

64 Khattabi, ‘La reconnaissance faciale’.

65 Interviewee 6, 29 June 2022.

66 World Bank, ‘World Bank supports additional financing for the Casablanca Municipal Support Program-for-results’, Press release (22 June 2022), www.worldbank.org/en/news/press-release/2022/06/22/world-bank-supports-additional-financing-for-the-casablanca-municipal-support-program-for-results.

67 Lorenzo D’Agostino, Zach Campbell, and Maximilian Popp, ‘Wie die EU Marokkos Überwachungsapparat aufrüstet’ [How the EU is arming Morocco’s surveillance apparatus] (25 July 2022), Der Spiegel, www.spiegel.de/ausland/marokko-wie-die-eu-rabats-ueberwachungsapparat-aufruestet-a-d3f4c00e-4d39-41ba-be6c-e4f4ba650351.

68 See Ot L. van Daalen, Joris V.J. van Hoboken, and Melinda Rucz, ‘Export control of cybersurveillance items in the new dual-use regulation: The challenges of applying human rights logic to export control’ (2022) 48 Computer Law & Security Review 105789; European Parliament, ‘Draft Report by the Committee of Inquiry to investigate the use of Pegasus and equivalent surveillance spyware’ (2022), www.sophieintveld.eu/nl/pega-draft-report.

69 See Sabrina Winter, ‘Spähsoftware für Autokraten – Wie die Europäische Union ihre Kontrollen aufweichte – und Deutschland half’ [Spyware for autocrats – How the European Union watered down its controls, and how Germany helped] (5 October 2023), FragDenStaat, https://fragdenstaat.de/blog/2023/10/05/wie-die-europaische-union-ihre-kontrollen-aufweichte-und-deutschland-half/.

70 Wagner, ‘Whose politics? Whose rights?’.

71 Interviewee 6, 29 June 2022.

72 Equivalent to roughly 69 million euros.

73 Interviewee 5, 24 June 2022; interviewee 6, 29 June 2022.

74 Lefébure and Mahmoud, ‘Casablanca, Marrakech, Dakhla’.

75 Interviewee 6, 29 June 2022.

76 A press release of the event is available at http://cndh.org.ma/an/taxonomy/term/447.

77 Interviewee 8, 18 July 2022.

78 ‘Vidéosurveillance: au doigt et à l’œil’ (2 July 2021), Tel Quel, https://telquel.ma/2021/07/02/videosurveillance-au-doigt-et-a-loeil_1727702.

79 Interviewee 2, 31 May 2022; interviewee 5, 24 June 2022; interviewee 8, 18 July 2022.

80 Interviewee 8, 18 July 2022.

Table 19.1 Review of video-surveillance technologies in Moroccan cities (note a)

Source: Compilation by Francesco Colin from multiple sources.
