
Part I - Facial Recognition Technology in Context

Technical and Legal Challenges

Published online by Cambridge University Press:  28 March 2024

Rita Matulionyte, Macquarie University, Sydney
Monika Zalnieriute, University of New South Wales, Sydney


Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2024
This content is Open Access and distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (CC BY-NC-ND 4.0) https://creativecommons.org/cclicenses/

1 Facial Recognition Technology: Key Issues and Emerging Concerns

Neil Selwyn, Mark Andrejevic, Chris O’Neill, Xin Gu, and Gavin Smith
1.1 Introduction

Facial recognition technology (FRT) is fast becoming a defining technology of our times. The prospect of widespread automated facial recognition is currently provoking a range of polarised responses – from fears over the rise of authoritarian control through to enthusiasm over the individual conveniences that might arise from being instantly recognised by machines. In this sense, FRT is a much talked about, but poorly understood, topic of contemporary social, political, and legal importance. As such, we need to think carefully about exactly what ‘facial recognition’ is, what facial recognition does, and, most importantly, what we as a society want facial recognition to become.

Before this chapter progresses further into the claims and controversies surrounding FRT, a few basic definitions and distinctions are required. While various forms of technology fall under the broad aegis of ‘facial recognition’, we are essentially talking about technology that can detect and extract a human face from a digital image and then match this face against a database of pre-identified faces. Beyond this, it is useful to distinguish three forms of facial technology currently being developed and implemented. First, and most widespread to date, are relatively constrained forms of FRT that match a human face extracted from a digital image against one pre-identified face. This ‘one-to-one’ matching will be familiar to the many smartphone users who have opted for the ‘Face ID’ feature. The goal of one-to-one matching (sometimes termed ‘verification’ or ‘authentication’) is to verify that someone is who they purport to be. A smartphone, for example, is programmed to ascertain whether a face in front of the camera belongs to its registered user (or not) and then unlock itself accordingly (or not).

In this manner, one-to-one facial recognition makes no further judgements beyond these repeated one-off acts of attempted identification. Crucially, the software is not capable of identifying who else might be attempting to unlock the device. In contrast, a second, ‘one-to-many’ form of FRT is capable of picking a face out of a crowd and matching it to an identity by comparing the captured face against a database containing thousands (or even millions) of faces. This capacity to isolate any face in a crowd and make an identification has far more scope for mass surveillance and tracking. Alongside these forms of facial recognition designed to verify or ascertain who someone is sits a third form of ‘facial processing’ technology, which seeks to infer what someone is like, or even how someone is feeling. This is technology that extracts faces from digital images and looks for matches against databases of facial expressions and of characteristics associated with gender, race, and age, or in some cases even emotional state, personality type, and behavioural intention. This form of facial scanning has prompted much interest of late, leading to all manner of applications. During the height of the COVID-19 pandemic, for example, we saw the development of facial processing technology designed to detect high body temperature and thus infer symptoms of viral infection from the face.
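For readers who prefer a concrete illustration, the difference between one-to-one and one-to-many matching can be sketched in a few lines of code. The snippet below is a simplified, hypothetical example: it assumes that faces have already been converted into numerical ‘embedding’ vectors by some model, and the function names, gallery structure, and threshold value are illustrative rather than drawn from any particular FRT product.

```python
import numpy as np

# Illustrative threshold on the distance between two face embeddings.
# Real systems tune this value to trade off false matches against false rejections.
THRESHOLD = 0.6

def verify(probe_embedding, enrolled_embedding):
    """One-to-one ('verification'): is this the person they claim to be?"""
    distance = np.linalg.norm(probe_embedding - enrolled_embedding)
    return distance < THRESHOLD

def identify(probe_embedding, gallery):
    """One-to-many ('identification'): who, if anyone, in the gallery is this?
    `gallery` maps identity labels to stored embeddings."""
    best_name, best_distance = None, float("inf")
    for name, embedding in gallery.items():
        distance = np.linalg.norm(probe_embedding - embedding)
        if distance < best_distance:
            best_name, best_distance = name, distance
    return best_name if best_distance < THRESHOLD else None
```

The point of the sketch is simply that verification compares one face against one stored template, whereas identification searches an entire gallery and therefore scales, in both power and risk, with the size of the database.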

All told, considerable time, investment, and effort are now being directed towards these different areas of facial research and development. For computer scientists and software developers working in the fields of computer vision and pattern matching, developing a system that can scan and map the contours and landmarks of a human face is seen as a significant computational challenge. From this technical perspective, facial recognition is conceived as a complex exercise in object recognition, with the face just one of many different real-life objects that computer systems are being trained to identify (such as stop signs on freeways and boxes in warehouses). However, from a broader point of view, the capacity to remotely identify faces en masse is obviously of considerable social significance. For example, from a personal standpoint, most people would consider the process of being seen and scrutinised by another to be a deeply intimate act. Similarly, the promise of knowing who anyone is at any time has an understandable appeal to a large number of social actors and authorities for a range of different reasons. A society where one is always recognised might be seen as a convenience by some, but as a threat by others. While some people might welcome the end of obscurity, others might rightfully bemoan the death of privacy. In all these ways, then, the social, cultural, and political questions that surround FRT should be seen as even more complex and contestable than the algorithms, geometric models, and image enhancement techniques that drive the technology.

1.2 The Increasing Capabilities and Controversies of Facial Recognition Technology

Facial recognition has come a long way since the initial breakthroughs made by Woody Bledsoe’s Panoramic Research lab in Palo Alto nearly sixty years ago. By 1967 Bledsoe’s team had already developed advanced pointillistic methods that could assign scores to faces and make matches against a mugshot database of what was described as 400 ‘adult male Caucasians’. Despite steady technical advances throughout the 1970s and onwards, FRT became practicable on a genuinely large scale only during the 2010s, with official testing by the US National Institute of Standards and Technology reporting accuracy rates for mass-installed systems in excess of 99 per cent by 2018.

As with all forms of AI and automated decision-making, FRT development over the past ten years has benefited from general advances in machine learning – especially deep learning techniques – and from the computational processing power and data storage capabilities required to develop and train large-scale machine learning models. More specifically, however, the forms of FRT that we are now seeing in the 2020s have also benefited from advances in cheap and powerful camera hardware throughout the 2010s (with high-definition cameras installed in public places, objects, and personal devices), alongside the collation of massive sets of pre-labelled facial photographs harvested from publicly accessible social media accounts.

Thus, while the technical ‘proof of concept’ for FRT has long been established, the society-wide acceleration of this technology during the 2020s has been spurred primarily by recent ‘visual turns’ in consumer digital electronics and popular culture towards video and photo content creation, and the rising popularity of self-documenting everyday life. But, equally, it has been stimulated by the desire of organisations to find automated solutions to the problems of distance and anonymity that networked digital technologies have effected, as well as by vendors who market the virtues of the technology as a means to improve security, convenience, and efficiency while eliminating the perceived fallibilities of human-mediated recognition. A combination of cultural factors, exceptional societal events such as the COVID-19 pandemic, and a wider political-economic will to embrace techno-solutions for social issues and to automate access to various spaces and services has thus fashioned receptive conditions for the expansion of FRT and its concurrent normalisation.

And yet the recent rise to prominence of FRT has also generated a fast-growing and forceful counter-commentary on the possible social harms of this technology being further developed and implemented. Growing numbers of critics contend that the technology is profoundly discriminatory and biased, and that it will inevitably be used to reinforce power asymmetries and leverage unfair ends. Such push-back is grounded in a litany of controversies and misuses of FRT over the past few years. For example, the United States has seen regular instances of FRT-driven racialised discrimination by law enforcement and security agencies – not least repeated instances of US police using facial recognition to initiate unwarranted arrests, false imprisonment, and other miscarriages of justice against minoritised social groups. Similar concerns have been raised over FRT eroding civil liberties and human rights – constituting what Knutson describes as conditions of ‘suspicionless surveillance’, with state authorities emboldened to embark on ‘fishing expeditions’ for all kinds of information about individuals.Footnote 1

Elsewhere, FRT has proven a key element in the Chinese authorities’ suppression of Muslim Uyghur populations, as well as in the illegal targeting of political protesters by authorities in Myanmar and Russia. Moreover, for others, FRT represents a further stage in the body’s progressive colonisation by capital, as the technology has enabled the capture of increasingly detailed information about individuals’ activities as they move through public and shared spaces. These data can be used to sort and manipulate consumers according to commercial imperatives, tailoring the provision of products and services so that consumption behaviours are maximised. All told, many commentators contend that there have already been sufficient examples of egregious, discriminatory, and harmful uses of FRT in everyday contexts to warrant the cessation of its future development.

Indeed, as far as many critics are concerned, there is already ample justification for the outright banning of facial recognition technologies. According to Hartzog and Selinger, ‘the future of human flourishing depends on facial recognition technology being banned before the systems become too entrenched in our lives’.Footnote 2 Similarly, Luke Stark – who argues that ‘facial recognition is the plutonium of AI’ – advocates the shutdown of FRT applications in all but the most controlled circumstances.Footnote 3 In Stark’s view, the potential harms of using FRT for any purpose in public settings are sufficient to render its use too risky – akin to using a nuclear weapon to demolish a building. Such calls for the total suppression of FRT have been growing in prominence. As noted scholar-activist Albert Fox Cahn recently put it: ‘Facial recognition is biased, broken, and antithetical to democracy. … Banning facial recognition won’t just protect civil rights: it’s a matter of life and death.’Footnote 4

1.3 Justifications for Facial Recognition as Part of Everyday Life

While some readers might well feel sympathetic to such arguments, there are many reasons to doubt that such bans could ever be practically feasible, even with sufficient political and public support. Proponents of FRT counter that it is not possible simply to ‘dis-invent’ this technology. They argue that FRT is now deeply woven into the fabric of our digital ecosystems and that the commercial imperatives for the information technology and surveillance industries to continue developing FRT products remain too lucrative to give up. Indeed, the technology is already becoming a standard option for closed-circuit television (CCTV) equipment and is regularly used by police even in jurisdictions without any formal rules governing its deployment. The industry-led and practitioner-backed promissory discourse that propagates the various virtues of FRT is already so deeply entrenched in organisational thinking and practice that it seems highly unlikely that systems and applications will be withdrawn from the various social contexts where they now operate. In this sense, we perhaps need to look beyond polarised discussions over the fundamental need (or not) for the existence of such technology and instead pay closer attention to the everyday implications of FRT as it is increasingly rolled out across various domains of everyday life to transform how people, things, and processes are governed.

Proponents of FRT – especially those with a commercial interest in encouraging public and political acceptance of the technology – will often point to a number of compelling ‘use cases’ that even the staunchest opponents of FRT will find difficult to refute. One common example is the use of FRT to reunite kidnapped, lost, or otherwise missing children with their families. The controversial face recognition company Clearview AI, which has scraped billions of face images from online sources, has highlighted the use of the app to identify victims and perpetrators of child sexual abuse.Footnote 5 Other pro-social use cases include the use of face recognition to identify people whose documentation has been lost or destroyed during natural disasters, as well as the development of specialised facial recognition software to identify the victims of war and disaster, providing some sense of closure to loved ones and avoiding the time and cost of alternative methods (such as DNA analysis or dental records). Even critics such as Luke Stark concede that FRT might have merit as a specialised accessibility tool for visually impaired people. Indeed, given the fundamental human need to know who other people are, it is always possible to think of potential applications of this technology that seemingly make intuitive or empathetic sense.

Of course, were FRT to remain restricted to such exceptional ‘potential limited use cases’,Footnote 6 then most people would rarely – if ever – come into contact with the technology, and the concerns raised earlier over society-wide discrimination, biases, and harms would be of little significance. Nevertheless, we already live in times when a much wider range of actual applications of FRT is largely ignored, or presumed uncontentious, by a majority of the general public. These ‘everyday’ uses of FRT, we would argue, already mark the normalisation of a technology that is elsewhere perceived as controversial when in the hands of police, security services, the military, and other authorities.

These ‘pro-social’ uses span a diverse range of everyday contexts and settings. Perhaps one of the most established installations of facial recognition can be found at airports. FRT is a key component of ‘paperless boarding’ procedures, allowing airline travellers to use one-to-one biometric matching between their e-passport photo and their physical face to check in, register their bag-drop, and then proceed through the departure and arrival gates. A major rationale for this automated infrastructure is that it makes travel processes more seamless, lessening queues and cutting costs, while also enhancing the recognition capacities (and thus organisational efficiency) of the airport authority. Various studies have illustrated, for instance, that the technology outperforms human recognisers – in this case, the security officials and airline clerks stationed at passport control or check-in counters. Another public setting with a long history of FRT is the casino industry. Most large casinos now operate some form of FRT. The technology is used strategically to enforce blocklists of banned patrons, to support ‘responsible gaming’ by identifying under-age and ‘impaired’ players and excluding self-identified problem gamblers, and to recognise VIP guests and other high-spending customers at the door, who can then be quickly escorted to private areas and given preferential treatment.

Various forms of facial recognition and facial processing technology are also being deployed in retail settings. The most obvious application is to augment retail stores’ use of CCTV to identify known shoplifters or troublemakers before they gain entry to the premises. Yet, as is the case with casinos, a range of other retail uses have also come to the fore – such as using FRT to recognise repeat customers; target screen-based advertising to particular demographics; collect information on how different customers use retail space and engage with particular arrangements of goods; and gauge satisfaction levels by monitoring the facial expressions of shoppers waiting in checkout lines or engaging with particular advertisements. Another major retail development is the use of ‘facial authentication’ technology to facilitate payment for goods – replacing the need to present a card and then tap in a four-digit PIN with so-called ‘Pay By Face’ systems, and thus lessening the ‘friction’ that stems from a customer forgetting or wrongly entering their code on the EFTPOS terminal, while also reducing opportunities for fraudulent activity to occur.

Alongside these cases, there are other instances of FRT being used in the realms of work, education, and healthcare. For example, the growth of FRT in schools, universities, and other educational settings encompasses a growing range of activities, including students using ‘face ID’ to pay for canteen meals and to check out library books; the detection of unauthorised campus incursions; the automated proctoring of online exams; and even gauging students’ emotions, moods, and levels of concentration as they engage with content from the curriculum and different modes of teaching delivery. Similarly, FRT is finding a place in various work settings – often for ‘facial access control’ into buildings and for governing the floors and areas that employees and contractors can (and cannot) enter, as well as for registering who is in the building and where people are in the case of emergency. Other facial recognition applications also allow factory and construction employees to clock in for work via contactless ‘facial time attendance’ applications, and – in a more disciplinary sense – can be utilised to monitor the productivity and activities of office staff who are working from home. Similarly, in healthcare contexts, FRT is being used for multiple purposes, from more efficient recognition of patients’ identities as they enter clinical facilities so that the need for documentation is reduced (a handy administrative feature in the case of a medical emergency or to support those suffering from mental conditions such as dementia or psychosis), to improving knowledge on wait times and thus better targeting resources and services. FRT is also used to enhance facility security by controlling access to clinical facilities and identifying visitors who have previously caused trouble, as well as for patient monitoring and diagnosis, even to the point of purportedly being able to ‘detect pain, monitor patients’ health status, or even identify symptoms of some illnesses’.Footnote 7

These workplace technologies are complemented by the rise of domestic forms of FRT – with various products now being sold to homeowners and landlords. One growing market is home security, with various manufacturers producing low-cost security systems with facial recognition capabilities. For example, homeowners are now using Wi-Fi-enabled, high-definition camera systems that can send ‘familiar face alerts’ when a person arrives on their doorstep. Anyone with an inclination towards low-cost total surveillance can run up to a dozen separate facial recognition cameras inside a house and its surrounding outside spaces. Facial recognition capabilities are also being enrolled into other ‘smart living’ products, such as the rise of in-car facial processing. Here, some high-end models are beginning to feature in-car cameras and facial analysis technology to infer driver fatigue and trigger ‘drowsiness alerts’. Some systems also promise to recognise the faces of different drivers and adjust seating, mirror, lighting, and in-car temperatures to fit the personal preferences of whoever is sitting behind the wheel.

1.4 The Limits of Facial Recognition ‘for Good’: Emerging Concerns

Each of these ‘everyday’ forms of FRT might appear innocuous enough, but taken as a whole they mark a societal turn towards facial technologies – a growing ecosystem of FRT, perhaps even a biometric consciousness, that is becoming woven into the infrastructural fabric of our urban environments, our social relations, and our everyday lives. Most importantly, it could be argued that these growing everyday uses of FRT distract from the various latent and more overt harms that many people consider this technology to perpetuate, specifically in a landscape where the technology and its diverse applications remain either under-regulated or not regulated at all. Thus, in contrast to the seemingly steady acceptance and practical take-up of FRT throughout our public spaces, public institutions, and private lives, there is a pressing need to pay renewed attention to the everyday implications of these technologies in situ, especially in order to temper some of the political rhetoric and industry hyperbole being pushed by various proponents of these systems.

1.4.1 Function Creep

A first point of contention is the tendency of FRT to be adopted for an ever-expanding range of purposes in any of these settings – in what might be described as processes of ‘function creep’. The argument here is that even ostensibly benign implementations of FRT introduce logics of automated monitoring, tracking, sorting, and blocking into everyday public and private spaces, which can then lead quickly on to further (and initially unanticipated) applications – what Andrejevic describes as a cascading logic of automation.Footnote 8 For example, scanning the faces of casino guests to identify self-excluded problem gamblers in real time may seem like a virtuous use of the technology. Yet the introduction of the technology fits with other uses that casino owners and marketers might also welcome. As noted earlier, facial recognition can be a discreet way of recognising VIP guests and other lucrative ‘high rollers’ at the door, who can quickly be whisked away from the general melee and provided with personalised services to capture, or manipulate, their loyalty to (and thus expenditure in) the venue. This logic can then easily be extended to recognising and deterring repeat customers who spend only small amounts of money or whose appearance is not in keeping with the desired aesthetic of the premises, or to identifying croupiers whose tables are not particularly profitable.

This cascading logic soon extends to various other applications. To continue the casino example, face recognition could be used to identify and prey on excessive gamblers, using incentives to entice them to spend beyond their means – thereby contributing to the ongoing toll the industry takes on those with gambling addictions. What if every vending machine in a casino could recognise customers through the medium of their faces before displaying prices? A vending machine that could adjust the prices based on information about customers’ casino spending patterns and winnings might be programmed to serve as Robin Hood and to charge the wealthy more to subsidise the less fortunate. The more likely impulse and outcome, however, would be for casino operators to attempt to extract from every consumer as much as they would be willing to pay over the standardised price at any given moment. It is easy to envision systems that gauge the motivation of a purchaser at a particular moment, subject to environmental conditions (‘how thirsty do they appear to be?’, ‘what kind of mood do they seem to express?’, ‘with whom are they associating?’, and so on).

This tendency for function creep is already evident in the implementation of facial recognition by governments and state authorities. For example, facial recognition ‘check-in’ systems developed during pandemic lockdowns to monitor COVID-19 cases undergoing home quarantine have since been repurposed by police forces in regions of India to enforce periods of house arrest. Similarly, in 2022 the UK government contracted a tech company specialising in monitoring devices for vulnerable older adults to produce facial recognition watches capable of tracking the location of migrants who have been charged with criminal offences. This technology is now being used to require migrants to scan their faces and log their geolocation on a smartwatch device up to five times a day.Footnote 9 Similarly, Moscow authorities’ use of the city’s network of over 175,000 facial recognition-enabled cameras to identify anti-war protesters drew criticism from commentators upset at the re-appropriation of a system that was originally introduced under the guise of ensuring visitor safety for the 2018 FIFA World Cup and later expanded to help enforce COVID-19 quarantine rules. All these examples illustrate the concern that the logics of monitoring, recording, tracking, and profiling – and the intensified forms of surveillance that result – are likely to exacerbate (and certainly not mitigate) the manipulative, controlling, or authoritarian tendencies of the places within which they are implemented.

1.4.2 The Many Breakdowns, Errors, and Technical Failures of FRT

A second category of harms is that of error and misrecognition – whether misrecognition of people’s presumed identities and/or misrecognition of their inferred characteristics and attributes. One fundamental problem here is that many implementations of FRT simply do not work in the ways promised. In terms of bald numbers, while reported levels of ‘false positives’ and ‘false negatives’ remain encouraging in statistical terms, they still involve large numbers of people being erroneously ‘recognised’ by these systems in real life. Even implementations of FRT to quicken the process of airport boarding report success rates only ‘well in excess’ of 99 per cent – that is, wrongly preventing one in every few hundred passengers from boarding the plane. Airports boast the ideal conditions for FRT: well-lit settings, high-quality passport photographs, high-specification cameras, and compliant passengers who want to be recognised by the camera in order to authenticate their identity and thus their mobility. Unsurprisingly, error rates are considerably higher for FRT systems that do not operate in similarly ideal conditions. More egregious still are the claims made for the capacity of facial processing systems to infer personal characteristics and affective states. As Crawford and many others have pointed out,Footnote 10 the idea of automated facial analysis and inference is deeply flawed – in short, it is simply not possible to accurately infer someone’s gender, race, or age from a face, let alone anticipate and thus modulate their emotions or future behaviours. Given these technological limitations, as well as deeper questions about the knowability of human cognition and the controllability of futures, this imaginary is better situated in the science fiction genre than treated as a plausible part of current policy and practice.
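A back-of-the-envelope calculation makes the scale of the issue concrete. The figures below are illustrative assumptions for a hypothetical busy airport, not rates reported for any particular system or vendor.

```python
# Illustrative arithmetic only: the rates below are assumptions, not figures
# reported for any particular airport or vendor.
passengers_per_day = 50_000        # hypothetical busy airport
false_rejection_rate = 0.005       # a '99.5% success rate' still rejects 1 in 200
false_acceptance_rate = 0.0001     # 1 in 10,000 matched to the wrong identity

wrongly_stopped = passengers_per_day * false_rejection_rate   # 250 people per day
wrongly_matched = passengers_per_day * false_acceptance_rate  # 5 people per day

print(f"Wrongly stopped at the gate each day: {wrongly_stopped:.0f}")
print(f"Wrongly matched to someone else each day: {wrongly_matched:.0f}")
```

Even under these favourable assumptions, hundreds of travellers a day at a single airport would experience some form of misrecognition.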

Whether or not one is perturbed by not being allowed on a plane at the first attempt, or by not being correctly recognised as feeling happy (or sad), probably depends on how often this inconvenience occurs – and what its consequences are. An erroneous emotion inference might simply result in a misdirected advertising appeal. In another instance, however, it could jeopardise one’s job prospects, or might even lead to someone being placed under police suspicion. System failures can have more alarming consequences still – as reflected in the false arrests of innocent, misrecognised individuals, in people being denied access to social welfare benefits, or in Uber drivers being refused access to their work-shift and thereby their income. When a face recognition system fails or makes erroneous decisions, it can be onerous and time-consuming to prove that the machine (and its complex underlying code) is wrong. Moreover, trial programmes and test cases continue to show the propensity of FRT to misrecognise certain groups of people more frequently than others. In particular, trials of FRT continue to show racial bias and a particular propensity to misrecognise women of colour.Footnote 11 Similarly, these systems continue to work less well for people wearing head-coverings and veils, and for those with facial tattoos – in other words, people who do not conform to the ‘majority’ appearance in many parts of the world.Footnote 12

Of course, not being immediately recognised as a frequent flyer or a regular casino customer is unlikely to lead to serious inconvenience or long-term harm in the same way that being the victim of false arrest can generate trauma and distrust – or even ruin someone’s life. Yet even these ‘minor’ misrecognitions and denials might well constitute further micro-aggressions in a day already replete with them. In celebrating the conveniences of contactless payments and skipping queues, we need to remember that FRTs are not experienced by every ‘user’ as making everyday life smoother, frictionless, and more convenient. These systems are layered on long histories of oppression and inequity, and often add further technological weight or a superficial technological veneer to already existing processes of social division and differentiation.

1.4.3 The Circumstantial Nature of Facial Recognition ‘Benefits’

As these previous points suggest, it is important to recognise how the nature and extent of these harms are experienced disproportionately – with already minoritised populations bearing the worst effects. Indeed, the diverging personal experiences of technology (what Ruha Benjamin describes as the ‘vertical realities’ of how different groups encounter the same technology) go some way to explaining why FRT is still being welcomed and embraced by many people.Footnote 13 While many groups experience facial recognition as a technology of surveillance and control, the same technologies are experienced as sources of convenience and security by others. As Benjamin reminds us, ‘power is, if anything, relational. If someone is experiencing the underside of an unjust system, others, then, are experiencing its upside’.Footnote 14

In this sense, many seemingly innocuous examples of FRT are apt illustrations of what Chris Gilliard and David Golumbia term ‘luxury surveillance’ – the willingness of middle-class consumers to pay a premium for tracking and monitoring technologies (such as personal GPS devices and home smart-camera systems) that are imposed, unwillingly and in alternative guises, on marginalised groups. This asymmetry highlights the complicated nature of debates over the benefits and harms of the insertion of FRT into public spaces and into the weave of everyday social relations. Indeed, ‘smart’ doorbells, sentient cars, and ‘Pay By Face’ kiosks are all examples of how seemingly innocuous facial recognition features are being quietly added to some of the most familiar and intimate settings of middle-class lives, at the same time as major push-back occurs against the broader use of this technology in public spaces and by police and security forces, where the stakes are perceived to be higher or much less certain. At the moment, many middle-class people seem willing to accept two different modes of the same technology. On the one hand is the ‘smart’ convenience of being able to use one’s face to unlock a smartphone, pay for a coffee, open a bank account, or drive to work in comfort. On the other hand is a general unease at the ‘intrusive’ and largely unregulated use of FRT in their child’s school, in their local shopping centre, or by their local police force.

Yet this ambiguity could be seen as a slippery slope – weakening protections for how the same technology might be used on less privileged populations in more constrained circumstances. The more that FRT is integrated into everyday objects such as cars, phones, watches, and doorbells, the more difficult it becomes to argue for the complete banning of the technology on grounds of human rights or racial discrimination. Even requesting limitations on its application gets harder the more diversified, hard-wired, and normalised the technology becomes. The downside of middle-class consumers continuing to engage with forms of facial recognition that they personally feel ‘work for them’ is thus a decreased opportunity to initiate meaningful conversations about whether this is a technology that we collectively want to have in our societies and, if so, under what kinds of conditions. As Gilliard and Golumbia conclude:

We need to develop a much deeper way of talking about surveillance technology and a much richer set of measures with which to regulate their use. Just as much, we need to recognize that voluntarily adopting surveillance isn’t an isolated choice we make only for ourselves but one that impacts others in a variety of ways we may not recognize. We need always to be asking what exactly it is that we are enthusiastically paying for, who ‘we’ are and who is ‘them’ on the outside, and what all of us are being made subject to when we allow (and even demand) surveillance technology to proliferate as wildly as it does today.Footnote 15

1.4.4 The Harms of FRT Cannot Be ‘Fixed’

A fourth point of contention concerns the ways in which discussion of the harms of FRT in political, industry, and academic circles continues to be limited by a fundamental mismatch between computational and societal understandings of ‘bias’. The idea that FRT can be ‘fixed’ by better data practices and technical rigour conveys a particular mindset – that algorithms and AI models are not biased in and of themselves. On this view, algorithms and AI models simply amplify bias that might have crept into the datasets on which they are trained and/or the data that they are fed, and any data-driven bias is therefore ultimately correctable with better data. As Deb Raji describes, however, this is not the case.Footnote 16 It is right to acknowledge that the initial generation of data can reflect historical bias and that the datasets used to develop algorithmic models will often contain representation and measurement bias. However, every aspect of an algorithmic system is the result of programming and design decisions and can therefore contain additional biases. These include decisions about how tasks are conceived and codified, as well as how choices are modelled. Algorithmic models are also subject to what are termed aggregation and evaluation biases. All told, any outcome of an algorithmic model is shaped by subjective human judgements, interpretations, and discretionary decisions along the way, and these are reflected in how the algorithm then autonomously performs its work and acts on the world. In this sense, many critics argue that FRT developers are best advised to focus on increasing the diversity of their research and development teams, rather than merely the diversity of their training datasets.

Yet increasing the diversity of AI development teams will do little to improve how the algorithmic outputs and predictions of FRT are then used in practice – by, for example, racist police officers, profit-seeking casino owners, and suspicious employers. Ultimately, concerns over the biased and discriminatory dimensions of FRT relate to the harms that an FRT system can do. As many of the examples outlined in previous sections of this chapter suggest, a great many harms are initiated and amplified through the use of FRT. While many of these are existing harms, the bottom line remains that FRT used in a biased and divided society will produce biased outcomes, which will in turn exacerbate harms already disproportionately experienced by socially marginalised groups. Thus, as Alex Allbright puts it, rather than focussing on the biases of predictive tools in isolation, we also need to consider how they are used in different contexts – not least social settings and institutional systems that are ‘chock-full’ of human judgements, human discretions, and human biases.Footnote 17

In this sense, all of the harms of FRT discussed so far in this chapter need to be seen in terms of biased datasets, biased models, and the biased contexts and uneven social relations within which any algorithmic system is situated and used. To the extent that it concentrates new forms of monitoring and surveillance power in the hands of commercial and state entities, the deployment of facial recognition contributes to these asymmetries. This means that algorithmic ‘bias’ is not simply a technical data problem, but a sociotechnical problem constituted both by human relations and by the ensuing human–data relations that seek to represent and organise them – and is therefore not something that can ever be ‘fixed’. Humans will always act in subjective ways, and our societies will always be unequal and discriminatory. As such, our data-driven tools will inevitably be at least as flawed as the worldviews of the people who make and use them. Moreover, our data-driven tools are most likely to amplify existing differences and unfairness, and to do so in opaque ways, unless they are deliberately designed to be biased towards more inclusive outcomes and ‘positive’ discrimination.

All told, there cannot be a completely objective, neutral, and value-free facial recognition system – our societies and our technologies simply do not and cannot work along such lines. The danger, of course, is not merely that FRT will reproduce existing biases and inequalities but that, as an efficient and powerful tool, it will exacerbate them – and create new ones. As such, the development of a more ‘effective’ or ‘accurate’ means of oppression is not one to be welcomed. Instead, many applications of FRT can be accused of bolstering what Ruha Benjamin terms ‘engineered inequality’ – entrenching injustice and disadvantage in ways that may superficially appear more objective and scientific, especially given their design and implementation ‘in a society structured by interlocking forms of domination’.Footnote 18 Thus, as far as Benjamin is concerned, the push for more inclusive datasets ‘is not a straightforward good but is often a form of unwanted exposure’.Footnote 19

1.5 Future Directions and Concerns

The development of FRT to date clearly raises a host of important and challenging issues for regulators and legislators to address. Before we consider the prospects for what this handbook describes as ‘possible future directions in regulating governments’ use of FRT at national, regional and international levels’, it is worth considering the broader logics and emerging forms of FRT and facial processing that have been set in train by the development of FRT to date, and the further issues, concerns, and imperatives that they raise.

One obvious emerging application of concern is the growing use of facial processing to attempt to discern internal mental states. For example, facial analysis has been used by job screeners to evaluate the stress levels and even the veracity of interviewees. While such inferences are without scientific basis, this does not necessarily stop them from being put to use in ways that affect people’s life chances. This raises the human rights issue of protecting the so-called forum internum – that is, control over the disclosure of one’s thoughts, attitudes, and beliefs. Inferential technologies seek to bypass individuals’ ability to control the disclosure of their innermost sentiments and thoughts by reading these directly from visible external signs. We are familiar with attempts to ‘read’ sentiment through non-verbal cues in the course of interpersonal interactions, but automated systems lend these hunches the patina of (false) scientific accuracy and machinic neutrality, in potentially dangerous and misleading ways. The use of this type of automated inference in any decision-making that affects people’s life chances should be strictly limited.

Second is the prospect of the remote, continuous, passive collection of facial biometric data at scale, and across all public, semi-public and private spaces. At stake is not simply the diminishment of individual privacy, but also the space for democratic participation and deliberation. Unleashed on the world, such technology has a very high potential for a host of new forms of social sorting and stalking. Marketers would like to be able to identify individuals in order to target and manipulate them more effectively, and to implement customised offers and pricing. Employers, health insurers, and security officials would be interested in using it for the purposes of background checking and forensic investigations. With such technology in hand, a range of entities could create their own proprietary databases of big spenders, poor tippers, potential troublemakers, and a proliferating array of more and less desirable customers, patients, employees, tenants, clients, students, and more.

Indeed, the continued integration of automated facial recognition into urban CCTV systems marks a fundamental shift in how surveillance of public space operates. Standard ‘dumb’ forms of CCTV record what people can already see in public and shared space, but do not add extra information. The ability to add face detection and recognition enables new strategies of surveillance and control that are familiar from the online world. For example, with facial recognition the target of CCTV surveillance can shift from particular individuals or groups to overall patterns. Cameras that track all the individuals within their reach enable so-called pattern-of-life analysis, looking for patterns of activity that facilitate social sorting and predictive analytics. A system might learn, for example, that particular patterns of movement or interaction with others correlate with the likelihood of an individual making a purchase, getting into a fight, or committing a crime. This type of analysis does not necessarily require identifying individuals, merely recognising and tracking them over time and across space.

Finally, then, there are concerns over how FRT forms part of an increasing turn towards surveillance as a replacement for trust. As the philosopher Byung-Chul Han puts it, ‘Whenever information is very easy to obtain, as is the case today, the social system switches from trust to control.’Footnote 20 No amount of surveillance can ever fully replace trust, but it can undermine it, leaving an unfillable gap that serves as an alibi for ever more comprehensive and ubiquitous data collection. Han describes the resulting imperative to collect data about everything, all the time, in terms of the rise of ‘the society of transparency’. It is not hard to trace the symptoms of this society across the realms of social practice: the collection of increasingly comprehensive data in the workplace, the home, the marketing realm, and public spaces. As sensors, network connections, data storage, and processing become cheaper and more powerful, more data can be collected about everything and anything. Face recognition makes it possible to link data collected about our activities in shared and public spaces to our specific identities – and thus to link it with all the other data troves that have been accumulating both online and offline. All told, the concern here is that the technology reinforces broader tendencies towards the automated forms of control that characterise social acceleration and the crisis of social trust associated with the changing information environment.Footnote 21

1.6 The Need for (and Prospects of) Regulation and Oversight

With all these issues in mind, it seems reasonable to conclude that FRT needs to be subject to heightened scrutiny and accountability. For many commentators, this scrutiny should involve increased regulatory control, government oversight, and greater public understanding of the issues arising from what is set to be a defining technology of the next decade and beyond. That said, as this chapter’s brief overview of the sociotechnical complexity of the technology suggests, any efforts to regulate and hold FRT to account will not be easy. We therefore conclude by briefly considering a number of important concerns regarding the philosophical and regulatory implications of FRT – issues that will be developed and refined further in the remainder of the book.

As with most discussions of technology and society, many of the main concerns over FRT relate to issues of power. Of course, it is possible to imagine uses of FRT that redress existing power imbalances and provide otherwise marginalised and disempowered populations with a means of resisting authoritarian control and holding power accountable. For example, during the 2020 Black Lives Matter protests, activists in Portland developed FRT to allow street protesters to identify and expose violent police officers. Nevertheless, while it can be used for sousveillance, the mainstream roll-out of FRT across society looks set to deepen asymmetries of power in favour of institutions. Indeed, there is an inherent asymmetry in both the power and the knowledge associated with these processes of datafication. Only those with access to the databases and the processing power can collect, store, and put this information to use. In practice, therefore, face recognition is likely to become one more tool used primarily by well-resourced organisations and agencies that can afford the necessary processing power and monitoring infrastructure.

As such, any efforts to regulate FRT need to focus on issues of civil rights and democracy, the potential misuse of institutional power, and the resulting harms to marginalised and minoritised groups. In this sense, one of the profound shifts entailed by the widespread use of automated facial recognition is the loss of the ability to opt out. When the public spaces we need to access for the conduct of our daily lives – the shops where we buy our food, the sidewalks and streets we travel – become equipped with face recognition, we have no meaningful choice over whether to consent to the use of the technology. In many cases we may have no idea that the technology is in place, since it can operate passively and at a distance. The prevalence of existing CCTV networks makes it possible to implement facial recognition in many spaces without significantly transforming the visible physical infrastructure.

Following this logic, it is likely that automated face recognition will, in the near future, become a standard feature of existing CCTV surveillance systems. Regulatory regimes that rely on public notification are ineffective if they do not offer genuine opt-out provisions – and such provisions are all but impossible in shared and public spaces that people need to access. When face recognition is installed in public parks or squares, or in commercial locations such as shopping centres, the only choice will be to submit to its monitoring gaze or to avoid those spaces. Under such conditions, people’s decision to use those spaces cannot be construed as a meaningful form of consent. In many cities CCTV has become so ubiquitous that its use passes without public notification, and without specific restrictions facial recognition is likely to follow the same trajectory. Seen in this light, there are many reasons why regulation and other attempts to hold FRT to account face an uphill battle (if not the prospect of being thwarted altogether). This is not to say that regulation is impossible. For example, more than two dozen municipalities in the United States banned government use of one-to-many face recognition during the first few years of the 2020s, and the European Union continues to moot strict regulation of its use in public spaces. Nevertheless, the use of the technology by private entities for security and marketing, and by government agencies for policing, continues apace.

All our future discussions of possible FRT regulation and legislation therefore need to remain mindful of the strong factors driving continued demand for FRT and its uptake. The promise of convenience and security, combined with increasing accuracy and falling costs, all serve as strong drivers for the uptake of the technology. There are also sustained commercial imperatives to continue developing this technology – not least the emergence of a $5 billion FRT industry that is estimated to grow to $50 billion by 2030. At the same time, we are living in a world where a number of powerful authoritarian drivers support the continued uptake of FRT regardless of pushback from civil society. As discussed earlier in this chapter, universal automated access comes at the expense of perpetual tracking and identification. In addition to the pathologies of bias and the danger of data breaches and hacking, there is the threat of authoritarian levels of control. Widespread facial recognition creates the prospect of a tool that could, in the wrong hands, be used to stifle political opposition and chill speech and legitimate forms of protest. It can also be used to extract detailed information about people’s private lives, further shifting control over personal information into the hands of those who own and control the monitoring infrastructure.

Regardless of such impediments and adversaries, many people contend that the time to develop clear regulations in keeping with commitments to democracy and human rights is now. Building support for such regulation will require concerted public education programmes that focus on the capabilities and potential harms of the technology. At the moment, its potential uses and capabilities are not widely understood and are often framed in terms of personal privacy invasion rather than the potentially deleterious effects on democracy and civic life. Developing appropriate regulation will also require negotiating the tension between the commercial pressures of the data-driven surveillance economy, the security imperatives of law enforcement, and the civic values of freedom of expression, movement, and personal autonomy. The outcome we need to avoid is the one towards which we currently seem headed: a situation in which the widespread deployment of the technology takes place in a regulatory vacuum, without public scrutiny or accountability.

A further legal challenge lies in the fact that, as discussed earlier, consent-based schemes are not the best approach to protecting individual rights. In some contexts, moreover, preventing particular uses of FRT on individual rights grounds may not be in the interest of the general public. In this complex situation, we should not be forced to choose between protecting individuals and protecting society at large (a tension that Chinese lawmakers have grappled with through a revised data protection law that took effect in 2021). Instead, we need to develop laws that do not sacrifice self-governance (the protection of individual rights) when applications of FRT are promoted in the name of the public interest. The boundaries of the lawful application of FRT need to be established, with the liability of those who collect, collate, and analyse facial data a key consideration. For example, if a use of FRT is permitted, the re-use of the resulting information without individual authorisation should be prohibited. Emphasis should also be placed on preventing harms that result from public interest exceptions.

1.7 Conclusions

These are just a few opening observations and points in what needs to be a prolonged society-wide discussion over the next decade and beyond. While it is unlikely that a consensus will ever be reached, it is possible to develop a clear sense of the boundaries that we want to see established around this fast-changing set of technologies. That said, such is the pace of change within biometrics and AI that facial recognition technology may prove only a passing phase – researchers and developers are already becoming enthused about scanning various other bodily features as a route to individual identification and inference. Yet many of the logics highlighted in this chapter will apply to whatever other part of the human body this technology’s gaze is next trained on – be it gait, voice, heartbeat, or something else.

Of course, many of the issues raised in this chapter are not unique to FRT per se – as McQuillan reminds us, every instance of ‘socially applied AI has a tendency to punch down: that is, the collateral damage that comes from its statistical fragility ends up hurting the less privileged’.Footnote 22 Nevertheless, it is worth spending time unpacking what is peculiar about the computational processing of one’s face as the focal point for this punching down and cascading harm. This chapter has therefore presented a selection of issues identified from the perspective of sociology as well as cultural, media, and surveillance studies. Many other disciplines across the humanities and social sciences are also scrutinising these issues – all of which bring valuable context to legal discussions of FRT and are worth engaging with. Yet we hope that law and the legal disciplines can bring an important and distinctive set of insights in taking these issues and conversations forward. Legal discussions of technology bring a valuable pragmatism to otherwise ambiguous social science portrayals of problematic technologies such as FRT – striving to develop ‘a legitimate and pragmatic agenda for channelling technology in the public interest’.Footnote 23 We look forward to these conversations continuing across the rest of this handbook and beyond.

2 Facial Recognition Technologies 101: Technical Insights

Ali Akbari
2.1 Introduction

The best way to anticipate the risks and concerns about the trustworthiness of facial recognition technologies (FRT) is to understand the way they operate and how such decision-making algorithms differ from conventional information technology (IT) systems. This chapter presents a gentle introduction to the characteristics, building blocks, and some of the techniques used in artificial intelligence (AI) and in the FRT solutions that AI enables. Owing to simplification and space limitations, this is by no means a complete or precise representation of such technologies. However, it is enough to better understand some of the available choices, the implications that might come with them, and the considerations that can help minimise some of their unwanted impacts.

When talking about facial recognition technologies, usually the first thing that comes to mind is identifying a person from their photo. However, when analysing an image that includes a face, quite a few distinct processes may be involved. Apart from initial general image preparation and enhancement steps, everything starts with face detection. This is the process of finding the location of every face within an image; it is usually followed by extracting that part of the image and applying some alignment to prepare it for the next steps.
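As a rough illustration of the detection step, the snippet below uses OpenCV’s classical Haar-cascade detector, which is only one of many possible approaches (modern systems typically use deep learning detectors instead). The file name is a placeholder and the parameter values are common defaults rather than recommendations.

```python
import cv2

# Load an image and convert it to greyscale (the cascade works on intensity values).
image = cv2.imread("group_photo.jpg")            # placeholder file name
grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Classical Haar-cascade face detector shipped with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Returns one (x, y, width, height) box per detected face.
faces = detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)

# Crop each detected face; a real pipeline would also align each crop
# (e.g., rotate so the eyes are level) before passing it to recognition.
face_crops = [image[y:y + h, x:x + w] for (x, y, w, h) in faces]
print(f"Detected {len(face_crops)} face(s)")
```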

Face recognition, which follows the detection step, deals with assessing the identity of the person in the extracted face image and can be either an identification or a verification process. Face identification is a 1:N, or one-to-many, search in which the target face image is compared with a database of many known facial images. If the search is successful, the identity of the person in the image is found. For example, when conducting a police check, a newly taken photo of a person might be checked against a database of criminal mugshots to find whether that person has any past records. In the verification process, by performing a 1:1, or one-to-one, check, we are trying to confirm a claimed identity by comparing a new facial image with a previously confirmed photo. A good example is when a newly taken photo at a border checkpoint is compared with the photo in the passport to confirm that it is the same person.
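The following sketch shows both modes using the open-source face_recognition Python library, which is just one convenient wrapper among many; the file names, identity labels, and tolerance value are illustrative assumptions rather than recommended settings.

```python
import face_recognition

# Enrolment: compute a 128-dimensional encoding for each known, labelled face.
known_names = ["person_a", "person_b"]           # placeholder identities
known_encodings = [
    face_recognition.face_encodings(
        face_recognition.load_image_file(f"{name}.jpg"))[0]
    for name in known_names
]

# A new probe image, e.g. a photo taken at a checkpoint (placeholder file name).
probe_image = face_recognition.load_image_file("probe.jpg")
probe_encoding = face_recognition.face_encodings(probe_image)[0]

# 1:N identification: search the whole gallery for the closest match.
distances = face_recognition.face_distance(known_encodings, probe_encoding)
best = distances.argmin()
if distances[best] < 0.6:                        # illustrative tolerance
    print(f"Identified as {known_names[best]}")
else:
    print("No match found in the gallery")

# 1:1 verification: compare the probe against a single claimed identity only.
claimed_encoding = known_encodings[known_names.index("person_a")]
is_same_person = face_recognition.compare_faces([claimed_encoding], probe_encoding)[0]
print(f"Verification against person_a: {is_same_person}")
```

Note that identification scans the entire gallery, whereas verification touches only the single record associated with the claimed identity; this difference drives much of the regulatory distinction between the two modes.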

Although it is not always categorised under the facial recognition topic, another form of facial image processing is face categorisation or analysis. Here, rather than the identity of the person in the image, other characteristics and specifications are important. Detecting demographic information such as gender, age, or ethnicity, facial expression detection, and emotion recognition are a few examples, with applications such as sentiment analysis, targeted advertising, attention detection, or driver fatigue identification. However, this sub-category is not the focus of this text.

All of the above-mentioned processes on facial images fall under the computer vision field of research, which is about techniques and methods that enable computers to understand images and extract various information from them. This closely relates to image processing, which can, for example, modify and enhance medical images but not necessarily extract information or automatically make decisions based on them. Eventually, if we go one step further, along with computer vision and image processing, any other unstructured data processing such as speech processing or natural language processing falls under the umbrella of AI. The importance of recognising this relationship is that facial recognition technologies inherit a lot of their characteristics from AI, and in the next section we take a closer look at some of these characteristics to better understand some of the underlying complexities and challenges of FRT.

2.2 What Is AI?

Although there have been many debates around the definition of AI, we do not yet have one universally accepted version. The definition by the Organisation for Economic Co-operation and Development (OECD) is one of the more commonly referenced: ‘Artificial Intelligence (AI) refers to computer systems that can perform tasks or make predictions, recommendations or decisions that usually require human intelligence. AI systems can perform these tasks and make these decisions based on objectives set by humans but without explicit human instructions.’Footnote 1

2.2.1 AI versus Conventional IT

While the OECD has provided a good definition, in order to better understand AI systems and their characteristics it would be beneficial to compare them with conventional IT systems. This can be considered across the following three dimensions:

  • Instructions – In order to achieve a goal, conventional IT systems are given explicit, step-by-step instructions. AI systems, in contrast, are given objectives, and the system comes up with its own solution for achieving them. This is one of the most important factors that makes the behaviour of AI systems not necessarily predictable, because the exact solution is not dictated by the developers of the system.

  • Code – The core of a conventional IT system is the codebase, written in one of the programming languages, that carries the above-mentioned instructions. Although AI systems also contain code that defines the algorithms, the critical component that enables them to act intelligently is a knowledge base. The algorithms apply this knowledge to the inputs of the system to make decisions and perform tasks (the so-called outputs).

  • Maintenance – It is very common to have periodic maintenance on conventional IT systems to fix any bugs that are found or to add or improve features. However, even an AI system that is completely free of bugs and performing perfectly might gradually drift and start behaving poorly. This can be because of changes in the environment or in the internal parameters of the models in the case of continuous learning capability (discussed further in Section 2.3.4). Owing to this characteristic, apart from maintenance, AI systems need continuous monitoring to make sure they perform as expected throughout their life cycle.

2.2.2 Contributors in AI Systems

A common challenge with FRT, and more broadly with AI systems, is to understand their behaviour, explain how the system works or how a decision was made, and define the scope of responsibilities and accountability. Looking from this angle, it is also worth reminding ourselves of another characteristic of AI systems, which is the possibility of many players contributing to building and applying such solutions.

For example, let us consider a face recognition solution being used for police checks. The algorithm might be one of the latest breakthroughs developed by a research centre or university and publicly published in a paper. Then a technology provider may implement this algorithm in their commercial tools to create an excellent face matching engine. However, in order to properly train the models in this engine, they leverage data collected and prepared by a third company that may or may not have a commercial interest in it. This face matching engine by itself only accepts two input images and outputs a similarity score that cannot be used directly by police. Hence a fourth company comes into play by integrating this face matching engine into a larger biometrics management solution in which all required databases, functionalities, and user interfaces exactly match the police check requirements. Before putting this solution into operation, the fifth player is the police department, which, in collaboration with the fourth company, runs tests and decides the suitable parameters and configuration that this solution should use when implemented. Finally, the end users who take photos during operation of the system are the sixth player: they affect its success by providing images captured in the best possible conditions.

In such a complex scenario, with so many contributors to the success or failure of an FRT solution, investigating the behaviour of the system or one specific decision is not as easy as in the case of other simpler software solutions.

2.3 AI Life Cycle and Success Factor Considerations

Considering the foregoing, the life cycle of AI systems also differs slightly from the common software development life cycle. Figure 2.1 is a simple view of these life cycle steps.

Figure 2.1 AI system life cycle

2.3.1 Design

Following the inception of an idea or the identification of a need, everything starts with the design. Many critical decisions are made at this stage that can be based on various hypotheses and potentially reviewed and corrected in later steps. Such decisions may include, but are not limited to, the operational requirements, relevant data to be collected, expected data characteristics, availability of training data or approaches to create it, suitable algorithms and techniques, and acceptance criteria before going into operation. For example, an FRT-based access control system developer might assume that their solution is always going to be used indoors and in a controlled imaging environment, and decide that only simple preprocessing is required based on this consideration. A system developed based on this design may perform very poorly if used for outdoor access control in a crowded environment with varying light and shade conditions.

2.3.2 Data Preparation

The data preparation can be one of the most time-consuming and critical steps of the work. As discussed in Sections 2.3.3 and 2.6, it can also be an important factor in the success, failure, or unwanted behaviour of the system. This stage covers all the data collection or creation, quality assessment, cleaning, feature engineering, and labelling steps. When it comes to the data for building and training AI models, especially for a complex and sensitive problem such as face recognition, there is always a difficult trade-off between volume, quality, and cost. More data helps to build stronger models, but curating lots of high-quality data is very costly. Owing to the time, cost, and other limitations involved in the creation of such datasets, developers are sometimes forced to rely on lower-quality publicly available or crowd-sourced datasets, or to pay professional data curation companies to help them with this step. For a few examples of the datasets commonly used in FRT development, you can refer to Labeled Faces in the Wild,Footnote 2 MegaFace,Footnote 3 or MS-Celeb-1M.Footnote 4 However, developers should note that not only is it very difficult to perform a thorough quality check on such huge datasets, but each also has its own characteristics and limitations that are not necessarily suitable for every type of FRT development activity. Inadequate use of such datasets might lead to unwanted bias in FRT solutions that only gets noticed after repeatedly causing problems.

2.3.3 Modelling and Validation

When the data is prepared, actual development of the system can start. The core of this stage, which is one of the most iterative steps in the AI life cycle, is to find the most suitable algorithms and configurations, and to train models by applying the algorithms to the previously prepared training data. This is followed by running enough test and validation processes to become confident of the suitability of the models for the intended application. Usually, many iterations are required to get to the desirable performance levels and to confidently sign off a model to operate in the production environment. Incorrect selection of the algorithms, performance metrics, or validation criteria can easily produce misleading results. For example, when checking a suspect’s photo against a database of previous criminal records, we may want to consider different acceptance levels for false positive versus false negative rates; hence, a straight accuracy measure is not enough to pass or fail a model. Similarly, for a sensitive application, we might want to check such measures separately for various cohorts across demographic dimensions such as gender and ethnicity, to minimise any chance of bias. An accurate technical understanding of performance measurement metrics and their meaning is critical to the correct selection and application of FRT. Unfortunately, a lack of adequate AI literacy among some of the business operators of FRT technologies can lead to the choice of solutions that are not suitable for their application. For example, a technology that works well for 1:1 verification and access control to a digital device does not necessarily perform as well for 1:N search within a criminal database.
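As a simple illustration of why a single accuracy figure is not enough, the sketch below computes false positive and false negative rates separately for each demographic cohort from a table of verification decisions. The file name and column names are assumptions made for the purpose of the example.

```python
# Sketch: per-cohort error rates for a face verification model.
# "decisions.csv" and its columns (cohort, is_same_person, predicted_match) are hypothetical.
import pandas as pd

df = pd.read_csv("decisions.csv")

def error_rates(group: pd.DataFrame) -> pd.Series:
    impostors = group[group["is_same_person"] == 0]
    genuines = group[group["is_same_person"] == 1]
    fpr = (impostors["predicted_match"] == 1).mean()   # false positive rate
    fnr = (genuines["predicted_match"] == 0).mean()    # false negative rate
    return pd.Series({"FPR": fpr, "FNR": fnr, "n": len(group)})

# Large gaps between cohorts can signal bias even when overall accuracy looks high.
print(df.groupby("cohort").apply(error_rates))
```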

2.3.4 Operation and Monitoring

Following the build and the successful passing of all readiness tests, the AI system is deployed and put into operation. AI systems, like any other software, need considerations such as infrastructure and architecture to address the required security, availability, speed, and so on. Additionally, as briefly discussed earlier, operators should make sure that the conditions of the application are suitable and match what the models were intended and built for. What should not be forgotten is that AI systems, especially in high-risk applications, are not ‘set and forget’ technologies. If an AI system performs very well when initially implemented, that does not necessarily mean it will keep performing at the same level. If continuous learning is used, the models keep dynamically changing and adapting themselves, which of course means the new behaviour needs to be monitored and confirmed. However, even if the models are static and not changing, a drift can still happen that changes the performance of the models. This can be due to changes in the concept or the environment in which the model is performing. For example, specific facial expressions in different cultures might appear differently. Hence, an FRT system that is built successfully to detect various facial expressions in a specific country might start behaving poorly when many people from a different cultural background start interacting with it. A monitoring process running alongside the main solution makes sure such unexpected changes are detected in time to be addressed properly. For instance, a very simple monitoring process for the scenario described here is to observe, on a regular basis, the ratio of the various expressions that are detected. If a persistent shift in detecting some specific expressions happens, it can be a signal to start an investigation. A good approach is to build the paired monitoring processes in parallel with the design and development of the main models.
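A minimal version of the monitoring process described above might simply log the ratio of detected expressions and flag a persistent shift away from the ratios observed during validation. The baseline distribution, label names, and alert threshold below are illustrative assumptions only.

```python
# Illustrative drift monitor: compares the distribution of predicted expression
# labels in production against a baseline recorded at validation time.
from collections import Counter

BASELINE = {"neutral": 0.55, "happy": 0.30, "surprised": 0.10, "other": 0.05}  # assumed
ALERT_THRESHOLD = 0.15  # assumed maximum tolerated shift per label

def check_drift(predicted_labels: list) -> list:
    counts = Counter(predicted_labels)
    total = max(sum(counts.values()), 1)
    alerts = []
    for label, expected in BASELINE.items():
        observed = counts.get(label, 0) / total
        if abs(observed - expected) > ALERT_THRESHOLD:
            alerts.append(f"{label}: expected ~{expected:.0%}, observed {observed:.0%}")
    return alerts  # a non-empty list would trigger a human investigation

print(check_drift(["neutral"] * 90 + ["happy"] * 10))
```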

2.3.5 Review

Review can happen periodically, as with conventional software, or based on triggers coming from the monitoring process. It can be considered a combination of simplified evaluation and design steps that identifies the gaps between the existing circumstances of the AI system and the most recent requirements. As a result of such an assessment, the AI models may go through another round of redesign and retraining, or be completely retired because of changes in circumstances.

2.4 Under the Hood of AI

At a very simplistic level, and in a classic view, an AI system consists of a form of representation of knowledge, an inference engine, and an optional learn or retrain mechanism, as illustrated in Figure 2.2.

Figure 2.2 AI system key components

Knowledge in an AI system may be encoded and represented in different forms, including but not limited to rules, graphs, statistical distributions, mathematical equations and their parameters, or a combination of these. The knowledge base represents facts, information, skills, or experiences from human knowledge, or existing relationships, associations, or other relevant information in the environment that can help in achieving the main objective of the AI system. For example, in an FRT system the knowledge might define what shapes, colours, or patterns can indicate the location of a human face in the input image. Or it can suggest which areas and measurements on the face would be the most discriminating factors between two different human faces. However, it is not always as explicit and explainable as in these examples.

The inference engine consists of the algorithms, mechanisms, and processes that allow the AI system to apply knowledge to the input facts and observations and to come up with solutions for achieving its objective, making a prediction, or making a decision. The type of inference engine depends on the knowledge representation model, since it must be able to apply that specific type of model; the two usually come as a pair. However, these two components are not always separable. For example, in AI systems based on artificial neural networks (ANNs), the knowledge is stored as the trained parameters and weights of the network. In such cases we can consider the inference engine and knowledge base to be combined: the ANN algorithm together with its parameters after training.

Learn or retrain, as already mentioned, is an optional component of an AI system. Many AI systems, after being fully trained and put into operation, remain static and do not receive any feedback from the environment. However, when the ‘learn’ component exists, after making a decision or prediction the AI system receives feedback that indicates the correct output. The learning mechanism compares the predicted output with the feedback and, in case of any deviation or error, tries to readjust the knowledge to gradually minimise the overall error rate of the system. For example, every time your mobile phone’s Face ID fails to identify your face and you immediately unlock the phone using your passcode, this can be used as a feedback signal to improve the face model on the phone by using the most recently captured image. While this is a great feature for improving AI models, it also carries the risk of changing their behaviour in an unexpected or unwanted manner. In the example just given, if with each failure your mobile phone keeps expanding the scope of acceptable facial features that unlock your phone, it may end up accepting other people whose faces are merely similar to yours.

2.4.1 The Source of Knowledge

We have just mentioned how the knowledge base might be updated and improved based on the feedback received during operation. But what is the source of the knowledge, and how is that knowledge base created in the first place? Generally speaking, during the initial build of an AI system the knowledge base can be created either manually by experts or automatically using suitable data. You might have previously seen illustrations similar to Figure 2.3, which try to explain the relation between AI and machine learning (ML). However, before getting to the details of ML, it is worth considering what AI looks like outside the ML subset.

Figure 2.3 AI versus ML

The AI techniques outside the ML subset are called Symbolic AI, or sometimes Good Old-Fashioned AI. These are mostly based on human expert knowledge in a specific domain, and the knowledge base here is manually curated and encoded by the AI developers. As a result, it is mostly human readable (hence ‘symbolic’) and usually separable from the inference part of the system, as described in the building blocks of AI earlier. Expert systems are among the better-known and more successful examples of symbolic AI; their knowledge is mainly stored as ‘if-then’ rules.Footnote 5

Symbolic AI systems are relatively reliable, predictable, and more explainable, owing to their transparency and the readability of their knowledge base. However, the manual curation of the knowledge base makes them less generalisable and, more importantly, turns the knowledge acquisition or updating step into a bottleneck, owing to the limited availability of domain experts to collaborate with the developers. Symbolic AI solutions have therefore had limited success, and we have not heard much about them recently.

To obtain knowledge without experts dictating it, another approach is to observe and automatically learn from relevant examples, which is the basis of computational learning theory and ML techniques. There is a wide range of ML techniques, from statistical models and mathematical regression analysis to more algorithmic methods such as decision trees, support vector machines, and ANNs, which have become one of the best-known subsets of ML in the past couple of years thanks to the huge success stories of deep neural networks.Footnote 6 When enough sample data is provided, these algorithms are capable of training models that automatically encode the knowledge required to achieve their objectives when put into operation. The table in Figure 2.4 summarises some of the key differences between these two groups of AI techniques.

Figure 2.4 Symbolic AI versus ML

2.4.2 Different Methods of Learning

Depending on the type and specifications of the data available to learn from, there are several different methods of learning in ML algorithms. Each of these options has strengths and weaknesses. In an application such as FRT, where we might not easily access any type of dataset we want, it is important to be aware of the potential and limitations of the different methods. Below are a few examples among many of these methods; the list continues to grow.

Supervised learning is one of the most common and broadly applied methods. It can be utilised when, at the time of creating and training ML models, there are enough samples of input data along with their expected outputs (labels). In an identity verification example in the FRT domain, the trained model would normally be expected to receive two face images and give a similarity score. In such a case, the training dataset includes many pairs of facial images along with a manually allocated label, which is 1 when they are photos of the same person and 0 otherwise (as sketched below). In FRT applications, preparing large enough labelled datasets for supervised learning purposes is time consuming, expensive, and subject to human errors such as bias.
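At its simplest, such a labelled training set is just a collection of image pairs with a same/different label, as in the hypothetical structure below; the file paths are placeholders.

```python
# Hypothetical structure of a supervised training set for face verification:
# each example is a pair of image paths plus a label (1 = same person, 0 = different).
training_pairs = [
    ("person_a/img_01.jpg", "person_a/img_07.jpg", 1),
    ("person_a/img_02.jpg", "person_b/img_03.jpg", 0),
    ("person_c/img_05.jpg", "person_c/img_09.jpg", 1),
]

# During training, the model sees both images of each pair and its predicted
# similarity is pushed towards the label; errors in these manual labels
# propagate directly into the learned model.
for left, right, label in training_pairs:
    print(left, right, "same person" if label else "different people")
```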

Unsupervised learning applies when only samples of the input data are available for the training period and the answers are unknown or unavailable. As you can imagine, this method is only useful for some specific use cases. Clustering and association models are common examples of this learning method. For example, in a facial expression categorisation application, during the training phase a model can be given lots of facial images and learn how to group them together based on similarity of facial expression, without necessarily having a specific name for those groups. For such FRT, it might be easier to source unlabelled sample data in larger volumes, for example through web scraping. However, this is subject to privacy implications and hidden quality issues, and thus works for limited applications only.
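A clustering step of the kind described might look like the following sketch, which groups face or expression embeddings with k-means without ever naming the resulting groups. The random vectors stand in for feature vectors produced by some upstream model.

```python
# Sketch: unsupervised grouping of face/expression embeddings with k-means.
# The random `embeddings` array is a placeholder for real feature vectors.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 128))   # placeholder for extracted feature vectors

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(embeddings)
# Each face is assigned to an unnamed group; a human may later inspect and
# label the clusters (e.g. 'smiling', 'frowning') only if names are needed.
print(np.bincount(kmeans.labels_))
```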

Reinforcement learning is used when neither the samples nor the answers are available as a batch at the beginning. Rather, a reward function is maximised through trial and error while the model gradually learns in operation. For example, you can imagine an AI system that wants to display the most attractive faces from a database to its user. There is no prior dataset to train the model for each new user; however, assuming that the amount of time the viewer spends before swiping to the next photo is a sign of attractiveness, the model gradually learns which facial features maximise this target. In such situations, the learning mechanism should also balance exploring new territories against exploiting current knowledge, to avoid being trapped in local maxima. It is easy to imagine that only very few FRT applications can rely on such trial and error methods to learn.

Semi-supervised learning can be considered a combination of supervised and unsupervised learning. It can be applied when there is a large amount of training samples, but only a small subset of them is labelled. In such scenarios, in order to make the unlabelled subset useful in a supervised manner, some assumptions such as continuity or clustering are made to relate them to the labelled subset of the samples. Let us imagine a large set of personal photos with only a few of them labelled with names for training a facial identification model. If we know which subsets are taken from the same family albums, we may be able to associate a lot more of those unnamed photos and label them with the correct names, to be used for better training of the models. Although this can help with the data labelling challenge for FRT applications, the assumptions that must be made during this process can introduce the risk of unwanted errors in the training process.

Self-supervised learning helps in another way with the challenge of labelled data availability, especially when a very large volume of training data is required, as in deep learning. Instead of manually preparing the training signals, this approach uses automated processes to convert input data into meaningful relations that can be used to train the models. For example, to build and train some of the largest language models, training data is scraped from any possible source on the internet. An AI developer could then use, for example, a process that removes parts of sentences, so that the main model is trained to predict and fill in the blanks. In this way the answer (training signal) is automatically created, and the language model learns the meaningful structures and word relationships in human language. In the FRT domain you can think of other processes, including applying distortions to a face image, such as shadows or rotation, or taking different frames of the same face from a video. This produces a set of different facial images that are already known to be of the same person and can be used directly for training the models without additional manual labelling.
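In the FRT case, one way to manufacture such a training signal is to distort each face image so that the original and distorted versions form a ‘same person’ pair without any manual labelling. The sketch below uses simple OpenCV rotations and brightness shifts as stand-ins for the richer augmentations used in practice; the input path is hypothetical.

```python
# Sketch: generating a self-supervised 'same person' pair by distorting an image.
# Rotation and brightness changes stand in for richer real-world augmentations.
import cv2
import numpy as np

def make_positive_pair(image: np.ndarray):
    h, w = image.shape[:2]
    # Rotate by a small random angle around the image centre.
    angle = float(np.random.uniform(-15, 15))
    matrix = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(image, matrix, (w, h))
    # Darken or brighten slightly to mimic lighting changes.
    shifted = cv2.convertScaleAbs(rotated, alpha=1.0, beta=int(np.random.randint(-30, 30)))
    return image, shifted  # both views are known to show the same face

original = cv2.imread("face.jpg")              # hypothetical input image
view_a, view_b = make_positive_pair(original)
```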

2.5 Facial Recognition Approaches

Like AI techniques in general, facial recognition approaches were initially closer to Symbolic AI. They were naturally more inclined towards the way humans might approach the problem and were inspired by anthropometry.Footnote 7 Owing to the difficulty of extracting all the important facial features and accurate measurements, which could be easily affected by small variations in the images, there was limited success in such works until more data-driven approaches were introduced. These were based on mathematical and statistical methods and took a holistic approach to face recognition, an example being Eigenfaces,Footnote 8 which is essentially the set of eigenvectors of the training grayscale face images (an eigenvector of a matrix is a non-zero vector that, when multiplied by the matrix, results in a scaled version of itself). This shift towards ML techniques matured and became more successful by combining the two approaches through ideas such as the neural networks in DeepFace,Footnote 9 and many other similar works. A more in-depth review of the history of FRT appears in Chapter 3, so here we look only at the technical characteristics and differences of these approaches.

Feature analysis approaches rely on the detection of facial features and their measurements. Here, each face image is converted into a numeric vector in a multi-dimensional space, and the face recognition challenge is simplified to a more common classification or regression problem. Similar to symbolic AI, the majority of the knowledge, if not all of it, is manually encoded in the form of rules that instruct how to detect the face within an image and identify each of its components to be measured accurately. These rules may rely on basic image and signal processing techniques such as edge detection and segmentation. This makes the implementation easier and, as mentioned earlier when discussing symbolic AI, the process and its decision-making are more transparent and explainable. However, intrinsic to these approaches is the limited generalisability challenge of symbolic AI. In ideal and controlled conditions these methods can be quite accurate, but changes in the imaging conditions can dramatically impact performance. This is because in the new conditions, including different angles, resolutions, shadows, or partial coverage, the prescribed rules might not apply any more, and it would not be practical to manually find all these variations and customise new rules for them.

Holistic approaches became popular after the introduction of Eigenfaces in the early 1990s.Footnote 10 Rather than trying to detect facial features based on a human definition of a face, these approaches consider the image in its pixel form as a vector in a high-dimensional space and apply dimensionality reduction techniques, combined with other mathematical and statistical approaches that do not rely on what is inside the image. This largely simplifies the problem by avoiding the facial feature extraction and measurement step, together with its sensitivities. It shifts face recognition towards classic ML techniques and turns training into a data-driven problem rather than manual rule development. Unfortunately, purely holistic approaches still suffer from a few challenges, including statistical distribution assumptions behind the methods that do not always hold, and any deviation from controlled imaging conditions makes this worse.
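A compact way to see the holistic idea at work is the classic Eigenfaces recipe: treat each grayscale face as a long pixel vector, use principal component analysis to find the main directions of variation, and compare faces in that reduced space. The sketch below applies scikit-learn PCA to a stack of random arrays standing in for aligned, pre-cropped face images.

```python
# Sketch of the Eigenfaces idea: PCA over flattened grayscale face images.
# `face_images` is a random placeholder for aligned, equally sized face crops.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
face_images = rng.random((100, 64, 64))        # placeholder for real face crops

X = face_images.reshape(len(face_images), -1)  # each 64x64 face becomes a 4096-d vector
pca = PCA(n_components=50).fit(X)              # principal components act as 'eigenfaces'

def project(face: np.ndarray) -> np.ndarray:
    # Represent a face by its coordinates in the eigenface space.
    return pca.transform(face.reshape(1, -1))[0]

# Two faces can then be compared by the distance between their projections.
d = np.linalg.norm(project(face_images[0]) - project(face_images[1]))
print(f"Distance in eigenface space: {d:.3f}")
```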

Deep neural networks (DNNs) made a leap in the advancement and success of face recognition approaches. After Eigenfaces and its variations, many other small improvements were made to the holistic approaches by adding generic feature extraction steps, such as Gabor filters, prior to the main classifier,Footnote 11 followed by some neural network-based ML approaches. However, these were not especially successful until the introduction of deep learning for image processing,Footnote 12 and its application to face recognition.Footnote 13 Convolutional neural networks turn feature extraction and selection from the images into an automated, learned process, so it is neither as challenging as manually defining facial features nor as generic as the Gabor filters used prior to some of the holistic approaches. The increasingly complex and important features that are automatically selected are then used in a supervised learning layer to deliver the classification or recognition function.Footnote 14 This is the key to the success of deep neural networks in object and face recognition.
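The sketch below illustrates this division of labour in a deliberately tiny network: convolutional layers learn the feature extraction directly from pixels, and a final fully connected layer performs the supervised classification over known identities. It is a toy architecture for illustration only, not a production face recognition model, and the dimensions are assumptions.

```python
# Toy convolutional network: learned feature extraction plus a supervised
# classification head. Real FRT models are far deeper and typically trained
# with specialised losses rather than plain identity classification.
import torch
import torch.nn as nn

class TinyFaceNet(nn.Module):
    def __init__(self, num_identities: int):
        super().__init__()
        self.features = nn.Sequential(          # learned feature extraction
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_identities)  # supervised layer

    def forward(self, x):                        # x: (batch, 1, 64, 64) grayscale faces
        h = self.features(x)
        return self.classifier(h.flatten(start_dim=1))

model = TinyFaceNet(num_identities=10)
logits = model(torch.randn(4, 1, 64, 64))        # four random 'faces' for illustration
print(logits.shape)                              # torch.Size([4, 10])
```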

2.6 The Gift and the Curse of Complexity

Many variations of ANNs have been used in ML applications, including face recognition. However, so-called shallow neural networks were not as successful, owing to their limited learning capacity. Advancements in hardware, the use of graphical processing units, and cloud computing to increase processing power, along with access to more training data (big data), made the introduction of deep learning possible. In addition to novel network structures and the use of more sophisticated nodes such as convolutional functions, another important factor in the increased learning capacity of DNNs is the overall complexity and scale of the network parameters to be trained. For example, the first experimental DNN used in FaceNet includes a total of 140 million parameters to train.

While the complexity of DNNs increases their success in learning to solve challenging problems such as face recognition, these new algorithms become increasingly data hungry. Without going into too much detail, if the number of training samples is too small compared with the number of parameters of the model, then rather than learning a generalised solution to the problem the model overfits and memorises the answers only for that specific subset. This causes the model to perform very well on the training samples, as it has memorised the correct answers for the training set, but to fail on unseen test samples, owing to the lack of generalisation. Therefore, successful face recognition systems based on DNNs, or variations of them, are actually trained on very large facial datasets, which can be the source of new risks and concerns.

Privacy and security concerns are among the first to pay attention to. It is difficult and expensive to create new and large face datasets with all the appropriate consents in place. Many of these large datasets are collected from the web and from a few different sources where copyright and privacy statements raise problems from both legal and ethical perspectives. Additionally, after collection, such datasets can become attractive targets for cyber-attacks, especially if the images can be correlated with other information that may be publicly available about the same person.

Data labelling is the next challenge after collecting a suitable dataset. It is labour intensive to manually label such large datasets to be used as a supervised learning signal for the models. As discussed earlier, self-supervised learning is one of the next best choices for data-heavy algorithms such as DNNs. However, this introduces the risk of incorrect assumptions in the self-supervised logic, and of problems in the training process being missed even when performance measures seem adequate.

Hidden data quality issues might be the key to most of the well-known face recognition failures. Usually, a lot of automation or crowdsourcing is involved in the preparation of such large face datasets. This can prevent thorough quality checks across the samples and labels, which can lead to flawed models and cause unexpected behaviour in special cases, despite high performance results during test and evaluation. Bias and discrimination are among the most common misbehaviours of FRT models, and these can be due either to such hidden data quality issues or simply to the difficulty of obtaining a large, well-balanced sample across all cohorts.

2.7 Conclusion

Face recognition is one of the more complex applications of AI and inherits many of its limitations and challenges. This chapter has provided a quick review of some of the important considerations, choices, and potential pitfalls of AI techniques and, more specifically, of FRT systems. Given that this is a relatively new technology being used in our daily lives, it is crucial to increase awareness and literacy around such technologies and their potential implications, from a multi-disciplinary angle, for all stakeholders: developers, providers, operators, regulators, and end users.

Now that, with DNNs, the reported performance of FR models is reaching or surpassing human performance,Footnote 15 a critical question is why we still hear of so many examples of failure and find FR models insufficiently reliable in practice. Among many reasons, such as the data quality issues discussed earlier, the difference between development and operation conditions can be one of the common factors. The dataset that the model is trained and tested on may not be a good representation of what the model will receive when put into operation. Such differences can be due to imaging conditions, demographic distribution, or other factors. Additionally, we should not forget that performance tests are usually done directly on the FRT model. However, an FRT-based solution has many other software components and configurable decision-making logic that are applied to the facial image similarity scores. For example, such surrounding configurable logic can easily introduce human bias into an FRT solution with a well-performing model at its core. Finally, it is worth remembering that, like many other software and digital solutions, FRT systems can be subject to adversarial attacks. It may be a lot easier to fool a DNN-based FR model using adversarial samples or patches than to fool a human, who may be better placed to identify such attempts.Footnote 16

Hence, considering all such intentional and unintentional risks, are the benefits of FRT worth it? Rather than giving a blanket yes/no answer, it should be concluded that this depends on the application and the level of impact. However, making a conscious decision based on a realistic understanding of the potential and limitations of the technology, along with keeping humans in the loop, can significantly help to minimise these risks.

3 FRT in ‘Bloom’ Beyond Single Origin Narratives

Simon Michael Taylor
3.1 Introduction

On 10 September 2020, Pace Gallery in London held an exhibition by the artist Trevor Paglen examining the visual products of artificial intelligence and digital data systems.Footnote 1 Titled ‘Bloom’, the exhibition featured an over-sized sculpture of a human head. Bald, white, and possibly male, this eerily symmetrical ‘standard head’ had been modelled on measurements from canonical experiments in facial recognition history by Woody Wilson Bledsoe, Charles Bisson, and Helen Chan Wolf, conducted at Panoramic Research Laboratory in 1964.Footnote 2

Centring this ‘standard head’ in the space, Paglen surrounded it with photographic prints of leaves and flowers re-composed from RAW camera files by computer vision algorithms. These machine visualisations of nature encircled the ‘standard head’, illustrating how digital imaging using autonomous toolsets can achieve significantly different graphical outcomes. The exhibit foregrounded face recognition technology yet provoked viewers to consider the cross-practice connections between computing and data classification, humans and nature, and how image-making is becoming technically autonomous.Footnote 3 Another take-away is how these systems require multi-faceted elements to work, ‘mushrooming and blossoming from all kinds of datasets’.Footnote 4

As a form of networked visual surveillance, facial recognition technology (FRT) works to the extent that it operates within larger information infrastructures; FRT ‘is not a single technology but an umbrella term for a set of technologies’.Footnote 5 These digitally networked systems allow imaging data to transform from one state to another, and to transfer from one site to another. Recent improvements in FRT as a remote identification system have reached the point where it is technically possible to capture biometric images and data from subjects in public, private, and personal spaces, or from interactions online, without their consent or awareness, and without adequate regulatory oversight. This includes the distribution of sensitive and personal user information between state and private-sector organisations, while contributing to the training of machine learning tools using honeypots of data, and enabling ‘ever more sophisticated and effective forms of social control’.Footnote 6

Unlike the suggestion of Paglen’s exhibition, the origins of FRT cannot be reduced to the experiments of 1964. We need to widen the lens: the technical operations and stakeholders inside these systems are globally distributed and, as the Chair of Electronic Frontiers Australia’s Policy Committee, Angus Murray, has reiterated, require ‘bargains of trust’.Footnote 7 For example, domestic and federal police agencies use systems that rely on huge amounts of data aggregation in private cloud servers and proprietary hardware that store and transmit data from online platforms, smart devices, foreign-owned closed-circuit television (CCTV) companies, and creators of wearable body cameras.Footnote 8 In Australia, retail outlets such as Bunnings use FRT and identity data to extract information from social media, where most people have images of themselves uploaded. They perform analysis based on the specific visits and transactions of certain shoppers.Footnote 9 Similarly, images captured in public spaces, of crowds or of protesters, can be matched to social media posts or online forums managed by global technology firms, such as Facebook and Google, or transnational intelligence agencies such as the NSA and GCHQ. In the United Kingdom, Daragh Murray witnessed FRT software draw rectangles around the faces of people in public streets from a live CCTV feed. The system then extracted key features and compared these with the stored features of criminal suspects on a watch list.Footnote 10 Matching an image to a watchlist is not the only function to consider here; there is also a need to query the distribution and ownership of data in the system being collectively assembled, in the example above, by the Tokyo-based technology giant NEC.Footnote 11 Other examples of this diffuse and operational data flow include how China’s Zhejiang Dahua Technology Co. Ltd sold thermal imaging cameras, armed with facial recognition software, to scan workers entering Amazon factories during COVID-19, despite the company being on a trade blacklist in the United States.Footnote 12

FRT and its computer procedures are therefore systems and ‘technologies in the making’, not artefacts with singularly defined origins and easy-to-regulate outcomes.Footnote 13 While an abundance of research looks at the use of FRT in border security and biometric surveillance,Footnote 14 retail shopping or school-aged education,Footnote 15 and the gendered and racial divides between datasets, with calls to ban these systems,Footnote 16 other elements also require scholarly, legislative, and regulatory attention.

This chapter considers how large-scale technical systems such as FRT have bloomed, yet build on the technical roots of multiple systems and on the provenance of data sources that remain under-considered. Tracing the genealogical origins and provenance of such datasets and statistical toolsets plays an important role in framing current uses for regulatory challenges. In this regard, this chapter presents empirical findings from research on early Indian statistical measures, the convergence of Chinese and Western technology companies, and the increase in computer vision experiments, including those conducted on animals for biosecurity identification purposes. This chapter argues that these diverse material innovations and information domains not only act as testbeds for FRT systems, but also encompass some of the globalised products contained in FRT infrastructure.Footnote 17

3.2 FRT Does Not Have a Singular Origin: These Are ‘Systems in Motion’

Bledsoe’s ‘standard head’ algorithm did not remain at the University of Texas, nor in the domain of artificial intelligence history. Owing to funding by the RAND Corporation, the algorithm worked its way into informational models for law enforcement purposes. In the development of the New York State Intelligence and Identification System (NYSIIS), Bledsoe was recruited to develop his algorithm to computationally solve ‘the mug-file problem’.Footnote 18 By contributing to the world’s first computerised criminal-justice information-sharing system,Footnote 19 as Stephanie Dick posits, Bledsoe’s algorithm and its ideas travelled with his over-simplifications and data assumptions in tow.Footnote 20 This influenced not only law enforcement databases and decisions on criminal targets in the United States, but also the FRT developments that followed.Footnote 21 In its final state the algorithm was not used to automatically detect faces – as FRT does now – but contributed to a standardisation of ‘mug shot’ photos for computer filing systems. Bledsoe, who was later the president of the Association for the Advancement of Artificial Intelligence, used 2,000 images of police mug shots as his ‘database’ for making comparisons with a new set of photographs to detect any similarity. This American National Standards Institute Database, whose archives of mug shots featured convicted criminals (and those merely accused), was the predominant source of visual information for Bledsoe’s facial-recognition technology (a role now filled by social media).Footnote 22 To this end, Bledsoe and his Panoramic Research collaborators manually drew over human facial features with a device that resembled an iPad, called a GRAFACON or RAND tablet. Using a stylus, images were rotated and re-drawn onto the tablet and recorded as coordinates on a grid, producing a relatively high-resolution computer-readable image. A list of distances between locations such as the mouth, nose, and eyes was calculated and recorded as a person’s identification code.Footnote 23 Facial recognition (at this time) was a mathematical code of distances between features, drastically reducing the individual and social nuances between them, and largely informed by Bayesian decision theory to use ‘22 measurements to make an educated guess about the whole’.Footnote 24

In essence, Bledsoe had computerised the mug shot into a ‘fully automated Bertillon system for the face’.Footnote 25 This system, developed by the French criminologist Alphonse Bertillon in 1879 and associated with the criminal anthropology of the Italian Cesare Lombroso, gained wide acceptance as a reliable and scientific method for criminal investigation, despite its roots in problematic eighteenth-century anthropometric experiments. The mug shot was invented to recognise criminal suspects who were repeatedly arrested: portraits were drawn and statistically labelled on common morphological characteristics.Footnote 26 The resulting ‘mug shots’ were standardised and collected by police departments and accepted as evidence in courts. Photo IDs modelled on the mug shot not only became an official format for policing, but have become standard issue in nation-state passports presented at airports and for driver’s licence photographs. The first ever US photo driver’s licence, issued in 1958, was created by the French security company IDEMIA – a world leader in biometric security. Founded in 1922 as the defence contractor SAGEM, the company became SAGEM-Morpho in the 1980s, and parts of IDEMIA go back even further; it has effectively led every shift in photo identity issuance and credentialling in the US since.Footnote 27

Bledsoe’s 1960s laboratory experiments thus relied on two separate building blocks invented in France. Hampered by the technology of his era, Bledsoe’s ideas for FRT were not truly operationalised until the 1990s – driven by a technological wave of mobile phone and personal computer sales, online networked wireless video systems, and digital cameras.Footnote 28 Yet the experimental use of FRT is still being conducted in ways largely never attempted before.Footnote 29 Clare Garvie contends that forms of automated imaging for policing actions remain unregulated and represent a ‘forensic science without rules’:

[T]here are no rules when it comes to what images police can submit to facial recognition [databases] and algorithms to help generate investigative leads. As a consequence, agencies across the country can, and do, submit all manner of probe photos – low-quality surveillance camera stills, social media photos with filtering, and scanned photo album pictures. Records from police departments show they may also include computer-generated 3D facial features, or composite and artistic sketches.Footnote 30

In the next section, I explore how the automation of FRT relies on a diverse manufacturing of ‘images’ – products of reduction, appropriation, transformation, or digital manipulation – and on situated instances of exploitation conducted in South America, the United States, France, Russia, Japan, and China, to name a few jurisdictions, and also how modern FRT resurrects a century-old vision of ‘statistical surveillance’.Footnote 31 To do so, I consider how a 100-year-old mathematical experiment in British India has aided the probabilistic functionality of autonomous FRT systems.

3.3 The ‘Mind-Boggling Systems’ Where Everyone Only Ever Has One ID

In 1991 Turk and Pentland produced the first real-time automated face recognition system.Footnote 32 Famously, this technology was deployed at the crowded USA Super Bowl in 2001. This experimental trial was called ‘Facefinder’. The system captured surveillance images of the crowd and compared them with a database of digital mug shots held by Tampa police, the Florida Department of Law Enforcement, and the FBI.Footnote 33 The experiment not only demonstrated the potential for remote surveillance of crowds, but also led to the National Institute of Standards and Technology creating the Face Recognition Vendor Test (FRVT) to evaluate this emerging FRT market.

A quick look at the ongoing FRVT of 1:N facial algorithms reveals a globalised picture: ‘The report lists accuracy results alongside developer names as a useful comparison of facial recognition algorithms and assessment of absolute capability. The developer totals constitute a substantial majority of the face recognition industry.’Footnote 34 This includes performance figures for 203 prototype algorithms from the research laboratories of over fifty commercial developers and one university. Similar to Bledsoe’s 1960s experiments for NYSIIS, this evaluative test scenario also uses frontal mug shots and profile-view mug shots alongside desktop webcam photos, visa application photos, immigration lane photos, and traveller kiosk photos.

A brief survey of this report illustrates the scale and scope of a global FRT market. To name a few vendors, the developers and their places of origin include NEC (Tokyo); Microsoft (United States); Veritas (Spain); Herta Security (Spain); AnyVision (Israel); IDEMIA (France), utilised in Kenya and in Turkey; Daon (Ireland); Dahua (China); Moonwalk (China); Sensetime (China); Hyperverge (California); Cognitec (Germany); QNAP (Taiwan); Tevian (Russia); VisionLabs (Russia/Netherlands); Clearview AI (United States); DeepGlint (China) and finally Neurotechnology (Lithuania), which is a provider of deep-learning-based solutions for high-precision biometric identification and object recognition technology.

Importantly, the Lithuania-based Neurotechnology recently partnered with Tata Consultancy Services as one of three biometric service providers for the largest biometric ID system in the world, Aadhaar.Footnote 35 Co-ordinated by the Unique Identification Authority of India, the system registers people and compares their facial biometrics with the existing records of 1.3 billion people to verify that applicants have not registered under a different name. Aadhaar is ‘a mind-boggling system’, says Anil Jain, a computer scientist who consulted on the scheme, ‘and the beauty is that it ensures one person has only one ID’.Footnote 36

India has a rich history of producing material and statistical innovations to identify individuals based on their physical characteristics.Footnote 37 In 2020, Google posted an online tribute to Professor Prasanta Chandra Mahalanobis (1893–1972) as part of its ‘Arts and Culture’ series.Footnote 38 Mahalanobis is famous for creating new statistical and biometric functions, key technologies he advocated to the world through his Indian Statistical Institute.Footnote 39 The global celebration of his work was recognised in part after his creation of a similarity distance metric in 1936. This was produced from his specific interest in racial classification.Footnote 40 He developed a biometric function to analyse and identify people based on physical and racial similarity. To do so he compared data collected from the Chittagong Hill Tracts area (modern Bangladesh) with international race datasets collected from Swedish and Chinese records.Footnote 41 He then set about learning how to create an identification of race based on statistical measurements of facial features and their similarity, which he could apply in India. The aim was to help identify exotic and ethnic caste groups to be classified by the British colonial administration.Footnote 42

Significantly, he also innovated by using facial photographs of living subjects to test the accuracy of his biometric measurements, rather than analysing skulls as in the era’s practice of phrenology.Footnote 43 By testing his distance function with the invention of an experimental imaging device in 1937, Mahalanobis was a central figure in pushing ‘part of a biometric nationalism in which the face provided a form of data’.Footnote 44 His metric, commonly known as the Mahalanobis distance, despite being created eighty-six years ago, is consistently used in modern FRT.

Even the most sophisticated and large-scale FRT systems rely on this basic approach of comparing images of facial features using scores that quantify the similarity of a match.Footnote 45

In technical terms, the selection of a decision metric – such as the Mahalanobis distance function – ‘[h]elps to measure distances between specific facial features and generate a unique representation (as a “facial signature”) for each human face’.Footnote 46 Similar to Bledsoe’s code, this is then compared with a database of stored images in order to match a face to similar images.

In this regard, similarity measure functions operationalise the matching process as a critical decision-making module. Selection of the proper similarity measure is thus an important determinant of the accuracy of the matching result. Such measures include Minkowski distances, Mahalanobis distances, Hausdorff distances, and Euclidean and cosine-based distances.Footnote 47 Yet the Mahalanobis distance is the best at structuring data for unknown targets. This is critical to criminal subject investigations for matching suspects from surveillance images of supermarkets, stadiums, or protest crowds. The similarity measure enables high-speed cluster analysis – critical to the speed of decision-making – especially for faces with a high number of variables and in relation to fitting an unknown person into a known database. FRT can then determine whether an unknown image (taken from a web profile or a surveillance camera) matches a person in the database (compared with drivers’ licences or mug shots). This approach is also suitable for machine learning and is a prominent approach for training systems on person re-identification by ‘improving classification through exploiting structures in the data’.Footnote 48
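For illustration, the Mahalanobis distance between a probe’s feature vector x and a gallery distribution with mean μ and covariance Σ is d(x) = sqrt((x − μ)ᵀ Σ⁻¹ (x − μ)): unlike a plain Euclidean distance, it accounts for how the measurements vary and correlate across the gallery population. The sketch below computes it with SciPy for hypothetical facial measurement vectors; the use of 22 measurements simply echoes Bledsoe’s example and is not a requirement.

```python
# Sketch: Mahalanobis distance between a probe face's measurements and the
# distribution of a gallery of known faces, using hypothetical feature vectors.
import numpy as np
from scipy.spatial.distance import mahalanobis

rng = np.random.default_rng(0)
gallery = rng.normal(size=(500, 22))        # e.g. 22 facial measurements per known face
probe = rng.normal(size=22)                 # measurements for an unknown face

mean = gallery.mean(axis=0)                 # centre of the gallery distribution
cov_inv = np.linalg.inv(np.cov(gallery, rowvar=False))  # inverse covariance matrix

# Smaller distances suggest the probe fits the gallery population more closely.
print(mahalanobis(probe, mean, cov_inv))
```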

As Adriana Dongus suggests, ‘[t]he large datasets produced by science and law enforcement at the turn of the nineteenth century continue to form the material backbone and precedent to current machine learning.’Footnote 49 By examining the critical and ubiquitous distribution and embedding of early decision classifiers, we can establish the importance of selecting certain rule functions in ‘a statistical layer’ of FRT systems.

When applied to machine learning, this includes assigning weights to autonomously identify the importance of probable matches. This is used in labelled image datasets,Footnote 50 in estimating facial poses from video,Footnote 51 in automatically locating an unproductive worker on a factory floor,Footnote 52 or in identifying ethnic minority faces in a crowd, as is occurring in China with the Uyghur (Uighur) population. While much important work on facial recognition is salient to the United States,Footnote 53 there is a need to examine how FRT is conditioned on a globalised supply chain. This includes the ‘production, schematization, maintenance, inflection, and reproduction of certain [decision] rules’ and how they replicate the use of problematic standards in public surveillance.Footnote 54

Indeed, there has been a ‘tendency to gloss over the amount of effort that goes into developing and integrating new technologies and systems with older technologies’.Footnote 55 Computation moves fast – yet many lessons remain to be learned.

From legislative, ethical, and regulatory standpoints, it is worth noting that biometric systems and data (including the use of statistical functions and facial images) are constructed on complex and interoperable supply chains involving the third-party vendors needed to make these systems work. Yet there are potential incentives built into these globalised computing systems to exploit regulatory gaps and vulnerabilities that could be used against various human populations at a later date.Footnote 56 The final section examines how Mahalanobis’s 100-year-old experiment is relevant not only to our digital identity systems today, such as the United Nations High Commissioner for Refugees (UNHCR) Population Registration and Identity Management Eco-System,Footnote 57 but also to how FRT builds on different use-cases. These include not only nation-state surveillance, such as the identification and detection of ethnic minorities in China, but also the increasing datafication of animals and the computerisation of biosecurity measures in agriculture that can be transferable to human populations.Footnote 58

3.4 Dynamic Matching Strategies in FRT Extend Beyond Recognising Human Beings

To securely identify forcibly displaced persons seeking UNHCR repatriation assistance at refugee processing centres, the UNHCR records biometrics such as iris scans, fingerprints, and facial metrics.Footnote 59 Driven in part by a Biometric Matching Engine developed by Accenture, this Population Registration and Identity Management Eco-System (PRIMES) employs a patented ‘dynamic matching strategy’ comprising at least two sets of biometric modalities.Footnote 60 With the advent of new, technologically advanced modes of biometric data gathering and analysis, some of the current ‘international legal thought, doctrine, and practice are, in the main, poorly equipped to deal with them’, especially in situations of forced migration.Footnote 61 One reason is the lack of manual processing options and the way the introduction of machine learning can lift the collection of sensitive and personally identifiable information outside the scope of pre-existing legal methods. In grappling with new forms of quantification and statistics, these systems do not just contain hundred-year-old statistical decision functions but also pair imaging, data aggregation, and machine learning at scale. The autonomy granted to machine learning may remove the ability to interrogate the validity of the earlier datasets and matching results a system relies on to achieve a result. Such logic clusters ever-increasing data collections into new ‘probabilistic dependencies’.Footnote 62 Yet what this curtails are reasonable efforts to disentangle bias from standardised classifications, and the natural divergences that occur between humans, different social groups, and their situated actions are erased in deference to calculative inferences instead. In the use of FRT there is always ‘politics attached’. Avi Marciano illustrated this in the context of Israel, where biometric standards establish hierarchies for decision making by defining particular bodies as ‘ineligible’ for access.Footnote 63

Some FRTs are directly complicit in human rights abuses, including the reported detention of up to 1.5 million Uyghur Muslims in Xinjiang.Footnote 64 Owing to the increasing scale of the inescapable surveillance that the Chinese Communist Party has funded, ubiquitous CCTV systems and facial recognition are operationalised in public spaces alongside the monitoring of online communications and patterns-of-life data from mobile phones. Although idealised as an all-seeing pervasive surveillance network enabled by state manufacturing of computer vision technology, digital platforms, and data aggregation centres,Footnote 65 the simplified idea that Chinese technology and its authoritarian state surveillance system are indigenous is significantly flawed. Before China started using CCTV systems and facial pattern-matching techniques to identify ethnic minorities in Xinjiang Province, Bledsoe had proposed to the Defence Department Advanced Research Projects Agency (then known as ARPA) that it should support Panoramic Research Laboratory in studying the feasibility of using facial characteristics to determine a person’s racial background.Footnote 66 This is another instance of the politics and power of FRT recurring, returning, and re-playing in new uses, new places, and new eras, yet with similar purposes.

Western companies were involved in the creation of these systems from the start. The export of surveillance technologies from the Global North to China began in the 1970s; it is only now that Chinese technology companies are competing with and replacing those suppliers in a globalised market.Footnote 67 FRT developed in China with known human rights and privacy violations is not adequately restricted by regulatory frameworks in Europe and the United States.Footnote 68 Disentangling these use-cases requires a more thorough mapping of globally entangled technical supply chains, whether through critical research or through the building of oversight capabilities such as independent risk assessments, compliance audits, and technical red-teaming, in light of such swiftly evolving material properties.

A contemporary focus on understanding FRT must therefore be concerned not only with implementation and the implications for nation- and state-bound privacy law, but also with making transparent the infrastructural supply chains, the situated origins of datasets, and the technical domains in which they were created. This should not be restricted to requiring law enforcement and public organisations to undertake better procurement strategies – often limited to purchase orders or responses to requests for information – but should extend to identifying the exact sources of the FRT hardware, software, decision functions, and datasets.Footnote 69

Indeed, there are circumstances in which we may need to look further afield. This includes so-called dual-use systems adopted not just from nation-state and military operations but also from systems trained on animals within precision agriculture.Footnote 70 In the shift from classical identification methods to computer vision tools, the future of farming lies in the paddock-to-plate digital identification of each product. Whether for cross-border biosecurity purposes or the optimisation of meat traceability, FRT is seen as a viable investment for remotely tracking animals. These systems commonly utilise open-source software architectures, machine learning, and modular camera systems.Footnote 71 Yet in the computational transference between animal bodies, digital and data visualisation, and informational materials, we arrive at the heart of Trevor Paglen’s art project titled ‘Bloom’. The visualisation and classification of all images and all bodies helps to establish the adoption of autonomous methods. This includes initiatives from the global accounting firm KPMG and Meat and Livestock Australia to collect data in ways that translate into efforts to strengthen computer vision market positions. Agribusinesses are not yet treated as handling sensitive data or as training bodily surveillance systems, nor are they subjected to regulatory approaches that could throw their data practices into question.Footnote 72

As Mark Maguire suggests, a genealogical and infrastructural approach to FRT ‘demands we consider how technologies are an assemblage of different elements delivered from specific contexts’ yet re-made, aggregated, customised, adapted, and re-purposed for newly defined, profit-driven, and yet often speculative objectives.Footnote 73

3.5 Conclusion

In September 1964, at the time of Bledsoe’s experiments, there was a meeting between the administrative management of the NYSIIS law enforcement databases and the computer design company System Development Corporation (SDC) of Santa Monica, California.Footnote 74 The aim was to decide in what manner to proceed with the implementation of the system, and what techniques to commission for deployment. In summary, the critical inflexion point centred on: ‘First buy the computer and decide what to put on it; (2) Or do an extensive feasibility analysis and, as a result of that study, decide on the computer (how large and powerful) and the functions to be performed.’Footnote 75

As the technical capacity of computing systems in the 1960s was nascent, SDC lacked the capability to deliver the required system at scale. Yet this allowed a pause for discussion and consideration, and for the recognition that computing capabilities must be defined for a particular purpose and that the modular building blocks the system would contain should be thoroughly vetted.Footnote 76 The title of that report was ‘A System in Motion’, and it recognised that multiple capabilities – from query and search functions to image recognition – could not be adequately managed and regulated when developed all at once. The NYSIIS report stated that the application of computers to solve recognition problems for law enforcement was a foregone conclusion. Yet the question remained: should social institutions and organisations allow complete automation to be deployed, especially when such systems function as a sum of moving, and largely unknown, ‘experimental parts’?Footnote 77

Although most state departments and law enforcement agencies undertake basic steps to adhere to industry best practices, such as compliance, testing, and legal obligations, in order to avoid public scrutiny, these approaches often lack consistency. FRT is an experimental practice constituted by processes and elements that can be hidden from view, trialled and tested in domains where they should not be deemed fit-for-purpose. Whether systems are trained on exploitative data captured from refugees or prisoners, or operationalised on farm animals, this is the ‘deploy and comply problem’, and it requires public consultation and consideration of impacts before systems are put into action.Footnote 78 A prime example is the use of Clearview AI facial algorithms by New Zealand Police in 2020 without consulting the Privacy Commissioner or considering the impacts on vulnerable Indigenous groups.Footnote 79 This is indicative of multiple instances of harm, error, oppression, and inequality caused by autonomous decision and surveillance systems.Footnote 80 What is needed are efforts to trace, assess, and determine whether the modular ‘elements’ of an FRT system are legitimate, credible, feasible, and reasonable. This challenge seeks to ringfence the ‘lineage of intent’ – yet can FRT systems be restricted by ethical, legal, and technical guardrails to specific, deliberate, and predefined purposes?Footnote 81 This is what this book seeks to address.

4 Transparency of Facial Recognition Technology and Trade Secrets

Rita Matulionyte
4.1 Introduction

Facial recognition technology (FRT) is increasingly being used by border authorities, law enforcement, and other government institutions around the world. Research shows that among the 100 most populated countries in the world, seven out of ten governments are using FRT on a large scale.Footnote 1 One of the major challenges related to this technology is the lack of transparency and explainability surrounding it. Numerous reports have indicated that there is insufficient transparency and explainability around the use of artificial intelligence (AI), including FRT, in the government sector.Footnote 2 There are still no clear rules, guidelines, or frameworks as to the level and kind of transparency and explainability that should be expected from government institutions when using AI more generally, and FRT in particular.Footnote 3 The EU General Data Protection Regulation (GDPR) is among the first instruments to establish a right of explanation in relation to automated decisions,Footnote 4 but its scope is very limited.Footnote 5 The proposed EU Artificial Intelligence Act (Draft EU AI Act) sets minimum transparency standards for high-risk AI technologies, which include FRT.Footnote 6 However, these transparency obligations are generic to all high-risk AI technologies and do not detail transparency requirements for FRT specifically.

Transparency and explainability are arguably essential to ensuring the accountability of government institutions using FRT; empowering supervisory authorities to detect, investigate, and punish breaches of laws or fundamental rights obligations; allowing individuals affected by an AI system’s outcome to challenge the decision generated using AI systems;Footnote 7 and enabling AI developers to evaluate the quality of the AI system.Footnote 8 According to the proposed EU AI Act, ‘transparency is particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress’.Footnote 9

At the same time, one should note that transparency and explainability of FRT alone would not help remedy essential problems associated with FRT use, and might further contribute to its negative impacts in some cases. For instance, if an individual learns about the government use of FRT in public spaces where public gatherings take place, this might discourage her from participating in such gatherings and thus have a ‘chilling effect’ on the exercise of her human rights, such as freedom of speech and freedom of association.Footnote 10 These considerations have to be kept in mind when determining the desirable levels of FRT transparency and explainability.

While there is extensive technical literature on transparency and explainability of AI in general,Footnote 11 and of FRT more specifically,Footnote 12 there is very limited legal academic discussion about the requisite extent of transparency and explainability of FRT technologies, and challenges in ensuring it, such as trade secrets. The goal of this chapter is to examine to what extent trade secrets create a barrier in ensuring transparent and explainable FRT and whether current trade secret laws provide any solutions to this problem.

This chapter first identifies the extent to which transparency and explainability is needed in relation to FRT among different stakeholders. Second, after briefly examining which types of information about AI could be potentially protected as trade secrets, it identifies situations in which trade secret protection may inhibit transparent and explainable FRT. It then analyses whether the current trade secret law, in particular the ‘public interest’ exception, is capable of addressing the conflict between the proprietary interests of trade secret owners and AI transparency needs of certain stakeholders. This chapter focusses on FRT in law enforcement, with a greater emphasis on real-time biometric identification technologies that are considered the highest risk.Footnote 13

Apart from the critical literature analysis, this chapter relies on empirical data collected through thirty-two interviews with experts in AI technology. The interviews were conducted with representatives from five stakeholder groups: police officers, government representatives, non-governmental organisation (NGO) representatives, IT experts (in academia and private sector), and legal experts (in academia and private sector) from Europe, the United States, and Asia-Pacific (October 2021–March 2022, online). The data collected from these interviews is especially useful when identifying the transparency and explainability needs of different stakeholders (Section 4.2).

Keeping in mind the lack of consensus on the terms ‘AI transparency’ and ‘AI explainability’, for the purpose of this chapter we define the concepts as follows. First, we understand the ‘AI transparency’ principle as a requirement to provide information about the AI model, its algorithm, and its data. The AI transparency principle could require disclosing very general information, such as ‘when AI is being used’,Footnote 14 or more specific information about the AI module – for example, its algorithmic parameters and its training, validation, and testing information. While this concept of transparency might require providing very different levels of information for different stakeholders, it does not include information about how AI decisions are generated. The latter is covered by the principle of ‘AI explainability’, which we define in a narrow technical way; that is, as an explanation of how an AI module functions and how it generates a particular output. Such explanations are normally provided using so-called Explainable AI (XAI) techniques.Footnote 15 Generally speaking, XAI techniques might be ‘global’, explaining the features of the entire module, or ‘local’, explaining how a specific output has been generated.Footnote 16 While this chapter largely focusses on FRT transparency and its possible conflict with trade secret protection, it also briefly reflects upon the need for FRT to be explainable.
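For readers unfamiliar with XAI, a ‘local’ explanation can be illustrated with a minimal sketch of occlusion-based saliency for a single face-match score. This is one generic perturbation technique among many; the `embed` function is an assumed placeholder for any face-embedding model and does not refer to a specific product discussed in this chapter.

```python
import numpy as np

# Minimal sketch of a 'local' post-hoc explanation for a single face match:
# occlude regions of the probe image and measure how much the similarity to
# a stored template drops. 'embed' is an assumed placeholder for any model
# that maps an image to an embedding vector.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def occlusion_saliency(image: np.ndarray, template: np.ndarray,
                       embed, patch: int = 16) -> np.ndarray:
    """Per-patch map of how much each image region contributed to the match."""
    base = cosine_similarity(embed(image), template)
    h, w = image.shape[:2]
    saliency = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0  # black out one region
            drop = base - cosine_similarity(embed(occluded), template)
            saliency[i // patch, j // patch] = drop  # large drop = influential region
    return saliency
```

A ‘global’ explanation, by contrast, would characterise the behaviour of the whole model rather than a single match.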

In the following sections, we discuss the scope of explainability and transparency that different stakeholders need in relation to FRT in law enforcement (Section 4.2), in which situations trade secrets may conflict with these transparency and explainability needs (Section 4.3), and whether the ‘public interest’ defence under trade secrets law is capable of addressing this conflict (Section 4.4).

4.2 FRT Transparency and Explainability: Who Needs It and How Much?

Before examining whether trade secrets conflict with FRT transparency and explainability principles, we need to clearly identify the level of transparency and explainability that different stakeholders require in relation to FRT. We demonstrate that different stakeholders need very different types of information, some of which is – and some is not – protected by trade secrets.

For the purpose of this analysis, we identified six categories of stakeholders who have legitimate interests in certain levels of transparency and/or explainability around FRT technologies: (1) individuals exposed to FRT; (2) police officers who directly use the technology; (3) police authorities that acquire/procure the technology and need to ensure its quality; (4) court participants, especially court experts, who need access to technical information to assess whether the technology is of sufficient quality; (5) certification and auditing bodies examining whether the FRT meets the required standards; and finally (6) public interest organisations (NGOs and public research institutions) whose purpose is to ensure, in general terms, that the technology is high quality, ethical, legal, and is used for the overall public benefit.

As could be expected, our interviews with stakeholders have shown that different stakeholders have different explainability and transparency needs in relation to FRT.

4.2.1 FRT Explainability

In terms of the explainability of FRT, few stakeholders need it as a matter of necessity. Among the identified stakeholder groups, certification and auditing bodies that examine the quality of technology might potentially find XAI techniques useful, as these may help identify whether, for instance, a specific AI module is biased or contains errors.Footnote 17 For similar reasons, XAI techniques might be relied upon by public interest organisations, such as NGOs and research institutions, that have expertise in AI technologies and want to assess the quality of a specific FRT technology used by police. AI developers themselves have been using XAI techniques for a similar purpose; that is, to identify AI errors during the development process and eliminate them before deploying the system in practice.Footnote 18 However, XAI techniques themselves do not currently have quality guarantees and often face issues as to quality and reliability.Footnote 19 It is thus questionable whether experts assessing the quality of AI, or FRT more specifically, would give much weight to such explanations.

Other stakeholders – police authorities, police officers, and affected individuals – are unlikely to find explanations generated by XAI techniques useful, mainly because of the technical knowledge that is required to understand such explanations. Further, according to some interviewees, when FRT is used for identification purposes, users do not need an explanation at all, as the match made by FRT could be easily double-checked by a police officer.Footnote 20

Importantly, explanations generated by XAI techniques are unlikely to interfere with trade secret protection as they do not disclose substantial amounts of confidential information. As discussed later, in order to be protected by trade secrets, information should be of independent commercial value and kept secret.Footnote 21 XAI techniques, if integrated in the FRT system, would provide explanations to the end users, which, by their nature, would not be secret. Thus, owing to its limited relevance for our debate on FRT and trade secrets, FRT explainability will not be analysed here any further.

4.2.2 FRT Transparency Needs

In contrast, transparency around FRT is required by all stakeholders, although to differing extents. Depending on the level of transparency/information needed, stakeholders could be divided into three groups: those with (1) relatively low transparency needs, (2) high transparency needs, and (3) varying/medium transparency needs.

4.2.2.1 Low Transparency Needs

Individuals exposed to FRT, and law enforcement officers directly using the technology, require relatively general non-technical information about FRT (thus ‘low transparency’). Individuals have a legitimate interest in knowing where, when, and for what purpose the technology is used; its accuracy levels and effectiveness; the legal safeguards around the use of this technology; and in which circumstances and how they can complain about inappropriate or illegal use of FRT.Footnote 22 After individuals have been exposed to the technology and if this has led to adverse effects (e.g., potential violation of their rights), they might require a more detailed ex post explanation as to why a specific decision (e.g., to stop and question the individual) was made and how FRT was used in this context. Still, they do not need any detailed technical explanations about how the technology was developed and trained, or how exactly it functions, as they do not have the technical knowledge required for the interpretation of this information.

As one of our interviewees explained (in the context of migration/border control):

So, for example, if I am a citizen stakeholder [and] my application for a visa is denied and it’s based on my looks [that suggests that I] have some criminal records, then, of course, it has impacted me and I’m not happy, and I will ask for answers. Even [if the] activities [were] rectified, still [I’ll ask for] answers on how come did you make this mistake? Why did you take me wrong [as] another person and it cost me my travel to be cancelled? So, to have explainability at this level, potentially you don’t need to explain all of the algorithms. It’s a matter of explaining why this sort of decision was made. For example, there was this person with similar facial features and the same name; or whatever some high-level explanation of what happened in the process that explains why mistake happened, etc.Footnote 23

Second, police officers who directly use the technology will want access to general information about how the system functions, what types of data were used to train the system, the accuracy rates in different settings, how it should be used, its limitations, and so on.Footnote 24

In addition, these stakeholders would benefit from user-friendly explanations about, for instance, which pictures in the watch-list were found to be sufficiently similar to the probe picture and the accuracy rate in relation to that specific match.Footnote 25 This would allow police officers to assess the extent to which they could rely on a specific FRT outcome before proceeding with an action (e.g., stopping an individual for questioning or arrest). Information needs might differ between real-time/live FRT and post FRT (i.e., when FRT is used to find a match for a picture taken some time ago), as the former is considered higher risk.Footnote 26
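How such a user-facing output might look can be sketched in a few lines: a one-to-many search that reports which watch-list entries exceeded a similarity threshold and with what score. The embeddings, identity labels, threshold, and function names are illustrative assumptions, not a description of any deployed police system.

```python
import numpy as np

# Minimal sketch of officer-facing output from a one-to-many watch-list
# search: candidate identities above a similarity threshold, ranked by score.
# Embeddings, labels, and the threshold are illustrative assumptions only.

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_candidates(probe: np.ndarray, gallery: dict, threshold: float = 0.8):
    """gallery maps identity labels to stored embeddings; returns ranked matches."""
    scored = ((name, cosine(probe, vec)) for name, vec in gallery.items())
    matches = [(name, round(score, 3)) for name, score in scored if score >= threshold]
    return sorted(matches, key=lambda pair: pair[1], reverse=True)

# An officer would see, for example, [('entry_017', 0.91), ('entry_244', 0.83)]
# and could weigh the scores before deciding whether to act on the match.
```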

4.2.2.2 High Transparency Needs

Stakeholder groups that are required to assess the quality of a FRT system – certification and auditing authorities, and court experts – have high transparency needs. In order to conduct an expert examination of FRT technology, certification and auditing bodies require access to detailed technical information about the system. This might include algorithmic parameters, training data, processes and methods, validation/verification data and processes, as well as testing procedures and outcomes.

As one of the interviewed IT experts explained:

But if, for example, there is an audit happening. […] then of course, at that level explainability means something completely different. It’s about explaining how the system was designed, how it was being used, what sort of algorithms, what sort of data was used for the training, what sort of design and build decisions were made, and so on.Footnote 27

Similar highly technical information could be demanded in court proceedings by court experts who are invited to assess the quality of FRT used by law enforcement authorities during legal proceedings. Detailed technical information would be necessary to provide technically sound conclusions.

4.2.2.3 Medium/Varying Transparency Needs

The third group of stakeholders might have varied information needs depending on their level of knowledge about AI technologies. Namely, law enforcement authorities, when acquiring the FRT system, would need information that allows them to judge the quality and reliability of the FRT system in question. If they have only general knowledge about FRT, they will merely want to know whether the technology meets industry standards and whether it was certified/validated by independent bodies;Footnote 28 how accurate it is; whether it has been trialled in real-life settings and with what results; and so on. If they have expert knowledge in AI/FRT (e.g., in their IT team), they might demand more technical information, for example, about the datasets on which it was trained and validated, and validation and testing information.

As a final stakeholder group, public interest organisations (researchers and NGOs) have a legitimate interest in accessing information about government FRT use as ‘they are the ones that are most likely to initiate […] strategic litigation and other initiatives’,Footnote 29 and ensure that government is accountable for the use of this technology.Footnote 30 Similarly to law enforcement, their transparency needs will differ depending on their expertise and purpose. Those without expert knowledge in AI might be interested in general information as to the situations in which, the purposes for which, and the extent to which law enforcement is using FRT; the accuracy levels and effectiveness of the technology in achieving the intended aims (e.g., whether the use of FRT led to the arrest of suspected persons or the prevention of a crime); and whether human rights impact assessments were conducted at the procurement level and, if so, their results.Footnote 31 Those with technical expertise in AI might want access to algorithmic parameters and weights, training and validation/verification data, or similar technical information, allowing them to assess the accuracy and possible bias of the technology (similar to the high transparency needs discussed earlier).Footnote 32

These three levels of transparency are relevant when determining the situations in which trade secret protection might become a barrier to ensuring the transparency demanded by stakeholders.

4.3 In Which Situations Might Trade Secrets Inhibit Transparency of FRT?

There are a number of challenges in ensuring transparency around FRT.Footnote 33 One of them is trade secrets, which can arguably create barriers to ensuring transparency of AI technologies in general and FRT technologies in particular. The example often used is the State v. Loomis case decided by a US court, in which the defendant was denied access to the parameters of the risk assessment algorithm COMPAS owing to trade secrets.Footnote 34 In this section, we demonstrate that the picture is more nuanced: while trade secrets might create barriers to transparent FRT in some situations (‘actual conflict’ situations), they are unlikely to interfere with transparency needs in others (‘no conflict’ and ‘nominal conflict’ situations).

4.3.1 The Scope of Trade Secret Protection

In order to understand the situations in which trade secrets interfere with transparency needs around FRT, it is first necessary to clarify which information about FRT could be potentially protected by trade secrets.

Trade secrets are of special importance in protecting intellectual property (IP) rights underlying AI modules, including FRT. In contrast to other IP rights (patents, copyright), trade secrets can be used to protect any elements of AI modules as long as they provide independent commercial value and are kept secret.Footnote 35 Trade secret protection requires neither investment in the registration process nor public disclosure of the innovation.Footnote 36 While trade secret protection has its limitations, such as the possibility of reverse engineering the protected technology,Footnote 37 and a lack of protection against third-party disclosure,Footnote 38 the software industry has so far successfully used trade secrets to protect its commercial interests.Footnote 39

As far as trade secrets and AI are concerned, courts have already indicated that at least certain parts of AI modules can be protected as trade secrets, such as source code, algorithms, and the way a business utilises AI to implement a particular solution.Footnote 40 Keeping in mind the requirements for trade secret protection – secret nature and commercial value – a range of information about AI (including FRT) could possibly be protected by trade secrets: the architecture of the algorithm, its parameters and weights; the source code in which the algorithm is coded; information about the training, validation, and verification of the algorithm, including training and validation/verification data, methods, and processes; real-life testing information (the settings in which it was tested, and the methods and outcomes of testing); and so on. All this information is often seen by AI developers as commercially valuable and is kept secret,Footnote 41 and thus could potentially be protected as trade secrets.Footnote 42

4.3.2 When is the Conflict between Trade Secrets and the AI Transparency Principle Likely to Arise?

Keeping in mind the broad range of information about the FRT that could be protected as trade secrets and the transparency needs of stakeholders (identified earlier), three types of situations could be distinguished.

4.3.2.1 No Conflict Situations

First, in some situations, there would be no conflict between a stakeholder’s transparency needs and trade secret protection, as the information requested by the stakeholder is generally not protected by trade secrets. For instance, individuals subject to FRT would only want general information about the fact that FRT is used by a government authority, where and for what purposes it is used, and so on.Footnote 43 Similarly, police officers using the technology would only need a general understanding of how the technology functions, in which situations it could be used, its accuracy rates, and so on.Footnote 44 Owing to its generally public nature and lack of independent economic value, this information would normally not be protected as trade secrets.

4.3.2.2 Nominal Conflicts

In some other instances, ‘nominal’ conflict situations are likely to arise. First, certification and auditing organisations that are examining the quality of FRT technologies might require access to extensive technical information related to FRT that has commercial value and could be protected by trade secrets, such as algorithmic parameters, training, validation and verification information, and all information related to real-life trials.Footnote 45 Similar information might be requested in court proceedings by court experts who are invited to assess the reliability of the FRT system in question.Footnote 46 As discussed earlier, these types of technical information are likely to be protected as trade secrets: AI developers consider them commercially valuable and tend to keep them secret.Footnote 47

However, we refer to these types of situations as ‘nominal’ conflicts since they could be managed under existing confidentiality/trade secret rules that form part of certification/auditing processes or court procedures. Certification and auditing organisations are normally subject to confidentiality and use the confidential information provided by AI developers for assessment purposes only. Similarly, in court investigations, procedural rules determine how trade secrets disclosed during the court proceedings are protected from disclosure to third parties or to the public.Footnote 48 Since these situations are already addressed under current regulatory or governance frameworks, we will not examine them further.

4.3.2.3 Actual Conflicts

The third type of situation – relating to the transparency needs of law enforcement authorities and public interest organisations – is of most concern, and we refer to these situations as ‘actual conflicts’.

Law enforcement authorities might need access to certain technical information about the FRT (e.g., training, validation and testing information) in order to evaluate its reliability before procuring it.Footnote 49 Public interest organisations, such as NGOs and research organisations, might need access to even more detailed technical information (algorithms, training and validation data, testing data) in order to provide an independent evaluation of the effectiveness of the FRT system used by law enforcement.Footnote 50 As mentioned earlier, technical information is generally considered by AI developers as commercially valuable and is likely to be kept confidential.

It is worth noting that law enforcement authorities are able to obtain certain information through contract negotiation.Footnote 51 However, it is questionable whether this solution is suitable in all cases. Owing to a lack of adequate legal advice, bargaining power, or simply the novel nature of AI technologies, law enforcement authorities might fail to negotiate for appropriate access to all essential information that will be needed during the entire life cycle of the FRT system. Government authorities using AI tools acquired from third parties have already encountered the problem of subsequently getting access to certain confidential information about the AI module.Footnote 52

Similarly, while public interest organisations might acquire certain information about FRT used by government through freedom of information requests,Footnote 53 this solution is limited as the legislation generally protects trade secrets from public disclosure.Footnote 54 Therefore, we see both of these situations as an actual conflict between trade secret rights of AI developers and the AI transparency needs of two major groups of stakeholders (law enforcement authorities and public interest organisations).

4.4 Does Trade Secret Law Provide Adequate Solutions?

Trade secret law provides certain limitations that are meant to serve the interests of the public. Namely, in common law jurisdictions, when a breach of confidentiality is claimed, the defendant could raise a so-called public interest defence. In short, it allows defendants to avoid liability for disclosing a trade secret if they can prove the disclosure was in the public interest.Footnote 55 As explained by the House of Lords, protection of confidential information is based on the public interest in maintaining confidences, but the public interest sometimes favours disclosure rather than secrecy.Footnote 56 However, this public interest defence is of limited, if any, use in addressing the conflict between trade secrets and the legitimate transparency needs of identified stakeholders in an FRT scenario.

First, the scope of this defence is unclear.Footnote 57 Some judicial sources suggest the existence of a broad public interest defence, which is based upon freedom of the press and the public’s right to know the truth.Footnote 58 Other court judgments suggest that the defence should encompass no more than an application of the general equitable defence of clean hands, namely that information exposing a serious wrongdoing of the plaintiff should not be classified as confidential in any case (the iniquity rule).Footnote 59 For instance, Australian courts have confirmed that disclosure in the public interest should be construed narrowly; it should be limited to information affecting national security, concerning a breach of law or fraud, or otherwise destructive to the public, and must be more than simply the public’s interest in the truth being told.Footnote 60

Most importantly, the defence does not provide interested stakeholders with an active right to request information about the FRT technology and its parameters. It is merely a passive defence that could be invoked by a defendant only after they have disclosed the information (or where there is an imminent threat of such a disclosure). In order to disclose the information, the defendant should already have access to the information, which is not the situation of law enforcement authorities or public interest organisations seeking information about the FRT.

The public interest defence could possibly be useful in some exceptional situations. For instance, an employee or contractor of an FRT developer might disclose certain confidential technical information about the FRT system to the public or to a specific stakeholder (a public authority, an NGO, etc.) in order to demonstrate, for example, that the AI developer did not comply with legal requirements when developing the FRT system and/or misled the public or the government authority as to the accuracy of the FRT technology. If breach of confidence is claimed against this person, they could argue that the disclosure served the public interest: the use of an FRT system that is of low quality or biased may lead to incorrect identification of individuals, especially ethnic or gender minorities, which may further result in the arrest of innocent people and the violation of their human rights. The defendant could argue that the disclosure of technical information about such an FRT system would thus help prevent harm from occurring.

Even then, the ability of a defendant to rely on the public interest defence is questionable. For instance, the court might accept the defence if the information is disclosed to government authorities responsible for prosecuting breaches of law or fraud, as ‘proper authorities’ for public disclosure purposes,Footnote 61 but not to public interest organisations or the public generally.Footnote 62 While the law enforcement authority (which is also the user of FRT in this case) might qualify as a ‘proper authority’, a public interest organisation is unlikely to meet this criterion.

Furthermore, if a narrow interpretation of the public interest defence is applied, the defendant would have to prove that the disclosed information relates to ‘misdeeds of a serious nature and importance to the country’.Footnote 63 It is questionable whether a low quality or biased FRT, or the AI developer hiding information about this, would qualify as a misdeed of such serious nature. More problematically, the defendant might not know whether the FRT does not meet certain industry or legal standards until the technical information is disclosed and an independent examination is carried out.

4.5 Conclusions

There is no doubt that transparency is needed around the development, functioning, and use of FRT in the law enforcement sector. The analysis here has shown that in some cases trade secrets do not impede the transparency around FRT needed by some stakeholders (e.g., affected individuals or direct users of FRT), and that some possible conflicts can be resolved through existing arrangements and laws (e.g., in relation to the transparency needs of certification and auditing organisations, and court participants). However, trade secrets might conflict with the transparency needs of some stakeholders, especially law enforcement authorities (after acquiring the technology) and public interest organisations that might want access to confidential technical information to assess the quality of the FRT system. Unfortunately, trade secret law, with its unclear and limited public interest exception, is unable to address this conflict. Further research is needed as to how the balance between the proprietary interests of AI developers and the transparency needs of other stakeholders (law enforcement authorities and public interest organisations) could be struck.

5 Privacy’s Loose Grip on Facial Recognition Law and the Operational Image

Jake Goldenfein
5.1 Introduction

‘Privacy’ has long been central to understanding the impacts of facial recognition and related technologies. Privacy informs the intuitions, harms, and legal regimes that frame these technological systems. Privacy and data protection law already have a ready-at-hand toolkit for related practices such as closed-circuit television (CCTV) in public space, surreptitious photography, and biometric data processing. These regimes measure facial recognition applications against familiar privacy and data protection categories such as proportionality, necessity, and legality, as well as identifiability and consent. But as facial recognition becomes more widespread and diverse, and the tools, ecosystems, and supply chains for facial recognition become more visible and better understood, these privacy and data protection concepts are becoming more difficult to consistently apply.

For as long as privacy has been deployed to constrain facial recognition, analysts have been decrying its inadequacy. This research typically identifies some novel dimension of harm associated with facial recognition that evades existing regulatory strategies. This chapter proposes an alternate diagnosis for why privacy fails to deliver premised on the nature of facial recognition as a broader socio-technical system. The jurisprudence shows that privacy and data protection function as intended at the level of ‘applications’ such as one-to-one and one-to-many identification and identity verification systems. But emerging cases show how privacy concepts become awkward and even incoherent when addressing different dimensions of the facial recognition ecosystem – at the level of ‘tools’ and supply chains, such as biometric image search engines and the production of facial image datasets. Inconsistencies in how law connects to this part of the facial recognition ecosystem challenge the suitability of regulatory concepts like identifiability and consent, the nature of harm being addressed, and perhaps most fundamentally, how privacy conceptualises the nature of online images. New rules for facial recognition products and applications are being included in the in the risk-based regulatory regimes for artificial intelligence (AI) in development around the world. In the EU, these include prohibitions on untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. But as described below, the industrial organisation of the facial dataset business will continue to thwart these regulatory efforts, and privacy and data protection will continue to be legal bases for litigation against companies using facial recognition today and in the future.

This chapter offers an account as to why privacy concepts lose traction in this arena. It argues that existing regulatory approaches reflect an understanding of images as primarily ‘representational’, whereas facial recognition demonstrates that online images are better understood as ‘operational’ or ‘operative’. The operational image does not simply represent a referent but actively enables and participates in a sequence of automated operations. These operations take place at the level of facial recognition supply chains, where existing law struggles to find traction. Law’s inability to come to terms with the operational image pushes existing legal categories to the limits of their utility.

5.2 Facial Analysis and Identification

Privacy law and emerging AI regulations have effectively addressed the ‘watch list’ type facial recognition applications that come up in human rights litigation. For instance, the 2020 Bridges v. South Wales Police decision found the South Wales Police (SWP) force’s use of facial recognition in public to identify individuals on a watch list was a violation of Article 8 of the European Convention on Human Rights (ECHR).Footnote 1 SWP deployed their surveillance system at large public events, using CCTV towers that collected footage of individuals in public, and performed real-time facial recognition against a database of persons of interest. Despite legislation allowing for the creation of that watchlist, the exact parameters for inclusion were not clear. The practice violated the ECHR Article 8 because, while proportionate and strictly necessary for the law enforcement purpose for which it was deployed, it failed to be ‘in accordance with the law’ in certain respects. Specifically, the enabling legislation and applicable Codes of Conduct failed to adequately specify rules around who could be the subject of surveillance (i.e., who could be placed on a watch list in the first place), or where facial recognition systems could be deployed. The enabling law thus gave police too much discretion. These issues have also clearly informed the regulation of biometric identification by law enforcement in the EU AI Act.

But the Bridges case also highlighted some conceptual issues of interest to the argument made in this chapter, in particular the court’s conceptualisation of facial recognition as something different from both (1) police taking photographs of people in public and (2) the collection of biometric data such as fingerprints.Footnote 2 Facial recognition occupied a place somewhere between the two in terms of level of intrusion, generating some conceptual discomfort for privacy. And while this was ultimately of little consequence to the court’s decision, with facial recognition easily enough absorbed into a human rights proportionality analysis without the court having to delve deeper into facial recognition’s ‘in-between’ character, the inability to analogise this in-betweenness with existing police techniques was not merely a matter of novelty. This type of watch list surveillance and associated photography, including real-time (non-automated) identification, has been practised by police for decades. But facial recognition’s in-between character reflected something more fundamental about the media system that automates the identification task – its operationalism.

The argument made here is that facial recognition and related techniques are a function of the operational image.Footnote 3 The central insight of operationalism is that the ontology of images has shifted from one of representation to that of an element in a sequence of operations that are typically machine executed. Mark Andrejevic and Zala Volcic, for instance, describe the ‘operational enclosure’ through which the operational image includes automated identification, social sorting, decision-making, and responses that enable the governance of space.Footnote 4 Their basic example is facial recognition in retail stores that, when identifying a person on a watch list, not only calls security, but also actively locks the doors. This example also exemplifies Trevor Paglen’s emphasis that the audience for (operational) images is no longer humans but rather machines.Footnote 5

The operational image reconfigures images as the communicative instruments of automated non-human visuality. Images consumed by humans are increasingly the output of machines staging what they ‘see’ as a derivative function. But the primary audience of an image is a complex network of machines, with human-legibility a trivial or arbitrary secondary process. As Andrejevic and Volcic note, ‘In the case of facial recognition technology, there is, still, a camera with a lens, but for the purposes of recognition and response no image need be produced.’Footnote 6 The operational function of an image in the facial recognition context is, on the one hand, its capacity to communicate biometric information to other machines, which can then trigger various actions as described by Andrejevic and Volcic. On the other hand, facial images themselves have become operational through their absorption into an ecosystem and economy of image databases, search engines, and AI model training and benchmarking. In other words, online images are operationalised by the biometric supply chain. This additional operational character is revealed through the existence of companies and tools like Clearview AI, as well as the proliferating number of massive image datasets built from web-scraping and surreptitious public photography.Footnote 7

Privacy and data protection law struggle to accommodate this theorisation of images and this domain of economic activity. For instance, the operational image ontology suggests images are always already enrolled in a biometric recognition process. Privacy and data protection law, however, understand images as ‘representations’ of a referent, amenable to subsequent human interpretation and inference. Under the GDPR, for example, images are only considered biometric data after ‘specific technical processing’ that renders them comprehensible to a machine.Footnote 8 In other words, privacy and data protection law insist on the separation of images and any biometric information that can be derived from them.Footnote 9 This means images alone cannot be biometric data. Various authors have pointed out that this is contrary to technical understandings of biometrics,Footnote 10 which would conceptualise every image as also a biometric sample, and the beginning of a biometric ‘operation’. And as discussed in Section 5.4.1.1, companies such as Clearview AI are exposing that a degree of processing of images, even if simply for aggregation in datasets, is already the default status of images online.

The following sections describe the different treatment of image and biometric data in existing law, with a focus on how the operational character of images expresses itself as conceptual confusion in how privacy addresses the tools and supply chains that make up the facial recognition ecosystem.

5.3 What Kind of Data Is That?
5.3.1 Images

The following section spells out some of the internal ambiguities and inconsistencies that make the application of privacy and data protection to facial recognition supply chains difficult. The ambiguities exist even at the most basic definitional level. Privacy law typically deals with images that are identified, in cases where publication might diminish seclusion or reputation. Data protection law also governs anonymous images because the definition of ‘personal data’, the threshold for data protection’s application, only requires that data be reasonably identifiable rather than identified.Footnote 11 There is a general presumption that images including a face satisfy that definition, the processing of which then requires a ‘lawful basis’, the most relevant being consent or the legitimate interests of the data processor.

The presumption that images showing a person’s face are always personal data is not entirely settled, however. Even European national data protection authorities give conflicting advice. For instance, the UK Information Commissioner’s Office notes that an image taken in public containing recognisable faces may not be personal data if the image is not subsequently processed to learn or decide anything about any of the individuals that are imaged.Footnote 12 The German data protection authority, however, argues that all images of people contain personal data: ‘photographs, whether analogue or digital, always contain personal data … if persons can be identified on it’.Footnote 13 Advice given by other institutions is even more confusing. For instance, Oxford University’s staff guidance on data protection suggests images will be personal data if individuals are the ‘focus’ of an image, but not if those individuals or groups are not the focus of the image, whatever that means.

Identification and identifiability are not always central to facial recognition and analysis, however. Not all facial recognition or analysis tasks link images to natural persons. Some may identify the same person across multiple instances of a database or across multiple cameras recording physical space. In these cases, there is an argument that facial images used in the biometric process still constitute personal information on the principle of ‘singling out’. This early interpretation of ‘identified’ proposed by the Article 29 Working Party captures systems that distinguish an individual from a group of people without the need to connect them to a natural person.Footnote 14 Although cited several times in the jurisprudence, this definition is not necessarily authoritative.

5.3.2 Biometric Data

Under the GDPR, biometric data is a sub-species of personal data defined as the output of specific technical processing with a view to unique identification of a natural person.Footnote 15 It qualifies as a ‘special category of personal data’, requiring higher levels of protection including explicit consent for processing. The definition of ‘identified’ in this context is narrower than for personal data, as it requires a clear connection to a natural person. As Bilgesu Sumer notes, ‘Under the current system, the threshold for identifiability for biometric data can be invoked only if there is an already identified individual under the GDPR.’Footnote 16 Some data privacy laws, such as Australia’s, include ‘biometric templates’ as protected ‘sensitive data’. But as mentioned earlier, not all biometric templates are identified or created for the sake of identification, meaning the Australian definition raises confusing questions of whether there can be ‘sensitive data’ that is not also ‘personal data’.

Whether and when biometric data constitutes personal data at all was a live question in policy discussions around the scope of data protection at the turn of the millennium. In 2003, the Article 29 Working Party suggested that biometric data is not personal data when templates are stored without images.Footnote 17 By 2012, however, that same group, without much elaboration, indicated that ‘in most cases biometric data are personal data’.Footnote 18 Biometric data was not considered sensitive (or a special category of) data at that point, though, because it did not reveal sensitive characteristics about the identified person. This position evolved again with the GDPR, as policymakers began describing certain intrinsically sensitive characteristics of biometric data, such as its persistence (non-changeability, non-deletability), its capacity to make bodies ‘machine readable’, its use in categorisation and segregation functions, and the way it could be used to track users across space without ever linking to their natural identity.Footnote 19 However, if the purpose of processing biometric data is ‘categorisation’ rather than unique identification of a natural person, it is still not considered processing of a special category of personal data.

Data protection (and privacy) law’s relationship to biometrics – the requirement that a natural person be identified for biometric data to be considered a special category of personal data, and the related exclusion of unprocessed (or raw) images or videos from the definition of biometric data – is strongly informed by older biometric techniques. These rules imagine a database containing biometric information generated through enrolling an individual in a biometric system such as fingerprinting or DNA extraction. Privacy law identifies DNA and fingerprint information as especially sensitive types of identity information, necessitating rigorous protections and checks and balances.Footnote 20 But the law that developed around these techniques did not anticipate the reality that biometric ‘enrolment’ is no longer the only way to build a biometric system. It did not anticipate that any image contains within itself, easily coaxed out through readily available algorithmic methods, biometric data that might contribute to the construction of a facial image dataset or facial recognition search engine, or some other part of the biometric supply chain.

The realities of biometric supply chains and facial recognition ecosystems trouble these long-held settlements undergirding existing regulatory strategies. The separation between ordinary portraits and biometric samples embedded in data protection law does not match the reality that all images are now already also ‘biometric samples’ – the first step in the biometric processing pipeline. Acknowledging the operational character of images would help make sense of juridical treatments of facial recognition, under privacy and data protection, that are becoming increasingly diverse, as well as assist in drawing adequate legal attention to the processes and supply chains that make up the broader facial recognition ecosystem and economy. This is the less-visible system of circulation involving a range of corporate, government, and university actors, using a variety of techniques such as web scraping and surreptitious photography, to produce products for research and profit such as benchmarking datasets, training datasets, facial recognition models, and search tools.

5.4 Representationalism versus Operationalism in the Case Law
5.4.1 Non-Identity Matching Cases

While images are operationalised for facial recognition through supply chains, privacy and data protection’s failure to attend to the operational image manifests at all levels of the facial recognition ecosystem. Facial recognition is not always used to match a biometric template with a natural person. Facial analysis sometimes involves consumer profiling (demographics, sentiment analysis, etc.) or location tracking (i.e., identifying a person as they move through a store/space). These instances highlight some confusion and inconsistency within privacy and data protection’s conceptual apparatuses.

The Office of the Australian Information Commissioner (OAIC), for instance, evaluated a profiling system used by the 7–11 chain of convenience stores. Without clear notice, 7–11 deployed a facial recognition system for demographic (age and gender) analysis of individuals who engaged with a customer feedback tablet. The system also created a faceprint (i.e., biometric template) for the sake of quality control. To ensure the same person did not give multiple survey results within a twenty-four-hour period, faceprints were stored and compared, with multiple matches within that period flagged as potentially non-genuine feedback responses.
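The mechanism at issue can be illustrated with a minimal sketch of time-bounded faceprint matching; the similarity function, threshold, and retention window below are illustrative assumptions rather than a description of the system the OAIC examined.

```python
from datetime import datetime, timedelta

# Minimal sketch of time-bounded faceprint matching: flag a feedback entry as
# potentially non-genuine if a sufficiently similar faceprint was already seen
# within the previous twenty-four hours. The threshold and similarity function
# are illustrative assumptions only.

RETENTION = timedelta(hours=24)

def is_repeat(new_print, new_time: datetime, stored, similarity,
              threshold: float = 0.9) -> bool:
    """stored is a list of (faceprint, timestamp) pairs retained for 24 hours."""
    recent = ((fp, t) for fp, t in stored if new_time - t <= RETENTION)
    return any(similarity(new_print, fp) >= threshold for fp, _ in recent)
```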

7–11 argued that neither the images collected nor the faceprints extracted were personal information because they were not collected or processed for the sake of identifying a natural person. The images were also automatically blurred when viewed by human staff. The OAIC determined, however, that the twenty-four-hour matching system ‘singled out’ individuals by comparing each person’s faceprint against all other faceprints held in the system, which required giving them a unique identifier. Here, the images and faceprints were linked by a ‘purpose’, which was the pseudo-identification. Contrary to other similar legal regimes (i.e., the US State of Illinois Biometric Information Privacy Act (BIPA),Footnote 21 and the GDPR), the OAIC even found that the raw images collected were biometric information, and thus sensitive information, because they were collected for the purpose of biometric identification. The recombination of image and biometric data in this case, so explicitly rejected elsewhere, might appear to reflect some acknowledgement of the operational character of the image, but it is better understood as an outlier, representing conceptual confusion more than a considered position. It has not been replicated in subsequent OAIC determinations considering facial recognition.Footnote 22

Other legal regimes, such as BIPA, more explicitly avoid the issue of how to conceptualise images in a biometric context. Rather than recognise images as potentially also ‘biometric samples’, BIPA simply excludes photographs from its definition of biometric identifiers. The creation of biometric information alone invokes the Act, eliding the issue of biometric data and identifiability.Footnote 23 On that basis, TikTok’s collection of facial landmarks used in demographic profiling for advertising and augmented reality ‘filters’ and ‘stickers’ was illegal under BIPA. Despite TikTok’s arguments that all biometric data collected was anonymous, the company ultimately settled the case for $92 million, as questions of identifiability and anonymity (i.e., the relations of biometric information to images) are not relevant to the BIPA regime, which applies as soon as biometric data has been generated.

The diversity of legal treatments and the problems associated with maintaining the separation between images and biometric data only intensifies as we move further along the facial recognition supply chain.

5.4.1.1 Clearview AI Cases

Clearview AI collects as many images of people available online as possible (approximately 1.5 billion images collected per month), storing them in a database linked to their source URLs. Clearview AI extracts biometric information from every face in every image and uses that biometric data to create a unique mathematical hash for each face. Those hashes make the image database searchable via a ‘probe image’ that is itself hashed and compared against the database. Any matches between the probe image and the image database are then provided to the user along with image URLs. Litigation so far has assumed the availability of the system only to law enforcement (and related entities), although Clearview AI now also provides biometric products to the private market.
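The supply chain just described can be sketched in code. The following is a minimal illustration, in Python, of the pipeline the litigation describes – scraped faces reduced to vectors, stored alongside source URLs, and searched with a probe image – and not Clearview AI's actual implementation. The `embed()` function is a deterministic placeholder standing in for a real face-embedding model.

```python
# Hypothetical sketch of a URL-indexed, searchable face database (not Clearview AI's code).
import numpy as np


def embed(face_pixels: np.ndarray) -> np.ndarray:
    """Placeholder for a face-embedding model; maps a face image to a fixed-length vector."""
    seed = abs(hash(face_pixels.tobytes())) % (2**32)
    return np.random.default_rng(seed).standard_normal(128)


class FaceIndex:
    def __init__(self) -> None:
        self.vectors: list[np.ndarray] = []  # one embedding per scraped face
        self.urls: list[str] = []            # source URL for each face

    def add(self, face_pixels: np.ndarray, source_url: str) -> None:
        self.vectors.append(embed(face_pixels))
        self.urls.append(source_url)

    def search(self, probe_pixels: np.ndarray, top_k: int = 5) -> list[str]:
        """Embed the probe face and return the source URLs of the closest stored faces."""
        probe = embed(probe_pixels)
        sims = [float(np.dot(probe, v) / (np.linalg.norm(probe) * np.linalg.norm(v)))
                for v in self.vectors]
        ranked = np.argsort(sims)[::-1][:top_k]
        return [self.urls[i] for i in ranked]
```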

Judicial treatment of Clearview AI has consistently found that the company processes personal and sensitive data, and therefore requires consent from the individuals in the images it collects. Clearview AI persistently argues that the data it processes is neither personal nor sensitive, but fails on this claim. The French data protection authority, CNIL, similarly (although somewhat circularly) stipulated in its finding against Clearview AI that images are personal data as soon as an individual can be recognised, and that Clearview AI's capacity to compare an image with another makes those images identifiable.Footnote 24 Because Clearview AI does not, however, perform a specific processing operation for the unique identification of a natural person, the images it collects and the biometric data it extracts are not special categories of personal data. Because Clearview AI only processes personal data and not special categories of personal data, that processing could be lawful even without consent under GDPR Article 6, for instance if it were in the legitimate interests of the company. However, the court dismissed the possibility of any legitimate interest because individuals who placed their images online would not have 'reasonably expected' those images to be fed into a biometric search engine that might be used for law enforcement purposes.Footnote 25 But this finding around reasonable expectations is a flimsy hook on which to hang Clearview AI's privacy violations, and it explicitly rejects the operational character of images. Do individuals still expect that images published online are not used to train AI models or produce image datasets? Do individuals still believe that the function of an online image is its presentation to other humans? How long can such expectations persist?

There was a similar moment in an Australian finding against Clearview AI. The Australian regulator determined that the images collected by Clearview AI were personal data because Clearview AI’s purpose is to facilitate identification.Footnote 26 And the biometric data was sensitive because the Australian definition includes biometric templates even if not used for the specific identification of a natural person. When contemplating whether Clearview AI satisfied any exceptions for processing sensitive information without consent, the OAIC indicated that the individuals whose personal and sensitive information was being collected by Clearview AI would not have been aware or had any reasonable expectation that their images would be scraped and held in a database. Further, no law enforcement exceptions applied because ‘only a very small fraction of individuals included in the database would ever have any interaction with law enforcement’.Footnote 27

These discussions of 'reasonable expectation' expose something about data privacy law's relationship to operationalism. On one hand, the breach of reasonable expectations about images entering law enforcement databases makes sense – there is a liberal privacy harm associated with being enrolled in a police database when a person is not deserving of suspicion. That has served as a normative boundary in privacy jurisprudence for some time. But on the other hand, this is not really enrolment in a police database: Clearview AI's database is an index of all the images on the internet that is, at the moment, primarily available only to police, but increasingly to private parties. Determining whether Clearview AI breached data privacy law by reference to a normative standard associated with delimiting the state's policing powersFootnote 28 does not seem adequate if we understand Clearview AI as just one of a large and growing number of image databases and biometric services that operationalise facial images by scraping the internet. What Clearview AI explicitly demonstrates is that there is no longer a police database; the internet is already an image database that is operationalised through a biometric supply chain.

Online images are sometimes viewed by humans or police, but they are primarily viewed by other machines such as web-scraping software and facial recognition algorithms for the sake of assembling the facial image datasets and searchable biometric databases that power a broader biometrics economy and ecosystem. Regulating these systems by consent (as required when defining the biometric data involved as sensitive – or a special category of personal – data) only makes sense when we imagine the internet as a media system browsed by humans,Footnote 29 where image consumption and processing is neither automatic nor at scale. Clearview AI is a jarring demonstration of the reality that humans do not browse the internet; the internet browses us.

5.4.1.2 Scraping and Dataset Cases

Clearview AI has exposed how legal settlements informed by rhetorics of the 'open internet' – which, for instance, stabilised the legality of web-scraping and indexing and enabled search engines to evolve – are now straining in the context of massive data aggregation for training large machine learning models.Footnote 30 Facial recognition has its own scraping dynamics that produce not only search engines, but also facial image datasets that, while frequently produced by research teams in non-commercial contexts, have massive economic value and include a huge number of individuals. The market for datasets was estimated to be $9 billion in 2022.Footnote 31 There are a number of giant image datasets containing images of any person for whom a multitude of images is available online – be they celebrities, political figures, or activists.Footnote 32 For instance, the 'Have I Been Trained' tool can identify whether individuals are included in the notorious LAION 5B and LAION 400M datasets, used to train a substantial number of AI tools and since refined into a large number of other industrially valuable image datasets.Footnote 33 To some extent, the new rules in the EU AI Act will prohibit the type of indiscriminate scraping performed by Clearview AI. But because the rules only address 'untargeted' scraping for the creation of 'facial recognition databases', they will hardly disturb the facial image dataset industry. As discussed below, apart from Clearview AI, the majority of the industry is vertically dis-integrated, meaning that the entities doing the scraping produce facial image datasets, not biometrically identified facial recognition databases like Clearview AI's.

Scraping and dataset production are often carried out by companies or research institutions not themselves involved in biometric analysis or facial recognition applications, but which still perform a critical task in the facial recognition supply chain. Companies producing image datasets typically argue that images without names do not constitute personal information. Alternatively, they may claim only to index image URLs, not the images themselves (i.e., making images available for other parties to download), so as not to process image data at all. If they are processing images, that processing is claimed to be legal because it is in the legitimate interests of the entity.Footnote 34 Many image datasets are made available without any associated biometric information, with subsequent users performing biometric analysis to link particular individuals across multiple images. Sometimes they are simply used to test and benchmark algorithmic models, enabling a demonstration of an algorithm's efficacy.Footnote 35 These companies mostly evade privacy scrutiny and will likely avoid regulation by the AI Act. Clearview AI managed to attract legal attention for its supply chain activities because its vertical integration (i.e., because it scraped the images, ran the biometric analysis, and sold the identification service) linked those supply chains to the product/application level where privacy and data protection more comfortably apply.

Image datasets are also created without web-scraping – typically through surreptitious photography. Facial recognition in public space has different demands from identity verification systems that use portraits for biometric enrolment. Images scraped from the web are frequently too posed and flat-angled to produce biometric models able to identify individuals from images and video captured from more common surveillance vantage points. Facial recognition in the wild needs images of people walking around, looking at their phones, being unknowingly recorded. This is why, for instance, datasets such as Brainwash, produced with a webcam in a café capturing images of returning customers waiting to order coffee, and the Duke Multi-Target, Multi-Camera Unconstrained College Student Dataset, produced with synchronised surveillance cameras taking pictures of students walking between classes from a university office window, are so valuable.Footnote 36 Data scientists are increasingly seeking access to CCTV footage for building novel datasets.Footnote 37 Although surveillance for dataset construction does not raise the same risk of real-time mass surveillance that animates privacy thinking, in the world of operationalism those images still participate in the facial recognition ecosystem and economy, raising new critical questions that few existing legal concepts, let alone privacy, are able to answer.

A comprehensive analysis is beyond the scope of this chapter, but no legal regime clearly imposes meaningful limitations in this domain. The HiQ v. LinkedIn case seemingly upheld the legality of scraping under the US Computer Fraud and Abuse Act, even if contrary to platform terms of service.Footnote 38 Scraping does not interfere with personal property interests because there are no property rights in data. Exploitation of Creative Commons non-commercial licensed images is permissible because of the data laundering (commercial/non-commercial) techniques described in footnote 37 as well as the general copyright exemptions for research purposes.Footnote 39 Some argue that scraping images to build datasets or train algorithms does not involve market substitution or replication of any ‘expressive’ dimension of images, meaning it may not violate copyright anyway.Footnote 40 There are already fair use (or equivalent) exceptions for search engines in many jurisdictions.Footnote 41 The US privacy-adjacent right of publicity is unlikely to apply when a scraped image has no commercial value prior to its appropriation and exploitation and does not result in subsequent publication.Footnote 42

It will be interesting to see the outcome of the pending Vance v. IBM litigation concerning IBM's refining of Flickr's YFCC100M dataset into the Diversity in Faces dataset.Footnote 43 But this case also deals only with governance of biometric information and not the images from which that biometric data is derived, meaning it will not enjoin dataset creation more generally. At the same time, industry- and research-aligned actors have started pushing in the other direction, arguing for freedoms to use and reuse datasets,Footnote 44 for rights 'to process data' without consent,Footnote 45 and for clear copyright exceptions or usufructuary rights over property interests, in order to maximise capacities to build and train machine learning models.Footnote 46

5.5 Conclusion

The way privacy and data protection are configured may make sense if online images are representations of individuals, browsed by humans, at risk of certain autonomy effects; but it makes much less sense if images are already part of a socio-technical ecosystem, viewed primarily by machines, used to train and benchmark facial recognition algorithms in order to produce economic value. Privacy and data protection’s representationalism struggles to grasp the mobilisation of images as supply chain components in a dynamic biometric ecology. This chapter has argued that the issues in this ‘back end’ of the facial recognition ecosystem are very different from those that have been typically raised in privacy discussions. Here, regulatory questions intersect with what has become a new frontier of value creation in the digital economy – facial recognition model training. The concern is no longer exclusively losing anonymity in public, but also information being captured from public spaces, not for the sake of identifying you, but for the sake of generating an archive of images of you in the wild in order to train facial recognition models and extract economic value. Once we pay attention to how facial recognition systems are built and function, privacy and data protection start to lose their grip.

6 Facial Recognition Technology and Potential for Bias and Discrimination

Marcus Smith and Monique Mann
6.1 Introduction

Facial recognition technology (FRT) is one of several data-based technologies contributing to a shift in the criminal justice system, and society more broadly, towards 'automated' decision-making processes. Related technologies include other forms of biometric identification and predictive policing tools. These technology-based applications can potentially improve investigative efficiency but raise questions about bias and discrimination.Footnote 1 It is important for designers of these systems to understand the potential for technology to operate as a tool that can discriminate, furthering biases that are already entrenched in the criminal justice system.

This chapter examines how FRT contributes to racial discrimination in the criminal justice system, potentially exacerbating the existing over-representation of racial minorities. From one perspective, this technology may be viewed as a value-neutral, objective decision-making tool, free from human prejudice and error. However, it is also recognised that FRT, and the associated algorithms, are dependent on datasets that influence their performance and accuracy.Footnote 2 If the input data is biased, so too is the algorithm, and consequently the eventual decisions and outputs. Moreover, this discriminatory potential inherent in the technology is compounded by existing discrimination and over-representation of minority groups.

The chapter is divided into four parts: the first discusses FRT, including current applications. The second discusses the potential for bias and discrimination in the criminal justice system in relation to FRT. The third moves away from a focus on technology and considers social and structural discrimination, integrating the relevant critical literature into our argument. Finally, we conclude that even if the technology could be designed in a way that was completely free from discrimination in a techno-determinist sense, it may still be used to discriminate, given, for example, the long-standing over-policing and disproportionate representation of marginalised groups in the criminal justice system. This should be considered by governments when regulating FRT and by law enforcement and judicial officers making decisions that are informed by it.

6.2 Facial Recognition Applications and Issues

The face is central to an individual’s identity and, consequently, to identifying suspects in criminal investigations. The analysis of faces by law enforcement has progressed from descriptions and sketches of suspects to the contemporary biometric integrated closed-circuit television (CCTV) technology widely used around the world in both the public and private sectors today.Footnote 3 Although there are many applications of FRT, its fundamental process remains the same. FRT involves the automated extraction, digitisation, and comparison of the geometric distribution of facial features in a way that can identify individuals. It begins with a digital image of a subject’s face, from which a contour map of their features is created and then converted into a digital template. An algorithm compares digital templates of facial images and ranks them according to similarity.Footnote 4
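A minimal sketch of this comparison-and-ranking step, in Python, is set out below. It assumes facial features have already been extracted as landmark coordinates; real systems typically use learned embeddings rather than raw landmark geometry, so the representation, the distance measure, and the gallery structure are all illustrative assumptions rather than a description of any deployed system.

```python
# Hypothetical sketch: normalise facial landmarks into templates and rank gallery matches.
import numpy as np


def to_template(landmarks: np.ndarray) -> np.ndarray:
    """Turn a (num_landmarks, 2) array of facial landmark coordinates into a
    translation- and scale-normalised template vector."""
    centred = landmarks - landmarks.mean(axis=0)
    return (centred / np.linalg.norm(centred)).ravel()


def rank_candidates(probe_landmarks: np.ndarray,
                    gallery: dict[str, np.ndarray]) -> list[tuple[str, float]]:
    """Compare the probe template against each gallery template and rank candidates
    by similarity (smaller distance means a closer match)."""
    probe = to_template(probe_landmarks)
    distances = {name: float(np.linalg.norm(probe - to_template(lm)))
                 for name, lm in gallery.items()}
    return sorted(distances.items(), key=lambda item: item[1])
```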

There are two ways in which FRT is used. The first, and less controversial, is one-to-one matching. It is used to verify the identity of a person: for example, as a security feature granting access to a smartphone, or to confirm a traveller's identity at an international border. The use of FRT expanded rapidly following the 9/11 terrorist attacks in 2001, when it was widely integrated into passports and international border control security systems, allowing the comparison of a facial template with a live image created using SmartGate technology.Footnote 5

The second way it can be used is one-to-many searching: the focus of this chapter. One-to-many searching seeks to identify an unknown person, for example by scanning CCTV footage of a crowd or images gathered from social media sites or more widely on the internet. Police could search based on a photograph of an unknown suspect to identify them or search for a known person in a crowd in real time. The integration of FRT with CCTV to identify unknown persons in public spaces is a major change that has taken place progressively over the past twenty years, to the point where it is normalised and widely used today. Examples of this type of application include not only fixed cameras, but also cameras on vehicles, body worn cameras, and drones, to search public spaces for persons of interest using integrated FRT.Footnote 6

More recently, FRT has been used to search images from the internet, including images uploaded to social media, from sites such as Twitter, Instagram, LinkedIn, Google, and Facebook. Facebook alone has had over 250 billion images uploaded.Footnote 7 The use of Clearview AI by law enforcement agencies around the world came to light in 2020, and the company has been the subject of public debate and controversy, not least from social media and other internet companies that commenced legal action over the right to use these images. They claim its business model contravenes the terms of service of the websites from which the images were harvested. In addition to the widespread use of the Clearview AI application by law enforcement agencies, the company also provides its services to the private sector, raising broader concerns. Clients that use the company's services for security purposes include the National Basketball Association, Bank of America, and Best Buy.Footnote 8 The use of images from the internet demonstrates how facial templates can be collected and used in ways that individuals may not be aware of, and has the potential to connect many sources of data. It also provides insights into the scale of use of FRT, adding to the significance of racial discrimination and other pertinent issues in this context.Footnote 9

There are inherent limitations in the use of FRT for one-to-many identification that extend beyond bias and discrimination. Accuracy is affected by factors such as the quality of the images and cameras used, and the background and lighting conditions when the images were taken. Individual changes can also affect accuracy, including plastic surgery, ageing, weight gain, and facial coverings, such as the surgical masks that became commonplace during the COVID-19 pandemic.Footnote 10 In 2020, technology companies including IBM, Amazon, and Microsoft announced they would pause (or cease altogether) sales of their FRT to law enforcement and border security agencies owing to concerns around accuracy and privacy (Clearview AI was a notable exception to this position).Footnote 11 Some local governments in the United States – Somerville, Massachusetts, and San Francisco, California – have also prohibited any city department, including law enforcement, from using FRT.Footnote 12

FRT has been found to be less accurate when used for the purposes of identifying people with darker skin tones, meaning that police deployment of FRT in criminal investigations can increase the likelihood that ethnic minorities will be wrongfully identified and prosecuted for crimes that they have not committed.Footnote 13 If this is not considered and addressed, it will likely increase the interaction of these individuals with police and compound their existing over-representation in the criminal justice system.

The issues we have raised in relation to racial discrimination cannot be viewed in isolation. In liberal democracies, there is ongoing tension between security, individual privacy, autonomy, and democratic accountability. The rapid growth and application of FRT in both the private and public sectors creates a power imbalance between individuals and the state (and corporations); its use should be limited to specific and justified purposes (i.e., where the use of FRT is deemed to be both necessary and proportionate), with associated data and images carefully protected. Even where FRT is justified for security purposes and privacy concerns are mitigated, it must be subject to accountability mechanisms to prevent misuse. Moreover, citizens should be informed about the potential use of their images for facial recognition and should have meaningfully consented to their use. Whether these systems are operated by public or private sector agencies or law enforcement, regulatory options should be publicly debated, and their use governed by legislation and subject to judicial review.

6.3 Data, Bias, and Racial Discrimination

In 2020, a police investigation in Detroit involving Robert Williams received attention in the national press in the United States. Williams, an African American man, was arrested for shoplifting based on a facial recognition identification. He was held for thirty hours before posting bail, but the identification was later established to be a false match between his driver's licence photograph and distorted crime scene surveillance footage. The police department provided an apology and instigated a review of the use of FRT. Williams commenced litigation against the police department seeking compensation for his treatment.Footnote 14 The incident highlights the risks of inaccurate technology being used to identify suspects and relied upon in an arrest. Williams's case is one of several similar examples from across the United States that have drawn attention to the potential for racial bias to occur in relation to facial recognition, and for this to exacerbate the over-representation of minorities.

These incidents took place around the same time as the murder of George Floyd by a police officer, and the subsequent attention on the issue of racial discrimination through the Black Lives Matter movement.

The existing over-representation of minority groups in police databases will mean that they are more likely to be identified using facial recognition. Brian Jefferson notes that in the United States more than three-quarters of the black male population is listed in criminal justice databases.Footnote 15 Because facial images are included in these databases, they can also be used for analysis by FRT. Depending on the specific use cases (i.e., how the technology is deployed and the watchlists used), it is reasonable to suggest that FRT directs police towards those individuals who are already known to them.

There are also data-based reasons why minority groups may be subjected to mis-identification, or over-identification, in relation to FRT, as established by empirical studies on the issue of racial bias associated with FRT. In 2019, a National Institute of Standards and Technology (NIST) report indicated that the technology achieved significantly lower rates of accuracy for African American and Asian faces – in fact, it found that faces of these groups were between 10 and 100 times more likely to be mis-identified than white male faces.Footnote 16 This is supported by other research which has found that the mis-identification rate for dark-skinned women is about 35 per cent, fifty times higher than for white males.Footnote 17
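Findings of this kind rest on disaggregated evaluation: false match rates are computed separately for each demographic group rather than averaged across the whole test set, and the headline figures come from comparing those group-level rates. The following sketch, in Python, illustrates that calculation; the trial record format is invented for illustration, and a real evaluation such as NIST's would use millions of labelled comparisons.

```python
# Hypothetical sketch of a per-group false match rate audit (illustrative data format).
from collections import defaultdict


def false_match_rate_by_group(trials: list[dict]) -> dict[str, float]:
    """Each trial is a dict: {'group': str, 'same_person': bool, 'system_matched': bool}.
    A false match is an impostor comparison (two different people) that the system matched."""
    impostor_counts: dict[str, int] = defaultdict(int)
    false_matches: dict[str, int] = defaultdict(int)
    for trial in trials:
        if not trial["same_person"]:
            impostor_counts[trial["group"]] += 1
            if trial["system_matched"]:
                false_matches[trial["group"]] += 1
    return {group: false_matches[group] / count
            for group, count in impostor_counts.items() if count > 0}


# Comparing the highest and lowest group-level rates is what produces ratios such as
# '10 to 100 times more likely to be mis-identified'.
```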

The reason for this rate of mis-identification is the data inputs that the algorithms undertaking the matching rely upon. It has been established that, on average, the datasets used to train the algorithms comprise approximately 80 per cent 'lighter skinned' subjects.Footnote 18 The issues with accuracy are therefore likely to be caused by the skewed ethnic representation in the datasets used to create and train the matching algorithms. Designers of the technology need to consider the racial representation in the datasets used to train facial recognition algorithms. Failing to rectify this issue, by not proactively taking steps to make FRT training datasets representative, could constitute a form of racism, whether intended or an oversight.Footnote 19

This is especially concerning given that ethnic minorities are already disproportionately scrutinised by law enforcement and over-represented in the criminal justice system. Increased error rates and mis-identification by facial recognition and other new technologies may compound this serious existing problem. This should be a focus for those building facial recognition systems – designing out the potential for racial discrimination by embedding racial equality in the data used to train the algorithms. Beyond this issue, any form of identification technology should not be relied upon in isolation, but only ever used in the context of other circumstantial evidence in an investigation. However, addressing the technology will only ever be part of the solution. As Damien Patrick Williams notes, ‘merely putting more Black faces in the training data will not change the fact that, at base, these systems themselves will be most often deployed within a framework of racialised and gendered carceral justice’.Footnote 20

6.4 Social and Structural Discrimination

Police attention is not equally applied across the population; racial minorities are subject to disproportionate criminal justice system intervention. The consequences of this are most clearly seen in the disproportionate over-representation of minority groups in prisons around the world. This context is a necessary consideration when thinking about FRT and discrimination because, as we have described, technology can potentially perpetuate racial inequality. The following part of this chapter moves on from the technical or technologically deterministic sources of bias and discrimination introduced above (i.e., those within the data or algorithms underpinning the technology) and adopts a broader structural and social view. It considers facial recognition as a socio-technical phenomenon and argues that there is a need to disaggregate the technical and social dimensions of discrimination, as well as to understand their interaction; to do so, it is necessary to clearly define and evaluate the use cases of technology vis-à-vis specific social and institutional contexts.

It has recently been argued that 'assisted' (rather than 'automated') facial recognition is a more suitable descriptor for the technology, given the way that it is used to inform and direct police activities and operations (rather than truly 'automate' them).Footnote 21 Pete Fussey and colleagues' research examines a range of organisational, system, and operator factors, including the processes of human–computer interaction, and demonstrates how technical and environmental influences impact on the operation of facial recognition systems deployed by police. Fussey argues that 'while practitioners shape and condition the application and potential of their technological instruments, these practices, forms of action and ways of thinking are simultaneously shaped and conditioned by these technologies and the affordances they bring'.Footnote 22 They point to 'operator decision-making activities involving discretionary and suspicious judgements over who should be stopped once a possible identification has been articulated by the algorithm' and conclude that 'technological capability is conditioned by police discretion, but police discretion itself is also contingent on the operational and technical environment'.Footnote 23 These are important considerations, because the roots of discrimination in policing do not stem entirely from the use of new technology in and of itself, but rather from the institutions of policing and the actions of police officers in the discretionary and discriminatory enforcement of the law.

Work by Simon Egbert and Monique Mann on discrimination and predictive policing technologies also draws attention to the socio-technical interactions between the inputs/outputs of predictive technologies and the street level decisions made by police.Footnote 24 Egbert and Mann argue that predictive policing is ‘a socio-technical assemblage, encompassing not only the technical predictions themselves, but also the enactment of the predictions on the street level police – which can also have serious ramifications including discrimination’.Footnote 25 Connecting this argument to the work by Fussey, we argue that like predictive policing technologies, facial recognition technologies operate within a wider socio-technical assemblage that is shaped by the technology and wider social and structural factors such as police discretion and long-standing discrimination by police and criminal justice institutions. We contend that more attention needs to be directed to the social and structural contexts of technologies to understand their discriminatory potential when examining discrimination in policing, including in the application and use of facial recognition technologies.

Even if FRT could be designed to be perfectly 'bias free' from a technological perspective, it may still be targeted specifically against racial minorities or deployed in contexts that control and oppress them. An example of the relevance of the context in which technology is deployed, and of its discriminatory potential and impacts, is the Smart City development in Darwin, Australia. Pat O'Malley and Gavin Smith examine this programme to improve public safety and public spaces, which involved the deployment of an extensive network of CCTV cameras.Footnote 26 While administrators assert that the video analytics do not include facial recognition software, there is nothing to prevent police from using facial recognition software on the CCTV footage collected. This is significant given the stark over-representation of Indigenous people in the criminal justice system in this part of Australia. For example, in 2016–2017, Indigenous people comprised 84 per cent of the prison population, and Indigenous youth comprised almost 95 per cent of those in youth detention, in addition to many other forms of disadvantage demonstrating socio-economic inequality and injustice.

O'Malley and Smith argue that the Smart City technologies deployed in Darwin are 'directed at the monitoring and control of [Indigenous] people in public places' and draw attention to the 'very real prospect of the system being used to sharpen a criminalising gaze on the predominantly marginalised and excluded bodies of the Indigenous people living in and around the city'.Footnote 27 The risk is that the surveillant capabilities of the Smart City in Darwin will create negative and disproportionate impacts for Indigenous people, not only because they are already the focus of a racialised criminal justice system, but also by virtue of their daily presence in public spaces in Darwin – a presence connected to social factors including unemployment and homelessness, themselves consequences of Australia's colonial past and the dispossession of Indigenous people from their lands. O'Malley and Smith conclude that 'the impacts of Smart City programmes on crime control cannot be read off in a technocratically deterministic fashion … but must be situated and analysed in specific contexts' and that the 'enduring legacies of colonialism have done much to shape the nature and implications of Smart Cities projects'.Footnote 28

This demonstrates the importance of a focus on social, political, and historical context when thinking about how technology might be ‘biased’ or ‘discriminatory’, and the need to understand the specific use cases of policing technologies, including but not limited to FRT. Even if technologically ‘bias free’ forms of facial recognition were indeed available, we could assume that they will be deployed in ways that are not ‘neutral’ and, rather, would operate to further marginalise, discriminate against, and control certain groups, especially those that are already the most marginalised and oppressed. This is pertinent given critiques by Sara Yates that ‘the narrative that [FRTs] are problematic only due to their lack of transparency and inaccuracy is faulty’.Footnote 29 Yates argues that ‘if these tools are allowed to be used by law enforcement, whether they have been reformed to address the accuracy and transparency issues … they will still be used disproportionally against marginalized groups and people of colour…’.Footnote 30 A focus on addressing discrimination in FRT through only technologically deterministic approaches will not remedy broader historical social injustices and harm done by police institutions and the criminal justice system, nor will banning or outlawing facial recognition. As Yates acknowledges, ‘the greatest harm from these systems does not come from these tools themselves, but instead from the unjust institutions that use them’.Footnote 31 While calling for bans on FRT may be intuitively appealing, they will not resolve institutional and systemic racism and injustices perpetrated by such institutions.

The task must be first to address these fundamental injustices, or they will recur in the guise of objective technology.Footnote 32 There is a need to disaggregate the technical and social dimensions of bias and discrimination and to better understand the specific use cases of technology within specific institutional and social contexts. It is necessary to understand these various sources of bias and discrimination, for example those that arise from individuals (i.e., police/operator discretion), the way the system is designed and deployed (i.e., in public places that racial minorities tend to frequent), and the wider system objectives (i.e., the reason supporting the deployment of technology in that context). Analyses of the interactive effect of social and technological factors are required in order to evaluate whether the objectives and applications of certain technologies in specific contexts are necessary and proportionate, while ensuring that individual rights are upheld (including privacy, anti-discrimination, and equality). Regulatory strategies to address this issue could be targeted according to the level of risk presented in specific contexts and specific use cases of technology. Moving forward, there is a need to consider, implement, and evaluate measures that aim to reduce discrimination and harm in existing systems (including the criminal justice system) and to design better systems. In doing so, the structural discrimination that is a feature of many systems must be addressed to ensure that existing inequalities are not perpetuated by new technologies such as facial recognition.

6.5 Conclusion

The use of FRT in the criminal justice system and its association with racial discrimination is an important issue for society, given the rapidly expanding application of the technology and the limited regulation in many jurisdictions. This technology may operate to further historical forms of oppression, discrimination, bias, and over-representation of minority groups in the criminal justice system. There is evidence that FRT may contribute to racial discrimination by operating with reduced accuracy, owing to the fact that the data used to train and inform the operation of the technology does not sufficiently represent minority groups, leading to inaccuracy and mis-identification. While this issue must be dealt with, addressing it in isolation will not be sufficient. The disproportionate focus on minorities is a far bigger problem in the criminal justice system, and the extent to which FRT perpetuates it is a subset of that larger, complex, and historically entrenched problem. Along with the data problem, this context must be considered by those operating the technology, and by law enforcement organisations and governments, who should not over-deploy it in areas where these minority groups are concentrated.

Rather than ban the technology altogether, we need to focus on structural discrimination and inequality – calling for a widespread ban, while it may be appealing to some, is not going to be productive in the long term, nor is it realistic. While there are data-based issues here that can be addressed, this step alone will not be sufficient, and there is a need to address the social issues if we are to achieve meaningful change. Technology is not the problem, nor is it the solution. In conclusion, there are two perspectives to take account of: a data perspective and a social perspective. Although they are inter-related, they need to be disaggregated, and their socio-technical interaction better understood. First, we can see that when technology is based on datasets skewed towards white populations, it does not function as accurately on minorities. Second, technology may further existing bias and racism inherent in the individuals and organisations deploying and operating it, and in the inequality within the criminal justice system and society more broadly. We need to ensure that datasets are racially representative (the technical issue), and that the technology is not over-used in areas where racial minorities are concentrated (the social issue).

7 Power and Protest Facial Recognition and Public Space Surveillance

Monika Zalnieriute

Political freedom, generally speaking, means the right to be a participator in government, or it means nothing.Footnote 1

7.1 Introduction

In 2018, police in India reported that the roll out of facial recognition technology (FRT) across New Delhi enabled their identification of 3,000 missing children in just four days.Footnote 2 In the United Kingdom, South Wales Police used live FRT to scan over 50,000 faces at various mass gatherings between January and August 2019 and identified nine individuals for arrest.Footnote 3 The Chinese Sharp Eyes programme, ‘omnipresent, fully networked, always working and fully controllable’, can take less than seven minutes to identify and facilitate apprehension of an individual among a population of nearly 5 million people.Footnote 4 In Moscow, 105,000 FRT-enabled cameras have monitored and enforced COVID-19 self-isolation orders,Footnote 5 with at least 200 violators being identified.Footnote 6

As protest movements are gaining momentum across the world, with Extinction Rebellion, Black Lives Matter, and strong pro-democracy protests in Chile and Hong Kong taking centre stage, many governments – both in the West and in the East – have significantly increased the surveillance capacity of the public sphere. City streets and squares, stations, and airports across the globe, as well as social media and online platforms, have become equipped with sophisticated surveillance tools, enabled and made legal through a myriad of complex and ever-expanding 'emergency' laws. Irrespective of whether these events and political strategies are framed as 'emergencies' – such as the 'war on terror', with its invisible geopolitical enemies, after 9/11 – or whether they concern pro-democracy or anti-racism protests or COVID-19, states' resort to technology and increased surveillance as tools to control the masses and the population has been similar. Examples from varied countries – ranging from China, Russia, and India to the United States and the United Kingdom – tell us that recent technological advances have enabled authoritarian and democratic governments alike to build omnipresent biometric infrastructures that systematically monitor, surveil, predict, and regulate the behaviour of individual citizens, groups, or even entire populations. In this chapter, I focus on the chilling effect of FRT use in public spaces on the right to peaceful assembly and political protest. While technological tools have transformed protest movements widely, both amplifying and undermining them,Footnote 7 I focus here only on how protest movements have been tackled with FRT, with an emphasis on political protests and public spaces. Pointing to the absence of oversight and accountability mechanisms on government use of FRT, the chapter demonstrates how FRT has significantly strengthened state power. It draws attention to the crucial role of tech companies in assisting governments in public space surveillance and curtailing protests. I argue for hard human rights obligations to bind these companies and governments, to ensure that political movements and protests can flourish in the post-COVID-19 world.

7.2 Undermining Protest Movements with FRTs

Live automated FRT, rolled out in public spaces and cities across the world, is transforming modern policing in liberal democracies and authoritarian regimes alike. The technology augments traditional surveillance methods by detecting and comparing a person's eyes, nose, mouth, skin textures, and shadows to identify individuals.Footnote 8 Live automated facial recognition can instantaneously compare the facial biometric data in captured images against a pre-existing 'watchlist' and flag matches to police officers. Some FRT tools go further, purporting to classify people by gender or race, or to make predictions about their sexual orientation, emotions, and intent.
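The watchlist check just described can be illustrated with a short sketch. The following Python fragment is a hypothetical illustration only: the threshold, the similarity measure, and the assumption that faces have already been converted into template vectors are assumptions for the example, not a description of any deployed police system.

```python
# Hypothetical sketch of a live watchlist check (illustrative threshold and templates).
import numpy as np

ALERT_THRESHOLD = 0.75  # illustrative; real deployments tune this operationally


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def check_against_watchlist(live_template: np.ndarray,
                            watchlist: dict[str, np.ndarray]) -> list[tuple[str, float]]:
    """Compare a template extracted from a live camera frame against watchlist templates
    and return the entries similar enough to be flagged to an operator for review."""
    alerts = []
    for identity, stored_template in watchlist.items():
        score = cosine(live_template, stored_template)
        if score >= ALERT_THRESHOLD:
            alerts.append((identity, score))
    return sorted(alerts, key=lambda item: item[1], reverse=True)
```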

This FRT has been used to tackle protest movements globally. For example, software from the US company Geofeedia has been marketed to law enforcement 'as a tool to monitor activists and protestors',Footnote 9 incorporating FRT use with Twitter, Facebook, and Instagram databases.Footnote 10 Rasheed Shabazz, an activist and journalist, believes that his arrest near the Black Lives Matter protests in Oakland in 2014 was a result of the Geofeedia software.Footnote 11 This same software was also used to monitor civil unrest after the police killing of Freddie Gray and to link protesters with their social media profiles.Footnote 12 Similarly, in 2020, during the protests following the killing of George Floyd in Minneapolis, Minnesota, several people were arrested and charged after being identified through the use of FRT.Footnote 13 In another case, the Detroit Police Department used FRT to identify a Black Lives Matter protester who was arrested and charged with reckless driving and resisting arrest.

Similarly, FRT has been used in many other countries. For example, ‘habitual protesters’ in India are included in a dataset used to monitor large crowds,Footnote 14 which is composed of ‘miscreants who could raise slogans and banners’.Footnote 15 This database was used to identify dissidents at a prime ministerial rally in December 2019,Footnote 16 and also resulted in the detention of a ‘handful’ of individuals charged with violent crimes when it surveyed protests in New Delhi and Uttar Pradesh.Footnote 17 The Hong Kong police used FRT cameras to identify protesters and track their movements during the 2019 pro-democracy protests, which drew criticism from human rights advocates who argued that it violated the protesters’ right to privacy and could lead to their persecution.Footnote 18 In 2019–2020, FRT cameras were also used in Chile to monitor and identify protesters participating in demonstrations and civil unrest, known as the Estallido Social.Footnote 19 The cameras were installed in public areas, including train stations and street corners, by the Chilean government to track individuals who were suspected of participating in protests or other forms of civil disobedience. In the face of mounting criticism and protests against the use of this technology, the Chilean government announced that it would suspend the use of facial recognition cameras in public spaces in early 2020.

In all these cases, FRT allowed the authorities to quickly identify individuals who were wanted for questioning or arrest. The cameras were linked to a central database containing photos and personal information of individuals who were known to have participated in previous protests or other activities that the government deemed to be illegal. Such use of FRT cameras sparked controversy and concern among civil liberties groups and privacy advocates, who argued that the technology was being used to stifle dissent and violate the rights of protesters to peacefully assemble and express their opinions. Despite these concerns, governments typically defend FRT use by framing it as a necessary measure to maintain ‘public safety’ and order during a time of civil unrest.

In addition to such 'top-down' surveillance by public authorities in the USA, India, Hong Kong, and Chile, 'horizontal' modes of surveillance have become increasingly popular.Footnote 20 This involves partially outsourcing surveillance functions to individuals and/or tech companies. A vivid example of such outsourced surveillance was the 2020 Black Lives Matter protests in Dallas, during which the police department asked individuals on Twitter to send them videos from protests that showed 'illegal activity'.Footnote 21 A larger-scale example was seen in the aftermath of the 2010 Canadian Winter Olympics riots, in which closed-circuit television (CCTV) footage was used to identify offenders, and private individuals sent the Vancouver Police Department thousands of images and helped them scour social media.Footnote 22 Similarly, tech companies such as Facebook, Twitter, and Instagram have been crucial in the surveillance of protesters, as the widespread use of social media has made the monitoring of protest and dissident activities significantly easier.Footnote 23 For example, in 2014 and 2016, the US government obtained two patents that may facilitate its ability to use social media to predict when a protest will break out.Footnote 24

Protest movements in the USA, Hong Kong, Chile, and beyond have also operated in the shadow of the global COVID-19 pandemic, and together they have raised questions about unprecedented levels of government power and the expanding regime of mass surveillance in public spaces. The COVID-19 pandemic has given governments a further impetus to explore FRT's health-related uses – from monitoring compliance with quarantine or social-distancing requirements to tracking (in conjunction with other biometric technologies such as thermal scanning) those who are potentially infected. COVID-19 and the latest protests in Hong Kong, Chile, and the United States have redefined the boundaries of mass surveillance and biometric tracking globally, with irreversible implications for the future exercise of government power and surveillance.

7.3 Lack of Regulation and Dangers of FRT

Despite the increasing deployment of FRT in many city squares and streets across the globe, as many chapters in this book demonstrate, FRT use is not yet regulated. Law enforcement agencies around the world are experimenting with FRT at their own discretion and on an ad hoc basis, without appropriate legal frameworks to govern its use, sufficient oversight, or public awareness.Footnote 25 For example, there are currently no federal regulations in the United States governing the use of FRT by law enforcement.Footnote 26 In March 2019, two US senators introduced the Commercial Facial Recognition Privacy Act, intended to ban developers and providers of commercial FRT from collecting and sharing data for identifying or tracking consumers without their consent.Footnote 27 However, this only focussed on the commercial use of FRT. Similarly, in the EU, regulation of FRT has been very limited. In February 2020, a draft EU White Paper on Artificial Intelligence appeared to call for a discussion about a temporary five-year ban on facial recognition. However, the final draft of this paper removed mention of such a moratorium.Footnote 28

This lack of oversight of FRT use by public bodies can lead to abuses of power and violations of fundamental rights and civil liberties. As many chapters in this book demonstrate, FRT use can result in discriminatory treatment and undermining of privacy and due process, as well as other concerns. Indeed, the dangers of FRT are gradually being recognised by courts. For example, law enforcement’s use of automated FRT was successfully challenged in 2020 in R (on the application of Bridges) v. Chief Constable of South Wales Police ([2020] EWCA Civ 1058) (‘Bridges’) case, where the Court of Appeal held that the use of automated FRT by South Wales Police was unlawful because it was not ‘in accordance with law’ for the purposes of Article 8 of the European Convention on Human Rights.Footnote 29 In addition, South Wales Police had failed to carry out a proper Data Protection Impact Assessment and had not complied with the public sector equality duty.Footnote 30 While Bridges is the first successful legal challenge to police use of automated FRT worldwide, fresh lawsuits brought by non-governmental organisations in the United States and France are still pending, and they might provide different judicial responses to regulation of police FRT use.Footnote 31

Some jurisdictions have already regulated and limited FRT use by law enforcement. In the United States, for example, the cities of San Francisco and Berkeley have banned local agencies (including transport authorities and law enforcement) from using FRT,Footnote 32 some municipalities in Massachusetts have banned government use of facial recognition data in their communities,Footnote 33 and some US states (California, New Hampshire, and Oregon) have instituted bans on facial recognition technology used in conjunction with police body cameras.Footnote 34 The United Kingdom also has an Automated Facial Recognition Technology (Moratorium and Review) Bill,Footnote 35 proposing a moratorium on the use of the technology, yet its future remains uncertain.

Therefore, not only civil rights advocates, but also courts and politicians widely recognise that FRT can easily be misused by law enforcement to target certain groups of people, such as political activists or marginalised communities, and that such targeting often leads to further discrimination and injustice. Importantly, the growing prevalence of surveillance through FRT has a chilling effect on public discourse by threatening the right to protest anonymously – a notion fundamental to protest movements.

7.4 Protest Movements, Public Space, and the Importance of Anonymity

Protest movements are collective actions undertaken by a group of people who come together to express their dissent, raise awareness, and advocate for change around a particular issue or cause.Footnote 36 These movements can take many different forms, ranging from peaceful demonstrations, marches, and rallies to civil disobedience, strikes, and other forms of non-violent resistance. Protest movements can emerge in response to a wide range of social, economic, political, and environmental issues. Some of the most common causes of protest movements include discrimination, injustice, corruption, inequality, environmental degradation, and war. Contemporary examples are Occupy Wall Street (2011), Arab Spring (began in 2010), Black Lives Matter (began in 2013), and the Hong Kong pro-democracy movement (began in 2019). Protest movements can also be motivated by a desire to promote social change, challenge existing power structures, and hold those in authority accountable for their actions.

Throughout history, protest movements have played a critical role in advancing social progress and promoting human rights. They have helped to raise awareness of important issues, mobilise public opinion, and influence policy and legislative changes. Examples of protest movements from history include the civil rights movement of the 1950s–1960s, the women's suffrage movement of the late nineteenth and early twentieth centuries, and the Vietnam anti-war protests of the 1960s. Today, protest movements continue to be an important tool for promoting social change and advocating for a more just and equitable world.

Protest movements require a tangible and accessible location, typically in the streets and other public places. Public space has always been central to social movements and political protests, as a practical place for citizens to gather and as a symbolic place connected to wider democratic values. It provides a physical location where individuals can come together to voice their dissent, express their grievances, and demand change.Footnote 37 By occupying public spaces, protesters can create a visible and disruptive presence that draws attention to their cause and serves as a symbolic representation of their struggle.

Public spaces, such as city squares, parks, and streets, are often central to the social and cultural life of a community, and their use for protests can be a powerful statement of the collective will of a group of people. Thus, public spaces are the 'ultimate area of societal interaction' and occupy a symbolic place in society given their accessibility, openness, and, according to Jens Kremer, inherent freedom.Footnote 38 When protesters occupy public spaces, they are asserting their right to participate in the democratic process and to be heard by those in power. In interrupting these public spaces, protesters 'touch upon the very core of the current structure and organization of social systems, namely the balance of power, rule of law and democratic governance'.Footnote 39 Such disruption calls into question the ability of government authorities to maintain the integrity of these shared spaces,Footnote 40 thus challenging existing power structures.

Historically, protesters have taken the right to protest anonymously largely for granted – a right that is now becoming increasingly fragile. The right to anonymity has been fundamental to social movements and protesting, as these events require the population to feel confident and safe in their ability to gather in public spaces and manifest their disagreement with the status quo. This is impossible if they fear surveillance tools can be weaponised against them to suppress and punish their dissent. The sense of safety necessary to facilitate robust democratic participation stems from an understanding that an individual, in the act of demonstrating, is expressing something larger than themselves by joining in a collective. They thus sacrifice their individual voice for the benefit of social disruption, and in return are granted the key right that protesters have enjoyed for centuries: the right of anonymity. The anonymity earned by protesters in public spaces has been increasingly challenged and eroded by surveillance infrastructure.

While the relative anonymity of the individual during protest gatherings has typically 'neutralised' the effect of surveillance, protesters have been increasingly subject to 'counter-neutralization technologies' that require them to take more active steps to circumvent identification.Footnote 41 Of course, protest movements have long devised resistance strategies against surveillance. For example, protesters can break a surveillance system by flooding it, rendering surveillance inoperable or impractical.Footnote 42 Typical examples include crude forms of neutralisation such as disabling phone lines, wearing masks, and destroying cameras. For example, Hong Kong protesters in 2019 used lasers and broke smart lampposts that they believed contained FRT software.Footnote 43 With FRT, protesters are given two choices: first, they can wear a mask and risk arrest and the collection of biometric information in the form of criminal records, or second, they can go without a mask and risk the collection of biometric data through FRTs.Footnote 44

Surveillance technologies directed at political protests have become the norm in many countries, and scholars have theorised about the chilling effect of surveillance on dissent.Footnote 45 Monitoring, tracking, and detaining individual protesters for their actions in public places significantly shifts the power balance between the state and individuals. Surveillance of political protests undermines the individual as a ‘free autonomous citizen’ and negatively impacts democracy and the rule of law.Footnote 46 Protesters become disempowered in relation to their bodies and biological information,Footnote 47 which is threatening, in one sense, to discrete individuals,Footnote 48 and in another serves to discretise protesters, breaking down their collective image. Pervasive surveillance tools can be understood as disciplinary, as they are able to threaten and realise retribution against individual protesters who would otherwise have been lost in a sea of voices; in another sense, they are indicative of a ‘controlled’ society in which surveillance is ubiquitous.Footnote 49

7.5 Protecting Protesters from Abuse: Potential Ways to Regulate

Given the danger that FRT surveillance in public spaces poses to political protests, to the rights to peaceful assembly and association, and to wider democratic participation, legislatures should regulate or entirely ban the use of FRT in policing and law enforcement. Regulation of FRT use is a necessary step to eliminate the chilling effect of FRT on political expression and freedom of assembly.

The chilling effect on freedom of speech and assembly is even stronger in some jurisdictions, such as Australia. This is because, unlike many other jurisdictions discussed in this book, Australia has no human rights protection enshrined in its Constitution and no national human rights legislation.Footnote 50 Only three out of eight Australian states and territories have state-level human rights Acts. For this reason, in its recent report, the Australian Human Rights Commission has urged Australia’s federal, state, and territory governments to enact legislation regulating FRT.Footnote 51

What are the ways to protect protesters and protest movements from abuse by public authorities? Recent literature on AI and accountability has recommended several avenues, including regulation,Footnote 52 the development of technical methods of explanation,Footnote 53 the promotion of auditing mechanisms,Footnote 54 and the creation of standards of algorithmic accountability in public and private bodies.Footnote 55 Law, of course, should also play a role.

7.5.1 Privacy Law

Privacy law provides one avenue to regulate police use of FRT in public spaces.

Scholars have long argued that public activities deserve privacy protections, and that the simple act of being ‘in public’ does not negate an individual’s expectation of privacy.Footnote 56 However, as Jake Goldenfein suggests in Chapter 5 of this book, privacy law has severe limitations when regulating the use of FRT. For example, the US Fourth Amendment, under current Supreme Court jurisprudence, has been viewed as an unlikely protection against FRT for two reasons: first, the jurisprudence has typically ignored pre-investigatory surveillance,Footnote 57 and secondly, it has failed to encompass identification based on information already exposed to the public.Footnote 58 In relation to the Fourth Amendment, Douglas Fretty questions whether Constitutional protection will require the Supreme Court to confirm the ‘right of the people to be secure’ or simply display how insufficient the Fourth Amendment is in safeguarding individuals beyond the scope of their private spaces.Footnote 59 Drawing on recent US Supreme Court cases concerning GPS tracking and other technologies under the Fourth Amendment,Footnote 60 Andrew Ferguson suggests that the Supreme Court is cognisant of the need to adapt Fourth Amendment jurisprudence to emerging technologies.Footnote 61 He identifies six key principles for adapting the Fourth Amendment to deal with modern concerns. Firstly, technological searches cannot be viewed as equivalent to pre-technological police investigatory modes.Footnote 62 Secondly, there must be a general presumption against the large-scale aggregation of data.Footnote 63 Thirdly, there must be a general presumption against the long-term storage and ongoing use of aggregated data.Footnote 64 Fourthly, the ability to track and trace an individual must be a relevant factor in considering the application of the Fourth Amendment.Footnote 65 Fifthly, the concept of anti-arbitrariness must be transposed to a digital setting to act against automated technologies that do not require probable cause.Footnote 66 Sixthly, monitoring technologies must not be so over-reaching as to grossly permeate civil society.Footnote 67 Even so, the Fourth Amendment offers limited support in protecting protest movements from FRT surveillance in public spaces.

7.5.2 Discrimination Law

Could discrimination law provide a better avenue to regulate police use of FRT in public spaces? The emerging consensus in an increasing body of academic research is that FRTs are not ‘neutral’,Footnote 68 but instead reinforce historical inequalities.Footnote 69 For example, studies have shown that FRT performs poorly in relation to women, children, and individuals with darker skin tones.Footnote 70

This bias and discrimination can be introduced into FRT software in three technical ways: first, through the machine learning process, based on the training data set and system design; secondly, through technical bias incidental to the simplification necessary to translate reality into code; and thirdly, through emergent bias that arises from users’ interaction with specific populations.Footnote 71 Because the training data for FRTs in the law enforcement context comes from photos relating to past criminal activity,Footnote 72 minority groups and people of colour are over-represented in FRT training systems.Footnote 73 In some jurisdictions, such as the United States, people of colour are at a much higher risk of being pulled over,Footnote 74 searched,Footnote 75 arrested,Footnote 76 incarcerated,Footnote 77 and wrongfully convicted than white people.Footnote 78 Police use of FRT to repress political protests can therefore produce a large number of false positives, as the technology is already operating in a highly discriminatory environment, and this can impact the freedom of assembly and association of those already marginalised and discriminated against. However, discrimination law alone offers limited support in protecting protest movements from FRT surveillance in public spaces.
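
To make the first of these mechanisms concrete, the sketch below illustrates, in Python, how a single global matching threshold can translate an unbalanced training distribution into different false match rates for different demographic groups. It is a minimal, hypothetical illustration: the scores and group labels are invented, and the function stands in for whatever scoring model a real FRT system would use.

    # Illustrative sketch only: measuring per-group false match rates in a
    # face-matching system. All scores and group labels are hypothetical.
    from collections import defaultdict

    def false_match_rate(trials, threshold):
        """trials: list of (score, same_person, group) comparison results."""
        errors, counts = defaultdict(int), defaultdict(int)
        for score, same_person, group in trials:
            if not same_person:              # impostor comparison
                counts[group] += 1
                if score >= threshold:       # system wrongly declares a match
                    errors[group] += 1
        return {g: errors[g] / counts[g] for g in counts}

    # Hypothetical impostor scores: the same threshold yields unequal error rates.
    trials = [
        (0.91, False, "group_a"), (0.42, False, "group_a"), (0.37, False, "group_a"),
        (0.93, False, "group_b"), (0.88, False, "group_b"), (0.35, False, "group_b"),
    ]
    print(false_match_rate(trials, threshold=0.8))  # group_b's rate is twice group_a's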

7.5.3 Holding Private Companies Accountable for FRT Surveillance of Public Spaces

Private actors also play a role in the growing surveillance of public spaces that stifles protest movements and political participation worldwide, and we need to insist on holding them accountable. Private companies, such as telecommunications service providers and tech giants, have been co-operating with law enforcement agencies and developing the technical infrastructure needed for public space surveillance. This includes police purchasing and using privately developed FRT or image databases, both of which often happen in secret. For example, IBM, one of the world’s oldest (and largest) technology companies,Footnote 79 has recently collaborated with repressive governments by providing FRT software. Best known for its personal computers, the company has in recent years shifted its focus to AI and FRT.Footnote 80 A detailed report by The Intercept published in March 2019 revealed that in 2012 IBM provided police forces in the Philippines with video surveillance technology, which was subsequently used to perpetuate President Duterte’s war on drugs through extra-judicial killings.Footnote 81 The brutal and excessive crime suppression tactics of the Davao police were well known to local and international human rights organisations.Footnote 82

At the time, IBM defended the deal with the Philippines, saying it ‘was intended for legitimate public safety activities’,Footnote 83 and claimed that it had ceased providing its technology to the Philippines in 2012. However, it took at least several years for IBM to stop providing general-purpose FRT software to law enforcement (e.g., IBM mentioned its Face Capture technology in public disclosures in 2013 and 2014 related to its Davao City project).Footnote 84 The company’s practice of providing authoritarian regimes with technological infrastructure is not new and dates back to the 1930s, when IBM supplied the Nazi Party with unique punch-card technology that was used to run the regime’s censuses and surveys to identify and target Jewish people.Footnote 85

Because of such close (and often secretive) collaboration between private tech companies and governments, we need to think of new ways to hold the companies providing the FRT infrastructure accountable – not just in aspirational language, but in law. Currently, in many countries, the application of human rights laws is limited to government bodies only (anti-discrimination and data protection laws being the primary exceptions with horizontal application).Footnote 86 The same is true of international human rights law. This leaves private companies in a human rights gap. However, as I have argued in detail elsewhere, existing efforts focussing on voluntary ‘social and corporate responsibility’ and the ethical obligations of private tech companies are insufficient and incapable of tackling the challenges that these technologies pose to freedom of expression and association.Footnote 87 Moreover, many of those efforts have merely been ‘transparency washing’ – performatively promoting transparency and respect for human rights while acting in ways that undermine both.Footnote 88

The human rights gap is greater in some jurisdictions, such as Australia, which lacks a federal-level human rights framework and where governments often remain unaccountable for public space surveillance. We therefore need to demand change and accountability from governments, police, and tech companies. We should not continue to rely on the ‘goodwill’ of tech companies when they promise to ‘respect’ our right to protest and our freedom of association and assembly. We need hard legal obligations for private actors because of the significant role they play in expanding public space surveillance and its infrastructure. We need data protection and human rights laws that bind companies, to ensure that political movements and protests can flourish and that communities whose rights to peaceful assembly and association have been curtailed via FRT can access an effective remedy.

7.5.4 Outright Bans on Police Use of FRT

Of course, even with all the limits that law could place on it, police use of FRT in public spaces is problematic in itself, owing to the centrality of public space and anonymity to protest movements. It is thus not surprising that many scholars, activists, regulators, and politicians have turned to arguing for bans on FRT use. For example, US scholar Ferguson advocates a blanket ban on facial surveillance, a probable cause requirement for facial identification, a ban or a probable cause-plus standard for facial tracing, and limitations on facial verification at international borders, in addition to increased accountability for error and bias and greater transparency and fairness.Footnote 89

Proposals to ban FRT have also come from outside the academic realm, with informal resistance groups such as the developers of the website Fight for the Future running a project called Ban Facial Recognition, which operates an interactive map of where and how the government is using FRT around the United States.Footnote 90 Further, the United Kingdom’s Equality and Human Rights CommissionFootnote 91 and the Australian Human Rights CommissionFootnote 92 have recently called on governments to introduce a moratorium on the use of FRT in policing and law enforcement until legislation regulating the use of FRT and other biometric technology is formally introduced.

7.6 Conclusion

If government and law enforcement can resort to FRT without any restrictions or safeguards in place, the right to protest anonymously will be curtailed and political discourse in our democracies will be stifled. The High Court of Australia – Australia’s apex court – has emphasised the centrality of the right to protest to Australian democracy: besides casting their vote in elections, Australians have no other avenue through which to voice their political views.Footnote 93 Adapting Hannah Arendt’s famous words quoted at the beginning of this chapter, political freedom must enable a right to participate in government. And in many instances, the only way to do that, in addition to voting, is through political protest.

Before FRTs develop further and become even more invasive, it is imperative that this public surveillance infrastructure is limited. We need laws restraining the use of FRT in our public spaces, and we need hard legal obligations for those who develop such technologies and supply them to law enforcement. The reforms could start with an explicit ban (or at least a suspension) on police use of FRT in public spaces, pending independent scrutiny of the discriminatory impacts the technology may have on women and other protected groups.Footnote 94 These proposed changes are not drastic. In fact, they are a modest first step in the long journey ahead to push back against the escalating surveillance of the public sphere worldwide.

8 Faces of War: Russia’s Invasion of Ukraine and Military Use of Facial Recognition Technology

Agne Limante
8.1 Introduction

Shortly after Russia launched its large-scale military attack on Ukraine on 24 February 2022, Clearview AI (a US-based facial recognition company) announced that it had given its technology to the Ukrainian government to be used for military purposes in defending against the attack.Footnote 1 In mid-March 2022, it was reported that Ukraine’s Ministry of Defence had started using facial recognition technology (FRT).Footnote 2 In this way, simply and effectively, without protracted political debate or academic and civil society discussion, FRT was brought to a new, profitable market and joined the list of tools that can be employed for military purposes.

While the Russian war against Ukraine is not the first time that FRT has been used in a military setting, this conflict has brought the military use of the technology to a new level: FRT was offered openly to one of the sides at the outset of the war, promptly accepted by the Ukrainian authorities, and tested on the ground for a variety of objectives. Before 2022, there was only minimal evidence of FRT being employed for military purposes. One might recall that in 2019, Bellingcat, a Netherlands-based investigative journalism group specialising in fact-checking and open-source intelligence, used FRT to help identify a Russian man who had filmed the torture and killing of a prisoner in Syria,Footnote 3 or that in 2021, Clearview AI signed a contract with the Pentagon to explore putting its technology into augmented reality glasses.Footnote 4 It has also been reported that Israel conducts surveillance of Palestinians using a facial recognition program.Footnote 5 However, these cases provide evidence only of incidental use or of potential future application of FRT for military purposes.

This chapter discusses how FRT is employed by both sides in Russia’s war against Ukraine. In particular, it examines how Ukraine engages FRT in the country’s defence, testing the different possibilities this technology offers. It also acknowledges the use of FRT on the other side of the conflict, elaborating on how it is used in Russia to suppress society’s potential opposition to the war. The chapter focusses on the potential and risks of using FRT in a war situation. It discusses the advantages that FRT brings to both sides of the conflict and underlines the associated concerns.

8.2 FRT on the Battlefield: Ukraine

Ukraine began exploring the possibilities of military FRT use during the first month of the war. There was no time for elaborate learning or training: FRT was deployed directly on the battlefield, with creative thinking and a trial-and-error approach. As a result, the Ukraine war can be seen as a field trial for FRT in which, faced with the pressing need to defend its territory and people, the country drew on collective efforts to generate ideas for innovative uses of modern technologies, putting them into practice and testing what works well.

It should be acknowledged that the FRT developed by Clearview AI (perhaps the most famous and controversial facial recognition company) served Ukraine’s interests owing to its enormous database of facial images. The company has harvested billions of photos from social media platforms such as Facebook and Twitter, as well as from Russian social media sites such as VKontakte.Footnote 6 Such a method of database creation attracted wide criticism in peacetime,Footnote 7 but proved beneficial in war, enabling access to facial images of Russian citizens, including soldiers.

One might think that collecting facial images of military personnel, especially higher-ranking officers, from social networks would be a challenge, as such personnel are less likely to reveal their identity online. But this is not entirely true. While people who post frequently on social media can be identified more easily, facial recognition systems can also identify and verify those who have no social media accounts at all. It is enough that a family member, friend, co-worker, or fellow soldier posts a picture in which the person appears. The technology can even single out a face in a crowd photo and compare it with the face in question. Face recognition can also be used where only some members of a group (e.g., a military unit) have been identified, with the rest of the group then identified through content posted by any of the identified members. Even a randomly taken picture in which a person appears on the internet can provide helpful information enabling their gradual identification.Footnote 8
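
The mechanism described here is one-to-many matching against a gallery of pre-indexed images. The minimal sketch below (illustrative Python only; the probe embedding, gallery, and similarity threshold are assumptions rather than any vendor’s actual pipeline) shows the basic idea: a probe face embedding is compared against every gallery entry, and a match is returned only if the closest entry clears a threshold.

    # Minimal sketch of one-to-many identification. A probe embedding (a vector
    # produced by some face recognition model) is compared against a gallery of
    # labelled embeddings; below the threshold, the system should report no match.
    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def identify(probe_vec, gallery, threshold=0.6):
        """gallery: dict mapping identity label -> embedding vector."""
        best_id, best_score = None, -1.0
        for identity, vec in gallery.items():
            score = cosine(probe_vec, vec)
            if score > best_score:
                best_id, best_score = identity, score
        return (best_id, best_score) if best_score >= threshold else (None, best_score)

    # A face cropped from a group photo posted by a relative can still be matched,
    # because the gallery vector may come from any image in which the person appears.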

Several ways in which the Ukrainian authorities have employed FRT in their efforts to fight Russia are discussed here. The author notes that the list may not be exhaustive as, at the time of writing (September 2022), the war continues and part of the information remains undisclosed.

8.2.1 Identification of the Dead

Probably the first country in the world to use FRT for such a purpose, Ukraine made headlines by announcing that it was employing FRT to identify fallen Russian soldiers.Footnote 9 Much discussion has arisen regarding this controversial idea, its objectives, the ethical issues it raises, and the effects of a mismatch.

Identification of the dead is typically a task for forensic experts. Methods such as examining DNA, dental data, and physical appearance can be used to identify the deceased and have proven reliable. While in peacetime this information is usually available, in wartime experts may face limited data availability, both for nationals of the country in question and for soldiers or civilians of the enemy state. Obtaining ante-mortem samples of enemy fighters’ DNA or dental records is challenging, if not impossible, and in the majority of cases requires too much effort to be of value to a country at war.

In such a situation, FRT becomes a particularly handy tool, as all that is needed is to take a picture of a dead soldier and run it through the database. In the first fifty days of Russia’s invasion, Ukrainian officials are reported to have run more than 8,600 facial recognition searches on dead or captured Russian soldiers, identifying some of them.Footnote 10 Why was FRT used this way, especially towards the deceased? Ukraine developed an unprecedented strategy. After the bodies were identified, Ukrainian officials (as well as civil activists) contacted the families of the deceased Russian soldiers to inform them about the death of their relative. As Ukraine’s Digital Transformation Minister acknowledged, this served two purposes. On the one hand, it could be perceived as a method of informing the families, providing them with information about their loved ones and allowing them to retrieve the bodies. On the other hand, it was seen as a tool for Ukrainians to counter Russian propaganda and the denial of the human costs of the war.Footnote 11 In other words, such use of FRT worked as a political counter-offensive and was one of Ukraine’s strategies for informing Russians, who had limited access to non-state-controlled information or were simply ignorant, about the war hostilities and the deaths of Russian soldiers.Footnote 12

This second objective of informing Russian families about the death of relatives fighting in Ukraine nevertheless proved challenging to accomplish. Again, Ukrainians had to develop their own model to fulfil this goal, as FRT had not been used in this way before and there was no experience to learn from. Theoretically, it would have been possible to send relatives only the information that their family member had died in the field. However, in the light of constant Russian claims that the Ukrainians were spreading ‘fakes’,Footnote 13 this would not have been enough – some evidence needed to be added. From the other perspective, accompanying information about the death of a soldier with pictures of his abandoned corpse or lifeless face, which allegedly was done from the Ukrainian side in several instances, might be interpreted as psychological violence towards the family members, or even psychological warfare. Instead of encouraging Russian mothers and fathers to start opposing the war, such a strategy risked producing the opposite result: anger and claims of disregard for human dignity and humiliation by the enemy.

8.2.2 Identification and Verification of the Identity of Persons of Interest

Another possible use of FRT in a war zone is identifying persons of interest who come into the view of military personnel or public authorities and verifying their identity. Such identification and verification might be employed in different contexts and serve various needs.

As public sources state, the Ukrainian government used FRT at checkpoints to help identify enemy suspects (Russian infiltrators or saboteurs, and covert Russian operatives posing as Ukrainian civilians).Footnote 14 At the time of writing, however, it is impossible to obtain data on the extent to which, and how effectively, FRT was employed at checkpoints. It can nevertheless be claimed that FRT has considerable potential in this regard, especially if specific persons are sought, although systematic use of FRT at checkpoints might be complicated during wartime owing to technical and time constraints.

FRT could also be (and likely was) employed when identifying and interviewing captured soldiers. This limits the ability of captured soldiers to deny their links with the army or present false or misleading information. It also allows additional psychological pressure to be put on an enemy soldier, who is well aware he has been identified.

It might also be tempting to publish a video interview with a captured enemy soldier, pairing it with his image (alone or with family members) retrieved from social media. As with the notification of families about killed Russian soldiers, this could be a strategy to encourage Russian society to oppose the war. In this regard, it should be taken into account that Article 13(2) of the Geneva Convention (III) prescribes that prisoners of war must be protected from insults and public curiosity, whether these take place at the same time or not. The International Committee of the Red Cross commentary (of 2020) on Article 13 of the Geneva Convention (III) underlines that the prohibition on exposing prisoners of war to ‘public curiosity’ also covers the disclosure of photographic and video images and recordings of interrogations in public communication channels, including the internet, as this practice could be humiliating and could jeopardise the safety of the prisoners’ families and of the prisoners themselves once they are released (para. 1624).Footnote 15 The Committee suggests that any materials that enable individual prisoners to be identified must normally be regarded as subjecting them to public curiosity and, therefore, may not be transmitted, published, or broadcast (if there is a compelling public interest in revealing the identity of a prisoner – for instance, owing to their seniority or because they are wanted by justice – then the materials may exceptionally be released, but only insofar as they respect the prisoner’s dignity) (para. 1627).

8.2.3 Combating Misinformation, Denial, and Propaganda

As noted earlier, one of the objectives of the Ukrainian authorities in using information retrieved by FRT is to combat misinformation, Russian citizens’ denial of the human costs of the war, and propaganda related to the war itself.

In Russia, information published by Ukraine or Western countries on dead and captured Russian soldiers, as well as war atrocities committed by Russian soldiers, is dealt with using a simple strategy: denying the information, raising doubts about its truthfulness and blaming the other side. Russian commentators often claim that the faces are not of Russian soldiers, that the situation is staged, or that actors are involved.Footnote 16

In fact, during wartime both sides may falsify information while simultaneously denying accurate information that damages their position. FRT, however, allows published material to be more precise and evidence-based, as faces can be linked to a name and surname, a place of residence, and photos taken from social media profiles. It also simplifies cross-checking information published by the other side, and is thus a tool in an information war.

An example of using FRT to verify public information concerned the sinking of the Russian warship Moskva in the Black Sea south of Ukraine. When the Russian state held a ceremony for the surviving sailors and officers who had been on the ship, many people wondered whether these were actual sailors from the Moskva. Bellingcat ran the pictures through the Russian facial recognition platform FindClone, which draws on images from Russian social media, and found that most of the men were indeed Russian sailors from Sevastopol.Footnote 17

8.2.4 Identification of War Criminals

Historically, photographic and other visual evidence has been used to prosecute war crimes, promote accountability, and raise public awareness of abuses.Footnote 18 FRT has great potential to improve such uses of visual material and to contribute to bringing those responsible to justice, as well as to identifying perpetrators who might otherwise be difficult to single out.

Ukraine does not deny that it uses facial recognition to identify Russian soldiers suspected of war crimes or caught on camera looting Ukrainian homes and storefronts. It acknowledges that it has deployed this technology to identify war criminals from the beginning of the conflict, and that it will continue to do so.Footnote 19 For instance, Ukraine’s Digital Transformation Minister (Mykhailo Fedorov) shared on Twitter and Instagram the name, hometown, and personal photo of a man who, according to him, was recorded shipping looted clothes from a Belarus post office to his home in Russia. The Ukrainian official added, ‘Our technology will find all of them’, presumably referring to FRT.Footnote 20 He also noted that ‘many killers have already been identified who terrorised civilians in Bucha and Irpen. In a short time, we will establish all the information about these people.’Footnote 21 The Chief Regional Prosecutor (Ruslan Kravchenko), in an interview with a news portal, also acknowledged the use of FRT to identify Russian soldiers suspected of criminal offences, giving the example of a Russian soldier who murdered a civilian and was identified by FRT, with the identification later confirmed by a witness.Footnote 22 Further identifications were reported later.Footnote 23

If FRT can successfully identify war criminals in Russia’s war against Ukraine, there is a slight hope that this could (at least to some extent) deter soldiers from committing war crimes in the future. One has to admit, though, that any false match may lead to wrongful accusations of war crimes (see Section 8.4.1). Therefore, FRT should be seen only as an investigative lead, not as definitive evidence. The risk of a false match can be minimised by performing additional analysis on the person concerned. Such further searches can prove particularly fruitful where context-related information can be found (e.g., videos and photos confirming the person was fighting in Ukraine, his statements and pictures, communication with relatives, and photos and articles on his previous military activity and visibility).Footnote 24

8.3 FRT in Russia: A Government’s Tool in Its Effort to Stifle Anti-War Protests

During the war against Ukraine, FRT in Russia has mainly, though not exclusively,Footnote 25 served a different purpose – to stop any anti-war protests.Footnote 26 While marches and mass rallies against Russia’s attack on Ukraine were taking place all over Europe, protests in Russia were sporadic and small-scale. In Moscow, a city of more than 12 million, the number of protesters never exceeded a few thousand. Nor were large numbers of protesters ever seen on the streets of other cities.

There are different reasons for this. On the one hand, the small number of Russians who openly oppose Russian aggression might be interpreted as confirming society’s overall support for the current government, prevailing approval of the policies being pursued, and agreement with the arguments put forward by the authorities as to the validity and necessity of the ‘special operation’ (the term used in Russia to refer to the attack on Ukraine). This support arguably stems from the strong influence of the national mass media, a general mentality, and the iconisation of Russia as a superpower and even ‘true-values protector’,Footnote 27 which must be respected and obeyed. On the other hand, owing to the mass arrests of protesters, those opposing the war see it as dangerous to protest.

From the very beginning of the invasion of Ukraine, the Russian authorities effectively stopped any anti-war protest efforts. In addition to prohibiting protests against the Russian military attack on Ukraine and making traditional street arrests, Russia employed FRT to track down and apprehend anti-war protesters. Analysis of online posts and social media reveals that Russian citizens are in no doubt that Big Brother is watching them, and that the FRT used by the authorities in public spaces will prevent them from remaining unidentified and simply being part of the crowd.

According to Human Rights Watch, the Russian authorities have been integrating public surveillance systems with FRT across the country and using these technologically advanced systems to identify and prosecute peaceful protesters since 2017.Footnote 28 The authorities do not deny this information and do not comment on the details of the extent of use, thus reinforcing the deterrent effect.Footnote 29 As early as 2017, it was announced on the official website of the Mayor of Moscow that more than 3,500 cameras had been connected to the Joint Data Storage and Processing Centre, including more than 1,600 cameras at the entrances of residential buildings, with many closed-circuit television cameras in the city also reportedly connected to a facial recognition system.Footnote 30 Additional cameras were installed in later years, and after the start of the war, surveillance was increased in the places where protests typically take place.Footnote 31 The collection of biometric data also continues to be strengthened. For instance, in May 2022, the Russian authorities demanded that the four largest state-owned banks hand over their clients’ biometrics to the government.Footnote 32 To ensure that biometric data is collected from practically the entire adult population, the law was amended in July 2022 to oblige banks and state agencies to enter their clients’ biometric data, including facial images and voice samples, into a central biometrics database. This measure, which does not require clients’ consent to share data with the government, came into force in March 2023.Footnote 33

Human Rights Watch stated in its submission to the Human Rights Committee on Russia on 10 February 2022 that the use of FRT, including for police purposes, is not regulated by Russian law. It highlighted that such use in the context of peaceful protests contradicts the principle of legal certainty, interfering with the rights to liberty and security through methods that are not adequately supervised or provided for by law. It also violates the rights to privacy and peaceful assembly and is used in a discriminatory manner on the basis of political opinion. Human Rights Watch suggested that the Committee urge the Russian government to end the use of facial recognition during peaceful protests and to ensure that all government use of facial recognition is strictly regulated by law.Footnote 34 Russia is unlikely to implement this proposal in the near future, as FRT has proved to be a powerful tool for controlling protests against the country’s policies.

8.4 Concerns Associated with the Use of FRT in Wartime

While there is no doubt that FRT brings many advantages to both sides of the conflict, it also raises a number of concerns. The main ones are the possibility of false identification, the misuse of FRT, and the problems associated with its continued use after the war.

8.4.1 False Identification

One of the significant risks linked to the use of FRT is false identification. FRT can produce inaccurate results; moreover, the accuracy of FRT systems depends on the input data. This can be forgotten under wartime stress, especially given the limited training of those using the technology.

As to the recognition of dead soldiers, there is little research on FRT effectiveness in the case of deceased or disfigured bodies. One recent study recognised that decomposition of a person’s face could reduce the software’s accuracy, though, according to the researchers, the overall results were promising.Footnote 35 Similar findings were presented in academic research on automatic face recognition for humanitarian emergencies, which concluded that automatic recognition methods based on deep learning strategies could be effectively adopted as support tools for forensic identification.Footnote 36 However, it has to be taken into account that the quality of photos obtained in a war scenario can differ substantially from those taken under optimal conditions. Poor image quality, poor lighting, post-mortem changes to faces, and injuries could lead to false positives or false negatives.

When Ukraine started running FRT on dead Russian soldiers, it received a great deal of criticism. This largely revolved around the idea that sending pictures of dead bodies to relatives could constitute psychological violence, and that any false-positive recognition of a dead soldier and subsequent notification of his family would cause the family distress. One could argue, however, that this second point might be slightly exaggerated. For such a notification to cause distress, the misidentified family must actually have a son at war in Ukraine, and even then they would have the option of trying to contact him or his brothers in arms to verify the information received. Furthermore, one might expect that, at least currently, while FRT is taking its first steps as a military technology, its ability to identify fallen enemy soldiers remains considerably limited. The Ukrainian side tested the possibilities of identifying deceased soldiers and the impact of such identification on the Russian people; currently, however, Ukraine focusses its war efforts on eliminating as many enemy soldiers as possible, and recognition of the dead is not on the priority list.

More problematic would be a facial recognition mismatch in the war zone involving living persons, for instance when identifying the enemy. This could lead to the eventual prosecution (or even killing) of wrongly identified persons. FRT should therefore in no case become the sole tool determining a person’s fate, as a technical mistake could have fatal consequences. To avoid false positives when using FRT in a war context, it is particularly important to double-check a face recognition match against alternative intelligence and contextual information. While in post-war investigations of war crimes FRT will most likely be used as a complementary source of information, with its results double-checked, it is less realistic to assume such control in the fog of war.

8.4.2 Misuse of FRT

In an active war zone, it is difficult to guarantee only limited use of FRT or to enforce any restrictions on the use of the technology. It is a challenge to ensure that FRT is used only for the purposes for which it is designated, and only by authorised persons.

As FRT is a new technology in a war zone, with little legal regulation in place, it is tempting to experiment with it and its possibilities. This allows an almost uncontrolled proliferation of FRT uses. If the deployment of FRT on the battlefield proves effective for identifying enemy soldiers, this may lead to its incorporation into systems that use automated decision-making to direct lethal force. This possibility only increases as the accuracy of FRT improves. The more precise the tool actually is, the more likely it is to be incorporated into autonomous weapons systems that can be turned not only on invading armies, but also on, for instance, political opponents or members of specific ethnic groups.Footnote 37

Another issue is the possibility of unauthorised use of FRT. One strategy to mitigate this risk is to create a clearly established system that verifies the identity and authority of any official using the technology. The administrator of a body using FRT should be able to see who is conducting searches and what those searches are. Moreover, the system should be designed so that access can be revoked remotely, disabling use in cases of abuse.Footnote 38 However, legal instruments need to be developed, and personnel trained, to implement such controls.
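
As a rough illustration of these controls, the sketch below (minimal Python under assumed names, not a description of any deployed system) wraps an FRT search interface so that every query is tied to an authenticated official, recorded in an audit log visible to the administrator, and blocked once that official’s access has been revoked remotely.

    # Illustrative sketch only: audit logging and remote revocation around an
    # FRT search function. Names and structure are assumptions for illustration.
    import datetime

    class FRTAccessController:
        def __init__(self):
            self.authorised = set()   # officials currently permitted to search
            self.audit_log = []       # (timestamp, official, query, outcome)

        def grant(self, official_id):
            self.authorised.add(official_id)

        def revoke(self, official_id):
            # Remote revocation: immediately disables further searches by this user.
            self.authorised.discard(official_id)

        def search(self, official_id, query_description, run_search):
            timestamp = datetime.datetime.now(datetime.timezone.utc)
            if official_id not in self.authorised:
                self.audit_log.append((timestamp, official_id, query_description, "DENIED"))
                raise PermissionError("official is not authorised to run FRT searches")
            self.audit_log.append((timestamp, official_id, query_description, "ALLOWED"))
            return run_search()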

8.4.3 Continued Use after the War

Modern technologies share one common feature: once people learn to use them, the technologies spread and their usage intensifies. This phenomenon has been observed in Ukraine: in September 2022, it was announced that additional cameras with facial recognition were already planned for installation in public places in the Kyiv region, with the declared goal of countering sabotage and intelligence groups and searching for saboteurs.Footnote 39

It is likely that authorities who become comfortable with using FRT during wartime will be willing to continue its use after the war ends. It is thus difficult to expect that FRT introduced during a war will not endure into peacetime. At that point, the issues of privacy, discrimination, and the other concerns explored in this volume, which attract less attention in wartime, become important.

The subsequent use of information gathered during the conflict, including images of battlefield victims, raises another set of concerns. In the case of the Russia–Ukraine military conflict, the Clearview AI database has been considerably enriched with pictures of deceased persons and of persons who were interviewed or simply checked at a checkpoint during wartime. While the legality of harvesting pictures from social networks raises doubts, even more ethical and legal issues arise with regard to images taken of dead persons or of persons who were not informed about the collection of their data (which it would be naive to expect in a war zone). When FRT is employed in the EU for border control and migration, the sensitive data required for facial identification is collected by public agencies, the data subject is informed, and EU law strictly regulates the process. Naturally, the use of FRT in a war zone differs materially in this regard.

8.5 Concluding Remarks: FRT – A New Tool of Military Technology?

Any war is a tragedy for human society, but it also acts as a spur to the further development of technologies. This is evident in the current Russian war against Ukraine. The conflict represents a coming of age for a number of advanced technologies, from drones and commercial satellites to loitering munitions and FRT. As Lauren Kahn notes, the Ukrainian steppes have been transformed into a proving ground for next-generation technologies and military innovations in this war.Footnote 40

From the perspective of FRT companies, the contribution of FRT to the Ukrainian war effort, in terms of both proving ground and use, yields valuable data and, at the same time, visibility and even advertising. It is also difficult to deny that offering FRT to Ukraine during the war was a shrewd choice, because it gave the technology a chance to prove its worth. Companies whose products are being deployed in this conflict (both in Ukraine and in Russia) are likely, in a short time, to become defence contractors offering their FRT as military technology.

In this way, the broad and effective deployment of modern tools by Ukraine in its efforts to stop Russia’s military invasion is bringing emerging technologies into the military mainstream. In an era in which technology reigns, it comes as no surprise that artificial intelligence is being employed for military purposes. FRT is advancing and spreading, and it can safely be projected that FRT will be a well-established military – and propaganda – tool within a decade or two. While one can argue that bringing FRT to war is dangerous and should be avoided because of the associated risks, it would be naive to believe that this will not happen. What can be done, though, is to develop international standards on the accepted use of FRT for military purposes – work that awaits the international community in the near future.

Footnotes

1 Facial Recognition Technology Key Issues and Emerging Concerns

1 A. Knutson, ‘Saving face’ (2021) 10(1) IP Theory, www.repository.law.indiana.edu/ipt/vol10/iss1/2/.

2 W. Hartzog and E. Selinger, ‘Facial recognition is the perfect tool for oppression’ (2 August 2018), Medium, https://medium.com/s/story/facial-recognition-is-the-perfect-tool-for-oppression-bc2a08f0fe66.

3 L. Stark, ‘Facial recognition is the plutonium of AI’ (2019) 25 (3) XRDS – Crossroads, The ACM Magazine for Students 50–55, https://doi.org/10.1145/3313129.

4 Cited in A. Hern, ‘Human rights group urges New York to ban police use of facial recognition’ (25 January 2021), The Guardian, www.theguardian.com/technology/2021/jan/25/new-york-facial-recognition-technology-police.

5 K. Hill and G. Dance, ‘Clearview’s facial recognition app is identifying child victims of abuse’ (7 February 2020), New York Times.

6 Stark, ‘Facial recognition’, p. 55.

7 M. Johnson, ‘Face recognition in healthcare: Key use cases’ (21 January 2022), Visage Technologies, https://visagetechnologies.com/face-recognition-in-healthcare/.

8 M. Andrejevic, Automated Media (Routledge, 2020).

9 N. Kelly, ‘Facial recognition smartwatches to be used to monitor foreign offenders in UK’ (5 August 2022), The Guardian, www.theguardian.com/politics/2022/aug/05/facial-recognition-smartwatches-to-be-used-to-monitor-foreign-offenders-in-uk.

10 K. Crawford, Atlas of AI (Yale University Press, 2021).

11 See J. Buolamwini and T. Gebru, ‘Gender shades’, Conference on Fairness, Accountability and Transparency (January 2018), Proceedings of Machine Learning Research, pp. 77–91.

12 See S. Magnet, When Biometrics Fail (Duke University Press, 2011).

13 R. Benjamin, Race after Technology (Polity, 2019).

14 Ibid., p. 65.

15 C. Gilliard and D. Golumbia, ‘Luxury surveillance’ (6 July 2021), Real Life, https://reallifemag.com/luxury-surveillance/.

16 D. Raji, Post, Twitter (24 April 2021), https://twitter.com/rajiinio/status/1385935151981420557.

17 A. Albright, ‘If you give a judge a risk score’ (29 May 2019), www.law.harvard.edu/programs/olin_center/Prizes/2019-1.pdf.

18 Benjamin, Race after Technology.

19 Ibid., p. 125.

20 B. Han, The Transparency Society (Stanford University Press, 2015), p. vii.

21 R. Garland, ‘Trust in democratic government in a post-truth age’ in R. Garland (ed.), Government Communications and the Crisis of Trust (Palgrave Macmillan, 2021), pp. 155–169.

22 D. McQuillan, Resisting AI (University of Bristol Press, 2022), p. 35.

23 R. Calo, ‘The scale and the reactor’ (9 April 2022), SSRN, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4079851, p. 3.

2 Facial Recognition Technologies 101 Technical Insights

1 OECD, Artificial Intelligence in Society (OECD Publishing, 2019), https://doi.org/10.1787/eedfee77-en.

2 G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, ‘Labeled faces in the wild: A database for studying face recognition in unconstrained environments’ (2007), Technical Report 07–49, University of Massachusetts, Amherst.

3 I. Kemelmacher-Shlizerman, S. M. Seitz, D. Miller, and E. Brossard, ‘The megaface benchmark: 1 million faces for recognition at scale’, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA (27–30 June 2016), pp. 4873–4882, doi: 10.1109/CVPR.2016.527.

4 Y. Guo, L. Zhang, Y. Hu, X. He, and J. Gao, ‘Ms-celeb-1m: A dataset and benchmark for large-scale face recognition’ in B. Leibe, J. Matas, and M. Welling (eds.), European Conference on Computer Vision (Springer, 2016), pp. 87–102.

5 P. Jackson, Introduction to Expert Systems (3rd ed., Addison-Wesley, 1998), p. 2.

6 Stuart J. Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (4th ed., Pearson, 2021); T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning (Springer, 2009).

7 A. J. Goldstein, L. D. Harmon, and A. B. Lesk, ‘Identification of human faces’ (1971) 59(5) Proceedings of the IEEE 748–760, https://doi.org/10.1109/PROC.1971.8254.

8 M. Turk and A. Pentland, ‘Eigenfaces for recognition’ (1991) 3(1) Journal of Cognitive Neuroscience 71–86.

9 Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, ‘Deepface: Closing the gap to human-level performance in face verification’ (2014), Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1701–1708.

10 Turk and Pentland, ‘Eigenfaces for recognition’.

11 C. Liu and H. Wechsler, ‘Gabor feature based classification using the enhanced fisher linear discriminant model for face recognition’ (2002) 11(4) IEEE Transactions on Image Processing 467–476.

12 A. Krizhevsky, I. Sutskever, and G. E. Hinton, ‘Imagenet classification with deep convolutional neural networks’ in NIPS’12: Proceedings of the 25th International Conference on Neural Information Processing Systems, vol. 1 (Curran Associates Inc., 2012), pp. 1097–1105.

13 Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, ‘Deepface: Closing the gap to human-level performance in face verification’ (2014), Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1701–1708.

14 Hannes Schulz and Sven Behnke, ‘Deep learning’ (2012) 26 KI – Künstliche Intelligenz 357–363, https://doi.org/10.1007/s13218-012-0198-z.

15 P. J. Phillips, A. N. Yates, Y. Hu, C. A. Hahn, E. Noyes, K. Jackson, J. G. Cavazos, G. Jeckeln, R. Ranjan, S. Sankaranarayanan, J-C. Chen, C. D. Castillo, R. Chellappa, D. White, and A. J. O’Toole, ‘Face recognition accuracy of forensic examiners, superrecognizers, and face recognition algorithms’ (2018) 115(24) Proceedings of the National Academy of Sciences 6171–6176, https://doi.org/10.1073/pnas.1721355115.

16 Yaoyao Zhong and Weihong Deng, ‘Towards transferable adversarial attack against deep face recognition’ (2020) 16 IEEE Transactions on Information Forensics and Security 1452–1466, doi: 10.1109/TIFS.2020.3036801.

3 FRT in ‘Bloom’ Beyond Single Origin Narratives

The author would like to acknowledge Stephanie Dick, Ausma Bernotaite, and Kalervo Gulson for their generous insight on the different case studies that comprise this chapter. Thanks also to Monika Zalnieriute and Rita Matulionyte for their editorial guidance, and finally, Kathryn Henne at the Australian National University School of Regulation and Global Governance for her continued support.

1 Trevor Paglen, ‘Bloom’, Pace Gallery (10 September–4 November 2020).

2 Paglen obtained the dataset and visual materials on Bledsoe’s experiments from correspondence with Harvard-trained historian of technology Stephanie Dick and her research at the Briscoe Center for American History, University of Texas. See Woodrow Wilson Bledsoe and Helen Chan, ‘A man-machine facial recognition system – Some preliminary results’ (1965), Technical Report PRI A 19, Panoramic Research, Inc., Palo Alto, California.

3 Paglen states that ‘sophisticated machine learning algorithms that classify and categorise people are incentivized by assumptions of a stable relationship between the image and its measurement – but there are usually bad politics attached [and a misapprehension that these are human ways of seeing and of comprehending]’; Camille Sojit Pejcha, ‘Trevor Paglen wants you to stop seeing like a human’ (15 September 2020), Document, www.documentjournal.com/2020/09/trevor-paglen-wants-you-to-stop-seeing-like-a-human/.

4 David Gershgorn, ‘The data that transformed AI research – And possibly the world’ (26 July 2017), Quartz, https://qz.com/1034972/the-data-that-changed-the-direction-of-ai-research-and-possibly-the-world. Noted by Fei-Fei Li (the creator of the machine learning database Image-Net). For use of machine learning systems on Image Net, see E. Denton, A. Hanna, R. Amironesei, A. Smart, and H. Nicole, ‘On the genealogy of machine learning datasets: A critical history of ImageNet’ (2021) 8(2) Big Data & Society, https://doi.org/10.1177/20539517211035955.

5 Nikki Stevens and Os Keyes, ‘Seeing infrastructure: Race, facial recognition and the politics of data’ (2021) 35(4–5) Cultural Studies 833–853, at 833.

6 Kelly A. Gates, ‘Introduction: Experimenting with the face’ in Our Biometric Future: Facial Recognition Technology and the Culture of Surveillance (New York University Press, 2011), p. 5.

7 ‘Expert Panel: AI, Facial Recognition Technology and Law Enforcement’, hosted by AUSCL Australasian Society for Computers + Law (5 May 2022).

8 Katelyn Ringrose, ‘Law enforcement’s pairing of facial recognition technology with body-worn cameras escalates privacy concerns’ (2019) 105 Virginia Law Review Online 57–66.

9 Dennis Desmond, ‘Bunnings, Kmart and The Good Guys say they use facial recognition for “loss prevention”. An expert explains what it might mean for you’ (15 June 2022), The Conversation, https://theconversation.com/bunnings-kmart-and-the-good-guys-say-they-use-facial-recognition-for-loss-prevention-an-expert-explains-what-it-might-mean-for-you-185126.

10 Pete Fussey and Daragh Murray, ‘Independent report on the London Metropolitan Police service’s trial of live facial recognition technology’ (July 2019), University of Essex Repository, https://repository.essex.ac.uk/24946/1/London-Met-Police-Trial-of-Facial-Recognition-Tech-Report-2.pdf; see also Davide Castelvecchi, ‘Is facial recognition too biased to be let loose?’ (2020) 587 Nature 347–349.

11 NEC, ‘A brief history of facial recognition’ (12 May 2020), NEC Publications and Media, www.nec.co.nz/market-leadership/publications-media/a-brief-history-of-facial-recognition/.

12 China’s Zhejiang Dahua Technology Co Ltd shipped 1,500 cameras to Amazon in a deal valued at close to $10 million – see Krystal Hu and Jeffrey Dastin, ‘Exclusive: Amazon turns to Chinese firm on U.S. blacklist to meet thermal camera needs’ (4 April 2020), Reuters, www.reuters.com/article/ushealth-coronavirus-amazon-com-cameras/exclusive-amazon-turns-to-chinese-firm-on-u-s-blacklist-tomeet-thermal-camera-needs-idUSKBN22B1AL?il=0. For the black-listing of Dahua, see US Department of Commerce, ‘U.S. Department of Commerce adds 28 Chinese organisations to its entity list’, Office of Public Affairs, Press Release (7 October 2019), https://2017-2021.commerce.gov/news/press-releases/2019/10/us-department-commerce-adds-28-chinese-organizations-its-entity-list.html

13 This is needed as a corrective to those who focus uncritically on such things as ‘the computer and its social impacts but then fail to look behind technical things to notice the social circumstances of their development, deployment, and use’. Langdon Winner, ‘Do artifacts have politics?’ (1980) 109(1) Daedalus 121–136, at 122.

14 Lucas D. Introna and David Wood, ‘Picturing algorithmic surveillance: The politics of facial recognition systems’ (2004) 2(2/3) Surveillance & Society 177–198; Lucas D. Introna, ‘Disclosive ethics and information technology: Disclosing facial recognition systems’ (2005) 7(2) Ethics and Information Technology 75–86; Lucas D. Introna and Helen Nissenbaum, Facial Recognition Technology: A Survey of Policy and Implementation Issues (Center for Catastrophe Preparedness and Response, New York University, 2010), pp. 1–60.

15 Mark Andrejevic and Neil Selwyn, Facial Recognition (John Wiley & Sons, 2022).

16 Luke Stark, ‘Facial recognition is the plutonium of AI’ (2019) 25(3) XRDS: Crossroads, The ACM Magazine for Students 50–55; Richard Van Noorden, ‘The ethical questions that haunt facial-recognition research’ (2020) 587 Nature 354–358; Joy Buolamwini and Timnit Gebru, ‘Gender shades: Intersectional accuracy disparities in commercial gender classification’ (2018) 81 Proceedings of the 1st Conference on Fairness, Accountability and Transparency, in Proceedings of Machine Learning Research 77–91; Jacqueline Cavazos, Jonathon Phillips, Carlos Castillo, and Alice O’Toole, ‘Accuracy comparison across face recognition algorithms: Where are we on measuring race bias?’ (2019) 3(1) IEEE Transactions on Biometrics, Behavior, and Identity Science 101–111; Morgan Scheuerman, Kandrea Wade, Caitlin Lustig, and Jed R. Brubaker, ‘How we’ve taught algorithms to see identity: Constructing race and gender in image databases for facial analysis’ (2020) 4(CSCW1) Proceedings of the ACM on Human-Computer Interaction 1–35.

17 A main debate is whether this process should be considered a ‘diffusion’ from an established centre, such as Bledsoe’s laboratory, or a more globalised network of exchanges. This changes the way these systems can be understood, explained, and regulated. Decentred histories give attention to members of other classes, such as the experiences of women, the exploitation of Indigenous groups, and non-humans including animals. They include histories from parts of the world outside the United States and Europe. See Eden Medina, ‘Forensic identification in the aftermath of human rights crimes in Chile: A decentered computer history’ (2018) 59(4) Technology and Culture S100–S133; Erik Van der Vleuten, ‘Toward a transnational history of technology: Meanings, promises, pitfalls’ (2008) 49(4) Technology and Culture 974–994.

18 Ben Rhodes, Kenneth Laughery, James Bargainer, James Townes, and George Batten, Jr, ‘Final report on phase one of the project “A man-computer system for solution of the mug file problem”’ (26 August 1976), prepared for the Department of Justice, Law Enforcement Assistance Administration, National Institute of Law Enforcement and Criminal Justice, under Grant 74-NI-99-0023 G.

19 Jeffrey Silbert, ‘The world’s first computerized criminal-justice information-sharing system, the New York State Identification and Intelligence System (NYSIIS)’ (1970) 8(2) Criminology 107–128.

20 Stephanie Dick, ‘The standard head’ in Gerardo Con Diaz and Jeffrey Yost (eds.), Just Code! (Johns Hopkins University Press, 2024).

21 See A. Jay Goldstein, Leon D. Harmon, and Ann B. Lesk, ‘Identification of human faces’ (1971) 59(5) Proceedings of the IEEE 748–760; also Takeo Kanade, ‘Picture processing by computer complex and recognition of human faces’ (1973), PhD thesis, Kyoto University; and finally, the development of Principal Component Analysis – a compression of facial data that allowed for faster computer comparisons to be made (crucial to automation). Lawrence Sirovich and Michael Kirby, ‘Low-dimensional procedure for the characterization of human faces’ (1987) 4(3) JOSA A 519–524.

22 In other words, original facial-recognition software was built from images of prisoners repurposed by the US government without their consent. Trevor Paglen produced another artistic work on this: ‘They Took the Faces from the Accused and the Dead …(SD18)’, 2020, the artist and Altman Siegel, San Francisco. For how these databases are constructed and configured, see Craig Watson and Patricia Flanagan, ‘NIST special database 18: Mugshot identification database’ (April 2016), Information Technology Laboratory, National Institute of Standards and Technology, www.nist.gov/system/files/documents/2021/12/06/readme_sd18.pdf.

23 By producing a tape that could be fed to another, more powerful computer, the distance between specific points on the face then became a ‘coded definition of that face’. Dick, ‘The standard head’.

24 For a biographical narrative of Bledsoe’s efforts with Panoramic Research see Shaun Raviv, ‘The secret history of facial recognition’ (21 January 2020), Wired, www.wired.com/story/secret-history-facial-recognition/.

25 As Aradau and Blanke argue, controlling error in these systems requires repeated measurements that often converge towards ‘the average’. This becomes the ‘standard’ benchmark with which to measure and render individuals uniquely identifiable. Claudia Aradau and Tobias Blanke, ‘Algorithmic surveillance and the political life of error’ (2021) 2(1) Journal for the History of Knowledge 1–13, at 5.

26 From the vast literature on Bertillon, refer to Jonathan Finn, Capturing the Criminal Image: From Mug Shot to Surveillance Society (University of Minnesota Press, 2009); Keith Breckenridge, Biometric State: The Global Politics of Identification and Surveillance in South Africa, 1850 to the Present (Cambridge University Press, 2014).

27 This also included the first automated fingerprint system for the FBI, building contactless scanners, and the launch of electronic ID (eID) in the United States in 2017. See IDEMIA, ‘Innovation wall: A history of expertise’ (2022), www.idemia.com/wp-content/uploads/2021/01/idemia-history-of-expertise.pdf; and see FindBiometrics, ‘IDEMIA’s Matt Thompson on the reality of mobile ID and “Identity on the Edge”’ (4 May 2021), Interview at Find Biometrics: Global Identity Management, https://findbiometrics.com/interview-idemia-matt-thompson-mobile-id-identity-on-the-edge-705059/.

28 For an analysis of ‘smart photography’ and facial recognition see Sarah Kember, ‘Face recognition and the emergence of smart photography’ (2014) 13(2) Journal of Visual Culture 182–199. The use of digital photography also raises the question of ‘how can the photographic image continue to “guarantee” the existence of reality in what it shows when pixel by pixel manipulation allows a seamless modification?’ Scott McQuire, ‘Digital photography and the operational archive’ in Sean Cubitt, Daniel Palmer, and Nathaniel Tkacz (eds.), Digital Light (Open Humanities Press, 2015), chapter 6 (pp. 122–143), at p. 142.

29 Clare Garvie, ‘Garbage in, garbage out: Face recognition on flawed data’ (16 May 2019), Georgetown Law, Center on Privacy & Technology, www.flawedfacedata.com/.

30 Clare Garvie, Alvaro Bedoya, and Jonathan Frankle, ‘The perpetual line-up: Unregulated police face recognition in America’ (18 October 2016), Georgetown Law, Center on Privacy & Technology, www.perpetuallineup.org.

31 Oscar H. Gandy, ‘Statistical surveillance: Remote sensing in the digital age’ in Kevin Haggerty, Kirstie Ball, and David Lyon (eds.), Routledge Handbook of Surveillance Studies (Taylor & Francis, 2012), pp. 125–132.

32 The approach used a statistical process to break down human faces into principal components, and these became ‘standardised ingredients’ known as eigenfaces. The experiment was constrained by environmental factors, but it created significant interest in automated face recognition. M. Turk and A. Pentland, ‘Eigenfaces for recognition’ (1991) 3(1) Journal of Cognitive Neuroscience 71–86.
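The eigenface pipeline described in this note can be sketched in a few lines of code: principal component analysis compresses face images into a small set of statistical ‘ingredients’, and identification reduces to nearest-neighbour matching in that compressed space. The following is a minimal, illustrative sketch only; the use of scikit-learn and the Olivetti faces dataset is an assumption for demonstration, not a reconstruction of Turk and Pentland’s original implementation.

# Minimal eigenfaces sketch (illustrative assumptions: scikit-learn, Olivetti faces).
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

faces = fetch_olivetti_faces()  # 400 greyscale images of 40 people
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, stratify=faces.target, random_state=0)

# Learn 50 'eigenfaces' (principal components) from the training images.
pca = PCA(n_components=50, whiten=True).fit(X_train)

# Identification = nearest neighbour in the compressed eigenface space.
clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X_train), y_train)
print('identification accuracy:', clf.score(pca.transform(X_test), y_test))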

33 At the Super Bowl signs advised fans that they were under video surveillance. The system identified nineteen people – all petty criminals. No one was detained or questioned because Facefinder was an experiment. See Vicky Chachere, ‘Biometrics used to detect criminals at Super Bowl’ (13 February 2001), ABC News, https://abcnews.go.com/Technology/story?id=98871&page=1.

34 Patrick J. Grother, Mei L. Ngan, and Kayee K. Hanaoka, ‘Ongoing Face Recognition Vendor Test (FRVT) Part 2: Identification’ (November 2018), NIST Interagency/Internal Report (NISTIR), National Institute of Standards and Technology, Gaithersburg, MD, https://doi.org/10.6028/NIST.IR.8238. The primary dataset comprises 26.6 million reasonably well-controlled live portrait photos of 12.3 million individuals. Three smaller datasets contain more unconstrained photos: 3.2 million webcam images, 200,000 side-view images, and 2.5 million photojournalism and amateur photographer photos. These datasets are sequestered at NIST, meaning that developers do not have access to them for training or testing.

35 Neurotechnology, ‘Neurotechnology and TCS selected by UIDAI to provide biometric de-duplication and authentication for India’s Aadhaar ID program’, Neurotechnology Press Release (22 March 2021), www.neurotechnology.com/press_release_india_uidai_aadhaar_id.html.

36 For references on Aadhaar, see Bidisha Chaudhuri and Lion König, ‘The Aadhaar scheme: A cornerstone of a new citizenship regime in India?’ (2018) 26(2) Contemporary South Asia 127–142; Amiya Bhatia and Jacqueline Bhabha, ‘India’s Aadhaar scheme and the promise of inclusive social protection’ (2017) 45(1) Oxford Development Studies 64–79; Kalyani Menon Sen, ‘Aadhaar: Wrong number, or Big Brother calling’ (2015) 11(1) Socio-Legal Review 85–108.

37 See Keith Breckenridge, Biometric State: The Global Politics of Identification and Surveillance in South Africa, 1850 to the Present (Cambridge University Press, 2014). Chapter 3 (pp. 90–114), titled ‘Gandhi’s biometric entanglement: Fingerprints, satyagraha and the global politics of Hind Swaraj’, perfectly captures the complexity of questions about biometrics and the mobility of their use.

38 Indian Statistical Institute, ‘Father of Indian statistics: Prof. Prasanta Chandra Mahalanobis’ (2020), Google Arts and Culture, https://artsandculture.google.com/exhibit/father-of-indian-statistics-prof-prasanta-chandra-mahalanobis%C2%A0/0AISK23-669lLA.

39 Prasanta Chandra Mahalanobis, ‘Statistics as a key technology’ (1965) 19(2) The American Statistician 43–46; and refer to Poornima Paidipaty, ‘Testing measures: Decolonization and economic power in 1960s India’ (2020) 52(3) History of Political Economy 473–497.

40 Somesh Dasgupta, ‘The evolution of the D statistic of Mahalanobis’ (1993) 55(3) Sankhyā: The Indian Journal of Statistics, Series A (1961–2002) 442–459; Prasanta Chandra Mahalanobis, ‘On the generalized distance in statistics’ (1936) 12 Proceedings of the National Institute of Science India 49–55.

41 Simon Michael Taylor, Kalervo N. Gulson, and Duncan McDuie-Ra, ‘Artificial intelligence from colonial India: Race, statistics, and facial recognition in the Global South’ (2021) 48(3) Science, Technology, & Human Values, https://doi.org/10.1177/01622439211060839.

42 Dasgupta, ‘The evolution of the D statistic’, p. 448.

43 Prasanta Chandra Mahalanobis, ‘A new photographic apparatus for recording profiles of living persons’ (1933) 20 Proceedings of the Twentieth Indian Science Congress, Patna, Secondary Anthropology, 413.

44 Projit Bihari Mukharji, ‘Profiling the profiloscope: Facialization of race technologies and the rise of biometric nationalism in inter-war British India’ (2015) 31(4) History and Technology 376–396, at 392.

45 This applies whether for connectionist approaches, such as neural networks or deep learning; statistical approaches, such as hidden Markov models; biometric probes with template feature matching; or geometric approaches to frontal face recognition, such as eigenface images or geometrical feature matching.

46 Ada Lovelace Institute, ‘Beyond face value: Public attitudes to facial recognition technology’ (September 2019), Nuffield Foundation, Ada Lovelace Institute, London, p. 5, www.adalovelaceinstitute.org/wp-content/uploads/2019/09/Public-attitudes-to-facial-recognition-technology_v.FINAL_.pdf.

47 Enrico Vezzetti and Federica Marcolin, Similarity Measures for Face Recognition (Bentham Science, 2015).

48 The Mahalanobis distance function is ubiquitous owing to its algorithmic and biometric efficacy for structuring unknown datasets, its acceptability and incorporability into different decision systems, and the efficiency with which it can be weighted to produce accurate results. See P. M. Roth, M. Hirzer, M. Köstinger, C. Beleznai, and H. Bischof, ‘Mahalanobis distance learning for person re-identification’ in S. Gong, M. Cristani, S. Yan, and C. C. Loy (eds.), Person Re-Identification (Springer, 2014), pp. 247–267.
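As a rough illustration of what this note describes, the Mahalanobis distance weights a comparison by the covariance of the underlying data, unlike plain Euclidean distance. The minimal sketch below compares a probe feature vector against a small synthetic gallery; the NumPy/SciPy calls and the random ‘gallery’ are illustrative assumptions, not any production biometric pipeline.

# Minimal Mahalanobis-distance sketch (synthetic data; illustrative only).
import numpy as np
from scipy.spatial.distance import mahalanobis

rng = np.random.default_rng(0)
gallery = rng.normal(size=(200, 5))              # 200 enrolled feature vectors, 5 features each
probe = gallery[42] + 0.1 * rng.normal(size=5)   # a noisy re-capture of entry 42

# The inverse covariance of the gallery weights each feature by how much it (co)varies.
cov_inv = np.linalg.inv(np.cov(gallery, rowvar=False))
distances = [mahalanobis(probe, enrolled, cov_inv) for enrolled in gallery]
print('closest gallery entry:', int(np.argmin(distances)))   # expected: 42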

49 Machine learning tools often reuse elements that lie far afield from the scientific laboratories, statistical research institutes, and engineering settings in which they first took shape. See also Ariana Dongus, ‘Galton’s utopia – Data accumulation in biometric capitalism’ (2019) 5 Spheres: Journal for Digital Cultures 1–16, at 11, http://spheres-journal.org/galtons-utopia-data-accumulation-in-biometric-capitalism/.

50 Kate Crawford and Trevor Paglen, ‘Excavating AI: The politics of images in machine learning training sets’ (2021) 36(4) AI & Society 1105–1116.

51 Shiming Xiang, Feiping Nie, and Changshui Zhang, ‘Learning a Mahalanobis distance metric for data clustering and classification’ (2008) 41(12) Pattern Recognition 3600–3612.

52 Meredith Whittaker, Kate Crawford, Roel Dobbe, Genevieve Fried, Elizabeth Kaziunas, Varoon Mathur, Sarah Myers West, Rashida Richardson, Jason Schultz, and Oscar Schwartz, ‘AI Now Report 2018’ (2018), AI Now Institute.

53 Cavazos et al., ‘Accuracy comparison across face recognition algorithms’; Clare Garvie, ‘Face recognition in US investigations: A forensic without the science’ (5 August 2020), Webinar, UNSW Grand Challenges, online presentation, UNSW Sydney; Scheuerman et al., ‘How we’ve taught algorithms to see identity’; Stark, ‘Facial recognition is the plutonium of AI’.

54 Alexander Monea and Jeremy Packer, ‘Media genealogy and the politics of archaeology’ (2016) 10 International Journal of Communication 3141–3159, at 3144.

55 Gates, ‘Introduction’, p. 11.

56 Caroline Compton, Fleur E. Johns, Lyria Bennett Moses, Monika Zalnieriute, Guy S. Goodwin-Gill, and Jane McAdam, ‘Submission to the UNHCR’s Global Virtual Summit on Digital Identity for Refugees “Envisioning a Digital Identity Ecosystem in Support of the Global Compact on Refugees”’ (1 January 2019), UNSW Law Research Paper No. 19–31, https://ssrn.com/abstract=3380116 or http://dx.doi.org/10.2139/ssrn.3380116.

57 UNHCR, ‘From ProGres to PRIMES’, Information Sheet 2018 (March 2018), www.unhcr.org/blogs/wp-content/uploads/sites/48/2018/03/2018-03-16-PRIMES-Flyer.pdf.

58 Simon Michael Taylor, ‘Species ex machina: “the crush” of animal data in AI’ (2023) 8 BJHS Themes 155–169.

59 Fleur Johns, ‘Data, detection, and the redistribution of the sensible in international law’ (2017) 111(1) American Journal of International Law 57–103.

60 A. Lodinová, ‘Application of biometrics as a means of refugee registration: Focusing on UNHCR’s strategy’ (2016) 2(2) Development, Environment and Foresight 91–100.

61 Ibid., p. 59.

62 Fleur Johns, ‘Global governance through the pairing of list and algorithm’ (2016) 34(1) Environment and Planning D: Society and Space 126–149.

63 Avi Marciano, ‘The politics of biometric standards: The case of Israel biometric project’ (2019) 28(1) Science as Culture 98–119.

64 In September 2019, four researchers wrote to the publisher Wiley to ‘respectfully ask’ that it immediately retract a scientific paper. The study, published in 2018, had trained algorithms to distinguish faces of Uyghur people, a predominantly Muslim minority ethnic group in China, from those of Korean and Tibetan ethnicity. C. Wang, Q. Zhang, W. Liu, Y. Liu, and L. Miao, ‘Facial feature discovery for ethnicity recognition’ (2018) 9(1) Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery Article ID e1278.

65 Danielle Cave, Samantha Hoffman, Alex Joske, Fergus Ryan, and Elise Thomas, ‘Mapping China’s technology giants’ (18 April 2019), ASPI Report No. 15, www.aspi.org.au/report/mapping-chinas-tech-giants.

66 Raviv, ‘The secret history of facial recognition’.

67 Ausma Bernot, ‘Transnational state-corporate symbiosis of public security: China’s exports of surveillance technologies’ (2022) 11(2) International Journal for Crime, Justice and Social Democracy 159–173.

68 Yan Luo and Rui Guo, ‘Facial recognition in China: Current status, comparative approach and the road ahead’ (2021) 25(2) University of Pennsylvania Journal of Law and Social Change 153.

69 This includes clarifying the informational materials used to train law enforcement personnel in using and maintaining FRT systems, including manual facial comparison, mobile device uses, and other FRT hardware. Garvie, Bedoya, and Frankle, ‘The perpetual line-up’.

70 See Simon Michael Taylor, ‘Species ex machina: “the crush” of animal data in AI’ (2023) 8 BJHS Themes 155–169; Ali Shojaeipour, Greg Falzon, Paul Kwan, Nooshin Hadavi, Frances C. Cowley, and David Paul, ‘Automated muzzle detection and biometric identification via few-shot deep transfer learning of mixed breed cattle’ (2021) 11(11) Agronomy 2365, https://doi.org/10.3390/agronomy11112365; and Ali Ismail Awad, ‘From classical methods to animal biometrics: A review on cattle identification and tracking’ (2016) 123 Computers and Electronics in Agriculture 423–435.

71 For animal facial recognition biometrics see Yue Lu, Xiaofu He, Ying Wen, and Patrick Wang, ‘A new cow identification system based on iris analysis and recognition’ (2014) 6(1) International Journal of Biometrics 18–32.

72 For regulatory gaps in agricultural data and privacy law, see Annie Guest, ‘Are Big Ag Tech companies harvesting farmers’ confidential data?’ (18 February 2022), ABC News, Landline, www.abc.net.au/news/2022-02-19/agriculture-data-protection/100840436; also Kelly Bronson and Phoebe Sengers, ‘Big Tech meets Big Ag: Diversifying epistemologies of data and power’ (2022) 31(1) Science as Culture 1–14; and Leanne Wiseman, Jay Sanderson, Airong Zhang, and Emma Jakku, ‘Farmers and their data: An examination of farmers’ reluctance to share their data through the lens of the laws impacting smart farming’ (2019) 90–91 NJAS – Wageningen Journal of Life Sciences 100301.

73 Mark Maguire, ‘The birth of biometric security’ (2009) 25 Anthropology Today 9–14. This is also because of what has worked in the past – building on successful statistical classifications, image categorisation, and probability.

74 SDC was called the first software company. It began as a systems engineering group for an air-defence system at RAND in April 1955 – the same year that ‘artificial intelligence’ was defined as a term in a Dartmouth Conference proposal. Within a few months, RAND’s System Development Division had over 500 employees developing software computing applications. For information retrieval and database management systems, see Jules I. Schwartz, ‘Oral history interview with Jules I. Schwartz’ (7 April 1989), Center for the History of Information Processing, Charles Babbage Institute, retrieved from the University of Minnesota Digital Conservancy, https://hdl.handle.net/11299/107628.

75 SDC stressed that it was imperative to get into the computer-design phase as quickly as possible. Its main fear was that if NYSIIS waited too long to get started, it might not develop a computer system at all. The administrative management of New York State offered a strong rebuttal, insisting that a feasibility report and an exhaustive systems analysis be completed first. In the end, SDC went along with this decision. See Ross Gallati, ‘Identification and intelligence systems for administration of justice’, in Cornog et al. (eds.), EDP Systems in Public Management (Rand McNally, 1968), pp. 161–162; also Silbert (1970), ‘The world’s first computerized criminal-justice information-sharing system’, p. 116.

76 Building Block One involved fingerprints and the computer’s ability to search and summarise case histories; the second stage was to develop image recognition on mug-shot databases.

77 B. G. Schumaker, Computer Dynamics in Public Administration (Spartan Books, 1967).

78 Crawford and Calo consider this ‘a blind spot in AI’ and advocate for analyses at a systems level that consider the history of the data and algorithms being used and that engage with the social impacts produced at every stage – dataset conception, technology design, use-case deployment, and nation-state regulation. Kate Crawford and Ryan Calo, ‘There is a blind spot in AI research’ (2016) 538 Nature 311–313.

79 New Zealand Police first contacted Clearview in January 2020 and later set up a trial of the software; however, the high-tech crime unit handling the technology appears not to have sought the necessary clearance before using it. Mackenzie Smith, ‘Police trialled facial recognition tech without clearance’ (13 May 2020), Radio New Zealand, www.rnz.co.nz/news/national/416483/police-trialled-facial-recognition-tech-without-clearance. This resulted in New Zealand Police commissioning a retrospective feasibility and social impacts study, owing to a pace of technological change that has outstripped law and regulation. See Nessa Lynch and Andrew Chen, ‘Facial recognition technology: Considerations for use in policing’ (November 2021), Report commissioned by the New Zealand Police, www.police.govt.nz/sites/default/files/publications/facial-recognition-technology-considerations-for-usepolicing.pdf.

80 For example, IDEMIA systems have been deployed in different cultural settings with problematic results. IDEMIA supplied the biometric capture kits to the Kenyan government in 2018–2019 for its controversial national digital ID scheme, commonly known as Huduma Namba (‘service number’). Data Rights filed a case before the Paris tribunal accusing IDEMIA of failing to adequately address human rights issues. See Frank Hersey, ‘NGOs sue IDEMIA for failing to consider human rights risks in Kenyan digital ID’ (29 July 2022), BiometricUpdate.com, www.biometricupdate.com/202207/ngos-sue-idemia-for-failing-to-consider-human-rights-risks-in-kenyan-digital-id.

81 See Manasi Sakpal, ‘How to use facial recognition technology ethically and responsibly’ (15 December 2021), Gartner Insights, www.gartner.com/smarterwithgartner/how-to-use-facial-recognition-technology-responsibly-and-ethically; and also, Nicholas Davis, Lauren Perry, and Edward Santow, ‘Facial recognition technology: Towards a model law’ (2022), Human Technology Institute, The University of Technology, Sydney.

4 Transparency of Facial Recognition Technology and Trade Secrets

This chapter is a result of the project ‘Government Use of Facial Recognition Technologies: Legal Challenges and Solutions’ (FaceAI), funded by the Research Council of Lithuania (LMTLT), agreement number S-MIP-21-38.

1 Paul Bischoff, ‘Facial recognition technology (FRT): 100 countries analyzed’ (8 June 2021), Comparitech, www.comparitech.com/blog/vpn-privacy/facial-recognition-statistics/.

2 See, e.g., NSW Ombudsman, ‘The new machinery of government: Using machine technology in administrative decision-making’ (29 November 2021), State of New South Wales, www.ombo.nsw.gov.au/Find-a-publication/publications/reports/state-and-local-government/the-new-machinery-of-government-using-machine-technology-in-administrative-decision-making; European Ombudsman, ‘Report on the meeting between European Ombudsman and European Commission representatives’ (19 November 2021), www.ombudsman.europa.eu/en/doc/inspection-report/en/149338.

3 See, e.g., Access Now, ‘Europe’s approach to artificial intelligence: How AI strategy is evolving’ (December 2020), Report Snapshot, www.accessnow.org/cms/assets/uploads/2020/12/Report-Snapshot-Europes-approach-to-AI-How-AI-strategy-is-evolving-1.pdf, p. 3.

4 Regulation 2016/679 of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L119/1, art 13.

5 Sandra Wachter, Brent Mittelstadt, and Chris Russell, ‘Counterfactual explanations without opening the black box: Automated decisions and the GDPR’ (2018) 31 Harvard Journal of Law & Technology 841–887, at 842, 878, 879 (‘a legally binding right to explanation does not exist in the GDPR’).

6 See European Commission, ‘Proposal for a Regulation of the European Parliament and of the Council: Laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts’ (21 April 2021) (hereafter draft EU AI Act), COM(2021) 206 final, articles 13(1), 20, 60, 62.

7 See, e.g., OECD, ‘Transparency and explainability (Principle 1.3)’ (2022), OECD AI Principles, https://oecd.ai/en/dashboards/ai-principles/P7.

8 See, e.g., Diogo V. Carvalho, Eduardo M. Pereira, and Jaime S. Cardoso, ‘Machine learning interpretability: A survey on methods and metrics’ (2019) 8(8) Electronics 832, 5–7; Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal, ‘Explaining explanations: An overview of interpretability of machine learning’ (3 February 2019), Working Paper, https://arxiv.org/abs/1806.00069.

9 Draft EU AI Act, para. 38.

10 Interview participant 2, NGO representative.

11 See, e.g., Upol Ehsan, Q. Vera Liao, Michael Muller, Mark O. Riedl, and Justin D. Weisz, ‘Expanding explainability: Towards social transparency in AI systems’ (May 2021), Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Article No. 82, pp. 1–19; Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador Garcia, Sergio Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, and Francisco Herrera, ‘Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI’ (2020) 58 (June) Information Fusion 82–115.

12 Jonathan R. Williford, Brandon B. May, and Jeffrey Byrne, ‘Explainable face recognition’, Proceedings of Computer Vision – ECCV: 16th European Conference, Glasgow, UK (23–28 August 2020), Part XI, pp. 248–263; Wojciech Samek, Grégoire Montavon, Andrea Vedaldi, Lars Kai Hansen, and Klaus-Robert Müller (eds.), Explainable AI: Interpreting, Explaining and Visualizing Deep Learning (Springer International Publishing, 2019).

13 See, e.g., draft EU AI Act, arts 5, 21, 26.

14 Such as in OECD, ‘Transparency and explainability’; Australian Government, ‘Australia’s artificial intelligence ethics framework’ (7 November 2019), Department of Industry, Science and Resources, www.industry.gov.au/data-and-publications/australias-artificial-intelligence-ethics-framework.

15 Shane T. Mueller, Robert R. Hoffman, William Clancey, Abigail Emrey, and Gary Klein, ‘Explanation in human-AI systems: A literature meta-review synopsis of key ideas and publications and bibliography for explainable AI’ (5 February 2019), DARPA XAI Literature Review, arXiv:1902.01876; Maja Brkan and Gregory Bonnet, ‘Legal and technical feasibility of the GDPR’s quest for explanation of algorithmic decisions: Of black boxes, white boxes and fata morganas’ (2020) 11(1) European Journal of Risk Regulation 18–50, at 18–19.

16 Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and Dino Pedreschi, ‘A survey of methods for explaining black box models’ (2019) 51(5) ACM Computing Surveys 1–42.

17 Interview participant 1, IT expert.

19 See, e.g., Zana Buçinca, Krzysztof Z. Gajos, Phoebe Lin, and Elena L. Glassman, ‘Proxy tasks and subjective measures can be misleading in evaluating explainable AI systems’ (2020), Proceedings of the 25th international conference on intelligent user interfaces, https://dl-acm-org.simsrad.net.ocs.mq.edu.au/doi/abs/10.1145/3377325.3377498; Julius Adebayo et al., ‘Sanity checks for saliency maps’ (2018) 31 Advances in Neural Information Processing Systems 9505, arXiv:1810.03292v3; Jindong Gu and Volker Tresp, ‘Saliency methods for explaining adversarial attacks’ (October 2019), Human-Centric Machine Learning (NeurIPS Workshop), https://arxiv.org/abs/1908.08413; similar from Interview participant 5, IT expert (‘it’s not clear to me if we’ll ever come up with a particularly good explanation of how the combination of neural networks and all the technologies that go into face recognition work. Whether we’ll ever be able to explain them’).

20 Interview participant 13, NGO representative.

21 See Section 4.3.

22 This type of information is being currently provided, for example, on the UK Metropolitan police website: www.met.police.uk/advice/advice-and-information/fr/facial-recognition.

23 Interview participant 1, IT expert.

24 Interview participant 19, law enforcement officer.

25 A watch list is the list against which the captured image is compared. When FRT is used in a law enforcement context, the watch list normally comprises images of persons who are suspected of or have been convicted of crimes, missing persons, and so on. In the case of live FRT, the probe picture is an image taken of a passing individual.
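To make the distinction concrete, a one-to-many search of this kind can be sketched as comparing a probe embedding against every watch-list embedding and raising a match only above a similarity threshold. Everything below (the cosine-similarity measure, the 128-dimensional embeddings, the names, and the 0.6 threshold) is a hypothetical illustration, not a description of any deployed system.

# Illustrative one-to-many watch-list search (hypothetical names, embeddings, threshold).
import numpy as np

def best_watchlist_match(probe, watchlist, threshold=0.6):
    """Return (name, score) of the best match, or (None, score) if below threshold."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {name: cosine(probe, emb) for name, emb in watchlist.items()}
    name, score = max(scores.items(), key=lambda kv: kv[1])
    return (name, score) if score >= threshold else (None, score)

rng = np.random.default_rng(1)
watchlist = {'person_A': rng.normal(size=128), 'person_B': rng.normal(size=128)}
probe = watchlist['person_B'] + 0.05 * rng.normal(size=128)  # noisy live capture
print(best_watchlist_match(probe, watchlist))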

26 For example, the draft EU AI Act treats live FRT in the law enforcement context as extremely high risk and generally bans them, with a few exceptions: see draft EU AI Act, Annex 3.

27 Interview participant 1, IT expert.

28 The draft EU AI Act requires all high-risk AI technologies, including FRT, to undergo certification procedures. This requirement, however, has not yet been established in other jurisdictions.

29 Interview participant 21, legal expert.

30 Interview participant 5, IT expert (‘Particularly, I mean, transparency is a very useful means of regulating governments abusing their position’); similar from interview participant 2, NGO representative.

31 Interview participant 13, NGO representative.

32 Interview participant 2, NGO representative (‘for us in civil society, knowing the parameters that were set around accuracy and the impact that might have on people of colour, might be a useful thing to know, contest the use case’).

33 Another possible challenge is government secrecy (the government may not want to disclose certain information for public security reasons, for example). The challenge in ensuring FRT explainability, by contrast, is technical (the technical ability to provide explanations of how a specific AI system functions).

34 State v. Loomis 881 N.W.2d 749, 755, 756, fn.18 (Wis. 2016), cert. denied, 137 S. Ct. 2290 (2017).

35 See Tanya Aplin, Lionel Bently, Phillip Johnson, and Simon Malynicz, Gurry on Breach of Confidence: The Protection of Confidential Information (2nd ed., Oxford University Press, 2012).

36 See, e.g., Clark D. Asay, ‘Artificial stupidity’ (2020) 61(5) William and Mary Law Review 1187–1257, at 1243. Notably, significant financial costs might be incurred to ensure that information remains secret.

37 For more, see Tanya Aplin, ‘Reverse engineering and commercial secrets’ (2013) 66(1) Current Legal Problems 341–377.

38 Katarina Foss-Solbrekk, ‘Three routes to protecting AI systems and their algorithms under IP law: The good, the bad and the ugly’ (2021) 16(3) Journal of Intellectual Property Law & Practice 247–258; Ana Nordberg, ‘Trade secrets, big data and artificial intelligence innovation: A legal oxymoron?’ in Jens Schovsbo, Timo Minssen, and Thomas Riis (eds.), The Harmonization and Protection of Trade Secrets in the EU: An Appraisal of the EU Directive (Edward Elgar Publishing Limited, 2020), pp. 194–220, at p. 212.

39 See, e.g., Sylvia Lu, ‘Algorithmic opacity, private accountability, and corporate social disclosure in the age of artificial intelligence’ (2020) 23(99) Vanderbilt Journal of Entertainment & Technology Law 116–117 (contending that the software industry has relied on trade secret law to protect algorithms for decades and AI algorithms are no exception).

40 See, e.g., LivePerson, Inc. v. 24/7 Customer, Inc., 83 F. Supp. 3d 501, 514 (SDNY, 2015) (finding algorithms based on artificial intelligence eligible for trade secret protection).

41 Interview participant 1, IT expert.

42 Note that even if all of this information could be ‘factual’ trade secrets, not all of it would qualify as ‘legal’ trade secrets. For a distinction between the two see Sharon K. Sandeen and Tanya Aplin, ‘Trade secrecy, factual secrecy and the hype surrounding AI’ in Ryan Abbott (ed.), Research Handbook on Intellectual Property and Artificial Intelligence (Edward Elgar, 2022), pp. 442–450; see also Camilla A. Hrdy and Mark A. Lemley, ‘Abandoning trade secrets’ (2021) 73(1) Stanford Law Review 1–66.

43 See Section 4.2.2.1. While this information could be protected as government secrets, it would not be protected as a trade secret as it does not have independent commercial value.

48 For example, Court Suppression and Non-Publication Orders Act 2010 (NSW) s 9 allows a court to make a suppression or non-publication order if it is necessary to prevent prejudice to the proper administration of justice.

51 Similar has been suggested for AI acquisition process for government institutions: see Jake Goldenfein, ‘Algorithmic transparency and decision-making accountability: Thoughts for buying machine learning algorithms’ in Closer to the Machine: Technical, Social, and Legal Aspects of AI (Office of the Victorian Information Commissioner, 2019), https://ovic.vic.gov.au/wp-content/uploads/2019/08/closer-to-the-machine-web.pdf.

52 Interview participant 26, government representative.

53 See, e.g., Freedom of Information Act 1982 (Cth).

54 See Elizabeth A. Rowe, ‘Striking a balance: When should trade-secret law shield disclosure to the government?’ (2011) 96 Iowa Law Review 791–835, at 804–808.

55 For an overview of the public interest defence, see Aplin et al., Gurry on Breach of Confidence.

56 Attorney-General v. Guardian Newspapers Ltd [1990] AC 109, 282 (Spycatcher case) (‘although the basis of the law’s protection of confidence is a public interest that confidences should be preserved by law, nevertheless that public interest may be outweighed by some other countervailing public interest which favours disclosure’ (Lord Goff)). Similarly, in Campbell v. Frisbee, the UK Court of Appeal held that the confider’s right ‘must give way where it is in the public interest that the confidential information should be made public’. See Campbell v. Frisbee [2002] EWCA Civ 1374, [23].

57 See Karen Koomen, ‘Breach of confidence and the public interest defence: Is it in the public interest? A review of the English public interest defence and the options for Australia’ (1994) 10 Queensland University of Technology Law Journal 56–88.

58 See, e.g., Spycatcher case, 269 (Lord Griffiths); Fraser v. Evans [1969] 1 QB 349; Hubbard v. Vosper [1972] 2 QB 84; discussed in Trent Glover, ‘The scope of the public interest defence in actions for breach of confidence’ (1999) 6 James Cook University Law Review 109–137, at 115–116, 118.

59 See discussion in Glover, ‘The scope of the public interest defence’; Corrs Pavey Whiting & Byrne v. Collector of Customs (Vic) (1987) 14 FCR 434, 454 (Gummow J).

60 Castrol Australia Pty Limited v. Emtech Associates Pty Ltd (1980) 51 FLR 184, 513 (Rath J, quoting with approval Ungoed-Thomas J in Beloff v. Pressdram [1973] 1 All ER 241, 260); for a criticism of a narrow interpretation see Koomen, ‘Breach of confidence and the public interest defence’.

61 See discussion in Jason Pizer, ‘The public interest exception to the breach of confidence action: Are the lights about to change?’ (1994) 20(1) Monash University Law Review 67–109, at 80–81.

62 See, e.g., Francome v. Mirror Group Newspapers Ltd [1984] 2 All ER 408.

63 Beloff v. Pressdram, 260; see similar limitation in Corrs Pavey Whiting & Byrne v. Collector of Customs, 456 (Gummow J).

5 Privacy’s Loose Grip on Facial Recognition Law and the Operational Image

1 Bridges v. South Wales Police [2019] EWHC 2341 (admin).

2 Ibid., at [85], citing S and Marper v. UK [2008] Eur Court HR 1581 and Catt v. UK (European Court of Human Rights, Application no. 43514/15, 24 January 2019).

3 Harun Farocki, ‘Phantom images’ (2004) 29 Public 12–22; Trevor Paglen, ‘Operational images’ (2014) 59 E-Flux (online); Mark Andrejevic and Zala Volcic, ‘Seeing like a border: Biometrics and the operational image’ (2022) 7(2) Digital Culture & Society 139–158; Rebecca Uliasz, ‘Seeing like an algorithm: Operative images and emergent subjects’ (2021) 36 AI & Society 1233–1241.

4 Mark Andrejevic and Zala Volcic, ‘Smart cameras and the operational enclosure’ (2021) 22(4) Television & New Media 343–359.

5 Paglen, ‘Operational images’.

6 Andrejevic and Volcic, ‘Smart cameras and the operational enclosure’, p. 347.

7 See, e.g., Adam Harvey and Jules LaPlace ‘Exposing.AI’ (2021), https://exposing.ai.

8 EU General Data Protection Regulation (GDPR): Regulation (EU) 2016/679, Art. 4(14).

9 Footnote Ibid., Recital 51: ‘The processing of photographs should not systematically be considered to be processing of special categories of personal data as they are covered by the definition of biometric data only when processed through a specific technical means allowing the unique identification or authentication of a natural person.’

10 See, e.g., Bilgesu Sumer, ‘When do the images of biometric characteristics qualify as special categories of data under the GDPR: A systemic approach to biometric data processing’, IEEE International Conference of the Biometrics Special Interest Group (14–16 September 2022), referencing ISO/IEC 2382-37: 2022 Information Technology Vocabulary Part 37.

11 EU General Data Protection Regulation (GDPR): Regulation (EU) 2016/679, Art 4(1); See also Breyer v. Bundesrepublik Deutschland ECLI:EU:C:2016:779.

13 Landesbeauftragte für Datenschutz und Akteneinsicht, ‘Verarbeitung personenbezogener Daten bei Fotografien’ [Processing of personal data in photographs] (June 2018), www.lda.brandenburg.de/sixcms/media.php/9/RechtlicheAnforderungenFotografie.pdf.

14 Article 29 Working Party, ‘Opinion 4/2007 on the concept of personal data (WP 136, 20 June 2007)’.

15 EU General Data Protection Regulation (GDPR): Regulation (EU) 2016/679, Art 9(1).

16 Sumer, ‘When do the images of biometric characteristics qualify’.

17 Article 29 Working Party, ‘Working document on biometrics (WP 80, 1 August 2003)’.

18 Article 29 Working Party, ‘Opinion 3/2012 on developments in biometric technologies (WP 193, 27 April 2012)’.

19 European Data Protection Board, ‘Guidelines 3/2019 on processing of personal data through video devices (Version 2.0, 29 January 2020)’.

20 S and Marper v. UK [2008] Eur Court HR 1581.

21 Biometric Information Privacy Act (740 ILCS 14/).

22 See, e.g., Megan Richardson, Mark Andrejevic, and Jake Goldenfein, ‘Clearview AI facial recognition case highlights need for clarity on law’ (22 June 2022), CHOICE, www.choice.com.au/consumers-and-data/protecting-your-data/data-laws-and-regulation/articles/clearview-ai-and-privacy-law.

23 See, e.g., Patel v. Facebook No. 18-15982 (9th Cir. 2019) – ‘the development of a face template using facial-recognition technology without consent’ is an invasion of a privacy interest.

24 Decision 2021-134 of 1 November 2021 issuing an order to comply to the company Clearview AI (No. MDMM211166).

25 EU General Data Protection Regulation (GDPR): Regulation (EU) 2016/679, Art 6(1)(f) specifies that even if data is publicly available it still requires a legal basis for processing and is not automatically available for re-use. When publicly available data is processed on the basis of legitimate interests, the European Data Protection Board suggests that data subjects must reasonably be able to expect such further processing.

26 Commissioner initiated investigation into Clearview AI, Inc. (Privacy) [2021] AICmr 54 (14 October 2021).

27 Ibid., at [172].

28 Jake Goldenfein, Monitoring Laws (Cambridge University Press, 2019).

29 Chloe Xiang, ‘AI is probably using your images and it’s not easy to opt out’ (26 September 2022), Vice: Motherboard, www.vice.com/en/article/3ad58k/ai-is-probably-using-your-images-and-its-not-easy-to-opt-out.

30 See, e.g., Benjamin L. W. Sobel, ‘A new common law of web scraping’ (2021–2022) 25 Lewis and Clark Law Review 147–207; Vladan Joler and Matteo Pasquinelli, ‘Nooscope’ (2020) https://nooscope.ai/.

31 Madhumita Murgia, ‘Who’s using your face? The ugly truth about facial recognition’ (19 April 2019), Financial Times, www.ft.com/content/cf19b956-60a2-11e9-b285-3acd5d43599e.

34 See, e.g., https://laion.ai/faq/.

35 The diversity of actors in the facial recognition supply chain also enables problematic ‘data laundering’ practices. Datasets are legally constructed by research institutions using non-commercial research exceptions to copyright law, but then made available to commercial entities that use them for profit: see Andy Baio, ‘AI data laundering: How academic and nonprofit researchers shield tech companies from accountability’ (30 September 2022), Waxy, https://waxy.org/2022/09/ai-data-laundering-how-academic-and-nonprofit-researchers-shield-tech-companies-from-accountability/, reporting on a Meta-owned generative text-to-video tool trained on the WebVid-10M dataset that was initially scraped from Shutterstock, as well as the XPretrain dataset released by Microsoft of millions of videos scraped from YouTube with text descriptions.

36 See, e.g., Harvey and LaPlace, ‘Exposing.AI’.

37 See, e.g., UC Riverside Video Computing Group. ‘Datasets’ (n.d.), https://vcg.ece.ucr.edu/datasets.

38 HiQ Labs v. LinkedIn Corp., 938 F.3d 985 (9th Cir. 2019).

39 See, e.g., Ryan Merkley, ‘Use and fair use: Statement on shared images in facial recognition AI’ (13 March 2019), Creative Commons, https://creativecommons.org/2019/03/13/statement-on-shared-images-in-facial-recognition-ai/.

40 Sobel, ‘A new common law of web scraping’.

41 See, e.g., Jonathan Band, ‘Google and fair use’ (2008) 3 Journal of Business & Technology Law 1–28.

42 See, e.g., including for contrasting views, Wendy Xu, ‘Recognizing property rights in biometric data under the right to publicity’ (2020–2021) 98 University of Detroit Mercy Law Review 143–166; Lisa Raimondi, ‘Biometric data regulation and the right to publicity: A path to regaining autonomy over our commodified identity’ (2021) 16(1) University of Massachusetts Law Review 200–230; A. J. McClurg, ‘In the face of danger: Facial recognition and the limits of privacy law’ (2007) 120 Harvard Law Review 1870–1891.

43 Vance v. IBM Case: 1:20-cv-00577.

44 PIJIP, ‘Joint comment to WIPO on copyright and artificial intelligence’ (17 February 2020), Infojustice, https://infojustice.org/archives/42009.

45 See, e.g., Mauritz Kop, ‘The right to process data for machine learning purposes in the EU’ (2021) 34 Harvard Journal of Law & Technology – Spring Digest 123.

46 See, e.g., Inioluwa Deborah Raji, Timnit Gebru, Margaret Mitchel, Joy Buolamwini, Joonseok Lee, and Emily Denton, ‘Saving face: Investigating the ethical concerns of facial recognition auditing’ (2020), AAAI/ACM AI Ethics and Society Conference 2020; Vinay Uday Prabhu and Abeba Birhane, ‘Large image datasets: A Pyrrhic win for computer vision?’ (2020), arXiv:2006.16923.

6 Facial Recognition Technology and Potential for Bias and Discrimination

1 Avi Marciano, ‘Reframing biometric surveillance: From a means of inspection to a form of control’ (2019) 21 Ethics and Information Technology 127–136, at 134.

2 Joy Buolamwini and Timnit Gebru, ‘Gender shades: Intersectional accuracy disparities in commercial gender classification’ (2018) 81 Proceedings of Machine Learning Research Conference on Fairness, Accountability and Transparency 1–15.

3 Marcus Smith and Seumas Miller, Biometric Identification Law and Ethics (Springer, 2021).

4 Marcus Smith, Monique Mann, and Gregor Urbas, Biometrics, Crime and Security (Routledge, 2018).

5 Monique Mann and Marcus Smith, ‘Automated facial recognition technology: Recent developments and regulatory options’ (2017) 40 University of New South Wales Law Journal 121–145.

6 R (on the application of Bridges) v. Chief Constable of South Wales Police (2020) EWCA Civ 1058.

7 Marcus Smith and Gregor Urbas, Technology Law (Cambridge University Press, 2021).

8 Marcus Smith and Seumas Miller, ‘The ethical application of biometric facial recognition technology’ (2021) 37 AI & Society 167–175.

10 Smith, Mann, and Urbas, Biometrics, Crime and Security.

11 Smith and Miller, ‘Ethical application of biometric facial recognition technology’.

12 Sidney Perkowitz, ‘The bias in the machine: Facial recognition technology and racial disparities’ (5 February 2021), MIT Schwarzman College of Computing, https://mit-serc.pubpub.org/pub/bias-in-machine/release/1?readingCollection=34db8026.

13 Laura Moy, ‘A taxonomy of police technology’s racial inequity problems’ (2021) University of Illinois Law Review 139–193.

14 Drew Harwell, ‘Wrongfully arrested man sues Detroit police over false facial recognition match’ (13 April 2021), Washington Post, www.washingtonpost.com/technology/2021/04/13/facial-recognition-false-arrest-lawsuit/.

15 Brian Jefferson, Digitize and Punish: Racial Criminalization in the Digital Age (University of Minnesota Press, 2020), p. 11.

16 Patrick Grother, Mei Ngan, and Kayee Hanaoka, Face Recognition Vendor Test (FRVT) Part 2: Identification (NIST, 2019).

17 Joy Buolamwini and Timnit Gebru, ‘Gender shades: Intersectional accuracy disparities in commercial gender classification’ (2018) 81 Proceedings of the 1st Conference on Fairness, Accountability and Transparency 77–91.

19 Clare Garvie, Alvaro Bedoya, and Jonathan Frankle, ‘The perpetual line-up: Unregulated police face recognition in America’ (18 October 2016), Georgetown Law Center on Privacy and Technology, www.perpetuallineup.org/.

20 Damien Patrick Williams, ‘Fitting the description: Historical and sociotechnical elements of facial recognition and anti-black surveillance’ (2020) 7 Journal of Responsible Innovation 74–83.

21 Pete Fussey, Bethan Davies, and Martin Innes, ‘“Assisted” facial recognition and the reinvention of suspicion and discretion in digital policing’ (2021) 61 British Journal of Criminology 325–344.

24 Simon Egbert and Monique Mann, ‘Discrimination in predictive policing: The dangerous myth of impartiality and the need for STS-analysis’ in V. Badalic (ed.), Automating Crime Prevention, Surveillance and Military Operations (Springer, 2021), pp. 25–46.

25 Ibid., p. 25.

26 Pat O’Malley and Gavin Smith, ‘“Smart” crime prevention? Digitization and racialized crime control in a smart city’ (2022) 26(1) Theoretical Criminology 40–56, at 40.

29 Sara Yates, ‘The digitalization of the carceral state: The troubling narrative around police usage of facial recognition technology’ (2022) 19 Colorado Technology Law Journal 483–508.

30 Ibid., p. 505.

31 Ibid., p. 506.

32 Damien Patrick Williams, ‘Fitting the description: Historical and sociotechnical elements of facial recognition and anti-black surveillance’ (2020) 7 Journal of Responsible Innovation 74–83.

7 Power and Protest: Facial Recognition and Public Space Surveillance

Research for this chapter has been funded by the Research Council of Lithuania (LMTLT) (Government Use of Facial Recognition Technologies: Legal Challenges and Solutions (FaceAI), agreement number S-MIP-21-38); and Australian Research Council Discovery Early Career Research Award (Artificial Intelligence Decision-Making, Privacy and Discrimination Laws, project number DE210101183). The chapter draws on and adapts some arguments developed in M. Zalnieriute, ‘Facial Recognition Surveillance and Public Space: Protecting Protest Movements’, International Review of Law, Computers & Technology, 2024 (forthcoming).

1 Hannah Arendt, On Revolution (Penguin, 1977), p. 218.

2 PTI, ‘Delhi: Facial recognition system helps trace 3,000 missing children in 4 days’ (22 April 2018), Times of India, https://timesofindia.indiatimes.com/city/delhi/delhi-facial-recognition-system-helps-trace-3000-missing-children-in-4-days/articleshow/63870129.cms.

4 Ryan Grenoble, ‘Welcome to the surveillance state: China’s AI cameras see all’ (12 December 2017), HuffPost Australia, www.huffpost.com/entry/china-surveillance-camera-big-brother_n_5a2ff4dfe4b01598ac484acc.

5 Patrick Reevell, ‘How Russia is using facial recognition to police its coronavirus lockdown’ (30 April 2020), ABC News, https://abcnews.go.com/International/russia-facial-recognition-police-coronavirus-lockdown/story?id=70299736; Sarah Rainsford, ‘Russia uses facial recognition to tackle virus’ (4 April 2020), BBC News, www.bbc.com/news/av/world-europe-52157131/coronavirus-russia-uses-facial-recognition-to-tackle-covid-19. One man, having been given a self-quarantine order, was visited by police within half an hour of leaving his home to take out the rubbish.

6 NtechLab, ‘Biometric Solution against COVID-19’ (n.d.), https://ntechlab.com/en_au/solution/biometric-solution-against-covid-19/.

7 V. Barassi, Activism on the Web: Everyday Struggles against Digital Capitalism (Routledge, 2015); J. Juris, ‘Reflections on #occupy everywhere: Social media, public space, and emerging logics of aggregation’ (2012) 39(2) American Ethnologist 259–279; P. Gerbaudo, Tweets and the Streets: Social Media and Contemporary Activism (Pluto Press, 2012); P. Gerbaudo, The Mask and the Flag: Populism, Citizenism, and Global Protest (Oxford University Press, 2017); Alice Mattoni, Media Practices and Protest Politics How Precarious Workers Mobilise (Ashgate, 2012); Lucas Melgaco and Jeffrey Monoghan, ‘Introduction: Taking to the streets in the information age’ in Lucas Melgaco and Jeffrey Monoghan (eds.), Protests in the Information Age (Routledge, 2018), pp. 1–17; D. Trottier and Christian Fuchs (eds.), Social Media, Politics and the State (Routledge, 2015).

8 Andrew Guthrie Ferguson, ‘Facial recognition and the Fourth Amendment’ (2021) 105 Minnesota Law Review 1105–1106; Jagdish Chandra Joshi and K. K. Gupta, ‘Face recognition technology: A review’ (2016) 1 The IUP Journal of Telecommunication 53–54, at 53; Relly Victoria Virgil Petrescu, ‘Face recognition as a biometric application’ (2019) 3 Journal of Mechatronics and Robotics 240; Mary Grace Galterio, Simi Angelic Shavit, and Thaier Hayajneh, ‘A review of facial biometrics security for smart devices’ (2018) 7(37) Computers 3; Ian Berle, Face Recognition Technology: Compulsory Visibility and Its Impact on Privacy and the Confidentiality of Personal Identifiable Images (Springer, 2020), p. 1.

9 ACLU of Northern CA, ‘Police use of social media surveillance software is escalating, and activists are in the digital crosshairs’ (22 September 2016), Medium, https://medium.com/@ACLU_NorCal/police-use-of-social-media-surveillance-software-is-escalating-and-activists-are-in-the-digital-d29d8f89c48.

10 Matt Cagle, ‘Facebook, Instagram, and Twitter provided data access for a surveillance product marketed to target activists of color’ (11 October 2016), ACLU of Northern California, www.aclunc.org/blog/facebook-instagram-and-twitter-provided-data-access-surveillance-product-marketed-target; Russell Brandom, ‘Facebook, Twitter, and Instagram surveillance tool was used to arrest Baltimore protestors’ (11 October 2016), The Verge, www.theverge.com/2016/10/11/13243890/facebook-twitter-instagram-police-surveillance-geofeedia-api; Kalev Leetaru, ‘Geofeedia is just the tip of the iceberg: The era of social surveillance’ (12 October 2016), Forbes, www.forbes.com/sites/kalevleetaru/2016/10/12/geofeedia-is-just-the-tip-of-the-iceberg-the-era-of-social-surveillence/.

11 Ali Winston, ‘Oakland cops quietly acquired social media surveillance tool’ (13 April 2016), East Bay Express, www.eastbayexpress.com/oakland/oakland-cops-quietly-acquired-social-media-surveillance-tool/Content?oid=4747526.

12 Shira Ovide, ‘A case for banning facial recognition’ (9 June 2020), New York Times, www.nytimes.com/2020/06/09/technology/facial-recognition-software.html.

13 Tate Ryan-Mosley and Sam Richards, ‘The secret police: Cops built a shadowy surveillance machine in Minnesota after George Floyd’s murder’ (3 March 2022), MIT Technology Review, www.technologyreview.com/2022/03/03/1046676/police-surveillance-minnesota-george-floyd/.

14 Jay Mazoomdaar, ‘Delhi police film protests, run its images through face recognition software to screen crowd’ (28 December 2019), Indian Express, https://indianexpress.com/article/india/police-film-protests-run-its-images-through-face-recognition-software-to-screen-crowd-6188246/.

15 Vidushi Marda, ‘View: From protests to chai, facial recognition is creeping up on us’ (7 January 2020), Carnegie India, https://carnegieindia.org/2020/01/07/view-from-protests-to-chai-facial-recognition-is-creeping-up-on-us-pub-80708.

16 Mazoomdaar, ‘Delhi police film protests’.

17 Alexandra Ulmer and Zeba Siddiqui, ‘Controversy over India’s use of facial recognition technology’ (17 February 2020), Sydney Morning Herald, www.smh.com.au/world/asia/controversy-over-india-s-use-of-facial-recognition-during-protests-20200217-p541pp.html.

18 Richard Byrne and Michael C. Davis, ‘Protest tech: Hong Kong’ (2020), Wilson Quarterly, http://wq.proof.press/quarterly/the-power-of-protest/protest-tech-hong-kong/.

19 Michelle Corinne Liu, Jaime R. Brenes Reyes, Sananda Sahoo, and Nick Dyer-Witheford, ‘Riot platforms: Protest, police, planet’ (2022) 54(6) Antipode 1901.

20 D. Trottier, ‘Crowdsourcing CCTV surveillance on the internet’ (2014) 15(5) Information Communication and Society 609; D. Trottier, ‘Digital vigilantism as weaponisation of visibility’ (2017) 30(1) Philosophy and Technology 55.

21 Heather Kelly and Rachel Lerman, ‘America is awash in cameras, a double-edged sword for protesters and police’ (3 June 2020), Washington Post, www.washingtonpost.com/technology/2020/06/03/cameras-surveillance-police-protesters/. In protest at the request, individuals reportedly sent the police videos and images of K-pop stars.

22 Debra Mackinnon, ‘Surveillance-ready-subjects: The making of Canadian anti-masking law’ in Lucas Melgaco and Jeffrey Monoghan (eds.), Protests in the Information Age (Routledge, 2018), pp. 151, 162.

23 Rachel Levinson-Waldman, ‘Government access to and manipulation of social media: Legal and policy challenges’ (2018) 61(3) Howard Law Journal 523–562, at 526–531.

24 ‘U.S. Patent No. 9,892,168 B1’ filed on 24 May 2016; ‘U.S. Patent No. 9,794,358 B1’ filed on 13 March 2014; Farrah Bara, ‘From Memphis, with love: A model to protect protesters in the age of surveillance’ (2019) 69 Duke Law Journal 197–229, at 206.

25 Monika Zalnieriute, ‘Burning bridges: The automated facial recognition technology and public space surveillance in the modern state’ (2021) 22(2) Columbia Science and Technology Law Review 284, 314.

26 Katja Kukielski, ‘The First Amendment and facial recognition technology’ (2022) 55(1) Loyola of Los Angeles Law Review 231.

27 Charlotte Jee, ‘A new face recognition privacy bill would give us more control over our data’ (8 October 2019), MIT Technology Review, www.technologyreview.com/f/613129/a-new-face-recognition-privacy-bill-would-give-us-more-control-over-our-data/; Security Newswire, ‘Commercial facial recognition Privacy Act of 2019 introduced’ (n.d.), Security, www.securitymagazine.com/articles/90097-commercial-facial-recognition-privacy-act-of-2019-introduced?v=preview.

28 Amrita Khalid, ‘The EU’s agenda to regulate AI does little to rein in facial recognition’ (20 February 2020), Quartz, https://qz.com/1805847/facial-recognition-ban-left-out-of-the-eus-agenda-to-regulate-ai/.

29 [2020] EWCA Civ 1058.

30 R (on the application of Edward Bridges) v. The Chief Constable of South Wales Police [2020] Court of Appeal (Civil Division) C1/2019/2670; EWCA Civ 1058, 210 (‘Bridges (Appeal)’).

31 American Civil Liberties Union v. United States Department of Justice (United States District Court, 31 October 2019). In October 2019 the American Civil Liberties Union (ACLU) brought an action against the US Department of Justice, the FBI, and the Drug Enforcement Agency, claiming that the public had a right to know when facial recognition software was being utilised under the Freedom of Information Act. The case was filed after the ACLU made a freedom of information request in January 2019. The DoJ, FBI, and DEA failed to produce any responsive documents. ACLU, ‘ACLU challenges FBI face recognition secrecy’ (31 October 2019), www.aclu.org/press-releases/aclu-challenges-fbi-face-recognition-secrecy; Conseil d’Etat, Décision n 442364 (26 April 2022), www.conseil-etat.fr/fr/arianeweb/CE/decision/2022-04-26/442364.

32 Kate Conger, Richard Fausset, and Serge Kovaleski, ‘San Francisco bans facial recognition technology’ (14 May 2019), New York Times, www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html. The decision was made by the Board of Supervisors, who stated that the responsibility to regulate FRT will lie first with local legislators who have the capacity to move more quickly than the Federal government.

33 Christopher Jackson, Morgan Livingston, Vetri Velan, Eric Lee, Kimberly Huynh, and Regina Eckert, ‘Establishing privacy advisory commissions for the regulation of facial recognition systems at the municipal level’ (2020), Science Policy Group, University of California, Berkeley, https://escholarship.org/uc/item/7qp0w9rn.

34 Max Read, ‘Why we should ban facial recognition technology’ (30 January 2020), Intelligencer, https://nymag.com/intelligencer/2020/01/why-we-should-ban-facial-recognition-technology.html; ACLU, ‘California governor signs landmark bill halting facial recognition on police body cams’ (8 October 2019), ACLU Northern California, www.aclunc.org/news/california-governor-signs-landmark-bill-halting-facial-recognition-police-body-cams.

35 Lord Clement-Jones, ‘Automated Facial Recognition Technology (Moratorium and Review) Bill [HL]2019–20’ (2019), https://services.parliament.uk/bills/2019-20/automatedfacialrecognitiontechnologymoratoriumandreview.html/.

36 John Scott and Gordon Marshall, A Dictionary of Sociology (Oxford University Press, 2009).

37 Daniel Trottier and Christian Fuchs, ‘Theorising social media, politics and the state: An introduction’ in Daniel Trottier and Christian Fuchs (eds.), Social Media, Politics and the State (Routledge, 2015), pp. 3, 33; Alberto Melucci and Leonardo Avritzer, ‘Complexity, cultural pluralism and democracy: Collective action in the public space’ (2000) 39(4) Social Science Information 507.

38 Jens Kremer, ‘The end of freedom in public places? Privacy problems arising from surveillance of the European public space’ (2017), PhD thesis, University of Helsinki, p. 5.

39 Ibid., p. 73.

40 Christoph Burgmer, ‘Protestbewegung: Warum einen öffentlichen Platz besetzen?’ [Why occupy a public space?] (3 October 2014), Deutschlandfunk, www.deutschlandfunk.de/protestbewegung-warum-einen-oeffentlichen-platz-besetzen.1184.de.html?dram:article_id=299327.

41 G. T. Marx, ‘Security and surveillance contests: Resistance and counter-resistance’ in T. Balzacq (ed.), Contesting Security: Strategies and Logics (Routledge, 2015), pp. 15, 23.

42 G. T. Marx, Windows into the Soul: Surveillance and Society in an Age of High Technology (University of Chicago Press, 2016), p. 160.

43 Byrne and Davis, ‘Protest tech’.

44 Mackinnon, ‘Surveillance-ready-subjects’, p. 161.

45 Melgaco and Monoghan, ‘Introduction’, p. 7; Luis Fernandez, Policing Dissent: Social Control and the Anti-Globalization Movement (Rutgers University Press, 2008); P. Gillham, ‘Securitizing America: Strategic incapacitation and the policing of protest since the 11 September 2001 terrorist attacks’ (2011) 5(7) Sociology Compass 636; P. Gillham, B. Edwards, and J. Noakes, ‘Strategic incapacitation and the policing of Occupy Wall Street protests in New York City, 2011’ (2013) 23(1) Policing and Society 81; Jeffrey Monoghan and K. Walby, ‘Making up “terror identities”: Security intelligence, Canada’s Integrated Threat Assessment Centre, and social movement suppression’ (2012) 22(2) Policing and Society 133; Jeffrey Monoghan and K. Walby, ‘“They attacked the city”: Security intelligence, the sociology of protest policing, and the anarchist threat at the 2010 Toronto G20 Summit’ (2012) 60(5) Current Sociology 653.

46 Irena Nesterova, ‘Mass data gathering and surveillance: The fight against facial recognition technology in the globalized world’ (2020) 74 SHS Web of Conferences 03006, www.shs-conferences.org/articles/shsconf/pdf/2020/02/shsconf_glob2020_03006.pdf, pp. 2–3, 6.

47 J. Pugliese, Biometrics: Bodies, Technologies, Biopolitics (Routledge, 2012).

48 T. Monahan, ‘Dreams of control at a distance: Gender, surveillance, and social control’ (2009) 9(2) Cultural Studies – Critical Methodologies 286; Mackinnon, ‘Surveillance-ready-subjects’, p. 162.

49 Melgaco and Monoghan, ‘Introduction’, p. 9; Gilles Deleuze, ‘Postscript on the societies of control’ (1992) 59 October 3; Zygmunt Bauman and David Lyon, Liquid Surveillance: A Conversation (Polity Press, 2013).

50 Australian Human Rights Commission, ‘How are human rights protected in Australian law?’ (2015), https://humanrights.gov.au/our-work/rights-and-freedoms/how-are-human-rights-protected-australian-law.

51 Australian Human Rights Commission, ‘Human rights and technology: Final report’ (March 2021).

52 Marion Oswald, ‘Algorithm-assisted decision-making in the public sector: Framing the issues using administrative law rules governing discretionary power’ (2018) 376(2128) Philosophical Transactions of the Royal Society A, https://royalsocietypublishing.org/doi/abs/10.1098/rsta.2017.0359; Andrew Tutt, ‘An FDA for algorithms’ (2017) 69 Administrative Law Review 83.

53 Brent D. Mittelstadt, Chris Russell, and Sandra Wachter, ‘Explaining explanations in AI’, Proceedings of FAT* ’19: Conference on Fairness, Accountability, and Transparency (FAT* ’19), Atlanta, GA, ACM, New York (29–31 January 2019).

54 Brent Mittelstadt, ‘Auditing for transparency in content personalization systems’ (2016) 10 International Journal of Communication 4991; Pauline T. Kim, ‘Auditing algorithms for discrimination’ (2017) 166 University of Pennsylvania Law Review Online 189.

55 Corinne Cath et al., ‘Artificial intelligence and the “good society”: The US, EU, and UK approach’ (2018) 24 Science and Engineering Ethics 505.

56 Tjerk Timan, Bryce Clayton Newell, and Bert-Jaap Koops (eds.), Privacy in Public Space: Conceptual and Regulatory Challenges (Edward Elgar, 2017); Bryce Clayton Newell, Tjerk Timan, and Bert-Jaap Koops, Surveillance, Privacy and Public Space (Routledge, 2019); N. A. Moreham, ‘Privacy in public places’ (2006) 65(3) The Cambridge Law Journal 606; Joel Reidenberg, ‘Privacy in public’ (2014) 69(1) University of Miami Law Review 141; Helen Fay Nissenbaum, ‘Towards an approach to privacy in public: Challenges of information technology’ (1997) 7(3) Ethics & Behaviour 207; Beate Roessler, ‘Privacy and/in the public sphere’ (2016) 1 Yearbook for Eastern and Western Philosophy 243.

57 Elizabeth Joh, ‘The new surveillance discretion: Automated suspicion, big data, and policing’ (2016) 10 Harvard Law & Policy Review 15, 33.

58 Orin Kerr, ‘The case for the third-party doctrine’ (2009) 107 Michigan Law Review 561, 566; Ferguson, ‘Facial recognition and the Fourth Amendment’, pp. 16–17.

59 Douglas A. Fretty, ‘Face-recognition surveillance: A moment of truth for Fourth Amendment rights in public places’ (2011) 16(3) Virginia Journal of Law & Technology 430, 463.

60 United States v. Jones [2012] United States Supreme Court 565 US 400; Carpenter v. United States [2018] United States Supreme Court 138 S. Ct. 2206; Riley v. California [2014] United States Supreme Court 573 US 373.

61 Ferguson, ‘Facial recognition and the Fourth Amendment’, p. 21.

62 Ibid., pp. 21–23: ‘Anti-equivalence principle’; Carpenter v. United States, 2219.

63 Ferguson, ‘Facial recognition and the Fourth Amendment’, pp. 23–24: ‘Anti-aggregation principle’.

64 Ibid., p. 24: ‘Anti-permanence principle’.

65 Ibid., pp. 24–26: ‘Anti-tracking principle’.

66 Ibid., pp. 26–27: ‘Anti-arbitrariness principle’.

67 Ibid., pp. 27–28: ‘Anti-permeating surveillance principle’.

68 Clare Garvie, Alvaro Bedoya, and Jonathan Frankle, ‘The perpetual line-up: Unregulated police face recognition in America’ (18 October 2016), Georgetown Law Center on Privacy and Technology, www.perpetuallineup.org/; Brendan F. Klare, Mark J. Burge, Joshua C. Klontz, Richard W. Vorder Bruegge, and Anil K. Jain, ‘Face recognition performance: Role of demographic information’ (2012) 7 IEEE Transactions on Information Forensics and Security 1789; Joy Buolamwini and Timnit Gebru, ‘Gender shades: Intersectional accuracy disparities in commercial gender classification’ (2018) 81 Proceedings of Machine Learning Research 1, http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf.

69 Matthew Schwartz, ‘Color-blind biometrics? Facial recognition and arrest rates of African-Americans in Maryland and the United States’ (2019), Thesis in partial fulfilment of a Masters in Public Policy, Georgetown University, p. 15.

70 Salem Hamed Abdurrahim, Salina Abdul Samad, and Aqilah Baseri Huddin, ‘Review on the effects of age, gender, and race demographics on automatic face recognition’ (2018) 34 The Visual Computer 1617–1630; Jacob Snow, ‘Amazon’s face recognition falsely matched 28 members of Congress with mugshots’ (26 July 2018), ACLU, www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28.

71 Rebecca Crootof, ‘“Cyborg justice” and the risk of technological–legal lock-in’ (2019) 119(1) Columbia Law Review 1, 8; Batya Friedman and Helen Fay Nissenbaum, ‘Bias in computer systems’ (1996) 14 ACM Transactions on Information Systems 330, 333–336.

72 Henriette Ruhrmann, ‘Facing the future: Protecting human rights in policy strategies for facial recognition technology in law enforcement’ (May 2019), CITRIS Policy Lab, https://citrispolicylab.org/wp-content/uploads/2019/09/Facing-the-Future_Ruhrmann_CITRIS-Policy-Lab.pdf, p. 46; Garvie, Bedoya, and Frankle, ‘The perpetual line-up’.

73 Ruhrmann, ‘Facing the future’, p. 63; Garvie, Bedoya, and Frankle, ‘The perpetual line-up’.

74 Nusrat Choudhury, ‘New data reveals Milwaukee police stops are about race and ethnicity’ (23 February 2018), ACLU, www.aclu.org/blog/criminal-law-reform/reforming-police/new-data-reveals-milwaukee-police-stops-are-about-race-and; Frank R. Baumgartner, Derek A. Epp, and Kelsey Shoub, Suspect Citizens: What 20 Million Traffic Stops Tell Us about Policing and Race (Cambridge University Press, 2018).

75 Choudhury, ‘New data reveals’; Camelia Simoiu, Sam Corbett-Davies, and Sharad Goel, ‘The problem of infra-marginality in outcome tests for discrimination’ (2017) 11(3) The Annals of Applied Statistics 1193; Lynn Langton and Matthew Durose, ‘Police behavior during traffic and street stops, 2011’ (September 2013), US Department of Justice, www.bjs.gov/content/pub/pdf/pbtss11.pdf.

76 NAACP, ‘Criminal Justice Fact Sheet’ (n.d.), www.naacp.org/criminal-justice-fact-sheet/; Megan Stevenson and Sandra Mayson, ‘The scale of misdemeanor justice’ (2018) 98 Boston University Law Review 371.

77 Ashley Nellis, ‘The color of justice: Racial and ethnic disparity in state prisons’ (13 October 2021), The Sentencing Project, www.sentencingproject.org/publications/color-of-justice-racial-and-ethnic-disparity-in-state-prisons/.

78 Samuel Gross, Maurice Possley, and Klara Stephens, Race and Wrongful Convictions in the United States (National Registry of Exonerations, 2017), www.law.umich.edu/special/exoneration/Documents/Race_and_Wrongful_Convictions.pdf.

79 Encyclopedia Britannica, ‘IBM: Founding, history, & products’, www.britannica.com/topic/International-Business-Machines-Corporation.

80 Eric Reed, ‘History of IBM: Timeline and facts’ (24 February 2020), TheStreet, www.thestreet.com/personal-finance/history-of-ibm.

81 George Joseph, ‘Inside the video surveillance program IBM built for Philippine strongman Rodrigo Duterte’ (20 March 2019), The Intercept, https://theintercept.com/2019/03/20/rodrigo-duterte-ibm-surveillance/.

85 Edwin Black, IBM and the Holocaust: The Strategic Alliance between Nazi Germany and America’s Most Powerful Corporation, Expanded Edition (Crown Publishers, 2001).

86 Monika Zalnieriute, ‘From human rights aspirations to enforceable obligations by non-state actors in the digital age: The case of internet governance and ICANN’ (2019) 21 Yale Journal of Law & Technology 278.

88 Monika Zalnieriute, ‘“Transparency-washing” in the digital age: A corporate agenda of procedural fetishism’ (2021) 8(1) Critical Analysis of Law 39.

89 Ferguson, ‘Facial recognition and the Fourth Amendment’, pp. 63–73.

90 ‘Ban Facial Recognition’, www.banfacialrecognition.com/.

91 For example, in March 2020 the UK Equality and Human Rights Commission called for the use of FRT to be suspended; see Equality and Human Rights Commission, ‘Facial recognition technology and predictive policing algorithms out-pacing the law’ (12 March 2020), www.equalityhumanrights.com/en/our-work/news/facial-recognition-technology-and-predictive-policing-algorithms-out-pacing-law.

92 Australian Human Rights Commission, ‘Human Rights and Technology: Final Report’ (1 March 2021), https://tech.humanrights.gov.au/downloads.

93 Brown v. Tasmania [2017] HCA 43.

94 For example, in March 2020 the UK Equality and Human Rights Commission called for the use of FRT to be suspended; see Equality and Human Rights Commission, ‘Facial recognition technology’.

8 Faces of War: Russia’s Invasion of Ukraine and Military Use of Facial Recognition Technology

1 BBC, ‘Ukraine offered tool to search billions of faces’ (14 March 2022), BBC News, www.bbc.com/news/technology-60738204.

2 Paresh Dave and Jeffrey Dastin, ‘Exclusive: Ukraine has started using Clearview AI’s facial recognition during war’ (14 March 2022), Reuters, www.reuters.com/technology/exclusive-ukraine-has-started-using-clearview-ais-facial-recognition-during-war-2022-03-13/.

3 BBC, ‘How facial recognition is identifying the dead in Ukraine’ (13 April 2022), BBC News, www.bbc.com/news/technology-61055319.

4 kwon0321, ‘Clearview AI working on A.R. goggles for Air Force security’ (3 February 2022), Days Tech, https://daystech.org/clearview-ai-working-on-a-r-goggles-for-air-force-security/.

5 Elizabeth Dwoskin, ‘Israel escalates surveillance of Palestinians with facial recognition program in West Bank’ (8 November 2021), Washington Post, www.washingtonpost.com/world/middle_east/israel-palestinians-surveillance-facial-recognition/2021/11/05/3787bf42-26b2-11ec-8739-5cb6aba30a30_story.html.

6 BBC, ‘How facial recognition is identifying the dead’.

7 In Europe, Clearview AI’s services have been condemned by, for instance, the Swedish DPA, the French DPA, the Italian DPA, and the UK Information Commissioner’s Office. See European Data Protection Board, ‘Swedish DPA: Police unlawfully used facial recognition app’ (12 February 2021), https://edpb.europa.eu/news/national-news/2021/swedish-dpa-police-unlawfully-used-facial-recognition-app_es; Commission Nationale de l’Informatique et des Libertés, ‘Facial recognition: The CNIL orders CLEARVIEW AI to stop reusing photographs available on the internet’ (16 December 2021), www.cnil.fr/en/facial-recognition-cnil-orders-clearview-ai-stop-reusing-photographs-available-internet; European Data Protection Board, ‘Facial recognition: Italian SA fines Clearview AI EUR 20 million’ (17 March 2022), https://edpb.europa.eu/news/national-news/2022/facial-recognition-italian-sa-fines-clearview-ai-eur-20-million_en; Information Commissioner’s Office, ‘ICO fines facial recognition database company Clearview AI Inc more than £7.5 m and orders UK data to be deleted’ (23 May 2022), https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2022/05/ico-fines-facial-recognition-database-company-clearview-ai-inc/.

8 Pictures of soldiers can be useful in many ways. Another possible use of images is geoprofiling, employed at both public and private levels. The background of a picture often allows identification of the location where it was taken; in a war situation this can reveal the position of the enemy. Kyiv media reported a case in which a hacker group, using a dating website, obtained a picture of a Russian soldier standing next to a military base, allowing the detection, and then the elimination, of the enemy. KyivPost post on Facebook, 5 September 2022.

9 FRT-based identification of fallen soldiers can be performed on soldiers of either side of the conflict; however, it is more relevant with regard to enemy personnel.

10 Drew Harwell, ‘Ukraine is scanning faces of dead Russians, then contacting the mothers’ (15 April 2022), Washington Post, www.washingtonpost.com/technology/2022/04/15/ukraine-facial-recognition-warfare/.

11 Sara Sidner, ‘Ukraine sends images of dead Russian soldiers to their families in Russia’ (n.d.), CNN Video (including interviews with Ukraine officials), www.cnn.com/videos/world/2022/05/13/ukraine-face-recognition-russian-soldiers-lead-sidner-pkg-vpx.cnn.

12 This strategy was employed at the beginning of the conflict, but it lost its initial scale within a few months.

13 The term ‘fake’ (Rus. фейк) has entered the Russian language and is used on a regular basis in politics, the media, and everyday life. It has become a keyword used to cast doubt on any information published by Ukraine or Western countries that conflicts with the information disseminated by Russian-controlled media.

14 BBC, ‘How facial recognition is identifying the dead’.

15 International Committee of the Red Cross, ‘Convention (III) relative to the Treatment of Prisoners of War. Geneva, 12 August 1949. Commentary of 2020, Art. 13: Humane treatment of prisoners’ (2020), https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Comment.xsp?action=openDocument&documentId=3DEA78B5A19414AFC1258585004344BD#:~:text=1626%E2%80%83%E2%80%83More%20compellingly,international%20tribunals%20subsequently.

16 For example, 1TV, ‘Украина и Великобритания сняли антироссийский фейк, поместив туда символику нацистов. Новости. Первый канал’ [Ukraine and Great Britain produced an anti-Russian fake, placing Nazi symbols in it. News. Channel One] (3 June 2022), www.1tv.ru/news/2022-06-03/430410-ukraina_i_velikobritaniya_snyali_antirossiyskiy_feyk_pomestiv_tuda_simvoliku_natsistov.

17 Lizzy O’Leary, ‘How facial recognition tech made its way to the battlefield in Ukraine’ (26 April 2022), Slate, https://slate.com/technology/2022/04/facial-recognition-ukraine-clearview-ai.html.

18 Aoife Duffy, ‘Bearing witness to atrocity crimes: Photography and international law’ (2018) 40 Human Rights Quarterly 776.

19 Ministry of Digital Transformation, Ukraine, ‘Як Розпізнавання Обличчя Допоможе Знайти Всіх Воєнних Злочинців’ [How facial recognition will help find all war criminals] (9 April 2022), Interview (directed by Міністерство цифрової трансформації України [Ministry of Digital Transformation of Ukraine]), www.youtube.com/watch?v=fUKQM7BXryc.

20 Mykhailo Fedorov [@FedorovMykhailo], Post, Twitter (7 April 2022), https://twitter.com/FedorovMykhailo/status/1512101359411154953: ‘After events in Bucha, I am launching the #russianlooters column. Our technology will find all of them. Shchebenkov Vadym stole more than 100 kg of clothes from UA families and sent them from Mozyr, Belarus, to his hometown of Chita. It is 7 thousand km away.’

21 ‘FEDOROV’, Telegram (9 April 2022), https://t.me/zedigital/1546.

22 Sidner, ‘Ukraine sends images of dead Russian soldiers’.

23 For example, Oleksandr Topchij and Vitlij Saenko, ‘Завдяки відео CNN встановлено особу росіянина, який розстріляв двох цивільних на Київщини’ [Thanks to CNN footage, the Russian who shot two civilians in the Kyiv region has been identified] (2 September 2022), Unian, www.unian.ua/society/zavdyaki-video-cnn-vstanovleno-osobu-rosiyanina-yakiy-rozstrilyav-dvoh-civilnih-na-kijivshchini-novini-kiyeva-11964534.html.

24 See ‘Tactical OSINT Analyst’ [@OSINT_Tactical], Post, Twitter (1 March 2022), https://twitter.com/OSINT_Tactical/status/1498694344781484037.

25 Russia also uses FRT to surveil areas close to the border zone, for example to identify enemy personnel and saboteurs. Moscow Times, ‘Russia to expand high-tech surveillance to Ukraine border areas – Kommersant’ (20 June 2022), www.themoscowtimes.com/2022/06/20/russia-to-expand-high-tech-surveillance-to-ukraine-border-areas-kommersant-a78043.

26 There is no doubt that the potential use of FRT has a negative impact on freedom of assembly. Facial recognition systems integrated into street surveillance cameras significantly reduce people’s chances of remaining anonymous, which is often crucial during protests. Particularly in countries where freedom of assembly is restricted and where administrative or criminal liability can be imposed for anti-government rallies, the likelihood of being identified, even after a rally, discourages people from expressing their opinions or ideas and from taking part in the democratic process (the chilling effect).

27 See, e.g., LIFE, ‘Путин: Россия никогда не откажется от любви к Родине и традиционных ценностей’ [Putin: Russia will never renounce love for the Motherland and traditional values] (9 May 2022), https://life.ru/p/1492826.

28 Human Rights Watch, ‘Submission by Human Rights Watch on Russia to the Human Rights Committee’ (15 February 2022), www.hrw.org/news/2022/02/15/submission-human-rights-watch-russia-human-rights-committee.

29 While companies offering facial recognition platforms have faced substantial criticism and even fines in Europe and many other Western countries, personal data protection seems to be much less stringent in Russia, where facial recognition platforms boast wide use by private individuals. See VestiRu, ‘FindFace: российская программа распознавания лиц завоевывает мир’ [FindFace: Russian facial recognition software is conquering the world] (22 February 2016), www.vesti.ru/article/1656323.

30 ОВД-Инфо [OVD-Info], ‘Как Власти Используют Камеры и Распознавание Лиц Против Протестующих’ [How the authorities use cameras and facial recognition against protesters] (17 January 2022), https://reports.ovdinfo.org/kak-vlasti-ispolzuyut-kamery-i-raspoznavanie-lic-protiv-protestuyushchih. The use of FRT to stop protests in the country caught the attention of the international community in 2019, when women’s rights activist Ms Popova filed a lawsuit after being detained for an unauthorised picket in 2018. Ms Popova claimed that the video used in her case file contained evidence of the use of FRT. In September 2019, Ms Popova and the politician Mr Milov filed another lawsuit alleging that the authorities use the technology to collect data on public protesters. However, Russian national courts rejected both claims, as well as all other similar claims. Human Rights Watch, ‘Moscow’s use of facial recognition technology challenged’ (8 July 2020), www.hrw.org/news/2020/07/08/moscows-use-facial-recognition-technology-challenged.

31 After March 2022, additional surveillance cameras, presumably with facial recognition, were installed on Nevsky Avenue in St Petersburg, where some anti-war protests had been held.

32 Moscow Times, ‘Russian banks to share clients’ biometric data with the state – Kommersant’ (31 May 2022), www.themoscowtimes.com/2022/05/31/russian-banks-to-share-clients-biometric-data-with-the-state-kommersant-a77844.

33 Federal Law of 14 July 2022 No. 325-FZ on amendments to Articles 14 and 14-1 of the Federal Law ‘On Information, Information Technologies and Information Protection’ and Article 5 of the Federal Law ‘On Amendments to Certain Legislative Acts of the Russian Federation’, Official Publication of Legal Acts, Russia, http://publication.pravo.gov.ru/Document/View/0001202207140096?index=3&rangeSize=1.

34 Human Rights Watch, ‘Submission by Human Rights Watch on Russia’.

35 David Cornett, David Bolme, Dawnie W. Steadman, Kelly A. Sauerwein, and Tiffany B. Saul, ‘Effects of postmortem decomposition on face recognition’ (1 September 2019), Oak Ridge National Lab, Oak Ridge, TN, United States, www.osti.gov/biblio/1559672#:%7E:text=During%20the%20early%20stages%20of,have%20little%20effect%20on%20detection.

36 Ruggero Donida Labati, Danilo De Angelis, Barbara Bertoglio, Cristina Cattaneo, Fabio Scotti, and Vincenzo Piuri, ‘Automatic face recognition for forensic identification of persons deceased in humanitarian emergencies’ (2021), 2021 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA), https://ieeexplore.ieee.org/document/9493678/.

37 Darian Meacham and Martin Gak, ‘Face recognition AI in Ukraine: Killer robots coming closer?’ (30 March 2022), openDemocracy, www.opendemocracy.net/en/technology-and-democracy/facial-recognition-ukraine-clearview-military-ai/.

38 See ‘At war with facial recognition: Clearview AI in Ukraine’ (17 May 2022), Interview with Hoan Ton-That, CEO of Clearview AI, at kwon0321, Days Tech, https://daystech.org/at-war-with-facial-recognition-clearview-ai-in-ukraine/.

39 Віолетта Карлащук, ‘На Київщині встановлять понад 250 камер з розпізнаванням обличчя’ [More than 250 cameras with facial recognition will be installed in the Kyiv region] (9 September 2022), Суспільне | Новини, https://suspilne.media/279898-na-kiivsini-vstanovlat-ponad-250-kamer-z-rozpiznavannam-oblicca/.

40 Lauren Kahn, ‘How Ukraine is remaking war. Technological advancements are helping Kyiv succeed’ (29 August 2022), Foreign Affairs, www.foreignaffairs.com/ukraine/how-ukraine-remaking-war.

Figure 2.1 AI system life cycle
Figure 2.2 AI system key components
Figure 2.3 AI versus ML
Figure 2.4 Symbolic AI versus ML
