4.1 Introduction
Facial recognition technology (FRT) is increasingly being used by border authorities, law enforcement, and other government institutions around the world. Research shows that, among the 100 most populated countries in the world, seven out of ten governments are using FRT on a large scale.Footnote 1 One of the major challenges related to this technology is the lack of transparency and explainability surrounding it. Numerous reports have indicated that there is insufficient transparency and explainability around the use of artificial intelligence (AI), including FRT, in the government sector.Footnote 2 There are still no clear rules, guidelines, or frameworks as to the level and kind of transparency and explainability that should be expected from government institutions when using AI in general, and FRT in particular.Footnote 3 The EU General Data Protection Regulation (GDPR) is among the first instruments to establish a right to explanation in relation to automated decisions,Footnote 4 but its scope is very limited.Footnote 5 The proposed EU Artificial Intelligence Act (Draft EU AI Act) sets minimum transparency standards for high-risk AI technologies, which include FRT.Footnote 6 However, these transparency obligations are generic to all high-risk AI technologies and do not detail transparency requirements for FRT specifically.
Transparency and explainability are arguably essential to ensuring the accountability of government institutions using FRT; empowering supervisory authorities to detect, investigate, and punish breaches of laws or fundamental rights obligations; allowing individuals affected by an AI system’s outcome to challenge that decision;Footnote 7 and enabling AI developers to evaluate the quality of the AI system.Footnote 8 According to the proposed EU AI Act, ‘transparency is particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress’.Footnote 9
At the same time, one should note that transparency and explainability of FRT alone would not help remedy essential problems associated with FRT use, and might further contribute to its negative impacts in some cases. For instance, if an individual learns about the government use of FRT in public spaces where public gatherings take place, this might discourage her from participating in such gatherings and thus have a ‘chilling effect’ on the exercise of her human rights, such as freedom of speech and freedom of association.Footnote 10 These considerations have to be kept in mind when determining the desirable levels of FRT transparency and explainability.
While there is extensive technical literature on the transparency and explainability of AI in general,Footnote 11 and of FRT more specifically,Footnote 12 there is very limited legal academic discussion of the requisite extent of transparency and explainability of FRT technologies, and of the challenges in ensuring them, such as trade secrets. The goal of this chapter is to examine to what extent trade secrets create a barrier to ensuring transparent and explainable FRT and whether current trade secret laws provide any solutions to this problem.
This chapter first identifies the extent to which transparency and explainability are needed in relation to FRT by different stakeholders. Second, after briefly examining which types of information about AI could potentially be protected as trade secrets, it identifies situations in which trade secret protection may inhibit transparent and explainable FRT. It then analyses whether current trade secret law, in particular the ‘public interest’ exception, is capable of addressing the conflict between the proprietary interests of trade secret owners and the AI transparency needs of certain stakeholders. This chapter focusses on FRT in law enforcement, with a greater emphasis on real-time biometric identification technologies, which are considered the highest risk.Footnote 13
Apart from the critical literature analysis, this chapter relies on empirical data collected through thirty-two interviews with experts in AI technology. The interviews were conducted with representatives from five stakeholder groups: police officers, government representatives, non-governmental organisation (NGO) representatives, IT experts (in academia and private sector), and legal experts (in academia and private sector) from Europe, the United States, and Asia-Pacific (October 2021–March 2022, online). The data collected from these interviews is especially useful when identifying the transparency and explainability needs of different stakeholders (Section 4.2).
Keeping in mind the lack of consensus on the terms ‘AI transparency’ and ‘AI explainability’, for the purpose of this chapter we define the concepts as follows. First, we understand the ‘AI transparency’ principle as a requirement to provide information about the AI model, its algorithm, and its data. The AI transparency principle could require disclosing very general information, such as ‘when AI is being used’,Footnote 14 or more specific information about the AI module – for example, its algorithmic parameters, training, validation, and testing information. While this concept of transparency might require providing very different levels of information for different stakeholders, it does not include information about how AI decisions are generated. The latter is covered by the principle of ‘AI explainability’, which we define in a narrow technical way; that is, as an explanation of how an AI module functions and how it generates a particular output. Such explanations are normally provided using so-called Explainable AI (XAI) techniques.Footnote 15 Generally speaking, XAI techniques might be ‘global’, explaining the features of the entire module, or ‘local’, explaining how a specific output has been generated.Footnote 16 While this chapter largely focusses on FRT transparency and its possible conflict with trade secret protection, it also briefly reflects upon the need for FRT to be explainable.
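To render the global/local distinction more concrete, the following minimal sketch (in Python) illustrates one widely used local XAI technique – occlusion sensitivity – applied to a face-matching score, together with a crude ‘global’ summary obtained by averaging such explanations over many image pairs. The embedding function, image sizes, and scores are hypothetical stand-ins and do not describe any particular FRT product.

```python
"""Minimal sketch of the 'global' vs 'local' distinction drawn above. The embedding
function is a stand-in random projection, not any vendor's FRT model, and all figures
are hypothetical."""
import numpy as np

rng = np.random.default_rng(42)
IMG = 32                                        # toy image size (IMG x IMG pixels)
PROJ = rng.standard_normal((128, IMG * IMG))    # frozen stand-in for a trained network

def embed(img: np.ndarray) -> np.ndarray:
    """Map a face image to a unit-length feature vector (placeholder for a real CNN)."""
    v = PROJ @ img.reshape(-1)
    return v / np.linalg.norm(v)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings -- the usual FRT match score."""
    return float(embed(a) @ embed(b))

def local_explanation(probe: np.ndarray, match: np.ndarray, patch: int = 8):
    """Occlusion sensitivity, a simple 'local' XAI technique: grey out each patch of the
    probe image and record how much the match score drops for this particular decision."""
    base = similarity(probe, match)
    heatmap = np.zeros((IMG // patch, IMG // patch))
    for i in range(0, IMG, patch):
        for j in range(0, IMG, patch):
            occluded = probe.copy()
            occluded[i:i + patch, j:j + patch] = probe.mean()
            heatmap[i // patch, j // patch] = base - similarity(occluded, match)
    return base, heatmap                         # large drops mark influential regions

def global_explanation(pairs):
    """A crude 'global' view: the average occlusion heatmap over many probe/match pairs,
    describing the model's behaviour in general rather than any single output."""
    return np.mean([local_explanation(p, m)[1] for p, m in pairs], axis=0)

if __name__ == "__main__":
    probe = rng.random((IMG, IMG))
    watchlist_photo = probe + 0.05 * rng.random((IMG, IMG))   # a near-duplicate face
    score, heat = local_explanation(probe, watchlist_photo)
    print(f"match score: {score:.3f}")
    print("regions most influential for this match:\n", heat.round(3))
```

In a deployed system, the stand-in projection would be replaced by the vendor’s trained face-embedding network; notably, explanations of this kind can be presented to end users without revealing the network’s internals, a point we return to in Section 4.2.1.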
In the following sections, we discuss the scope of explainability and transparency that different stakeholders need in relation to FRT in law enforcement (Section 4.2), in which situations trade secrets may conflict with these transparency and explainability needs (Section 4.3), and whether the ‘public interest’ defence under trade secrets law is capable of addressing this conflict (Section 4.4).
4.2 FRT Transparency and Explainability: Who Needs It and How Much?
Before examining whether trade secrets conflict with FRT transparency and explainability principles, we need to clearly identify the level of transparency and explainability that different stakeholders require in relation to FRT. We demonstrate that different stakeholders need very different types of information, only some of which is protected by trade secrets.
For the purpose of this analysis, we identified six categories of stakeholders who have legitimate interests in certain levels of transparency and/or explainability around FRT technologies: (1) individuals exposed to FRT; (2) police officers who directly use the technology; (3) police authorities that acquire/procure the technology and need to ensure its quality; (4) court participants, especially court experts, who need access to technical information to assess whether the technology is of sufficient quality; (5) certification and auditing bodies examining whether the FRT meets the required standards; and finally (6) public interest organisations (NGOs and public research institutions) whose purpose is to ensure, in general terms, that the technology is high quality, ethical, legal, and is used for the overall public benefit.
As could be expected, our interviews showed that different stakeholders have different explainability and transparency needs in relation to FRT.
4.2.1 FRT Explainability
In terms of the explainability of FRT, few stakeholders need it as a matter of necessity. Among the identified stakeholder groups, certification and auditing bodies that examine the quality of the technology might potentially find XAI techniques useful, as these may help identify whether, for instance, a specific AI module is biased or contains errors.Footnote 17 For similar reasons, XAI techniques might be relied upon by public interest organisations, such as NGOs and research institutions, that have expertise in AI technologies and want to assess the quality of a specific FRT technology used by police. AI developers themselves have been using XAI techniques for a similar purpose; that is, to identify AI errors during the development process and eliminate them before the system is deployed in practice.Footnote 18 However, XAI techniques currently come with no quality guarantees of their own and often raise concerns as to their quality and reliability.Footnote 19 It is thus questionable whether experts assessing the quality of AI, or FRT more specifically, would give much weight to such explanations.
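By way of illustration, the following sketch shows the kind of basic disparity check that an auditing body or an expert public interest organisation might run – alongside or instead of XAI explanations – when probing an FRT system for bias: comparing false match rates across demographic groups. The group labels, score distributions, and operating threshold are synthetic assumptions used for illustration only.

```python
"""Illustrative sketch only: a basic disparity check of the kind an auditor or expert NGO
might run when probing an FRT system for bias. The scores, groups, and threshold are
synthetic assumptions, not data from any real system."""
import numpy as np

rng = np.random.default_rng(1)
THRESHOLD = 0.6   # assumed operating threshold above which the system reports a match

def false_match_rate(impostor_scores: np.ndarray) -> float:
    """Share of different-person ('impostor') pairs that the system nevertheless matches."""
    return float((impostor_scores >= THRESHOLD).mean())

# Synthetic impostor-pair match scores for two demographic groups; a real audit would use
# scores produced by the system under review on a curated, labelled benchmark.
impostor_scores = {
    "group_A": rng.normal(0.40, 0.10, 5000),
    "group_B": rng.normal(0.52, 0.10, 5000),
}

rates = {group: false_match_rate(scores) for group, scores in impostor_scores.items()}
for group, rate in rates.items():
    print(f"{group}: false match rate {rate:.2%}")

# A large ratio between group-wise error rates is the kind of disparity that an auditor or
# public interest researcher would flag for further investigation.
worst, best = max(rates.values()), min(rates.values())
print(f"disparity ratio: {worst / max(best, 1e-9):.1f}x")
```

Even such a rudimentary check presupposes access to representative test data and to the system’s match scores, which is part of the transparency that these stakeholders demand (Section 4.2.2).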
Other stakeholders – police authorities, police officers, and affected individuals – are unlikely to find explanations generated by XAI techniques useful, mainly because of the technical knowledge required to understand such explanations. Further, according to some interviewees, when FRT is used for identification purposes, users do not need an explanation at all, as a match made by FRT can easily be double-checked by a police officer.Footnote 20
Importantly, explanations generated by XAI techniques are unlikely to interfere with trade secret protection as they do not disclose substantial amounts of confidential information. As discussed later, in order to be protected by trade secrets, information should be of independent commercial value and kept secret.Footnote 21 XAI techniques, if integrated in the FRT system, would provide explanations to the end users, which, by their nature, would not be secret. Thus, owing to its limited relevance for our debate on FRT and trade secrets, FRT explainability will not be analysed here any further.
4.2.2 FRT Transparency Needs
In contrast, transparency around FRT is required by all stakeholders, although to differing extents. Depending on the level of transparency/information needed, stakeholders could be divided into three groups: those with (1) relatively low transparency needs, (2) high transparency needs, and (3) varying/medium transparency needs.
4.2.2.1 Low Transparency Needs
Individuals exposed to FRT, and law enforcement officers directly using the technology, require relatively general, non-technical information about FRT (thus ‘low transparency’). Individuals have a legitimate interest in knowing where, when, and for what purpose the technology is used; its accuracy levels and effectiveness; the legal safeguards placed around its use; and how, and in which circumstances, they can complain about inappropriate or illegal use of FRT.Footnote 22 If exposure to the technology has led to adverse effects (e.g., a potential violation of their rights), individuals might require a more detailed ex post explanation as to why a specific decision (e.g., to stop and question them) was made and how FRT was used in this context. Still, they do not need detailed technical explanations about how the technology was developed or trained, or how exactly it functions, as they do not have the technical knowledge required to interpret this information.
As one of our interviewees explained (in the context of migration/border control):
So, for example, if I am a citizen stakeholder [and] my application for a visa is denied and it’s based on my looks [that suggests that I] have some criminal records, then, of course, it has impacted me and I’m not happy, and I will ask for answers. Even [if the] activities [were] rectified, still [I’ll ask for] answers on how come did you make this mistake? Why did you take me wrong [as] another person and it cost me my travel to be cancelled? So, to have explainability at this level, potentially you don’t need to explain all of the algorithms. It’s a matter of explaining why this sort of decision was made. For example, there was this person with similar facial features and the same name; or whatever some high-level explanation of what happened in the process that explains why mistake happened, etc.Footnote 23
Second, police officers who directly use the technology will want access to general information about how the system functions, what types of data were used to train the system, the accuracy rates in different settings, how it should be used, its limitations, and so on.Footnote 24
In addition, these stakeholders would benefit from user-friendly explanations about, for instance, which pictures in the watch-list were found to be sufficiently similar to the probe picture and the accuracy rate in relation to that specific match.Footnote 25 This would allow police officers to assess the extent to which they can rely on a specific FRT outcome before proceeding with an action (e.g., stopping an individual for questioning or arrest). Information needs might differ between real-time/live FRT and post FRT (i.e., when FRT is used to find a match for a picture taken some time ago), as the former is considered higher risk.Footnote 26
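To illustrate, a user-facing match report of this kind might contain little more than the sketch below; the identifiers, similarity scores, and threshold are hypothetical, and real deployments produce vendor-specific reports.

```python
"""A minimal sketch of the user-facing match report described above: which watch-list
entries were sufficiently similar to the probe image, and with what score. Identifiers,
scores, and the threshold are hypothetical; real systems produce vendor-specific reports."""
from dataclasses import dataclass

@dataclass
class Candidate:
    watchlist_id: str
    similarity: float          # match score reported by the matcher, here assumed in [0, 1]

def match_report(candidates: list[Candidate], threshold: float = 0.62) -> list[str]:
    """Return candidates at or above the operating threshold, best match first, so an
    officer can see how strong each suggested match actually is before acting on it."""
    hits = sorted((c for c in candidates if c.similarity >= threshold),
                  key=lambda c: c.similarity, reverse=True)
    return [f"{c.watchlist_id}: similarity {c.similarity:.2f} (threshold {threshold:.2f})"
            for c in hits]

if __name__ == "__main__":
    candidates = [Candidate("WL-1042", 0.81), Candidate("WL-0077", 0.64), Candidate("WL-0310", 0.41)]
    for line in match_report(candidates):
        print(line)
```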
4.2.2.2 High Transparency Needs
Stakeholder groups that are required to assess the quality of an FRT system – certification and auditing authorities, and court experts – have high transparency needs. In order to conduct an expert examination of an FRT system, certification and auditing bodies require access to detailed technical information about the system. This might include algorithmic parameters, training data, processes and methods, validation/verification data and processes, as well as testing procedures and outcomes.
As one of the interviewed IT experts explained:
But if, for example, there is an audit happening. […] then of course, at that level explainability means something completely different. It’s about explaining how the system was designed, how it was being used, what sort of algorithms, what sort of data was used for the training, what sort of design and build decisions were made, and so on.Footnote 27
Similar highly technical information could be demanded in court proceedings by court experts who are invited to assess the quality of FRT used by law enforcement authorities during legal proceedings. Detailed technical information would be necessary to provide technically sound conclusions.
4.2.2.3 Medium/Varying Transparency Needs
The third group of stakeholders might have varied information needs depending on their level of knowledge about AI technologies. Namely, law enforcement authorities, when acquiring an FRT system, would need information that allows them to judge the quality and reliability of the system in question. If they have only general knowledge about FRT, they will merely want to know whether the technology meets industry standards; whether it was certified/validated by independent bodies;Footnote 28 how accurate it is; whether it has been trialled in real-life settings, and with what results; and so on. If they have expert knowledge in AI/FRT (e.g., in their IT team), they might demand more technical information, for example, about the datasets on which it was trained and validated, and validation and testing information.
As a final stakeholder group, public interest organisations (researchers and NGOs) have a legitimate interest in accessing information about government FRT use as ‘they are the ones that are most likely to initiate […] strategic litigation and other initiatives’,Footnote 29 and to ensure that government is accountable for the use of this technology.Footnote 30 Similarly to law enforcement, their transparency needs will differ depending on their expertise and purpose. Those without expert knowledge in AI might be interested in general information as to the situations in which, the purposes for which, and the extent to which law enforcement is using FRT; the accuracy levels and effectiveness of the technology in achieving the intended aims (e.g., whether the use of FRT led to the arrest of suspected persons or the prevention of a crime); and whether human rights impact assessments were conducted at the procurement stage and, if so, their results.Footnote 31 Those with technical expertise in AI might want access to algorithmic parameters and weights, training and validation/verification data, or similar technical information, allowing them to assess the accuracy and possible bias of the technology (similar to the high transparency needs discussed earlier).Footnote 32
These three levels of transparency are relevant when determining the situations in which trade secret protection might become a barrier to ensuring the transparency demanded by stakeholders.
4.3 In Which Situations Might Trade Secrets Inhibit Transparency of FRT?
There are a number of challenges in ensuring transparency around FRT.Footnote 33 One of them is trade secrets, which can arguably create barriers to ensuring the transparency of AI technologies in general and FRT technologies in particular. The example often used is the State v. Loomis case decided by a US court, in which the defendant was denied access to the parameters of the risk assessment algorithm COMPAS owing to trade secrets.Footnote 34 In this section, we demonstrate that the picture is more nuanced: while trade secrets might create barriers to transparent FRT in some situations (‘actual conflict’ situations), they are unlikely to interfere with transparency needs in other situations (‘no conflict’ and ‘nominal conflict’ situations).
4.3.1 The Scope of Trade Secret Protection
In order to understand the situations in which trade secrets interfere with transparency needs around FRT, it is first necessary to clarify which information about FRT could be potentially protected by trade secrets.
Trade secrets are of special importance in protecting the intellectual property (IP) underlying AI modules, including FRT. In contrast to other IP rights (patents, copyright), trade secrets can be used to protect any elements of AI modules as long as they provide independent commercial value and are kept secret.Footnote 35 Trade secret protection requires neither investment in a registration process nor public disclosure of the innovation.Footnote 36 While trade secret protection has its limitations, such as the possibility of reverse engineering the protected technology,Footnote 37 and a lack of protection against third-party disclosure,Footnote 38 the software industry has so far successfully used trade secrets to protect its commercial interests.Footnote 39
As far as trade secrets and AI are concerned, courts have already indicated that at least certain parts of AI modules can be protected as trade secrets, such as source code, algorithms, and the way a business utilises AI to implement a particular solution.Footnote 40 Keeping in mind the requirements for trade secret protection – secret nature and commercial value – a broad range of information about AI (including FRT) could possibly be protected by trade secrets: the architecture of the algorithm, its parameters and weights; the source code in which the algorithm is implemented; information about the training, validation, and verification of the algorithm, including training and validation/verification data, methods, and processes; real-life testing information (the settings in which it was tested, and the methods and outcomes of testing); and so on. All this information is often regarded by AI developers as commercially valuable and kept secret,Footnote 41 and thus could potentially be protected as trade secrets.Footnote 42
4.3.2 When Is the Conflict between Trade Secrets and the AI Transparency Principle Likely to Arise?
Keeping in mind the broad range of information about the FRT that could be protected as trade secrets and the transparency needs of stakeholders (identified earlier), three types of situations could be distinguished.
4.3.2.1 No Conflict Situations
First, in some situations, there would be no conflict between stakeholders’ transparency needs and trade secret protection, as the information requested by the stakeholder is generally not protected by trade secrets. For instance, individuals subject to FRT would only want general information about the fact that FRT is used by a government authority, where and for what purposes it is used, and so on.Footnote 43 Similarly, police officers using the technology would only need a general understanding of how the technology functions, the situations in which it could be used, its accuracy rates, and so on.Footnote 44 Owing to its generally public nature and lack of independent economic value, this information would normally not be protected as trade secrets.
4.3.2.2 Nominal Conflicts
In some other instances, ‘nominal’ conflict situations are likely to arise. First, certification and auditing organisations that are examining the quality of FRT technologies might require access to extensive technical information related to FRT that has commercial value and could be protected by trade secrets, such as algorithmic parameters, training, validation and verification information, and all information related to real-life trials.Footnote 45 Similar information might be requested in court proceedings by court experts who are invited to assess the reliability of the FRT system in question.Footnote 46 As discussed earlier, these types of technical information are likely to be protected as trade secrets: AI developers consider them commercially valuable and tend to keep them secret.Footnote 47
However, we refer to these types of situations as ‘nominal’ conflicts since they could be managed under existing confidentiality/trade secret rules that form part of certification/auditing processes or court procedures. Certification and auditing organisations are normally subject to confidentiality and use the confidential information provided by AI developers for assessment purposes only. Similarly, in court investigations, procedural rules determine how trade secrets disclosed during the court proceedings are protected from disclosure to third parties or to the public.Footnote 48 Since these situations are already addressed under current regulatory or governance frameworks, we will not examine them further.
4.3.2.3 Actual Conflicts
The third type of situation – relating to the transparency needs of law enforcement authorities and public interest organisations – is of most concern, and we refer to these situations as ‘actual conflicts’.
Law enforcement authorities might need access to certain technical information about the FRT (e.g., training, validation and testing information) in order to evaluate its reliability before procuring it.Footnote 49 Public interest organisations, such as NGOs and research organisations, might need access to even more detailed technical information (algorithms, training and validation data, testing data) in order to provide an independent evaluation of the effectiveness of the FRT system used by law enforcement.Footnote 50 As mentioned earlier, technical information is generally considered by AI developers as commercially valuable and is likely to be kept confidential.
It is worth noting that law enforcement authorities are able to obtain certain information through contract negotiation.Footnote 51 However, it is questionable whether this solution is suitable in all cases. Owing to a lack of adequate legal advice, bargaining power, or simply the novel nature of AI technologies, law enforcement authorities might fail to negotiate for appropriate access to all essential information that will be needed during the entire life cycle of the FRT system. Government authorities using AI tools acquired from third parties have already encountered the problem of subsequently getting access to certain confidential information about the AI module.Footnote 52
Similarly, while public interest organisations might acquire certain information about FRT used by government through freedom of information requests,Footnote 53 this solution is limited as the legislation generally protects trade secrets from public disclosure.Footnote 54 Therefore, we see both of these situations as an actual conflict between trade secret rights of AI developers and the AI transparency needs of two major groups of stakeholders (law enforcement authorities and public interest organisations).
4.4 Does Trade Secret Law Provide Adequate Solutions?
Trade secret law provides certain limitations that are meant to serve the interests of the public. Namely, in common law jurisdictions, when a breach of confidence is claimed, the defendant could raise a so-called public interest defence. In short, it allows defendants to avoid liability for disclosing a trade secret if they can prove the disclosure was in the public interest.Footnote 55 As explained by the House of Lords, the protection of confidential information is based on the public interest in maintaining confidences, but the public interest sometimes favours disclosure rather than secrecy.Footnote 56 However, this public interest defence is of limited, if any, use in addressing the conflict between trade secrets and the legitimate transparency needs of the identified stakeholders in an FRT scenario.
First, the scope of this defence is unclear.Footnote 57 Some judicial sources suggest the existence of a broad public interest defence, based upon freedom of the press and the public’s right to know the truth.Footnote 58 Other court judgments suggest that the defence should encompass no more than an application of the general equitable defence of clean hands, namely that information exposing a serious wrongdoing of the plaintiff should not be classified as confidential in the first place (the iniquity rule).Footnote 59 For instance, Australian courts have confirmed that disclosure in the public interest should be construed narrowly: it should be limited to information affecting national security, concerning breaches of the law, fraud, or matters otherwise destructive to the public, and must involve more than simply the public’s interest in the truth being told.Footnote 60
Most importantly, the defence does not provide interested stakeholders with an active right to request information about the FRT technology and its parameters. It is merely a passive defence that can be invoked by a defendant only after they have disclosed the information (or where there is an imminent threat of such a disclosure). In order to disclose the information, the defendant must already have access to it, which is not the position of law enforcement authorities or public interest organisations seeking information about the FRT.
The public interest defence could possibly be useful in some exceptional situations. For instance, an employee or contractor of an FRT developer might disclose certain confidential technical information about the FRT system to the public or to a specific stakeholder (a public authority, an NGO, etc.) in order to demonstrate, for example, that the AI developer did not comply with legal requirements when developing the FRT system, or misled the public or the government authority as to the accuracy of the FRT technology. If breach of confidence is claimed against this person, they could argue that the disclosure served the public interest: the use of an FRT system that is of low quality or biased may lead to the incorrect identification of individuals, especially ethnic or gender minorities, which may in turn result in the arrest of innocent people and the violation of their human rights. The defendant could argue that the disclosure of technical information about such an FRT system would thus help prevent harm from occurring.
Even then, the ability of a defendant to rely on the public interest defence is questionable. For instance, the court might accept the defence if the information is disclosed to government authorities responsible for prosecuting breaches of law or fraud, as ‘proper authorities’ for public disclosure purposes,Footnote 61 but not to public interest organisations or the public generally.Footnote 62 While the law enforcement authority (which is also the user of FRT in this case) might qualify as a ‘proper authority’, a public interest organisation is unlikely to meet this criterion.
Furthermore, if a narrow interpretation of the public interest defence is applied, the defendant would have to prove that the disclosed information relates to ‘misdeeds of a serious nature and importance to the country’.Footnote 63 It is questionable whether a low-quality or biased FRT system, or the AI developer’s concealment of information about this, would qualify as a misdeed of such a serious nature. More problematically, the defendant might not know whether the FRT falls short of certain industry or legal standards until the technical information is disclosed and an independent examination is carried out.
4.5 Conclusions
It is without doubt that transparency is needed around the development, functioning, and use of FRT in the law enforcement sector. The analysis here has shown that in some cases trade secrets do not impede the transparency around FRT needed by some stakeholders (e.g., affected individuals or direct users of FRT), and some possible conflicts could be resolved through existing arrangements and laws (e.g., in relation to the transparency needs of certification and auditing organisations, and court participants). However, trade secrets might conflict with the transparency needs of other stakeholders, especially law enforcement authorities (after acquiring the technology) and public interest organisations, which might want access to confidential technical information to assess the quality of an FRT system. Unfortunately, trade secret law, with its unclear and limited public interest exception, is unable to address this conflict. Further research is needed as to how the balance between the proprietary interests of AI developers and the transparency needs of other stakeholders (law enforcement authorities and public interest organisations) could be struck.