18.1 Introduction
Artificial intelligence (AI)Footnote 1 increasingly plays a role within law enforcement. According to Hartzog et al., “[w]e are entering a new era when large portions of the law enforcement process may be automated … with little to no human oversight or intervention.”Footnote 2 The expansion of law enforcement use of AI in recent years can be related to three societal developments: austerity measures and a push toward using more cost-effective means; a growing perception that law enforcement should adopt a preventive or preemptive stance, with an emphasis on anticipating harm; and, finally, an increase in the volume and complexity of available data, requiring sophisticated processing tools, also referred to as Big Data.Footnote 3
AI is seen as providing innumerable opportunities for law enforcement. According to the European Parliament, AI will contribute “to the improvement of the working methods of police and judicial authorities, as well as to a more effective fight against certain forms of crime, in particular financial crime, money laundering and terrorist financing, sexual abuse and the exploitation of children online, as well as certain types of cybercrime, and thus to the safety and security of EU citizens.”Footnote 4 Some of the main current applications include predictive policing (discussed below), traffic control (automated license plate detection and vehicle identification),Footnote 5 cybercrime detection (analysis of money flows via the dark web, detection of online child abuse),Footnote 6 and smart camera surveillance (facial recognition and anomaly detection).Footnote 7
The goal of this chapter is to introduce one type of AI used for law enforcement, predictive policing, and to discuss the main concerns it raises. I first examine how predictive policing emerged in Europe and discuss its (perceived) effectiveness (Section 18.2). Next, I unpack, respectively, the legal, ethical, and social issues raised by predictive policing, covering aspects relating to its efficacy, governance, and organizational use and the impact on citizens and society (Section 18.3). Finally, I provide some concluding remarks (Section 18.4).
18.2 Predictive Policing in Europe
18.2.1 The Emergence of Predictive Policing in Europe
The origins of predictive policing can be found in the police strategy of “intelligence-led policing,” which emerged in the 1990s in Europe.Footnote 8 Intelligence-led policing can be seen as “a business model and managerial philosophy where data analysis and crime intelligence are pivotal to an objective, decision-making framework that facilitates crime and problem reduction, disruption and prevention through both strategic management and effective enforcement strategies that target prolific and serious offenders.”Footnote 9 One of the developments within intelligence-led policing was prospective hotspot policing, which focused on developing prospective maps. Using knowledge of crime events, recorded crime data can be analyzed to generate an ever-changing prospective risk surface.Footnote 10 This then led to the development of one of the first predictive policing applications in the United Kingdom, known as ProMap.Footnote 11 Early in the twenty-first century, the rise of predictive machine learning led to what is now known as predictive policing.
Predictive policing refers to “any policing strategy or tactic that develops and uses information and advanced analysis to inform forward-thinking crime prevention.”Footnote 12 It is a strategy that can be situated in a broader preemptive policing model. Preemptive policing is specifically geared toward gathering knowledge about what will happen in the future, with the goal of intervening before it is too late.Footnote 13 The idea behind predictive policing is that crime is predictable and that societal phenomena are, in one way or another, statistically and algorithmically calculable.Footnote 14
Although predictive policing has been implemented in the United States (US) since the beginning of the twenty-first century, law enforcement agencies in Europe are increasingly experimenting with and applying predictive policing applications. Two types can be identified: predictive mapping and predictive identification.Footnote 15 According to Ratcliffe, predictive mapping refers to “the use of historical data to create a spatiotemporal forecast of areas of criminality or crime hot spots that will be the basis for police resource allocation decisions with the expectation that having officers at the proposed place and time will deter or detect criminal activity.”Footnote 16 Some law enforcement agencies use or have used software developed by (American) technology companies such as PredPol in the UK and Palantir in Denmark, while in other countries, law enforcers have been developing their own software. Examples are the Crime Anticipation System (CAS) in the Netherlands, PRECOBS in Germany, and a predictive policing algorithm developed in Belgium by criminology researchers in cooperation with the police.Footnote 17 Predictive mapping applications have in most cases focused on predicting the likelihood that a certain area is more prone to burglaries and on adjusting patrol management according to the predictions.
Predictive identification aims to predict who is a potential offender, the identity of offenders, criminal behavior, and who will become a victim of crime.Footnote 18 These types of technologies build upon a long history of using risk assessments in criminal justice settings.Footnote 19 The difference is that the risk profiles are now often generated from patterns in the data instead of being derived from scientific research.Footnote 20 In Europe, this type of predictive policing has mainly been applied to predicting the likelihood of future crime (recidivism). However, other examples can be found in the use of video surveillance that deploys behavior and gait recognition. There are also developments in lie and emotion detection,Footnote 21 the prediction of radicalization on social media,Footnote 22 passenger profiling, and the detection of money laundering.Footnote 23 A recent example can be found in the Netherlands, where the Amsterdam police use what is known as the Top400. The Top400 targets 400 young “high potentials” in Amsterdam between twelve and twenty-four years old “that have not committed serious offences but whose behavior is considered a nuisance to the city.”Footnote 24 In the context of the Top400, the ProKid+ algorithm has been used to detect children up to sixteen years old who could become “a risk” and might cause future crime-related problems. Once on the list, youngsters receive intensive counseling, and they and their families are placed under constant police surveillance.Footnote 25
18.2.2 Effectiveness of Predictive Policing
Evaluations of the effectiveness of predictive policing in preventing crime have, so far, been inconclusive due to a lack of evidence.Footnote 26 In addition, not all evaluations have been conducted in a reliable way, and with crime rates generally falling, it is hard to show that a fall in crime is the result of the technology. Moreover, it is difficult to evaluate the technology’s effectiveness in preventing crime because algorithms identify correlations, not causality.
For instance, the Dutch Police Academy concluded in their evaluation of the CAS system that it does not seem to prevent crime but that it does have a positive effect on management.Footnote 27 The evaluation study conducted by the Max Planck Institute in Freiburg of a PRECOBS pilot project in Baden-Württemberg concluded that it remains difficult to judge whether the PRECOBS software is able to contribute toward a reduction in home burglaries and a turnaround in case development. The criminality-reducing effects were only moderate, and crime rates could not be clearly reduced by predictive policing on its own.Footnote 28 In Italy, a reliability of 70 percent was found for the KEYCRIME predictive algorithm, which predicted which specific areas in Milan would become crime hotspots.Footnote 29 In their overview of recent challenges and developments, Hardyns and Rummens did not find significant effects of predictive policing and argue that more research is needed to assess the effectiveness of current methods.Footnote 30
Apart from inconclusive evaluations, several police forces have stopped using the software altogether. For instance, the use of PredPol by Kent Police was discontinued in 2019, and the German police forces of Karlsruhe and Stuttgart decided to stop using PRECOBS software because there was insufficient crime data to make reliable predictions.Footnote 31 Furthermore, amid public outcry about the use of PredPol, the Los Angeles Police Department in the US stopped using the software, yet at the same time it launched a new initiative: “data-informed community-focused policing (DICFP).”Footnote 32 The goal of this initiative is to establish a deeper relationship between community members and police, and to address some of the concerns the public had with previous policing programs. However, critics have raised questions about the initiative’s similarities with the use of PredPol.Footnote 33 As with PredPol, the data that is fed into the system is biased and often generated through feedback loops. Feedback loops refer to a phenomenon, identified in research, whereby police are repeatedly sent back to the same neighborhoods regardless of the true crime rate.Footnote 34
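The mechanism can be illustrated with a minimal, purely hypothetical simulation sketch (the two areas, their rates, and the “detection boost” parameter below are invented for illustration and do not describe any real system): because patrols are sent wherever recorded crime is highest, and patrol presence itself generates additional recorded incidents, a small initial imbalance in the data is amplified over time even though the underlying crime rates are identical.

# Hypothetical sketch of a runaway feedback loop in hotspot prediction.
# Two areas have the same true crime rate, but area 0 starts with slightly
# more recorded crime; patrols follow recorded crime, and patrolled areas
# generate more recorded incidents, so the imbalance grows on its own.
import random

random.seed(42)
true_rate = [0.3, 0.3]        # identical underlying crime rates
recorded = [12, 10]           # historical recorded crime (slight imbalance)
detection_boost = 3.0         # patrolled areas yield more recorded incidents

for day in range(50):
    patrolled = 0 if recorded[0] >= recorded[1] else 1   # patrol the "hotter" area
    for area in (0, 1):
        incidents = sum(random.random() < true_rate[area] for _ in range(10))
        if area == patrolled:
            # more police presence means more incidents are observed and recorded
            incidents = int(incidents * detection_boost)
        recorded[area] += incidents

print(recorded)   # recorded crime diverges even though true rates are equal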
Regarding predictive identification, almost no official evaluations have been conducted. Increasingly, investigative journalists and human rights organizations are showing that there is significant bias in these systems.Footnote 35 Moreover, issues that were already raised about the effectiveness of actuarial risk assessment methods before they were digitalized, such as the (un)reliability of the risk factor research that underpins the applied theories of crime, are not solved by implementing algorithmic decision-making.Footnote 36 As to the use of predictive analytics in this area, the effectiveness of these systems likewise remains unclear. An assessment of a predictive model used by Los Angeles’ children’s services, which was promoted as highly effective in practice, “produced a false alarm 96 percent of the time.”Footnote 37
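Such figures are less surprising than they may seem: when the predicted event is rare, even a classifier with seemingly good accuracy will produce mostly false alarms. A back-of-the-envelope calculation with purely hypothetical numbers (not the actual parameters of the Los Angeles model) illustrates this base-rate effect.

# Hypothetical illustration of the base-rate problem behind high false-alarm
# rates: a classifier with decent sensitivity and specificity still produces
# mostly false alarms when the predicted event is rare.
# All numbers below are invented for illustration only.
population = 100_000
base_rate = 0.01          # 1% of cases are true positives
sensitivity = 0.90        # 90% of true cases are flagged
specificity = 0.80        # 80% of non-cases are correctly not flagged

true_cases = population * base_rate
non_cases = population - true_cases

true_alarms = sensitivity * true_cases
false_alarms = (1 - specificity) * non_cases

false_alarm_share = false_alarms / (true_alarms + false_alarms)
print(f"{false_alarm_share:.0%} of alarms are false")   # roughly 96% with these inputs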
In general, the effectiveness concerns that were already identified for (prospective) hot-spot policing on the one hand and traditional risk assessments on the other, prior to the implementation of AI systems, did not disappear. With regard to predictive mapping, spatial displacement, whereby crime moves to a different area after a control measure such as CCTV or increased police presence is implemented, is but one example.Footnote 38 It should also be noted that the long-term impacts of predictive policing on individuals and society are unclear, and longitudinal research assessing them has not been conducted. Finally, as demonstrated by the earlier overview, it is unclear whether the adoption of predictive mapping will reduce overall crime, and whether it will be able to do so for different types of crime.Footnote 39
18.3 A Legal, Ethical, and Policy Analysis of Predictive Policing
18.3.1 Legal Issues
The European Union regulation on AI, adopted in 2024, provides numerous safeguards depending on how much risk a certain AI application poses to fundamental rights.Footnote 40 As Chapter 12 of this book explains more extensively, the AI Act classifies AI systems into several categories, including low or limited risk (not subject to further rules), medium/opacity risk (subject to new transparency obligations), high risk (subject to a broad set of conformity assessment requirements), and unacceptable risk (prohibited).
In its amendments published in June 2023, the European Parliament clearly opted for stricter safeguards by removing exceptions for law enforcement’s use of real-time remote biometric identification systems and by prohibiting some applications that the Commission had previously classified as high risk, such as predictive policing, and more specifically predictive identification applications used in criminal justice.Footnote 41 Ultimately, however, the final text provides extensive exceptions for law enforcement when it comes to real-time remote biometric identification systemsFootnote 42 and does not prohibit place-based predictive policing. It does prohibit predictive identification in so far as the risk assessments are based solely on the “profiling of a natural person or on assessing their personality traits and characteristics.”Footnote 43 It remains to be seen to what extent the interpretation, implementation, and enforcement of the regulation will provide sufficient democratic safeguards to protect the fundamental rights of citizens.
In addition to the AI regulation, the use of AI for law enforcement purposes is also regulated by the transposition into member states’ national laws of the Law Enforcement Directive (LED).Footnote 44 This directive concerns the processing of personal data by competent authorities for the prevention, investigation, detection, and prosecution of criminal offenses or the execution of criminal penalties.Footnote 45 It does not apply in the context of national security or to EU institutions, agencies, or bodies such as Europol, and it only applies to processing of personal data wholly or partly by automated means. The directive came about primarily out of the need felt by law enforcement agencies, not least in response to terrorist attacks in the US and Europe in the first decades of the twenty-first century, to exchange data between member states. The directive, therefore, aims to strike a balance between law enforcement needs and the protection of fundamental rights.Footnote 46
The focus of the directive is on “personal data.” This is defined as “any information relating to an identified or identifiable natural person (‘data subject’).”Footnote 47 An identifiable natural person is one who can be “identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural, or social identity of that natural person.”Footnote 48 Already in 2007, the European advisory body, the Article 29 Data Protection Working Party (WP29),Footnote 49 proposed a very broad interpretation of personal data: “any information” includes not only objective and subjective information, but even false information, and it is not limited to private or sensitive information.Footnote 50 Information can be associated with an individual in three ways: (1) content (when it is about a particular person); (2) purpose (when data is used to evaluate, treat, or influence an individual’s status or behavior in a certain way); and (3) result (when it is likely to have an impact on the rights and interests of a particular person, taking into account all the circumstances of a particular case).Footnote 51
It is often questioned, especially by law enforcement agencies themselves, whether predictive mapping applications process “personal data.” Lynskey argues that, based on the advice of WP29 and case law, it is possible to conclude that data processing in predictive mapping involves the processing of personal data.Footnote 52 The data processed are potentially linked to the data subject because of the purpose (to treat people in a certain way) or the effect (the impact on those identified in the hotspots). Regarding predictive identification, it is clearer that personal data are processed, both when it comes to the input data (as the content concerns the data subject) and the output data (as the purpose and effect of the data are used to influence the prospects of an identified individual). In practice, however, interpretations diverge. For instance, in the case of the CAS system in the Netherlands, the Dutch law enforcement authority nevertheless concluded that it is not processing personal data and, therefore, that the data protection regulation does not apply to the system’s use.Footnote 53 This example shows that the lack of clear guidance and specific regulation when it comes to the use of AI by law enforcement raises questions about the effectiveness of the current legislative safeguards for these applications.
18.3.2 Ethical and Social Issues
Predictive policing raises several ethical and social issues. These issues are dependent on what type of technology is implemented and the way the technologies are governed.Footnote 54 They can not only impact the effectiveness and efficacy of the technology, but they can also cause harm.Footnote 55 Below, I respectively discuss concerns pertaining to efficacy, governance, organization, and individual and social harms.
18.3.2.1 Efficacy
Several issues can be identified as regards the efficacy of predictive policing and of the use of AI by law enforcement more generally. Efficacy refers to the capacity to produce a desired result (in the case of predictive policing, a reduction in crime). First, law enforcement and technology companies often claim that the accuracy of the system’s predictions is high. However, these claims of “predictive accuracy” are often mistaken for efficacy, whereas the level of accuracy does not say anything about the system’s impact on crime reduction, making it difficult for a police force to assess a tool’s real-world benefits.Footnote 56 Second, the way the AI system is designed and purposed is largely driven by data science and technology companies, with comparatively little focus on the underlying conceptual framework, criminological theory, or legal requirements.Footnote 57 Third, specifically with regard to predictive policing, runaway feedback loops are a significant issue (see earlier).Footnote 58 Fourth, a lack of transparency about the way algorithms are designed and implemented, about the exact data, formulas, and procedures used by the software developers, and about the way the AI system works (“the black box”Footnote 59) makes it harder to evaluate its operation. It also makes it more difficult for independent researchers to replicate methods using different data.Footnote 60 Fifth, the role of technology companies can also have an impact on efficacy.
A first example arises when law enforcement authorities work with software developed by (non-EU) technology companies. Such companies often build vendor lock-in into the software, which implies that law enforcement is not able to adjust or tweak the software itself and is dependent on the companies for any changes. A second example is that cultural differences and/or translation issues can arise when buying software from other countries. For instance, in Denmark, a hospital invested in a digital hospital management system, EPIC, developed by an American company.Footnote 61 The software was translated into Danish using Google Translate, and this led to significant errors. This was not merely a translation issue. In fact, the “design of the system was so hard-coded in U.S. medical culture that it couldn’t be disentangled,” hence making it problematic for use in a Danish context.Footnote 62 A third example is that technology companies can also have an impact on how predictive policing is regulated. To provide another example from Denmark: the Danish government recently adjusted its police law to enable the use of an intelligence-led policing platform developed by Palantir.Footnote 63 Finally, a lack of academic rigor can be identified in this field. Since there are not many publications by researchers evaluating and testing predictive policing applications, there is still little reliable evidence on whether it works.Footnote 64 The lack of scientific evidence raises questions about the legitimacy and proportionality of the application of predictive policing. When law enforcement deploys technology that intrudes on fundamental rights, it needs to demonstrate that the application is necessary in a democratic society and proportionate. However, considering the earlier discussion showing that there is insufficient proof of the efficacy and effectiveness of the technology, the question arises whether the fundamental rights test can be conducted in a reliable way and whether the implementation of such technologies is justifiable.
18.3.2.2 Social Issues
There is increasing scientific evidence that AI applications, and the poor-quality data the algorithms are trained on, are riddled with error and bias.Footnote 65 They raise social and ethical concerns that go beyond undermining privacy and causing individual harms such as discrimination and stigmatization;Footnote 66 they can also cause social harms and have an impact on society as a whole.Footnote 67 Predictive policing is a form of surveillance. Research in surveillance studies has shown that digital (police) surveillance potentially leads to several unintended consequences that go beyond a violation of individual privacy. For instance, surveillance can lead to social sorting, cumulative disadvantage, discrimination, and chilling effects, but also fear, humiliation, and trauma.Footnote 68 Importantly, the harms raised by AI-driven predictive policing are also increasingly becoming cumulative through the significant increase in the more general implementation of surveillance in society.Footnote 69
More specifically, in the United Kingdom, a recent study concluded that national guidance is urgently needed to oversee the use of data-driven technology by law enforcement amid concerns that it could lead to discrimination.Footnote 70 In the US, an example of the harms of predictive policing can be found in a lawsuit that has been filed against the Pasco County Sheriff’s Office (PCSO) in Florida.Footnote 71 This concerns a predictive policing application which, without notice to parents and guardians, places hundreds of students on a secret list, created using an algorithmic risk assessment that identifies those believed to be most likely to commit future crimes. When children are on the list, they are subject to persistent and intrusive monitoring. The criteria used to target children for the program are believed to have a greater impact on Black and Brown children.Footnote 72 Similarly, in the Netherlands, the mother of a teenage boy who was placed on the Top400 list (see earlier) states that as a result of police harassment she feels “like a prisoner, watched and monitored at every turn, and I broke down mentally and physically, ending up on cardiac monitoring.”Footnote 73
When law enforcement’s use of AI systems leads to these harms, this will also have an impact on police legitimacy. As was already mentioned when discussing hot-spot policing, intensive police interventions may erode citizen trust in the police and lead to fear through over-policing, and thus lead to the opposite result of what the technology is intended for.Footnote 74
18.3.2.3 Governance
AI has been heralded as a disruptive technology. It puts current governance frameworks under pressure and is believed to transform society in the same way as electricity did.Footnote 75 It is therefore no surprise that several concerns arise around the governance structure of this disruptive technology when it is used to drive predictive policing. First, there is a lack of clear guidance and codes of practice outlining appropriate constraints on how law enforcement should trial predictive algorithmic toolsFootnote 76 and implement them in practice.Footnote 77 Second, there is a lack of quality standards for evaluations of these systems.Footnote 78 Whenever evaluations do take place, there is still a lack of attention to data protection and social justice issues, which also impacts evidence-based policy built on such evaluations.Footnote 79 Third, there is a lack of expertise within law enforcement and oversight bodies,Footnote 80 which raises questions about how effective the oversight over these systems really is.
Finally, even when predictive machine learning does not process personal data, or where it is compliant with the LED, there are still other concerns, as discussed earlier. These social and ethical concerns need to be addressed through innovative oversight mechanisms that go beyond judicial oversight.Footnote 81 Current oversight mechanisms are geared toward compliance with data protection law; they do not address the ethical or social issues discussed earlier (Van Brakel, 2021a).
New types of oversight bodies could be inspired by adding a relational ethics perspective to the current rational perspective. Governance structures must also involve citizens, and they should specifically engage with targeted and vulnerable communities when making policy decisions about implementing AI.Footnote 82 An example of a step in the right direction is the establishment of the Ethics Committee by West Midlands Police.Footnote 83 The committee evaluates pilot projects and the implementation of new technologies by the police. What is positive about the committee is that it works in a transparent way, publishing its reports in full on its website, and that its membership is diverse. Members include representatives from the police, civil society, and the community, as well as academic experts in law, criminology, social science, and data science. However, to be successful and sustainable, such initiatives should also ensure that people are sufficiently compensated for their time and work, and thus not merely rely on the goodwill of volunteer members.Footnote 84
18.3.2.4 Organizational Issues
The implementation of AI by law enforcement also raises several organizational issues. The LED foresees a right to obtain human intervention when an impactful decision is taken solely by automated means.Footnote 85 This has been referred to as a “human in the loop,”Footnote 86 which is a safeguard to protect the data subject against “a decision evaluating personal aspects relating to him or her which is based solely on automated processing and which produces harm.”Footnote 87 However, in practice, this legal provision raises several challenges.
First, the directive does not specify what this “human in the loop” should look like or in what way the human should engage with the loop (on the loop, in the loop, or outside of the loop).Footnote 88 According to advice of the Article 29 Working Party, it is necessary to make sure that “the human intervention must be carried out by someone who has the appropriate authority and capability to change the decision and who will review all the relevant data including the additional elements provided by the data subject.”Footnote 89
According to Methani et al., meaningful human control refers to control frameworks in which humans, not machines, remain in control of critical decisions.Footnote 90 This means that, when it comes to AI, the notion of human oversight should extend beyond mere technical human control over a deployed system: It also includes the responsibility that lies in the development and deployment process, which consists entirely of human decisions and is therefore part of human control. The concept of meaningful human control should, in addition to mere oversight, also include design and governance layers in what it means to have effective control. However, these aspects are currently insufficiently taken into consideration, and guidance on how law enforcement must deal with this is lacking. Questions therefore remain about how law enforcement officers need to be in the loop to make sure this safeguard is effective.
Second, not everybody is enthusiastic about new technologies. Resistance against surveillance is hence important to consider when implementing AI in law enforcement and evaluating its effectiveness. Research by Sandhu and Fussey on predictive policing has shown that many police officers have a skeptical attitude toward, and a reluctance to use, predictive technologies.Footnote 91 A third implementation issue concerns automation bias, whereby a person will favor automatically generated decisions over manually generated ones.Footnote 92 This is what Fussey et al. have called deference to the algorithm when evaluating the Live Facial Recognition technology piloted by the London Metropolitan Police.Footnote 93 It also involves potential de-skilling, which implies that by relying on automated processes, people lose certain types of skills and/or expertise.Footnote 94 Of course, not everyone will respond to the use of such systems in the same way. However, this risk is something that needs to be taken seriously by both law enforcement agencies and policymakers. At the same time, Terpstra et al. have suggested that as policing becomes more dependent on abstract police information systems, professional knowledge and discretion are becoming devalued, which may have negative impacts on officers’ sense of organizational justice and self-legitimacy.Footnote 95
18.4 Conclusion
In this chapter, I discussed predictive policing in Europe and its main legal, ethical, and social issues. Law enforcement will become increasingly dependent on AI in the coming years, especially if it is considered to be superior to traditional policing methods and cheaper than hiring more officers. Current models of regulating, organizing, and explaining policing are based on models of human decision-making. However, as more policing is performed by machines, we will urgently need changes to those assumptions and rules.Footnote 96 Hence, the challenge lies not only in rethinking regulation but also in rethinking policy and soft law, and in exploring what role other modalities can play. Consideration must be given to how the technology is designed, how its users and those affected by it can be made more aware of its impact and be involved in its design, and how the political economy affects this impact. Current policy tools and judicial oversight mechanisms are not sufficient to address the broad range of concerns identified in this chapter. Because the harm that AI can cause can be individual, collective, and social, and often stems from the interaction of an existing practice with technology, an individualistic approach with a narrow technological focus is not adequate.Footnote 97
While some of the issues and challenges mentioned earlier are addressed by the AI regulation, it remains to be seen, as shown, to what extent these safeguards will be taken up and duly applied in the context of law enforcement. Just as the regulation of data processing by law enforcement always strives to find a balance between law enforcement goals and fundamental rights, the AI regulation aims to find a balance between corporate and law enforcement needs on the one hand and the protection of fundamental rights on the other. However, to address the social and ethical issues of AI, it is necessary to shift the focus in governance away from the compulsion to show “balance” by always referring to AI’s alleged potential for good, toward acknowledging that the social benefits are still speculative while the harms have been empirically demonstrated.Footnote 98
Considering, on the one hand, the minimal evidence of the impact of predictive policing on crime reduction and, on the other hand, the significant risks for social justice and human rights, should we not rethink the way AI is being used by law enforcement? Can it be used at all in a way that is legitimate, does not raise the identified social and ethical issues, and is useful for police forces and society? Simultaneously, the question arises whether the money invested in predictive policing applications should not instead be invested in tackling the causes of crime and in problem-oriented responses, such as mentor programs, youth sports programs, and community policing, as these can be a more effective way to prevent crime.Footnote 99
As Virginia Dignum nicely puts it: “AI is not a magic wand that gives their users omniscience or the ability to accomplish anything.”Footnote 100 To implement AI for law enforcement purposes in a responsible and democratic way, it will hence be essential that law enforcement officials and officers take a more nuanced and critical view about using AI for their work.