
Part II - Human–Robot Interactions and Procedural Law

Published online by Cambridge University Press:  03 October 2024

Sabine Gless, Universität Basel, Switzerland
Helena Whalen-Bridge, National University of Singapore
Type: Chapter
Information: Human–Robot Interaction in Law and Its Narratives: Legal Blame, Procedure, and Criminal Law, pp. 87–278
Publisher: Cambridge University Press
Print publication year: 2024
Creative Commons
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC 4.0 https://creativecommons.org/cclicenses/

5 Introduction to Human–Robot Interaction and Procedural Issues in Criminal Justice

Sabine Gless Footnote *
I Mapping the Field

Legal procedure determines how legal problems are processed. Many areas of procedure also raise issues of rights, which are established by substantive law and overarching principles, and allocated in the process of dispute resolution. More broadly, legal procedure reflects how authorities can impose a conflict settlement when the individuals involved are unable to do so.

Criminal procedure is an example of legal processing that has evolved over time and developed special characteristics. The state asks the alleged victim to stand back and allow the people to prosecute an individual’s wrongdoing. The state also grants the defendant rights when accused by the people. However, new developments are demanding that criminal procedure adapt in order to maintain its unique characteristics. Adjustments may have to be made as artificial intelligence (AI)Footnote 1 robots enter criminal investigations and courtrooms.

In Chapter 6, Sara Sun Beale and Hayley Lawrence describe these developments, and using previous research into human–robot interaction,Footnote 2 they explain how the manner in which these developments are framed is crucial. For example, human responses to AI-generated evidence will present unique challenges to the accuracy of litigation. The authors argue that traditional trial techniques must be adapted and new approaches developed, such as new testimonial safeguards, a finding that also appears in other chapters (see Section II.B). Beale and Lawrence suggest that forums beyond criminal courts could be designed as sandboxes to learn more about the basics of AI-enhanced fact-finding.

If we define criminal procedural law broadly to include all rules that regulate an inquiry into whether a violation of criminal law has occurred, then the relevance of new developments such as a “Robo-Judge” becomes even clearer (see Section II.D). Our broad definition of criminal procedure includes, e.g., surveillance techniques enabled by human–robot interaction, as well as the use of data generated by AI systems for criminal investigation and prosecution or fact-finding in court. This Introduction to Part II of the volume will not address the details of other areas such as sentencing, risk assessment, or punishment, which form part of the sanction regime after a verdict is rendered, but relevant discussions will be referred to briefly (Section II.D).

II The Spectrum of Procedural Issues

AI systems play a role in several areas of criminal procedure. The use of AI tools in forensics or predictive analysis reflects a policy decision to utilize new technology. Other areas are affected simply by human–robot cooperation in everyday life, because law enforcement or criminal investigations today make use of data recorded in everyday activities. This accessible data pool is growing quickly as more robots constantly monitor humans. For example, a modern car records manifold data on its user, including infotainment and braking characteristics.Footnote 3 During automated driving, driving assistants such as lane-keeping assistants or drowsiness-detection systems monitor drivers to ensure they are ready to respond to a take-over request if required.Footnote 4 If an accident occurs, this kind of alert could be used in legal proceedings in various ways.

II.A Using AI to Detect Crime and Predictive Policing

In classic criminal procedural codes, criminal proceedings start with the suspicion that a crime has occurred, and possibly that a specific person is culpable of committing it. From a legal point of view, this suspicion is crucial. Only if such a supposition exists may the government use the intrusive measures characteristic of criminal investigations, which in turn entitle the defendant to make use of special defense rights.

The use of AI systems and human–robot interactions has created new challenges to this traditional understanding of suspicion. AI-driven analysis of data can be used to generate suspicion via predictive policing,Footnote 5 natural language-based analysis of tax documents,Footnote 6 retrospective analysis of GPS locations stored in smartphones,Footnote 7 or even vaguer data profiling of certain groups.Footnote 8 In all of these cases, AI systems create a suspicion which allows the authorities to investigate and possibly prosecute a crime, one that would not have come to the government’s attention previously.Footnote 9

Today, surveillance systems and predictive policing tools are the most prominently debated examples of human–robot interaction in criminal proceedings. These tools aim to protect public safety and fight crime, but there are issues of privacy, over-policing, and potentially discrimination.

Broader criminal justice issues connected to these AI systems arise from the fact that these tools are normally trained via machine learning methods. Human bias, already present in the criminal justice system, can be reinforced by biased training data, insufficiently calibrated machine learning, or both. This can result in ineffective predictive tools which either do not identify “true positives,” i.e., the people at risk of committing a crime, or which burden the public or a specific minority with unfair and expensive over-policing.Footnote 10 In any case, a risk assessment is a prognosis, and as such it always carries its own risks because it cannot be checked entirely; such risk assessments therefore raise ethical and legal issues when used as the basis for action.Footnote 11
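
The audit logic behind these critiques can be made concrete. The following Python sketch, using entirely invented numbers, illustrates how one might compare a predictive tool's true-positive and false-positive rates across two groups; a tool that flags one group far more often without a corresponding gain in true positives exhibits precisely the over-policing problem described above. This is an illustrative sketch, not a method discussed in the chapters.

    # Illustrative sketch (not drawn from the volume): auditing a predictive
    # tool by comparing error rates across two groups. All data are invented.

    def rates(flagged, offended):
        """Return (true-positive rate, false-positive rate) for one group."""
        tp = sum(1 for f, o in zip(flagged, offended) if f and o)
        fn = sum(1 for f, o in zip(flagged, offended) if not f and o)
        fp = sum(1 for f, o in zip(flagged, offended) if f and not o)
        tn = sum(1 for f, o in zip(flagged, offended) if not f and not o)
        tpr = tp / (tp + fn) if (tp + fn) else 0.0
        fpr = fp / (fp + tn) if (fp + tn) else 0.0
        return tpr, fpr

    # Hypothetical tool output (True = flagged "at risk") and actual outcomes.
    groups = {
        "group A": ([True, True, False, True, False, False],
                    [True, False, False, True, False, False]),
        "group B": ([True, True, True, True, False, True],
                    [True, False, False, False, False, True]),
    }

    for name, (flagged, offended) in groups.items():
        tpr, fpr = rates(flagged, offended)
        print(f"{name}: true-positive rate {tpr:.2f}, false-positive rate {fpr:.2f}")

On these invented figures both groups have the same true-positive rate, but group B's false-positive rate is three times higher: the tool "works" equally well for both while burdening one group with far more unwarranted attention.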

II.B Criminal Investigation and Fact-Finding in Criminal Proceedings

When a criminal case is opened regarding a particular matter, the suspicion that a crime actually occurred must be investigated. The authorities seek to substantiate this suspicion by collecting material to serve as evidence. The search for all relevant leads is an important feature of criminal proceedings, which are shaped by the ideal of finding the truth before a verdict is entered. Currently, the material collected as evidence increasingly includes digital evidence.Footnote 12

Human–robot interactions in daily life can also lead to a targeted criminal investigation in a specific case. For example, a modern car programmed to monitor both driving and driver could record data that suggests a crime has been committed.Footnote 13 Furthermore, a driver’s failure to react to take-over requests could factor into a prediction of the driving standards likely to be exhibited by an individual in the future.Footnote 14

For a while now, new technology has also played an important role in enhancing forensic techniques. DNA sample testing is one area that has benefited, but it has also faced new challenges.Footnote 15 Digitized DNA sample testing is less expensive, but it is based on an opaque data-generating process, which raises questions regarding its acceptability as criminal evidence.Footnote 16

Beyond the forensic technological issues of fact-finding, new technology facilitates the remote testimony of witnesses who cannot come to trial as well as reconstructions of relevant situations through virtual reality.Footnote 17 When courts shut their doors during the COVID-19 pandemic, they underwent a seismic shift, adopting virtual hearings to replace physical courtrooms. It is unclear whether this transformation will permanently alter the justice landscape by offering new perspectives on court design, framing, and “ritual elements” of virtual trials in enhanced courtrooms.Footnote 18

II.B.1 Taming the “Function Creeps”

Human–robot interaction prompts an even broader discussion regarding criminal investigation, as the field of inquiry includes not only AI tools designated as investigative tools, but also devices whose functions reach beyond their original intended purpose, termed “function creep.”Footnote 19 An example would be drowsiness detection alerts: the driving assistants that generate such alerts were designed only to warn drivers about their performance during automated driving, not to supply evidence for a criminal court.

In her Chapter 9, Erin Murphy addresses the issue that while breathalyzers or DNA sample testing kits were designed as forensic tools, cars and smartphones were designed to meet consumer needs. When the data generated by consumer devices is used in criminal investigations, the technology is employed for a purpose which has not been fully evaluated. For example, the recording of a drowsiness alert, like other data stored by the vehicle,Footnote 20 could be a valuable source of evidence for fact-finding in criminal proceedings, in particular, a driver’s non-response to alerts issued by a lane-keeping assistant or drowsiness detection system.Footnote 21 However, an unresolved issue is how a defendant would defend against such incriminating evidence. Murphy argues for a new empowerment of defendants facing “digital proof,” by providing the defense with the procedural tools to attack incriminating evidence or introduce their own “digital proof.”

A lively illustration of the need to take Murphy’s plea seriously is the Danish data scandal.Footnote 22 Denmark uses historical call data records as circumstantial evidence to prove that someone has phoned a particular person or has been in a certain location. In 2019, it became clear that the data used was flawed because, among other things, the data processing method employed by certain telephone providers had changed without the police authorities’ awareness. The judicial authorities eventually ordered a review of more than 10,000 cases, and consequently several individuals were released from prison. It has also been revealed that the majority of errors in the Danish data scandal were due to human error rather than machine error.

II.B.2 Need for a New Taxonomy

One lesson that can be learned from the Danish data scandal is that human–robot interaction might not always require new and complex models, but rather common sense, litigation experience, and forensic understanding. Telephone providers, though obliged to record data for criminal justice systems, have the primary task of providing a customer service, not preparing forensic evidence. However, when AI-generated data, produced as a result of a robot assessing human performance, are proffered as evidence, traditional know-how has its limits. If robot testimony is presented at a criminal trial for fact-finding, a new taxonomy and a common language shared by the trier of facts and experts are required. Rules have been established for proving that a driver was speeding or intoxicated, but not for explaining the process that leads an alert to indicate the drowsiness of a human driver. These issues highlight the challenges and possibilities accompanying digital evidence, which must now be dealt with in all legal proceedings, because most information is stored electronically, not in analog form.Footnote 23 It is welcome that supranational initiatives, such as the Council of Europe’s Electronic Evidence Guide,Footnote 24 provide standards for digital evidence, although they do not take up the specific problems of evidence generated through human–robot interactions. To support the meaningful vetting of AI-generated evidence, Chapter 8 by Emily Silverman, Jörg Arnold, and Sabine Gless proposes a new taxonomy that distinguishes raw, processed, and evaluative data. This taxonomy can help courts find new ways to access and test robot testimony in a reliable and fair way.Footnote 25
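
A minimal Python sketch of how the raw, processed, and evaluative tiers might be applied when cataloguing vehicle data offered as evidence is given below. The record names, fields, and the disclosure flag are hypothetical illustrations, not taken from Chapter 8.

    # Hypothetical illustration of the raw / processed / evaluative taxonomy.
    from dataclasses import dataclass
    from enum import Enum

    class DataTier(Enum):
        RAW = "raw"                # sensor output, e.g. per-frame eyelid-closure values
        PROCESSED = "processed"    # derived signals, e.g. logged lane-departure events
        EVALUATIVE = "evaluative"  # system judgments, e.g. "driver is drowsy"

    @dataclass
    class EvidenceItem:
        description: str
        tier: DataTier
        # The further the data sit from raw measurement, the more the model and
        # thresholds behind them arguably need to be disclosed and tested in court.
        needs_model_scrutiny: bool

    items = [
        EvidenceItem("eyelid-closure measurements", DataTier.RAW, False),
        EvidenceItem("lane-keeping intervention log", DataTier.PROCESSED, True),
        EvidenceItem("drowsiness alert issued before the crash", DataTier.EVALUATIVE, True),
    ]

    for item in items:
        print(f"{item.tier.value:>10}: {item.description}")

The point of such a tagging exercise is not the code itself but the shared vocabulary it enforces: the trier of fact and the expert can agree on how far a given item is removed from raw measurement before arguing about its weight.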

Part of the challenge in vetting such evidenceFootnote 26 is to support the effective use of defense rights to challenge evidence.Footnote 27 It is very difficult for any fact-finder or defendant to pierce the veil of data, given that robots or other AI systems may not be able to explain their reasoningFootnote 28 and may be protected by trade secrets.Footnote 29

II.C New Agenda on Institutional Safeguards and Defense Rights

The use of AI systems in law enforcement and criminal investigations, and the omnipresence of AI devices that monitor the daily life of humans, impact the criminal trial in significant ways.Footnote 30 One shift is from the traditional investigative-enforcement perspective of criminal investigations to a predictive-preventive approach. This shift could erode the theoretically strong individual rights of defendants in criminal investigations.Footnote 31 A scholarly debate has asked, what government action should qualify as the basis for a criminal proceeding as opposed to mere policing? What individual rights must be given to those singled out by AI systems? What new institutional safeguards are needed? And, given the ubiquity of smartphone cameras and the quality of their recordings, as well as the willingness of many to record what they see, what role can or should commercial technology play in criminal investigations?

In Chapter 7, Andrea Roth argues that the use of AI-generated evidence must be reconciled with the basic goals shared by both adversarial and inquisitorial criminal proceedings: accuracy, fairness, dignity, and public legitimacy. She develops a compilation of principles for every stage of investigation and fact-finding to ensure a reliable and fair process, one that meets the needs of human defendants without losing the benefits of new technology. Her chapter points to the notion that the use of AI devices in criminal proceedings jeopardizes the modern achievement of conceptualizing the defendant not as an object, but as a subject of the proceedings.

It remains to be seen whether future courts and legal scholarship will be able to provide a new understanding of basic principles in criminal proceedings, such as the presumption of innocence. A new understanding is needed in view of the possibility that investigative powers will be exercised on individuals who are not the subjects of criminal investigations but rather of predictive policing,Footnote 32 as these individuals would not be offered traditional procedural protections. This is a complex issue doctrinally, because in Europe the presumption of innocence only applies after the charge. If there is no charge, there is, in principle, no protection. However, once a charge is leveled, the protection applies retroactively.

II.D Robo-Judges

After criminal investigation and fact-finding, a decision must be rendered. Could robots hand down a verdict without a human in the loop? Ideas relating to so-called robo-judges have been discussed for a while now.Footnote 33 In practice, “legal tech” and robot-assisted alternative dispute resolution have made progress,Footnote 34 as has robot-assisted human decision-making in domains where reaching a decision through the identification, sorting, and calibration of numerous variables is crucial. Instances of robots assisting in early release or bail decisions in overburdened US systems, or in sentencing in China, have been criticized for various reasons.Footnote 35 However, some decision-making systems stand a good chance of being adopted in certain areas, because human–robot cooperation in making judicial decisions can facilitate faster and more affordable access to justice, which is a human right.Footnote 36 Countries increasingly provide online dispute resolution services that rely almost entirely on AI,Footnote 37 and some may take the use of new technologies beyond that.Footnote 38

When legal punishment entails the curtailment of liberty and property, and in some countries even death, things are different.Footnote 39 The current rejection of robo-judges in criminal matters is, however, not set in stone. Research on the feasibility of developing algorithms to assist in handing down decisions exists in jurisdictions as different as the United States,Footnote 40 Australia,Footnote 41 China,Footnote 42 and Germany.Footnote 43 If human–robot cooperation brings about more efficient and fair sentencing in a petty crime area, this will have wide-ranging implications for other human–robot interactions in legal proceedings, as well as other types of computer-assisted decision-making.

Obviously, this path is not without risk. Defendants today often only invoke their defense rights when they go to trial.Footnote 44 And as has been argued above, their confrontation right, which is necessary for reliable and fair fact-finding, is particularly at risk in the context of some robot evidence. A robot-assisted trial would have to grant an effective set of defense rights. Even the use of a robo-judge in a preliminary judgment could push defendants into accepting a plea bargain without making proper use of their trial rights. Some fear the inversion of the burden of proof, based on risk profiles and possibly even exotic clues like brain research.Footnote 45

As things stand today, using robo-judges to entirely replace humans is a distant possibility.Footnote 46 However, the risks of semi-automated justice present a more urgent concern.Footnote 47 When an AI-driven frame of reference is admitted into the judging process, humans have difficulty making a case against the robot’s finding, and it is therefore likely that an AI system would set the tone. We may see a robot judge as “fairer” if bias is easier to address in a machine than in a person. Technological advancement could reduce and perhaps eliminate a feared “fairness gap” by enhancing the interpretability of AI-rendered decisions and strengthening beliefs regarding the thoroughness of consideration and the accuracy of the outcome.Footnote 48 But until then, straightforward communication and genuine human connection seem too precious to sacrifice for the possibility of a procedurally more just outcome. As of now, it seems that machine-adjudicated proceedings are considered less fair than those adjudicated by humans.Footnote 49

II.E Robo-Defense

Criminal defendants have a right to counsel, but this right may be difficult to exercise when defense lawyers are too expensive or hard to secure for other reasons. If it is possible for robots to assist judges, so too could they assist defendants. In routine cases with recurring issues, a standard defense could help. This is the business model of the start-up “DoNotPay.”Footnote 50 Self-styled as the “world’s first robot lawyer,”Footnote 51 it aims to help fight traffic tickets in a cheap and efficient way.Footnote 52 When DoNotPay’s creator announced that his AI system could advise defendants in the courtroom using smart glasses that record court proceedings and dictate responses into their ear via AI text generators, he was threatened with criminal prosecution for the unauthorized practice of law.Footnote 53 Yet, the fact that well-funded, seemingly unregulated providers demonstrated a willingness to enter the market for low-cost legal representation might foreshadow a change in criminal defense.

Human–robot interaction might not only lower representation costs, but potentially also assist defendants in carrying out laborious tasks more efficiently. For example, if a large number of texts need to be screened for defense leads, the use of an AI system could speed up the process considerably. Furthermore, if a defendant has been incriminated by AI-generated evidence, it only makes sense to employ technology in response.Footnote 54

II.F Robots as Defendants

Although the idea was dismissed as science fiction in the past, scholars in the last decade have begun to examine the case for punishing robots that cause harm.Footnote 55 As Tatjana Hörnle rightly points out in her introduction to Part I of the volume, theorizing about attributing guilt to robots and actually prosecuting them in court are two different things. But if the issue is considered, it appears that similar problems arise in substantive and procedural law. Prominent among the challenges is the fact that both imputing guilt and bringing charges require the defendant to have a legal personality. It only makes sense to pursue robots in a legal proceeding if they can be the subject of a legal obligation.

In 2017, the EU Parliament took a functional approach to conferring partial legal capacity on robots via its “Resolution on Civil Law Rules on Robotics,” which proposed to create a specific legal status for robots.Footnote 56 Conferring legal personality on robots draws on the notion of the “legal personality” of companies or corporations. “Electronic personality” would be applied to cases where robots make autonomous decisions or otherwise interact with third parties autonomously.Footnote 57

In principle, the idea of granting robots personhood dates back a few decades. A prominent early proposal was submitted by Lawrence Solum in 1992.Footnote 58 He posited the idea of a legal personality, although the idea was more akin to a thought experiment.Footnote 59 He highlighted the crucial question of incentivizing “robots”: “what is the point of making a thing – which can neither understand the law nor act on it – the subject of a legal duty?”Footnote 60 More recently, some legal scholars claim that “there is no compelling reason to restrict the attribution of action exclusively to humans and to social systems.”Footnote 61 Yet the EU proposal remains controversial for torts, and the proposal for legal personhood has not been taken up in the debate regarding AI systems in criminal justice.

II.G Risk Assessment Recommendation Systems (Bail, Early Release, Probation)

New technology not only changes how we investigate crime and search for evidence. Human–robot cooperation in criminal matters also has the potential to transform risk assessment connected to individuals in the justice system and the assignment of adequate responsive measures. A robot’s capacity to analyze vast data pools and make recommendations based on this assessment potentially promises better risk assessment than humans can provide.Footnote 62 Robots assist in decision-making during criminal proceedings in particular cases, as when they make recommendations regarding bail, advise on an appropriate sentence, or make suggestions regarding early release. Such systems have been used in state criminal justice branches in the United States, but this has triggered controversial case lawFootnote 63 and a vigorous debate around the world.Footnote 64 What some see as more transparent and rational, i.e., “evidence-based” decision-making,Footnote 65 others denounce as deeply flawed decision-making.Footnote 66 It is important to note that in these cases, the final decision is always taken by a judge. However, the question is whether the human judge will remain the actual decision-maker or become more and more of a figurehead for a system that crunches pools of data.Footnote 67

III Privacy and Fairness Concerns

The use of human–robot interaction in criminal matters raises manifold privacy and fairness concerns, only some of which can be highlighted here.

III.A Enhancing Safety or Paving the Way to a “Surveillance State”?

In a future where human–robot interactions are commonplace, one major concern is the potential for a “surveillance state” in which governments and private entities share tasks, thereby allowing both sides to avoid the regulatory net. David Gray takes on this issue when he asks whether our legal systems have the right tools to preserve autonomy, intimacy, and democracy in a future of ubiquitous human–robot interaction. He argues that the US Constitution’s Fourth Amendment could provide safeguards, but it falls short due to current judicial interpretations of individual standing and the state agency requirement. Gray argues that the language of the Fourth Amendment, as well as its historical and philosophical roots, support a new interpretation, one that could acknowledge collective interests and guard privacy as a public good against threats posed by both state and private agents.

In Europe, the fear of a surveillance state has prompted manifold domestic and European laws. The European Convention on Human Rights (ECHR), adopted in 1950 in the forum of the Council of Europe, grants the right to privacy as a fundamental human right. The EU Member States first agreed on a Data Protection Directive (95/46/EC) in 1995, then proclaimed a right to protection of personal data in the Charter of Fundamental Rights of the EU in 2000, and most recently put into effect the General Data Protection Regulation (GDPR) in 2018. The courts, in particular the Court of Justice of the European Union (CJEU), have also shaped data protection law through interpretations and rulings.

Data processing in criminal justice, however, has always been an exception. It is not covered by the GDPR as such, but by the Directive (EU) 2016/680, which addresses the protection of natural persons regarding the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection, or prosecution of criminal offenses or the execution of criminal penalties.Footnote 68 New proposals, such as regulation laying down harmonized rules on artificial intelligence (AI Act),Footnote 69 have the potential to undo current understandings regarding the dividing line between general regulation of data collection and police matters.

One major issue, concerning policing as well as criminal justice, pertains to facial recognition, conducted by either a fully responsible human via photo matching or by a robot using real-time facial recognition. When scanning masses of visual material, robots outperform humans in detecting matches via superior pattern recognition. This strength, however, comes with drawbacks, among them the reinforcement of inherent bias through the use of biased training materials in the machine learning process.
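
The matching step itself is simple to state, which is part of why bias in the underlying training data matters so much. The following Python sketch, with invented embeddings and an invented threshold, shows the basic comparison a facial recognition system performs; everything that makes the comparison reliable or biased is baked into how the embeddings and the threshold were learned.

    # Illustrative sketch: comparing two face "embeddings" with a similarity
    # threshold. Numbers and threshold are invented; real systems learn both
    # from training data, which is where bias can enter.
    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    probe = [0.12, 0.80, 0.35, 0.44]      # hypothetical embedding of a CCTV still
    candidate = [0.10, 0.75, 0.40, 0.42]  # hypothetical embedding of a mug shot

    MATCH_THRESHOLD = 0.95  # set by the operator; moving it shifts the false-match rate

    similarity = cosine_similarity(probe, candidate)
    print(f"similarity = {similarity:.3f}, match = {similarity >= MATCH_THRESHOLD}")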

The use of facial recognition in criminal matters raises a number of issues, including public–private partnerships. Facial recognition systems need huge data pools to function, which can be provided by the authorities in the form of mug shots. Creating such data pools can, however, lead to the reinforcement of bias already existent in policing. Visual material could also be provided by private companies, but this raises privacy concerns if the respective individuals have not consented to be in the data pool. Data quality may also be problematic if the material lacks adequate diversity, which could affect the robot’s capability to correctly match two pictures. In the past, authorities bought pictures and services from companies that later came under scrutiny for their lack of transparency and other security flaws.Footnote 70 If such companies scrape photos from social media and other internet sources without consent from individuals, the material cannot be used for matching, but without an adequate volume of photographs, there may be serious consequences such as wrongful identification. Similar arguments are raised regarding the use of genealogy databases for DNA-sample testing by investigation authorities.Footnote 71 The use of facial recognition for criminal justice matters may have even more profound effects. People might feel safer overall if criminals are identified, but also less inclined to exercise legal rights that put them under the gaze of the authorities, such as taking part in demonstrations.Footnote 72

The worldwide awareness of the use of robots in facial recognition has given rise to an international discussion about the need for universal normative frameworks. These frameworks are based on existing international human rights norms for the use of facial recognition technology and related AI use. In June 2020, the UN High Commissioner for Human Rights published a report concerning the impact of new technologies,Footnote 73 including facial recognition technology, focusing on the effect on human rights.Footnote 74 The report highlighted the need to develop a standard for privacy and data protection, as well as to address accuracy and discriminatory impacts. The following year, the Council of Europe published Guidelines on Facial Recognition, suggesting that states should adopt a robust legal framework applicable to the different cases of facial recognition technology and implement a set of safeguards.Footnote 75 At the beginning of 2024, the EU Member States approved a proposal on an AI ActFootnote 76 that aims to ban certain facial recognition techniques in public spaces but permits their use for specific law enforcement purposes where prior judicial authorization is provided.Footnote 77

III.B Fairness and Taking All Interests into Consideration

Notwithstanding the many risks attached to the deployment of certain surveillance technology, it is clear that AI systems and robots can be put to use to support overburdened criminal justice systems and the individuals who face them. For example, advanced monitoring systems might allow for finely adjusted bail or probation measures in many more situations than is possible with current levels of human oversight.Footnote 78 Crowdsourced evidence from private cameras might provide exonerating evidence needed by the defense.Footnote 79 However, such systems raise fairness questions in many ways and require the balancing of interests in manifold respects, both within and beyond the criminal trial. Problems arising within criminal proceedings include the possible infringement of defense rights, as well as the need to correct bias and prevent discrimination (see Sections II.A and II.B.2).

A different sort of balancing of interests is required when addressing risks regarding the invasion of privacy.Footnote 80 Chapter 10 by Bart Custers and Lonneke Stevens outlines the increasing discrepancy between legal frameworks of data protection and criminal procedure, and the actual practices of using data as evidence in criminal courts. The structural ambiguity they detect has many features. They find that the existing laws in the Netherlands do not obstruct data collection but that the analysis of such evidence is basically unregulated, and data rights cannot yet be meaningfully enforced in criminal courts.

As indicated above, this state of affairs could change. In Europe, new EU initiatives and legislation are being introduced.Footnote 81 If the right to transparency of AI systemsFootnote 82 and the right to accountabilityFootnote 83 can be enforced in criminal proceedings and are not modified by a specialized criminal justice regulation,Footnote 84 courts that want to make use of data gained through such systems might find that data protection regulation actually promises to assist in safeguarding the reliability of fact-finding. As always, the question is whether we can meaningfully identify, understand, and address the possibilities and risks posed by human–robot interaction. If not, we cannot make use of the technology.

The controversial debate on how the criminal justice system can adequately address privacy concernsFootnote 85 and the development of data protection law potentially point the way to a different solution. This solution lies not in law, but in technology, via privacy by design.Footnote 86 This approach can be taken to an extreme, until we arrive at what has been called “impossibility structures,” i.e., design structures that prohibit human use in certain circumstances.Footnote 87 Using the example of driving automation, we find that the intervention systems exist on a spectrum. On one end of the spectrum, there are low intervention systems known as nudging structures, such as intelligent speed assistance and drowsiness warning systems. At the high intervention end of the spectrum are impossibility structures; rather than simply monitor or enhance human driving performance, they prevent human driving entirely. For example, alcohol interlock devices immobilize the vehicle if a potential driver’s breath alcohol concentration is in excess of a certain predetermined level. These structures prevent drunken humans from driving at all, creating “facts on the ground” that replace law enforcement and criminal trials. It is very difficult to say whether it would be good to bypass human agency with such structures, the risk being that such legality-by-design undermines not only the human entitlement to act out of necessity, but perhaps also the privacy that comprises one of the foundations of liberal society, which could undermine democracy as a whole.Footnote 88
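
A minimal Python sketch, with invented thresholds, of the two ends of that spectrum: the drowsiness warning is a nudging structure that leaves the decision with the human, while the alcohol interlock is an impossibility structure that forecloses the prohibited act altogether.

    # Illustrative sketch of the intervention spectrum; thresholds are invented.

    DROWSINESS_WARN_LEVEL = 0.7   # hypothetical score above which the car warns
    BREATH_ALCOHOL_LIMIT = 0.25   # hypothetical breath-alcohol limit for the interlock

    def nudging_structure(drowsiness_score: float) -> str:
        # Low intervention: the system only warns; the human still decides.
        if drowsiness_score > DROWSINESS_WARN_LEVEL:
            return "warning issued -- driver decides whether to pull over"
        return "no action"

    def impossibility_structure(breath_alcohol: float) -> str:
        # High intervention: the design itself makes the prohibited act impossible.
        if breath_alcohol > BREATH_ALCOHOL_LIMIT:
            return "ignition locked -- the car cannot be driven"
        return "ignition enabled"

    print(nudging_structure(0.85))
    print(impossibility_structure(0.40))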

IV The Larger Perspective

It seems inevitable that human–robot interaction will impact criminal proceedings, just as it has other areas of the law. However, the exact nature of this impact is unclear. It may help to prevent crime before it happens or it might lead to a merciless application of the law.

Legal scholars primarily point to the risks of AI systems in criminal justice and the need to have adequate safeguards in place. However, many agree that certain robots have the potential to make criminal proceedings faster, and possibly even fairer. One big, not yet fully scrutinized issue will be whether we can and will trust systems that generate information through decision-making processes that are opaque to humans, even when it comes to criminal verdicts.Footnote 89

Future lawmakers drafting criminal procedure must keep in mind what Tatjana Hörnle pointed out in her introduction to Part I of the volume, that humans tend to blame other humans rather than machines.Footnote 90 The same is true for bringing charges against humans as opposed to machines, as explained by Jeanne Gaakeer.Footnote 91 Part of the explanation for this view lies in the inherent perspectives of substantive and procedural law.Footnote 92 Criminal justice is tailored to humans, and it is much easier, for reasons rooted in human understanding and ingrained in the legal framework, to prosecute a human.Footnote 93 This appears to be the case when a prosecution can be directed against either a human or a human–robot cooperation,Footnote 94 and it would most probably also be the case if one had to choose between prosecuting a visible human driver or a robot that guided automated driving.

With human–robot interaction now becoming a reality of daily life and criminal justice, it is time for the legal community to reconcile itself to these challenges and engage in a new conversation with computer scientists, behavioral scholars, forensic experts, and other disciplines that can provide relevant knowledge. The digital shift in criminal justice will be manifold and less than predictable. Human–robot interaction might direct more blame in the direction of humans, but it might also open up various new ways to reconstruct the past and possibly assist in exonerating falsely accused humans. A basic condition for benefiting from these developments is to understand the different aspects of human–robot interaction and their ramifications for legal proceedings.

6 Human Psychology and Robot Evidence in the Courtroom, Alternative Dispute Resolution, and Agency Proceedings

Sara Sun Beale and Hayley Lawrence Footnote *
I Introduction

In the courtroom, the phrases artificial intelligence (AI) and robot witnesses (“robo-witnesses”) conjure up images of a Star Wars-like, futuristic world with autonomous robots like C3PO taking the witness stand. Although testimony from a robo-witness may be possible in the distant future, many other kinds of evidence produced by AI are already becoming more common.

Given the wide and rapidly expanding range of activities being undertaken by robots, it is inevitable that robot-generated evidence and evidence from human witnesses who interacted with or observed robots will be presented in legal forums. This chapter explores the effects of human psychology on human–robot interactions (HRIs) in legal proceedings. In Section II, we review the research on HRI in other contexts, such as market research and consumer interactions. In Section III, we consider the effect the psychological responses detailed in Section II may have in litigation.

We argue that human responses to robot-generated evidence will present unique challenges to the accuracy of litigation, as well as ancillary goals such as fairness and transparency, but HRI may also enhance accuracy in other respects. For our purposes, the most important feature of HRI is the human tendency to anthropomorphize robots. Anthropomorphization can generate misleading impressions, e.g., that robots have human-like emotions and motives, and this tendency toward anthropomorphization can be manipulated by designing robots to make them appear more trustworthy and believable. The degree of distortion caused by anthropomorphization will vary, depending on the design of the robot and other situational factors, like how the interaction is framed. The effects of anthropomorphization may be amplified by the simulation heuristic, i.e., how people estimate the likelihood that something happened based on how easy it is for them to imagine it happening, and the psychological preference for direct evidence over circumstantial evidence.Footnote 1 Moreover, additional cognitive biases may distort fact-finding or attributions of liability when humans interact with or observe robots.

On the other hand, robot-generated evidence may offer unique advantages if it can be presented as direct evidence via a robo-witness, because of the nature of a robo-witness’s memory compared to that of a human eyewitness. We have concerns, however, about the degree to which the traditional methods of testing the accuracy of evidence, particularly cross-examination, will be effective for robot-generated evidence. It is unclear whether lay fact-finders, who are prone to anthropomorphize robots, will be able to understand and evaluate the information generated by complex algorithms, particularly those using unsupervised learning models.

Although it has played a limited role in litigation, AI evidence has been used in other legal forums. Section IV compares the use of testimony from autonomous vehicles (AVs) in litigation with the use of similar evidence in alternative dispute resolution (ADR) and in proceedings before the National Transportation Safety Board (NTSB). These contrasting legal infrastructures present an opportunity to examine AI evidence through a different lens. After comparing and contrasting AI testimony in ADR and NTSB proceedings with traditional litigation, the chapter suggests that the presence of expert decision-makers might help mitigate some of the problems with HRI, although other aspects of the procedures in each forum still raise concerns.

II The Psychology of HRI in Litigation

Although there is no universally agreed-upon definition of “robot,” for our purposes, a robot is “an engineered machine that senses, thinks, and acts.”Footnote 2 Practically speaking, that means the robot must “have sensors, processing ability that emulates some aspect of cognition,” and the capacity to act on its decision-making.Footnote 3 A robot must be equipped with programming that allows it to independently make intelligent choices or perform tasks based on environmental stimuli, rather than merely following the directions of a human operator, like a remote-controlled car.Footnote 4 Under our definition, robots need not be embodied, i.e., they need not occupy physical space or have a physical presence. Of course, the fictitious examples of R2D2 and C3PO fit our definition, but so too do the self-driving, guided steering, or automatic braking features in modern cars.
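
Read as a specification, the definition reduces to a sense-think-act loop. The following Python sketch, with an invented sensor reading and an invented steering rule, is only meant to make the three elements of the definition concrete; it is not drawn from the chapter.

    # Illustrative sense-think-act loop for a lane-keeping feature; values invented.

    def sense() -> float:
        """Return the car's lateral offset from the lane centre, in metres."""
        return 0.6  # hypothetical camera-based measurement

    def think(offset: float) -> float:
        """Decide a steering correction from the sensed offset (a crude rule)."""
        return -0.1 * offset if abs(offset) > 0.3 else 0.0

    def act(correction: float) -> None:
        """Apply the decision to the steering actuator (here, just report it)."""
        print(f"steering correction: {correction:+.2f} rad")

    act(think(sense()))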

II.A Anthropomorphism

The aspect of HRI with the greatest potential to affect litigation is the human tendency to anthropomorphize robots.Footnote 5 Despite knowing that robots do not share human consciousness, people nevertheless tend to view robots as inherently social actors. As a result, people often unconsciously apply social rules and expectations to robots, assigning to them human emotions and sentience.Footnote 6 People even apply stereotypes and social heuristics to robotsFootnote 7 and use the same language to describe interactions with robots and humans.Footnote 8 This process is unconscious and instantaneous.Footnote 9

Rather than operating like an on-off switch, there are degrees of anthropomorphization, and the extent to which people anthropomorphize depends on several factors, including framing, interactivity or animacy, physical embodiment and presence, and appearance. Furthermore, these factors interact with one another. The presence (or absence) of a given characteristic impacts the anthropomorphizing effect of the other present characteristics.

II.A.1 Framing

How an HRI is framed significantly impacts human responses and perceptions about the robot and the interaction itself. Framing therefore has the potential to interfere with the accuracy of the litigation process when robot-generated evidence is presented. Framing generally refers to the way a human observer is introduced to an interaction, and in the case of robot-generated evidence, to a robot before the interaction actually begins. For example, does the robot have a name? Is the name endearing or human-like, e.g., “Marty” versus “Model X”? Is the robot assigned a gender? Is the robot given a backstory? What job or role is the robot intended to fulfil? Framing immediately impacts the human’s perception of a robot. Humans use that introductory information to form a mental model about a robot, much as they do for people, assigning to it stereotypes, personal experiences, and human emotions through anthropomorphization.Footnote 10

Two experiments demonstrate the power of framing to establish trust and create emotional attachments to robots. The first experiment involved participants riding in AVs, which are robots by our definition, and it demonstrates how framing can impact people’s trust in a robot and how much blame they assign to it.Footnote 11 Each test group was exposed to a simulated crash that was unavoidable and clearly caused by another simulated driver. Prior to the incident, participants who had received anthropomorphic framing information about the car, including a name, a gendered identity, and a voice through human audio files, trusted the car more than participants who had ridden in a car with identical driving capabilities but for which no similar framing information had been provided (“agentic condition”) and more than those in the “normal” condition who operated the car themselves, i.e., no autonomous capabilities.Footnote 12 After the incident, participants reported that they trusted the anthropomorphically framed car more even though the only difference between the two conditions was the car having humanized qualities. Subjects in the anthropomorphized group also blamed the vehicle for the incident significantly less than the agentic group, perhaps because they unconsciously perceived the car as more thoughtful. Conversely, subjects in the normal condition who operated the car themselves assigned very little blame to the car. This makes sense because “[a]n object with no agency cannot be held responsible for any actions.”Footnote 13 It is important that the anthropomorphized condition group perceived the car as more thoughtful, which mitigated some of the responsibility imputed to the vehicle.

The second experiment demonstrates that the way a robot’s relationship to humans is framed, even by something as simple as giving the robot a name, can seriously impact the level of emotional attachment humans feel toward it. Participants were asked to observe a bug-like robot and then to strike it with a mallet.Footnote 14 The robot was introduced to one group of study participants with a name and an appealing backstory. “This is Frank. Frank is really friendly, but he gets distracted easily. He’s lived at the Lab for a few months now. His favorite color is red.”Footnote 15 The participants who experienced this anthropomorphic framing demonstrated higher levels of empathy and concern for the robot, showing emotional distress and a reluctance to hit it.Footnote 16

Additionally, framing may impact whether, and to what degree, humans assume a robot has agency or free will. Anthropomorphism drives humans to impute at least a basic level of human “free will” to robots.Footnote 17 In other words, people assume that a robot makes at least some of its choices independently rather than as a simple result of its internal programming. This understanding is, of course, flawed. Although AI “neural networks” are modeled after the human brain to identify patterns and make decisions, robots do not consciously think and make choices as we do.Footnote 18 As robots operate more autonomously and are equipped with more anthropomorphous characteristics, humans will likely perceive them as having more agency or free will.Footnote 19

II.A.2 Interactivity or Animacy

The interactivity or animacy of a robot also has a significant effect on HRI. Anthropomorphization drives people to seek social connections with robots,Footnote 20 and our innate need for social connection also causes humans to infer from a robot’s verbal and non-verbal “expressions” that it has “emotions, preferences, motivations, and personality.”Footnote 21 Social robots can now simulate sound, movement, and social cues that people automatically and subconsciously associate with human intention and states of mind.Footnote 22 Robots can motivate people by mimicking human emotions like anger, happiness, or fear, and demonstrate a pseudo-empathy by acting supportively.Footnote 23 They can apply peer pressure or shame humans into doing or not doing something.Footnote 24

Humans form opinions about others based on voice and speech patterns,Footnote 25 and the same responses, coupled with anthropomorphization, can be used to make judgments about robots’ speech. Many robots now communicate verbally, using verbal communication to persuade humans, establish a “relationship,” or convey moods or a personality.Footnote 26 Certain styles of speech, accents, and vernacular are perceived as more authoritative, trustworthy, persuasive, or intelligent.Footnote 27

II.A.3 Physical Presence and Physical Embodiment

Physical presence and physical embodiment also impact the extent to which people anthropomorphize a robot. A physically present robot is one that shares the same physical space with you. A physically embodied robot is one that has some sort of physical manifestation. A robot may be physically embodied, but not physically present. A familiar example is the Roomba vacuum robot. A Roomba in your house is physically present and physically embodied. But if you interact with C3PO, the gold robot from Star Wars, via video conference, C3PO is physically embodied, but not physically present. Instead, he is telepresent. Lastly, Apple’s Siri is an example of a robot that is neither physically present nor physically embodied. The Siri virtual assistant is a voice with neither a physical appearance nor an embodiment outside the iPhone.

In experimental settings, a physically present, embodied robot affected HRI more than its non-embodied or non-present counterparts.Footnote 28 The combination of the robot’s presence and embodiment fostered favorable attitudes among study participants. These findings are consistent with the assumptions that people perceive robot agents as social actors and typically prefer face-to-face interactions.Footnote 29 A review of multiple studies found that participants had more favorable attitudes toward co-present, physically embodied robots than toward telepresent robots, and that physically embodied robots were more persuasive and more trustworthy than their telepresent counterparts.Footnote 30 There was, however, no statistically significant difference between human perception of telepresent robots and non-embodied virtual agents like Siri. Overall, participants favored the co-present robot to the virtual agent and found the co-present robot more persuasive, even when its behavior was identical to that of the virtual agent. People paid more attention to the co-present robot and were more engaged in the interaction.

II.A.4 Appearance

Because of the power of anthropomorphism, the appearance or features of an embodied robot can influence whether it is viewed as likeable, trustworthy, and persuasive.

II.A.4.a Robot Faces

Whether a robot is given a face, and what that face looks like, will have a significant impact on HRI. Humans form impressions almost instantly, deciding whether a person is attractive and trustworthy within one-tenth of a second of seeing their face.Footnote 31 Because humans incorrectly assume that robots are inherently social creatures, we make judgments about robots based on their physical attributes using many of the same mental shortcuts that we use for humans. Within the first two minutes of a human–robot interaction or observation, “people create a coherent, plausible mental model of the robot,” based primarily on its physical appearance and interactive features like voice.Footnote 32

Because humans derive many social cues from facial expressions, a robot’s head and face are the physical features that most significantly affect HRI.Footnote 33 People notice the same features in a robot face that they notice about a human one: eye color and shape, nose size, etc.,Footnote 34 and researchers already have a basic understanding of what esthetic features humans like or dislike in robots. For example, robots with big eyes and “baby faces” are perceived as naïve, honest, kind, unthreatening, and warm.Footnote 35 Researchers are also studying which features make robot heads and faces more or less likeable and persuasive.Footnote 36 Manipulating the relative size of the features on a robot’s head had a significant effect not only on study participants’ evaluation of a robot, but also on whether they trusted it and would be likely to follow its advice.Footnote 37 A robot with big eyes was perceived as warmer and more honest, and participants were thus more likely to follow its health advice.

II.A.4.b Physical Embodiment and Interactive Style

When interacting with physically embodied robots, human subjects report that interactions with responsive robots, those with animated facial expressions, social gaze, and/or mannerisms, feel more natural and enjoyable than interactions with unanimated robots.Footnote 38 Embodied robots with faces can be programmed to directly mirror subjects’ expressions, or to indirectly mirror these expressions based on the robot’s evaluation of the subject’s perceived security, arousal, and autonomy. Study participants rated indirect mirroring robots highest for empathy, trust, sociability, and enjoyment,Footnote 39 and rated indirect mirroring and mirroring robots higher than the non-mirroring robots in empathy, trust, sociability, enjoyment, anthropomorphism, likeability, and intelligence.Footnote 40

Generally, lifelike physical movements of robots, including “social gaze” (when a robot’s eyes follow the subject it is interacting with),Footnote 41 gestures, and human-like facial expressions, are highly correlated with anthropomorphic projection.Footnote 42 When those movements closely match humans’ non-verbal cues, humans perceive robots as more human-like. This matching behavior, exemplified through non-verbal cues, like facial expressions, gestures, e.g., nodding, and posture, is known as behavioral mimicry.Footnote 43 Behavioral mimicry is critical for establishing rapport and empathy in human interactions,Footnote 44 and this phenomenon extends to HRI as well.Footnote 45

II.B Other Cognitive Biases

A variety of other cognitive errors may distort fact-finding or the imposition of liability for the conduct of robots. For example, in experimental settings, subjects tended to blame human actors more than robots for the same conduct.

One study tested the allocation of blame for a hypothetical automobile accident in which a pedestrian has been killed by an automated car, and both the human driver and the automated system, a robot for our purposes, have made errors.Footnote 46 The “central finding is that in cases where a human and a machine share control of the car in hypothetical situations, less blame is attributed to the machine when both drivers make errors.”Footnote 47 In all scenarios, subjects attributed less blame to the automatic system when there was a human involved.

Other studies found that in experimental conditions subjects valued algorithmic predictions differently from human input. Coining the term “algorithmic appreciation,” the authors of one study found that lay subjects adhered more to advice when they believed it came from an algorithm rather than a person.Footnote 48 But this “appreciation” for the algorithm’s conclusions decreased when people chose between an algorithm’s estimate and their own.Footnote 49 Moreover, experienced professionals who made forecasts on a regular basis relied less on algorithmic advice than did lay people, decreasing the professionals’ accuracy. But other studies found “algorithmic aversion,” with subjects losing confidence in algorithmic forecasters more quickly than in human forecasters after seeing both make the same mistake.Footnote 50

III The Impact of the Psychology of HRI in Litigation

In this section, we assume that the psychological phenomena described above will occur outside the laboratory setting and, more specifically, in the courtroom. This is a significant assumption because it is difficult to perfectly extrapolate real-world behavior from experimental studies.Footnote 51

The cognitive errors associated with people’s tendency to anthropomorphize robots could distort the accuracy and fairness of the litigation process in multiple ways. The current prevalence of these errors may lead to the conclusion that the distortions arising from robot-generated evidence are no greater than those arising from other forms of evidence. Indeed, in some respects, robot-generated evidence might contribute to accuracy because it would be less subject to certain cognitive errors. There remain, however, difficult questions about how well the tools traditionally used to test accuracy in litigation can be adapted to robot-generated evidence, as well as questions about the distributional consequences of developing more persuasive robots.

III.A The Impact of Framing and Interactivity

Anthropomorphic framing and tailoring robots to preferences for certain attributes such as speech and voice patterns could distort and impair the accuracy of fact-finding in litigation. Anthropomorphic framing and design can cause humans to develop a false sense of trust and emotional attachment to a robot and may cause fact-finders to incorrectly attribute free will to it. These psychological responses could distort liability determinations if, e.g., jurors who anthropomorphized a robot held it, rather than its designers, responsible for its actions.Footnote 52 Indeed, in the automated car study discussed above,Footnote 53 because participants perceived the anthropomorphic car as being more thoughtful, they blamed it less than another car with the same automated driving capabilities. Anthropomorphism could also lead fact-finders to attribute moral blame to a robot. For example, in a study in which a robot incorrectly withheld a $20 reward from participants, nearly two-thirds of those participants attributed moral culpability to the robot.Footnote 54 Finally, tailoring voice and speech patterns to jurors’ preferences could improve a robo-witness’s believability, though these features would have no bearing on the reliability of the information provided.

On the other hand, the issues raised by anthropomorphization can be analogized to those already present in litigation. Fact-finders now use heuristics, or mental shortcuts, to evaluate a human witness based on her features, e.g., name, appearance, race, gender, mannerisms. In turn, this information allows jurors to form rapid and often unconscious impressions about the witness’s motivations, personality, intelligence, trustworthiness, and believability. Those snap judgments may be equally as unfounded as those a person would make about a robot based on its appearance and framing. And just as a robot’s programmed speech patterns may impact the fact-finder’s perception of its trustworthiness and believability, lay or expert human witnesses may be selected or coached to do the same thing. So, although robot-generated evidence and robo-witnesses may differ from their human counterparts, the issues their design and framing present in the litigation context are not entirely novel.

III.B The Impact of Robot Embodiment, Interactivity, and Appearance

Whether a robot is embodied and the form in which it is embodied have a significant impact on human perception. Assuming that these psychological responses extend to the litigation context, it may seem obvious that this would introduce serious distortions into the fact-finding process. But again, this problem is not unique to robots. As noted, humans apply the same unconscious heuristics to human faces, reacting more favorably depending on physical criteria, such as facial proportions, that have no necessary relationship to a witness’s truthfulness or reliability. Arguably, the same random distortions could occur for human or robot witnesses. Indeed, assuming equal access to this technology, perhaps the fact that all robot witnesses can be designed to generate positive reactions could eliminate factors that currently distort the fact-finding process in litigation. For example, jurors will not discount the evidence of certain robo-witnesses on grounds such as implicit racial bias, or biases against witnesses who are not physically attractive or well spoken.

III.C The Impact of Other Cognitive Biases

In litigation, other cognitive biases about robots or their algorithmic programming may affect either the attribution of fault or the assessment of the credibility of robot-generated evidence, particularly evidence that is generated by algorithms.

The study discussed earlier, which found a greater tendency to attribute fault to a human rather than an automated system, has clear implications for liability disputes involving automated vehicles. As the authors of the study noted, the convergence of their experimental results with “real world public reaction” to accidents involving automated vehicles suggests that their research findings would have external validity, and that “juries will be biased to absolve the car manufacturer of blame in dual error cases.”Footnote 55

One of the experiments finding “algorithmic appreciation,” which we characterize as the potential for overweighting algorithmic analysis, likely has a direct analog in litigation, where an algorithm may be seen as more reliable than a variety of human estimates.Footnote 56

III.D Testing the Fidelity of Robot-Generated Evidence in Litigation

Robot-generated evidence already plays a role in litigation proceedings. But how will that dynamic change as robots’ capabilities mature to the point of testifying for themselves? We explore the possibilities below.

III.D.1 Impediments to Cross-Examination

It is unclear how adaptable the techniques traditionally used to test a human witness’s veracity and reliability are to robot-generated evidence. In particular, the current litigation system relies heavily on cross-examination, based on the assumption that it allows the fact-finder to assess a witness’s motivations, behavior, and conclusions. Cross-examination assumes that a witness has motivations, morality, and free will. But robots possess none of those, though fact-finders may erroneously assume that they do. Thus, it may be impossible to employ cross-examination to evaluate the veracity and accuracy of a robo-witness’s testimony. Additionally, robot-generated evidence presents two distinct issues: the data itself, and the systems that create the data. Both need to be interrogated, which will require new procedures adapted to the kind of machine or robot evidence in question.Footnote 57

III.D.2 The Difficulty in Evaluating and Challenging Algorithms

Adversarial litigation may also be inadequate to assess defects in a robot’s programming, including the accuracy or bias of the algorithm.Footnote 58 The quality and accuracy of an algorithm depends on the training instructions and quality of the training data. Designers may unintentionally introduce bias into the algorithm, creating skewed results. For example, algorithms can entrench existing gender biases,Footnote 59 and facial recognition software has been criticized for racial biases that severely reduce its accuracy.Footnote 60

It can be extraordinarily difficult to fully understand how an algorithm works, particularly an unsupervised one, in order to verify its accuracy. Unlike supervised learning algorithms, an unsupervised learning algorithm trains on an unlabeled dataset and continuously updates its own training based on environmental stimuli, generally without any external alterations or verification.Footnote 61 Although its original code remains the same, the way an unsupervised learning algorithm treats input data may change based on this continuous training. Data goes in and results come out, but how the algorithm reached that result may remain a mystery. Sometimes even the people who originally programmed these algorithms do not fully understand how they operate.
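To make the distinction concrete, the following sketch is purely illustrative: it uses the open-source scikit-learn library and synthetic data rather than any forensic system discussed in this chapter, and it shows only that a supervised model can be scored against known labels while an unsupervised model returns cluster assignments with no ground-truth labels against which to check them.

```python
# Minimal, purely illustrative sketch (synthetic data) of the supervised/unsupervised
# contrast described above. Requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))             # 200 observations, 5 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # labels exist only for the supervised case

# Supervised: trained against known labels, so its error rate on held-out data
# can be measured directly.
clf = LogisticRegression().fit(X[:150], y[:150])
print("supervised accuracy:", clf.score(X[150:], y[150:]))

# Unsupervised: trained on unlabeled data. It returns cluster assignments, but
# there is no ground-truth label to score them against, and the meaning of each
# cluster is left to human interpretation.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print("cluster assignments:", km.labels_[:10])
# Note: this sketch does not model the continuous, environment-driven retraining
# described in the text; it illustrates only the absence of verifiable labels.
```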

Juries may struggle to understand complex technology of any kind, even with the assistance of experts, and unsupervised learning methods introduce a novel problem into the litigation process because even their creators may not know exactly how they work. This critical gap can only compound the difficulties introduced by anthropomorphism. Experts, even an algorithm’s creators, may not be able to understand, let alone explain, how it reached certain conclusions, making it nearly impossible to verify those conclusions in legal proceedings using existing methods.Footnote 62

III.D.3 The Advantages of Robot Memory

Although anthropomorphism can cause distortions, robot-generated evidence is not subject to other cognitive biases that currently impair fact-finding.Footnote 63

The most significant impediment to an accurate evaluation of testimony is the pervasive misunderstanding of how memories are formed and recalled. As a foundational matter, many people erroneously assume that our memories operate like recording devices, capturing all the details of a given event, etched permanently in some internal hard drive, available for instant recall at any moment.Footnote 64 But human memory formation is far more complex and fallible. Initially, our memories capture only a very small percentage of the stimuli in our sensory environment.Footnote 65 Because there are gaps, we often consciously or subconsciously look for filler information to complete the memory of a given event. Unlike a recording device, which would create a static memory, human memory is dynamic and reconstructive, meaning that post-event interactions or information may alter one’s recollection of an event.Footnote 66 This susceptibility to influence is called suggestibility.Footnote 67 Outside influences can disturb the stability and accuracy of eyewitness memory over time, causing witnesses to misremember details about events they witnessed.Footnote 68 Moreover, when people are engaged in memory recall, their recollections are highly suggestible, increasing the likelihood that outside influences will taint their memories.Footnote 69

Although the reliability of human memory depends on whether the witness accurately perceived the event in the first place, and whether the witness’s memory degraded over time or was polluted by post-event information, jurors typically do not understand the complexity, malleability, and selectivity of memories.Footnote 70 Jurors’ assessments are also subject to another cognitive error: the confidence-accuracy fallacy. Although jurors typically use eyewitness confidence as a proxy for reliability,Footnote 71 the correlation between witness confidence and accuracy is quite weak.Footnote 72 And because people tend to overestimate the reliability of their own memories,Footnote 73 witnesses are likely to be overly confident of their recollections, leading jurors to overvalue their testimony.

Robot testimonyFootnote 74 would not share these vulnerabilities and may therefore be more reliable than human testimony. The common but incorrect understanding of the nature of human memory is in fact a fairly accurate representation of the way robots create memories, in that their internal decision-making systems operate much like a recording device. As a result, the information they record is verifiable and provable without additional corroboration, unlike a person’s memory. Presumably, robot memory is not dynamic or suggestible. And in certain instances, a robot may actually capture a video recording of a given incident or interaction. As a result, a robo-witness’s recollection of a given event is likely to be more accurate than that of a human witness. Robot decision-making also takes into account more data than human decision-making processes can, which means a robot is capable of presenting a more thorough and accurate representation of what happened. A robot’s algorithms presumably would store the code from the time of the incident, recording, e.g., the environmental stimuli the robot perceived before making a fateful decision. In summary, robots capture more information than their human counterparts and do so more accurately, in part because they are less susceptible to post hoc manipulation or suggestibility. These advantages should enhance the accuracy of fact-finding. The potential to interrogate or challenge robot-generated evidence would depend on the nature of the robot and its memory function. For example, if a robot captures an incident by video recording, no further interpretation by third parties would be necessary. On the other hand, if the robot’s “memories” take the form of algorithm sequences, then an expert would be needed to interpret that data for a lay jury, akin to interpreting DNA test results.

Furthermore, because memory formation in robots operates like a recording device, confidence may indeed be a strong indicator of accuracy in future robot testimony.Footnote 75 Because the way robots form and recall memories is more similar to the commonly held understanding of memory, people’s existing heuristics are likely to help them to understand and evaluate robot testimony more accurately than human eyewitness testimony. As a result, robot witnesses ostensibly would be more reliable and improve the accuracy of litigation outcomes. A robot’s internal operating algorithm may also be able to produce a confidence interval for what it saw or why it made the decision it did. Experts could then interpret and explain this confidence interval to the lay jury.

III.D.4 The Preference for Direct Evidence and Eyewitness Testimony

Despite the well-documented unreliability of eyewitness testimony, several cognitive biases cause jurors to give it greater weight than circumstantial evidence, e.g., DNA evidence or fingerprints. Because of their preference for univocal evidence requiring fewer sequential inferences, jurors typically prefer direct evidence to circumstantial evidence.Footnote 76 Combined with the misunderstanding of memory described above, these phenomena threaten the jury’s fact-finding mission.

Several features that distinguish eyewitness and circumstantial evidence cause jurors to draw erroneous conclusions about their relative accuracy. First, direct testimony is told as a narrative, from a single perspective that allows jurors to imagine themselves in the witness’s shoes and to determine whether the proffered explanation is plausible. As a result, jurors tend to give greater weight to direct evidence like eyewitness testimony than to highly probative circumstantial evidence, such as DNA evidence, because direct evidence requires them to make fewer sequential inferences.Footnote 77 Eyewitness testimony is, at bottom, a story: “a moment-by-moment account that helps [jurors] imagine how the defendant actually committed it.”Footnote 78 In contrast, although abstract circumstantial evidence like DNA may be statistically more reliable than eyewitness testimony, it does not allow the juror to visualize an incident happening.Footnote 79 Direct evidence is also univocal; when an eyewitness recalls the crime, she speaks with one voice, frequently in a singular, coherent narrative. Circumstantial evidence, by contrast, allows for, and often requires, many inferences. In this way, it is polyvocal; multiple pieces of evidence provide different snippets of the crime.Footnote 80 Jurors must fit those pieces together into a narrative, which is more difficult than following a single witness’s story. Finally, eyewitness testimony can be unconditional. An eyewitness can testify that she is absolutely certain that the defendant committed the crime, or that the defendant admitted as much.Footnote 81 In contrast, circumstantial evidence is inherently probabilistic.Footnote 82

Jurors’ preference for direct evidence is driven by the simulation heuristic. The simulation heuristic postulates that people estimate how likely it is that something happened based on how easy it is for them to imagine it happening; the easier it is to imagine, the more likely it is to have happened.Footnote 83 Studies have shown that when jurors listen to witness testimony, they construct a mental image of an incident that none of them witnessed.Footnote 84 Relatedly, the ease of simulation hypothesis posits that the likelihood a juror will acquit the defendant in a criminal case depends on her ability to imagine that the defendant did not commit the crime.Footnote 85

A variety of factors could influence how the human preference for direct eyewitness testimony would interact with robot-generated testimony. As noted above, in experimental settings participants preferred and were more readily persuaded by embodied robots that were framed in an anthropomorphic fashion, and participants preferred certain attributes like faces and a mirroring conversational style. If a robot with the preferred design gave “eyewitness” testimony, it could provide a single narrative and speak in a confident univocal voice. Assuming that the same cognitive processes that guide jurors’ evaluations of direct and circumstantial evidence apply equally to such evidence, jurors would give it greater weight than circumstantial evidence. In the case of direct robot testimony, however, many of the inadequacies of human eyewitness testimony would be mitigated or eliminated altogether because robot memory is not subject to the many shortcomings of human memory. In such cases, the cognitive bias in favor of a single, confident, univocal narrative would not necessarily produce an inaccurate weighting of the evidence. However, as noted above, jurors would likely employ the same unconscious preferences for certain facial features, interaction, and speech that they apply to human witnesses.

On the other hand, robot-generated evidence not presented by a direct robo-witness might not receive the same cognitive priority, regardless of its reliability, as human eyewitness testimony. But framing and designing robots to enhance anthropomorphization, like a car with voice software and a name, might elevate evidence of this nature above other circumstantial or documentary evidence. Perhaps in this context, anthropomorphization could enhance accuracy by evening out the playing field for some circumstantial or documentary evidence that jurors might otherwise give short shrift.

III.D.5 Distributional Issues

Resource inequalities are already a serious problem in the US litigation system. Because litigation is so costly, particularly under the American Rule in which each party bears its own costs in civil litigation,Footnote 86 plaintiffs without substantial personal resources are often discouraged from bringing suit, and outcomes in cases that are litigated can be heavily impacted by the parties’ resources. Parties with greater resources may be more likely to present robot-generated evidence, and more likely to have robots designed to be the most persuasive witnesses. Disparate access to the best robot technology may well mean disparate access to justice, and this problem could increase over time as robot design is manipulated to take advantage of the distortions arising from heuristics and cognitive errors. On the other hand, as robots become ubiquitous in society, access to their “testimony” may become more democratized because more people across the socioeconomic spectrum may have regular access to them in their daily lives.

IV AI Testimony in Other Legal Proceedings

In this section, we consider the impact of HRI in legal proceedings other than litigation, specifically on ADR, with a focus on arbitration, and the specialized procedures of the NTSB. We do so for two reasons. First, in the United States, litigation is relatively rare, and most cases are now resolved by some form of ADR. That is likely to be true of disputes involving robo-witnesses and evidence about the actions of robots as well. Second, these alternatives address what Sections II and III identify as the critical problem in using robot-generated evidence in litigation: the tendency of humans, especially laypersons, to anthropomorphize robots and to misunderstand how human memory functions. In contrast, the arbitration process and the NTSB’s procedures assign fact-finding either to subject matter experts or to decision-makers chosen for their sophistication and their ability to understand the complex technology at issue. In this section, we describe the procedures employed by the NTSB and in arbitration and consider how these forums might address the potential distortions discussed in Sections II and III.

IV.A Alternative Dispute Resolution

One way to address the issues HRI would raise in litigation is to resolve these cases through ADR. ADR includes “any means of settling disputes outside of the courtroom,” but most commonly refers to arbitration or more informal mediation.Footnote 87 Arbitration resembles a simplified litigation process, in which the parties make opening statements and present evidence to an arbiter or panel of arbiters empowered to make a final decision binding on the parties and enforceable by courts.Footnote 88 Arbitration allows the parties to mutually select decision-makers with relevant industry or technical expertise. For example, in disputes arising from an AV, the parties could select an arbitrator with experience in the AV industry. We hypothesize that an expert’s familiarity with the technology could reduce the effect of the cognitive errors noted above, facilitate a more efficient process, and ensure a more accurate outcome. There is evidence that lay jurors struggle to make sense of complex evidence like MRI images.Footnote 89 An expert may be able to parse highly technical robot evidence more effectively. Likewise, individuals who are familiar with robot technology may be less likely to be influenced by the anthropomorphization that may significantly distort a lay juror’s fact-finding and attribution of liability.

There are reasons for concern, however, about substituting arbitration for litigation. Although arbitral proceedings are adversarial, they lack many of the procedural safeguards available in litigation, and opponents of arbitration contend that arbitrators may be biased against certain classes of litigants. They argue that “arbitrators who get repeat business from a corporation are more likely to rule against a consumer.”Footnote 90 More generally, consumer advocates argue that mandatory arbitration is anti-consumer because it restricts or eliminates altogether class action suits and because the results of arbitration are often kept secret.Footnote 91

IV.B Specialized Procedures: The NTSB

Another more specialized option would be to design agency procedures particularly suited to the resolution of issues involving robot-generated evidence. The procedures of the NTSB demonstrate how such specialized procedures could work.

The NTSB is an independent federal agency that investigates transportation incidents, ranging from the crashes of Boeing 737 MAX airplanes to run-of-the-mill highway collisions. The NTSB acts, first and foremost, as a fact-finder; its investigations are “fact-finding proceedings with no adverse parties.”Footnote 92 The NTSB has the power to issue subpoenas for testimony or other evidence, which are enforceable in federal court,Footnote 93 but it has no binding regulatory or law enforcement powers. It cannot conduct criminal investigations or impose civil sanctions, and its factual findings, including any determination about probable cause, cannot be entered as evidence in a court of law.Footnote 94

The NTSB’s leadership and its procedures reflect its specialized mission. The five board members all have substantial experience in the transportation industry.Footnote 95 Its investigative panels use a distinctive, cooperative “party system,” in which the subjects of the investigation are invited to participate in the fact-finding process, and incidents are investigated by a panel, run by a lead investigator who designates the relevant corporations or other entities as “parties.”Footnote 96 A representative from the party being investigated is often named as a member of the investigative panel to provide the investigative panel with specialized, technical expertise.Footnote 97 At the conclusion of an investigation, the panel produces a report of factual findings, including probable cause; it may also make safety recommendations.Footnote 98

The NTSB has two primary institutional advantages over traditional litigation: institutional competency and an incentive structure that fosters cooperation. First, unlike generalist judges or lay jurors, fact-finders at the NTSB are industry experts. Second, because the NTSB is prohibited from assigning fault or liability and its factual determinations cannot be admitted as evidence into legal proceedings, parties may have a greater incentive to disclose all relevant information. This would, in turn, promote greater transparency, informing consumers and facilitating the work of Congress and other regulators.

How would the NTSB respond to cases involving robot-generated evidence? Certain aspects of the NTSB as an institution may make its fact-finding process more accurate than litigation. First, finders of fact are a panel of industry and technical experts. Using experts who have either the education or the background to fully understand the technology means that an NTSB panel may be a more accurate fact-finder. Technical competence may also be a good antidote to the lay fact-finder’s tendency to anthropomorphize. The NTSB panel would also benefit from having the technology’s designers at its disposal, as both the designer and manufacturer of an AV could be named party participants to an investigation. Second, because the NTSB experts may have been previously exposed to the technology, they also may be less susceptible to the cognitive errors in HRI. They are more likely to understand, e.g., how the recording devices in an AV actually function, so they will have to rely less on heuristics to understand the issue and reach a sound conclusion.

On the other hand, the NTSB process has been criticized. First, critics worry that the party system may hamstring the NTSB, because party participants are often the only source of information for a given incident, although the NTSB can issue subpoenas enforceable by federal courts.Footnote 99 Second, because NTSB proceedings are cooperative, they do not benefit from the vetting process inherent in adversarial proceedings like litigation. Because the NTSB cannot make rules or undertake enforcement actions, critics worry the agency cannot do enough to address evolving problems. Finally, the NTSB may not have adequate resources to carry out its duties. Although it has the responsibility to investigate incidents in all modern modes of transportation, it is a fairly small agency with an annual operating budget of approximately $110 million and about 400 employees.Footnote 100 Its limited staff and resources mean that the agency must focus on high-volume incidents, that is, incidents involving widespread technology or transportation mechanisms.

Perhaps most important, the NTSB process is not designed to allocate liability or provide compensation to individual victims, and it is entirely unsuited to the criminal justice process in which the defendant has a constitutional right to trial by jury.

IV.C A Real-Life Example and a Thought Experiment
IV.C.1 The Fatal Uber Accident

A recent event provides a real-life example of robot-generated evidence involving the forums we have described. In March 2018, an AV designed by Uber and Volvo struck and killed a pedestrian pushing a bicycle in Tempe, Arizona.Footnote 101 During that drive, a person sitting in the driver’s seat, the safety driver, was supposed to be monitoring the car’s speed and looking out for any hazards in the road. But at the time of the crash, the safety driver was streaming TV on their phone. The car, equipped with multi-view cameras, recorded the entire incident, including the car’s interior.

The NTSB investigated the incident and concluded that both human error and an “inadequate safety culture” at Uber were the probable causes of the crash.Footnote 102 It found that the automated driving system (ADS) first detected the victim-pedestrian 5.6 seconds before the collision, initially classifying the pedestrian and her bike as a vehicle, then as a bicycle, and finally as an unknown object.Footnote 103 As a result, the system failed to correctly predict her forward trajectory. The car’s self-driving system and its environmental sensors had been working properly at the time of the crash, but its emergency braking system was not engaged, leaving braking to depend solely on human intervention.Footnote 104 Finally, Uber’s automated driving technology had not been trained to identify jaywalking pedestrians; in other words, the algorithm was not programmed to register an object as a pedestrian unless it simultaneously detected a crosswalk.Footnote 105

Local authorities in Arizona declined to criminally prosecute Uber,Footnote 106 but they did charge the safety driver with criminal negligence,Footnote 107 and at the time of writing these charges were still pending. The victim’s family settled with Uber out of court;Footnote 108 there was no arbitration or mediation.

If the civil case against Uber had gone to trial, how would the issues we have discussed play out, and how would the resolution by litigation compare to the NTSB’s investigation and findings? The vehicle’s video of the incident would reduce or eliminate concerns about the accuracy of human memory. Consequently, the AV’s “memory” would likely improve the accuracy of the proceeding. It is unclear whether anthropomorphization would play any role. As we understand it, the robot controlling the AV had no physical embodiment, and it was not designed to have verbal interactions with jurors or with the safety driver. There was no anthropomorphic framing such as an endearing name, assigned gender, or backstory. Thus, the jury’s tendency to anthropomorphize robots would likely play no significant role in its fact-finding or attribution of liability in this specific case. In a trial, the jury’s task would be to comprehend complex technical information about the programming and operation of the algorithm that controlled the car. And although jurors would have the assistance of expert witnesses, it is doubtful whether they could reach more accurate conclusions about the causes of the accident than the NTSB panel. The NTSB’s panel would readily comprehend the technical information, such as why the AV mischaracterized the pedestrian and her bike as an unknown object. Moreover, the jurors, presumably more than experts familiar with the technology, might be influenced by common cognitive biases to blame the human driver more than the AV.

IV.C.2 A Thought Experiment: Litigation Involving Fully Autonomous Robotaxis

Companies like Waymo and Cruise have begun deploying fully driverless taxis in certain cities. In June 2022, Cruise, a subsidiary of General Motors and supported by Microsoft, received approval to operate and charge fares in its fully driverless, fully autonomous “robotaxis” in parts of San Francisco.Footnote 109 The conditions under which these robotaxis can operate are limited. Cruise AVs are permitted to charge for driverless rides only during night-time hours, and are limited to a maximum speed of 30 miles per hour.Footnote 110 They can, however, operate in light rain and fog, frequent occurrences in San Francisco. Waymo, an Alphabet subsidiary, began carrying passengers in its robotaxis in less crowded Phoenix in 2020, and as of April 2023 it was giving free rides in San Francisco and awaiting approval to charge fares.Footnote 111 The potential safety benefits of autonomous taxis are obvious. A computer program is never tired, drunk, or distracted. And cars like Waymo’s are equipped with sophisticated technology like lidar (light detection and ranging), radar, and cameras that simultaneously surveil every angle of the car’s surroundings.

How would the psychology of HRI affect fact-finding and the allocation of liability if these driverless taxis were involved in accidents? Companies designing these robotaxis have many design options that might trigger various responses, including anthropomorphic projections and responses to the performance of the algorithms controlling the cars. They could seat an embodied, co-present robo-driver in the car; its features could be designed to evoke a variety of positive responses. Alternatively, and more inexpensively, the designers could create a virtual driver, not physically embodied, that would appear on a computer screen visible to the passengers. In either case, the robot driver could be given a name, a backstory, and an appealing voice to interact with the rider. The robotaxi driver would play the same social function as today’s Uber or taxi driver, but unlike their human counterparts, the robot drivers might play no role in actually operating the vehicle.

Design choices could affect ultimate credibility and liability judgments. For example, as experimental studies indicate, giving the car more anthropomorphic qualities (a name, an appearance, a backstory, and so on) would make it more likeable, and as a result, people may be more hesitant to attribute liability to it – particularly if there is a human safety driver in the car. And if both the automated car and a car with a human driver were in an accident, the experimental studies suggest that the human driver would be blamed more. The fact-finders’ evaluation of algorithmic evidence might also be affected by cognitive biases, including the tendency to discount algorithmic predictions once they have been shown to be in error, even if humans have made the same error.

This example also highlights other factors that may affect the ability of various fact-finders to resolve disputes arising from the complex and rapidly evolving technology in AVs. Arbitrators vary by specialty, and some may eventually specialize in disputes involving AVs. Finally, the NTSB is the most knowledgeable body that could handle disputes involving AVs. However, given the structural limitations of the agency, its determinations of fault are not legally enforceable against the parties involved.

V Conclusion

Human responses to robot-generated evidence will present unique challenges to the accuracy of litigation, as well as the transparency of the legal system and the perceptions of its fairness.

Robot design and framing have the potential to distort fact-finding both intentionally and unintentionally. Robot-generated evidence may be undervalued, e.g., because it is not direct evidence. But such evidence may also be overvalued because of design choices intended to thwart or minimize a robot’s liability or perceived responsibility, and thus the liability of its designers, manufacturers, and owners. Although there are human analogs involving witness selection and coaching, they are subject to natural limits, limits which largely do not apply to the ex ante design-a-witness problem we may see with robots. Additionally, cognitive biases may distort assessments of blame and liability when human and robot actors are both at fault, leading to the failure to impose liability on the designers and producers of robots.

Testing the accuracy of robot-generated evidence will also create new challenges. Traditional cross-examination is ill-suited to this evidence, which may lead to both inaccurate fact-finding and a lack of transparency in the process that could undermine public trust. Cognitive biases can also distort the evaluation of evidence concerning algorithms. The high cost of accessing the most sophisticated robots and mounting the means to challenge them can exacerbate concerns about the fairness and accuracy of the legal system, as well as access to justice. Accordingly, traditional trial techniques need to be adapted and new approaches developed, such as new testimonial safeguards.Footnote 112

But the news concerning litigation is not all bad. If it is possible to reduce the distorting effects arising from cognitive errors, robot-generated evidence could improve the accuracy of litigation, capturing more data initially and preserving it without the many problems that distort and degrade human memory.Footnote 113

Finally, alternative forums, such as arbitration and agency proceedings, can be designed to minimize the extent to which evidence is evaluated, and liability imposed, by fact-finders who lack familiarity with the technology in question.

7 Principles to Govern Regulation of Digital and Machine Evidence

Andrea Roth
I Introduction

Criminal prosecutions now routinely involve technologically sophisticated tools for both investigation and proof of guilt, from complex software used to interpret DNA mixtures, to digital forensics, to algorithmic risk assessment tools used in pre-trial detention, sentencing, and parole determinations. As Emily Silverman, Jörg Arnold, and Sabine Gless’s Chapter 8 explains, these tools offer not merely routine measurements, but also “evaluative data” akin to expert opinions.Footnote 1 These new tools, in critical respects, are a welcome addition to less sophisticated or more openly subjective forms of evidence that have led to wrongful convictions in the past, most notably eyewitness identifications, confessions, and statements of source attribution using “first generation”Footnote 2 forensic disciplines of dubious reliability, such as bite marks.Footnote 3

Nonetheless, this new generation of evidence brings new costs and challenges. Algorithmic tools offer uniformity and consistency, but potentially at the expense of equitable safety valves to correct the unjust results that would otherwise flow from mechanistic application of rules. Such tools also may appear more reliable or equitable than they are, as fact-finders fail to identify sources of error or bias because the tools appear objective and are shrouded in black box secrecy. Even with greater transparency, some results, such as the decisions of deep neural networks engaged in deep learning, will not be fully explainable without sacrificing the very complexity that is the ostensible comparative advantage of artificial intelligence (AI). The lack of explainability as to the method and results of sophisticated algorithmic tools has implications for accuracy, but also for public trust in legal proceedings and participants’ sense of being treated with dignity. As Sara Sun Beale and Hayley Lawrence note in Chapter 6 of this volume, humans have strong reactions to certain uses of robot “testimony” in legal proceedings.Footnote 4 Absent proper regulation, such tools may jeopardize key systemic criminal justice values, including the accuracy expressed by convicting the guilty and exonerating the innocent, fairness, public legitimacy, and softer values such as mercy and dignity.

In furtherance of these systemic goals, this chapter argues for four overarching principles to guide the use of digital and machine evidence in criminal justice systems: a right to front-end safeguards to minimize error and bias; a right of access both to government evidence and to exculpatory technologies; a right of contestation; and a right to an epistemically competent fact-finding process that keeps a human in the loop. The chapter offers legal and policy proposals to operationalize each principle.

Three caveats are in order. First, this chapter draws heavily on examples from the United States, a decentralized and adversarial system in which the parties themselves investigate the case, find witnesses, choose which evidence to introduce, and root out truth through contestation. Sabine Gless has described the many differences between the US and German approaches to machine evidence, distinguishing their adversarial and inquisitorial approaches, respectively.Footnote 5 Nonetheless, the principles discussed here are relevant to any system valuing accuracy, fairness, and public legitimacy. For example, although many European nations have a centralized, inquisitorial system, proposed EU legislation evinces concern over the rights of criminal defendants vis-à-vis AI systems, specifically the potential threat AI poses to a “fair” trial, the “rights of the defense,” and the right to be “presumed innocent,” as guaranteed by the EU Charter of Fundamental Rights.Footnote 6 As noted in Chapter 10 of this volume by Bart Custers and Lenneke Stevens, European nations are facing similar dilemmas when it comes to the regulation of digital evidence in criminal cases.Footnote 7

The second caveat is that digital and machine evidence is a wide-ranging and definitionally vague concept. Erin Murphy’s Chapter 9 in this volume offers a helpful taxonomy of such evidence that explains its various uses and characteristics, which in turn determine how such evidence implicates the principles in Section II.Footnote 8 Electronic communications and social media, e.g., implicate authentication and access concerns, but not so much the need for equitable safety valves in automated decision-making. Likewise, biometric identifiers may raise more privacy concerns than use of social media posts as evidence. The key characteristics of digital evidence as cataloged by Murphy also affect which principles are implicated. For example, data created by a private person, and possessed by Facebook, might implicate the right to exculpatory information and the Stored Communications Act,Footnote 9 while the resiliency, or lack thereof, of data such as body-worn camera footage might require the state to adopt more stringent preservation and storage measures, and to allow defendants access to e-discovery tools. So long as the principles are followed when they apply, the delivery of justice can be enhanced rather than jeopardized by digital and machine proof.

The third caveat is that this chapter does not write on a blank slate in setting forth principles to govern the use of technology in rendering justice. A host of disciplines and governing bodies have adopted principles for “ethical or responsible” use of AI, from the US Department of Defense to the Alan Turing Institute to the Council of Europe. Recent meta-studies of these various sets of principles have identified recurring values, such as beneficence, autonomy, justice, explainability, transparency, fairness, responsibility, privacy, expert oversight, stakeholder-driven legitimacy, and “values-driven determinism.”Footnote 10 More specifically, many countries already have a detailed legal framework to govern criminal procedure. In the United States, e.g., criminal defendants already have a constitutional right to compulsory process, to present a defense, to be confronted with the witnesses against them, to a verdict by a human jury, and to access to experts where necessary to a defense. But these rights were established at a time when cases largely depended on human witnesses rather than machines. The challenge here is not so much to convince nations in the abstract to allow a right to contest automated decision-making, but to explain how existing rights, such as the right of confrontation or right to pre-trial disclosure of the bases of expert testimony, might apply to this new type of evidence.

II The Principles
  • Principle I: The digital and machine evidence used as proof in criminal proceedings should be subject to front-end development and testing safeguards designed to minimize error and bias.

  • Principle I(a): Jurisdictions should acknowledge the heightened need for front-end safeguards with respect to digital and machine evidence, which cannot easily be scrutinized through case-specific, in-trial procedures.

To understand why the use of digital and machine evidence merits special front-end development and testing safeguards that do not apply to all types of evidence, jurisdictions should acknowledge that the current real-time trial safeguards built for human witnesses, such as cross-examination, are not as helpful for machine-generated proof.

A critical goal of any criminal trial is to ensure verdict accuracy by minimizing the chance of the fact-finder drawing the wrong inferences from the evidence presented. There are several different levers a system could use to combat inferential error by a jury. First, the system could exclude unreliable evidence so that the jury never hears it. Second, the system could implement front-end design and production safeguards to ensure that evidence is as reliable as it can be when admitted, or that critical contextual information about its probative value is developed and disclosed when the fact-finder hears it. Third, the system could allow the parties themselves to explore and impeach the evidence, that is, to attack its credibility or reliability. Fourth, the system could adopt proof standards that limit the fact-finder’s ability to render a verdict absent a threshold of proof, such as proof beyond a reasonable doubt, or a particular type or quantum of evidence.

For better or worse, the American system of evidence pursues accuracy almost entirely through trial and back-end safeguards, the third and fourth levers described above. Although the United States still clings to the rule excluding hearsay, understood as out-of-court statements offered for their truth, that rule has numerous exceptions. And while US jurisdictions used to have stringent competence requirements for witnesses, these have given way to the ability to impeach witnesses once they testify or once their hearsay is admitted.Footnote 11 The parties conduct such impeachment through cross-examination, physical confrontation, and admission of extrinsic evidence such as a witness’s prior convictions or inconsistent statements. In addition, the United States has back-end proof standards to correct for unreliable testimony, such as corroboration requirements for accomplice testimony and confessions. The US system has a similarly lenient admission standard with regard to physical evidence, requiring only minimal proof that an item such as a document or object is what the proponent says it is.Footnote 12

Nonetheless, there are particular types of witness testimony that do require more front-end safeguards, ones that could work well for digital and machine evidence too. One example is eyewitness identifications. If an identification is conducted under unnecessarily suggestive circumstances, a US trial court, as a matter of constitutional due process, must conduct a hearing to determine whether the identification is sufficiently reliable to be admitted against the defendant at trial.Footnote 13 Moreover, some lower US courts subject identification testimony to limits or cautionary instructions at trial, unless certain procedures were used during the identification, to minimize the risk of suggestivity.Footnote 14 Likewise, expert testimony is subjected to enhanced reliability requirements that question whether the method has been tested, has a known error rate, has governing protocols, and has been subject to peer review.Footnote 15 To a lesser extent, confession evidence is also subject to more stringent front-end safeguards, such as the requirement in some jurisdictions that stationhouse confessions be videotaped.Footnote 16

The focus on front-end safeguards in these specific realms is not a coincidence. Rather, it stems from the fact that the problems with such testimony are largely cognitive, subconscious, or recurring, rather than a matter of one-off insincerity, and therefore cannot be meaningfully scrutinized solely through cross-examination and other real-time impeachment methods.Footnote 17 These categories of testimony bear some of the same process-like characteristics that make digital and machine evidence difficult to scrutinize through cross-examination alone.

Even more so than these particular types of human testimony, digital and machine evidence bears characteristics that call for robust front-end development and testing safeguards before it reaches the courtroom. First, the programming of the algorithms that drive the outputs of many of the categories of proof discussed by Erin Murphy, including location trackers, smart tools, and analytical software tools, does not necessarily change from case to case.Footnote 18 Repeatedly used software can be subject to testing to determine its accuracy under various conditions. Second, unlike eyewitnesses and confessions, where the declarant in some cases might offer significant further context through testimony, little further context can be gleaned from in-court scrutiny of any of the categories of proof Murphy describes.Footnote 19 To be sure, a programmer or inputter could take the stand and explain some aspects of a machine’s output in broad strokes. But the case-specific “raw data,” “measurement data,” or “evaluative data”Footnote 20 of the machine is ultimately the product of the operation of the machine and its algorithms, not the programmer’s own mental processes, and it is the machine’s and algorithm’s operation that must also be scrutinized. In short, the accoutrements of courtroom adversarialism, such as live cross-examination, are hardly the “greatest legal engine ever invented for the discovery of truth”Footnote 21 when it comes to the conveyances of machines.

  • Principle I(b): Jurisdictions should implement and enforce, through admissibility requirements, certain minimal development and testing procedures for digital and machine evidence.

Several development and testing safeguards should be implemented for any software-driven system whose results are introduced in criminal proceedings. The first is robust, independent stress testing of the software. Such standards are available,Footnote 22 but are typically not applied, at least in the United States, to software created for litigation. For example, a software expert reviewing the code of the Alcotest 7110, a breath-alcohol machine used in several US states, found that it would not pass industry standards. He documented 19,500 errors, nine of which he believed “could ultimately [a]ffect the breath alcohol reading.”Footnote 23 A reviewing court held that such errors did not merit excluding the reading, in part because the expert could not say with “reasonable certainty” that the errors caused a false reading in the case at hand,Footnote 24 but the court did require modifications of the program for future use.Footnote 25 In addition, Nathaniel Adams, a computer scientist and expert in numerous criminal cases in the United States, has advocated for forensic algorithms to be subject to the industry testing standards of the Institute of Electrical and Electronics Engineers (IEEE).Footnote 26 Adams notes that STRMix, one of the two primary probabilistic genotyping programs used in the United States, had not been tested by a financially independent entity,Footnote 27 and the program’s creators have disclosed more than one episode of miscodes potentially affecting match statistics, thus far only in ways that would underestimate, rather than overestimate, a match probability.Footnote 28 Professor Adams’ work helped to inspire a recent bill in the US Congress, the Justice in Forensic Algorithms Act of 2021, which would subject machine-generated proof in criminal cases to more rigorous testing, along with pre-trial disclosure requirements, defense access, and the removal of trade secret privilege from proprietary code.Footnote 29 And exclusion aside, a rigorous software testing requirement reduces the chance that misleading or false machine conveyances will be presented at trial.

Jurisdictions should also enact mandatory testing and operation protocols for machine tools used to generate evidence of guilt or innocence, along the lines currently used for blood-alcohol breath-testing equipment.Footnote 30 Such requirements need not be a condition of admission; in the breath-alcohol context, the failure to adhere to protocols goes to weight, not admissibility.Footnote 31 Even so, the lack of validation studies showing an algorithm’s accuracy under circumstances relevant to the case at hand should, in some cases, be a barrier to admissibility. Jurisdictions should subject the conclusions of machine experts to validity requirements at the admissibility stage, similar to those imposed on experts at trial. Currently, the Daubert and Frye reliability/general acceptance requirements apply only to human experts; if the prosecution introduces machine-generated proof without a human interlocutor, the proof is subject only to general authentication and relevance requirements.Footnote 32

Requiring the proponent to show that the algorithm is fit for purpose through developmental and internal validation before offering its results is key not merely for algorithms created for law enforcement but for algorithms created for commercial purposes as well. For example, while Google Earth results have been admitted as evidence of guilt with no legal scrutiny of their reliability,Footnote 33 scientists have conducted studies to determine the program’s error rate with regard to various uses.Footnote 34 While error is inevitable in any human or machine method, this type of study should be a condition of admitting algorithmic proof.Footnote 35
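The following sketch, using entirely hypothetical numbers, illustrates the basic shape of such a validation exercise: a tool’s reported values are compared against independently established reference values, and its error rate under the stated conditions is summarized before the tool’s output is offered as proof.

```python
# Minimal sketch (hypothetical numbers) of a simple validation study: comparing a
# tool's reported measurements against known reference values and summarizing the
# error rate observed under the tested conditions.
known_distances_m = [100.0, 250.0, 500.0, 1000.0, 2000.0]   # surveyed ground truth
tool_reported_m   = [101.2, 248.9, 503.5,  996.0, 2011.0]   # hypothetical tool output

errors = [abs(reported - true) / true
          for reported, true in zip(tool_reported_m, known_distances_m)]
mean_relative_error = sum(errors) / len(errors)
worst_relative_error = max(errors)

print(f"mean relative error:  {mean_relative_error:.2%}")
print(f"worst relative error: {worst_relative_error:.2%}")
# A validation study of this kind does not eliminate error, but it documents an
# error rate under stated conditions that a court can weigh at admissibility.
```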

Such testing need not necessarily require public disclosure of source code or other levels of transparency that could jeopardize intellectual property interests. Instead, testing algorithms for forensic use could be done in a manner similar to testing of potentially patentable pharmaceuticals by the US Food and Drug Administration.Footnote 36 Others have made the point that scrutiny by “entrusted intermediate parties,” behind closed doors, would avoid any financial harm to developers.Footnote 37 Of course, for algorithms that are open source, such concerns would be lessened.

One limit on validation studies as a guarantor of algorithmic accuracy is that most studies do not speak to whether an algorithm’s reported score or statistic, along a range, is accurate. Studies might show that a software program boasts a low false positive rate in terms of falsely labeling a non-contributor as a contributor to a DNA mixture, but not whether its reported likelihood ratio might be off by a factor of ten. As two DNA statistics experts explain, there is no “ground truth” against which to measure such statistics:

Laboratory procedures to measure a physical quantity such as a concentration can be validated by showing that the measured concentration consistently lies with[in] an acceptable range of error relative to the true concentration. Such validation is infeasible for software aimed at computing a [likelihood ratio] because it has no underlying true value (no equivalent to a true concentration exists). The [likelihood ratio] expresses our uncertainty about an unknown event and depends on modeling assumptions that cannot be precisely verified in the context of noisy [crime scene profile] data.Footnote 38

But systems are not helpless in testing the accuracy of algorithm-generated credit scores or match statistics. Rather, such results must be scrutinized using other methodologies, such as more complex studies that go beyond simply determining false positive rates, stress testing of software, examination of source code by independent experts, and assessment of whether various inputs, such as assumptions about the values of key variables, are appropriate.
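A stylized illustration of the quoted point, with hypothetical figures, shows why validating a likelihood ratio differs from validating a physical measurement: the reported statistic turns on an assumed population frequency that has no observable “true” value against which it can be checked.

```python
# Minimal sketch (hypothetical numbers) of how a reported likelihood ratio depends
# on modeling assumptions, as the quoted passage explains.
def likelihood_ratio(p_evidence_if_contributor: float,
                     p_evidence_if_noncontributor: float) -> float:
    """LR = P(evidence | suspect contributed) / P(evidence | unknown person contributed)."""
    return p_evidence_if_contributor / p_evidence_if_noncontributor

# Under the prosecution hypothesis, the probability of observing the evidence is
# taken here as 1.0. Under the defense hypothesis, it depends on an assumed
# frequency of the matching profile in the relevant population -- an input with
# no "true concentration" equivalent against which to validate it.
for assumed_frequency in (0.001, 0.0001, 0.00001):
    lr = likelihood_ratio(1.0, assumed_frequency)
    print(f"assumed profile frequency {assumed_frequency:.5f}: LR = {lr:,.0f}")
# The code and logic are identical in each run; only the modeling assumption
# changes, yet the reported statistic shifts by orders of magnitude.
```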

  • Principle I(c): Jurisdictions should explicitly define what is meant by algorithmic fairness for purposes of testing for, and guarding against, bias.

Algorithms should also be tested for bias. The importance of avoiding racial and other bias in algorithmic decision-making is perhaps obvious, given that fairness is an explicitly stated value in nearly all promulgated AI standards in the meta-studies referenced in the introduction to this chapter. In addition, racial, gender, and other kinds of bias might trigger legal violations as well as ethical or policy concerns. To be sure, the Equal Protection Clause of the Fourteenth Amendment to the US Constitution guards only against state action that intentionally treats people differently because of a protected status, so if an algorithm simply has a disparate impact on a group, it will likely not be viewed as an equal protection violation. However, biased algorithms used in jury selection could violate the requirement that petit juries be drawn from a fair cross section of the population, and biased algorithms used to prove dangerousness or guilt at trial could violate statutory anti-discrimination laws or reliability-based admissibility standards.

In one highly publicized example of algorithmic bias from the United States, ProPublica studied Northpointe’s post-trial risk assessment tool Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) and determined that the false positive rate, i.e., the rate at which subjects labeled “dangerous” did not in fact reoffend, was much higher for Black subjects than for White subjects.Footnote 39 At the same time, however, other studies, including by Northpointe itself, noted that the algorithm is, in fact, racially non-biased if the metric is whether race has any predictive value in the model in determining dangerousness.Footnote 40 As Northpointe notes, Black and White subjects with the same risk score present the same risk of reoffending under the model.Footnote 41 The upshot was not that ProPublica was wrong in noting the differences in false positive rates; it was that ProPublica judged the algorithm’s racial bias by only one particular measure.

The COMPAS example highlights the problems of testing algorithms for fairness without defining terms. As others have explained, it is impossible to have both equal false positive rates and predictive parity where two groups have different base rates.Footnote 42 So, in determining whether the algorithm is biased, one needs to decide which measure is the more salient indicator of the type of bias the system should care about. Several commentators have noted possible differences in definitions of algorithmic fairness as well.Footnote 43 Deborah Hellman argues that predictive parity alone is an ill-suited measure of algorithmic fairness because it relates only to beliefs, not outcomes.Footnote 44 In Hellman’s view, a disparate false positive rate between groups is highly relevant to proving, though not dispositive of, normatively troubling unfairness.Footnote 45 While not all jurisdictions will agree with Hellman’s take, the point is that algorithm designers should be aware of different conceptions of fairness, be deliberate in choosing a metric, and ensure that algorithms in criminal proceedings are fair under that metric. Jurisdictions could require what Osagie Obasogie has termed “racial impact statements” in the administrative law context,Footnote 46 to determine the effect of a shift in decision-making on racial groups. The Council of Europe has made a similar recommendation, calling on states to conduct “human rights impact assessments of AI applications” to assess “risks of bias/discrimination … with particular attention to the situation of minorities and vulnerable and disadvantaged groups.”Footnote 47
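A worked numerical sketch, using hypothetical figures and the simplifying assumption that the tool detects true reoffenders at the same rate in both groups, illustrates the trade-off: if predictive parity (equal positive predictive value) holds across two groups with different base rates of reoffending, the false positive rates must diverge.

```python
# Worked numeric sketch (hypothetical numbers) of the point above: a tool that
# satisfies predictive parity (equal PPV) and has the same true positive rate in
# two groups with different base rates must produce different false positive rates.
def false_positive_rate(base_rate: float, ppv: float, tpr: float, n: int = 1000) -> float:
    actual_positives = base_rate * n
    true_positives = tpr * actual_positives
    false_positives = true_positives * (1 - ppv) / ppv   # implied by the fixed PPV
    actual_negatives = (1 - base_rate) * n
    return false_positives / actual_negatives

# Same hypothetical tool in both groups: PPV = 0.7, true positive rate = 0.7.
for group, base_rate in (("group A", 0.50), ("group B", 0.20)):
    print(group, "FPR =", round(false_positive_rate(base_rate, ppv=0.7, tpr=0.7), 3))
# group A FPR = 0.3, group B FPR = 0.075: predictive parity holds, yet the false
# positive rates diverge because the base rates differ.
```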

Finally, in determining algorithmic fairness, decision-makers should judge algorithms not in a vacuum, but against existing human-driven decision-making processes. For example, court reporters have been known to mistakenly transcribe certain dialects, such as African American Vernacular English (AAVE), in ways that matter to fact-finding in criminal proceedings.Footnote 48 If an AI system were to offer a lower, even if non-zero, error rate with regard to mistranscriptions of AAVE, the shift toward such systems, at least a temporary one subject to continued testing and oversight, might reduce, rather than exacerbate, bias.Footnote 49

  • Principle II: Before trial or other relevant proceeding, the parties should have meaningful and equitable access to digital and machine evidence material to the proceeding, including exculpatory technologies and data.

  • Principle II(a): Pretrial disclosure requirements related to expert testimony should apply to digital and machine conveyances that, if asserted by a human, would be subject to such requirements.

Because digital and machine evidence cannot be cross-examined, parties cannot use the in-court trial process itself to discover the infirmities of algorithms or possible flaws in their results or opinions. As Edward Cheng and Alex Nunn have noted, enhanced pre-trial discovery must in part take the place of in-court discovery with regard to process-based evidence like machine conveyances.Footnote 50 Such enhanced discovery already exists in the United States for human experts, precisely because in-court examination alone is not a meaningful way for parties to understand and prepare to rebut expert testimony. Specifically, parties in criminal cases are entitled by statute to certain information with regard to expert witnesses, including notice of the basis and content of the expert’s testimony and the expert’s qualifications.Footnote 51 Disclosure requirements in civil trials are even more onerous, requiring experts to prepare written reports that include the facts or data relied on.Footnote 52 Moreover, proponents of expert testimony must not discourage experts from speaking with the opposing party,Footnote 53 and in criminal trials, proponents must also disclose certain prior statements, or Jencks material, of witnesses after they testify.Footnote 54 These requirements also facilitate parties’ ability to consult with their own experts to review the opposing party’s evidence or proffered expert testimony.

Using existing rules for human experts as a guide, jurisdictions should require that parties be given access to the following:

  (1) The evidence and algorithms themselves, sufficient to allow meaningful testing of their assumptions and running the program with different inputs. One probabilistic genotyping software company, TrueAllele, offers defendants access to its program, with certain restrictions, albeit only for a limited time and without the source code.Footnote 55 This sort of “black box tinkering” not only allows users to “confront” the code “with different scenarios,” thus “reveal[ing] the blueprints of its decision-making process,”Footnote 56 but also approximates the posing of a hypothetical to a human expert (a minimal sketch of such tinkering appears below, after this list). Indeed, the ability to tinker might be just as important as access to source code; data science scholars have written about the limits of transparency and the superior promise of reverse engineering in understanding how inputs relate to outputs.Footnote 57 Along these lines, Jennifer Mnookin has argued that a condition for admissibility of computer simulations should be that “their key evidence-based inputs are modifiable,” allowing the opposing party to “test the robustness of the simulation by altering the factual assumptions on which it was built and seeing how changing these inputs affects the outputs.”Footnote 58

  (2) The training or software necessary to use or test the program. In the United States, criminal defendants have reported that certain trainings are off limits to non-law-enforcement personnel, e.g., training for using the Cellebrite program to extract digital evidence from a cell phone, or for using DNA genotyping software. Moreover, only certain defense experts are able to buy the software for their own use, and some academic researchers have been effectively denied research licenses to study proprietary forensic software. Instead, the defense and academic communities should presumptively be given a license to access all software used by the government in generating evidence of guilt, to facilitate independent validity testing.

  (3) A meaningful account of the assumptions underlying the machine’s results or opinion, as well as the source code and prior output of software, where necessary to a meaningful understanding of those assumptions. Human experts can be extensively questioned both before and during trial, offering a way for parties to understand and refute their methods and conclusions. Digital and machine evidence cannot be questioned in the same way, but proponents should be required to disclose the same type of information about methods and conclusions that a machine expert witness would offer, if it could talk. Likewise, Article 15(1)(h) of the General Data Protection Regulation (GDPR)Footnote 59 gives a data subject the right to know of any automated decision-making to which he is subject and, if so, the right to “meaningful information about the logic involved.” While the GDPR may apply only to private parties rather than criminal prosecutions, the subject’s dignitary interest in understanding the machine’s logic would presumably be even greater in the criminal realm.

    In particular, where disclosure of source code is necessary to meaningful scrutiny of the accuracy of machine results,Footnote 60 the proponent must allow access. As discussed in Principle I, source code might be important in particular to scrutinize scores or match statistics, where existing studies reveal only false positive rates. A jurisdiction should also require disclosure of prior output of the machine, covering the same subject matter as the machine results being admitted.Footnote 61 For human witnesses, such prior statements must be disclosed in many US jurisdictions to facilitate scrutiny of witness claims and impeachment by inconsistency. For machines, parties should have to disclose, e.g., the results of all prior runs of DNA software on a sample, all potentially matching reference fingerprints reported by a database using a latent print from a crime scene,Footnote 62 or calibration data from breath-alcohol machines.Footnote 63

  (4) Access to training data. Defendants and their experts should have access to underlying data used by the machine or algorithm in producing its results. In countries with inquisitorial as compared to adversarial systems, defendants should have access to “any data that is at the disposal of the court-appointed expert.”Footnote 64 For example, for a machine-learning model labeling a defendant a “sexual psychopath” for purposes of a civil detention statute, the defendant should have access to the training dataset. Issues of privacy, i.e., the privacy of those in the dataset, have arisen, but are not insurmountable.Footnote 65

To be sure, access alone does not guarantee that defendants will understand what they are given. But access is a necessary condition to allowing defendants to consult with experts who can meaningfully study the algorithms’ performance and limits.
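
As a concrete illustration of the “black box tinkering” mentioned in item (1), the sketch below treats the proprietary program as an opaque function and re-runs it while varying a single contested assumption. The function and parameter names are hypothetical placeholders and do not describe the interface of TrueAllele or any other real product.

```python
# Hypothetical sketch of black-box tinkering: probe an opaque program by
# re-running it with varied inputs and observing how the output moves.

def run_proprietary_model(evidence_profile, drop_out_rate):
    # Toy stand-in so the sketch runs; in practice this would invoke the
    # vendor's software, whose source code is unavailable to the defense.
    return evidence_profile["baseline_statistic"] * (1.0 - drop_out_rate)

def sensitivity_sweep(evidence_profile, rates):
    """Re-run the black box across a range of assumed drop-out rates."""
    return {rate: run_proprietary_model(evidence_profile, rate) for rate in rates}

case_profile = {"baseline_statistic": 1200.0}  # invented value for illustration
for rate, statistic in sensitivity_sweep(case_profile, [0.01, 0.05, 0.10, 0.20]).items():
    print(f"assumed drop-out rate {rate:.2f} -> reported statistic {statistic:,.0f}")
```

Large swings in the reported statistic as a single assumption is varied would suggest that the result is assumption-driven, which is precisely the kind of insight that posing hypotheticals to a human expert is meant to elicit.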

  • Principle II(b): Jurisdictions should not allow claims of trade secret privilege or statutory privacy interests to interfere with a criminal defendant’s meaningful access to digital and machine evidence, including exculpatory technologies and data.

While creators of proprietary algorithms routinely argue that source code is a trade secret,Footnote 66 this argument should not shield code from discovery in a criminal case, where the code is material to the proceedings.Footnote 67 Of course, if proprietors can claim substantive intellectual property rights in their algorithms, those rights are still enforceable through licensing fees and civil lawsuits.

Likewise, criminal defendants should have meaningful access to exculpatory digital and machine evidence, including the ability to subpoena witnesses who can produce such evidence in criminal proceedings where such evidence is material. Rebecca Wexler has explored the asymmetries inherent in US statutes such as the Stored Communications Act, which shields electronically stored communications from disclosure and has an exception for “law enforcement,” but not for criminal defendants, however material the communications might be to establishing innocence. Such asymmetries are inconsistent not only with basic adversarial fairness, but arguably also with the Sixth Amendment right to compulsory process.Footnote 68

  • Principle II(c): Jurisdictions should apply a presumption in favor of open-source technologies in criminal justice.

In the United States, the public has a constitutional right of access to criminal proceedings.Footnote 69 With regard to human witnesses, the public can hear the witnesses testify and determine the strength and legitimacy of the state’s case. The public should likewise be recognized as a stakeholder in the development and use of digital and machine evidence in criminal proceedings. The Council of Europe’s guidelines for use of AI in criminal justice embrace this concept, requiring Member States to “meaningfully consult the public, including civil society organizations and community representatives, before introducing AI applications.”Footnote 70

The most direct way to ensure public scrutiny of such evidence would be through open-source software. Scholars have discussed the benefits of open-source software in terms of facilitating “crowdsourcing”Footnote 71 and “ruthless public scrutiny”Footnote 72 as means of testing models and algorithms for hidden biases and errors. Others have gone further, arguing that software should be open source whenever used in public law.Footnote 73 Public models would have the benefit of being “transparent” and “continuously updated, with both the assumptions and the conclusions clear for all to see.”Footnote 74 States could encourage the adoption of open-source software through drastic means, such as excluding the output of closed-source tools from adjudication, or through more modest means, such as offering monetary incentives or prizes for the development of open-source replacements.

  • Principle II(d): Jurisdictions should make investigative technologies equally available to criminal defendants for potential exculpatory purposes, regardless of whether the state used the technology in a given case.

As Erin Murphy notes in Chapter 9 of this volume, defendants have two compelling needs with regard to digital and machine evidence: a meaningful chance to attack the government’s proof, and a meaningful chance to discover and present “supportive defense evidence.”Footnote 75 Just as both defendants and prosecutors have the ability to interview and subpoena witnesses, defendants should have an equal ability to wield new technologies that are paid for by the state when prosecutors seek to use them. If a defendant is accused of a crime based on what he believes to be a human analyst’s erroneous interpretation of a complex DNA mixture, the defendant should be given the ability to use a probabilistic genotyping program, like TrueAllele, to attack these results. Of course, this access would be costly, and might reasonably be denied in cases where it bears no relevance to the defense, as determined ex parte by a judge. But if defendants have a due process right to access to defense experts where critical to their defense,Footnote 76 they should have such a right of access to exculpatory algorithms as well.

  • Principle III: Criminal defendants should have a meaningful right of contestation with respect to digital and machine evidence including, at a minimum, a right to be heard on development and testing procedures and meaningful access to experts.

Much has been written about a right of contestation by data subjects with regard to results of automated decision-making processes.Footnote 77 In the US criminal context, defendants already enjoy, at least in theory, a right to present a defense, encompassing a cluster of rights, including the right to be confronted by the witnesses against them, to testify in their own defense, and to subpoena and present witnesses in their favor. In the United States, a criminal defendant’s right of contestation essentially encompasses everything already discussed with regard to access to the state’s evidence, as well as to some exculpatory electronic communications. In addition, the US Supreme Court has held that the right to present a defense exists even where the government presents scientific evidence of guilt that a trial judge might deem definitive. The fact that an algorithm offers compelling evidence of guilt cannot preclude a defendant from offering a defense case.Footnote 78

In addition to pre-trial access to the evidence itself, and information about its assumptions and processes, other rights that are key to a meaningful ability to contest the results of digital and machine evidence include the ability to consult experts where necessary. David Sklansky has argued that a right to such expert testimony, and not merely in-court cross-examination, should be deemed a central part of the Sixth Amendment right of confrontation.Footnote 79

The importance of a right of contestation in the algorithmic design process might be less obvious. But in a changing world in which machine evidence is not easily scrutinized at the trial itself, the adversarialism upon which common law systems are built might need to partially shift from the trial stage to the design and development stage. Carl DiSalvo has coined the term “adversarial design”Footnote 80 to refer to design processes that incorporate political contestation among different stakeholders. While adversarial design would not be a case-specific process, it could still involve representatives from the defense community. Others have suggested appointing a “defender general” in each jurisdictionFootnote 81 who could inject adversarial scrutiny into various recurring criminal justice issues at the front end. Perhaps such a representative could oversee defense involvement in the design, testing, and validation of algorithms. This process would supplement, not supplant, case-specific machine access and discovery.

The right of contestation with regard to sophisticated AI systems, the methods of which may well never be meaningfully understood by the parties, might also need to incorporate a right to delegated contestation, in the form of the right to another machine’s scrutiny of the results. Other scholars have noted the possibility of “reversible” algorithms that would audit themselves or each other,Footnote 82 or have suggested that one machine opinion alone should be deemed legally insufficient for a conviction, in the absence of corroboration from a second expert system.Footnote 83

At the trial itself, the right of contestation should first include the right to argue for exclusion of the evidence on reliability (Frye/Daubert) and/or authenticity grounds. In the US federal system, proponents of digital and machine evidence must present sufficient evidence to persuade the fact-finder that the evidence is what the proponent says it is, e.g., that an email is from a particular sender.Footnote 84 In China, courts have used blockchain technology to facilitate authentication of electronically stored information.Footnote 85 Jurisdictions’ authentication methods may reasonably change as malfeasors’ ability to falsify evidence evolves. Likewise, litigants should have the right to insist on exclusion of machine evidence if inputs are not proven accurate. For example, in the United Kingdom, a “representation” that is made “other than by a person” but that “depends for its accuracy on information supplied (directly or indirectly) by a person” is not admissible in criminal cases without proof that the “information was accurate.”Footnote 86 In some cases, this showing will require testimony from the inputter.Footnote 87

  • Principle IV: Criminal defendants should have a right to a factfinding process that is epistemically competent but that retains a human in the loop, so that significant decisions affecting their liberty are not entirely automated.

  • Principle IV(a): While parts of the criminal process can be automated, human safety valves must be incorporated into the process to ensure a role for equity, mercy, and human moral judgment.

Both substantive criminal law and criminal procedure in the United States have become more mechanical over the past few decades, from mandatory arrest laws, to sentencing guidelines, to laws criminalizing certain quantities of alcohol in drivers’ blood.Footnote 88 The more mechanical that the system becomes on the front end via, e.g., mandatory arrest, prosecution, liability rules, and sentencing, the more that safety valves such as prosecutorial, fact-finder, and sentencing discretion become critical to avoid inequities, i.e., results that are legal but unjust.Footnote 89 Moreover, mechanical regimes reduce the possibility of mercy, understood to mean leniency or grace, beyond what a defendant justly deserves. While mercy may be irrational, it is a pedigreed and “important moral virtue” that shows compassion and a shared humanity.Footnote 90

As digital and machine evidence accelerate the mechanization of justice, jurisdictions should ensure that human actors are still able to exercise equity and mercy at the charging, guilt, and/or punishment stages of criminal justice. Not only are humans needed to ensure that laws are not applied mechanically. They are needed because they are literally human – they bring a human component to moral judgment that is necessary, if not for dignity, then at least for public legitimacyFootnote 91 and, in turn, for enforcement of criminal law.Footnote 92 In the United States, scholars have written since the 1970s of the illegitimacy of verdicts based solely on “naked statistical evidence,” based on personhood concerns.Footnote 93 Moreover, humans add to the fact-finding process as well, rendering AI systems fairer without having to make such systems less accurate through simplification.Footnote 94 Corroborating these observations, recent AI guidelines and data privacy laws reflect the public’s desire to keep humans in the loop with regard to automated decision-making, from the Council of Europe’s call to “ensure that the introduction, operation and use of AI applications can be subject to effective judicial review,”Footnote 95 to the EU Directive prohibiting processes that produce an “adverse legal effect” on a subject “based solely on automated processing,” without appropriate “safeguards for the rights and freedoms of the data subject, at least the right to obtain human intervention on the part of the controller.”Footnote 96

More concretely, criminal liability should not be based solely on an automated decision. Red-light cameras are the closest the United States has come to fully automated liability, but thus far, such violations end only in a mailed traffic ticket rather than a criminal record. Moreover, in jurisdictions with juries, the power of jury nullification should continue undisturbed. It may well be that jurors’ ability to decide historical facts, e.g., “was the light red?”, could be curtailed, so long as their ability to decide evaluative questions, e.g., “did the defendant drive ‘recklessly’?”, is preserved.Footnote 97 Indeed, some historical fact-finding might be removed from lay jurors, if they lack the “epistemic competence” to assess the evidence’s probative value.Footnote 98 Jurisdictions could still ensure that humans remain in the loop by disallowing machine experts from giving dispositive testimony on ultimate questions of fact,Footnote 99 prohibiting detention decisions based solely on a risk assessment tool’s score, and requiring a human expert potentially liable for injustices caused by inaccuracies to vouch for the results of any machine expert, before introducing results in a criminal proceeding.

  • Principle IV(b): Jurisdictions should ensure against automation complacency by developing effective human–machine interaction tools.

Keeping a human in the loop would be useless if that human deferred blindly to a machine. For example, if sentencing judges merely rubber-stamped the scores of risk assessment tools, there would be little reason to ensure that judges remain in the loop.Footnote 100 Likewise, if left to their own devices, juries might irrationally defer to the apparent objectivity of machines.Footnote 101 A human-in-the-loop requirement should therefore entail the development of tools to guard against automation complacency. One underused tool in this regard is jury instructions. For example, where photographs are admitted as silent witnesses, the jury hears little about lens, angle, speed, placement, cameraperson bias, or other variables that might lead it to draw a false inference from the evidence. The jury should be educated about the effect of these variables on the image it is assessing.Footnote 102 Ultimately, jurisdictions should draw from the fields of human factors engineering and human–computer interaction and collaboration in designing ways to ensure a systems approach that keeps humans in the loop while leveraging the advantages of AI.

  • Principle IV(c): Jurisdictions should establish a formal means for stakeholders to challenge uses of digital and machine evidence that are fundamentally inconsistent with principles of human-delivered justice.

Keeping a human in the loop also necessarily means taking steps to ensure against inappropriate uses of AI that threaten softer systemic values like dignity. For example, certain machines might be condemned as inherently dehumanizing, such as the penile plethysmographFootnote 103 or deception detection.Footnote 104 Just as some modes of obtaining evidence are rejected as violating substantive due process, such as forcibly pumping a suspect’s stomach to find evidence of drug use,Footnote 105 modes of determining guilt should be rejected if the public views them as inhumane. Other jurisdictions might decide that the “right to explanation” is so critical to public legitimacy that overly complex AI systems must be abandoned in criminal adjudication, even if such systems promise more accuracy.Footnote 106 Whatever approach jurisdictions adopt regarding these issues, they should resolve such issues first, and only then look for available technological enhancements of proof, rather than vice versa. Numerous scholars have written about the seduction of quantification and measurement,Footnote 107 and the Council of Europe expressly included in its guidelines for the use of AI in criminal justice that Member States should “ensure that AI serves overall policy goals, and that policy goals are not limited to areas where AI can be applied.”Footnote 108

III Conclusion

The principles for governing digital and machine evidence articulated in this chapter attempt to move beyond the adversarial/inquisitorial divide, and incorporate the thoughtful recent work of so many scholars, policymakers, and stakeholders worldwide in promulgating guidelines for the ethical and benevolent use of AI in decision-making affecting people’s lives. Applied to a common law adversarial criminal system such as that in the United States, these principles may manifest in existing statutory and constitutional rights, albeit in new ways. Applied to other nations’ systems, these principles will manifest differently, perhaps because such systems already recognize the need for “out of court evidence gathering”Footnote 109 to ensure meaningful evaluation of complex evidence. On the other hand, as Sabine Gless has suggested, continental systems might find that party-driven examinations have an underappreciated role to play in ensuring reliability of machine evidence.Footnote 110

As AI becomes more sophisticated, one key goal for all justice systems will be to ensure that AI is not merely given an objective to accomplish, such as “determine whether this witness is lying” or “determine if this person contributed to this DNA mixture,” but is programmed to continually look to humans to express and update their preferences. If the former occurs, AI will preserve itself at all costs and may engage in behavior antithetical to human values in order to reach its objective.Footnote 111 Only if machines are taught to continually seek feedback can AI remain benevolent. We cannot simply program machines to achieve the goals of criminal justice – public safety, social cohesion, equity, the punishment of the morally deserving, and the vindication of victims. We will have to ensure that humans have the last word on what justice means and how to achieve it.

8 Robot Testimony? A Taxonomy and Standardized Approach to the Use of Evaluative Data in Criminal Proceedings

Emily Silverman , Jörg Arnold , and Sabine Gless Footnote *
I Drowsy at the Wheel?

In 2016, the Swiss media reported a collision involving a sports car and a motor scooter that resulted in serious injuries to the rider of the scooter.Footnote 1 Charges were brought against the car driver on the grounds that he was unfit to operate his vehicle. Driving a motor vehicle while unfit to do so is a crime pursuant to the Swiss Traffic CodeFootnote 2 and one for which negligence suffices to establish culpability.Footnote 3 Although the accused denied consciously noticing that he was too tired to drive, prosecuting authorities claimed that he should have been aware of his unfitness, as the car’s driving assistants had activated alerts several times during the journey.Footnote 4 Media coverage of the event did not report whether or how the accused defended himself against these alerts.

We refer to these alerts as “evaluative data” because they combine data with some form of robot evaluation. We argue that acknowledging this novel category of evidence is necessary because driving assistants and other complex information technology (IT) systems outfitted with artificial intelligence (AI) do more than simply employ sensors that engage in relatively straightforward tasks such as measuring the distance between the vehicle and lane markings. Driving assistants also evaluate data associated with indicators that they deem potential signs of fatigue, such as erratic steering movements or a human driver’s drooping eyelids. They interpret this data and decide autonomously whether to alert the driver to drowsiness. When introduced into a criminal proceeding, this evaluative data can be referred to as a kind of robot testimony because it conveys an assessment made by a robot based on its autonomous observation.

This chapter aims to alleviate deficits in current understandings of the contributions such testimony can make to truth-finding in criminal proceedings. It explains the need to vet robot testimony and offers a taxonomy to assist in this process. In addition to a taxonomy of robot testimony, the chapter proposes a standardized approach to the presentation and evaluation of robot testimony in the fact-finding portion of criminal trials. Analysis focuses on a currently hypothetical criminal case, in which a drowsiness alert is proffered as evidence in a civil law jurisdiction such as Switzerland or Germany.

The chapter first introduces robot testimony and outlines the difficulties it poses when offered as evidence in criminal proceedings (Section II). Second, we propose a taxonomy for and a methodical way of using the results of a robot’s assessment of human conduct (Section III). Based on traditional forensic science, robot testimony must first be grounded in the analog world, using a standardized approach to accessibility, traceability, and reproducibility. Then, with the help of forensic experts and the established concepts of source level and activity level, the evidence can be assessed on the offense level by courtroom actors, who are often digital laypersons (Section IV). As robot witnesses cannot be called to the stand and have their assessments subjected to cross-examination, the vetting of robot testimony in the courtroom poses a number of significant challenges. We suggest some ways to meet these challenges in Section V. In our conclusion, we call for legislatures to address the lacunae regarding the use of robot testimony in criminal proceedings, and we consider how criminal forensics might catch up with the overall debate on the trustworthiness of robots, an issue at the core of the current European debate regarding AI systems in general (Section VI). An outline of questions that stakeholders might want to ask when vetting robot testimony via an expert is presented in the Appendix.

II Introducing Robot Testimony

A core problem raised when defending oneself against a robot’s evaluation of one’s conduct, not to mention one’s condition, is the overwhelming complexity of such an assessment. A car driver, as a rule, does not have the tools necessary to challenge the mosaic of components upon which the robot’s evaluation is based, including the requisite knowledge of raw data, insights into source code, or the capacity for reverse engineering; this is certainly the case in a driving assistant’s assessment that the human driver is drowsy.Footnote 5

II.A A New Generation of Forensic Evidence Generated by Robots

Today, various makes of cars are equipped with robots, understood as artificially intelligent IT systems capable of sensing information in their environment, processing it, and ultimately deciding autonomously whether and how to respond.Footnote 6 Unlike rule-based IT systems, these robots decide themselves whether to act and when to intervene. Due in part to trade secrets, little is known about the detailed functioning of the various types of driving assistants in different car brands, but the general approach taken by drowsiness detection systems involves monitoring the human driver for behavior potentially indicative of fatigue. The systems collect data on the driver’s steering movements, sitting posture, respiratory rate, and/or eyelid movements, etc.; they evaluate these indicators for signs of drowsiness or no signs of drowsiness; and, finally, on the basis of complex algorithms and elements of machine learning, choose whether to issue an alert to the driver.Footnote 7
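
The general logic just described can be sketched in a few lines. The indicators, weights, and threshold below are invented for illustration only; production systems rely on proprietary, machine-learned models rather than a fixed, hand-written rule.

```python
# Hypothetical, drastically simplified drowsiness monitor. Real systems fuse
# many more signals using learned (and undisclosed) models.

ALERT_THRESHOLD = 0.6  # assumed manufacturer-set threshold

def drowsiness_score(eyelid_closure, steering_variability, slouch_index):
    """Fuse the monitored indicators into a single score between 0 and 1."""
    return 0.5 * eyelid_closure + 0.3 * steering_variability + 0.2 * slouch_index

def evaluate_driver(sample):
    """Decide autonomously whether to issue (and record) a drowsiness alert."""
    score = drowsiness_score(sample["eyelid_closure"],
                             sample["steering_variability"],
                             sample["slouch_index"])
    if score >= ALERT_THRESHOLD:
        return {"alert": True, "timestamp": sample["timestamp"]}  # this is stored
    return None  # typically nothing is stored when no alert is issued

print(evaluate_driver({"timestamp": "2016-05-01T22:14:03Z",
                       "eyelid_closure": 0.8,
                       "steering_variability": 0.5,
                       "slouch_index": 0.4}))
```

Even in this toy version, the only fact that survives for later proceedings is the recorded alert; the underlying measurements, weights, and threshold generally remain with the manufacturer.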

Robots that issue such alerts do so on the basis of the definition of drowsiness on which they were trained. They compare the data collected from the human driver they are monitoring with their training data, and then decide by means of the comparison whether or not the driver is drowsy. This use of training data creates several problems. If the robot is trained on data from drivers who have round eyes when they are wide awake and droopy eyes when they are sleepy, the robot will issue a drowsiness alert if the driver it is monitoring is droopy-eyed, even if that particular driver’s eyes are droopy when he or she is rested.Footnote 8 Another difficulty that humans face when attempting to challenge an alert is that, on the one hand, it is not possible for all training data fed into the system to be recorded, and on the other hand, there is a lack of standards governing the data recorded from the driver. A provision requiring the implementation of a uniform data storage system in all automated vehicles, such as the Data Storage System for Automated Driving (DSSAD),Footnote 9 could resolve some of these issues and contribute to the advancement of a standardized, methodological approach to vehicle forensics.

Robots became mandatory for safety reasons in cars sold in the European Union beginning in 2022,Footnote 10 thus laying the groundwork for an influx of robot testimony in criminal proceedings. The hallmark of this data is the digital layer of intelligence added when robots evaluate human conduct and record their assessments. Up until now, there has been no taxonomy that facilitates a robust and common understanding of what sets evaluative data apart from raw data (Section III.A.1) or measurement data (Section III.A.2). The following sections first detail the difficulties raised by robot data, and then propose a taxonomy of raw data, measurement data, and evaluative data.

II.B Evidentiary Issues Raised by Robot Testimony

Basic questions arise as to the conditions under which the prosecution, the defense counsel, and the courts should be able to tap into the vast emerging pool of evaluative data and how robot testimony might be of assistance in the criminal process. Under what circumstances can evaluations generated by robots involved in robot–human interactions serve as evidence in criminal trials? And in the context of the hypothetical example used in this chapter, can alerts issued by a drowsiness detection system serve as meaningful evidence that a specific human driver was on notice of his or her unfitness?

Answers to these questions depend on many factors and require a more comprehensive analysis than can be given here.Footnote 11 This chapter therefore focuses on one fundamental challenge facing fact-finders:Footnote 12 their capacity as digital laypersons, with the help of forensic experts, to understand robot testimony.

One of the problems encountered when assisting digital laypersons to understand robot testimony is the fact that robot testimony is not generated by a dedicated set of forensic tools. While radar guns, breathalyzers, and DNA test kits are designed expressly for the purpose of producing evidence,Footnote 13 driving assistance systems are consumer gadgets swept into an evidentiary mission creep.Footnote 14 They monitor lane keeping, sitting posture, and respiratory rate, etc. from the perspective of safety. Car manufacturers are currently free to configure them as they see fit, so long as they satisfy the standards set by the applicable type approval regulations,Footnote 15 which set out the minimum regulatory, technical, and safety requirements a product must meet before it can be sold in a particular country. The lack of commonly accepted forensic standards causes manifold problems, as it is unclear how a drowsiness detection system distinguishes between a driver sitting awkwardly in the driver’s seat due to fatigue and a driver sitting awkwardly due to, say, a vigorous physical workout. To the best of our knowledge, these systems do not include baseline data for a specific driver, but are trained on available data chosen by the manufacturer. To address questions as to whether their results should be admissible as evidence in a court of law, and if so, what the information content of such data really is, a taxonomy to ground expert evidence is needed. Before drowsiness alerts and other evaluative data generated by non-forensic robots that serve primarily consumer demands can be used in court, a special vetting process may also be necessary, and possibly even a new evidentiary framework (see Section VI). One solution could be to require manufacturers to provide source code, training data, and validation-testing data, and to share information regarding potential programming errors. The need for such information is clear but access is not yet possible, as confidentiality issues associated with proprietary data and the protection of trade secrets will first have to be addressed by legislatures or the courts.

As the use of robots to monitor human conduct becomes more common, robots’ assessments may seem reminiscent of eyewitness testimony. As things stand today,Footnote 16 however, robots – unlike human witnesses – cannot be brought into the courtroom and confronted directly. They cannot be called to the stand and asked to explain their assessments under cross-examination. Instead, digital forensic experts serve as intermediaries to bridge this gap. These experts aim to translate a robot’s message into a form that is comprehensible to lawyers. But in order to do so, experts must have access to the data recorded by the robot as well as the tools to secure and the competence to interpret this data. Experts must also clearly define their role in the fact-finding process. On what subjects should they be permitted to opine, e.g., that a drowsiness alert indicates that an average person, according to the training material used, was likely drowsy when the alert was issued? And could such testimony be rebutted with evidence regarding, e.g., the accused’s naturally drooping eyelids, due perhaps to advanced age, or habitually relaxed sitting posture?

II.C Searching for the Truth with the Help of Robots

In most criminal justice systems, statutory provisions and case law aim to render the evidentiary process rational and transparent while upholding the principle of permitting the fact-finder to engage in the unfettered assessment of evidence. The parties have a vital interest in participating in this crucial step of the trial. In our hypothetical example of drowsiness alerts, the prosecution will claim that alerts issued by the driving assistants were triggered by the accused’s drowsy driving, and the defense will counter that the driving assistants issued false alarms, perhaps by wrongly interpreting certain steering movements or naturally drooping eyelids as signs of drowsiness. The law provides little guidance on how to address such conflicting claims. The law also offers little guidance as to how the parties, the defense in particular, can participate in the vetting of robot testimony or question the admissibility or reliability of such evidence.Footnote 17 One difficulty is that forensic experts and lawyers have not yet developed sufficiently differentiated terminology; often all data stored in a computer system or exchanged between systems is simply labeled digital evidence.Footnote 18 Yet finer distinctions are crucial, as treating all of this data alike runs the risk of lumping together very different kinds of information. If these kinds of data are to be of service in the fact-finding process, they must always be interpreted in the context of the circumstances in which they originated.Footnote 19

Inquisitorial-type criminal procedures, in particular, seem vulnerable to the risks posed by robot testimony, thanks to their broad, truth-seeking missions. For example, Article 139 of the Swiss Criminal Procedure Code (Swiss CrimPC) states that “in order to establish the truth, the criminal justice authorities shall use all the legally admissible evidence that is relevant in accordance with the latest scientific findings and experience.”Footnote 20 The Swiss CrimPC is silent, however, as to what “legally admissible evidence that is relevant in accordance with the latest scientific findings and experience” actually is. While case law and scholarship have provided an abundance of views on the admissibility in court of a small number of recognized categories of evidence, until now, they have provided little guidance on how to proceed when technological advances create new kinds of evidence that do not fall within these categories. There is consensus that these new types of evidence must comply with existing rules of presentation and accepted modi operandi.Footnote 21 In cases in which specialist knowledge and skills are necessary, Article 182 of the Swiss CrimPC, e.g., requires the court to ask an expert “to determine or assess the facts of the case.”Footnote 22 In a rather surprising parallel to an approach broadly seen as adversarial in nature, if a party wishes to challenge an expert’s determination or assessment, it can target the source and the reliability of the data, the expert’s methodology, or specific aspects of the expert’s interpretation, such as statistical reasoning.Footnote 23

The strengthening of fair trial principles and defense rights in vetting evidence can be seen in recent decisions taken by the German Constitutional Court (Bundesverfassungsgericht) that recognize access to raw data, i.e., the initial representation of physical information in digital form, as a prerequisite for an effective defense.Footnote 24 In November 2020, e.g., the Constitutional Court held that defendants in speeding cases have the right, in principle,Footnote 25 to inspect all data generated for fact-finding purposes, including raw data.Footnote 26

III A Taxonomy for the Use of Robot Testimony

Robot testimony is a potentially useful addition to the evidentiary process, but only if its meaning for a case can be communicated to the fact-finder in a comprehensible way. In order to facilitate this communication, we propose a taxonomy of robot testimony. The taxonomy distinguishes between three types of machine-readable data, beginning with the least complex form and ending with the most complex form. We also suggest how the taxonomy can be used in practice, by differentiating circumstantial information, which refers to the context in which the data is found (Section III.B), from information content, the forensically relevant information that the expert can deduce from the properly identified data (Section III.C).

III.A Categories of Machine-Readable Data

The term “data” is widely used, both in everyday language and in the legal context, but while the term was used as a synonym for any kind of information in the past, digitalization has led to changes in its usage. Today, the term is often used to mean any kind of machine-readable information.Footnote 27 This meaning is still very broad. When coupled with the lack of a legal definition in the law of criminal procedure, a broad definition can cause problems in situations where a finer distinction is required, e.g., when machine-readable information is introduced as evidence in a criminal case and a forensic expert is needed to explain the exact nature of the information being proffered. This chapter suggests that there are three categories of data: raw data, measurement data, and evaluative data.

III.A.1 Raw Data

Digital forensic experts define raw data as the initial representation of physical information in digital form. Raw data generated by sensors, e.g., captures measurements of physical indicators such as time or frequency, mass, angles, distances, speed, acceleration, or temperature. Raw data can also convey the status information of a technical system, i.e., on/off, operation status, errors, failure alarms, etc., or the rotational speed measured by sensors placed at the four wheels of a vehicle. It is necessary to keep in mind that raw data, the basic currency of information for digital forensics, may contain errors, and that tolerancesFootnote 28 must be considered. In order for this kind of information to be understood, it must be processed by algorithms, but at least in theory, its validity could always be checked by a human, e.g., by using a stopwatch, physically measuring the distance traveled, or checking whether a system was turned on or off.

Where a system operates as intended, the raw data produced by the system is deemed objective, although verification and interpretationFootnote 29 as well as an assessment supplied by a forensic expert may be necessary. Once the raw data has been collected, it is available for processing by algorithms into one of the other data categories, i.e., measurement data or, with the participation of AI-based robots, evaluative data.

III.A.2 Measurement Data

At present, the most important category of data is probably measurement data. This category is produced when raw data is processed with the help of algorithms. Given sufficient time and resources, if the algorithms involved are accessible, measurement data can theoretically be traced back to the original raw data. For example, the measurement data generated by the speedometer is vehicular speed. With the help of an algorithm, a speedometer calculates vehicular speed by taking the average of the raw data noted by rotational sensors located at each of the four wheels of a vehicle, known as wheel speed values. Wheel slip, another example of measurement data, is produced by calculating the difference between the four separate wheel speed values. In the event of an accident, this kind of processed data enables a forensic expert to testify about wheel slip and/or skidding, and state whether the vehicle was still under the control of the driver at the time of the incident or whether the driver had already lost control of it. While the raw data in this example would not mean very much to fact-finders, they could understand the meaning of the speed or wheel slip of a vehicle at a particular moment.
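
That processing step is simple enough to state explicitly. The sketch below assumes four wheel speed values already expressed in km/h (actual read-outs arrive as rotational speeds and must first be converted using the wheel circumference), and it reads “the difference between the four separate wheel speed values” as the spread between the highest and lowest value.

```python
# Illustrative derivation of measurement data from raw wheel speed values.
# The input readings (km/h) are invented; real raw data carries sensor tolerances.

def vehicle_speed(wheel_speeds):
    """Vehicle speed as the average of the four wheel speed values."""
    return sum(wheel_speeds) / len(wheel_speeds)

def wheel_slip(wheel_speeds):
    """Wheel slip indicator as the spread between the four wheel speed values."""
    return max(wheel_speeds) - min(wheel_speeds)

raw_wheel_speeds = [92.0, 91.5, 97.8, 76.4]  # hypothetical per-wheel readings

print(f"vehicle speed: {vehicle_speed(raw_wheel_speeds):.1f} km/h")
print(f"wheel slip:    {wheel_slip(raw_wheel_speeds):.1f} km/h")
```

A large spread between wheels is the kind of processed result that allows an expert to speak to skidding or loss of control, even though the underlying raw values would mean little to a lay fact-finder.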

The distinction between raw data and measurement data is a clear one, in theory, but it can become blurred. For example, raw data must be made readable, and therefore processed, before it can be interpreted. This difficulty does not, however, call the taxonomy offered by the chapter into doubt as a matter of principle, but rather shows the importance of having categories that support differentiation, similar to the way in which the distinction between a fact and an opinion in evidence law distinguishes between two kinds of evidence.Footnote 30

III.A.3 Evaluative Data

The third category of data in our taxonomy is new, and we call it evaluative data. This kind of data is the product of a robot’s autonomous assessment of its environment. In contrast to measurement data, the genesis of evaluative data cannot, by definition, be completely verified by humans because the digital layer inherent to robot testimony cannot be completely reconstructed.

Evaluative data causes problems for fact-finding on several different levels. Using the drowsiness alert hypothetical,Footnote 31 a human cannot reconstruct the exact reckoning of a drowsiness detection system that monitors a human for behavior indicative of fatigue, because while this robot does continuously measure and evaluate the driver’s steering movements and tracks factors such as sitting posture and eyelid movements, the robot does not record all its measurements. It evaluates these indicators for signs of drowsiness or no signs of drowsiness, and when it determines that the threshold set by the programmer or by the system itself has been reached, it issues an alert to the driver and records the issuance of the alert.

This system cannot explain its evaluation of human conduct regarding a particular episode.Footnote 32 In fact, the operation by which a driving assistant reaches its conclusion in a particular case is almost always an impenetrable process, thanks to the simultaneous processing of a plethora of data in a given situation, the notorious black box problem of machine learning, and walls of trade secrets.Footnote 33 In the field of digital forensics, evaluative data is therefore a novel category of evidence that requires careful scrutiny.

It may be possible to vet the reliability of this category of data by focusing on the configuration of the system’s threshold settings for issuing an alert and then searching for empirical methods by which to test the robustness of its results. Before that point can be reached, however, fact-finders need a functional taxonomy and a standardized methodological approach so they can understand whether, or rather under what conditions, they can challenge a system’s issuance of drowsiness alerts.

Using evaluative data for evidentiary purposes raises questions on a number of levels, some of which are linked to the factual level of circumstantial information (Section III.B) and to information content (Section III.C). For example, the question arises as to whether the issuance of a drowsiness alert can be used to prove that the accused driver was drowsy or whether it can only be used to prove that an average person could be deemed drowsy given the data recorded by the robot while the accused was driving. Other questions pertain to the evidentiary level, such as whether the issuance of an alert can be used to prove that the driver was on notice of unfitness, or whether the issuance of an alert could even be used to prove that the driver was in fact unfit to operate the vehicle.

III.B Circumstantial Information

Raw data, measurement data, and evaluative data require a context, referred to in the field of forensics as circumstantial information,Footnote 34 to enable fact-finders to draw meaningful inferences that can be used to establish facts in a legal proceeding. In our drowsiness alert hypothetical, when a driver is charged with operating a vehicle while unfit to do so, the data read out of the car is useful only if it can be established what the data means for that particular car, what the normal operating conditions of the car are, who was driving the car at the time of the accident, etc. It is important to explain what kinds of data were recorded in the run-up to the drowsiness alert and to determine whether the manufacturer submitted the relevant validation data for that specific system. Otherwise, the machine learning mechanisms cannot be vetted. It might turn out, e.g., that the training data and machine learning methods used to teach robots to distinguish between drowsy and not drowsy differ significantly between the systems used by different manufacturers.

The furnishing of circumstantial information is an important and delicate step in the communication between forensic experts and lawyers. While courts in continental Europe, and judges and/or juries in other jurisdictions, are mandated to determine the truth, the role of a forensic expert is a different one. The forensic expert’s task is to keep an open mind and to focus solely on evaluating the forensic findings in light of propositions offered by court or parties (see Section IV.D).Footnote 35 In our drowsiness alert hypothetical, the expert will be asked to assess the data read out of the car in light of the proposition of the prosecution, namely that the accused was in fact the driver of the car and alerts were issued because the driver was driving while drowsy, as well as pursuant to the proposition of the defense, namely that the issuance of alerts was due to circumstances completely unrelated to the driver’s fitness to operate the vehicle. In order truly to assist the court, experts must avoid stepping outside the boundaries of scientific expertise. They must not step into the role of the fact-finder.

III.C Information Content

Once experts have explained the details of the relevant data and provided the requisite circumstantial information, the court and the parties should be in a position to formulate their propositions about its information content. In this context, information content is understood as the forensically relevant information deduced from raw, measurement, and evaluative data. In our hypothetical, the fact-finders ought to be able to decide whether, in their view, the alerts issued by the drowsiness detection system are evidence that the human driver was in fact unfit to operate a vehicle or whether the alerts are better interpreted as false alarms.

In a Swiss or German courtroom, the expert will be asked not only to present and verify the information content of a particular piece of evidence, but to provide a sort of likelihood ratio regarding the degree to which the various propositions are supported.Footnote 36 While this approach is not universal,Footnote 37 such an obligation is important in cases where evaluative data is proffered as evidence. Evaluative data in the form of drowsiness alerts cannot simply be taken at face value, and experts must therefore have the right conceptual tools with which to assess it.
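
The likelihood-ratio framework referred to here can be stated compactly. In a minimal formulation, E denotes the forensic findings (the recorded alerts and associated data), H_p the prosecution’s proposition, and H_d the defense’s proposition:

```latex
% Likelihood ratio assigned by the expert to the findings E under the
% competing propositions H_p (prosecution) and H_d (defense):
\[
  \mathrm{LR} = \frac{P(E \mid H_p)}{P(E \mid H_d)},
  \qquad
  \frac{P(H_p \mid E)}{P(H_d \mid E)}
  = \mathrm{LR} \times \frac{P(H_p)}{P(H_d)}
\]
% The left-hand ratio in the second equation is the posterior odds; the
% right-hand ratio is the prior odds, which remain the fact-finder's domain.
```

An LR well above 1 indicates that the findings are better explained by the prosecution’s proposition, while an LR near 1 indicates that they have little discriminating value; converting prior odds into posterior odds remains the fact-finder’s task, not the expert’s.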

IV A Standardized Approach to Interpreting Robot Testimony

Having established a tri-part taxonomy for the use of robot testimony, we now suggest a standardized approach regarding its interpretation in a court of law. Legal actors can draw on existing conceptsFootnote 38 in concert with the new taxonomy proposed here, but the traditional approach will have to be modified so as to accommodate the special needs of assessing evaluative data for fact-finding in a criminal case. A sort of tool kit is needed to test whether a robot generates trustworthy evidence. In our hypothetical, the question can be framed as whether a drowsiness detection system reliably detects reasonable parameters related to a human driver’s fitness to operate a vehicle.

In principle, the general rules for obtaining and presenting evidence in a criminal case apply to robot testimony. In our hypothetical, the vehicle involved in the accident will be seized. Subsequently, the search for analog evidence will follow existing provisions of the applicable code of criminal procedure regarding the admissibility and reliability of potential evidence. As far as digital evidence is concerned, various modifications stemming from the particularities of using bits and bytes for fact-finding will apply,Footnote 39 and specific risks of error will have to be addressed. For known problems, such as the loss of information during the transmission of data, solutions may already be at hand.Footnote 40 But new problems arise, including, e.g., the sheer volume of data that may be relevant if it becomes necessary to validate a specific alert issued by a vehicle’s drowsiness detection system. In such cases, it is essential for stakeholders to understand what is meant by accessibility (Section IV.A) and traceability (Section IV.B) of relevant data, as well as the reproducibility (Section IV.C) and interpretation (Section IV.D) of results provided by the expert.

IV.A Accessibility

An expert should first establish what data is available, i.e., raw, measurement, or evaluative, and how it was accessed. Digitalization poses a challenge to procedural codes tailored to the analog world because data and its information content are not physically available and cannot be seized. This characteristic of data may lead to problems with regard to location and accessibility. For example, even if the data recorded by a driving assistant is stored locally in a car’s data storage device, simply handing over the device to the authorities or granting them access to it will probably not suffice. Decrypting toolsFootnote 41 will have to be made available to the forensic expert, and the difficulties associated with decryption explained to the fact-finder.

Some regulations pertaining to accessibility are being pursued, e.g., the movement in Europe toward a DSSAD. As early as 2006, uniform data requirements were introduced in the United States to limit the effects of accessibility problems with regard to car data; these requirements govern the accuracy, collection, storage, survivability, and retrievability of crash event data recorded by vehicles equipped with Event Data Recorders (EDRs), e.g., in the 5 seconds before a collision.Footnote 42 In 2019, working groups were established at the domestic and international levels to prepare domestic legislation on EDRs for automated driving.Footnote 43 And in 2020, the UN Economic Commission for Europe (UNECE) began working toward the adoption of standardized technical regulations relevant for type approval.Footnote 44 The UNECE aims to define the availability and accessibility of data and to establish read-out standards.Footnote 45 It would also require cars to have a standardized data storage system.Footnote 46 However, these efforts will not lead to the recording of all data that might possibly be relevant for the establishment of facts in a criminal court.

IV.B Traceability: Chain of Custody

The second step toward the use of machine-readable data is a chain of custody that ensures traceability. A chain of custody should be built from the moment data is retrieved to the moment it is introduced in the courtroom. Data retrieval, also called the read-out of data, is the process by which raw data (decrypted first, if necessary) is translated into readable and comprehensible information. The results are typically documented in a protected report that is accessible to defined and identified users by means of a pre-set access code.Footnote 47 To ensure traceability, every action taken by the forensic expert must be documented, including when and where the expert connected to the system, what kind of equipment and what software was used, what was downloaded, e.g., file name, file size, and checksum,Footnote 48 and where the downloaded material was stored.Footnote 49
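
A minimal sketch of what such a custody record might capture is shown below. The field names are illustrative rather than a prescribed standard, and SHA-256 simply stands in for whatever checksum the read-out tool produces.

```python
# Illustrative chain-of-custody record for a forensic read-out; field names
# are hypothetical and mirror the items listed above.

import hashlib
import os
from datetime import datetime, timezone

def sha256_of(path):
    """Checksum of the downloaded file, so later copies can be verified."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def custody_record(path, operator, equipment, software, storage_location):
    return {
        "retrieved_at": datetime.now(timezone.utc).isoformat(),
        "operator": operator,
        "equipment": equipment,
        "software": software,
        "file_name": os.path.basename(path),
        "file_size_bytes": os.path.getsize(path),
        "sha256": sha256_of(path),
        "stored_at": storage_location,
    }

# Example call (paths and names are invented):
# record = custody_record("dssad_dump.bin", "Expert A", "read-out interface X",
#                         "RetrievalTool 1.2", "evidence-server/case-2016-117")
```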

Traceability can be supported when standard forensic software is used, e.g., the Crash Data Retrieval tool designed to access and retrieve data stored in the EDRs that are standard equipment in cars manufactured in the United States.Footnote 50 In each country, the legislature could ensure the traceability of data generated by driving assistance systems by establishing a requirement to integrate a data storage system as a condition of type approval. Such a step could eliminate many of the difficulties currently associated with the traceability of data.

IV.C Reproducibility

The third basic requirement for establishing trustworthy robot testimony is reproducibility.Footnote 51 Simply stated, the condition of reproducibility is met if a second expert can retrieve the data, run an independent analysis, and produce the same results as the original expert. Whether this condition can be achieved in the foreseeable future probably depends less on having comprehensive access to all theoretically relevant data and more on the development of smart software that can evaluate the reliability of a specific robot’s testimony. This software could work by analyzing the probability of error on the basis of simulations using the raw and measurement data recorded by the robot, looking for bias, and testing the system’s overall trustworthiness.
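
One very simple way to operationalize this condition, assuming both experts write their results to files, is to compare the analyses bit for bit; in practice, “the same results” will often be defined within stated tolerances rather than as bit-identical output.

```python
# Illustrative reproducibility check: two independent analyses of the same
# stored data are compared by checksum. A real protocol would also compare
# results within defined tolerances rather than insisting on identical bytes.

import hashlib

def digest(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def reproduced(result_file_a, result_file_b):
    return digest(result_file_a) == digest(result_file_b)

# Example (file names are invented):
# print(reproduced("analysis_expert_a.json", "analysis_expert_b.json"))
```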

Reproducibility in the context of evaluative data generated by a consumer product is particularly challenging. Driving assistants issue alerts on the basis of a plethora of data processed in a particular driving situation, and as noted above, only a subset of the data is stored. This subset is the only data available for forensic analysis. Reproducibility therefore currently depends on ex ante specifications of what data must be stored, and what minimum quality standards the stored data must meet in order to ensure that an incident can be reconstructed with the reliability necessary to answer both factual and legal questions.

In our drowsiness alert hypothetical, a key requirement for reproducibility would be the unambiguous identification of the vehicle at issue and of the data storage device if there is one. In addition, the report generated during the retrieval process must contain all necessary information about the conditions under which that process took place, e.g., VIN, operator, software version, time, and date. This discussion regarding reproducibility demonstrates the crucial importance of establishing minimum specifications for data storage devices, specifications that could probably be implemented most efficiently at the car’s type-approval stage. As these specifications are responsible for ensuring reproducibility, they ought to be defined in detail by law and standardized internationally.
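To illustrate what the reproducibility requirement could amount to in practice, the following sketch compares two hypothetical retrieval reports, one produced by the original expert and one by an independent second expert. The field names (vin, software_version, storage_device_id, file_sha256, alert_events) are assumptions for this example, not the contents of any standardized report.

```python
def is_reproduced(report_a: dict, report_b: dict) -> bool:
    """Two retrieval reports reproduce each other if the identifying metadata
    matches, the retrieved data is bit-for-bit identical, and the derived
    findings (here, the list of recorded alerts) agree."""
    identifying_fields = ("vin", "software_version", "storage_device_id")
    metadata_matches = all(report_a.get(k) == report_b.get(k)
                           for k in identifying_fields)
    # Identical checksums indicate both experts retrieved the same underlying data.
    data_matches = report_a.get("file_sha256") == report_b.get("file_sha256")
    findings_match = report_a.get("alert_events") == report_b.get("alert_events")
    return metadata_matches and data_matches and findings_match


# Example with two invented reports; the second expert's read-out matches the first.
report_1 = {"vin": "TESTVIN0123456789", "software_version": "4.2.1",
            "storage_device_id": "DSSAD-01", "file_sha256": "ab12cd34",
            "alert_events": [{"type": "drowsiness_alert", "time": "12:04:31"}]}
report_2 = dict(report_1)
print(is_reproduced(report_1, report_2))  # True only if the results coincide
```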

IV.D Interpretation Using the Three-Level Approach

The fourth step of a sound standardized approach to the use of machine-readable data in court requires the data to be interpreted systematically in light of the propositions of the courtroom actors.Footnote 52 When courts lack the specialist knowledge necessary to determine or assess the facts of the case, they look to forensic experts.Footnote 53 In order to bridge the knowledge gap, lawyers and forensic experts need a common taxonomy, a common understanding of the scientific reasoning that applies to the evaluation of data,Footnote 54 and a common understanding of the kinds of information that forensic science can deliver.

Following an established approach in forensic science, three levels of questions and answers should be recognized: source level, activity level, and offense level.Footnote 55 These levels help, first, to distinguish pure expert knowledge (source level) from proposition-based evaluation of forensic findings by the expert in a particular case (activity level), and second, to distinguish these two levels from the court’s competences and duties in fact-finding (offense level).

In our drowsiness alert hypothetical, before deciding whether to convict or acquit the accused, the court will want to know whether there is any data to be found in the driving assistance system’s data storage system (source level), whether alerts have been issued (activity level), and whether there is any other evidence that might shed light on the driver’s fitness or lack thereof to operate a vehicle (offense level).

IV.D.1 Source Level

In forensic methodology, the source level is associated with the source of evidence. The first question is whether any forensic traces in analog or digital form are available, and if so what kind of traces, e.g., blood, drugs, fibers, or raw data. Source-level answers are normally simple results with defined tolerances;Footnote 56 the answer may simply be yes, no, or undefined.

In the context of digital evidence such as our drowsiness alert hypothetical, the source-level question would be whether there is any relevant data stored in a data storage device. Such data, if any, would enable the forensic expert to answer source-level questions regarding, e.g., the values of physical parameters such as speed, wheel slip, heart rate, or recently detected status information. In the context of airbags, the evaluation of the recorded values, or of how these physical parameters develop over time, leads either to a decision to deploy the airbag, in which case the respective data is stored in the EDR, or to a decision not to deploy it, in which case data is normally not stored. In the context of a drowsiness alert, the system likewise either issues an alert and stores the respective data in the DSSAD, or issues no alert, normally without storing data.
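A purely illustrative sketch of the source-level question follows. It assumes a simplified record layout for a drowsiness alert stored in a DSSAD-like device and asks only whether any relevant data exists; the layout and field names are invented for the hypothetical and do not reflect any actual EDR or DSSAD specification.

```python
from typing import Optional

stored_events = [
    {
        "event": "drowsiness_alert",           # evaluative data: the system's own assessment
        "timestamp": "2024-05-03T12:04:31Z",
        "parameters": {                         # measurement data captured with the event
            "speed_kmh": 96.0,
            "wheel_slip": 0.02,
            "steering_variability": 0.71,
        },
    },
]


def source_level_answer(events, event_type: str) -> Optional[bool]:
    """Yes, no, or undefined: is there any stored data of the requested kind?"""
    if events is None:
        return None  # undefined: storage device missing or unreadable
    return any(e["event"] == event_type for e in events)


print(source_level_answer(stored_events, "drowsiness_alert"))  # True: relevant data exists
print(source_level_answer(None, "drowsiness_alert"))           # None: undefined
```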

IV.D.2 Activity Level

On the activity level, forensic experts evaluate a combination of source-level results and circumstantial information on the basis of propositions related to the event under examination. Complex communication between experts and fact-finders that covers the different categories of data as well as circumstantial information is required. In our drowsiness alert hypothetical, the question would be whether the drowsiness detection system issued an alert and whether and how the human driver reacted.

By addressing the activity level, experts provide fact-finders with the knowledge they need to evaluate the validity of propositions regarding a past event, e.g., when there are competing narratives concerning a past event. Regarding a drowsiness alert, the expert might present findings that support the prosecution’s proposition, namely, that the drowsiness detection system’s alerts were the consequence of the driver’s posture in the driver’s seat or other drowsiness indicators. Or, in contrast, the findings might support the defense’s proposition, namely that the alerts were not a consequence of the human driver’s conduct, but rather were a reaction of the driving assistant to external disturbances.

IV.D.3 Offense Level

In the context of a criminal case, the offense level addresses questions related to establishing an element of the offense charged. In this ultimate step of fact-finding, the task of the expert has ended, and the role of the court as adjudicator begins. In our drowsiness alert hypothetical, the legal question the fact-finder must answer is whether or not the driver was unfit to operate a motor vehicle. This task may be a difficult one if the expert is able to provide information on a robot’s functioning or its general capacity to monitor a human’s conduct, but is unable to provide information relevant to the question of whether the actual driver was unfit in the run-up to the accident.

V Unique Challenges Associated with Vetting Robot Testimony

The proposed standardized approach to proffering evaluative data as evidence in criminal proceedings illustrates the need for a sound methodology. It simultaneously highlights the limits of the traditional approach when applied to robot testimony. One of the parties may want to use an alert issued by a drowsiness detection system as evidence of a human driver’s unfitness to operate a vehicle, but forensic experts may not be able to offer sufficient insights to verify or refute the system’s evaluation. Crucial questions of admissibility or weight of the evidence are left unanswered when experts can attest only that the drowsiness detection system issued an alert before the accident occurred. If experts cannot retrieve sufficient data or sufficient circumstantial information, they may not be able to provide the fact-finder with the information necessary to assess the evidentiary value of the alert. The fact-finder cannot simply adopt the driving assistant’s evaluation, as doing so would fail to satisfy the judicial task of conclusively assessing evidence. The question as to the grounds upon which judges can disregard such evidence remains an open one.Footnote 57

The problems raised in vetting robot testimony become even clearer when the defense’s ability to challenge the trustworthiness of observations and evaluations generated by a robot are compared to the alternatives available to check and question measurement data generated by traditional forensic tools. If, e.g., the defense wants to question the results of a radar gun in a speeding case, the relevant measurement data, i.e., the whole dataset of frequency values, calculated speed values, and the additional measurements performed by the radar gun, can be accessed. This information can reveal whether or not a series of measurements appears to be robust.Footnote 58 Furthermore, if the defense wishes to cast doubt on an expert’s findings and develop another proposition to explain the results of the radar gun, the court could require law enforcement authorities to offer a second dataset based on an independent measurement method, e.g., a videotaping of the radar gun’s measurement and its environment. This would allow for independent verification and would make it possible to check for factors that may have distorted the measurements, such as truck trailers parked on the street or the surface reflections of buildings.Footnote 59
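As a rough illustration of the kind of robustness check described above, the following sketch converts a radar gun’s stored Doppler frequency shifts into speed values and flags a measurement series whose spread suggests a disturbance worth examining. The carrier frequency and the tolerance threshold are illustrative assumptions, not the parameters of any certified device.

```python
from statistics import pstdev

SPEED_OF_LIGHT = 299_792_458.0   # m/s
CARRIER_FREQUENCY = 24.125e9     # Hz; a typical K-band radar frequency (assumed)


def doppler_speed_kmh(frequency_shift_hz: float) -> float:
    """Speed of the reflecting vehicle implied by one Doppler frequency shift."""
    speed_ms = frequency_shift_hz * SPEED_OF_LIGHT / (2 * CARRIER_FREQUENCY)
    return speed_ms * 3.6


def series_is_robust(frequency_shifts, max_spread_kmh: float = 3.0) -> bool:
    """Crude plausibility check: the speed values within one measurement series
    should agree closely; a large spread may indicate reflections or other
    disturbances that merit closer scrutiny."""
    speeds = [doppler_speed_kmh(df) for df in frequency_shifts]
    return pstdev(speeds) <= max_spread_kmh and min(speeds) > 0


measurements = [4470.0, 4481.0, 4465.0, 4476.0]  # invented raw frequency shifts, Hz
print([round(doppler_speed_kmh(df), 1) for df in measurements])
print("series appears robust" if series_is_robust(measurements) else "series needs scrutiny")
```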

In contrast, if the defense wishes to challenge robot testimony such as a drowsiness detection system’s alert, new and unresolved issues with regard to both facts and law may arise.Footnote 60 As mentioned above, driving assistants are consumer gadgets designed to enhance road safety. They are neither approved nor certified forensic tools designed to generate evidence for criminal proceedings. It is currently left to the programmer of the driving assistance system or the manufacturer of the car to develop a robust machine learning process for the system that leads to the establishment of a threshold for issuing an alert and to determine, ex ante, what information to store for potential later evaluation of the robot’s assessment. The decision-making power of the programmer or producer regarding the shaping of a smart product’s capacity to observe and record is limited only if there are regulations that require the storage of particular data in a particular form.

Parties challenging drowsiness alerts may target different kinds of data. Measurement data, which generally describes physical facts in a transparent way, appears to be the most objective information, and the corresponding information content seems relatively safe from legal attack. In contrast, evaluative data, including records of decisions taken or interventions launched by a robot, appears to be much closer to the contested legal questions and thus a more appropriate target for legal challenge. Counsel could argue that the dataset containing information about the incident does not allow for robust testing of alternative scenarios, or that no validation exists for the thresholds for issuing an alert set by machine learning, thereby rendering an expert’s probability ratios worthless, or that someone might have tampered with the data. These arguments show that in order to do their jobs properly, lawyers must be capable of understanding not only how data is generated, retrieved, and accessed, but also how evidence can be evaluated, interpreted, verified, and vetted with regard to its information content and to the integrity of the data.

VI A Look to the Future
VI.A Criminal Procedure Reform

A robot’s capacity to assess its environment autonomously, and possibly self-modify its algorithms, is a development that holds promise for numerous fields of endeavor, and a sophisticated driving assistant that handles an enormous amount of data when monitoring an individual driver for specific signs of drowsiness holds great promise for fact-finding. The challenge will be to update procedural codes in a way that empowers courts to decipher this new form of evidence methodically, with the help of forensic experts who should be able fully to explain the specific operations undertaken by the robot in question.

Currently, doubts about the trustworthiness of a robot’s evaluation of a human driver’s fitness seem well-founded, given the fact that car manufacturers are free to shape a drowsiness detection system’s alert as a feature of their brand and may even construct its capacity to observe in such a way as to favor their own interests.Footnote 61 Our chapter argues that the use of robot testimony must be supported with a clear taxonomy, a standardized methodological approach, and a statutory regime.Footnote 62

Until now, most procedural codes have opted for a blanket approach to evidence and for “technological neutrality,” even in the context of complex scientific evidence.Footnote 63 Yet there are many arguments that support the enactment of specific regulations for courts to rely on when using data as evidence, and that argue against a case-by-case approach. Differences between data and other exhibits proffered as evidence in criminal cases, such as documents or photographs of car wrecks, seem obvious.Footnote 64 Raw, measurement, and evaluative data cannot be comprehended by the naked eye. Experts are needed not only to access the data and to ensure traceability, but also to interpret it. Fact-finders are dependent on experts when faced with the task of retracing the steps by means of which data is seized from computers,Footnote 65 from databases storing traffic data, and from other data carriers. They must also rely on experts to explain how data is retrieved from cloud computing services. As yet, fact-finders have no legal guidance on how to ensure that the chain of custody is valid and the data traceable and reproducible.

Fact-finders also face serious challenges when they have to fit digital evidence into a human-centered evidentiary regime designed with the analog world in mind. In German criminal proceedings, all evidence, including digital evidence, must be presented pursuant to four categories defined by law (StrengbeweisverfahrenFootnote 66), namely expert evidence, documentary evidence, evidence by personal inspection, and testimony; digital evidence is not defined by law as a separate category.Footnote 67 If a courtroom actor wants to use a driving assistant’s alert as evidence, the alert must be introduced in accordance with the rules of procedure governing one of these categories. Most probably, the court will call an expert to access relevant data, to explain the data-generating process, and to clarify how the data was obtained and how it was stored, but there is no guidance in the law as to how to account for the fact that drowsiness detection assistants issue alerts based on their own evaluation of the driver and that experts cannot retrace this evaluation completely when reading out the system.

VI.B Trustworthy Robot Testimony

Situations in which robots assess human behavior represent a potentially vast pool of evidence in our digital future, and legal actors must find a way to exploit the data. With a taxonomy for the use of robot testimony in legal proceedings and clearly defined roles for lawyers and forensic experts in the fact-finding process, particularly if a standardized approach is used to vet this new evidence, the law can do its bit to establish the trustworthiness of robot testimony.

Time is of the essence. With driving assistants already on board cars, courts will soon be presented with new forms of robot testimony, including that provided by drowsiness detection systems. If evaluative data, which is set to be a common by-product of automated driving thanks to the requirement that new cars in some countries be equipped with integrated driving assistants, is to be proffered as evidence in criminal trials, legislatures must ensure that the robots’ powers of recollection are as robust as possible.Footnote 68 And the law is not the only discipline that must take action. New and innovative safety nets can be provided by different disciplines to ensure the trustworthiness of robot testimony. One option would be for these safety nets to take the form of an official certification process for consumer robot products likely to be used as witnesses, similar to the process that ensures the accuracy of forensic tools such as radar guns.Footnote 69 Ex ante certification might not solve all the problems, because in practice, drowsiness detection systems depend on many different factors, any one of which could easily distort the results, such as a driver not sitting upright due to a back injury, a driver wearing sunglasses, etc. Technical testing ex post, perhaps with the help of AI, might be a better solution; it could, at least, supplement the certification process.Footnote 70

Evaluative data generated by robots monitoring human conduct cannot be duly admitted as evidence in a criminal case until technology and regulation ensure its accessibility, traceability, and to the greatest extent possible reproducibility, as well as provide a sufficient amount of circumstantial information. Only when this has been achieved can the real debate about trustworthy robot testimony begin, a debate that will encompass the whole gamut of current deliberations concerning the risks posed by AI and its impact on human life.

9 Digital Evidence Generated by Consumer Products: The Defense Perspective

Erin E. Murphy Footnote *
I Introduction

In courtrooms across the world, criminal cases are no longer proved only through traditional means such as eyewitnesses, confessions, or rudimentary physical evidence like the proverbial smoking gun. Instead, prosecutors increasingly harness technologies, including those developed and used for purposes other than law enforcement, to generate criminal evidence.Footnote 1

This kind of digital data may take different forms, including raw data, data that is produced by a machine without any processing; measurement data, data that is produced by a machine after rudimentary calculations; and evaluative data, data that is produced by a machine according to sophisticated algorithmic methods that cannot be reproduced manually.Footnote 2 These distinctions are likewise evident in the array of consumer products that can now be tapped to produce evidence in a criminal case. A mobile phone can be used to track the user’s location via raw data in the form of a readout of which tower the cell phone “pinged,” via measurement data reflecting the triangulation of cell towers accessed along a person’s route, or with evaluative data generated by a machine-learning algorithm to predict the precise location of a person, such as a specific shop in a shopping mall.Footnote 3 In all three forms, the use of such data presents new evidentiary challenges, although it is the evaluative data that raises the most issues as a result of both its precision and impenetrability.
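The distinction between raw, measurement, and evaluative data can be sketched in code. The following simplified example, which assumes invented tower coordinates and a trivially faked “model,” shows a raw tower ping, a measurement-level position derived by a naive signal-weighted triangulation, and an evaluative prediction standing in for an opaque machine-learning locator; real carrier methods and predictive models are far more complex.

```python
raw_pings = [
    # Raw data: which tower the phone connected to, with signal strength and time.
    {"tower_id": "T1", "lat": 47.558, "lon": 7.588, "signal": 0.9, "time": "20:15:02"},
    {"tower_id": "T2", "lat": 47.561, "lon": 7.595, "signal": 0.5, "time": "20:15:02"},
    {"tower_id": "T3", "lat": 47.553, "lon": 7.592, "signal": 0.3, "time": "20:15:02"},
]


def triangulate(pings):
    """Measurement data: a rough position computed from several towers' signals."""
    total = sum(p["signal"] for p in pings)
    lat = sum(p["lat"] * p["signal"] for p in pings) / total
    lon = sum(p["lon"] * p["signal"] for p in pings) / total
    return round(lat, 5), round(lon, 5)


def predict_venue(position):
    """Evaluative data: a stand-in for an opaque machine-learning predictor that
    maps a position to a specific venue (here simply the nearest known venue)."""
    known_venues = {
        "shopping mall, shop 14": (47.5580, 7.5890),
        "train station platform 3": (47.5470, 7.5900),
    }
    return min(known_venues, key=lambda v: (known_venues[v][0] - position[0]) ** 2
                                           + (known_venues[v][1] - position[1]) ** 2)


position = triangulate(raw_pings)
print(position, "->", predict_venue(position))
```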

As scholars begin to tackle the list of questions raised by these new forms of evidence, one critical perspective is often omitted: the view of the criminal defendant. Yet, just as digital evidence serves to prove the guilt of an accused, so too can it serve the equally important role of exculpating the innocent. As it stands now, law fails to adequately safeguard the rights of a criminal defendant to conduct digital investigations and present digital evidence. In a world increasingly reliant on technological forms of proof, the failure to afford full pre-trial rights of discovery and investigation to the defense fatally undermines the presumption of innocence and the basic precepts of due process.

Persons accused of crimes have two compelling needs with regard to digital evidence. First, criminal defendants must be granted the power to meaningfully attack the government’s digital proof, whether offered by the government to prove its affirmative case or to counter evidence tendered by the defense.Footnote 4 For example, the defense might challenge cell site location records that purport to show the defendant’s location at the scene of the crime. Or they might contest cell site records offered by the government to undermine a defense witness’s claim to have witnessed the incident. The defendant is attacking the government’s proffered digital evidence in both cases, but in the first variation, the attack responds to the government’s evidence in its case-in-chief, whereas the second variation responds to evidence proffered by the government to counter a defense claim or witness.

The defense’s use of digital evidence in this way differs from the second category, which might be called supportive defense evidence. A defendant must be able to access and introduce the defendant’s own digital proof, in order to support a defense theory or to attack the government’s non-digital evidence. For example, the defendant might use digital data to show that the defendant is innocent, to reinforce testimony offered by a defense witness, or to support a claim that another person in fact committed the offense. Classic examples of such use would be DNA evidence that proves there was another perpetrator, or surveillance footage that reveals the perpetrator had a distinguishing mark not shared by the defendant. A defendant might also use such evidence to bolster a legal claim. In the United States, e.g., the defendant might use digital proof to argue that evidence must be suppressed because it was obtained in violation of the Constitution.Footnote 5 Or a defendant might use digital evidence to attack the non-digital evidence in the government’s case, like a defendant who introduces the cell-site records that show that the government’s witness was not at the scene, or offers the victim’s social media posts to prove that the victim still possessed the property the defendant allegedly stole. What links these examples of supportive defense evidence is that the defense introduces digital proof of its own; it does not just attack the digital proof offered by the government.

In both cases – when the defense aims to attack government digital proof, or when it aims to introduce its own digital proof – the defendant cannot effectively mount a defense without access to and the ability to challenge complex forms of digital proof. Yet, in all too many jurisdictions, the legal system has embraced the government’s use of technological tools to inculpate a defendantFootnote 6 without reckoning with the equivalent needs of the accused.Footnote 7 Baseline principles such as those enshrined in the Fifth and Sixth Amendments to the US Constitution and Article 6 of the European Convention on Human Rights sketch broad rights, but how those rights are actually implemented, and the governing rules and statutes that embody those values, may vary dramatically.Footnote 8 In the United States, criminal defendants have few positive investigatory powers,Footnote 9 and are largely dependent on rules that mandate government disclosure of limited forms of evidence or the backstop of the constitutional rights of due process, confrontation, and compulsory process.Footnote 10

Even when the defense is entitled to certain information, existing legal tools may be inadequate to effectively obtain and utilize it. Criminal defendants must typically rely on either a court order or subpoena to obtain information from third parties, but both of those mechanisms are typically understood as intended for the purpose of presenting evidence at trial, not conducting pre-trial investigation.Footnote 11 And even sympathetic courts struggle to determine whether and how much to grant requests. As one high court observed when addressing a defendant’s request for access to a Facebook post, “there is surprisingly little guidance in the case law and secondary literature with regard to the appropriate inquiry.”Footnote 12

Finally, generally applicable substantive laws may also thwart defense efforts to use technological evidence. For example, privacy statutes in the United States typically include law enforcement exceptions,Footnote 13 but as Rebecca Wexler has observed, those same statutes effectively “bar defense counsel from subpoenaing private entities for entire categories of sensitive information,” and in fact “[c]ourts have repeatedly interpreted [statutory] silence to categorically prohibit defense subpoenas.”Footnote 14

Without robust reconsideration of the rights necessary to empower defendants in each of these endeavors, the digitalization of evidence threatens to bring with it the demise of due process and accurate fact-finding. The first step in articulating these critical defensive rights, however, is to identify and classify the scope of such evidence and its pertinent features. Such analysis serves two purposes. First, it crystallizes the need for robust defense pre-trial rights, including rights to discovery, compelled process, and expert assistance, as well as substantive and procedural entitlements to confront such evidence and mount an effective defense at trial. Second, cataloging these technologies helps point the way toward a comprehensive framework for defense access and disclosure, one that can account for the many subtle variations and features involved in each technology – one that is wholly lacking now.

To facilitate deeper inquiry into the proper scope and extent of the criminal defendant’s interest in digital proof, this chapter presents a taxonomy of defensive use of technological evidence. Section II identifies and provides examples for seven categories of such data: location trackers, electronic communications and social media, historical search or cloud or vendor records, the “Internet of Things” and smart tools, surveillance cameras, biometric identifiers, and analytical software tools. Although the examples in this chapter are drawn primarily from legal cases in the United States, these technologies are currently in broad use around the world. Section III then considers ten separate characteristics that attend these technologies, and how each may affect the analysis of the proper scope of defense access. Section IV concludes.

II A Taxonomy of Digital Proof

The first step in articulating the issues that confound defense access to digital proof is to outline the general categories into which such technologies fall. Of course, digital information is used throughout the criminal justice process, e.g., in pre-trial bail and detention risk assessments and post-conviction at the time of sentencing. This chapter, however, focuses only on the use of such digital evidence to investigate and prove or disprove a defendant’s guilt.

In addition, although it might at first glance be appealing to attempt to draw sharp distinctions between consumer products and forensic law enforcement technologies, those categories prove illusory in this context.Footnote 15 The line between consumer and law enforcement either collapses, or is simply arbitrarily drawn, when it comes to defense investigation. For example, what difference does it make if law enforcement uses surveillance video from a police camera versus security footage from a Ring doorbell camera or a private bank? What does it matter if the facial recognition software is used on a repository of high school yearbooks versus police mugshots? Are questions of access so different when DNA testing was done by a public lab rather than a private lab, or when the search was run in a commercial rather than a law enforcement database?

Even if such a line were drawn, it may be difficult to defend in principle. Suppose law enforcement obtains data from an X (formerly Twitter) account, and then uses proprietary law-enforcement software to perform language analysis of the account. Is that a consumer product or law enforcement tool? Or if law enforcement secretly signs an agreement with a consumer DNA database to enable testing and searches for police purposes, is that a consumer tool or law enforcement tool? All too often, the lines between the two will break down as public–private cooperation increasingly generates evidence pertinent to a criminal case.

Of course, concerns about the reliability of evidence may differ when the evidence derives from a consumer product used by the general public as opposed to a forensic tool used only by law enforcement. Regulatory regimes and market incentives exercise an oversight function for commercial applications, and the financial incentives that ensure reliability for commercial products may be lacking in the law enforcement context. But those safeguards are not a substitute for a defendant’s opportunity to access and challenge technological evidence, because reliability of the government’s proof is not the only value at stake. The defense must have a meaningful right to access or challenge technological evidence, as a means of testing the non-digital aspects of the government’s proof as well as of bolstering its own case. Thus, in taxonomizing digital evidence, this chapter acknowledges but does not differentiate between technologies created and used by general consumers and those created primarily or exclusively by police.

II.A Location Data

The general label “location data” covers a wide array of technological tools that help establish the presence or absence of a person in a particular place and, often, time. Location evidence may derive from mobile phone carriers that either directly track GPS location or indirectly provide cell-site location services, license plate scanning technology, electronic toll payment systems, or even “smart” cars or utility meters that can indicate the presence or absence of persons or the number of persons in a particular space at a particular time.

The use of such technologies to implicate a defendant is obvious. Evidence that a defendant was in a particular location at a particular time may prove that a defendant had access to a particular place, support an inference that the defendant committed an act, or reinforce a witness’s assertions. For example, evidence that shows that the defendant’s cell phone was at the location where a dead body was found can strengthen the prosecution’s identification of the defendant as the perpetrator. But just as such evidence inculpates, so too might it exculpate. A criminal defendant might seek to introduce such evidence to contest the account of a government victim or witness, or to prove bias or collusion by witnesses.Footnote 16

Location data also has supportive defense power, in that it could establish an alibi, prove the presence of an alternative perpetrator, or contradict a line of government cross-examination. A law enforcement officer or witness may be shown to have arrived at the scene after a pivotal moment, or left prior to a critical development. An alleged third-party perpetrator may be proved to have accessed a controlled site, or to have interacted with culpable associates.

The inability of defendants to access such information directly often leaves them reliant upon either the thoroughness of government investigators or the willingness of a court to authorize subpoenas for such information. For example, one police report described cases in which police used license-plate-reading cameras to support each defendant’s claim of innocence, and thus to exonerate individuals from false accusations.Footnote 17 But such open-minded and thorough investigation is not always the norm. In some cases, the government may have little incentive to seek information that contradicts the government’s theory or calls into question the government’s proof.

In Quinones v. United States,Footnote 18 the defendant alleged that his counsel was ineffective for failing to seek location data including both GPS and license plate readings that the defendant argued would support his claim that he had not been residing for months in the location where firearms were found, but rather had only recently visited. The court rejected the claim, stating that the defendant “fails to provide any indication that such evidence even exists, and if so, what that evidence would have revealed,” and that “[a] license plate reader would merely indicate that a certain vehicle was at a certain location at a specific time, but such would not conclusively prove the location of an individual.”Footnote 19 Another court likewise rejected a defendant’s claim that the defense attorney’s failure to seek such information constituted ineffective assistance, reasoning that the defendant had offered “no reason, beyond his own speculation, to believe that the GPS records would have bolstered his defense ….”Footnote 20 The dismissive tone regarding the potential evidence in Quinones is also evident in other cases, such as People v. Wells. In that case, the court dismissed the significance of automatic toll records, noting that “[t]here was no individual camera for the FasTrak lane. These inherent limitations in the underlying videotape evidence made it possible for defendant’s car to pass through undetected ….”Footnote 21

But of course, the very point of investigation is to find information that is not already known, including information that impeaches or contradicts critical witnesses, and to present such evidence, even though it may be equivocal. As one law firm wrote in a post that underscored the importance of location records, obtaining the complainant’s location data aided the firm in convincing the government that the complaint was unfounded.Footnote 22

The point of these cases is not so much that such evidence is always decisive. Rather, they highlight the discrepancy between the ease, even if not unfettered,Footnote 23 with which courts recognize that access to and introduction of such evidence is critical to building a government case, and their dismissiveness toward its importance in mounting a defense. One press report from Denmark, commenting on the revelation that up to 1,000 cases may have been tainted by erroneous mobile geolocation data, errors that precipitated the release of 30 persons from pre-trial detention, noted that the fact that such errors went unchecked is “obviously very concerning for the functioning of the criminal justice system and the right to a fair trial.”Footnote 24 Yet the preceding discussion suggests that a court could well reject a defense request for such information out of hand.

II.B Electronic Communications and Social Media

The advent of mobile devices has changed the manner in which people communicate, and exponentially increased the amount of that communication. As one leading treatise puts it: “E-mail is inordinately susceptible to revealing ‘smoking gun’ evidence.”Footnote 25 Email and text messages comprise a significant fraction of digital records of communication, but social media accounts on platforms such as Facebook, Snapchat, Instagram, and X (formerly known as Twitter) also provide fertile ground for data.Footnote 26 Although criminal defendants typically have access to their own records, historical information including deleted material or material generated by other persons may not be as readily obtainable.

In one high-profile case in England, a man spent three years in prison in connection with a rape allegation. He contended the encounter was innocent, but it was only when his family was able to locate an original thread of Facebook messages by the complainant that it was revealed that she had altered the thread to make the incident appear non-consensual.Footnote 27 In a similar case in the United States, the court dismissed a defense effort to obtain Facebook evidence despite its potential importance to an effective defense.Footnote 28 Such discovery difficulties can occur even for high-profile defendants; the actor Kevin Spacey had trouble obtaining an unaltered copy of the complainant’s cell phone records.Footnote 29

Not every court has disregarded defense requests. In another case, the defendant sought the complainant’s emails in part to dispute the prosecution’s characterization of him as a predatory sadist, but the trial court denied the request, asserting that the defendant could simply “obtain the information contained in the e-mails from other sources, i.e., speaking directly with the persons who communicated with the complainant in these e-mails.”Footnote 30 In reversing, the appellate court observed that the evidence had particular power not only to undermine the prosecution’s depiction of the defendant, but also the complainant’s portrayal as a “naïve, overly trusting, overly polite and ill-informed” person.Footnote 31

Despite the critical role that written communications and correspondence can play as evidence, defendants often have trouble convincing courts of their value and overcoming significant legal hurdles. Ironically, “[t]he greatest challenge may be ascertaining and obtaining electronic evidence in the possession of the prosecution.”Footnote 32 That is because, as with location data, defendants often “must successfully convince the court that without ‘full and appropriate’ pretrial disclosure and exchange of ESI, the defendant lacks the ability to mount a full and fair defense.”Footnote 33

In the United States, access to electronic communications is one of the few areas expressly covered by statutory law, but that law also restricts the defense. Only governmental entities are expressly permitted to subpoena electronic communications from the service provider; other persons are dependent on access to the records from the person who created or received the communication, who may not have the records or be reluctant to share them.Footnote 34 There is also a demonstrated reluctance on the part of social media and other provider companies to support defense cases.Footnote 35 One public defender described Facebook and Google as “terrible to work with,” noting that “[t]he state’s attorney and police get great information, but we get turned down all the time. They tell us we need to get a warrant. We can’t get warrants. We have subpoenas, and often they ignore them.”Footnote 36 And in one high-profile case, Facebook accepted a $1,000 fine for contempt rather than comply with the court’s order to disclose information for the defense, citing its belief that the order contradicted the federal law on stored communications.Footnote 37

Eventually, the California Supreme Court directly confronted the problem of defense access in its decision in Facebook, Inc. v. Superior Court of San Diego County.Footnote 38 In that case, the defendant subpoenaed Facebook to obtain non-public posts and messages made by a user who was also a victim and witness in an attempted homicide case. Articulating a seven-part test for determining when to quash third-party subpoenas, the court also laid out a series of best practices for such requests that included a presumption against granting them ex parte and under seal.Footnote 39 Although the court’s opinion offers a roadmap for similar cases in the future, it is remarkable that the availability of such a critical and important form of evidence remains relatively uncertain in many jurisdictions.

II.C Historical Search, Cloud, Crowdsourced, and Vendor Records

It is not only social media and electronic messaging services that retain records of individuals. A vast network of automated and digital records has arisen documenting nearly every aspect of daily life, including Google search histories, vendor records from companies like Amazon, Find My iPhone searches, metadata stored when files are created, uploaded, or changed, and cloud-stored or backed-up records.

These records are commonly used to establish a defendant’s guilt.Footnote 40 But they might be equally powerful means of exculpating or partially exculpating an accused by identifying another perpetrator, undermining or disputing testimony by a government witness, or bolstering and reinforcing a defense witness, as in the case of a record showing that a phone’s flashlight feature was on, a history of purchases or searches, or crowdsourced data from a traffic app that proves an accident was the fault of a hazard along the roadway.Footnote 41

In one exceptional case, the defendant successfully defeated the charges only after his attorney – at New York’s Legal Aid Society, which unlike most defenders has its own forensic laboratory – was able to retrieve stored data that proved the defendant’s innocence.Footnote 42 The defendant was charged with threatening his ex-wife, but insisted he had in fact been on his way to work at the time. Fortunately, the Legal Aid Society had invested in its own digital forensics lab at the cost of roughly $100,000 for equipment alone. Using the defendant’s cell phone, the defense analyst produced a detailed map of his morning, which established that he was 5 miles from the site of the alleged assault. Software applications like “Oxygen Forensic Detective” provide a suite of data extraction, analysis, and organization tools, for mobile devices, computers, cloud services, and more,Footnote 43 but it is safe to say that there are few if any defenders that could have performed that kind of analysis in-house, and only a handful that could have apportioned expert funds to outsource it.
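To give a sense of the kind of analysis described above, the following sketch, with invented coordinates and timestamps, reconstructs a simple timeline from location points recovered from a device and computes each point’s distance from the alleged scene; it is a hedged illustration, not the workflow of any particular forensic product.

```python
from math import radians, sin, cos, asin, sqrt


def miles_between(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance between two points, in miles (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3959 * 2 * asin(sqrt(a))  # Earth radius approx. 3,959 miles


alleged_scene = (40.7128, -74.0060)   # hypothetical coordinates of the alleged incident
phone_timeline = [                     # hypothetical points recovered from the device
    {"time": "08:02", "lat": 40.7833, "lon": -73.9710},
    {"time": "08:31", "lat": 40.7794, "lon": -73.9632},
]

for point in phone_timeline:
    d = miles_between(point["lat"], point["lon"], *alleged_scene)
    print(f'{point["time"]}: {d:.1f} miles from the alleged scene')
```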

II.D “Internet of Things” and “Smart Tools”

An emerging category of digital records that could be lumped under the prior heading of historical search records arises from the Internet of Things (IoT) and smart tools. This general heading encompasses a broad array of technologies. Some simply record and generate data from commonplace household items and tools, without any real evaluative function, including the following: basic “wearables” that measure one’s pulse or temperature; medical monitoring devices like pacemakers; personal home aids like Siri, Echo, or Alexa; basic automotive data such as speed or mileage indicators; or even “smart” toys, lightbulbs, vacuums, toothbrushes, or mattresses.Footnote 44 These devices record everything from ambient sounds to specific requests, including passive and active biomedical information like weight, respiratory rate, sleep cycles, or heartbeat; and time in use or mode of use.

This category also includes tools that may have true evaluative function, including real-time analysis and feedback. For example, this category includes fully or semi-autonomous vehicles or medical instruments that do not just detect information and record it, but also process and respond to those inputs in real time.

Such information has a range of both inculpatory and exculpatory uses. The government readily accesses such information, and may do so even more in the future. For example, residents in Texas awoke one morning to find that their “smart” thermostats had raised the temperature overnight to ease strain on the power grid during a heat wave.Footnote 45 Used selectively, this technology could aid law enforcement. As one report summarized:Footnote 46

Everyday objects and devices that can connect to the Internet – known as the Internet of Things (IoT) or connected devices – play an increasing role in crime scenes and are a target for law enforcement. … We believe that a discussion on the exploitation of IoT by law enforcement would benefit from the views of a wide spectrum of voices and opinions, from technologists to criminal lawyers, forensic experts to civil society.

In one especially prominent case, James Bates was charged with strangling and drowning a man, based in part on evidence from Amazon Echo and a smart water meter. The water evidence presumably showed a five-fold increase in usage that police said corresponded to spraying down the crime scene.Footnote 47 But after reviewing the Amazon Echo evidence, prosecutors dropped the case, noting that they could not definitively prove that the accused had committed the murder.Footnote 48 In Bates’ case, it was not clear that either the government or the defense could easily access the evidence, as Amazon initially refused its release to either party, but relented when Bates agreed to allow government access. In another case, investigators again sought Amazon Echo data in connection with a homicide; tellingly, the defense asked “to hear these recordings as well,” as they believed them exculpatory.Footnote 49

Some data within this category may exclusively be held in the defense’s hand. For example, fitness or health data is often preserved on the user’s own devices, and thus could be shared with defense counsel without seeking the permission of either the government or the vendor. In one case, police used data from a complainant’s Fitbit to determine that the allegations were false,Footnote 50 and in another, the government relied on the readings from a suspect’s health app to document activity consistent with dragging the victim’s body down a hill.Footnote 51 But it is just as easy to imagine that the accused might seek to introduce such evidence to show that they had a heart rate consistent with sleep at the time of a violent murder, or that sounds or images from the time of an incident contradict the government’s claim.

But of course, in other cases, the data will not be the defendant’s data. The accused may seek data from devices owned or operated by a witness, decedent, victim, or even an alleged third-party perpetrator. In the Bates case described above, Bates claimed to have gone to sleep, leaving the decedent and another friend downstairs in the hot tub. The friend claimed to have left just after midnight, and his wife corroborated that claim, thus ruling him out as a suspect. But suppose evidence from a device contradicted those claims? Perhaps the friend’s fitness tracker showed that in fact his heart had been racing and he had been moving around vigorously exactly around the time of the murder? Or maybe his “smart” door lock or lighting system would show he arrived home much later than he had claimed. Obtaining such evidence may be difficult for law enforcement, but it is all but impossible for the defense. Again, to quote one report, “[i]n criminal investigations, it is likely that the police will have access to more information and better tools than the witness, victim or suspect.”Footnote 52

II.E Surveillance Cameras and Visual Imagery

The overwhelming presence of surveillance tools in contemporary society makes visual imagery another critical source of digital defense evidence. In some cities, cameras record nearly every square inch of public space, and are particularly trained on critical areas such as transportation hubs or commercial shopping areas. There have even been reports of the use of drones to conduct domestic policing surveillance in the United States, and a federal appeals court recently ruled a municipality’s “spy plane” surveillance unconstitutional.Footnote 53 Private cameras also increasingly record or capture pertinent information, as homeowners use tools like Nest or Ring and businesses install security systems. Individuals may also advertently or inadvertently generate visual records, such as the tourist snapping photos who accidentally captures a robbery, the film crew that unknowingly records a linchpin piece of evidence, or the citizen-journalist who records a police killing.Footnote 54

In perhaps one of the most dramatic examples – so dramatic it inspired a documentaryFootnote 55 – a man charged with capital murder was able to exonerate himself using media footage that established his alibi.Footnote 56 Juan Catalan was charged with murdering a witness who planned to testify against his brother in a separate case, based on the testimony of an eyewitness to the killing. But Catalan explained that he had attended a baseball game at Dodger Stadium on the night of the killing. The prosecutor didn’t believe him, but his defense attorney did, and with permission of the stadium he examined all the internal camera footage from the game that night. Although none of that footage turned up evidence of Catalan’s presence, Catalan recalled that a film crew had been present that night, gathering footage for a popular television show. The show producers allowed the attorney to review their material, which revealed images of Catalan that corroborated his account. Based on that evidence, and cell phone records placing him at the stadium, the case was dismissed.

In Catalan’s case, the defense was able to secure voluntary cooperation from private entities: the stadium, the television producers, and the cell phone company. But what if material is held by an entity that does not willingly share its data? For example, in many localities, law enforcement operates the public surveillance cameras, i.e., the very persons who are accusing the defendant of the offense. For good or bad reasons, law enforcement may act as gatekeepers of the data, but when they deny defense requests in order to protect privacy interests, they privilege their own assessment of the relevance of such data or simply act in self-interest to safeguard their case from attack.

In an example from New York, a defense attorney subpoenaed surveillance footage held by the New York Police Department to corroborate the accused’s exculpatory account. The prosecutor reluctantly disclosed a portion of the video, claiming that it was obligated to disclose video only as required by its statutory discovery obligations and the Brady rule,Footnote 57 but the defendant asserted an “independent right to subpoena video that will exonerate her.”Footnote 58 The court, reviewing the arguments, stated:Footnote 59

[S]ince the inception of this case, the defense forcefully and persistently attempted to obtain surveillance footage that had the potential to “undercut” the complainant’s claims and to corroborate his client’s claim that she was not present at nor involved in any criminal activity … The defense, however, in contrast to Cruz, could not simply subpoena this potentially exculpating evidence because the footage was held by the NYPD …. Here, the defense compellingly argued that if immediate action was not taken, the recordings, which are maintained by the NYPD’s VIPER Unit for a period of no more than 30 days, would be destroyed.

Ultimately ruling on various motions in the case, the court held that the US Constitution and state laws supported the court’s preservation order, even if the state’s discovery rules did not.Footnote 60 But the closeness of the fight demonstrates the extent to which the defense must overcome significant hurdles to access basic information.

II.F Biometric Identifiers

Biometric identifiers are increasingly used for inculpatory proof, but they also can exculpate or exonerate defendants. Biometrics include familiar techniques such as fingerprinting and blood typing, but also more sophisticated or emerging methods like probabilistic DNA analysis that relies on algorithms to make “matches,” iris scanning, facial recognition technologies, or gait or speech analysis.

This category of digital proof may be the most familiar in terms of its exonerative use and thus perhaps requires the least illustration. The Innocence Project sparked a global movement to use DNA testing to free wrongfully convicted persons.Footnote 61 But biometric identifiers might also be used by the defense more generally, such as to identify eyewitnesses or alternate suspects, or to bolster the defense. In a disputed incident, biometric evidence, e.g., DNA on the couch but not the bed, might support the defense version of events over that of the prosecution.

Because of the particular power of DNA, there has been extensive legal analysis of the myriad hurdles the defense faces in preserving, obtaining, and testing physical evidence,Footnote 62 including the fact that physical evidence is typically in the hands of the government and that the tools and expertise required to analyze it may exceed the reach of even well-resourced defense counsel.

II.G Analytical Software Tools

The final category of digital proof overlaps in many ways with the preceding groups, and focuses primarily on the evaluative data generated by machines. The label “analytical software tools” generally describes computer software that is used to reach conclusions or conduct analyses that mimic or exceed the scope of human cognition.

By way of example, prosecutors often rely on artificial intelligence (AI) and machine learning to identify complex patterns or process incomplete data. Perhaps the most common form of such evidence is found in the “probabilistic genotyping systems” used by the government to untangle difficult or degraded DNA samples. An accused could likewise marshal those tools defensively, either to challenge the system’s interpretation of evidence or to uncover supportive defense evidence.

Defense teams have often sought access to the algorithms underlying probabilistic genotyping software that returns inculpatory results used by the prosecution. But companies typically refuse full access, raising trade secret claims that are accepted uncritically by courts.Footnote 63 Such software might also be sought by the defense for directly exculpatory reasons, not just to call into question the accuracy of the government’s approach,Footnote 64 but also to demonstrate that a different party perpetrated the offense or that another interpretation of the evidence is possible.Footnote 65

DNA profiles are not the only targets for analytical software. Evidence from facial recognition software can cast new light on grainy surveillance video,Footnote 66 as can speech pattern analysis on an unclear audio recording. As one commentator explains:Footnote 67

… in a blurry surveillance video or an unclear audio recording, the naked eye and ear may be insufficient to prove guilt beyond a reasonable doubt, but certain recognition algorithms could do so easily. Lip-reading algorithms might tell jurors what was said on video where there is no audio available. A machine might construct an estimation of a perpetrator’s face from only a DNA sample, or in other DNA analysis of corrupted samples.

Of course, the same could be true for the defense. Software could corroborate a defense claim or undermine the credibility of a government witness. It could also aid the defense in identifying other witnesses to the event or alternative perpetrators. In a case where inculpatory evidence was seized from a computer, the defense successfully argued for suppression of the evidence by obtaining information about how the software used to search the computer worked, thereby showing the search exceeded its permissible scope.Footnote 68

III Characteristics of Digital Proof

The preceding section provides a general overview of the digital shift in evidence in criminal prosecutions, and identifies the ways in which such evidence might likewise be critical to the defense. And as the preceding section demonstrates, it is not the case that the data’s reliability is a defendant’s only concern. The defense also requires access to digital proof for the same reasons that the government does – because digital evidence can help find or bolster witnesses, establish a critical fact, or impeach a claim. But the preceding part also reveals just how little guidance exists, either in formal rules or in judicial opinions, for those attempting to craft a meaningful defense right to access and use this material. This part identifies the critical questions that must be answered, and values that must be weighed, in devising a regime of comprehensive access to such information for defensive or exculpatory purposes.

1. Who “owns” and who “possesses” the data. Perhaps the most important question with regard to technological forms of evidence is who owns and who possesses the data. Ownership is critical because, as with physical items, the right to share or disclose data often rests in the hands of the owner. Possession is also critical because, as with physical items, a possessor may disclose information surreptitiously without permission or knowledge of the owner.

The most straightforward cases involve physical items owned and possessed by the accused or a person sympathetic to the accused’s interest, in which the data is stored locally in the instrument. Such might be the case for a security camera owned by the defendant, or an electronic device with stored files. In these cases, the owner can make the evidence available to the defense.

But many technological forms of evidence will not be accessed so simply. As a general matter, ownership might be in public, governmental hands, such as police or public housing surveillance footage, or private hands, such as a private security camera. Even the category of private ownership is complex – ownership may be as simple as belonging to a private individual, or as complex as ownership held by a publicly traded large corporate entity. In some cases, ownership may even cross categories. Digital information in particular may have multiple “owners,” e.g., a user who uploads a picture to a social media site may technically own the intellectual property, but the terms of service for the site may grant the site-owner a broad license to use or publicize the material.Footnote 69

When possession is divorced from ownership, a new suite of problems arises. Even when an owner is sympathetic or willing to share information with the defense, access may nonetheless be thwarted by an entity or person in possession of the data. Possession may also make access questions difficult because the reach of legal process may not extend to a physical site where information is kept. Possessors may also undermine the right of owners to exclude, e.g., if a security company grants visual access to the interior of a home against the wishes of the owner.

In both cases, obtaining data from third-party owners or possessors runs the further risk of disclosing defense strategies or theories. The government may have the capacity to hide its use of technology behind contracts with non-disclosure clauses or vague references to “confidential informants.” Human Rights Watch has labeled this practice “parallel construction” and documented its use.Footnote 70 But a criminal defendant may not be able to operate stealthily, relying instead on the goodwill of a third party not to disclose the effort or a court’s willingness to issue an ex parte order.

2. Who created the data. In many cases, answering questions of ownership and possession will in turn answer the question of creation. But not always. An email may be drafted by one person, sent to a recipient who becomes its legal “owner,” and then possessed or stored by a third party. Or an entity may “create” information or data by collecting or analyzing material owned or possessed by another, e.g., a DNA sample sent for processing through analytical software; the data is created by the processing company, and the physical sample tested may be “owned” or possessed by the government.

Data created by the defendant perhaps poses only ancillary obstacles when it comes to a defendant’s access to information. If anything, a defendant’s claim to having created data may bolster their claim to access, even if they neither own nor possess it.Footnote 71 Some jurisdictions even specifically bestow upon an individual the right to access, correct, and delete data.Footnote 72 But the diffusion of claims to data may also complicate rules for access and use by the defense. Imagine a piece of technology or evidence possessed by one party, owned by another person, and created by still another person – with disputes between the parties about whether or not to release the information.

3. For what purpose was the data created. Another factor that must be considered in contemplating defense access is the source of the data and the purpose for which it was created. Much of the information that the defense may seek to access for exculpatory purposes will have likely been created for reasons unrelated to the criminal matter. For example, an automated vacuum may record the placement of the furniture in the room so that it can efficiently clean, but such data might be useful to show the layout at the time of the robbery. Or a search engine may store entered searches to optimize results and targeted advertising, but the record may suggest that a third party was the true killer. Such purpose need not be singular, either. The person searching the internet has one goal, but the internet search engine company has a different objective. The critical point is that the reason the information is there may help shed light on the propriety of defense access.

By way of example, the most compelling case for unconstrained access to the defense might be for data that was created specifically for a law enforcement purpose. In the national security context in the United States, a statutory frame exists to resolve some of these claims.Footnote 73 Conversely, the most difficult case for access might be for information privately created for personal purposes unrelated to the criminal case. Although no single factor should determine the capacity and scope of access, the extent to which data or information is created expressly with a criminal justice purpose in mind may shed light on the extent to which such information should also be made accessible to an accused.

4. With what permissions was the data created. A related point arises with regard to how much of the general public is swept into a data disclosure, and the extent to which participants implicated by its disclosure are aware of the risks posed by broader dissemination. Open access to surveillance footage is troubling because it has the potential to implicate the privacy rights of persons other than the accused, who have no relation to the crime and who may not even know that they appear in the footage. Although we might tolerate those rights being compromised when it is only law enforcement who will access the information, or when used by a private operator with little incentive to exploit it, giving access to defense attorneys may create cause for concern. Persons in heavily policed neighborhoods may fear that, after viewing an image in surveillance, attorneys will be incentivized to accuse a third party of the crime simply out of expedience rather than in good faith. But such concerns may be minimized when creators or owners voluntarily provide data to law enforcement for law enforcement purposes.

5. How enduring or resilient is the data and who has the authority to destroy it. A central concern about defense access to data is that, without prompt and thorough access, such data will be destroyed before the defendant has a chance to request its preservation, if not disclosure. Some forms of evidence may be incredibly resilient. For example, cloud computing services or biometric identifiers of known persons may be highly resilient to destruction or elimination.

But other forms of data may be transient in nature, or subject to deliberate interference by an unwilling owner or holder of the data. Surveillance cameras notoriously run on short time loops, automatically erasing and overwriting data in limited increments. Social media or other sites may promise total erasure of deleted material, not just superficial removal from a single device.

Meaningful defense access to technological tools for exculpatory purposes requires attentiveness to timing, such that a defendant is able to access the material before its destruction. Even if the entitlement extends no farther than preservation, with actual access and use to be decided later, that would significantly impact a defendant’s capacity to make use of this information.

6. The form, expertise, and instrumentation required to understand or present the data. Generally speaking, evidentiary form is likely to be a less pertinent consideration in any framework for defense access than are questions related to ownership or possession. What does it matter whether the data sits on a hard drive or a flash drive? What matters is who owns it and who has it.

Nevertheless, any comprehensive scheme for meaningful defense access must consider form inasmuch as certain forms at the extreme may entail greater or lesser burdens on the party disclosing the information. Data diffused over a large and unsearchable system may provide important information to a defense team, but even turning over that data may present a significant challenge to its holder. In the Catalan case above, the surveillance video that exonerated the defendant was physically held by the stadium officials and the production company. Fortunately, the footage was rather confined, as it covered one day and one game. It was the defense attorney who pored through it, isolating the exculpatory images.

But what if the records go beyond a single episode, or require special instrumentation to interpret? Information that requires a holder to devote significant time or resources to make the data available, or that is not readily shareable or accessible without expertise or instrumentation, may pose much more significant hurdles to open defense access. Some defense claims may actually be requests for access to services, rather than disclosure of information. For example, a defense request to run a DNA profile in the national database or to query a probabilistic genotyping system with a different set of parameters is less about traditional disclosure than about commandeering the government’s resources to investigate a defense theory. The same could be true for location data from a witness’s phone or search records from a particular IP address. The sought information is less an item than a process, a process to be conducted by a third party, not the defense.

7. What are the associated costs or expenses, and are there even available experts for the defense. A critical logistical, if not legal, hurdle to defense access to digital and technological evidence is the cost associated with seeking, interpreting, and introducing such evidence. Most of the forms of evidence described require some degree of expertise to extract, interpret, and understand, much less to explain to a judge, attorney, or juror. To the extent that the information also seeks an operational process or other search measure, the owner or possessor of the information may justly charge a fee for such services. Even more troubling, some vendors may restrict access to the government, or there may not be an available defense expert to hire given the lack of a robust market.

In this way, cost alone can preclude equitable access. For example, even assuming the defendant could get access to the probabilistic software used to interpret a complicated DNA crime sample, and assuming the vendor who contracted with the government would agree to run a defense query, the vendor may nonetheless charge for the service. Routine costs like copying fees or hourly rates can quickly put even routine investigative efforts beyond the capacity of a criminal defense lawyer, as the vast majority of defendants are indigent and there may be insufficient public funds available or such funds may be jealously guarded by judicial officials.Footnote 74

The introduction of this evidence also may require payment of expert fees so that the attorney is able to understand and clearly present the findings. Such costs can make defense lawyers reluctant even to pursue exculpatory evidence, because actually obtaining and using it appears insurmountable.

One still more troubling possibility is that some subpoenaed parties will actively choose to defy orders rather than comply. As discussed above, social media companies Facebook and Twitter (now X) both refused to turn over posts requested by the defense in a criminal case, leading the judge to hold them in contempt and fine each $1,000 – the maximum allowed under the law.Footnote 75 With fines capped statutorily, a company wealthy enough or unlikely to be a repeat player might simply choose non-compliance.

8. What are the privacy implications of divulging the data. Perhaps the most apparent and central concern raised by defense access to digital data relates to privacy. Concerns about the nature of the material sought, the scope of what it reveals, and the number of persons implicated by defense disclosure are perhaps rivaled only by concerns about unnecessary “fishing expeditions” or wasted resources as grounds for reluctance to provide generous access to the defense. Whereas the government is bound to act in the interest of the public, and thus in theory should minimize harm to innocent third parties in the course of its investigations, the defense is entitled to act only in furtherance of the interest of the accused.

At one end of the spectrum, some technological evidence will reveal deeply private or personal data belonging to a person wholly unrelated to the criminal offense. The DNA sequence or entire email history of a witness, or extensive public surveillance of a small community, obviously implicate profound interests. But at the other end of the spectrum, discrete bits of information created by the defendant him- or herself pose little concern when sought by the same defendant.

And of course, non-digital forms of evidence can raise the same concerns. As such, there are already mechanisms available to limit the privacy impact of revealing information to the defense. In prosecution investigations, it is not unusual to have a “taint team” that reviews sensitive information and passes along only the incriminating material to the prosecutor in the case. Or a judge can take on the responsibility to review material in camera, i.e., outside of the view of the parties and their attorneys, and disclose only evidence that is relevant to the defense.

In short, privacy is understandably a central and driving concern, but it should not be a definitive reason to close the door on broad defense access to exculpatory or defensive material.

9. What legal restrictions, whether substantive, procedural, or jurisdictional, limit access or use. The final critical inquiry incorporates some aspects of the privacy concerns just discussed, but goes beyond them. Namely, any comprehensive effort to provide defense access to digital and technological evidence must square with existing legal regimes surrounding disclosure and use of such evidence, whether as a matter of comprehensive or targeted privacy laws, intellectual or physical property, trade secret, or evidence. At a basic level, the data may straddle jurisdictions – created in one place, processed in another, and then used somewhere else. Legal restrictions may also be loosely lumped into substantive and procedural limitations, differentiating between substantive constraints such as privacy laws and procedural impediments such as jurisdictional rules.

Background statutory regimes that may conflict with defense access are imperative to consider, because jurisdictions increasingly have adopted such restrictions in response to complaints about privacy.Footnote 76 Although law enforcement is routinely afforded exceptions to privacy statutes,Footnote 77 there is rarely any mention of any equivalent route of access for a criminal defendant.Footnote 78 Moreover, even outside the realm of privacy law, there may be other statutory limitations on disclosure or access, such as legal non-disclosure agreements. Or jurisdictions may point to regulatory regimes aimed at reliability as sufficient to safeguard all of the defendant’s interests.Footnote 79

IV Conclusion

Digital proof is here, and it is here to stay. Such proof has already assumed a prominent place in the prosecution of criminal suspects. But all too often, the ability of the defense to access and utilize such evidence depends on happenstance rather than formal right. By cataloging and characterizing this critical form of proof, the chapter hopes to support efforts to formalize and standardize a defendant’s ability to marshal defense evidence for exculpatory and adversarial purposes as readily as the government does to inculpate.

10 Data as Evidence in Criminal Courts
Comparing Legal Frameworks and Actual Practices

Bart Custers and Lonneke Stevens
I IntroductionFootnote *

Technology has rapidly changed our society over the past decades. As a result of the ubiquitous digitalization of our society, people continuously leave digital traces behind. Some have already referred to this as “digital exhaust.”Footnote 1 People are often monitored without being aware of it, not only by camera surveillance systems, but also by their own smartphones and by other devices they use to access the internet.

Information about the whereabouts, behavior, networks, intentions, and interests of people can be very useful in a criminal law context. It is used mainly for guiding criminal investigations, as it may provide clues on potential suspects, witnesses, etc., but it can also constitute evidence in courts, as the data may confirm specific actions and behavior of actors. In other words, digital data can be used to find out exactly what happened, understood in the legal context as finding the truth, and try to prove what happened, understood in the legal context as providing evidence. This chapter focuses on the use of digital data as evidence in criminal courts. The large amounts of potentially useful data now available may cause a shift in the types of evidence presented in courts, in that there may be more digital data as evidence, in addition to or at the cost of other types of evidence, such as statements from suspects, victims, and witnesses.Footnote 2

However, in many jurisdictions, the legal provisions setting the rules for the use of evidence in criminal courts were formulated long before these digital technologies existed. As a result of ongoing technological developments, there seems to be an increasing discrepancy between legal frameworks and actual practices. The chapter investigates this disconnect by analyzing the relevant legal frameworks in the European Union for processing data in criminal courts and then comparing and contrasting these with actual court practices.

The relevant legal frameworks are criminal law and data protection law. Data protection law is mostly harmonized throughout the European Union, via the General Data Protection Regulation (GDPR)Footnote 3 and by regulation more specifically tailored to the criminal law context, via Directive 2016/680, also known as the Law Enforcement Directive (LED).Footnote 4 Criminal law, however, is mostly national law, with limited harmonization throughout the European Union. For this reason, criminal law is considered from a national perspective in this chapter. Criminal law in the Netherlands is taken as an example to illustrate the issues that may arise from using data as evidence in criminal courts.

Although Dutch criminal law may not be representative of all EU Member States, the discrepancies between EU data protection law and Dutch criminal law may be similar to those in other EU Member States. As such, the Netherlands may serve as a helpful example of how legal provisions dealing with the use of evidence in criminal courts are not aligned with developments in data as evidence.

We also think that reviewing the use of data as evidence in courts in the Netherlands may be interesting for other jurisdictions, because it can provide some best practices as well as identify caveats and pitfalls that can perhaps be avoided in other countries. We see two major arguments supporting such a claim. First, the issues of using data as evidence in courts are likely to be the same across Europe, as the technologies available are not confined to particular jurisdictions. This point also applies to the forensic standards that are applied, as these also have an international scope and nature, either because they are established by international standardization organizations such as ISO,Footnote 5 CEN-CENELEC,Footnote 6 and ETSI,Footnote 7 or, if created on a national level, are at least aligned among forensics experts from different countries. Second, the legal frameworks for using data as evidence in courts are highly comparable. This is particularly the case for data protection law, which is highly harmonized across the European Union. Criminal law may not be harmonized to the same extent across the European Union, but the norms and standards for evidence and fair trial are fleshed out in large part by the European Convention on Human Rights (ECHR) and Court of Justice of the European Union (CJEU) case law. All this means that the basic situation regarding technology and forensic practices and the relevant legal boundaries is more or less the same across the European Union, although national interpretations and practices within these confines may vary.

There are two other reasons to use the Netherlands as an example in this chapter, both related to the fact that the Netherlands is in the forefront of relevant regulation. First, international legal comparisons show that the Netherlands is a front runner in privacy and data protection law in several aspects.Footnote 8 The Netherlands implemented national legislation with higher levels of data protection than strictly necessary for compliance with EU data protection laws. Typical examples are data breach notification laws and mandatory privacy impact assessments that already existed in the Netherlands before the GDPR came into force in 2018.Footnote 9 Also, when looking at the criminal law context, the Netherlands was among the first countries to have specific acts for the police and the judiciary dealing with the processing of personal data in criminal law, long before EU Directive 2016/680 (the LED, see section III.C) came into force.Footnote 10 If there exists a disconnect between legal frameworks and actual practices with regard to data as evidence in criminal courts in a country that seems to be a regulatory front runner, in this case the Netherlands, similar problems may also exist in other EU Member States.

Second, the Netherlands is among the front runners in digital forensics and cybercrime legislation.Footnote 11 The Netherlands was among the initiators of the Convention on Cybercrime, adopted by the Council of Europe in 2001, which includes provisions that relate to the processing of police data.Footnote 12 This Convention regulates, among other things, the protection of personal data and international cooperation, including the exchange of personal data in criminal law cases between authorities of different countries. Also, the Netherlands ratified a series of legal instruments that aim to advance the cooperation and sharing of information between Member States, such as the Prüm TreatyFootnote 13 (for exchanging DNA data, fingerprints, and traffic data), the Schengen Information SystemFootnote 14 (for international criminal investigation information), the Visa Information SystemFootnote 15 (for visa data, including biometrical data), and the Customs Information SystemFootnote 16 and EurodacFootnote 17 (for fingerprints of asylum seekers and stateless people). The institutional regulations for Europol, Eurosur, and Eurojust contain provisions for the exchange of criminal law information between Member States.

In short, the Netherlands appears to be among the first countries in the European Union to develop both privacy and data protection legislation and digital forensics and cybercrime legislation. This is relevant because, if a disconnect between legal frameworks and actual practices with regard to data as evidence in criminal courts exists in a country at the forefront of regulation, in this case the Netherlands, similar problems can be expected in other EU Member States.

In the Netherlands, a founding member of the European Union and its predecessors, there has been an extensive debate in society and in politics on how to balance using data in a criminal law context and protecting the right to privacy.Footnote 18 This debate has influenced the legal frameworks that regulate the use of data in criminal law. There are competing legal frameworks regulating this area: on the one hand, criminal law, including both substantive and procedural criminal law, and, on the other hand, privacy law, more specifically data protection law. It is important to note that both legal frameworks provide rules for allowing and restricting the use of personal data in criminal law, as sometimes there is a misunderstanding that criminal law would only or mainly allow the collection and processing of data, whereas data protection law would only or mainly restrict such data collection and processing.

The focus of this chapter is the discrepancy between legal frameworks and actual practices. First, the relevant legal frameworks for processing data in Dutch criminal courts are analyzed, i.e., Dutch criminal procedure law and EU data protection law. After this legal analysis, current court practices are examined, mainly by looking at typical case law and current developments in society and technology.

This chapter is structured as follows. Section II provides a brief general introduction to Dutch criminal procedure law. Section III provides a brief general introduction to EU data protection law and to some extent its implementation in Dutch data protection law, focusing on the GDPR and the LED respectively. Section IV investigates the actual use of evidence in Dutch criminal courts by focusing first on current court practices as reflected in case law, and second on current developments in society and technology. Section V compares current court practices with the developments in society and technology, in order to see whether there is a need to change court practices or the underlying legal frameworks.

II Criminal Procedure Law: The Example of the Netherlands

As the Netherlands is used as an example of national law in this chapter, some background information is provided regarding Dutch criminal law. The Dutch Code of Criminal Procedure (Dutch CCP)Footnote 19 dates back to 1926. Back then, the Code was characterized as “moderately accusatorial” since it introduced more rights for the defense than before that time.Footnote 20 Today, however, the suspect remains to a large extent the object of investigation, rather than, e.g., the victim, whose position has become increasingly important in Dutch criminal law in recent decades.Footnote 21 This is especially the case in the stages of police investigation, before the start of the trial. Although over the years more possibilities for the defense to influence the earlier investigation were introduced, such as the right to contra-expertise during police investigation in Article 150b of the Dutch CCP, the defense and the prosecutor are far from equal parties. Basically, the room for maneuver for the defense largely depends on the prosecutor’s goodwill, as it is the prosecutor who leads the criminal investigation.Footnote 22 A more accurate description of Dutch criminal procedure would therefore be “moderately inquisitorial.”Footnote 23

Fundamental to the position of the defense is the right to silence in Article 29 of the Dutch CCP. Rights and principles such as the privilege against self-incrimination, the equality of arms, and the presumption of innocence are not explicitly laid down in the Dutch CCP. They apply, however, directly to Dutch criminal procedure through Article 6 of the ECHR.

The Dutch CCP has been amended and supplemented many times since its creation in 1926. As a result, the Dutch CCP now looks more like a patchwork than a structured and clear-cut Code. This is also one of the reasons that the legislator started the major, still-running project “Modernisation Criminal Procedure” (Modernisering Strafvordering) in 2014. This revision of the legislation was not finished as of 2023, and it will take several more years to complete. The idea is to revise the Dutch CCP in order to make criminal procedure, among other things, more accessible and efficient.Footnote 24 Another aim of the revision is to tackle one of the greater challenges criminal procedure faces nowadays: keeping up with technological developments in criminal investigation practice and developing an overall framework for regulating criminal investigation in the digital era. The Dutch CCP is still very much an analog-style Code that regulates the searching of homes, the seizure of letters, wiretapping, the questioning of witnesses, etc. Various digital investigation methods can be conducted on the basis of existing powers, e.g., a computer that was seized in a home can be searched just like a diary or a pistol that was seized in a home,Footnote 25 and several new digital investigation methods have been laid down in the Dutch CCP, e.g., the network search of Article 125j of the Dutch CCP or the hacking powers in Article 126nba of the Dutch CCP,Footnote 26 but many digital methods are still unregulated. Awaiting legislation, some gaps have been filled provisionally by the Supreme Court, in cases where the defense questioned the legitimacy of certain methods. One important discussion concerns the legitimacy of searching a smartphone that was seized from a suspect after arrest. In 2017, the Supreme Court ruled that the general power of a policeman to “seize and search objects the suspect carries with him when arrested” in Articles 94 and 95 of the Dutch CCP can be the basis of a smartphone search under the condition that the infringement on the right to privacy remains limited.Footnote 27 In cases where the infringement exceeds a limited search, such a search should be conducted or authorized by the public prosecutor. When it is foreseeable that the privacy infringement will be “serious” (zeer ingrijpend), the investigatory judge needs to be involved.

The smartphone ruling of the Supreme Court needs to be understood from the perspective of the procedural legality principle that is laid down in Article 1 of the Dutch CCP. This article states that criminal procedure can only take place as foreseen by law,Footnote 28 which means that the police cannot use investigation methods that infringe fundamental rights unless those methods are explicitly grounded in a sufficiently detailed statutory investigation power. However, investigation methods that are not explicitly regulated in the Dutch CCP, like the seize and search powers in Articles 94 and 95 of the Dutch CCP mentioned above, and that only cause minor infringements, can be based on Article 3 of the Police Act.Footnote 29 This Article contains the general description of the task carried out by the police: “it is the task of the police to maintain the legal order in accordance with the rules and under the subordination of the competent authority.”Footnote 30 In case law, several digital investigation methods have been found to constitute only a minor infringement and therefore did not need to be explicitly regulated.Footnote 31 For example, sending stealth text messagesFootnote 32 to someone’s cell phone can in principle be based on the general police task description, except when this is done for such a period or with such frequency and intensity that a complete picture of certain aspects of someone’s private life is revealed.Footnote 33 The smartphone case, in which a very general power to seize was found to be a sufficient statutory basis for a limited smartphone search, builds on this settled case law.Footnote 34 In its legislative draft on digital investigation, the “Modernisation” legislator has incorporated the so-called “pyramid structure” of the smartphone case, i.e., the distinction between limited, more than limited, and serious intrusions. A larger privacy infringement demands a higher approval authority, so instead of the police, a prosecutor or investigatory judge is required. Also, limited intrusions do not have to be explicitly regulated, while more than limited and serious intrusions are in need of more detailed and stringent legislation. To distinguish between the different levels of privacy intrusion, the legislator uses the concept of “systematicness” (stelselmatigheid).Footnote 35 This means that, e.g., a “foreseeably systematic” computer or network search can be ordered by the public prosecutor, while a “foreseeably seriously systematic” computer or network search also needs a warrant from the investigating judge.Footnote 36 The same regime applies to research in open sources.Footnote 37 The post-smartphone case law already demonstrates that the category of seriously systematic is almost non-existent in practice.Footnote 38 Although the introduction of the pyramid structure is also based on the practical premise that the investigating judge should not be overburdened within the context of digital investigations, this does raise serious concerns about the level of legal protection.

III Dutch and EU Data Protection Law
III.A GDPR and LED

In 2016, the European Union issued the final text of the GDPR, revising the EU legal framework for personal data protection. This legislative instrument, well known throughout the European Union, is directly binding for all EU Member States and their citizens.Footnote 39 To a large extent, the GDPR carried over the contents of the 1995 EU Data Protection Directive that it replaced, most notably the so-called principles for the fair processing of personal data. Nevertheless, the GDPR, which came into force in May 2018, received a lot of attention, probably due to the significant fines introduced for non-compliance. The European Union also issued, with comparatively little fanfare, Directive 2016/680, on protecting personal data processed for the purposes of law enforcement.Footnote 40 This much less well-known directive, referred to as the LED, can be considered a lex specialis for the processing of personal data in the context of criminal law. It had to be implemented into the national legislation of each EU Member State by May 2018, coinciding with the date the GDPR came into force.

III.B The GDPR

Since the GDPR is directly binding for all Member States and their citizens, strictly speaking no further implementation is required. Nevertheless, some countries, including the Netherlands,Footnote 41 adopted national legislation to further implement the GDPR. The GDPR allows EU Member States to elaborate on provisions that leave room for additional rules at a national level.

The scope of the GDPR is restricted to personal data, which is defined in Article 4(1) as any information relating to an identified or identifiable natural person (the data subject). This excludes anonymous data and data relating to legal persons. Data on deceased people is not personal data and therefore beyond the scope of the GDPR.Footnote 42 For collecting and processing personal data, there are several provisions that data controllers have to take into account. First of all, all processing has to be lawful, fair, and transparent under Article 5(1). Furthermore, the purposes for which the data are collected and processed have to be stated in advance (purpose specification), the data may not be used for other purposes (purpose or use limitation), and data may only be collected and processed when necessary for these purposes (collection limitation or data minimization). Data has to be accurate and up to date (data quality). When data is no longer necessary, it has to be removed (storage limitation). The data needs to be processed in a way that ensures appropriate security and has to be protected against unlawful processing, accidental loss, destruction, and damage (data integrity, confidentiality). Furthermore, the data controller is responsible for compliance under Article 5(2) (accountability).

Data subjects have several so-called data subject rights regarding their personal data under the GDPR, including a right to transparent information on the data collected and the purposes for which it is processed (Articles 12–14), a right of access to their data (Article 15), a right to rectification (Article 16), a right to erasure (Article 17), a right to data portability (Article 20), and a right not to be subject to automated decision-making (Article 22).

The GDPR is relevant in a criminal law context for all data controllers that are not within the scope of the LED. For example, private investigators and government agencies in the migration domain are subject to the GDPR. Also, when companies apply camera surveillance or other technologies that collect personal data, the data collected and processed are subject to the GDPR. As soon as the police or the public prosecution service request such data for criminal investigation, the data comes within the scope of the LED rather than the GDPR.Footnote 43 Law enforcement agencies can request data from individuals and companies at any time during a criminal investigation, but handing over such data is on a voluntary basis. It is only when law enforcement agencies have obtained a court warrant that handing over the data is mandatory. If relevant, any such information may be used as evidence in court cases.

III.C The Law Enforcement Directive (LED)

In 2012, the European Commission presented the first draft of a Directive that would harmonize the processing of personal data in criminal law matters.Footnote 44 The debate regarding the Directive between the European Parliament, the Commission, and the Council continued for four years. After amendments, the legislative proposal was adopted in 2016, in its current version as EU Directive 2016/680 (the LED). The deadline for implementation in national legislation was two years, with a final deadline in May 2018. Directive 2016/680 repealed the Framework Decision 2008/977/JHA as of that date.

The aim of the LED is twofold. It ensures the protection of personal data processed for the prevention, investigation, detection and prosecution of crimes, and the execution of criminal penalties. It also facilitates and simplifies police and judicial cooperation between Member States and, in general, more effectively addresses crime. This two-pronged approach is similar to that of the GDPR and the Framework Decision.

The LED is a data protection regime alongside the GDPR. The LED specifically focuses on data processing by “competent authorities,” as defined in Article 3(7). Competent authorities include:

  1. (a) any public authority competent for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security, or

  2. (b) any other body or entity entrusted by Member State law to exercise public authority and public powers for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security.

Perhaps the most obvious competent authorities are police forces and public prosecution services, but there may be a variety of competent authorities in the national criminal law of EU Member States. For example, in the domain of execution of criminal penalties, competent authorities may include the “regular” prison system, juvenile correction centers, forensic psychiatric centers, probation authorities, etc.

The scope of the LED is limited to the processing of personal data by the competent authorities for the specific purposes of the prevention, investigation, detection, or prosecution of criminal offenses or the execution of criminal penalties (Articles 1 and 2). This includes the safeguarding against and the prevention of threats to public security (Recital 11). As such, it should be noted that not all personal data processed by law enforcement agencies and the judiciary is within the scope of the LED. For example, when law enforcement agencies or the judiciary are processing personnel data regarding their staff, for paying wages or assessing employee performance, the GDPR applies rather than the LED. The GDPR is also applicable to personal data processing regarding borders, migration, and asylum.

With regard to the protection of personal data, the LED includes, similar to the GDPR, a set of principles for the fair processing of information, such as lawful and fair processing, purpose limitation, accuracy of data, adequate security safeguards, and responsibility of the data controller in Article 4 of the LED. Transparency is pursued as far as possible, but there are obvious limitations to transparency in the interest of ongoing criminal investigations. This can lead to interference with the principle of equality of arms (Article 6 of the ECHR), as the defense may not be entitled to review some relevant data, and in practice, the defense may only get what the prosecutor decides to give. Essentially, the rights granted to data subjects can be difficult to invoke, at least in a meaningful way. National data protection authorities are competent to handle complaints regarding actors in the criminal justice system that do not comply with the LED provisions, and such cases can also be brought to courts. However, for data subjects, it can be hard to get access to data on themselves if they do not know which data actually exists. Unlike the GDPR regime of high fines, the LED leaves setting maximum fines to national legislation. No EU Member State has implemented significant fines for LED non-compliance, something that obviously does not contribute to strict enforcement.

Personal data should be collected for specified, explicit, and legitimate purposes within the LED’s scope, and should not be processed for purposes incompatible with the purposes of the prevention, investigation, detection, or prosecution of criminal offenses or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security. Some of these principles are problematic, particularly when data is transferred from a GDPR regime into the context of law enforcement.Footnote 45 Also, the protection provided under the GDPR may decrease, from a data subject’s perspective, when law enforcement agencies get access to data collected by private parties.Footnote 46 While the GDPR is not very specific about time limits for data storage and review,Footnote 47 the LED requires clear establishment of time limits for storage and review.Footnote 48 The LED states that Member States should provide for appropriate time limits to be established for the erasure of personal data or for a periodic review of the need for the storage of personal data. Article 5(1)(e) of the GDPR states that personal data should be kept no longer than necessary, but does not mention a number of days, months, or years. The Article 29 Working Party issued an opinion that argues that time limits should be differentiated.Footnote 49 Storage time limits vary across Member States and for different situations, including different types of data subjects and different crimes. For example, in Germany, data storage duration is limited depending on the types of persons: ten years for adults, five years for adolescents, and two years for children.Footnote 50 Data on whistleblowers and informants can only be stored for one year, but can be extended to three years. In the Netherlands, the storage of personal data by the police is limited to one year, which can be extended to five years if the data is necessary for the police tasks.Footnote 51 In the United Kingdom, section 39(2) of the Data Protection Act 2018Footnote 52 requires that appropriate time limits must be established for the periodic review of the need for the continued storage of personal data for any of the law enforcement purposes.Footnote 53

The LED offers explicit protection for special, i.e., sensitive, categories of data, such as data relating to race, ethnicity, political opinions, religion, trade union membership, sexual orientation, genetic data, biometric data, health data, and sex life data. The use of perpetrator profiles and risk profiles is explicitly allowed.

The LED also provides a list of data subject rights, such as the right to information, the right to access, the right to rectification, the right to erasure, and the right to restriction of the processing. Since these data subject rights can only be invoked if this does not interfere with ongoing investigations, these rights can be somewhat misleading. Some data subject rights mentioned in the GDPR, such as the right to data portability and the right to object to automated individual decision-making, are not included in the LED. The absence of the right to object to automated decision-making offers more leeway for law enforcement to use profiling practices, such as perpetrator profiling and risk profiling.

In the Netherlands, there already existed specific legislation for the processing of personal data in criminal law before the LED came into force. The Police Data Act (Wet politiegegevens) (“Wpg”)Footnote 54 regulates the use of personal data by police agencies, and the Justice and Prosecution Data Act (Wet justitiële en strafvorderlijke gegevens) (“Wjsg”)Footnote 55 regulates the use of personal data by the public prosecution services and the judiciary. Unlike other EU Member States, where sometimes entirely new legislation had to be drafted, the Netherlands merely had to adjust existing legislation when implementing Directive 2016/680.

Both the Wpg and the Wjsg already strongly resembled the LED in terms of structure, scope, and contents, which meant that only a few changes were required. Also, the rights of data subjects, international cooperation, and supervision by data protection authorities were already regulated. Elements that were missing included concepts like Privacy by Design, Privacy by Default, and Privacy Impact Assessments.Footnote 56 The Netherlands already introduced data breach notification laws in 2016, prior to the GDPR, but these laws did not apply to the police, prosecution services, and the judiciary – a change brought about by the LED.

Across the European Union, implementation of the LED in national legislation proceeded slowly. In February 2018, a few months before the implementation deadline of May 2018, only a few countries, such as Germany, Denmark, Ireland, and Austria, had implemented the directive. The Netherlands had implemented the directive with some delay: the revised Wpg and Wjsg came into force in January 2019, more than half a year after the May 2018 deadline. Other countries, such as Belgium, Finland, and Sweden, were later, but they implemented the directive by 2019. However, there was also a group of countries, including Spain, France, Latvia, Portugal, and Slovenia, that had not yet accomplished implementation by 2019. In January 2019, the European Commission sent reasoned opinions to Bulgaria, Cyprus, Greece, Latvia, the Netherlands, Slovenia, and Spain for failing to implement the LED, and urged the Czech Republic and Portugal to finalize the LED’s implementation.Footnote 57 In July 2019, the European Commission lodged an infringement action against Greece and Spain before the CJEU for failing to transpose the LED into national legislation.Footnote 58 Since then, Greece passed Law 4624/2019 of August 29, 2019, implementing the LED. Latvia and Portugal transposed the LED in August 2019, while Spain had not yet adopted such an act. Also as of August 2019, six out of the 16 federal states (Länder) of Germany had not yet passed laws transposing the LED, which led the European Commission to send a formal notice, the first step of infringement proceedings.Footnote 59 As of May 2020, Germany had not yet fully transposed the LED, and the European Commission has sent a reasoned opinion. The same action was taken against Slovenia, which also failed to transpose the LED.Footnote 60 On February 25, 2021, the CJEU sanctioned Spain with a €15 million fine and a daily penalty of €89,000 for its ongoing failure to transpose the LED into national legislation.Footnote 61 In April 2022, the European Union launched an infringement procedure against Germany after detecting a gap in the transposition of the LED in relation to activities of Germany’s federal police.Footnote 62

IV Evidence in Dutch Criminal Law
IV.A Basic Principles

As in many countries, the evidentiary system in criminal cases in Dutch criminal law is based on the principle of establishing the substantive truth. This goal is expressed in the Dutch CCP by the requirement that a judge may assume that the offense charged is proven only if the judge “is convinced.”Footnote 63 This means that a high degree of certainty must exist that the suspect has committed the offense. The judge must be convinced by the contents of legal evidence. The latter is the evidence that the Dutch CCP considers admissible in criminal proceedings. It includes the judge’s own perception, statements by the suspect, statements by a witness, statements by an expert, and written documents per Article 339 of the Dutch CCP. This list is so broad that it is hard to identify any evidence that the law does not consider admissible.Footnote 64 Digital data as evidence will usually be submitted in the form of written police statements that report the results of an investigation.Footnote 65

There are only a few rules in the Dutch CCP that govern the reliability of evidence. Relevant to any kind of evidence is the obligation for the judge to justify his or her rejection of a “plea against the use of unreliable evidence” in Article 359, paragraph 2 of the Dutch CCP, i.e., a defense objection to evidence. This means that if the judge decides not to exclude the contested evidence, he or she must give reasons why. The better the defense substantiates the plea of unreliability, the more extensive the explanation required from the court. Furthermore, there are the so-called minimum evidence rules in relation to statements. For example, the judge may not convictFootnote 66 on the basis of a statement by only one witness or by the suspect only. Because there is always a chance that the witness or the suspect will not tell the truth, the law requires a second piece of evidence for conviction. However, case law demonstrates that this requirement is very easily met.Footnote 67 A final and increasingly important example concerns the criteria for assessing expert evidence. These criteria, developed by the Supreme Court, hold that if the reliability of expert evidence is disputed, the judge should examine whether the expert has the required expertise and, if so, which method(s) the expert used, why the expert considers the method(s) reliable, and the extent to which the expert has the ability to apply that method in a professional manner.Footnote 68

Apart from reliability, the legitimacy of evidence may also be challenged in court. Article 359a of the Dutch CCP provides for attaching consequences to the unlawful gathering of evidence. Depending on the circumstances, the judge can decide to decrease the severity of the punishment, to exclude the evidence, or to declare the prosecution inadmissible.Footnote 69 In practice, however, unlawfully obtained evidence almost never affects the outcome of a case. Courts rarely attach consequences to it, and when they do, cases may not be affected, because the requirements the Supreme Court laid down in its case law regarding the scope of Article 359a of the Dutch CCP are rather restrictive.Footnote 70

IV.B Current Court Practices: Increasing Use of Digital Evidence

Traditionally, statements of witnesses and suspects are important evidence in criminal cases. The general feeling is, however, that things are changing. Criminal investigations into organized crime in particular do not rely on witnesses, and investigators increasingly build a case by combining location data from phones or automatic number plate recognition, user data of phones and computers, the internet, etc.Footnote 71 The Dutch police increasingly and successfully invest in “data-driven investigation,” and high-tech detectives have gained access to various encrypted communication providers used by organized crime, such as Ennetcom, EncroChat, and Sky ECC.Footnote 72 An international coalition of investigators even built its own communication app, “Anom,” which was readily used by unsuspecting criminals. The downside of these celebrated successes, however, is that there is no capacity to read the millions of intercepted messages.Footnote 73

Moreover, the absence of adequate rules discussed in Section II, and the legitimacy of digital investigation methods, are serious issues. But due to the restricted interpretation of Article 359a of the Dutch CCP (discussed above), the courts almost never attach a serious consequence to the fact that evidence was gathered illegally. Next, there is the problem of territorial jurisdiction.Footnote 74 The data in the Ennetcom-seizure, e.g., was owned by a Dutch company, but stored on a Canadian server. As a result of this, the Dutch police could not investigate the data without permission of the Canadian authorities. In order to comply with the Canadian judicial requirements for access to the data, the Dutch investigatory judge and the prosecutor interpreted the Dutch procedural rules very broadly. The defense objected, but in the end the trial judge authorized the course of action.Footnote 75

Next to issues of legitimacy, digital evidence raises questions of reliability as well as about defense rights. We illustrate this with the case of the “Webcam blackmailer,” in which the reliability of a keylogger and the right to equality of arms were both discussed.Footnote 76 In this case, the suspect was tried, among other things, for threatening and spreading sexual images of underage girls via the internet, as well as for extorting various males with information on them having “webcam sex.” The discussion regarding the keylogger,Footnote 77 elaborately described in the verdict, clearly demonstrates the effort non-expert litigants have to make to understand how these kinds of technical devices work. To a large extent, they need to rely on expert witnesses for determining reliability. Even more interesting in this case are the attempts of the defense to get access to all the data that was found and produced by the police, including the complete copies that were made of the computers, all the results of the keylogger, all the Skype conversations with the victims, WE-logs, VPN-logs, etc. The defense brought forward an alternative scenario, and argued that in order to properly assess the selection and interpretation of the incriminating evidence, it is necessary to have access to all the data. Indeed, this request seems reasonable from the perspective of the right to equality of arms. All information that can be relevant for the case must be seen and checked by the defense. However, under Dutch law, the prosecution determines what is relevant and made available. This rule has always been the object of discussion between defense attorneys and the prosecution, but the debate is given a new dimension in the context of big sets of technical data.Footnote 78 The police have their own software to search and select data, and they may not always be willing to provide insight into their investigative methods. Furthermore, the amount of data can be enormous, as in the Ennetcom, EncroChat, and Sky ECC examples above, and for that reason the effort to make it accessible for the defense will be too. There now seems to be a court policy developing in early cases in which decrypted data is used, allowing the defense to search the secondary dataset at the Netherlands Forensic Institute (NFI) with the search engine “Hansken.”Footnote 79 Hansken was developed by the NFI to investigate large amounts of seized data. In the Webcam blackmailer case, the Court of Appeal dismissed the request of the defense with the argument that it was a fishing expedition and that the defense had had plenty of opportunity to challenge the evidence. Nonetheless, this case illustrates that the Dutch CCP needs provisions to ensure insight into issues generated by automated data analysis, for the defense, but also for the judge.Footnote 80

IV.C Developments in Society and Technology: New Issues of Quality and Assessment of Evidence

As observed in the beginning of the chapter, people are increasingly leaving digital traces everywhere all the time. People are often monitored without being aware of it, by camera surveillance systems, by their own smartphones, and on other devices they use to access the internet. This generates data that can be useful for law enforcement to collect evidence and to find out what happened in specific cases. In the Netherlands, many surveillance systems are in place for law enforcement to rely on. These are mostly private systems from which data is requested if needed.

The data we are referring to here is digital data, usually large amounts of data, in different formats such as statistics, as well as audio, video, etc., that can only be accessed via technological devices. In the past, forensic experts also provided technical data, such as fingerprints or ballistics, to criminal investigations and provided clarifications when testifying in courts, but the current use of data as evidence is significantly different. In the past, forensic data was collected in a very specific, controlled, and targeted way, mostly at the crime scene. Currently, it is possible to collect very large amounts of data, not necessarily specifically targeted to one individual or connected to a specific crime scene. For some of these relatively new data collection methods, no protocols even exist yet. In this subsection, we discuss three issues regarding the quality of evidence that arise as a result of the characteristics of digital data.

The first issue concerns the reliability of data. Digital data can be volatile and can be manipulated, which means that the litigating parties and the judge need an instrument to assess whether the data is original and unaltered. This instrument can be found in procedures on how to seize digital data in a controlled and reproducible way. For example, when a copy of the hard disc of a computer is made, it is very important to have a fixed procedure or protocol, including timestamps, so that it is clear to all litigating parties that the data was not tampered with or accidentally altered. Even with such procedures and protocols in place, creating a copy of the data on a seized computer can be complicated. For example, Bitcoin and other cryptocurrencies cannot be copied, even though they are essentially data on a computer. Seizure of cryptocurrencies therefore requires specific protocols. Another technological issue is that of streaming data and data in the cloud. Such data can also be hard to record or securely copy, and if so, much depends on the timing. Forensic experts in the Netherlands and other countries are working on new methods and protocols for securing digital data. A detailed discussion is beyond the scope of this chapter.Footnote 81
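By way of illustration only, the integrity check at the heart of such protocols can be as simple as computing and logging cryptographic hashes of the original medium and of the working copy at a documented point in time: matching hashes show that the copy is bit-for-bit identical, and any later mismatch signals alteration. The sketch below uses hypothetical file names and a simplified log format; real forensic imaging tools implement this far more elaborately.

```python
import hashlib
import json
from datetime import datetime, timezone

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical file names, for illustration only.
ORIGINAL_IMAGE = "seized_drive.img"
WORKING_COPY = "working_copy.img"

record = {
    "timestamp_utc": datetime.now(timezone.utc).isoformat(),
    "original_sha256": sha256_of(ORIGINAL_IMAGE),
    "copy_sha256": sha256_of(WORKING_COPY),
}
# If the two digests match, the copy is bit-for-bit identical to the original;
# any later recomputation that differs indicates the data was altered.
record["copy_matches_original"] = record["original_sha256"] == record["copy_sha256"]
print(json.dumps(record, indent=2))
```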

The second issue concerns the large amounts of data that can arise during criminal investigations, in relation to the principle that the litigating parties must have access to all relevant data, incriminating and exonerating. For example, in the Netherlands, law enforcement relies heavily on wiretapping to find clues for further investigation in criminal cases. This yields large amounts of data that are hard for humans to process, as doing so would require listening to all of the audio files collected. Voice recognition technologies may help to process such data in automated ways. Camera surveillance, including license plate recognition systems, likewise yields large amounts of data that humans cannot realistically review image by image; analytics software may be useful to speed up such processes.

The large amounts of data routinely collected in criminal cases therefore call for automated search and analysis. When software tools are used to go through large amounts of data to find specific items or to disclose specific patterns, one problem is that humans may find it hard to follow how the software works, particularly when the tools are very advanced. If it is not transparent how particular conclusions were drawn from the data, this becomes an issue when those conclusions are used in court as evidence.Footnote 82 According to the principle of equality of arms, it should be possible to contest all evidence brought forward by any of the parties to the proceedings. However, search and analysis tools may be programmed to find incriminating evidence in datasets, and exonerating pieces of evidence in those same datasets may never be shown by the tools.Footnote 83
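The concern can be illustrated with a deliberately simplified, hypothetical sketch: if the search configuration encodes only incriminating terms, exonerating material in the same dataset never surfaces, and unless the defense can inspect or re-run the query it has no way of knowing what was left out. The messages and search terms below are invented for illustration; no actual investigative tool is described.

```python
# Hypothetical messages and search terms, for illustration only.
messages = [
    "transfer the package tonight",
    "I was at my mother's place all evening",   # potentially exonerating
    "delete the photos before they arrive",
]
incriminating_terms = {"package", "delete", "photos"}

def flag(msgs, terms):
    """Return only the messages that contain at least one of the given terms."""
    return [m for m in msgs if any(t in m.lower() for t in terms)]

print(flag(messages, incriminating_terms))
# The alibi message is silently excluded from what the investigator sees.
```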

A detailed legal framework may be lacking, but courts nevertheless seem increasingly reliant on experts and computer systems. A typical example is risk assessment models, usually based on algorithms, that produce risk scores for recidivism. In several US states, the system Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is used to assess recidivism risks.Footnote 84 In their decisions, courts place considerable weight on these models, or rather on the results they produce. In the Netherlands, the probation services use a system called RISC (Recidive inschattings schalen). Part of that system is the Oxford Risk of Recidivism Tool, an actuarial risk assessment tool that can be used to predict statistical risks.Footnote 85 These models play an increasing role in the work of probation services and the decisions of courts.

The use of such models offers several benefits, such as fairer assessments conducted in more structured and objective ways. Human assessors can be prone to failure or influenced by bias and prejudice. If the models are self-learning, they can also recognize and incorporate new trends and developments. Automation can, moreover, increase efficiency and reduce costs. However, these instruments have also been criticized: they do not seem to outperform assessments by human experts, and they carry risks similar to those of human assessments, such as bias that can lead to discrimination.Footnote 86 In the United States, COMPAS appeared to systematically assign higher recidivism risks to African Americans.Footnote 87 It is often argued that these models do not process any ethnicity data and therefore cannot discriminate.Footnote 88 However, characteristics like ethnicity can easily be predicted from other data and are therefore often reconstructed by self-learning technologies, without being visible to users.Footnote 89 Furthermore, it should be noted that although the false positive rate for African Americans is higher in COMPAS, race adds no predictive value to the score: suspects from different ethnic backgrounds with the same risk score have the same risk of reoffending.
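The apparent tension in the last two sentences can be made concrete with a small, entirely hypothetical calculation: a score can be equally well calibrated for two groups (the same share of people reoffend at a given score) and still yield very different false positive rates if the groups’ base rates differ. The numbers below are invented and are not drawn from the COMPAS data.

```python
# Hypothetical: in both groups, 60% of those scored 'high' and 20% of those
# scored 'low' reoffend (equal calibration), but group A has more members
# scored 'high' than group B.
def false_positive_rate(n_high, n_low, p_high=0.6, p_low=0.2):
    """FPR if everyone scored 'high' is treated as a predicted reoffender."""
    non_reoffenders = n_high * (1 - p_high) + n_low * (1 - p_low)
    false_positives = n_high * (1 - p_high)   # non-reoffenders scored 'high'
    return false_positives / non_reoffenders

print(round(false_positive_rate(n_high=700, n_low=300), 2))  # group A: ~0.54
print(round(false_positive_rate(n_high=300, n_low=700), 2))  # group B: ~0.18
```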

The third issue relates to the difficulty of estimating the strength of the evidence. All datasets contain inaccurate data or gaps to some extent. Incorrect or incomplete data is not always problematic from a data analytics perspective, but it may reduce the accuracy and reliability of analysis results and thus affect the conclusions that can be drawn from them.Footnote 90 When results are based on large amounts of data, minor errors and gaps will hardly affect them; in cases of limited data, however, errors might have a crucial impact on the evidence. For example, cell phone data can be used in a court case to prove that a suspect was at the crime scene at a particular time. If this conclusion is based on data from three cell phone masts, but one of them is unreliable, then the result may not be entirely accurate. The conclusion could be, e.g., that the probability that the suspect can be pinpointed to the location is 75 percent. This problem with accuracy in turn brings in all the difficulties that humans, including judges, have when dealing with probabilities and risks, including the so-called prosecutor’s fallacy and the defense attorney’s fallacy.Footnote 91
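The prosecutor’s fallacy can be illustrated with a short, purely hypothetical calculation: the probability of the evidence given innocence is not the probability of innocence given the evidence, and the difference turns on the prior, that is, how many other plausible candidates there are. The numbers below are invented.

```python
# Bayes' theorem: P(guilty | evidence). All numbers are hypothetical.
def p_guilty_given_evidence(p_e_given_guilty, p_e_given_innocent, prior_guilty):
    p_e = (p_e_given_guilty * prior_guilty
           + p_e_given_innocent * (1 - prior_guilty))
    return p_e_given_guilty * prior_guilty / p_e

# Suppose the location data would fit an innocent person's movements only 1%
# of the time and always fits the guilty person's, with 1,000 plausible candidates.
print(round(p_guilty_given_evidence(1.0, 0.01, 1 / 1000), 3))
# ~0.091, far from the 99% that a naive reading of "1% chance if innocent" suggests.
```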

Despite all these issues, the changing technological landscape provides many opportunities for the use of data as evidence in courts. When used properly, data could be more objective than statements from suspects, victims, and witnesses.Footnote 92 People easily forget specific details of a past situation, and their memories may even become distorted over time. Many psychological mechanisms might be at play. In very stressful situations, such as when people are the victim of, or witness to, a serious crime, they may experience time differently, often thinking that events took longer than they actually did, or they may invoke coping mechanisms that block particular information in their brains. Witnesses who are not directly involved in a crime may pay less attention to details, and the evidence they can provide in their statements may therefore be limited. Research has shown that memories also fade over time for all actors.Footnote 93

Objective digital data, e.g., from cell phones, may easily fill in the blanks in people’s memories and rectify any distortions that have occurred. Such data can readily confirm where people were at a particular moment and can disclose connections between people. The data can help prove that some statements are wrong or confirm that some statements are indeed correct. Data can also help to avoid tunnel vision and other biases that law enforcement officers conducting criminal investigations may have.

Altogether, the use of data as evidence in courts can be a valuable asset. It can be more accurate, detailed, unprejudiced, and objective than statements, but only if the pitfalls and issues mentioned above are properly avoided. Data can be manipulated, the tools for analysis can be biased and discriminatory, and the probabilities resulting from any analysis can be subject to interpretation fallacies.

Regarding categories of evidence, we generally see an increase in the use of data as evidence in courts, but not necessarily a decrease in the use of statements from suspects, victims, and witnesses. Such a decrease is not to be expected any time soon, as statements remain important for reasons beyond their evidentiary value, such as the procedural justice experienced by all parties in court. The use of data as evidence is thus a valuable addition to statements, not a replacement.

The European Union seems to expect that data as evidence will become increasingly important. A relevant development at the EU level that needs to be discussed here is the draft Regulation on e-evidence.Footnote 94 To make it easier and faster for law enforcement and judicial authorities to obtain the electronic evidence needed to investigate and eventually prosecute criminals and terrorists, the European Commission proposed new rules in April 2018 in the form of a Regulation and a Directive. Both proposals focus on swift and efficient cross-border access to e-evidence, in order to effectively fight terrorism and other serious and organized crime.Footnote 95 The proposed directive focuses on harmonized rules for appointing legal representatives for gathering evidence in criminal proceedings.Footnote 96 The proposed regulation focuses on European production and preservation orders for electronic evidence in criminal matters.Footnote 97 The production order will allow judicial authorities to obtain electronic evidence directly from service providers in other Member States. These legal instruments have not yet been adopted by the European Union, as privacy and data protection safeguards are still under scrutiny. However, it may be expected that, once adopted, this regulation will further increase the use of electronic evidence in court cases in the European Union over the next few years.

V Conclusion

In this chapter, we focused on the increasing discrepancy between legal frameworks and actual practices regarding the use of data as evidence in criminal courts. The two legal frameworks under consideration are criminal law and data protection law. Since the EU harmonization of criminal law is very limited, we used the example of the Netherlands to further examine the use of data as evidence in criminal courts. Even though the Netherlands is a front runner in the areas of privacy and data protection law, as well as digital forensics and cybercrime, large parts of its criminal law were developed before digital evidence existed. Data protection law, which is more recent, is highly harmonized throughout the European Union via the GDPR and the LED.

The two major legal frameworks of criminal law and data protection law are not fully integrated or adjusted to each other; there is a structural ambiguity here. When it comes to regulating data as evidence, these frameworks together need to cover three separate but intertwined activities: (1) the collection of data; (2) the processing and analysis of data, including storage, selection, and combination; and (3) the evaluation of data.Footnote 98 In the Netherlands, the Dutch CCP covers the collection and evaluation, while the processing is mainly the domain of the Wpg and Wjsg in accordance with the LED.

Based on our analysis of the existing legal frameworks, the actual use of data as evidence in criminal courts, and developments in society and technology, we have four major observations regarding the final aspect of our research question: what is needed next. The first observation, regarding regulation, is that the existing legal frameworks in the Netherlands barely obstruct, if at all, the collection of data for evidence. Hence, the legal frameworks essentially allow law enforcement agencies and public prosecutors to make use of the opportunities that data can offer as evidence in criminal courts. Although many digital investigation methods are not provided for in the Dutch CCP, and fundamental privacy issues are debated as a result, this seems to have few consequences for the legitimacy of data as evidence in specific cases. This is partly because, in the Netherlands, illegally gathered evidence rarely leads to serious consequences; the Supreme Court’s case law thus reflects the importance given to crime fighting. Another explanation is that the debate on how to define and protect the right to digital privacy within criminal procedure is still in its infancy.

Our second observation is that regulation of collection via the Dutch CCP and regulation of processing and analysis via the Wpg and Wjsg are not integrated. As with other written law, these legal frameworks use different language and definitions, have different structures, and lack any cross-reference to one another. The Dutch CCP is not specifically aimed at what can be done with data once collected, yet what can be done with data is also relevant for evaluating the extent of the privacy intrusion, and hence for the design of the investigative powers. An integrated approach is also necessary for other reasons. Under data protection law, data subjects can invoke a series of rights, such as the rights to information, transparency, and access. These rights can be largely illusory, as people may not know about them or how to invoke them and, if they do, they may be blocked where a criminal investigation is still ongoing.Footnote 99

Our third observation concerns the absence of regulation of automated data analysis during all stages in the criminal justice system, including the prevention, investigation, detection, or prosecution of criminal offenses, the use of data as evidence in criminal courts, and the execution of criminal penalties. Automated data analysis raises fundamental questions regarding the equality of arms, and because all parties should have access to all relevant data and be able to assess data selection, we would like to argue that introducing some additional provisions for regulating data analytics, subsequent to data collection, would be appropriate. We have not seen any similar provisions in the legislation of other EU Member States,Footnote 100 but we did encounter an example of such a provision in the Dutch Intelligence Agencies Act (Wet Inlichtingen- en Veiligheidsdiensten).Footnote 101 Article 60 of this Act states that the Dutch intelligence agencies are empowered to perform automated data analytics on their own datasets and open sources. The data can be compared and used for profiling and pattern recognition. Since no similar provision exists in criminal law, it is unclear whether law enforcement agencies are allowed to do the same. We are not arguing that they should or should not be allowed to do this, but we would like to argue that there should be more clarity regarding this issue.

The absence of regulation of data analysis raises issues regarding privacy and data protection of the data subjects whose data is being processed, but it can also raise issues regarding equality of arms during litigation in courts. Normally, suspects have access to all evidence brought forward in their case, including any data underlying the evidence. In practice, defendants may only get what prosecutors grant them, and they may not be aware of what is missing. Furthermore, if data analysis is based on large amounts of data, and that data includes the data of others,Footnote 102 a suspect may not be granted access to it; the GDPR prevents this in order to protect privacy and personal data. As a result, a suspect may not have full transparency regarding the data on which the analysis was based and may be unable to reproduce the analysis.Footnote 103 If the data analytics involve very sophisticated self-learning technology such as AI, the prosecutor may not even know how the data analysis took place.

Finally, as a fourth observation, the level of court expertise in dealing with digital data as evidence may also need further attention. Given the increasing importance of data as evidence in criminal courts, it is imperative that judges understand the basics of how data is collected and processed before it results in the evidence presented to them. In order to evaluate the reliability and strength of data used as evidence, they have to be keenly aware of the pitfalls and issues mentioned in the previous section. Judges should be able to question the different types of data brought forward as evidence, even if the data is not contested by any of the litigating parties. For this reason, further training in this area may be important, as well as procedural rules identifying the basis for judicial assessment of how data was seized.

In view of these observations, we conclude that, on the one hand, there are perhaps no major obstructions in the existing legal frameworks to the use of data as evidence in criminal courts, but that, on the other hand, much of this area is in practice still a work in progress. Further work is needed in order to find the right balance between the interests of law enforcement and the rights of subjects in criminal cases. That work includes research, but obviously also the development of case law, as the balancing of interests is at the heart of what courts do, most notably supreme courts, and particularly in search and seizure jurisprudence. Since criminal law and data protection law are more or less separate legal frameworks, they need to be further aligned, not necessarily by adjusting the legislation, but at least by further detailing the actual practices and policies of law enforcement agencies. The absence of any regulation of automated data analysis is a major concern and may have considerable consequences for data subjects and their rights in criminal cases. We suggest that, after further research, regulation be considered, whether via legislation or perhaps via policies. Finally, further training of actors in courts may be required to make all of this work.

When looking at the developments in society and technology, we expect that the use of data as evidence in courts will significantly increase in the coming decades. This means that the issues identified in this chapter, such as limited effectiveness of data subject rights provided in the LED and issues regarding the principle of equality of arms during litigation, may become more pressing in the near future. It is therefore important to further prepare both courts and law enforcement agencies for these challenges, as suggested above.

However, having said this, we do not expect that the use of other types of evidence in criminal courts, such as statements from suspects, victims, or witnesses, will fall out of use. We think it is important to consider the use of digital evidence in criminal courts as an addition to the use of statements and other types of evidence, not as a replacement. Humans seek to understand evidence by means of stories, which means that regardless of its digital nature, data will always need to fit into a story – the stories of suspects, victims, and witnesses.Footnote 104

11 Reconsidering Two US Constitutional Doctrines: Fourth Amendment Standing and the State Agency Requirement in a World of Robots

David Gray
I Introduction

A wide array of robot technologies now inhabit our life worlds – and their population grows every day. The extent, degree, and diversity of our interactions with these technologies serve daily notice that we now live in an unprecedented age of surveillance. Many of us carry with us personal tracking devices in the shape of cellular phones allowing service providers and “apps” to monitor our locations and movements. The GPS chips embedded in smart devices provide detailed location data to a host of third parties, including apps, social media companies, and public health agencies. Wearable devices monitor streams of biological data. The IoT is populated by a dizzying array of connected devices such as doorbells, smart speakers, household appliances, thermostats, and even hairbrushes, which have access to the most intimate, if often quotidian, details of our daily lives. And then there is the dense network of surveillance technologies such as networked cameras, license plate readers, and radio frequency identification (RFID) sensors deployed on terrestrial and airborne platforms, including autonomous drones, that document our comings and goings, engagements and activities, any time we venture into public. Increasingly, these systems are backed by AI technologies that monitor, analyze, and evaluate the streams of data produced as we move through physical and online worlds, many of which also have the capacity and authority to take action. What once was the stuff of dystopian fiction is now a lived reality.

Privacy scholars have quite reasonably raised concerns about threats to fundamental rights posed by robots. For example, Frank Pasquale has advanced a trenchant critique of black-box algorithms, which have displaced human agents in a variety of contexts.Footnote 1 On the other hand, we readily invite robots into our lives to advance personal and social goals. Some play seemingly minor roles, such as autonomous vacuums and refrigerators capable of tracking their contents, determining when supplies are low, and submitting online orders. Others less so, such as fitness monitors that summon emergency medical personnel when they determine their human partners are in crisis, or mental wellness apps that utilize biometric data to recommend, guide, and monitor therapy.

Because they entail constant and intimate contact, these human–robot interactions challenge our conceptions of self, privacy, and society, stretching the capacities of our legal regimes to preserve autonomy, intimacy, and democratic governance. Prominent among these challenges are efforts to understand the role of constitutions as guarantors of rights and constraints on the exercise of power. In the United States, this is evident in conversations about the Fourth Amendment and technology.

The Fourth Amendment provides that: “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.” Since the Supreme Court’s pivotal 1967 decision in Katz v. United States,Footnote 2 the Fourth Amendment has been cast as a guarantor of privacy, which suggests that it might have a role to play in normalizing, protecting, and regulating the relationships among us, our technologies, corporations, and government agencies. Specifically, we might imagine the Fourth Amendment protecting us from threats to privacy posed by robots or securing our relationships with robots against threats of interference or exploitation. Unfortunately, doctrinal rules developed by the US Supreme Court have dramatically reduced the capacity of the Fourth Amendment to serve either role. Some of these rules have earned considerable attention, including the public observation doctrineFootnote 3 and the third party doctrine.Footnote 4 Others have so far avoided close scrutiny.

This chapter examines two doctrinal rules in this more obscure category that are particularly salient to robot–human interactions. The first is that privacy is a personal good, limiting standing to bring Fourth Amendment challenges to those who have suffered violations of their personal expectations of privacy. The second is that the Fourth Amendment can only reach the actions of state agents. This chapter will show that neither is required by the text or history of the Fourth Amendment. To the contrary, the text, history, and philosophical lineage of the Fourth Amendment favor a broader understanding of privacy as a public good that “shall be” secure against threats of intrusive surveillance and arbitrary power by both government and private actors, whether human or robotic. This reading should lead us to alter our understanding of a variety of Fourth Amendment doctrines,Footnote 5 including rules governing standing and the state agency requirement, thereby enhancing the potential of the Fourth Amendment to play a salutary role in efforts to understand, regulate, and even protect human–robot interactions.

Before turning to that work, it is worth pausing for a moment to ask whether we would be better off abandoning the Fourth Amendment to these doctrinal rules and focusing instead on legislation or administrative regulation as a means to govern robot–human interactions. There are good reasons to doubt that we would be. Legislatures generally, and the US Congress in particular, have failed to take proactive, comprehensive action as new technologies emerge.Footnote 6 Instead, this is an area where Congress has tended to follow the courts’ lead. A good example is the Wiretap Act,Footnote 7 passed in 1968 right after the Court’s landmark decision in Katz. Most important, however, is that accepting the degradation of any constitutional right out of deference to the political branches turns constitutional democracy on its head. The whole point of constitutional rights is to guarantee basic protections regardless of legislative sanction or inaction. In any event, defending constitutional rights does not exclude legislative action. For all of these reasons, we should question the doctrinal rules that seem to limit the scope of constitutional rights rather than accepting them in the hope that legislatures or executive agencies will ride to the rescue.

II Fourth Amendment Standing

Established Fourth Amendment doctrine imagines that privacy is a personal good.Footnote 8 This received truth traces back to the Supreme Court’s 1967 opinion in Katz.Footnote 9 Confronted with the unregulated deployment and use of emerging surveillance technologies, including wiretaps and electronic eavesdropping devices, the Katz Court adopted a novel definition of “search” as a violation of a subjectively manifested “expectation of privacy … that society is prepared to recognize as ‘reasonable.’”Footnote 10 Applying this definition, the Katz Court held that eavesdropping on telephone conversations is a “search,” and therefore subject to Fourth Amendment regulation.

Once hailed as progressive, Katz’s revolutionary potential has been dramatically limited by its assumption that privacy is a personal good.Footnote 11 This point is manifested most clearly in the Court’s decisions governing Fourth Amendment “standing.” “Standing” is a constitutional rule limiting the jurisdiction of US courts. To avail themselves of a court’s jurisdiction, Article III, section 2, of the US Constitution requires litigants to show that they have suffered a legally cognizable injury caused by the opposing party, and that the court can provide relief. Fourth Amendment standing shares a conceptual kinship with Article III standing, but is neither jurisdictional nor compelled by the text. It is, instead, derivative of the assumption in Katz that privacy is a personal good. Thus, a litigant must establish that his “own Fourth Amendment rights [were] infringed by the search and seizure which he seeks to challenge.”Footnote 12

Fourth Amendment standing doctrine hamstrings efforts to challenge overreaching and even illegal conduct. Consider United States v. Payner.Footnote 13 There, Internal Revenue Service (IRS) agents suspected that taxpayers were using a Bahamian bank to hide income and avoid paying federal taxes. Unable to confirm those suspicions, agents decided to steal records from Michael Wolstencroft, a bank employee. To facilitate their plan, agents hired private investigators Norman Casper and Sybol Kennedy. Kennedy established a relationship with Wolstencroft and arranged to go to dinner with him.Footnote 14 During their dinner date, Wolstencroft left his briefcase in Kennedy’s apartment. Casper retrieved the briefcase and delivered it to IRS agent Richard Jaffe. Casper and Jaffe broke into the briefcase and copied its contents, including hundreds of pages of bank documents. They then replaced the documents, relocked the briefcase, and returned it to Kennedy’s apartment. This fantastic criminal conspiracy was carried out with the full knowledge and approval of supervisory agents at the IRS.

Among the stolen documents were records showing that Payner used Wolstencroft’s bank to hide income. Based on this evidence, Payner was charged with filing a false tax return. At trial, he objected to the introduction of the stolen documents on the grounds that they were the fruits of a conspiracy to violate the Fourth Amendment. District Judge John Manos granted Payner’s motion and condemned the government’s actions as “outrageous.”Footnote 15 In an effort to deter similar misconduct in the future, Judge Manos suppressed the stolen documents, concluding that “[i]t is imperative to signal all likeminded individuals that purposeful criminal acts on behalf of the Government will not be tolerated in this country and that such acts shall never be allowed to bear fruit.”Footnote 16 The government appealed to the Supreme Court.

Writing for the Court in Payner, Justice Lewis Powell acknowledged that the government intentionally engaged in illegal activity and did so on the assumption that it would not be held accountable. He also agreed that: “No court should condone the unconstitutional and possibly criminal behavior of those who planned and executed this ‘briefcase caper.’”Footnote 17 Nevertheless, Justice Powell held that Payner could not challenge the government’s illegal conduct because Payner’s personal “expectation of privacy” was not violated. The briefcase belonged to Wolstencroft. The documents belonged to the bank. Payner had no personal privacy interest in either. He therefore did not have “standing” to challenge the government’s illegal actions.

Payner shows how treating privacy as a personal good prevents many criminal litigants from challenging illegal searches and seizures. That may seem defensible in the context of a criminal case, where demonstrably guilty defendants seek to avoid responsibility by suppressing reliable evidence. But what about public-minded civil actions? Consistent with its English heritage, US law allows for civil actions seeking equitable relief in the form of declaratory judgments and injunctions. Given the different interests at stake in this context, and the decidedly public orientation of these actions, one might expect to see a more expansive approach to questions of standing when litigants bring Fourth Amendment suits designed to benefit “the people” by challenging the constitutionality of search and seizure practices and demanding reform. Unfortunately, doctrinal rules governing Fourth Amendment standing make it nearly impossible to pursue declaratory or injunctive relief in most circumstances.Footnote 18 The culprit, again, is the assumption that privacy is a personal good. Los Angeles v. LyonsFootnote 19 offers a vivid example.

Adolph Lyons sued the Los Angeles Police Department (LAPD) and the City of Los Angeles after officers put him in a chokehold during a traffic stop. The chokehold was applied with such intensity that Lyons lost consciousness and suffered damage to his larynx. Given that he was the person assaulted, Lyons clearly had standing to bring a civil action alleging violations of his Fourth Amendment rights. To his credit, however, Lyons was interested in more than personal compensation. He wanted to use his suit to compel the LAPD to modify its practices, policies, and training on the use of force. Those remedies would have benefited not just Lyons, but also the people of Los Angeles and the United States generally, enhancing the people’s collective security against unreasonable seizures. Unfortunately, the Court dismissed these equitable claims on the grounds that Lyons did not have standing.

In order to demonstrate standing to pursue injunctive relief, the Court held, Lyons would need to “establish a real and immediate threat that he would again be stopped for [a] traffic violation, or for any other offense, by an officer or officers who would illegally choke him into unconsciousness without any provocation or resistance on his part.”Footnote 20 That is a virtually insurmountable burden, but it is entirely consistent with the Court’s assumption that privacy and Fourth Amendment rights are personal goods. No matter how public-minded he might be, or how important the legal questions presented, Lyons could not pursue judicial review of LAPD chokehold practices because he could not establish that his personal Fourth Amendment rights were in certain and immediate danger. It did not matter that the LAPD was still engaging in a pattern and practice of using dangerous, unjustified chokeholds, jeopardizing the Fourth Amendment rights of the people of Los Angeles as a whole. Those practices, and the threats they posed, did not violate Lyons’ personal interests, so he was powerless to demand change.

Both the assumption that privacy is a personal good and the derivative rules governing Fourth Amendment standing have consequences for robot–human interactions. For example, they can be deployed to insulate from judicial review the kinds of electronic data-gathering that are essential for robotic actors and easily conducted by robots. A ready example is Clapper v. Amnesty International.Footnote 21 There, a group of attorneys, journalists, and activists challenged the constitutionality of a provision of the FISA Amendments Act, 50 USC §1881a, granting broad authority for government agents to surveil the communications of non-US persons located abroad. The plaintiffs argued that this authority imperiled their abilities to maintain the confidence of their sources and clients, compromising their important work. While admitting that plaintiffs’ concerns were theoretically valid, the Supreme Court held that they did not have standing to challenge the law precisely because their fears were theoretical. In order to establish standing, the plaintiffs needed to demonstrate that their communications had actually been intercepted or were certain to be intercepted pursuant to authority granted by §1881a. As a result, the authority granted by §1881a remained in place, condemning “the people,” individually and collectively, to a state of persistent insecurity in their electronic communications against both human and robot actors.

Courts have also wielded rules governing Fourth Amendment standing to limit the ability of service providers to protect their customers’ privacy interests. For example, in California Bankers Association v. Shultz, the Supreme Court concluded that banks do not have standing to raise Fourth Amendment claims on their customers’ behalf when contesting a subpoena for financial records.Footnote 22 In Ellwest Stereo Theaters, Inc. v. Wenner, the Ninth Circuit Court of Appeals held that an adult entertainment operator had “no standing to assert the fourth amendment rights of his customers.”Footnote 23 In a 2012 case where investigators subpoenaed Twitter for messages posted by political protestors along with location data, a New York trial court decided that Twitter did not have standing to object on Fourth Amendment grounds.Footnote 24 In 2017, a federal court in Seattle found that Microsoft Corporation did not have Fourth Amendment standing to challenge the gag order provisions of 18 USC §2705(b), which the government routinely invoked when compelling providers of data, internet, and communication services to disclose information relating to their customers.Footnote 25

We are likely to see more of these kinds of decisions in coming years as courts continue to wrestle with new and emerging technologies. Consider, as an example, Amazon’s Ring.

Ring is an internet-connected doorbell equipped with cameras, microphones, and speakers that allows owners to monitor activity around their front doors through a smartphone, tablet, or computer, whether they are inside their homes or in another time zone. Ring is capable of coordinating with other smart devices to provide users with access and control over many aspects of their home environments. There is also a Ring device for automobiles. Although Ring does not make independent intelligent choices or perform tasks based on environmental stimuli, it represents the kinds of technologies that inhabit the IoT, which includes a rapidly rising population of robots. Some of these robots gather stimuli directly through their onboard sensors. Others draw on sensorial inputs from other devices, such as Ring. Either way, devices like Ring represent a critical point of engagement for robots and humans as we grant intimate access to our lives and the world outside our front doors in order to obtain the convenience and benefits of robotic collaborators. As recent experiences with Ring show, that access is ripe for exploitation.

In August 2019, journalists revealed that Amazon had coordinated with hundreds of law enforcement agencies to allow them access to video images and other information gathered by Ring without seeking or securing a court order or the explicit permission of owners.Footnote 26 It is hard to imagine a starker threat to the security of the people guaranteed by the Fourth Amendment than a program granting law enforcement access to our home environments. But who has standing to challenge this program? Criminals prosecuted using evidence from a Ring doorbell do not. That is the lesson from Payner. Owners of Ring doorbells do not, unless they can show that their personal devices have been exploited. That is the lesson from Clapper. But even a Ring owner who can show that her device was exploited cannot challenge the program writ large or demand programmatic reform. That is the lesson from Lyons. As a result, the Fourth Amendment appears to be functionally powerless, both to protect these sites of human–robot interaction,Footnote 27 and to protect the public from robotic exploitation via these kinds of devices and the data they generate. As an example of technologies in this latter category, consider facial recognition, which is capable of conducting the kinds of independent analysis once the sole province of carbon-based agents.Footnote 28

Rules governing Fourth Amendment standing are not the only culprits in the apparent inability of the Fourth Amendment to regulate robot–human interactions. As the next section shows, the state agency requirement also limits the role of the Fourth Amendment in protecting and regulating many robot–human interactions.

III The State Agency Requirement

Conventional doctrine holds that the Fourth Amendment binds state agents, not private actors.Footnote 29 This state agency requirement limits the capacity of the Fourth Amendment to regulate and protect many human–robot interactions. Justice Samuel Alito recently explained why:

The Fourth Amendment restricts the conduct of the Federal Government and the States; it does not apply to private actors. But today, some of the greatest threats to individual privacy may come from powerful private companies that collect and sometimes misuse vast quantities of data about the lives of ordinary Americans.Footnote 30

Many, if not most, of the robot–human interactions that challenge our conceptions of privacy, fracture social norms, destabilize our institutions, and that will most likely play central roles in our lives, are produced, deployed, and controlled by private companies. Smart speakers and other IoT devices, wearable technologies, and the myriad software applications that animate phones, tablets, and televisions are all operated by private enterprises. Some of these corporations have more immediate effects on our lives than government entities, and pose greater threats to our privacy, autonomy, and democratic institutions than government entities, but they stand immune from constitutional constraint because they are not state agents. As a consequence, the Fourth Amendment appears unable “to protect [the public] from this looming threat to their privacy.”Footnote 31

Here again, Ring provides a good example. Ring is part of a larger ecosystem of connected devices designed, sold, and supported by Amazon. In addition to Ring, many folks have other Amazon products in their homes, including Alexa-enabled devices, which are equipped with microphones and voice recognition technologies. These devices allow users to play music, operate televisions, order goods and services, make phone calls, and even adjust the lighting using voice commands. This ecosystem is increasingly open to devices capable of making independent choices. These are all wonderful human–robot interactions, but they come with the cost of allowing Amazon and its affiliates access to our homes and lives. By virtue of the state agency requirement, that relationship stands outside of Fourth Amendment regulation. Amazon, directly or through its robot intermediaries, is at liberty to threaten the security of the people in their persons and homes without fear of constitutional constraint so long as they do not directly coordinate with government agencies.

Must it be this way? Or does the Fourth Amendment have more to say about robot–human interactions than is suggested by rules governing standing and the state agency requirement? As the next sections argue, the text and history of the Fourth Amendment suggest that it does.

IV Challenging Fourth Amendment Standing

The Fourth Amendment undeniably protects collective interests and recognizes that privacy is a public rather than an exclusively private good. That is evident in the text, which uses the phrase “the people” instead of “persons.”Footnote 32 This choice was deliberate.Footnote 33 Those who drafted the Fourth Amendment had competing models to choose from, as represented in various state constitutions, some of which employed “persons”Footnote 34 and others “the people.”Footnote 35 The drafters demonstrated awareness of these alternatives by guaranteeing Fifth Amendment protections to “persons”Footnote 36 and Sixth Amendment rights to “the accused.”Footnote 37 By choosing “the people,” the First Congress aligned the Fourth Amendment with political rights protected elsewhere in the Constitution,Footnote 38 such as the First Amendment right to assemble and petition the governmentFootnote 39 and the Article I right of the people to elect their representatives.Footnote 40 That makes sense in light of contemporaneous experiences with general warrants and writs of assistance, which showed how search and seizure powers could be weaponized to silence political speech. As we shall see, those cases contributed to founding-era concerns that general warrants and writs of assistance threatened the collective security of the people, not just those who were actually the subject of searches and seizures, because the very existence of broad, indiscriminate licenses to search and seize threatened the security of the people as a whole.Footnote 41

The text of the Fourth Amendment reflects founding-era understandings that security against arbitrary searches and seizures was an essential feature of democratic society. The founders understood how searches and seizures could be used to oppress thought and speech. But they also understood the idea, well-established since the time of the ancients, that security in our persons, houses, papers, and effects is essential to processes of ethical, moral, and intellectual development, which in turn are essential to the formation and sustenance of citizens capable of performing the duties of democratic government.Footnote 42 This is privacy as a public good. The Fourth Amendment guarantees that public good by securing space for liberty, autonomy, civil society, and democracy against threats of oppressive scrutiny.

The Supreme Court is not completely blind to the collective interests at stake in the Fourth Amendment. Consider, as an example, its exclusionary rule jurisprudence. Most Fourth Amendment claims arise in the context of criminal trials where the remedy sought is exclusion of illegally seized evidence.Footnote 43 The idea that illegally seized evidence should be excluded at trial is not derived from the text or history of the Fourth Amendment.Footnote 44 In fact, nineteenth-century jurists rejected the idea.Footnote 45 The exclusionary rule is, instead, a prudential doctrine justified solely by its capacity to prevent Fourth Amendment violationsFootnote 46 by deterring police officers from violating the Fourth Amendment in the future.Footnote 47 Although illegal evidence is excluded in the cases of particular defendants, there is no individual right to exclude evidence seized in violation of the Fourth Amendment.Footnote 48 To the contrary, the Court has made clear that admitting evidence seized in violation of the Fourth Amendment “works no new Fourth Amendment wrong.”Footnote 49 In making this prudential case for the exclusionary rule on general deterrence grounds, the Court recognizes that there is more at stake in a particular search or seizure than the personal privacy of a specific person.

The Court’s awareness of the collective interests at stake in Fourth Amendment cases is not limited to its exclusionary rule jurisprudence. For example, in Johnson v. United States, decided in 1948, the Court noted that “[t]he right of officers to thrust themselves into a home is also a grave concern, not only to the individual, but to a society which chooses to dwell in reasonable security and freedom from surveillance.”Footnote 50 Similarly, in United States v. Di Re, the Court concluded that “the forefathers, after consulting the lessons of history, designed our Constitution to place obstacles in the way of a too permeating police surveillance, which they seemed to think was a greater danger to a free people than the escape of some criminals from punishment.”Footnote 51 Of course, these sentiments were issued before Katz, which shifted the focus to individual interests.

Importantly, however, Katz did not close the door, and there is some evidence that the Supreme Court may be ready to rethink rules governing Fourth Amendment standing in light of new challenges posed by emerging technologies. The strongest evidence comes from the Court’s decision in Carpenter v. United States.Footnote 52 There, the Court was asked whether the Fourth Amendment regulates governmental access to cell site location information (CSLI). CSLI has been a boon to law enforcement. It can be used to track suspects’ past movements and to establish their proximity to crimes. That is precisely what investigators did in Carpenter. Based on information from a co-conspirator, they knew that Carpenter was involved in a string of armed robberies. In order to corroborate that information, they obtained several months of CSLI for Carpenter’s phone, establishing his proximity to several robberies. At trial, Carpenter objected to the admission of this evidence on Fourth Amendment grounds.

In light of the Court’s views on standing and the state agency requirement, there was good reason to think that the government would prevail. After all, it was Carpenter’s cell phone company that, of its own accord, tracked his phone and stored his location information. It certainly did not appear to be acting as a state agent. Moreover, the information was recorded in the company’s business records. If Payner did not have standing to challenge the search of banking records, then why would Carpenter have standing to challenge the search of cellular service records? Despite these challenges, the Supreme Court held that the “location information obtained from Carpenter’s wireless carriers was the product of a search.”Footnote 53 In doing so, the Court seemed to return to the pre-Katz era:

The “basic purpose of this Amendment,” our cases have recognized, “is to safeguard the privacy and security of individuals against arbitrary invasions by governmental officials.” The Founding generation crafted the Fourth Amendment as a “response to the reviled ‘general warrants’ and ‘writs of assistance’ of the colonial era, which allowed British officers to rummage through homes in an unrestrained search for evidence of criminal activity.” In fact, as John Adams recalled, the patriot James Otis’s 1761 speech condemning writs of assistance was “the first act of opposition to the arbitrary claims of Great Britain” and helped spark the Revolution itself …. [our] analysis is informed by historical understandings “of what was deemed an unreasonable search and seizure when [the Fourth Amendment] was adopted.” On this score our cases have recognized some basic guideposts. First, that the Amendment seeks to secure “the privacies of life” against “arbitrary power.” Second, and relatedly, that a central aim of the Framers was “to place obstacles in the way of a too permeating police surveillance.”Footnote 54

This reasoning marks a potential broadening of the Court’s approach to Fourth Amendment questions. Along the way, the Court seemed to recognize the important collective dimensions of the Fourth Amendment.Footnote 55

The majority opinion in Carpenter does not directly address the question of Fourth Amendment standing. Nevertheless, Justices Anthony Kennedy and Clarence Thomas make clear that something potentially revolutionary is afoot in their dissenting opinions. For his part, Justice Kennedy reminds us that the Court’s precedents “placed necessary limits on the ability of individuals to assert Fourth Amendment interests in property to which they lack a requisite connection.”Footnote 56 “Fourth Amendment rights, after all, are personal,” he continues, “[t]he Amendment protects ‘[t]he right of the people to be secure in their … persons, houses, papers, and effects’ – not the persons, houses, papers, and effects of others.” In the case of the business records at issue in Carpenter, Justice Kennedy concluded that they belonged to the cellular service provider “plain and simple.” Consequently, Carpenter, like Payner, “could not assert a reasonable expectation of privacy in the records.” Justice Thomas was even more pointed in his criticism, lambasting the majority for endorsing the idea that individuals can “have Fourth Amendment rights in someone else’s property.”Footnote 57

The Carpenter majority offers no direct response to these charges, but there are hints consistent with the arguments sounding in the collective rights reading of the Fourth Amendment advanced in this chapter. For example, the Court recognizes that allowing government agents unfettered access to CSLI implicates general, collective interests rather than the specific interests of an individual. As Chief Justice John Roberts, writing for the majority, points out, cellular phones are ubiquitous, to the point that there are more cellular service accounts with US carriers than there are people. Furthermore, most people “compulsively carry cell phones with them all the time … beyond public thoroughfares and into private residences, doctor’s offices, political headquarters, and other potentially revealing locales.”Footnote 58 From these facts, the majority concludes that granting unfettered governmental access to CSLI would facilitate programs of “near perfect surveillance, as if [the Government] had attached an ankle monitor to the phone’s user.”Footnote 59 “Only the few without cell phones could escape this tireless and absolute surveillance.”Footnote 60 This exhibits a keen awareness that the real party of interest in the case was “the people” as a whole. At stake was “the tracking of not only Carpenter’s location but also everyone else’s, not for a short period, but for years and years.”Footnote 61 Denying customers’ standing to challenge government access to those records would leave the people insecure against threats of broad and indiscriminate surveillance – exactly the kind of “permeating police surveillance” the Fourth Amendment was designed to prevent.Footnote 62

Recognizing the collective dimensions of the Fourth Amendment provides good grounds for reconsidering rules governing Fourth Amendment standing. As the founders saw it, any instance of unreasonable search and seizure in essence proposed a rule, and the Fourth Amendment prohibits endorsement of any rule that threatens the security of the people as a whole.Footnote 63 It follows that anyone competent to do so ought to be able to challenge a proposed rule and the practice or policy it recommends. To be sure, a citizen challenging search and seizure practices should be limited in terms of the remedy she can seek. Actions at law seeking compensation should be limited to individuals who have suffered a direct, compensable harm. On the other hand, anyone competent to do so should have standing to bring actions seeking equitable relief in the form of declaratory judgments condemning search and seizure practices or injunctions regulating future conduct. Neither should we require the kind of surety of future personal impact reflected in the Court’s decisions in Lyons and Clapper. The founding generation recognized that the very existence of licenses granting unfettered discretion to search and seize threaten the security of the people as a whole. Why, then, would we not permit a competent representative of “the people” to challenge a statute, policy, or practice that, by its very existence, leaves each of us and all of us to live in fear of unreasonable searches and seizures?

Expanding the scope of Fourth Amendment standing would enhance human–robot interactions by allowing competent persons and groups to challenge efforts to exploit those interactions. It would likewise enhance our security against robotic surveillants. It would allow activist groups like those that brought suit in Clapper to challenge legislation granting broad access to electronic communications and other data sources likely to play a role in robot–human interactions. It would allow technology companies to challenge government demands for the fruits and artifacts of our engagements with technologies. It would also license competent individuals and organizations to seek declaratory and injunctive relief when companies and government agencies exploit our relationships with robots and other technologies or seek to deploy robotic monitors. There is no doubt that this expanded access to the courts would enhance the security, integrity, and value of our interactions with the wide range of technologies that inhabit our daily lives, both directly and indirectly, by increasing pressure on the political branches to act.

V Reconsidering the State Agency Requirement

Contemporary doctrine holds that the Fourth Amendment applies only to state agents, and primarily the police. A closer look at the text and history of the Fourth Amendment suggests that it is not, and was not conceived to be, so narrow in scope.

To start, there is the simple fact that the police as we know them today did not exist in eighteenth-century America. That was not for lack of models or imagination. By the late eighteenth century, uniformed, paramilitary law enforcement agencies with general authority to investigate crimes were familiar in continental Europe. But England had rejected efforts to adopt that model, at least in part because members of the nobility feared privacy intrusions by civil servants. When Sir Robert Peel was able to pass the Metropolitan Police Act in 1829, establishing the Metropolitan Police Force, the “Peelers” (later “Bobbies”) were limited to maintaining the peace and did not have authority to investigate crimes. America was a decade behind England, with police forces making their first appearances in Boston (1838) and New York (1845). It was not until the late nineteenth century that professionalized, paramilitary police forces with full authority to investigate crime became a familiar feature of American society. By then, the Fourth Amendment was a venerable centenarian.

By dint of this historical fact, we know that the Fourth Amendment was not drafted or adopted with police officers as its sole or even primary antagonists. The text reflects this, making no mention of government agents of any stripe. Who then, was its target? The historical record suggests that it was overstepping civil functionaries, including constables, administrative officials, tax collectors, and their agents, as well as private persons. This is evidenced by the complicated role of warrants in eighteenth-century common law.

Contemporary Fourth Amendment wisdom holds that the warrant requirement plays a critical prospective remedial role, guarding the security of citizens against threats of unreasonable search and seizure by interposing detached and neutral magistrates between citizens and law enforcement.Footnote 64 Among others, Laura Donohue has made a persuasive case that the “unreasonable searches” targeted by the Fourth Amendment were searches conducted in the absence of a warrant conforming to the probable cause, particularity, oath, and return requirements described in the warrant clause.Footnote 65 But, as Akhil Amar has pointed out, the eighteenth-century history of warrants is somewhat more complicated.Footnote 66 Some of those complications highlight the role of private persons in conducting searches and seizures.

In a world before professional, paramilitary police forces, private individuals bore significant law enforcement responsibilities. In his Commentaries, Blackstone recognized the right of private persons to effect arrests on their own initiative or in response to a hue and cry.Footnote 67 Searches and seizures in support of criminal investigations often were initiated by civilians, who might go to a justice of the peace to swear out a complaint against a suspected thief or assailant.Footnote 68 So, too, a plaintiff in a civil action could swear out a warrant to detain a potential defendant.Footnote 69 A justice of the peace would, in turn, exercise his authority through functionaries, such as constables, who, as William Stuntz has noted, were “more like private citizens than like a modern-day police officer,”Footnote 70 or even through civilian complainants themselves, by issuing a warrant authorizing those persons to conduct a search or seizure.Footnote 71 These private actors could conduct searches purely on their own authority as well, but in doing so would risk exposing themselves to claims in trespass.Footnote 72 Warrants provided immunity against these actions.

Searches and seizures performed by minor functionaries and civilians raised significant concerns in eighteenth-century England because they threatened established social hierarchies by licensing civil servants to invade the privacy of the nobility. Those same worries underwrote resistance to professional police forces and founding-era critiques of general warrants and writs of assistance.Footnote 73 Unlike the particularized warrants issued by judicial officers based on probable cause imagined in the warrant clause, general warrants and writs of assistance provided broad, unfettered authority for bearers to search wherever they wanted, for whatever reason, with complete immunity from civil liability. These instruments were reviled by our eighteenth-century forebears because they invited arbitrary abuses of power.Footnote 74 But those threats did not come exclusively from agents of the state or only in the context of criminal actions. To the contrary, one of the most pernicious qualities of general warrants and writs of assistance was that they allowed for the delegation of search and seizure authority to minor functionaries and private persons. This is evident in the signal eighteenth-century cases challenging general warrants and writs of assistance.

The philosophical lineage of the Fourth Amendment traces to three eighteenth-century cases involving general warrants and writs of assistance that “were not only well known to the men who wrote and ratified the Bill of Rights, but famous through the colonial population.”Footnote 75 The first two, Wilkes v. WoodFootnote 76 and Entick v. Carrington,Footnote 77 dealt with efforts to persecute English pamphleteers responsible for writing and printing publications critical of King George III and his policies. In support of cynical efforts to silence these critics, one of the king’s secretaries of state, Lord Halifax, issued general warrants licensing his “messengers” to search homes and businesses and to seize private papers. After their premises were searched and their papers seized, Wilkes and Entick sued Halifax and his agents in trespass, winning large jury awards. The defendants claimed immunity, citing the general warrants issued by Halifax. In several sweeping decisions written in soaring prose, Chief Justice Pratt – later Lord Camden – rejected those efforts, holding that general warrants were contrary to the common law.Footnote 78

The third case providing historical grounding for the Fourth Amendment is Paxton’s Case.Footnote 79 This was one among a group of suits brought by colonial merchants challenging the use of writs of assistance to enforce British customs laws in the American colonies. The colonists were ably represented by former Advocate General of the Admiralty James Otis, who left his post in protest when asked to defend writs of assistance. In an hours-long oration, Otis condemned writs of assistance as “the worst instrument of arbitrary power, the most destructive of English liberty and the fundamental principles of law that ever was found in an English law book.”Footnote 80 He ultimately lost the case; but colonial fury over the abuse of search and seizure powers played a critical role in fomenting the American Revolution.Footnote 81

Outrage over general warrants and writs of assistance was evident during the American constitutional movement.Footnote 82 Courts condemned them;Footnote 83 state constitutions banned them;Footnote 84 and states cited the absence of a federal prohibition on general warrants as grounds for reservation during the ratification debates.Footnote 85 To quiet these concerns, proponents of the Constitution agreed that the First Congress would draft and pass an amendment guaranteeing security from threats posed by unfettered search and seizure powers. The Fourth Amendment fulfills that promise.

All of this goes to show that we can look to founding-era experiences with, and objections to, general warrants and writs of assistance to inform our understandings of the Fourth Amendment. That record shows that the Fourth Amendment should not be read as applying exclusively to government officials. In their critiques of general warrants and writs of assistance, founding-era courts and commentators often highlighted the fact that they provided for the delegation of search and seizure powers to civilian functionaries. For example, the court in Wilkes argued that: “If such a power [to issue general warrants] is truly invested in a secretary of state, and he can delegate this power, it certainly may affect the person and property of every man in this kingdom, and is totally subversive of the liberty of the subject.”Footnote 86 James Otis railed that “by this writ [of assistance], not only deputies, etc., but even their menial servants, are allowed to lord it over us.”Footnote 87 “It is a power,” he continued, “that places the liberty of every man in the hands of every petty officer.” “What is this,” he lamented, “but to have the curse of Canaan with a witness on us; to be the servant of servants, the most despicable of God’s creation?” The extent of that servitude, he explained, was virtually without limit, so that “Customhouse officers [and] [t]heir menial servants may enter, may break locks, bars, and everything in their way; and whether they break through malice or revenge, no man, no court, can inquire.” Because a writ of assistance “is directed to every subject in the king’s dominions,” he concluded: “Everyone with this writ, may be a tyrant.”

To be sure, many of the antagonists in these cases were state agents, if only of minor rank, or were acting at the direction of state agents. But the existence of general warrants and writs of assistance allowed both private citizens and government officials to threaten home and hearth. Otis explained why in his oration, quoting language common to writs of assistance that allowed “any person or persons authorized,”Footnote 88 including “all officers and Subjects,” to conduct searches and seizures.Footnote 89 That inclusion of “persons” and “Subjects” reflected the fact that writs of assistance and general warrants were issued not just in cases of customs and tax enforcement, but also to assist private litigants in civil actionsFootnote 90 or even to vindicate private animosities. Otis explained the consequences: “What a scene does this open! Every man prompted by revenge, ill humor, or wantonness, to inspect the inside of his neighbor’s house, may get a writ of assistance. Others will ask it from self-defense; one arbitrary exertion will provoke another, until society be involved in tumult and blood.”Footnote 91

Anticipating a charge of dramatization, Otis offered this anecdote:Footnote 92

This wanton exercise of this power is not a chimerical suggestion of a heated brain. I will mention some facts. [Mr. Ware] had one of these writs … Mr. Justice Walley had called this same Mr. Ware before him, by a constable, to answer for a breach of the Sabbath-day Acts, or that of profane swearing. As soon as he had finished, Mr. Ware asked him if he had done. He replied, “Yes.” “Well then,” said Mr. Ware, “I will show you a little of my power. I command you to permit me to search your house for uncustomed goods” – and went on to search the house from the garret to the cellar; and then served the constable in the same manner!

So, at the heart of this speech marking the birth of the American Revolution, we see Otis decrying general warrants and writs of assistance because they protected private lawlessness. That is hard to square with the contemporary state agency requirement.

The facts in Wilkes and Entick provide additional evidence of the potential for general warrants and writs of assistance to vindicate private interests and facilitate abuses of power. The searches in these cases aimed to discover evidence of libel against the king. In fact, the court in Entick characterized the effort as “the first instance of an attempt to prove a modern practice of a private office to make and execute warrants to enter a man’s house, search for and take away all his books and paper in the first instance ….”Footnote 93 The Entick Court went on to suggest that allowing for the issuance of general warrants in search of libels would pose a threat to the security of everyone in their homes because simple possession of potentially libelous publications was so common.Footnote 94

So, neither the text nor the history of the Fourth Amendment appears to support a state agency requirement, at least not in its current form. That is evidenced by the fact that a strict state agency requirement appears to exclude from Fourth Amendment regulation some of the searches and seizures cited as bêtes noires in the general warrants and writs of assistance cases. Certainly, nothing in the text suggests that state agents are the only ones capable of threatening the security of the people against unreasonable searches and seizures. Moreover, eighteenth-century criticisms of search and seizure powers indicate that the founding generation was concerned about arbitrary searches performed by a range of actors. Given that history, there is good reason to conclude that the Fourth Amendment governs the conduct of private entities to the extent they pose a threat to collective interests, including privacy as a public good. Fortunately, the Court appears to be developing some new sympathies that line up with these ancient truths.

In addition to sparking a potential revolution in the rules governing Fourth Amendment standing, the Carpenter Court also appears to have introduced some complications to the state agency requirement. To start, the Court is never clear about when, exactly, the search occurred in Carpenter and who did it. At one point, the Court states that the “Government’s acquisition of the cell-site records was a search within the meaning of the Fourth Amendment.”Footnote 95 That would be in keeping with the state agency requirement. Elsewhere, the Court holds that the “location information obtained from Carpenter’s wireless carriers was the product of a search,”Footnote 96 suggesting that his cellular service provider performed the search when it gathered, aggregated, and stored the CSLI. That is intriguing in the present context.

The Carpenter Court does not explain its suggestion that cellular service providers engage in a “search” for purposes of the Fourth Amendment when they create CSLI records. This is an omission that Justice Alito, writing in dissent, finds worrisome, pointing out that: “The Fourth Amendment … does not apply to private actors.” Again, the Court offers no direct response, which may leave us to wonder whether its suggestion that gathering CSLI is a search was a slip of the pen. There are good reasons for thinking this is not the case. Foremost, Carpenter is a landmark decision, and Chief Justice Roberts has a well-deserved reputation for care and precision in his writing. Then there is the fact that what cellular service providers do when gathering CSLI can quite naturally be described as a “search.” After all, they are looking for and trying to find an “effect” (the phone) and, by extension, a “person” (the user).Footnote 97 By contrast, it is hard to describe the simple act of acquiring records as a “search,” although looking through or otherwise analyzing them certainly is. And then there is the fact that the acquisition was done by the familiar process of subpoena. As Justice Samuel Alito points out at length in his dissenting opinion, treating acquisition of documents by subpoena as a “search” would bring a whole host of familiar discovery processes within the compass of the Fourth Amendment.Footnote 98 By contrast, treating the aggregation of CSLI as the search would leave that doctrine untouched. For all these reasons, the best, most coherent, and least disruptive option available to the Court may have been holding that the cellular service provider conducted the search at issue in Carpenter.

Expanding the scope of Fourth Amendment regulation to searches conducted by private actors would provide invaluable protection for human–robot interactions and against robot surveillants. As Justice Alito points out in his Carpenter dissent, many of the most significant contemporary threats to privacy come from technology companies and other parties that have access to us and our lives through our robot collaborators, or that deploy and use robots as part of their businesses. This gives them extraordinary power. We have certainly seen the potential these companies hold to manipulate, influence, and disrupt civil society and democratic institutions – just consider the autonomous decisions made by social media algorithms when curating content. In many ways, these companies and their robots are more powerful than states and exercise greater degrees of control. There can be no doubt that holding them to the basic standards of reasonableness commanded by the Fourth Amendment would substantially enhance individual and collective security, both in our engagements with robots and against searches performed by robots.

VI Conclusion

This chapter has shown that the Fourth Amendment’s capacity to fulfill its promise is limited by two established doctrines, individual standing and the state agency requirement. Together, these rules limit the ability of the Fourth Amendment to normalize, protect, and regulate human–robot interactions. Fortunately, the text and history of the Fourth Amendment provide grounds for a broader reading that recognizes collective interests, guarding privacy as a public good against threats posed by both state and private agents. More fortunately still, the modern Supreme Court has suggested that it may be willing to reconsider its views on standing and state action as it struggles to contend with new challenges posed by robot–human interactions. As they move forward, the Justices would be well-advised to look backward, drawing insight and wisdom from the text and animating history of the Fourth Amendment.

Footnotes

5 Introduction to Human–Robot Interaction and Procedural Issues in Criminal Justice

* I wish to thank Red Preston for the careful language editing and valuable advice.

1 For a definition of AI, see the EU AI Act, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence and Amending Certain Union Legislative Acts Brussels, 21.4.2021 COM(2021) 206 final 2021/0106 (COD), Art. 3(1) [Artificial Intelligence Act], “software that is developed with one or more of [certain] approaches and techniques … and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”

2 Kate Darling, “‘Who’s Johnny?’: Anthropomorphic Framing in Human–Robot Interaction, Integration, and Policy” in Patrick Lin, Keith Abney, & Ryan Jenkins (eds.), Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence (New York, NY: Oxford University Press, 2017) 173.

3 See Nhien-An Le-Khac, Daniel Jacobs, John Nijhoff et al., “Smart Vehicle Forensics: Challenges and Case Study” (2020) 109 Future Generation Computer Systems 500 [“Smart Vehicle”].

4 Sabine Gless, Xuan Di, & Emily Silverman, “Ca(r)veat Emptor: Crowdsourcing Data to Challenge the Testimony of In-Car Technology” (2022) 62:3 Jurimetrics 285 [“Ca(r)veat Emptor”] at 286.

5 Athina Sachoulidou, “Going Beyond the ‘Common Suspects’: To Be Presumed Innocent in the Era of Algorithms, Big Data and Artificial Intelligence” (2023) Artificial Intelligence and Law [“Going Beyond”] at section 2.1.

6 Aaron Calafato, Christian Colombo, & Gordon J. Pace, “A Controlled Natural Language for Tax Fraud Detection,” paper delivered at the International Workshop on Controlled Natural Language (2016).

7 Jason Moore, Ibrahim Baggili, & Frank Breitinger, “Find Me If You Can: Mobile GPS Mapping Applications Forensic Analysis & SNAVP the Open Source, Modular, Extensible Parser” (2017) 12:1 Journal of Digital Forensics, Security and Law 15 at 25.

8 Karolina Kremens & Wojciech Jasinski, “Editorial of Dossier ‘Admissibility of Evidence in Criminal Process. Between the Establishment of the Truth, Human Rights and the Efficiency of Proceedings’” (2021) 7:1 Revista Brasileira de Direito Processual Penal 15 at 31.

9 Mireille Hildebrandt, Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology (Cheltenham, UK: Edward Elgar, 2015) at 159–185; Mathew Zaia, “Forecasting Crime? Algorithmic Prediction and the Doctrine of Police Entrapment” (2020) 18:2 Canadian Journal of Law and Technology 255 at 262; “Going Beyond”, note 5 above, at section 2.1.

10 For details, see Andrew G. Ferguson, “Policing Predictive Policing” (2016) 94:5 Washington University Law Review 1109; for possible remedies, see Sabine Gless, “Predictive Policing – In Defense of ‘True Positives’” in Emre Bayamlıoğlu, Irina Baraliuc, Liisa Albertha Wilhelmina Janssens et al. (eds.), Being Profiled: Cogitas Ergo Sum: 10 Years of Profiling the European Citizen (Amsterdam, Netherlands: Amsterdam University Press, 2018) 76.

11 Matthew Browning & Bruce A. Arrigo, “Stop and Risk: Policing, Data, and the Digital Age of Discrimination” (2021) 46:1 American Journal of Criminal Justice 298 at 310; Oskar J. Gstrein, Anno Bunnik, & Andrej Zwitter, “Ethical, Legal and Social Challenges of Predictive Policing” (2019) 3:3 Católica Law Review, Direito Penal 77 at 86–88.

12 For a discussion on issues of using such material, see Alex Biedermann & Joëlle Vuille, “Digital Evidence, ‘Absence’ of Data and Ambiguous Patterns of Reasoning” (2016) 16 Digital Investigation S86.

13 Andreas Winkelmann, “‘Einzelraser’ nach §315 d Abs. 1 Nr. 3 StGB und der Nachweis durch digitale Fahrzeugdaten” (‘Single Speeders’ According to §315 d para. 1 no. 3 StGB and Proof by Digital Vehicle Data) (2023) 19:1 Deutsches Autorecht (German Car Law) 2 at 4–6.

14 Naturalistic driving data have been used to predict mild cognitive impairment and (oncoming) dementia in longitudinal research on aging drivers: applying machine learning techniques to monthly driving data captured by in-vehicle recording devices, the researchers found that atypical changes in driving behavior can be early signals of cognitive impairment; see Xuan Di, Rongye Shi, Carolyn DiGuiseppi et al., “Using Naturalistic Driving Data to Predict Mild Cognitive Impairment and Dementia: Preliminary Findings from the Longitudinal Research on Aging Drivers (LongROAD) Study” (2021) 6:2 Geriatrics 45.

15 Steven P. Lund & Hariharan Iyer, “Likelihood Ratio as Weight of Forensic Evidence: A Closer Look” (2017) 122:27 Journal of Research of the National Institute of Standards and Technology 1 [“Likelihood Ratio”] at 1; Filipo Sharevski, “Rules of Professional Responsibility in Digital Forensics: A Comparative Analysis” (2015) 10:2 Journal of Digital Forensics, Security and Law 39 [“Digital Forensics”] at 39; Charles E.H. Berger & Klaas Slooten, “The LR Does Not Exist” (2016) 56:5 Science and Justice 388 [“The LR”]; Alex Biedermann & Joëlle Vuille, “Understanding the Logic of Forensic Identification Decisions (Without Numbers)” (2018) Sui Generis 397.

16 Erin Murphy, “The New Forensics: Criminal Justice, False Certainty, and the Second Generation of Scientific Evidence” (2007) 95:3 California Law Review 721 [“New Forensics”] at 723–724.

17 Frederic I. Lederer, “Technology-Augmented and Virtual Courts and Courtrooms” in Michael McGuire & Thomas Holt (eds.), The Routledge Handbook of Technology, Crime and Justice (London, UK: Routledge, 2017) 518 at 525–526.

18 Meredith Rossner, David Tait, & Martha McCurdy, “Justice Reimagined: Challenges and Opportunities with Implementing Virtual Courts” (2021) 33:1 Current Issues in Criminal Justice 94 at 94, 97; Deniz Ariturk, William E. Crozier, & Brandon L. Garrett, “Virtual Criminal Courts” (2020) 2020 University of Chicago Law Review Online 57 at 67–68.

19 Paul W. Grimm, Maura R. Grossman, & Gordon V. Cormack, “Artificial Intelligence as Evidence” (2021) 19:1 Northwestern Journal of Technology and Intellectual Property 9 at 51–52.

20 See “Smart Vehicle”, note 3 above, at 501.

21 “Ca(r)veat Emptor”, note 4 above, at 290; Sabine Gless, “AI in the Courtroom: A Comparative Analysis of Machine Evidence in Criminal Trials” (2020) 51:2 Georgetown Journal of International Law 195 [“AI in the Courtroom”] at 213.

22 Lene Wacher Lentz & Nina Sunde, “The Use of Historical Call Data Records as Evidence in the Criminal Justice System – Lessons Learned from the Danish Telecom Scandal” (2021) 18 Digital Evidence and Electronic Signature Law Review 1 at 1–4.

23 Paul W. Grimm, Daniel J. Capra, & Gregory P. Joseph, “Authenticating Digital Evidence” (2017) 69:1 Baylor Law Review 1.

24 Council of Europe, “iPROCEEDS-2: Launching of the Electronic Evidence Guide v.3.0,” www.coe.int/en/web/cybercrime/-/iproceeds-2-launching-of-the-electronic-evidence-guide-v-3-0#.

25 One can bring a computer hard drive or a mobile phone to court, but the information stored is not accessible to the judges in the same way as printed information. Thus, jurisdictions must find a way to access email or mobile phone files or GPS data, and build expertise with computer forensics.

26 For a similar discussion regarding DNA evidence, see: “Likelihood Ratio”, note 15 above, at 1; “Digital Forensics”, note 15 above, at 39; Nils Ommen, Markus Blut, Christof Backhaus et al., “Toward a Better Understanding of Stakeholder Participation in the Service Innovation Process: More than One Path to Success” (2016) 69:7 Journal of Business Research 2409 at 2409; “The LR”, note 15 above, at 388.

27 “AI in the Courtroom”, note 21 above, at 232–250; “New Forensics”, note 16 above, at 723–724.

28 Cynthia Rudin, “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead” (2019) 1:5 Nature Machine Intelligence 206 at 206.

29 Eli Siems, Katherine J. Strandburg, & Nicholas Vincent, “Trade Secrecy and Innovation in Forensic Technology” (2022) 73:3 UC Hastings Law Journal 773 at 794–799.

30 Mireille Hildebrandt & Bert-Jaap Koops, “The Challenges of Ambient Law and Legal Protection in the Profiling Era” (2010) 73:3 Modern Law Review 428 at 437–438.

31 Brandon L. Garrett, “Big Data and Due Process” (2014) 99 Cornell Law Review Online 207 at 211–212.

32 Lucia M. Sommerer, “The Presumption of Innocence’s Janus Head in Data-Driven Government” in Emre Bayamlıoğlu, Irina Baraliuc, Liisa Albertha Wilhelmina Janssens et al. (eds.), Being Profiled: Cogitas Ergo Sum: 10 Years of Profiling the European Citizen (Amsterdam, Netherlands: Amsterdam University Press, 2018) [“Janus”] at 58–61; “Going Beyond”, note 5 above.

33 Daniel L. Chen, “Machine Learning and the Rule of Law” (2019) 1 Revista Forumul Judecatorilor (Judiciary Forum Review) 19.

34 John Morison & Adam Harkins, “Re-engineering Justice? Robot Judges, Computerised Courts and (Semi) Automated Legal Decision Making” (2019) 39:4 Legal Studies 618 [“Re-engineering Justice”].

35 Ran Wang, “Legal Technology in Contemporary USA and China” (2020) 39 Computer Law & Security Review Article 105459, 11–14.

36 Jasper Ulenaers, “The Impact of Artificial Intelligence on the Right to a Fair Trial: Towards a Robot Judge?” (2020) 11:2 Asian Journal of Law and Economics Article 20200008.

37 For consumer disputes, see Feliksas Petrauskas & Eglė Kybartienė, “Online Dispute Resolution in Consumer Disputes” (2011) 18:3 Jurisprudencija 921 at 930; for family law, see Mavis Maclean & Bregje Dijksterhuis (eds.), Digital Family Justice: From Alternative Dispute Resolution to Online Dispute Resolution? (London, UK: Bloomsbury Publishing, 2019); in general, see “Re-engineering Justice”, note 34 above, at 620–624.

38 Regarding China, see Ray W. Campbell, “Artificial Intelligence in the Courtroom: The Delivery of Justice in the Age of Machine Learning” (2020) 18:2 Colorado Technology Law Journal 323.

39 “Re-engineering Justice”, note 34 above, at 625.

40 Loomis v. Wisconsin, 881 N.W.2d 749 (Wis. 2016), cert. denied, 137 S.Ct. 2290 (2017).

41 Nigel Stobbs, Daniel Hunter, & Mirko Bagaric, “Can Sentencing Be Enhanced by the Use of Artificial Intelligence?” (2017) 41:5 Criminal Law Journal 261 at 261–277.

42 Yadong Cui, Artificial Intelligence and Judicial Modernization (Shanghai, China: Shanghai People’s Publishing House and Springer, 2020).

43 Tamara Deichsel, Digitalisierung der Streitbeilegung (Digitization of Dispute Resolution) (Baden-Baden, Germany: Nomos, 2022).

44 William Ortman, “Confrontation in the Age of Plea Bargaining” (2021) 121:2 Columbia Law Review 451 at 451.

45 “Janus”, note 32 above, at 58–61.

46 “Re-engineering Justice”, note 34 above, at 632.

48 Benjamin M. Chen, Alexander Stremitzer, & Kevin Tobia, “Having Your Day in Robot Court” (2022) 36:1 Harvard Journal of Law & Technology 128 at 160–164.

50 DoNotPay, https://donotpay.com/ [DoNotPay].

51 See also Maura R. Grossman, Paul W. Grimm, Daniel G. Brown et al., “The GPTJudge: Justice in a Generative AI World” (2023) 23:1 Duke Law & Technology Review 1 at 21.

52 Success rate of DoNotPay, note 50 above.

53 For a news coverage, see Bobby Allyn, “A Robot was Scheduled to Argue in Court, Then Came the Jail Threats,” NPR (January 25, 2023), www.npr.org/2023/01/25/1151435033/a-robot-was-scheduled-to-argue-in-court-then-came-the-jail-threats.

54 “Ca(r)veat Emptor”, note 4 above, at 294–295.

55 Gabriel Hallevy, “The Criminal Liability of Artificial Intelligence Entities – From Science Fiction to Legal Social Control” (2010) 4:2 Akron Intellectual Property Journal 171 at 179; Eric Hilgendorf, “Können Roboter schuldhaft handeln?” (Can Robots Act Culpably?) in Susanne Beck (ed.), Jenseits von Mensch und Maschine (Beyond Man and Machine) (Baden-Baden, Germany: Nomos, 2012) at 119; Susanne Beck, “Intelligent Agents and Criminal Law – Negligence, Diffusion of Liability and Electronic Personhood” (2016) 86:4 Robotics and Autonomous Systems 138 [“Intelligent Agents”] at 141–142; Sabine Gless, Emily Silverman, & Thomas Weigend, “If Robots Cause Harm, Who Is to Blame? Self-Driving Cars and Criminal Liability” (2016) 19:3 New Criminal Law Review 412 at 412–424; Monika Simmler & Nora Markwalder, “Guilty Robots? Rethinking the Nature of Culpability and Legal Personhood in an Age of Artificial Intelligence” (2019) 30:1 Criminal Law Forum 1 [“Guilty Robots”] at 4; Ying Hu, “Robot Criminals” (2019) 52:2 University of Michigan Journal of Law Reform 487 at 497–498; Ryan Abbott & Alex Sarch, “Punishing Artificial Intelligence: Legal Fiction or Science Fiction” (2019) 53:1 University of California, Davis Law Review 323 at 351.

56 European Union, The European Parliament, Resolution of 16 February 2017 with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), OJ 2015 C 252 (EU: Official Journal of the European Union, 2017) at para. 59.

57 “Guilty Robots”, note 55 above, at 9; “Intelligent Agents”, note 55 above, at 141 f.; Antonio Ianni & Michael W. Monterossi, “Artificial Autonomous Agents and the Question of Electronic Personhood: A Path between Subjectivity and Liability” (2017) 26:4 Griffith Law Review 563 at 570; see also Gunther Teubner, “Digital Personhood? The Status of Autonomous Software Agents in Private Law” (2018) Ancilla Juris 106 at 113.

58 Lawrence B. Solum, “Legal Personhood for Artificial Intelligences” (1992) 70:4 North Carolina Law Review 1231 [“Legal Personhood”] at 1231.

59 For a discussion of his arguments, see Bert-Jaap Koops, Mireille Hildebrandt, & David-Oliver Jaquet-Chiffelle, “Bridging the Accountability Gap: Rights for New Entities in the Information Society?” (2010) 11:2 Minnesota Journal of Law, Science & Technology 497 at 518–532.

60 “Legal Personhood”, note 58 above, at 1239.

61 Gunther Teubner, “Rights of Non-Humans? Electronic Agents and Animals as New Actors in Politics and Law” (2006) 33:4 Journal of Law and Society 497 at 502.

62 Vanessa Franssen & Alyson Berrendorf, “The Use of AI Tools in Criminal Courts: Justice Done and Seen to Be Done?” (2021) 92:1 Revue Internationale de Droit Pénal 199 at 206.

63 Katherine Freeman, “Algorithmic Injustice: How the Wisconsin Supreme Court Failed to Protect Due Process Rights in State v. Loomis” (2016) 18:5 North Carolina Journal of Law & Technology 75.

64 Arthur Rizer & Caleb Watney, “Artificial Intelligence Can Make Our Jail System More Efficient, Equitable, and Just” (2018) 23:1 Texas Review of Law & Politics 181; Han-Wei Liu, Ching-Fu Lin, & Yu-Jie Chen, “Beyond State v Loomis: Artificial Intelligence, Government Algorithmization and Accountability” (2019) 27:2 International Journal of Law and Information Technology 122 at 133–141; Hans Steege, “Algorithmenbasierte Diskriminierung durch Einsatz von Künstlicher Intelligenz” (Algorithm-Based Discrimination through the Use of Artificial Intelligence) (2019) 11 Multimedia und Recht 715. For a European view on such systems, see Serena Quattrocolo, Artificial Intelligence, Computational Modelling and Criminal Proceedings: A Framework for A European Legal Discussion, Legal Studies in International, European and Comparative Criminal Law, vol. 4 (Cham, Switzerland: Springer Nature, 2020); for a Canadian point of view, see Sara M. Smyth, “Can We Trust Artificial Intelligence in Criminal Law Enforcement?” (2019) 17:1 Canadian Journal of Law and Technology 99; for a comparison, see Simon Chesterman, “Through a Glass, Darkly: Artificial Intelligence and the Problem of Opacity” (2021) 69:2 American Journal of Comparative Law 271 at 287–294.

65 Robert Werth, “Risk and Punishment: The Recent History and Uncertain Future of Actuarial, Algorithmic, and ‘Evidence-Based’ Penal Techniques” (2019) 13:2 Sociology Compass 1 at 8–10.

66 John Lightbourne, “Damned Lies & Criminal Sentencing Using Evidence-Based Tools” (2016) 15:1 Duke Law and Technology Review 327 at 334–342.

67 Marie-Claire Aarts, “The Rise of Synthetic Judges: If We Dehumanize the Judiciary, Whose Hand Will Hold the Gavel?” (2021) 60:3 Washburn Law Journal 511.

68 European Union, The European Parliament, Official Journal of the European Union L 119 of 4 May 2016, OJ 2015 L 119 (EU: Official Journal of the European Union, 2016) [L 119] at 1.

69 Artificial Intelligence Act, note 1 above.

70 Cf. Isadora Neroni Rezende, “Facial Recognition in Police Hands: Assessing the ‘Clearview Case’ from a European Perspective” (2020) 11:3 New Journal of European Criminal Law 375 at 389; for civil society challenges against Clearview AI in Europe, see “Challenge against Clearview AI in Europe,” Privacy International, https://privacyinternational.org/legal-action/challenge-against-clearview-ai-europe.

71 See e.g., Shanni Davidowitz, “23andEveryone: Privacy Concerns with Law Enforcement’s Use of Genealogy Databases to Implicate Relatives in Criminal Investigations” (2019) 85:1 Brooklyn Law Review 185.

72 Kristine Hamann & Rachel Smith, Facial Recognition Technology: Where Will It Take Us? (Prosecutors’ Center for Excellence, 2019), Art. 3, at 11–13; Jonathon W. Penney, “Understanding Chilling Effects” (2022) 106:3 Minnesota Law Review 1451.

73 United Nations, Report of the United Nations High Commissioner for Human Rights, Impact of New Technologies on the Promotion and Protection of Human Rights in the Context of Assemblies, Including Peaceful Protests, UN Doc. A/HRC/44/24 (United Nations: Office of the High Commissioner for Human Rights, 2020).

75 Council of Europe, Guidelines on Facial Recognition, adopted by the Consultative Committee of the Convention for the protection of individuals with regard to automatic processing of personal data (Council of Europe: Consultative Committee of the Convention for the protection of individuals with regard to automatic processing of personal data 2021), https://edoc.coe.int/en/artificial-intelligence/9753-guidelines-on-facial-recognition.html.

76 Proposal for a Regulation laying down harmonized rules on Artificial Intelligence (AI Act) COM/2021/206 final.

77 Michael Veale & Frederik Zuiderveen Borgesius, “Demystifying the Draft EU Artificial Intelligence Act: Analysing the Good, the Bad, and the Unclear Elements of the Proposed Approach” (2021) 22:4 Computer Law Review International 97–112 at 98.

78 Mirko Bagaric, Jennifer Svilar, Melissa Bull et al., “The Solution to the Pervasive Bias and Discrimination in the Criminal Justice System: Transparent and Fair Artificial Intelligence” (2021) 59:1 American Criminal Law Review 95 at 116 and 124; Mike Nellis, “From Electronic Monitoring to Artificial Intelligence: Technopopulism and the Future of Probation Services” in Lol Burke, Nicola Carr, Emma Cluley et al. (eds.), Reimagining Probation Practice, 1st ed. (London, UK: Routledge, 2022) 207.

79 “Ca(r)veat Emptor”, note 4 above, at 300–301.

80 See, for a detailed discussion, Kate Weisburd, “Sentenced to Surveillance: Fourth Amendment Limits on Electronic Monitoring” (2019) 98:4 North Carolina Law Review 717 at 753–757.

81 See e.g. relevant provisions in the Artificial Intelligence Act, note 1 above; European Union, European Commission, Proposal for a directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI liability directive), COM/2022/496 final (Brussels: European Commission, 2022).

82 Heike Felzmann, Eduard Fosch-Villaronga, Christoph Lutz et al., “Towards Transparency by Design for Artificial Intelligence” (2020) 26:6 Science and Engineering Ethics 3333 [“Towards Transparency”] at 3335–3336.

83 Paul De Hert & Guillermo Lazcoz, “When GDPR-Principles Blind Each Other: Accountability, Not Transparency, at the Heart of Algorithmic Governance” (2022) 8:1 European Data Protection Law Review 31.

84 See e.g., L 119, note 68 above, at 1.

85 For a discussion on the protection offered by US Constitutional law regarding a rapidly developing technology, see Katherine J. Strandburg, “Home, Home on the Web and Other Fourth Amendment Implications of Technosocial Change” (2011) 70:3 Maryland Law Review 614.

86 “Towards Transparency”, note 82 above, at 3343–3344.

87 Sabine Gless & Emily Silverman, “Create Law or Facts? Smart Cars and Smart Compliance Systems,” Oxford Business Law Blog (March 17, 2023), https://blogs.law.ox.ac.uk/oblb/blog-post/2023/03/create-law-or-facts-smart-cars-and-smart-compliance-systems.

88 See Michael L. Rich, “Should We Make Crime Impossible?” (2013) 36:2 Harvard Journal of Law & Public Policy 795 at 802–804 for a definition of terms, and “Smart Vehicle”, note 3 above, at 500, for a reference to Professor Edward K. Cheng as the originator of the term “impossibility structures.” For other attempts to define the term, see Edward K. Cheng, “Structural Laws and the Puzzle of Regulating Behavior” (2006) 100:2 Northwestern University Law Review 655 at 664 (“type II structural controls”); Christina M. Mulligan, “Perfect Enforcement of Law: When to Limit and When to Use Technology” (2008) 14:4 Richmond Journal of Law & Technology 1 at 3 (“perfect prevention”); Timo Rademacher, “Of New Technologies and Old Laws: Do We Need a Right to Violate the Law?” (2020) 5:1 European Journal for Security Research 39 at 45.

89 See Chapter 6 in this volume.

90 See also Madeleine Clare Elish & Tim Hwang, “Praise the Machine! Punish the Human! The Contradictory History of Accountability in Automated Aviation,” Data and Society, Comparative Studies in Intelligent Systems – Working Paper 1 (2015) at 2–3.

91 See Chapter 15 in this volume.

92 In this volume, Frode Pederson’s Chapter 13 discusses how even narrative reflects a human orientation, which creates issues when dealing with robots.

93 Cf. Madeleine Elish, “Moral Crumple Zones: Cautionary Tales in Human–Robot Interaction (Pre-Print)” (2019) 5 Engaging Science, Technology, and Society 40.

94 Laurel Wamsley, “Uber Not Criminally Liable in Death of Woman Hit by Self-Driving Car, Prosecutor Says,” NPR (March 6, 2019), www.npr.org/2019/03/06/700801945/uber-not-criminally-liable-in-death-of-woman-hit-by-self-driving-car-says-prosec (in the death of Elaine Herzberg, unsolved evidentiary issues presumably hampered prosecution: “After a very thorough review of all the evidence presented, this Office has determined that there is no basis for criminal liability for the Uber corporation arising from this matter …”).

6 Human Psychology and Robot Evidence in the Courtroom, Alternative Dispute Resolution, and Agency Proceedings

* Sara Sun Beale, Charles L. B. Lowndes Professor of Law, Duke Law School; Hayley N. Lawrence, JD, LLM, Duke Law School, 2021.

2 Patrick Lin, Keith Abney, & George Bekey, “Robot Ethics: Mapping the Issues for a Mechanized World” (2011) 175:5–6 Artificial Intelligence 942 at 943.

5 Kate Darling, “‘Who’s Johnny?’: Anthropomorphic Framing in Human–Robot Interaction, Integration, and Policy” in Patrick Lin, Keith Abney, & Ryan Jenkins (eds.), Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence (New York, NY: Oxford University Press, 2017) 173 [“Who’s Johnny”] at 173; see Chapter 13 in this volume.

7 Aaron Powers & Sara Kiesler, “The Advisor Robot: Tracing People’s Mental Model from a Robot’s Physical Attributes” (paper delivered at the International Conference on Human–Robot Interaction, March 2–3, 2006), HRI ’06: Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human–Robot Interaction (New York, NY: Association for Computing Machinery, 2006) 218, www.cs.cmu.edu/~kiesler/publications/2006pdfs/2006_advisor-robot.pdf [“Advisor Robot”].

8 Susan Fussell, Sara Kiesler, Leslie D. Setlock et al., “How People Anthropomorphize Robots” (paper delivered at the International Human–Robot Interaction Conference, March 12–15, 2008), HRI ’08: Proceedings of the 3rd ACM/IEEE International Conference on Human–Robot Interaction (New York, NY: Association for Computing Machinery, 2008) 145 at 149, www.cs.cmu.edu/~./kiesler/publications/2008pdfs/2008_anthropomorphize-bots.pdf.

9 “Advisor Robot”, note 7 above, at 2.

10 “Who’s Johnny”, note 5 above, at 180.

11 Adam Waytz, Joy Heafner, & Nicholas Epley, “The Mind in the Machine: Anthropomorphism Increases Trust in an Autonomous Vehicle” (2014) 52 Journal of Experimental Social Psychology 113 at 115.

14 “Who’s Johnny”, note 5 above, at 181.

15 Kate Darling, Palash Nandy, & Cynthia Breazeal, “Empathic Concern and the Effect of Stories in Human–Robot Interaction” (paper delivered at the IEEE International Workshop on Robot and Human Communication (RO-MAN), August 31–September 1, 2015), 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (Kobe, Japan: IEEE, 2015) 770 at 3, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2639689.

16 Footnote Ibid. at 11–12.

17 Neil Richards & William Smart, “How Should the Law Think about Robots?” in Ryan Calo, A. Michael Froomkin, & Ian Kerr (eds.), Robot Law (Cheltenham, UK: Edward Elgar, 2016) [Robot Law] 3 at 18.

18 Instead, neural networks are comprised of a series of complex decision trees that are programmed to react according to environmental stimuli. Larry Hardesty, “Explained: Neural Networks,” MIT News (April 14, 2017), http://news.mit.edu/2017/explained-neural-networks-deep-learning-0414.

19 Matthias Scheutz, “The Inherent Dangers of Unidirectional Emotional Bonds between Humans and Social Robots” in Patrick Lin, Keith Abney, & George Bekey (eds.), Robot Ethics: The Ethical and Social Implications of Robotics (London, UK: MIT Press, 2012) 205 at 211–214.

20 Footnote Ibid. at 205–221.

21 Serena Marchesi, Davide De Tommaso, Jairo Perez-Osorio et al., “Belief in Sharing the Same Phenomenological Experience Increases the Likelihood of Adopting the Intentional Stance Toward a Humanoid Robot” (2022) 3:3 Technology, Mind, and Behavior 1 (finding subjects with exposure to a human-like robot were more likely to rate the robot’s actions as intentional).

22 “Who’s Johnny”, note 5 above, at 175–176.

23 Brian Jeffrey Fogg, “Computers as Persuasive Social Actors” in Brian Jeffrey Fogg, Persuasive Technology: Using Computers to Change What We Think and Do (San Francisco, CA: Morgan Kaufmann Publishers, 2003) 89 [“Persuasive Social Actors”] at 100.

25 Phil McAleer, Alexander Todorov, & Pascal Belin, “How Do You Say ‘Hello’? Personality Impressions from Brief Novel Voices” (2014) 9:3 PLoS ONE 1; see also “Advisor Robot”, note 7 above, at 1.

26 “Persuasive Social Actors”, note 23 above, at 101.

27 See generally, Andrea Morales, Maura Scott, & Eric Yorkston, “The Role of Accent Standardness in Message Preference and Recall” (2012) 41:1 Journal of Advertising 33 [“Accent Standardness”] at 34 (studying people’s accent preferences, noting, e.g., that “[s]ociolinguistic research shows that speakers with standard English accents are seen as having high social status and as being competent, smart, educated, and formal”).

28 Jamy Li, “The Benefit of Being Physically Present: A Survey of Experimental Works Comparing Copresent Robots, Telepresent Robots, and Virtual Agents” (2015) 77 International Journal of Human-Computer Studies 23 [“The Benefit”] at 33.

29 “Accent Standardness”, note 27 above, at 34.

30 Twenty-four out of twenty-nine studies surveyed confirmed this point: see “The Benefit”, note 28 above, at 33.

31 Chad Boutin, “Snap Judgments Decide a Face’s Character, Psychologist Finds,” Princeton University (August 22, 2006), www.princeton.edu/news/2006/08/22/snap-judgments-decide-faces-character-psychologist-finds.

32 See “Advisor Robot”, note 7 above, at 6.

33 Julia Fink, “Anthropomorphism and Human Likeness in the Design of Robots and Human–Robot Interaction” (paper delivered at the 4th International Conference, ICSR 2012, October 29–31, 2012) in Shuzhi Sam Ge, Oussama Khatib, John-John Cabibihan et al. (eds.), Social Robotics (Berlin, Germany: Springer, 2012) 199 at 203 (noting that “most non-verbal cues are mediated through the face”).

34 People notice the same features they would notice unconsciously about a human face when they view a robot’s face. Carl DiSalvo, Francine Gemperle, Jodi Forlizzi et al., “All Robots Are Not Created Equal: The Design and Perception of Humanoid Robot Heads” (paper delivered at the Conference on Designing Interactive Systems, June 25–28, 2002), DIS ’02: Proceedings of the 4th Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques (New York, NY: Association for Computing Machinery, 2002) 321 at 322, citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.93.7443&rep=rep1&type=pdf [“Not Created Equal”].

35 “Advisor Robot”, note 7 above, at 6.

36 “Persuasive Social Actors”, note 23 above, at 92–93.

37 “Advisor Robot”, note 7 above, at 6.

38 For a study examining the correlation between a co-present robot’s emotional nonverbal response and a human’s anthropomorphic response, see Friederike Eyssel, Frank Hegel, Gernot Horstmann et al., “Anthropomorphic Inferences from Emotional Nonverbal Cues: A Case Study” (paper delivered at the 19th International Conference, September 13–15, 2010), 19th International Symposium in Robot and Human Interactive Communication (Viareggio, Italy: IEEE, 2010) 646 at 646.

39 “Not Created Equal”, note 34 above, at 353–354 and 356.

41 See Debora Zanatto, Massimiliano Patacchiola, Jeremy Goslin et al., “Priming Anthropomorphism: Can the Credibility of Humanlike Robots Be Transferred to Non-Humanlike Robots?” (paper delivered at the 2016 11th ACM/IEEE Conference on HRI, March 7–10, 2016), 2016 11th ACM/IEEE International Conference on Human–Robot Interaction (Christchurch: IEEE, 2016) 543 at 543–544 (finding that people perceived an anthropomorphic robot as more credible than its non-anthropomorphic counterpart when it used social gaze, as measured by willingness to change their response to a question based on information provided by the robot).

42 “Who’s Johnny”, note 5 above, at 174, 175–176.

43 Elise Owens, Ferguson W. H. McPharlin, Nathan Brooks et al., “The Effects of Empathy, Emotional Intelligence and Psychopathy on Interpersonal Interactions” (2018) 25:1 Psychiatry, Psychology and Law 1 at 1–2.

45 Barbara Gonsior, Stefan Sosnowski, Christoph Mayer et al., “Improving Aspects of Empathy and Subjective Performance for HRI through Mirroring Facial Expressions” (paper delivered at IEEE RO-MAN Conference, July 31–August 3, 2011), 2011 RO-MAN (Atlanta, GA: IEEE, 2011) 350 at 351, www.researchgate.net/publication/224256284_Improving_aspects_of_empathy_and_subjective_performance_for_HRI_through_mirroring_facial_expressions.

46 Edmond Awad, Sydney Levine, Max Kleiman-Weiner et al., “Drivers Are Blamed More than Their Automated Cars When Both Make Mistakes” (2020) 4:2 Nature Human Behaviour 134 [“Drivers Are Blamed”].

47 Footnote Ibid. at 138.

48 Jennifer Logg, Julia Minson, & Don Moore, “Algorithm Appreciation: People Prefer Algorithmic to Human Judgment” (2019) 151 Organizational Behavior and Human Decision Processes 90 [“Algorithm Appreciation”].

50 Berkeley Dietvorst, Joseph Simmons, & Cade Massey, “Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err” (2015) 144:1 Journal of Experimental Psychology: General 114.

51 Cf. “Adversarial Collaboration: An EDGE Lecture by Daniel Kahneman,” EDGE (February 24, 2022), www.edge.org/adversarial-collaboration-daniel-kahneman (noting difficulty of replicating results of priming experiments).

52 See Robot Law, note 17 above, at 19.

53 See notes 46–47 above and accompanying text.

54 Peter Kahn Jr., Takayuki Kanda, Hiroshi Ishiguro et al., “Do People Hold a Humanoid Robot Morally Accountable for the Harm It Causes?” (paper delivered at the 7th ACM/IEEE International Conference, March 5–8, 2012), HRI ’12: Proceedings of the 7th Annual ACM/IEEE International Conference on Human–Robot Interaction (New York, NY: Association for Computing Machinery, 2012) 33.

55 “Drivers Are Blamed”, note 46 above, at 139–140 (discussing the incidents with Tesla and Uber automated cars).

56 See “Algorithm Appreciation”, note 48 above, at 151.

57 See generally, Andrea Roth, “Machine Testimony” (2017) 126:1 Yale Law Journal 1972; see Chapters 7 and 9 in this volume.

58 Regarding programmer liability, see Chapter 2 in this volume.

59 See e.g. Nicol Turner Lee, Paul Resnick, & Genie Barton, “Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms,” Brookings Institution (May 22, 2019), www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/.

61 An unsupervised algorithm “tries to make sense by extracting features and patterns on its own.”

62 See Chapter 8 in this volume.

64 “Elizabeth Loftus: How Can Our Memories Be Manipulated?” NPR (October 13, 2017), www.npr.org/transcripts/557424726 [“Manipulated”].

65 Richard Schmechel, T. P. O’Toole, C. Easterly et al., “Beyond the Ken? Testing Jurors’ Understanding of Eyewitness Reliability Evidence” (2006) 46:2 Jurimetrics 177 [“Beyond the Ken”] at 195.

67 Elizabeth Loftus & Hunter Hoffman, “Misinformation and Memory: The Creation of New Memories” (1989) 118:1 Journal of Experimental Psychology: General 100 at 100 (noting that “postevent information can impair memory of an original event”).

68 A witness who is exposed to leading questions by investigators, recollections by other witnesses, or news reports that differ from her own memory may begin to remember the event differently in a way that aligns more closely with the narratives heard from others. According to expert Elizabeth Loftus, “[i]t’s not that hard to get people to believe and remember things that didn’t happen.” “Manipulated”, note 64 above.

69 Elizabeth Loftus, “How Reliable is Your Memory?” (presentation delivered at TEDGlobal 2013: Think Again, June 11, 2013), www.ted.com/talks/elizabeth_loftus_how_reliable_is_your_memory.

70 “Beyond the Ken”, note 65 above, at 195.

71 This causes jurors to “dramatically overestimate the accuracy of eyewitness identifications.” Kevin Jon Heller, “The Cognitive Psychology of Circumstantial Evidence” (2006) 105:2 Michigan Law Review 241 [“Cognitive Psychology”] at 285; see also “Beyond the Ken”, note 65 above, at 199 (31 percent of potential jurors stated a witness who was “absolutely certain” was “much more reliable” than the witness who was not, and approximately 40 percent of potential jurors agreed with the statement “an eyewitness’ level of confidence in his or her identification is an excellent indicator of that eyewitness’ reliability”). When evaluating the testimony of a confident witness and an unconfident witness, jurors identified the confident eyewitness as more reliable. Elizabeth Tenney, Robert J. MacCoun, Barbara A. Spellman et al., “Calibration Trumps Confidence as a Basis for Witness Credibility” (2007) 18:1 Psychological Science 46 at 48.

72 “Beyond the Ken”, note 65 above, at 198.

73 When asked to evaluate the reliability of their own memories, people vastly overestimated. “Beyond the Ken”, note 65 above, at 196.

74 See Chapter 8 in this volume.

75 Cf. John Wixted & Gary Wells, “The Relationship between Eyewitness Confidence and Identification Accuracy: A New Synthesis” (2017) 18:1 Psychological Science in the Public Interest 10 at 55 (noting that in ideal conditions confidence level at initial identification is actually a good proxy for accuracy).

76 “Cognitive Psychology”, note 71 above, at 267–268.

77 Footnote Ibid. at 265, 267.

78 Footnote Ibid. at 265.

80 Footnote Ibid. at 267.

81 Footnote Ibid. at 268.

83 Footnote Ibid. at 260.

84 Elizabeth Loftus, “Psychological Aspects of Courtroom Testimony” (1980) 347 Annals of the New York Academy of Sciences 27 at 27–28.

85 “Cognitive Psychology”, note 71 above, at 262.

86 John Leubsdorf, “Does the American Rule Promote Access to Justice? Was that Why It Was Adopted?” (2019) 67 Duke Law Journal Online 257 at 257.

87 Cornell Legal Information Institute, “Alternative Dispute Resolution,” www.law.cornell.edu/wex/alternative_dispute_resolution. Mediation is an informal alternative to litigation, in which adverse parties, operating through mediators, attempt to reach a settlement.

89 Teneille Brown & Emily Murphy, “Through a Scanner Darkly: Functional Neuroimaging as Evidence of a Criminal Defendant’s Past Mental States” (2010) 62:4 Stanford Law Review 1119 at 1199–1201.

90 Stephanie Zimmermann, “Trouble with Tesla: Couple Were Sold a Damaged Car, then Told They Can’t Sue,” Chicago Sun Times (September 28, 2019), https://chicago.suntimes.com/2019/9/27/20887609/tesla-arbitration-car-damage-repair-consumer-legal-chicago-kansas.

92 US Code of Federal Regulations (as amended February 3, 2023), Title 49 [49 CFR], §831.4(c).

93 Footnote Ibid., §831.9(a)(3).

94 United States Code (2018), Title 49, §1154(b).

95 Biographies for all board members can be accessed from NTSB, “Board Member Speeches,” www.ntsb.gov/news/speeches/Pages/Default.aspx.

96 49 CFR, note 92 above, §831.8 (authority of investigator in charge), §831.11(a)(1) (designation of parties by investigator in charge).

97 NTSB, “The Investigative Process,” www.ntsb.gov/investigations/process/Pages/default.aspx.

98 49 CFR, note 92 above, §831.4(a)–(b).

99 Jack London, “Issues of Trustworthiness and Reliability of Evidence from NTSB Investigations in Third Party Liability Proceedings” (2003) 68:1 Journal of Air Law and Commerce 39 at 48.

100 NTSB, “Fiscal Year 2020 Budget Request” (Washington DC: NTSB, 2019) at 7, 28, www.ntsb.gov/about/reports/Documents/NTSB-FY20-Budget-Request.pdf.

101 Ethan Sacks, “Self-Driving Uber Car Involved in Fatal Accident in Arizona,” NBC News (March 20, 2018), www.nbcnews.com/tech/innovation/self-driving-uber-car-involved-fatal-accident-arizona-n857941.

102 NTSB, “Highway Accident Report: Collision between Vehicle Controlled by Developmental Automated Driving System and Pedestrian” (Washington DC: NTSB, 2018), www.ntsb.gov/investigations/AccidentReports/Reports/HAR1903.pdf at v–vi (Executive Summary).

103 Footnote Ibid. at 39.

104 Footnote Ibid. at v.

105 Footnote Ibid. at 16.

106 “Uber ‘Not Criminally Liable’ for Self-Driving Death,” BBC (March 6, 2019), www.bbc.com/news/technology-47468391.

107 Kate Conger, “Driver Charged in Uber’s Fatal 2018 Autonomous Car Crash,” The New York Times (September 15, 2020), www.nytimes.com/2020/09/15/technology/uber-autonomous-crash-driver-charged.html.

108 Kiara Alfonseca, “Uber Reaches Settlement with Family of Woman Killed by Self-Driving Car,” NBC News (March 29, 2018), www.nbcnews.com/news/us-news/uber-reaches-settlement-family-woman-killed-self-driving-car-n861131.

109 Joann Muller, “Cruise’s Robotaxis Can Charge You for Rides Now,” Axios (June 6, 2022), www.axios.com/2022/06/06/cruise-driverless-taxi-san-fransisco.

110 As of April 2023, Cruise had applied for permission to begin testing its AVs throughout California at speeds of up to 55 miles per hour (25 mph higher). Michael Liedtke, “No Driver? No Problem. Robotaxis Eye San Francisco Expansion,” AP News (April 5, 2023), https://apnews.com/article/driverless-cars-robotaxis-waymo-cruise-tesla-684556379bb57425c8fdf35268e8046d.

112 See “Machine Testimony”, note 57 above (describing the potential infirmities of machine sources, providing a taxonomy of machine evidence that explains which types implicate credibility and explores how courts have attempted to regulate them, and offering a new “vision” of testimonial safeguards for machine sources of information).

113 See generally Andrea Roth, “Trial by Machine” (2016) 104:5 Georgetown Law Journal 1245 (documenting the rise of mechanical proof and decision-making in criminal trials as a means of enhancing objectivity and accuracy, at least when the shift toward the mechanical has benefited certain interests).

7 Principles to Govern Regulation of Digital and Machine Evidence

1 See generally Chapter 8 in this volume.

2 See Erin Murphy, “The New Forensics: Criminal Justice, False Certainty, and the Second Generation of Scientific Evidence” (2007) 95:3 California Law Review 721 [“New Forensics”] (comparing “first-generation” techniques, such as tool-marks and handwriting, to “second-generation” techniques, such as DNA and digital evidence).

3 See generally Innocence Project, “DNA Exonerations in the United States (1989–2020),” https://innocenceproject.org/dna-exonerations-in-the-united-states/ (noting numerous exonerations in cases involving mistaken eyewitnesses, false confessions, and embellished forensic evidence).

4 See Chapter 6 in this volume.

5 Sabine Gless, “AI in the Courtroom: A Comparative Analysis of Machine Evidence in Criminal Trials” (2020) 51:2 Georgetown Journal of International Law 195 [“AI in the Courtroom”].

6 EU Charter of Fundamental Rights, 2000 (came into force in 2009), Title VI, Arts. 47–48; see also Artificial Intelligence Act, European Union (proposed April 21, 2021), COM(2021) 206 final 2021/0106, Explanatory Memorandum s. 3.5, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206.

7 See Chapter 10 in this volume (exploring the shift toward digital evidence in Dutch criminal courts).

8 See Chapter 9 in this volume. Erin Murphy divides “technological evidence” into location trackers, electronic communications and social media, historical search or cloud or vendor records, “Internet of Things” and smart tools, surveillance cameras and visual imagery, biometric identifiers, and analytical software tools.

9 18 United States Code [18 USC], §§2701–2712.

10 See e.g. Luciano Floridi & Josh Cowls, “A Unified Framework of Five Principles for AI in Society” (2019) 1:1 Harvard Data Science Review (examining forty-seven principles promulgated since 2016, which map onto beneficence, non-maleficence, autonomy, justice, and explicability); Anna Jobin, Marcello Ienca, & Effy Vayena, “The Global Landscape of AI Ethics Guidelines” (2019) 1:9 Nature Machine Intelligence 389–399 (reviewing 84 documents, which centered around transparency, justice and fairness, non-maleficence, responsibility, and privacy); Daniel Greene, Anna Lauren Hoffmann, & Luke Stark, “Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning” (paper delivered at the Proceedings of the 52nd Hawaii International Conference on System Sciences, January 8, 2019), cited in Samuele Lo Piano, “Ethical Principles in Machine Learning and Artificial Intelligence: Cases from the Field and Possible Ways Forward” (2020) 7:1 Humanities and Social Sciences Communications, Article 9 (collecting meta-studies). The Council of Europe’s 2020 Resolution on AI also includes these values, specifically mentioning “transparency, including accessibility and explicability,” “justice and fairness,” and “human responsibility for decisions.” See Council of Europe, “Council of Europe and Artificial Intelligence,” www.coe.int/en/web/artificial-intelligence.

11 See e.g. Federal Rules of Evidence, United States (as amended on December 1, 2020) [Federal Rules of Evidence], Rules 602 (liberal competence standard), 806 (allowing impeachment of hearsay declarants), 608–609 (allowing impeachment by character-for-dishonesty evidence), and 613 and 801(d) (impeachment by inconsistent statements).

12 See e.g. Federal Rules of Evidence, note 11 above, Rules 901 and 902 (imposing minimal authentication requirements).

13 Manson v. Brathwaite, 432 U.S. 98 (1977).

14 See e.g. State v. Henderson, 27 A.3d 872, 878 (NJ 2011) (establishing protocols for eyewitness identification procedures).

15 See Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993) (setting forth a non-exhaustive list of factors trial courts should use in determining the scientific validity of an expert method). A minority of US state jurisdictions continue to adhere to the alternative Frye test, which looks to whether novel scientific methods are “general[ly] accept[ed]” in the scientific community. See Frye v. United States, 293 F. 1013 (DC Cir. 1923).

16 See e.g. G. Daniel Lassiter, Andrew L. Geers, Ian M. Handley et al., “Videotaped Interrogations and Confessions: A Simple Change in Camera Perspective Alters Verdicts in Simulated Trials” (2002) 87:5 Journal of Applied Psychology 867 at 867.

17 See Edward Cheng & Alexander Nunn, “Beyond the Witness: Bringing a Process Perspective to Modern Evidence Law” (2019) 97:6 Texas Law Review 1077 [“Beyond the Witness”]; see also Jules Epstein, “The Great Engine that Couldn’t: Science, Mistaken Identifications, and the Limits of Cross-Examination” (2007) 36:3 Stetson Law Review 727.

18 See e.g. “New Forensics”, note 2 above (noting this aspect of “second-generation” forensic techniques like DNA).

19 See ibid.; see also Chapter 9 in this volume.

20 See Chapter 8 in this volume. The chapter defines “raw data” as data produced by a machine without any processing, “measurement data” as data produced by a machine after rudimentary calculations, and “evaluative data” as data produced by a machine according to sophisticated algorithmic methods that cannot be reproduced manually.

21 Prominent American evidence scholar John Henry Wigmore famously described cross-examination in this way, see John Henry Wigmore, Evidence in Trials at Common Law, vol. 5 (Boston, MA: Little, Brown & Co., 1974) at 32, s. 1367.

22 See e.g. Declaration of Nathaniel Adams, People v. Hillary, No. 2015–15 (New York County Court of St Lawrence, May 27, 2016) at 1–2 (on file with author) (listing citations to several governing bodies that have come together to promulgate industry standards for software development and testing).

23 See Supplemental Findings and Conclusions of Remand Court at 11, State v. Chun, No. 58,879 (NJ November 14, 2007), www.nj-dmv-dwi.com/state-v-chun-alcotest-litigation/.

25 See State v. Chun, 943 A.2d 114, 129–30 (NJ 2008); see also Robert Garcia, “‘Garbage in, Gospel Out’: Criminal Discovery, Computer Reliability, and the Constitution” (1991) 38:5 UCLA Law Review 1043 at 1088 (citing GAO report finding deficiencies in software used by Customs Office to record license plates, and investigations of failures of IRS’s computer system).

26 See e.g. Nathaniel Adams, “What Does Software Engineering Have to Do with DNA?” (2018) May Issue NACDL The Champion 58 [“Software Engineering”] (arguing that software should be subject to industry-standard IEEE-approved independent software testing); Andrea Roth, “Machine Testimony” (2017) 126:7 Yale Law Journal 1972 [“Machine Testimony”] at 2023 (arguing for independent software testing as admissibility requirement).

27 “Software Engineering”, note 26 above.

28 See Final Report – Variation in STRMix Regarding Calculation of Expected Heights of Dropped Out Peaks (STRMix, July 4, 2016) at 1–2 (on file with author) (acknowledging coding errors, but noting that errors would only underestimate the likelihood of contribution). Of course, an error underestimating the likelihood of contribution might also be detrimental to a factually innocent defendant in certain cases, such as where the defense alleges a third-party perpetrator.

29 See United States, Bill HR 2438, Justice in Forensic Algorithms Act of 2021, 117th Cong., 2021, www.govtrack.us/congress/bills/117/hr2438.

30 See e.g. Conforming Products List of Evidential Breath Alcohol Measurement Devices, 2012, 77 Fed. Reg. 35,747, 35,748 (prohibiting states from using machines except those approved by the National Highway Transportation Safety Administration).

31 See e.g. People v. Adams, 131 Cal. Rptr. 190, 195 (Ct. App. 1976) (holding that a failure to calibrate breath-alcohol equipment went only to weight).

32 See e.g. People v. Lopez, 286 P.3d 469, 494 (Cal. 2012) (admitting results of gas chromatograph, without testimony of expert); “Machine Testimony”, note 26 above, at 1989–1990 (explaining that the hearsay rule does not apply to machines, heightening the need for alternative forms of scrutiny).

33 See e.g. United States v. Lizarraga-Tirado, 789 F.3d 1107, 1109 (9th Cir. 2015) (admitting Google Earth “pin” associated with GPS coordinates as evidence that defendant had been arrested on the US side of the US–Mexico border for purposes of an illegal re-entry prosecution).

34 See e.g. Shawn Harrington, Joseph Teitelman, Erica Rummel et al., “Validating Google Earth Pro as a Scientific Utility for Use in Accident Reconstruction” (2017) 5:2 SAE International Journal of Transport Safety 135.

35 Cf. “Beyond the Witness”, note 17 above (arguing that process-based evidence should be subject to testing to determine error rate).

36 See e.g. Andrew Tutt, “An FDA for Algorithms” (2017) 69:1 Administrative Law Review 83 (suggesting that such a body could prevent problematic algorithms from going to market).

37 Paul B. de Laat, “Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability?” (2018) 31:4 Philosophy & Technology 525.

38 Christopher D. Steele & David J. Balding, “Statistical Evaluation of Forensic DNA Profile Evidence” (2014) 1:1 Annual Review of Statistics and Its Application 361 at 380.

39 See Jeff Larson, Surya Mattu, Lauren Kirchner et al., “How We Analyzed the COMPAS Recidivism Algorithm,” ProPublica (May 23, 2016), www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm.

40 See “Response to ProPublica: Demonstrating Accuracy, Equity, and Predictive Parity,” Northpointe Research Department (July 8, 2016), www.equivant.com/response-to-propublica-demonstrating-accuracy-equity-and-predictive-parity/ [“Response to ProPublica”]; Jon Kleinberg, Sendhil Mullainathan, & Manish Raghavan, “Inherent Trade-Offs in the Fair Determination of Risk Scores,” Cornell University (November 17, 2016), arxiv.org/abs/1609.05807v2 (arguing that algorithms like COMPAS cannot simultaneously satisfy all three possible means of measuring algorithmic fairness, and that COMPAS achieves predictive parity even with different false positive rates).

41 “Response to ProPublica”, note 40 above.

42 See e.g. Richard Berk, Hoda Heidari, Shahin Jabbari et al., “Fairness in Criminal Justice Risk Assessments: The State of the Art” (2021) 50:1 Sociological Methods & Research 3 (explaining that these two types of fairness are incompatible).

43 See e.g. Dana Pessach & Erez Shmueli, “Algorithmic Fairness,” Cornell University (January 21, 2020), https://arxiv.org/abs/2001.09784 (noting that COMPAS offered certain types of predictive parity, but that the odds of being predicted dangerous were worse for African-Americans than White subjects).

44 Deborah Hellman, “Measuring Algorithmic Fairness” (2020) 106:4 Virginia Law Review 811.

45 Ibid. at 840–841.

46 Osagie K. Obasogie, “The Return of Biological Race? Regulating Race and Genetics Through Administrative Agency Race Impact Assessments” (2012) 22:1 Southern California Interdisciplinary Law Journal 1.

47 “Justice by Algorithm – The Role of Artificial Intelligence in Policing and Criminal Justice Systems,” Doc. 15156, report of the Committee on Legal Affairs and Human Rights, Resolution 2342 (Council of Europe, Parliamentary Assembly, 2020), https://pace.coe.int/en/files/28805/html [“Justice by Algorithm”].

48 See e.g. Taylor Jones, Jessica Rose Kalbfeld, Ryan Hancock et al., “Testifying While Black: An Experimental Study of Court Reporter Accuracy in Transcription of African American English” (2019) 95:2 Language: Linguistic Society of America 216.

49 Whether AI voice-recognition-driven court reporting systems are more accurate than human stenographers remains to be seen.

50 “Beyond the Witness”, note 17 above.

51 See e.g. Federal Rules of Criminal Procedure, United States (as amended December 1, 2022) [Federal Rules of Criminal Procedure], Rule 16(a)(1)(G).

52 See Federal Rules of Criminal Procedure, note 51 above, Rule 26(a)(2)(B)(ii).

53 See e.g. Gregory v. United States, 369 F.2d 185, 188 (DC Cir. 1966) (“Both sides have an equal right, and should have an equal opportunity, to interview [state witnesses]”).

54 See e.g. 18 USC, note 9 above, Jencks Act, 18 USC §3500(b).

55 See State’s Response to Defense Motion to Compel, State v. Fair, No. 10-1-09274-5 (Wash. Sup. Ct. April 1, 2016) at 21 (representations made by TrueAllele as to defense access to its program).

56 Maayan Perel & Niva Elkin-Koren, “Black Box Tinkering: Beyond Transparency in Algorithmic Enforcement” (2017) 69:5 Florida Law Review 181.

57 Nick Diakopoulos, “Algorithmic Accountability Reporting: On the Investigation of Black Boxes” (2013) Tow Center for Digital Journalism 30, https://academiccommons.columbia.edu/doi/10.7916/D8ZK5TW2.

58 Jennifer Mnookin, “Repeat Play Evidence: Jack Weinstein, ‘Pedagogical Devices,’ Technology, and Evidence” (2015) 64:2 DePaul Law Review 571 at 573.

59 General Data Protection Regulation, EU 2016, Regulation (EU) 2016/679 (with effect from May 25, 2018).

60 See e.g. Andrew Morin, Jennifer Urban, Paul D. Adams et al., “Shining Light into Black Boxes” (2012) 336:6078 Science 159 at 159 [“Shining Light”] (“Common implementation errors in programs … can be difficult to detect without access to source code”); Erin E. Kenneally, “Gatekeeping Out of the Box: Open Source Software as a Mechanism to Assess Reliability for Digital Evidence” (2001) 6:13 Virginia Journal of Law and Technology 13 (arguing that access to source code is necessary to prevent or unearth many structural programming errors).

61 See e.g. United States v. Liebert, 519 F.2d 542, 543, 550–51 (3d Cir. 1975) (entertaining the possibility that the defense was entitled to view the IRS program’s prior reports of non-filers to determine their accuracy, but determining that access was not necessary to impeach the program).

62 State officials generally refuse defense requests for access to the other reported near matches, notwithstanding arguments that these matches might prove exculpatory. See generally Simon A. Cole, “More than Zero: Accounting for Error in Latent Fingerprint Identification” (2005) 95:3 Journal of Criminal Law and Criminology 985.

63 Kathleen E. Watson, “COBRA Data and the Right to Confront Technology against You” (2015) 42:2 Northern Kentucky Law Review 375 at 381–382. But see Turcotte v. Dir. of Revenue, 829 S.W.2d 494, 496 (Mo. Ct. App. 1992) (holding that the state’s failure to file timely maintenance reports on a breath-alcohol machine did not “impeach the machine’s accuracy”).

64 “AI in the Courtroom”, note 5 above, at 248.

65 See e.g. Emiliano De Cristofaro, “An Overview of Privacy in Machine Learning,” Cornell University (May 18, 2020), https://arxiv.org/abs/2005.08679.

66 See generally Rebecca Wexler, “Life, Liberty, and Trade Secrets: Intellectual Property in the Criminal Justice System” (2018) 70:5 Stanford Law Review 1343.

67 Ibid. (arguing that trade secrets doctrine should not apply in criminal cases).

68 See generally Rebecca Wexler, “Privacy Asymmetries: Access to Data in Criminal Defense Investigations” (2021) 68:1 UCLA Law Review 212.

69 See In re. Oliver, 333 U.S. 257 (1948); Sixth Amendment to the US Constitution (right to a “public trial”).

70 “Justice by Algorithm”, note 47 above, at 9.3.

71 Cathy O’Neil, Weapons of Math Destruction (New York, NY: Crown Books, 2016) [Weapons of Math Destruction] at 211 (calling for “crowdsourcing campaigns” to offer feedback on errors and biases in datasets and models); see also Frank Pasquale, The Black Box Society: The Secret Algorithms that Control Money and Information (Cambridge, MA: Harvard University Press, 2015) at 208 (arguing for open source software in determining credit scores).

72 Holly Doremus, “Listing Decisions under the Endangered Species Act: Why Better Science Isn’t Always Better Policy” (1997) 75:3 Washington University Law Quarterly 1029 at 1138.

73 “Shining Light”, note 60 above (arguing for open-source software for public law uses).

74 Weapons of Math Destruction, note 71 above.

75 See Chapter 9 in this volume.

76 See Ake v. Oklahoma, 470 U.S. 68 (1985).

77 See e.g. Recommendation CM/Rec(2020)1 on the Human Rights Impacts of Algorithmic Systems (Council of Europe, Committee of Ministers, 2020) at 9, 13 (“[a]ffected individuals and groups should be afforded effective means to contest relevant determinations and decisions … [which] should include an opportunity to be heard, a thorough review of the decision and the possibility to obtain a non-automated decision”); OECD, Council on Artificial Intelligence, Recommendation of the Council on Artificial Intelligence, 2020, OECD/LEGAL/0449, at s. 1.3.iv, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.

78 See Holmes v. South Carolina, 547 U.S. 319 (2006).

79 See David A. Sklansky, “Hearsay’s Last Hurrah” (2009) 2009:1 Supreme Court Review 1.

80 Carl DiSalvo, Adversarial Design (Cambridge, MA: The MIT Press, 2012).

81 See Daniel Epps & William Ortman, “The Defender General” (2020) 168:6 University of Pennsylvania Law Review 1469.

82 See Matthias Möller & Cornelis Vuik, “On the Impact of Quantum Computing Technology on Future Developments in High-Performance Scientific Computing” (2017) 19:4 Ethics and Information Technology 253.

83 See “Machine Testimony”, note 26 above, at 2038.

84 See Federal Rules of Evidence, note 11 above, Rule 901(9) (allowing admission of a live witness to prove that a “process or system” produces an accurate result), and Rule 902(13), (14) (allowing admission of electronically stored and generated information upon presentation of a certification from a qualified witness who can attest to how the process works).

85 See e.g. Zhuhao Wang, “China’s E-Justice Revolution” (2021) 105:1 Judicature 37 (noting how blockchain is used for authentication of electronic evidence); Ran Wang, “Legal Technology in Contemporary USA & China” (2020) 39:10549 Computer Law & Security Review 1 at 4.

86 Criminal Justice Act 2003, United Kingdom, c. 44, s. 129(1). If the inputter’s “purpose” is “to cause … a machine to operate on the basis that the matter is as stated,” it is treated as hearsay (see s. 115(3)), requiring the live testimony of the inputter (see s. 114(1)). The provision “does not affect the operation of the presumption that a mechanical device has been properly set or calibrated” (see s. 129(2)).

87 See e.g. Footnote ibid. (requiring inputter testimony); Gert Petrus van Tonder, “The Admissibility and Evidential Weight of Electronic Evidence in South African Legal Proceedings: A Comparative Perspective” (LLM thesis, University of Western Cape, May 2013), etd.uwc.ac.za/xmlui/bitstream/handle/11394/4833/VanTonder_gp_llm_law_2013.pdf (requiring live testimony of signer of documents).

88 See generally Andrea Roth, “Trial by Machine” (2016) 104:5 Georgetown Law Journal 1245 [“Trial by Machine”] (noting how various aspects of American criminal justice have become more mechanical).

89 See e.g. Martha C. Nussbaum, “Equity and Mercy” (1993) 22:2 Philosophy & Public Affairs 83 at 93 and n. 19 (explaining that equity “may be regarded as a ‘correcting’ and ‘completing’ of legal justice”).

90 Jeffrie G. Murphy, “Mercy and Legal Justice” in Jeffrie G. Murphy & Jean Hampton, Forgiveness and Mercy (Cambridge, UK: Cambridge University Press, 1998) 162 at 176.

91 Meg Leta Jones, “Right to a Human in the Loop: Political Constructions of Computer Automation and Personhood from Data Banks to Algorithms” (2017) 47:2 Social Studies of Science 216 at 231.

92 See generally Tom Tyler, “Procedural Justice, Legitimacy, and the Effective Rule of Law” (2003) 30:1 Crime & Justice 283 (explaining the role of procedural justice in inspiring compliance with law).

93 See e.g. Laurence Tribe, “Trial by Mathematics: Precision and Ritual in the Legal Process” (1971) 84:6 Harvard Law Review 1329.

94 See e.g. Katharine Miller, “When Algorithmic Fairness Fixes Fail: The Case for Keeping Humans in the Loop,” Stanford University: Institute for Human-Centered AI (November 2, 2020), https://hai.stanford.edu/blog/when-algorithmic-fairness-fixes-fail-case-keeping-humans-loop.

95 “Justice by Algorithm”, note 47 above, at 9.13.

96 See European Commission, Directive (EU) 2016/680 of April 27, 2016 (OJ 4.5.2016, L 119, 89), Art. 11.

97 Others have called for this; see e.g. Josh Bowers, “Legal Guilt, Normative Innocence, and the Equitable Decision Not to Prosecute” (2010) 110:7 Columbia Law Review 1655 at 1723; Anna Roberts, “Dismissals as Justice” (2017) 69:2 Alabama Law Review 327 (discussing Model Penal Code §2.12).

98 See e.g. Scott Brewer, “Scientific Expert Testimony and Intellectual Due Process” (1998) 107:6 Yale Law Journal 1535 at 1551 (arguing for a due process right to an “epistemically competent” fact-finder).

99 Cf. Federal Rules of Evidence, note 11 above, Rule 704 (prohibiting expert witnesses from giving opinions as to whether criminal defendants have the mental state required).

100 See Sonja B. Starr, “Evidence-Based Sentencing and the Scientific Rationalization of Discrimination” (2014) 66:4 Stanford Law Review 803 at 866–868 (suggesting that actuarial instruments drive judicial sentencing decisions).

101 R. A. Bain, “Comment, Guidelines for the Admissibility of Evidence Generated by Computer for Purposes of Litigation” (1982) 15:4 UC Davis Law Review 951 at 961 (noting that fact-finders might be unduly “awed by computer technology”).

102 See Benjamin V. Madison III, “Seeing Can Be Deceiving: Photographic Evidence in a Visual Age – How Much Weight Does It Deserve?” (1984) 25:4 William & Mary Law Review 705 at 740 (arguing for jury instructions along these lines for photographs); see generally Jessica M. Silbey, “Judges as Film Critics: New Approaches to Filmic Evidence” (2004) 37:2 University of Michigan Journal of Law Reform 493 (suggesting trial safeguards for explaining testimonial infirmities of images to fact-finders).

103 “Trial by Machine”, note 88 above (describing the penile plethysmograph and arguing that its use violates dignitary interests of subjects).

104 See ibid. (discussing personhood objections to various forms of lie detection evidence).

105 See Rochin v. California, 342 U.S. 165 (1952).

106 See e.g. “Justice by Algorithm”, note 47 above, at 9.9 (Member States should “ensure that the essential decision-making processes of AI applications are explicable to their users and those affected by their operation”).

107 See e.g. Andrea Saltelli, “Ethics of Quantification or Quantification of Ethics?” (2020) 116:102509 Futures 1 (discussing “metric fixation”); “Trial by Machine”, note 88 above, at 1281 (quoting Sally Engle Merry, The Seductions of Quantification: Measuring Human Rights, Gender Violence and Sex Trafficking (Chicago, IL: University of Chicago Press, 2016) (exploring the distorting effects of the quest for measurable indicators in the context of human rights)).

108 “Justice by Algorithm”, note 47 above, at 9.3.

109 “AI in the Courtroom”, note 5 above, at 251.

110 “AI in the Courtroom”, note 5 above, at 249.

111 See generally Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control (New York, NY: Penguin Books, 2019).

8 Robot Testimony? A Taxonomy and Standardized Approach to the Use of Evaluative Data in Criminal Proceedings

* We wish to express our gratitude to the Swiss National Science Foundation for ongoing support as well as to NYU’s Jean Monnet Program for providing a forum in which to discuss our results.

1 See e.g. “Swiss Politician Fined Over Crash That Injured 17-Year-Old,” The Local (October 31, 2016), www.thelocal.ch/20161031/swiss-politician-fined-over-crash-that-injured-17-year-old.

2 Straßenverkehrsgesetz (StVG), SR 741.01 (as of January 1, 2020), Art. 91, para. 2, www.admin.ch/opc/de/classified-compilation/19580266/index.html.

3 Ibid. Art. 100, para. 1.

4 Some weeks after the accident, the car driver accepted a summary penalty order. With such an order, the public prosecutor’s office fixes a penalty for a criminal offense that will be enforced if the accused does not ask for the matter to be dealt with under the normal procedure by a court, Swiss Criminal Procedure Code, SR 312.0 (with effect from January 1, 2011) [Swiss CrimPC], Arts. 352–356, www.fedlex.admin.ch/eli/cc/2010/267/en.

5 For a more detailed discussion as to what information should be accessible, see Edward Imwinkelried, “Computer Source Code: A Source of the Growing Controversy over the Reliability of Automated Forensic Techniques” (2016) 66:1 DePaul Law Review 97.

6 For the definition of robot, see Chapter 6 in this volume (“an engineered machine that senses, thinks, and acts,” citing Patrick Lin, Keith Abney, & George Bekey, “Robot Ethics: Mapping the Issues for a Mechanized World” (2011) 175:5–6 Artificial Intelligence 942 at 943).

7 Muhammad Ramzan, Hikmat U. Khan, Shahid Mahmood Awan et al., “A Survey on State-of-the-Art Drowsiness Detection Techniques” (2019) 7 Institute of Electrical and Electronics Engineers Access 61904 [“Drowsiness Detection”] at 61908; for a legal assessment of such evidence, see Sabine Gless, Fred Lederer, & Thomas Weigend, “AI-Based Evidence in Criminal Trials?” (2024) 59:1 Tulsa Law Review 1.

8 For different ways to train systems to detect drowsiness, see Elena Magán López, M. Paz Sesmero Lorente, Juan Manuel Alonso-Weber et al., “Driver Drowsiness Detection by Applying Deep Learning Techniques to Sequences of Images” (2022) 12:3 Applied Sciences 1145; Samy Bakheet & Ayoub Al-Hamadi, “A Framework for Instantaneous Driver Drowsiness Detection Based on Improved HOG Features and Naïve Bayesian Classification” (2021) 11:2 Brain Sciences 240.

9 For details, see European Union, The European Parliament, & The Council of the European Union, Regulation (EU) 2019/2144 of 27 November 2019 on Type-Approval Requirements for Motor Vehicles, OJ 2019 L 325, ECE/TRANS/WP.29/2020/81 (EU: Official Journal of the European Union, 2019) [Regulation 2019/2144].

10 See ibid., as well as Straßenverkehrsgesetz (SVG) (Entwurf) (Swiss Reform Proposal), BBl 2021 3027 (December 29, 2021), www.fedlex.admin.ch/eli/fga/2021/3027/de.

11 For issues raised when using new technology for evidentiary purposes, see Edward Imwinkelried, “The Admissibility of Scientific Evidence: Exploring the Significance of the Distinction between Foundational Validity and Validity as Applied” (2020) 70:3 Syracuse Law Review 817 [“Scientific Evidence”] at 818–820.

12 In this chapter, the term “fact-finder” is used to refer to the legal actor responsible for determining the facts in a criminal case, i.e., judge or bench in a case that goes to trial, or prosecutor in a case disposed of by summary penalty order.

13 See Erin Murphy, “The New Forensics: Criminal Justice, False Certainty, and the Second Generation of Scientific Evidence” (2007) 95:3 California Law Review 721 at 723–724.

14 See Paul Grimm, Maura Grossmann, & Gordon Cormack, “Artificial Intelligence as Evidence” (2021) 19:1 Northwest Journal of Technology and Intellectual Property 9 (using the term “function creep”).

15 For details, see e.g. the Appendixes to Regulation 2019/2144, note 9 above.

16 For a visionary account of future courtrooms, see Frederic Lederer, “Technology-Augmented and Virtual Courts and Courtrooms” in M. R. McGuire & Thomas Holt (eds.), The Routledge Handbook of Technology, Crime and Justice (London, UK: Routledge, 2017) 518 at 525–526.

17 For a discussion on issues concerning scientific evidence, cf. Edward Imwinkelried, “Improving the Presentation of Expert Testimony to the Trier of Fact: An Epistemological Insight in Search of an Evidentiary Theory” (2020) 52:1 Arizona State Law Journal 49 at 57–59.

18 Eoghan Casey, Digital Evidence and Computer Crime, 3rd ed. (London, UK: Academic Press, 2011) at 7.

19 For further analysis, see Alex Biedermann & Joëlle Vuille, “Digital Evidence, ‘Absence’ of Data and Ambiguous Patterns of Reasoning” (2016) 16:S86–S96 Digital Investigation S86 at S90; Joëlle Vuille & Franco Taroni, “Measuring Uncertainty in Forensic Science” (2021) 24:1 Institute of Electrical and Electronics Engineers Instrumentation & Measurement Magazine 5 at 8.

20 Swiss CrimPC, note 4 above, Art. 139, www.fedlex.admin.ch/eli/cc/2010/267/en#a165.

21 For the Daubert/Frye test in the United States, see Andrea Roth, “Machine Testimony” (2017) 126:7 Yale Law Journal 1972 [“Machine Testimony”] at 1981–1983; for the more principle-driven “systematic approach” in Germany, see Sabine Gless, “AI in the Courtroom: A Comparative Analysis of Machine Evidence in Criminal Trials” (2020) 51:2 Georgetown Journal of International Law 195 [“AI in the Courtroom”] at 234–237.

22 Joelle Vuille & Franco Taroni, “Measuring Uncertainty in Forensic Science” (2021) 24:1 IEEE Instrumentation & Measurement Magazine 5 at 5–9; Steven Lund & Hari Iyer, “Likelihood Ratio as Weight of Forensic Evidence: A Closer Look” (2017) 122:27 Journal of Research of National Institute of Standards and Technology 1; Filipo Sharevski, “Rules of Professional Responsibility in Digital Forensics: A Comparative Analysis” (2015) 10:2 Journal of Digital Forensics, Security and Law 39; Nils O. Ommen, Markus Blut, Christof Backhaus et al., “Toward a Better Understanding of Stakeholder Participation in the Service Innovation Process: More than One Path to Success” (2016) 69:7 Journal of Business Research 2409.

23 Edward Imwinkelried, “The Importance of Forensic Metrology in Preventing Miscarriages of Justice: Intellectual Honesty About the Uncertainty of Measurement in Scientific Analysis” (2014) 7:2 John Marshall Law Journal 333 [“Forensic Metrology”] at 353–362.

24 Raw data is comparable to DNA taken from blood samples on a murder weapon in the analog world.

25 The court conceded, however, a practical need for procedural flexibility in small-scale crimes en masse, i.e., certain traffic violation cases: see BVerfG Beschluss (Order of German Federal Constitutional Court) of November 12, 2020, 2 BvR 1616/18.

26 Ibid. nos. 32–34 and 50–55. The Constitutional Court based its decision on two articles of the Grundgesetz (German Basic Law) (with effect from May 23, 1949), Art. 2, para. 1 (which grants a general right of liberty and autonomy) and Art. 20, para. 3 (which captures a specific aspect of the rule of law – Rechtsstaatlichkeitsprinzip).

27 “Data is the representation of information in a form that can be processed by a machine”: Dino Buzzetti, “Digital Editions and Text Processing” in Marilyn Deegan & Kathryn Sutherland (eds.), Text Editing, Print and the Digital Word (Farnham, UK: Ashgate, 2009) 46.

28 In terms of measurement, the difference between the maximum and minimum dimensions of permissible errors is called the “tolerance.” The allowable range of errors prescribed by law, such as with industrial standards, can also be referred to as tolerance; see Measurement Fundamentals, “What Is Tolerance?” www.keyence.co.in/ss/products/measure-sys/measurement-selection/basic/tolerance.jsp.

29 Sandra Wachter & Brent Mittelstadt, “A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI” (2019) 2019:2 Columbia Business Law Review 494 at 510–511.

30 Richard O. Lempert, Samuel R. Gross, James S. Liebman et al., A Modern Approach to Evidence, 5th ed. (St. Paul, MN: West Academic Publishing, 2014) [Modern Approach] at 5.

31 For more details, see “Drowsiness Detection”, note 7 above, at 61904–61919.

32 Cynthia Rudin, “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead” (2019) 1:5 Nature Machine Intelligence 206.

33 “Machine Testimony”, note 21 above; Rebecca Wexler, “Life, Liberty, and Trade Secrets: Intellectual Property in the Criminal Justice System” (2018) 70:5 Stanford Law Review 1343; “AI in the Courtroom”, note 21 above.

34 Robert Cook et al., “A Hierarchy of Propositions: Deciding Which Level to Address in Casework” (1998) 38:4 Science & Justice 231 [“Hierarchy of Propositions”]; for the notion of “circumstantial evidence” in law, see Modern Approach, note 30 above, at 217–219.

35 For more detail on the expectation that experts provide a meaningful quantitative measure of uncertainties, see “Forensic Metrology”, note 23 above, at 353–362.

36 Joelle Vuille & Joerg Arnold, “L’appréciation des preuves techniques en matière de circulation routière – les traces numériques” (Assessment of Forensic Traffic Data – Digital Evidence) (2019) 3 Circulation Routière 60; on the expectation in the United States that experts provide a meaningful quantitative measure of uncertainties, see “Forensic Metrology”, note 23 above, at 353–362.

37 For case law in the United States discussing the role of likelihood in the context of DNA evidence, see “Forensic Metrology”, note 23 above, at 370, n. 77.

38 See “Hierarchy of Propositions”, note 34 above.

39 For details on the technology, see “SWGDE Best Practices for Archiving Digital and Multimedia Evidence” (Scientific Working Group on Digital Evidence, 2020), www.swgde.org/documents/published-complete-listing; for a discussion on the need to update procedural codes, see Orin Kerr, “Digital Evidence and the New Criminal Procedure” (2005) 105:1 Columbia Law Review 279 [“New Criminal Procedure”] at 285–287.

40 Take, e.g., the verification of raw data by means of checksums (or hash values). Paul Grimm, Daniel Capra, & Gregory Joseph, “Authenticating Digital Evidence” (2017) 69:1 Baylor Law Review 1 [“Authenticating Digital Evidence”] at 17 and 41.

41 For issues involving compelled decryption, see Orin Kerr & Bruce Schneier, “Encryption Workarounds” (2018) 106:4 Georgetown Law Journal 989; Laurent Sacharoff, “Unlocking the Fifth Amendment: Passwords and Encrypted Devices” (2018) 87:1 Fordham Law Review 203.

42 National Highway Traffic Safety Administration (NHTSA) Event Data Recorders Rules, 49 CFR Pt. 563, www.law.cornell.edu/cfr/text/49/part-563 [Data Recorders Rules].

43 For Germany, see Bundestagsdrucksache (Bundestag Document) BT-Drs 19/16250 of December 30, 2019 (Ger.); for a publication prepared under the auspices of the UNECE’s WP 29, see also United Nations, UN Economic and Social Council, Revised Framework Document on Automated/Autonomous Vehicles, ECE/TRANS/WP.29/2019/34 (Geneva: UN, 2019).

44 OEDR is discussed at United Nations, UN Economic and Social Council, Proposal for a New UN Regulation on Uniform Provisions Concerning the Approval of Vehicles with Regards to Automated Lane Keeping System, ECE/TRANS/WP.29/2020/81 (Geneva: UN, 2020) [“Uniform Provisions”] at Chapter 7, DSSAD at Chapter 8.

45 Cf. United Nations, Agreement Concerning the Adoption of Harmonized Technical United Nations Regulations for Wheeled Vehicles, Equipment and Parts, E/ECE/TRANS/505/Rev.3/Add.156 of March 4, 2021, no. 8 ‘Data Storage System for Automated Systems’; reading out the data will be possible using the On-Board Diagnostics port, second generation (OBD II port), launched in 1996; for further information, see UNECE, “Automated Driving,” https://unece.org/automated-driving.

46 Uniform Provisions, note 44 above, at Chapter 7.

47 E.g. the forensic expert will use the vehicle identification number (VIN) when accessing an EDR.

48 A checksum is a small value computed from a block of data that IT professionals use to detect errors introduced during data transmission or storage; see “Checksum,” TechTarget, www.techtarget.com/searchsecurity/definition/checksum.
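The integrity check described in this footnote can be illustrated with a minimal sketch in Python; the file name “edr_dump.bin” and the reference digest below are hypothetical placeholders and are not drawn from any particular forensic tool.

import hashlib

def sha256_of_file(path: str) -> str:
    # Compute the SHA-256 digest of a file, reading it in chunks to limit memory use.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical digest recorded at the time the data was extracted from the vehicle.
reference_digest = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
if sha256_of_file("edr_dump.bin") == reference_digest:
    print("Integrity check passed: the copy matches the data as acquired.")
else:
    print("Integrity check failed: the copy differs from the data as acquired.")

Because any change to the underlying bytes yields a different digest, a matching value supports the claim that the copy examined in the proceedings is identical to the data originally extracted.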

49 See ISO/IEC 27043:2015 Information Technology, Security Techniques, Incident Investigation Principles and Processes (International Organization for Standardization, 2015), www.iso.org/standard/44407.html; ISO/IEC 27037:2012 Guidelines for Identification, Collection, Acquisition and Preservation of Digital Evidence (International Organization for Standardization, 2012); ISO/IEC 27040 Storage Security (International Organization for Standardization, 2015).

50 See Data Recorders Rules, note 42 above; Jeremy Daily, Nathan Singleton, Elizabeth Downing et al., “The Forensics Aspects of Event Data Recorders” (2008) 3:3 Journal of Digital Forensics, Security and Law 29; Nhien-An Le-Khac, Daniel Jacobs, John Nijhoff et al., “Smart Vehicle Forensics: Challenges and Case Study” (2020) 109 Future Generation Computer Systems 500 at 503.

51 Craig Cooley, “Forensic Science and Capital Punishment Reform: An ‘Intellectually Honest’ Assessment” (2007) 17:2 George Mason University Civil Rights Law Journal 299 at 353.

52 “Hierarchy of Propositions”, note 34 above.

53 See Swiss CrimPC, note 4 above, Art. 182 and German Code of Criminal Procedure (as amended March 25, 2022), Art. 75.

54 Colin Howson & Peter Urbach, Scientific Reasoning: The Bayesian Approach, 3rd ed. (Chicago, IL: Open Court, 2006).

55 “Hierarchy of Propositions”, note 34 above.

56 The definition of tolerance limits and the accuracy of results in forensic science are subjects of intense and ongoing discussions. See “ENFSI Guideline for Evaluative Reporting in Forensic Science” (European Network of Forensic Science Institutes, 2015), https://enfsi.eu/wp-content/uploads/2016/09/m1_guideline.pdf.

57 For an analysis of this fundamental problem when facing machine evidence, see “Machine Testimony”, note 21 above, at 1982–1983.

58 For a proposal to use error rates when testing facial recognition, see “Scientific Evidence”, note 11 above, at 838.

59 See Entscheid Obergericht Kanton Zürich (Decision of the Upper Court of Zurich, Switzerland) of November 10, 2016, SB160168-O/U/cwo (Ger.).

60 A promising approach could be to crowdsource data; see Sabine Gless, Xuan Di, & Emily Silverman, “Ca(r)veat Emptor: Crowdsourcing Data to Challenge the Testimony of In-Car Technology” (2022) 62:3 Jurimetrics 285.

61 “AI in the Courtroom”, note 21 above, at 213–214.

62 For a detailed discussion on the need to update procedural codes, see “New Criminal Procedure”, note 39 above, at 289–306.

63 Codes of criminal procedure provide few specific rules, e.g., with regard to DNA sampling, Swiss CrimPC, note 4 above, Art. 255, and the Law on DNA Profiles, Switzerland, SR 363 (with effect from June 20, 2003).

64 This chapter will not address limitations on the gathering of evidence due to privacy rights.

65 For a perspective from the United States, see “New Criminal Procedure”, note 39 above, at 309–310.

66 For further details on the German Strengbeweis, see Michael Bohlander, Principles of German Criminal Procedure, 2nd ed. (Oxford, UK: Hart, 2021) at 145–146.

67 Sabine Gless & Thomas Wahl, “The Handling of Digital Evidence in Germany” in Michele Caianiello & Alberto Camon (eds.), Digital Forensic Evidence. Towards Common European Standards in Antifraud Administrative and Criminal Investigations (Alphen aan den Rijn, Netherlands: Wolters Kluwer, 2021) 52.

68 A minimum prerequisite is the adoption of legal regulations for DSSADs; see Uniform Provisions, note 44 above, at Chapter 9.

69 For details on new certification approaches, see “Machine Testimony”, note 21 above, at 2023–2027; for certification of authenticity of digital evidence in general, see “Authenticating Digital Evidence”, note 40 above, at 46–54.

70 Sabine Gless & Thomas Weigend, “Intelligente Agenten als Zeugen im Strafverfahren?” (Intelligent Agents as Witnesses in Criminal Proceedings) (2021) 76:12 Juristenzeitung 612 at 618–620.

9 Digital Evidence Generated by Consumer Products: The Defense Perspective

* I am deeply grateful to Safeena Mecklai for her outstanding research assistance.

1 See e.g. Ian N. Friedman & Eric C. Nemecek, “#Trending: Traditional Crimes Meet Nontraditional Evidence” (2018) The Champion 20; Erin Murphy, “The New Forensics: Criminal Justice, False Certainty, and the Second Generation of Scientific Evidence” (2007) 95:3 California Law Review 721 at 729–730.

2 See Chapter 8 in this volume.

3 See e.g. Haiyang Jiang, Mingshu He, Yuanyuan Xi et al., “Machine-Learning-Based User Position Prediction and Behavior Analysis for Location Services” (2021) 12:5 Information 180.

4 See Chapter 7 in this volume (recognizing five key rights of the accused).

5 United States v. Scott, No. 2:17-CR-20489-TGB, 2018 WL 2197911, at *5 (ED Mich. May 14, 2018).

6 The Electronic Frontier Foundation and the Reynolds School of Journalism created a database of police surveillance technologies, which is a helpful compilation of some police surveillance practices. See Atlas of Surveillance, https://atlasofsurveillance.org/.

7 See e.g. Rebecca Wexler, “Privacy Asymmetries: Access to Data in Criminal Defense Investigations” (2021) 68:1 UCLA Law Review 212 [“Privacy Asymmetries”]; Sabine Gless, “AI in the Courtroom: A Comparative Analysis of Machine Evidence in Criminal Trials” (2020) 51:2 Georgetown Journal of International Law 195; Rebecca Wexler, “Life, Liberty and Trade Secrets: Intellectual Property in the Criminal Justice System” (2018) 70:5 Stanford Law Review 1343 [“Life, Liberty”]; Andrea Roth, “Trial by Machine” (2016) 104:5 Georgetown Law Journal 1245 [“Trial by Machine”]; Andrea Roth, “Machine Testimony” (2017) 126:7 Yale Law Journal 1972; Erin Murphy, “The Mismatch between Twenty-First-Century Forensic Evidence and Our Antiquated Criminal Justice System” (2014) 87:3 Southern California Law Review 633; Joshua A. T. Fairfield & Erik Luna, “Digital Innocence” (2014) 99:5 Cornell Law Review 981 [“Digital Innocence”] at 1056; Brandon L. Garrett, “Big Data and Due Process” (2014) 99 Cornell Law Review Online 207; Erin Murphy, “Databases, Doctrine and Constitutional Criminal Procedure” (2010) 37:3 Fordham Urban Law Journal 803.

8 See generally Wayne R. LaFave, Jerold H. Israel, Nancy J. King et al., Criminal Procedure, 4th ed. (St. Paul, MN: Thomson Reuters, 2015) [Criminal Procedure] at ss. 20.2(c) and 20.3.

9 Ion Meyn, “Discovery and Darkness: The Information Deficits in Criminal Disputes” (2014) 79:3 Brooklyn Law Review 1091 at 1095–1096 and 1108–1114.

10 Ibid. at 1113–1114.

11 Criminal Procedure, note 8 above, at s. 20.2(d).

12 Facebook, Inc. v. Superior Court, 471 P.3d 383, 387 (Cal. 2020) [Facebook v. Superior Court].

13 See e.g. Erin Murphy, “The Politics of Privacy in the Criminal Justice System: Information Disclosure, the Fourth Amendment, and Statutory Law Enforcement Exemptions” (2013) 111:4 Michigan Law Review 485 [“Politics of Privacy”].

14 See e.g. “Privacy Asymmetries”, note 7 above, at 215.

15 See Chapter 8 in this volume, and regarding the limited reach of the Fourth Amendment of the US Constitution to state agents and not private actors, see Chapter 11.

16 Kathleen McWilliams, “New Haven Man Jailed for 17 Years Freed after Judge Vacates Murder, Robbery Convictions,” Hartford Courant (April 25, 2018), www.courant.com/breaking-news/hc-br-vernon-horn-released-wrongful-conviction-20180425-story.html.

17 Press Release, “Grosse Ile Police Department Exonerates Two Individuals Using Fixed License Plate Reader Cameras,” Vigilant Solutions (February 4, 2016), www.police1.com/police-products/traffic-enforcement/license-plate-readers/press-releases/grosse-ile-police-department-exonerates-two-individuals-using-fixed-license-plate-reader-cameras-SyndPZ00572XK92v/.

18 2020 WL 1509386 (SD W. Va. Jan. 9, 2020) (slip copy) [Quinones v. United States].

19 Quinones v. United States, note 18 above, at *9. See also Harrison v. Baker, No. 3:18CV85-HEH, 2019 WL 404974, at *4 (ED Va. Jan. 31, 2019); Blackman v. United States, No. CIV.A. 2:12-02509, 2014 WL 1155444, at *4 (DNJ Mar. 21, 2014); United States v. Medina, 918 F.3d 774, 786 (10th Cir.), cert. denied, 139 S.Ct. 2706 (2019).

20 Cooper v. Griffin, 16-CV-0629 (VEC) (BCM), 2019 WL 1026303, at 11 (SDNY Feb. 11, 2019), report and recommendation adopted, 16-CV-0629 (VEC), 2019 WL 1014937 (SDNY Mar. 4, 2019).

21 People v. Wells, No. A112173, 2007 WL 466963, at 6 (Cal. Ct. App. Feb. 14, 2007), as modified on denial of reh’g (Mar. 13, 2007); Jackson v. Lee, 10-CIV-3062 (LAK) (AJP), 2010 WL 4628013 at 13 (SDNY Nov. 16, 2010), report and recommendation adopted, 10-CIV-3062 (LAK), 2010 WL 5094415 (SDNY Dec. 10, 2010).

22 “The Importance of Subpoenaing Cell Phone GPS-Data Records in California Criminal Cases,” HG.org, www.hg.org/legal-articles/the-importance-of-subpoenaing-cell-phone-gps-data-records-in-california-criminal-cases-51299.

23 See e.g. Carpenter v. United States, 138 S.Ct. 2206 (2018); Case C-623/17, Privacy International v. Secretary of State for Foreign and Commonwealth Affairs and others; Joint Cases C-511/18, La Quadrature du Net and others, C-512/18, French Data Network and others, and C-520/18, Ordre des barreaux francophones et germanophone and others.

24 “Danish Data Retention: Back to Normal after Major Crisis,” EDRi (November 6, 2019), https://edri.org/our-work/danish-data-retention-back-to-normal-after-major-crisis/; see also Lene Wacher Lentz & Nina Sunde, “The Use of Historical Call Data Records as Evidence in the Criminal Justice System – Lessons Learned from the Danish Telecom Scandal” (2021) 18 Digital Evidence and Electronic Signature Law Review 1 (“To support the ability of the defence to challenge the evidence, the prosecution must provide a transparent presentation of the data and the processes as a whole, with all the inherent risk of errors and uncertainties”).

25 Monique C. M. Leahy, “Recovery and Reconstruction of Electronic Mail as Evidence” in American Jurisprudence Proof of Facts, 3d at section 1, vol. 41 (Rochester, NY: Lawyers Cooperative Publishing, 2020).

26 Emily R. West, “Nolensville Homicide Suspect Wants Snapchat in Trial,” Tennessean (August 22, 2019), www.tennessean.com/story/news/local/williamson/2019/08/22/nolensville-murder-robert-ward-jonathon-elliott-snapchat/2083867001/ [“Nolensville”].

27 Matthew Diebel, “Man Convicted of Rape Is Freed after Sister-in-Law Finds Deleted Facebook Messages that Prove His Innocence,” USA Today (January 3, 2018), www.usatoday.com/story/news/world/2018/01/02/man-convicted-rape-freed-after-sister-law-finds-deleted-facebook-messages-prove-his-innocence/995197001/.

28 Williams v. Davis, No. 3:15-CV-331-M (BH), 2017 WL 1155855, at *7 (ND Tex. Feb. 13, 2017), report and recommendation adopted, No. 3:15-CV-331-M, 2017 WL 1155845 (ND Tex. Mar. 27, 2017). See also “Nolensville”, note 26 above; In the Interest of R.A.P., a Minor Appeal of R.A.P., No. 930 WDA 2019, 2020 WL 1910515, at *10 (Pa. Super. Ct. Apr. 20, 2020).

29 “Spacey’s Defense Claims Deleted Text Messages Will ‘Exonerate’ Him,” NBC Boston (June 2, 2019), www.nbcboston.com/news/local/spaceys-defense-claims-deleted-text-messages-will-exonerate-him/108067/.

30 People v. Jovanovic, 176 Misc.2d 729, 730 (NY Sup. Ct. 1997), rev’d 263 A.D.2d 182 (NY App. Div. 1999).

31 Ibid., 263 A.D.2d 182, 200, 700 N.Y.S.2d 156, 170 (1999). See also “Nolensville”, note 26 above.

32 Daniel B. Garrie, Esq., The Honorable Maureen Duffy-Lewis, & Daniel K. Gelb, Esq., “‘Criminal Cases Gone Paperless’: Hanging with the Wrong Crowd” (2010) 47:2 San Diego Law Review 521 at 523.

33 Ibid. “ESI” refers to electronically stored information.

34 Jenia I. Turner, “Managing Digital Discovery in Criminal Cases” (2019) 109:2 Journal of Criminal Law and Criminology 237 at 262; “Digital Innocence”, note 7 above, at 1055.

35 Andrew Cohen, “How Social Media Giants Side with Prosecutors in Criminal Cases,” The Marshall Project (January 15, 2018), www.themarshallproject.org/2018/01/15/how-social-media-giants-side-with-prosecutors-in-criminal-cases; “Digital Innocence”, note 7 above, at 1056.

36 Kashmir Hill, “Imagine Being on Trial. With Exonerating Evidence Trapped on Your Phone,” The New York Times (November 22, 2019), www.nytimes.com/2019/11/22/business/law-enforcement-public-defender-technology-gap.html [“Being on Trial”]. See also Jeffrey D. Stein, “Why Evidence Exonerating the Wrongly Accused Can Stay Locked Up on Instagram,” The Washington Post (September 10, 2019), www.washingtonpost.com/opinions/2019/09/10/why-evidence-exonerating-wrongly-accused-can-stay-locked-up-instagram/.

37 See generally Maura Dolan, “After that $5 Billion Fine, Facebook Gets Dinged Again: $1000 by Judge Overseeing Murder Trial,” Los Angeles Times (July 26, 2019), www.latimes.com/california/story/2019-07-26/facebook-twitter-fined-private-postings-gang-trial [“$5 Billion Fine”].

38 Facebook v. Superior Court, note 12 above.

39 In re. Facebook (Hunter), 417 P.3d 725 (Cal. 2018). An ex parte proceeding or ruling is made without notice to or response from the opposing side.

40 See e.g. Walters v. State, 206 So. 3d 524 (Miss. 2016) (admitting Google Earth images); “Ellington Husband Accused of Killing Wife Searched ‘Poison’ Online: Court Documents,” NBC Connecticut (January 3, 2020), www.nbcconnecticut.com/news/local/ellington-husband-accused-of-killing-wife-searched-poison-online-court-documents/2205136/ [“Ellington Husband”].

41 See e.g. Sabine Gless, Xuan Di, & Emily Silverman, “Ca(r)veat Emptor: Crowdsourcing Data to Challenge the Testimony of In-Car Technology” (2022) 62:3 Jurimetrics 285.

42 “Being on Trial”, note 36 above. Cf. State v. Bray, 383 P.3d 883 (Ct. Ap. Oreg. 2016).

44 “With My Fridge as My Witness?!” Privacy International (June 28, 2019), https://privacyinternational.org/long-read/3026/my-fridge-my-witness [“Fridge as My Witness”]; “Ellington Husband”, note 40 above; United States v. Smith, 2017 WL 11461003 (D. NH 2017); Lauren Pack, “Defense Wants Middletown Man’s Pacemaker Evidence Tossed in Arson Case,” Journal News (June 6, 2017), www.springfieldnewssun.com/news/crime--law/defense-wants-middletown-man-pacemaker-evidence-tossed-arson-case/jZeYV7KjWdncLIZqNbYW2I/; Stephen Jordan, “Apple Health App Data Being Used as Evidence in Murder Trial in Germany,” Digital Trends (January 14, 2018), www.digitaltrends.com/mobile/apple-health-app-murder-germany/.

45 Tyler Sonnemaker, “Texas Power Companies Automatically Raised the Temperature of Customers’ Smart Thermostats in the Middle of a Heat Wave,” Business Insider (June 21, 2021), www.businessinsider.com/texas-energy-companies-remotely-raised-smart-thermostats-temperatures-2021-6.

46 “Fridge as My Witness”, note 44 above.

47 Sara Jerome, “Smart Water Meter Data Considered Evidence in Murder Case,” Water Online (January 3, 2017), www.wateronline.com/doc/smart-water-meter-considered-evidence-murder-case-0001; Dillon Thomas, “Bentonville PD Says Man Strangled, Drowned Former Georgia Officer,” 5 News (February 23, 2016), www.5newsonline.com/article/news/local/outreach/back-to-school/bentonville-pd-says-man-strangled-drowned-former-georgia-officer/527-0e573fa0-4ff9-457d-8ed1-b4c27762e189; Kathryn Gilker, “Bentonville Police Use Smart Water Meters as Evidence in Murder Investigation,” 5 News (December 28, 2016), www.5newsonline.com/article/news/local/outreach/back-to-school/bentonville-police-use-smart-water-meters-as-evidence-in-murder-investigation/527-e74e0aa5-0e2a-4850-a524-d45d2f3fd048.

48 Colin Dwyer, “Arkansas Prosecutors Drop Murder Case that Hinged on Evidence from Amazon Echo,” NPR: The Two-Way (November 29, 2017), www.npr.org/sections/thetwo-way/2017/11/29/567305812/arkansas-prosecutors-drop-murder-case-that-hinged-on-evidence-from-amazon-echo.

49 Minyvonne Burke, “Amazon’s Alexa May Have Witnessed Alleged Florida Murder, Authorities Say,” NBC News (November 2, 2019), www.nbcnews.com/news/us-news/amazon-s-alexa-may-have-witnessed-alleged-florida-murder-authorities-n1075621.

50 Criminal Complaint: Affidavit of Probable Cause Continuation, and Order from Pennsylvania v. Risley, http://online.wsj.com/public/resources/documents/2016_0421_PAvRisley.pdf (Fitbit data contesting victim’s account).

51 Philip Kuhn, “Die Version vom Handeln im Affekt ist mit dem heutigen Tag obsolet” (As of Today, the Claim of Acting in the Heat of Passion Is Obsolete), Welt (August 1, 2018), www.welt.de/vermischtes/article172287105/Mordprozess-Hussein-K-Die-Version-vom-Handeln-im-Affekt-ist-mit-dem-heutigen-Tag-obsolet.html.

52 “Fridge as My Witness”, note 44 above.

53 See Leaders of a Beautiful Struggle and others v. Baltimore Police Department, 2 F.4th 330 (4th Cir. 2021) (en banc), https://law.justia.com/cases/federal/appellate-courts/ca4/20-1495/20-1495-2021-06-24.html; Cade Metz, “Police Drones Are Starting to Think for Themselves,” The New York Times (December 5, 2020), www.nytimes.com/2020/12/05/technology/police-drones.html?action=click&module=News&pgtype=Homepage. But see Timothy M. Ravich, “Courts in the Drone Age” (2015) 42:2 Northern Kentucky Law Review 161 at 164, n. 5.

54 See e.g. “Darnella Frazier,” The Pulitzer Prizes: The 2021 Pulitzer Prize Winner in Special Citations and Awards, www.pulitzer.org/winners/darnella-frazier.

55 Christopher Campbell, “New Netflix True-Crime Doc Shows How ‘Curb Your Enthusiasm’ Saved a Man from Death Row,” Thrillist (September 29, 2017), www.thrillist.com/entertainment/nation/netflix-documentary-long-shot-curb-your-enthusiasm-death-row.

56 Kirsten Fleming, “How ‘Curb Your Enthusiasm’ Saved This Man from Prison,” New York Post (September 23, 2017), https://nypost.com/2017/09/23/how-curb-your-enthusiasm-saved-this-man-from-prison/.

57 Brady v. Maryland, 373 U.S. 83 (1963) (requiring the prosecutor to disclose exculpatory information to the defense). But see Angela J. Davis, Arbitrary Justice: The Power of the American Prosecutor (New York, NY: Oxford University Press, 2007) at 130–131.

58 People v. Swygert, 57 Misc. 3d 913 (NY Crim. Ct. 2017) [People v. Swygert] at 921–922. See also J. W. August, “Attorney: Security Video Exonerates Dina Shacknai in Death of Rebecca Zahau,” NBC San Diego (April 20, 2017), www.nbcsandiego.com/news/local/security-video-dina-shacknai-in-death-of-rebecca-zahau/12795/; People v. Butler, 61 Misc.3d 1009 (NY Sup. Ct. 2018).

59 People v. Swygert, note 58 above, at 922. See also Beth Schwartzapfel, “Defendants Kept in the Dark about Evidence, Until It’s Too Late,” The New York Times (August 7, 2017), www.nytimes.com/2017/08/07/nyregion/defendants-kept-in-the-dark-about-evidence-until-its-too-late.html.

60 People v. Swygert, note 58 above, at 923–924.

61 Innocence Project, www.innocenceproject.org/.

62 See generally Brandon L. Garrett, “Towards an International Right to Claim Innocence” (2017) 105:4 California Law Review 1173; Brandon L. Garrett, “Claiming Innocence” (2008) 92:6 Minnesota Law Review 1629. See generally Erin E. Murphy, Inside the Cell: The Dark Side of Forensic DNA (New York, NY: Nation Books, 2015).

63 See e.g. “Life, Liberty” and “Trial by Machine”, both note 7 above.

64 See e.g. United States v. Gissantaner, 417 F. Supp.3d 857 (WD Mich. 2019) (holding the government’s probabilistic DNA results evidence inadmissible because they lacked reliability).

65 See e.g. Katherine Kwong, “The Algorithm Says You Did It: The Use of Black Box Algorithms to Analyze Complex DNA Evidence” (2017) 31:1 Harvard Journal of Law & Technology 275 at 287–288 (citing defense uses of TrueAllele).

66 See e.g. Ben Fox Rubin, “Facial Recognition Overkill: How Deputies Cracked a $12 Shoplifting Case,” CNET (March 19, 2019), www.cnet.com/news/facial-recognition-overkill-how-deputies-solved-a-12-shoplifting-case/.

67 Patrick W. Nutter, “Machine Learning Evidence: Admissibility and Weight” (2019) 21:3 University of Pennsylvania Journal of Constitutional Law 919 at 921.

68 Jack Gillum, “Prosecutors Dropping Child Porn Charges after Software Tools Are Questioned,” ProPublica (April 3, 2019), www.propublica.org/article/prosecutors-dropping-child-porn-charges-after-software-tools-are-questioned.

69 See e.g. Instagram, “Terms of Use,” https://help.instagram.com/.

70 “Dark Side: Secret Origins of Evidence in US Criminal Cases,” Human Rights Watch (January 9, 2018), www.hrw.org/report/2018/01/09/dark-side/secret-origins-evidence-us-criminal-cases#.

71 Cf. O’Grady v. Superior Court, 44 Cal.Rptr.3d 72 (2006).

72 Pollyanna Sanderson, Katelyn Ringrose, & Stacey Gray, “It’s Raining Privacy Bills: An Overview of the Washington State Privacy Act and Other Introduced Bills,” Future of Privacy Forum (January 13, 2020), https://fpf.org/2020/01/13/its-raining-privacy-bills-an-overview-of-the-washington-state-privacy-act-and-other-introduced-bills/ [“Privacy Bills”].

73 Cf. “Digital Innocence”, note 7 above, at 1045–1048.

74 See e.g. Stephen A. Saltzburg, “The Duty to Investigate and the Availability of Expert Witnesses” (2018) 86:4 Fordham Law Review 1709 at 1720 (“[R]eluctance to appoint defense experts is rooted in cost to the government and inertia; i.e., a history of not routinely providing defense experts at the request of defense counsel”).

75 “$5 Billion Fine”, note 37 above.

76 See e.g. Paul M. Schwartz, “Global Data Privacy: The EU Way” (2019) 94:4 New York University Law Review 771; Electronic Privacy Information Center, “Face Surveillance and Biometrics,” https://epic.org/issues/surveillance-oversight/face-surveillance/; “Privacy Bills”, note 72 above.

77 See generally “Politics of Privacy”, note 13 above.

78 See generally “Privacy Asymmetries”, note 7 above.

79 See e.g. Federal Institute of Metrology METAS, “Legal Metrology – Regulating Measurement and Ensuring Its Binding Implementation,” Swiss Confederation, www.metas.ch/metas/en/home/gesmw/gesetzliches-messwesen---messen-regeln---.html.

10 Data as Evidence in Criminal Courts: Comparing Legal Frameworks and Actual Practices

* A preliminary version of this chapter was published as Bart Custers & Lonneke Stevens, “The Use of Data as Evidence in Dutch Criminal Courts” (2021) 29:1 European Journal of Crime, Criminal Law and Criminal Justice 25.

1 Bruce Schneier, “The Battle for Power on the Internet,” The Atlantic (October 24, 2013), www.theatlantic.com/technology/archive/2013/10/the-battle-for-power-on-the-internet/280824/.

2 Data within a criminal procedural context means information that needs to be found and/or understood by means of certain techniques and expertise; thus, a witness statement is not data, but a DNA profile is.

3 General Data Protection Regulation, EU 2016, Regulation (EU) 2016/679 (with effect from May 25, 2018) [GDPR].

4 The Data Protection Law Enforcement Directive, EU 2016, Directive (EU) 2016/680 (with effect from May 5, 2016) [LED].

5 International Organization for Standardization, www.iso.org/home.html.

6 CEN stands for European Committee for Standardization (Comité Européen de Normalisation) and CENELEC stands for European Committee for Electrotechnical Standardization (Comité Européen de Normalisation Électrotechnique), www.cencenelec.eu/.

7 European Telecommunications Standards Institute, www.etsi.org/.

8 Bart Custers, Alan M. Sears, Francien Dechesne et al., EU Personal Data Protection in Policy and Practice (Heidelberg, Germany: Asser/Springer, 2019) [EU Personal Data].

9 Christopher Kuner, “The European Commission’s Proposed Data Protection Regulation: A Copernican Revolution in European Data Protection Law” (2012) Bloomberg BNA Privacy and Security Law Report 1.

10 The introduction of EU Directive 2016/680 required few changes in the Dutch legal framework for processing personal data in criminal law. In comparison, in Italy, there were no specific laws or regulations for the protection of personal data in criminal law, apart from the general legislation for criminal investigation and the GDPR. As such, Italy needed to draft entirely new legislation. In other countries, such as Germany, Sweden, and Romania, this topic was dealt with in their Police Acts, which often needed further elaboration to comply with this EU Directive. EU Personal Data, note 8 above.

11 Susan Brenner & Bert-Jaap Koops, “Approaches to Cybercrime Jurisdiction” (2004) 4:1 Journal of High Technology Law 1; Bert-Jaap Koops, “Cybercrime Legislation in the Netherlands” (2005) 2005:4 Cybercrime and Security 1.

12 Council of Europe, Convention on Cybercrime, ETS No. 185 (Budapest: Council of Europe, 2001) Arts. 19–21.

13 European Union, The Council of the European Union, Council Decision 2008/615/JHA on the Stepping Up of Cross-border Cooperation, Particularly in Combating Terrorism and Cross-border Crime, OJ 2008 L 210 (EU: Official Journal of the European Union, 2008).

14 European Union, The Council of the European Union, Council Decision 2007/533/JHA on the Establishment, Operation and Use of the Second Generation Schengen Information System (SIS II), OJ 2007 L 205 (EU: Official Journal of the European Union, 2007).

15 European Union, The European Parliament, & The Council of the European Union, Regulation (EC) No. 767/2008 of The European Parliament and of The Council Concerning the Visa Information System (VIS) and the Exchange of Data Between Member States on Short-Stay Visas (VIS Regulation), OJ 2008 L 218 (EU: Official Journal of the European Union, 2008).

16 European Union, The Council of the European Union, Council Decision 2009/917/JHA on the Use of Information Technology for Customs Purposes, OJ 2009 L 323 (EU: Official Journal of the European Union, 2009).

17 European Union, The Council of the European Union, Council Regulation (EC) 2725/2000 Concerning the Establishment of ‘Eurodac’ for the Comparison of Fingerprints for the Effective Application of the Dublin Convention, OJ 2000 L 316 (EU: Official Journal of the European Union, 2000); European Union, The Council of the European Union, Council Regulation (EC) 407/2002 Laying Down Certain Rules to Implement Regulation 2725/2000 Concerning the Establishment of ‘Eurodac’ for the Comparison of Fingerprints for the Effective Application of the Dublin Convention, OJ 2002 L 62 (EU: Official Journal of the European Union, 2002).

18 Together with France and Italy, the Netherlands had a debate focused on privacy versus security. This culminated in a referendum on the proposed Intelligence Agencies Act, which extended the powers of intelligence agencies; voters rejected the act. Since the referendum was not binding, the Dutch government adopted the act anyway and subsequently abolished this type of referendum; see Charlotte Wagenaar, “Beyond For or Against? Multi-Option Alternatives to a Corrective Referendum” (2019) 62:1 Electoral Studies Article 102091. This case shows a clear tension between the general public’s commitment to privacy and the government’s prioritization of national security, and perhaps also of criminal law enforcement.

19 Wetboek van Strafvordering (Dutch Code of Criminal Procedure), Netherlands (1926) [Dutch CCP].

20 See Lonneke Stevens, Het nemo-teneturbeginsel in strafzaken: van zwijgrecht naar containerbegrip (The Nemo Tenetur Principle in Criminal Cases: From the Right to Remain Silent to an All-Purpose Concept, PhD thesis, Tilburg University) (Nijmegen, Netherlands: Wolf Legal Publishers, 2005) at ch. 3.

21 Cf. Jo-Anne Wemmers, Rien van der Leeden, & Herman Steensma, “What Is Procedural Justice: Criteria Used by Dutch Victims to Assess the Fairness of Criminal Justice Procedures” (1995) 8:4 Social Justice Research 329.

22 For more details, see Jeroen Chorus, Ewoud Hondius, & Wim Voermans (eds.), Introduction to Dutch Law, 5th ed. (Alphen aan den Rijn, Netherlands: Kluwer Law International, 2016).

23 Geert Corstens, Matthias Borgers, & Tijs Kooijmans, Het Nederlands strafprocesrecht (Dutch Criminal Procedure Law) (Deventer, Netherlands: Kluwer, 2018) [Het Nederlands] at 10.

24 See Documenten Modernisering Wetboek van Strafvordering, www.rijksoverheid.nl/documenten/publicaties/2017/11/13/documenten-modernisering-wetboek-van-strafvordering; Platform Modernisering Strafvordering, www.moderniseringstrafvordering.nl/.

25 See Bert-Jaap Koops & Jan-Jaap Oerlemans, “Formeel strafrecht en ICT” (Criminal Procedural Law and ICT) in Bert-Jaap Koops & Jan-Jaap Oerlemans (eds.), Strafrecht en ICT (The Hague, Netherlands: Sdu Uitgevers, 2018) 117 at 125–127.

26 Introduced with the Computer Crime Law III, Netherlands (in force since March 2019). See also Ronald Pool & Bart Custers, “The Police Hack Back: Legitimacy, Necessity and Privacy Implications of the Next Step in Fighting Cybercrime” (2017) 25:2 European Journal of Crime, Criminal Law and Criminal Justice 123.

27 Dutch Supreme Court (via www.rechtspraak.nl), HR 4 April 2017, NJ 2017, 229, ECLI:NL:HR:2017:584; see also the case note of Lonneke Stevens, “Onderzoek in een smartphone: Zoeken naar een redelijke verhouding tussen privacybescherming en werkbare opsporing” (Smartphone Searches: Balancing Privacy Protection and Criminal Investigation Practices) (2017) Ars Aequi 730 at 730–735. For an explanation in English, see Bryce Clayton Newell & Bert-Jaap Koops, “From Horseback to the Moon and Back: Comparative Limits on Police Searches of Smartphones upon Arrest” (2020) 72:1 Hastings Law Journal 229 [“From Horseback”].

28 “Law” meaning formal acts of Parliament.

29 Het Nederlands, note 23 above, at 29–30.

30 Police Act 1993, Netherlands (with effect from December 9, 1993), Art. 3.

31 This approach is also taken in some proposals in the United States, such as the American Bar Association (ABA) Model Standards for Criminal Justice: Law Enforcement Access to Third Party Records (2013), www.americanbar.org/groups/criminal_justice/standards/law_enforcement_access/. US courts have so far largely rejected this approach.

32 Stealth text messaging refers to sending a text message that a cell phone receives without notifying its user, in order to generate traffic data containing the phone’s location, which can then be ordered from the telecoms provider.

33 Dutch Supreme Court, HR 1 July 2014, NJ 2015, 114, ECLI:NL:HR:2014:1563.

34 See Dutch Supreme Court, HR 4 April 2017, NJ 2017, 229, ECLI:NL:HR:2017:584.

35 It was initially the Commission “Modernisation of criminal investigation in the digital era” (Koops-Commission) that suggested the use of systematicness as a structuring concept; see the advice in: Netherlands, Commissie modernisering opsporingsonderzoek in het digitale tijdperk, Regulering van opsporingsbevoegdheden in een digitale omgeving (Regulating Criminal Investigation Powers in Digital Environments), s. l. (Netherlands: Commissie modernisering opsporingsonderzoek in het digitale tijdperk, 2018) [“Koops-Commission”].

36 See the proposal for the Nieuw Wetboek van Strafvordering (Proposed Code of Criminal Procedure), Netherlands (as amended July 2020) [Proposed CCP], Arts. 2.7.39 and 2.7.41.

37 Footnote Ibid., Art. 2.8.8.

38 “From Horseback”, note 27 above, at 264–268.

39 GDPR, note 3 above, on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L 119.

40 LED, note 4 above, on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection, or prosecution of criminal offenses or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA [2016] OJ L 119/89.

41 In the Netherlands, the GDPR was implemented via the Uitvoeringswet AVG (GDPR Execution Act), Netherlands (with effect from May 25, 2018).

42 Edina Harbinja, “Does the EU Data Protection Regime Protect Post-Mortem Privacy and What Could Be the Potential Alternatives?” (2013) 10:1 SCRIPTed 19.

43 Although the GDPR is less relevant than the LED in a criminal law context, we use the GDPR as a starting point in this section, because we expect European readers of this chapter to be more familiar with the GDPR.

44 This section of the chapter is partially based on Mark Leiser & Bart Custers, “The Law Enforcement Directive: Conceptual Issues of EU Directive 2016/680” (2019) 5:3 European Data Protection Law Review 367 [“Conceptual Issues”].

European Union, European Commission, Proposal for a Directive of the European Parliament and of the Council on the Protection of Individuals with Regard to the Processing of Personal Data by Competent Authorities for the Purposes of Prevention, Investigation, Detection or Prosecution of Criminal Offences or the Execution of Criminal Penalties, and the Free Movement of Such Data, COM (2012) 10 final (EU: European Commission, 2012).

45 Catherine Jasserand, “Subsequent Use of GDPR Data for a Law Enforcement Purpose: The Forgotten Principle of Purpose Limitation?” (2018) 4:2 European Data Protection Law Review 152.

46 Catherine Jasserand, “Law Enforcement Access to Personal Data Originally Collected by Private Parties: Missing Data Subjects’ Safeguards in Directive 2016/680?” (2018) 34:1 Computer Law & Security Review 154.

47 GDPR, note 3 above, Art. 5(1)(e) states that personal data should be kept no longer than necessary, but does not mention a number of days, months, or years. Note that Arts. 13 and 14 of the GDPR require data controllers to inform the data subject on storage times if they inquire about this.

48 LED, note 4 above, Art. 5; see also Teresa Quintel, “European Union – Article 29 Data Protection Working Party Opinion on the Law Enforcement Directive” (2018) 4:1 European Data Protection Law Review 104.

49 European Union, Article 29 Data Protection Working Party, Opinion on Some Key Issues of the Law Enforcement Directive (EU 2016/680) – wp258, WP 2017/258 (EU: Article 29 Data Protection Working Party, 2017).

50 Bundesgrenzschutzgesetz 1994 (Federal Border Protection Act 1994), Germany (with effect from/as amended 1994), § 35.

51 Wet Politiegegevens (Police Data Act), Netherlands (with effect from/as amended 1 October 2022) [Police Data Act], Art. 8.

52 Data Protection Act 2018, UK, c. 12 (with effect from May 25, 2018).

53 In comparison, this feature is largely missing in US regulatory frameworks.

54 Police Data Act, note 51 above.

55 Wet justitiële en strafvorderlijke gegevens (Justice and Prosecution Data Act), Netherlands (with effect from/as amended July 1, 2022).

56 Privacy by design and privacy by default are based on the idea that technology can usually be designed in different ways within the given requirements while providing the same functionality. However, some designs are more privacy-friendly and others less so. Privacy by design aims to incorporate privacy as a value into the design itself. Privacy by default aims to set the defaults of a technology in a privacy-friendly mode, e.g., opt-in instead of opt-out. Privacy impact assessments are risk assessments of new technologies, business models, policies, or other plans in which personal data are processed; these assessments focus on the privacy risks to data subjects.

57 European Commission, “January Infringements Package: Key Decisions” (January 24, 2019), https://ec.europa.eu/commission/presscorner/detail/en/MEMO_19_462.

58 European Commission, “Data Protection: Commission Decides to Refer Greece and Spain to the Court for Not Transposing EU Law” (July 25, 2019), https://ec.europa.eu/commission/presscorner/detail/EN/IP_19_4261.

59 European Commission, “Infringement Proceedings: Commission Takes Legal Action against Germany in 17 Cases” (July 25, 2019), https://ec.europa.eu/commission/presscorner/detail/en/inf_23_142.

60 European Commission, “May Infringements Package: Key Decisions” (May 14, 2020) at “Data Protection: Commission Urges GERMANY and SLOVENIA to Complete the Transposition of the Data Protection Law Enforcement Directive,” https://ec.europa.eu/commission/presscorner/detail/en/inf_20_859.

61 C-658/19, Court of Justice of the European Union, February 25, 2021, ECLI:EU:C:2021:138.

62 European Union, European Commission, First Report on Application and Functioning of the Data Protection Law Enforcement Directive (EU) 2016/680 (“LED”), COM/2022/364 final (Brussels: European Commission, 2022), https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52022DC0364&from=EN.

63 There is no constitutional provision with the same content.

64 An example of such an exception is what the lawyer puts forward during the hearing.

65 The data itself is often stored in police databases or at the Netherlands Forensic Institute (NFI).

66 An important exception is that evidence that the suspect has committed the offense charged can – not must – be assumed by the judge on the basis of an official report by an investigating officer. See Dutch CCP, note 19 above, § 344(2).

67 For an overview and interpretation of the case law, see the case note by M. J. Borgers on Dutch Supreme Court, July 7, 2015, NJ 2015, 488, ECLI:NL:HR:2015:1817.

68 HR 27 January 1998, NJ 1984, 404. Assessing the reliability of the ways in which data is secured may depend on the methods and technologies used; see e.g. Eric Van Buskirk & Vincent Liu, “Digital Evidence: Challenging the Presumption of Reliability” (2006) 1:1 Journal of Digital Forensic Practice 19.

69 If the prosecution is declared inadmissible, the court will not allow the case to proceed because the eligibility criteria of criminal procedural law have not been met.

70 See e.g. HR 19 February 2013, NJ 2013, 308; see also Het Nederlands, note 23 above, at 884–886.

71 Desiree de Jonge, “Verdediging in tijden van digitale bewijsvoering” (Legal Defense in the Age of Digital Evidence) in Patrick Petrus Jacobus van der Meij (ed.), Aan de slag. Liber amicorum Gerard Hamer (The Hague, Netherlands: Sdu Uitgevers, 2018) 125 [“Verdediging”].

72 See “Dutch Police ‘Read’ Blackberry Emails,” BBC News (January 12, 2016), www.bbc.com/news/technology-35291933; Robert Wright, “Hundreds Arrested across Europe as French Police Crack Encrypted Network,” The Financial Times (June 8, 2021).

73 “Judicial System Overwhelmed after Gaining Access to Encrypted Chats,” NL Times (June 14, 2021), https://nltimes.nl/2021/06/14/judicial-system-overwhelmed-gaining-access-encrypted-chats.

74 In relation to investigation in the cloud, see also Jan-Willem van den Hurk & Sander de Vries, “Cybercrime. Waar worden gegevens in de ‘cloud’ opgeslagen en welke juridische consequentie heeft het antwoord op die vraag? Een speurtocht langs het traditionele juridisch kader en actuele wetgeving en jurisprudentie leidt tot een opmerkelijke conclusie” (Cybercrime: Where Is Data in the “Cloud” Stored and What Legal Consequences Follow from the Answer to That Question? A Search through the Traditional Legal Framework and Current Legislation and Case Law Leads to a Remarkable Conclusion) (2019) Strafblad 34.

75 See e.g. para. 6 of the verdict of the Court of Amsterdam, April 19, 2018, ECLI:NL:RBAMS:2018:2504.

76 Court of Appeal Amsterdam, December 14, 2018, ECLI:NL:GHAMS:2018:4620.

77 A keylogger is a device or software that registers, typically in a covert manner, all keystrokes on a keyboard.

78 See also “Verdediging”, note 71 above.

79 See e.g. the rulings of the Court of Amsterdam, April 19, 2018, ECLI:NL:RBAMS:2018:2504, and April 1, 2021, ECLI:NL:RBAMS:2021:1507.

80 See Koops-Commission, note 35 above, at 27; see also Maša Galič, “De rechten van de verdediging in de context van omvangrijke datasets en geavanceerde zoekmachines in strafzaken: een suggestie voor uitbreiding” (Rights of the Defendant in the Context of Large Datasets and Advanced Search Engines in Criminal Cases) (2021) 2:2 Boom Strafblad 41 [“De rechten”].

81 For more details, see e.g. Jan-Jaap Oerlemans, Investigating Cybercrime, PhD thesis, Leiden University (Leiden, Netherlands: Meijers Research Institute and Graduate School of the Leiden Law School of Leiden University, 2017).

82 The increasing use of AI in a criminal law context can raise such issues; see Bart Custers, “Artificiële intelligentie in het Strafrecht: een overzicht van actuele ontwikkelingen” (Artificial Intelligence in Criminal Law: An Overview of Current Developments) (2021) 4 Computerrecht 330; for a more general discussion, see Daniel Solove, The Digital Person; Technology and Privacy in the Information Age (New York, NY: New York University Press, 2004). Regarding the interpretation of equality of arms in relation to large datasets, see “De rechten”, note 80 above. See also Sigurður Einarsson and others v. Iceland, App. No. 39757/15, ECtHR (June 4, 2019); and see Sabine Gless, “AI in the Courtroom: A Comparative Analysis of Machine Evidence in Criminal Trials” (2020) 51:2 Georgetown Journal of International Law 195; Sabine Gless, Xuan Di, & Emily Silverman, “Ca(r)veat Emptor: Crowdsourcing Data to Challenge the Testimony of In-Car Technology” (2022) 62:3 Jurimetrics 285.

83 Toon Calders & Bart Custers, “What Is Data Mining and How Does It Work?” in Bart Custers, Toon Calders, Bart Schermer et al. (eds.), Discrimination and Privacy in the Information Society, no. 3 (Heidelberg, Germany: Springer, 2013) 27. For more on the responsibility of programmers, see Chapter 2 in this volume.

84 “Practitioner’s Guide to COMPAS Core” (Northpointe, 2015), https://assets.documentcloud.org/documents/2840784/Practitioner-s-Guide-to-COMPAS-Core.pdf.

85 OXRISK, “OXREC: Oxford Risk of Recidivism Tool,” https://oxrisk.com/oxrec-nl-2-backup/.

86 Gijs Van Dijck, “Algoritmische risicotaxatie van recidive: Over de Oxford Risk of Recidivism tool (OXREC), ongelijke behandeling en discriminatie in strafzaken” (Algorithmic Risk Assessment of Recidivism) (2020) 95:25 Nederlands Juristenblad 1784.

87 Julia Angwin, Jeff Larson, Surya Mattu et al., “Machine Bias,” ProPublica (May 23, 2016), www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

88 Marjolein Maas, Ellen Legters, & Seena Fazel, “Professional en risicotaxatie-instrument hand in hand: hoe de reclassering risico’s inschat” (Professional and Risk Assessment Work Together: How Probation Organisations Assess Risks) (2020) 1814 Nederlands Juristenblad 2055.

89 Cf. Faisal Kamiran, Toon Calders, & Mykola Pechenizkiy, “Techniques for Discrimination-Free Predictive Models” in Bart Custers, Toon Calders, Bart Schermer et al. (eds.), Discrimination and Privacy in the Information Society, no. 3 (Heidelberg, Germany: Springer, 2013) 223.

90 Bart Custers, “Effects of Unreliable Group Profiling by Means of Data Mining” in Gunter Grieser, Yuzuru Tanaka, & Akihiro Yamamoto (eds.), Lecture Notes in Artificial Intelligence, vol. 2843 (Berlin, Germany; New York, NY: Springer-Verlag, 2003) 290; for more on malfunctioning technology, which is also related to reliability, see Chapter 13 in this volume.

91 Both fallacies are errors in statistical reasoning involving a test for an occurrence, such as a match in fingerprints or DNA; the prosecutor’s fallacy exaggerates the probability of a criminal defendant’s guilt, whereas the defense attorney’s fallacy typically underestimates it. See William Thompson & Edward Schumann, “Interpretation of Statistical Evidence in Criminal Trials: The Prosecutor’s Fallacy and the Defense Attorney’s Fallacy” (1987) 11 Law and Human Behavior 167.
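To make the contrast in note 91 concrete, the following is a minimal worked illustration in Bayesian terms; the match frequency of 1 in 10,000 and the population of 1,000,000 are hypothetical figures chosen for the example, not numbers taken from Thompson & Schumann:

\[
P(\text{innocent} \mid \text{match}) \;=\; \frac{P(\text{match} \mid \text{innocent})\, P(\text{innocent})}{P(\text{match})} \;\neq\; P(\text{match} \mid \text{innocent})
\]

If the profile occurs in 1 of every 10,000 people, then \(P(\text{match} \mid \text{innocent}) = 0.0001\); but in a pool of 1,000,000 possible sources, roughly 100 innocent people match in addition to the true source, so, absent other evidence, \(P(\text{innocent} \mid \text{match}) \approx 100/101 \approx 0.99\). Equating the two conditional probabilities overstates guilt (the prosecutor’s fallacy); treating the defendant as merely one of roughly 101 matching individuals while ignoring the non-statistical evidence that narrowed the suspect pool understates it (the defense attorney’s fallacy).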

92 The same applies to statements and testimonies by robots; see Chapters 6 and 8 in this volume.

93 Geralda Odinot, Amina Memon, David La Rooy et al., “Are Two Interviews Better than One? Eyewitness Memory across Repeated Cognitive Interviews” (2013) 8:10 PLoS ONE e76305.

94 European Union, European Commission, Proposal for a Regulation of the European Parliament and of The Council on European Production and Preservation Orders for Electronic Evidence in Criminal Matters, COM/2018/225 final – 2018/0108 (COD) (Strasbourg: European Commission, 2018), https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2018%3A225%3AFIN [Production and Preservation].

95 European Council, “European Council Conclusions, 18 October 2018” (October 18, 2018), www.consilium.europa.eu/en/press/press-releases/2018/10/18/20181018-european-council-conslusions/.

96 European Union, European Commission, Proposal for a Directive of the European Parliament and of the Council Laying Down Harmonised Rules on the Appointment of Legal Representatives for the Purpose of Gathering Evidence in Criminal Proceedings, COM/2018/226 final – 2018/0107 (COD) (Strasbourg: European Commission, 2018).

97 Production and Preservation, note 94 above.

98 Obviously, this is a simplification. A more detailed analysis would need to include more steps, such as access to data, access to evaluations of data, destruction of data, etc.; cf. David Gray, The Fourth Amendment in an Age of Surveillance (Cambridge, UK: Cambridge University Press, 2017).

99 “Conceptual Issues”, note 44 above.

100 Bart Custers, Francien Dechesne, Alan M. Sears et al., “A Comparison of Data Protection Legislation and Policies across the EU” (2017) 34:2 Computer Law & Security Review 234.

101 Wet Inlichtingen- en Veiligheidsdiensten (Intelligence Agencies Act), 2017, Netherlands (as amended 1 January 2022).

102 E.g. risk assessments of individuals can only be made in comparison with data of others; typically, a suspect has a high risk in comparison with other suspects or the general population.

103 In the United States, a joint working group of the Department of Justice and the Administrative Office of the US Courts drafted guidelines for electronically stored information discovery production in federal criminal cases and how to inform defendants at an early stage about this; see US Department of Justice and Administrative Office of the US Courts Joint Working Group on Electronic Technology in the Criminal Justice System, “Recommendations for Electronically Stored Information (ESI) Discovery Production in Federal Criminal Cases” (Washington, DC: Department of Justice, 2012), www.uscourts.gov/sites/default/files/finalesiprotocolbookmarked.pdf. Because technology changes rapidly, there are no specific requirements for the manner or timing of the disclosure of the information. Instead, organizations in the criminal law system are required to develop best practices.

104 Kiel Brennan-Marquez, “Plausible Cause: Explanatory Standards in the Age of Powerful Machines” (2017) 70:4 Vanderbilt Law Review 1249.

11 Reconsidering Two US Constitutional Doctrines: Fourth Amendment Standing and the State Agency Requirement in a World of Robots

1 Frank Pasquale, The Black Box Society: The Secret Algorithms that Control Money and Information (Cambridge, MA: Harvard University Press, 2015).

2 389 U.S. 347 (1967) [Katz v. United States].

3 Under the public observation doctrine, police may make observations from any place where they lawfully have a right to be without triggering Fourth Amendment regulations. See David Gray, The Fourth Amendment in an Age of Surveillance (Cambridge, UK: Cambridge University Press, 2017) [Age of Surveillance] at 78–84.

4 Under the third party doctrine, government agents may acquire from third parties through lawful means information voluntarily shared with those parties without triggering Fourth Amendment protections. See Footnote ibid. at 84–89.

5 Age of Surveillance, note 3 above, at 190–299.

6 United States v. Jones, 565 U.S. 400 (2012) at 429–430 (Alito, J., concurring).

7 18 USC §§2510 et seq.

8 Alderman v. United States, 394 U.S. 165 (1969) at 174 (“Fourth Amendment rights are personal rights”).

9 Footnote Ibid. (citing Katz v. United States).

10 Katz v. United States, note 2 above, at 361 (Harlan, J., concurring).

11 Katz v. United States, note 2 above, at 350 (“[The Fourth] Amendment protects individual privacy against certain kinds of governmental intrusion.…”). This assumption underwrites both the third-party and public observation doctrines.

12 Rakas v. Illinois, 439 U.S. 128 (1978) at 133.

13 447 U.S. 727 (1980) [United States v. Payner (1980)].

14 United States v. Payner, 434 F. Supp. 113 (1977) at 119–121.

15 Ibid., at 130–131.

17 United States v. Payner (1980), note 13 above, at 733.

18 See Jennifer E. Laurin, “Trawling for Herring: Lessons in Doctrinal Borrowing and Convergence” (2011) 111:3 Columbia Law Review 670.

19 461 U.S. 95 (1983).

20 Footnote Ibid. at 105.

21 568 U.S. 398 (2013).

22 416 U.S. 21 (1974).

23 681 F.3d 1243 (1982) at 1248.

24 People v. Harris, 945 N.Y.S.2d 505 (NY Crim. Ct. 2012); Megan Guess, “Twitter Hands over Sealed Occupy Wall Street Protestor’s Tweets,” Ars Technica (September 14, 2012), https://arstechnica.com/tech-policy/2012/09/twitter-hands-over-occupy-wall-street-protesters-tweets/.

25 Microsoft Corp. v. United States Dep’t of Justice, No. C16-0538JLR (W. Dist. Wash., Feb. 8, 2017), slip opinion at 39–45. Microsoft ultimately settled with the Department of Justice; US, Office of the Deputy Attorney General, Policy Regarding Applications for Protective Orders Pursuant to 18 USC §2705(b) (Washington, DC: US Department of Justice, October 19, 2017), www.documentcloud.org/documents/4116081-Policy-Regarding-Applications-for-Protective.html.

26 Kim Lyons, “Amazon’s Ring Now Reportedly Partners with More than 2,000 US Police and Fire Departments,” The Verge (January 31, 2021), www.theverge.com/2021/1/31/22258856/amazon-ring-partners-police-fire-security-privacy-cameras.

27 For an in-depth discussion of government access to information shared with robots, see Chapter 8 in this volume.

28 See David Gray, “Bertillonage in an Age of Surveillance: Fourth Amendment Regulation of Facial Recognition Technologies” (2021) 24:1 SMU Science and Technology Law Review 3; for a considered discussion of evidentiary issues relating to robot-generated evidence, see Chapters 7, 9, and 10 in this volume.

29 Burdeau v. McDowell, 256 U.S. 465 (1921) at 475.

30 Carpenter v. United States, 138 S.Ct. 2206 (2018) [Carpenter v. United States] at 2261 (Alito, J., dissenting).

31 Footnote Ibid. at 2261.

32 See David Gray, “Dangerous Dicta” (2015) 72 Washington & Lee Law Review 1181 (explaining why dicta in District of Columbia v. Heller, 554 U.S. 570 (2008) at 580, n. 6, suggesting that Fourth Amendment rights are individual rather than collective finds no support in the text or history of the Fourth Amendment).

33 Age of Surveillance, note 3 above, at 149.

34 Massachusetts Constitution, US, Declaration of Rights (1780), Art. XIV.

35 Pennsylvania Constitution, US, Declaration of Rights (1776), Art. X.

36 US Constitution, Fifth Amendment.

37 US Constitution, Sixth Amendment.

38 Age of Surveillance, note 3 above, at 150–154.

39 US Constitution, First Amendment.

40 US Constitution, Art. I.

41 Wilkes v. Wood, 98 Eng. Rep. 489 (CP 1763) [Wilkes v. Wood] at 498 (“discretionary power … to search wherever their suspicions may chance to fall … certainly may affect the person and property of every man in this kingdom, and is totally subversive of the liberty of the subject”); Entick v. Carrington, 95 Eng. Rep. 807 (KB 1765) [Entick v. Carrington] at 817 (“[W]e can safely say there is no law in this country to justify the defendants in what they have done; if there was, it would destroy all the comforts of society …”). For an extended defense of this reading of the Fourth Amendment, see Age of Surveillance, note 3 above, at 134–172.

42 Elvin T. Lim, “The Federalist Provenance of the Principle of Privacy” (2015) 75:1 Maryland Law Review 415 at 419, 425–428.

43 Richard Myers, “Fourth Amendment Small Claims Court” (2013) 10 Ohio State Journal of Criminal Law 567 at 584.

44 United States v. Leon, 468 U.S. 897 (1984) [United States v. Leon] at 906.

45 See e.g. United States v. La Jeune Eugenie, 26 F. Cas. 832 (CCD Mass. 1822) at 843–844 (“In the ordinary administration of municipal law the right of using evidence does not depend, nor, as far as I have any recollection, has ever been supposed to depend upon the lawfulness or unlawfulness of the mode, by which it is obtained”); Commonwealth v. Dana, 43 Mass. (2 Met.) 329 (1841) at 337 (“If the search warrant were illegal, or if the officer serving the warrant exceeded his authority … this is no good reason for excluding the papers seized as evidence …”).

46 Elkins v. United States, 364 U.S. 206 (1960) at 217; Age of Surveillance, note 3 above, at 219–221.

47 United States v. Calandra, 414 U.S. 338 (1974) at 348.

48 Davis v. United States, 564 U.S. 229 (2011) at 236–237; Stone v. Powell, 428 U.S. 465 (1976) at 486; United States v. Janis, 428 U.S. 433 (1976) at 454.

49 United States v. Leon, note 44 above, at 906.

50 333 U.S. 10 (1948) [Johnson v. United States] at 14.

51 332 U.S. 581 (1948) at 595.

52 Carpenter v. United States, note 30 above.

53 Footnote Ibid. at 2217.

54 Footnote Ibid. at 2213–2214, citations omitted.

55 David Gray, “Collective Rights and the Fourth Amendment after Carpenter” (2019) 79:1 Maryland Law Review 66 at 67–85.

56 Carpenter v. United States, note 30 above, at 2227 (Kennedy, J., dissenting), citations omitted.

57 Footnote Ibid. at 2241–2242 (Thomas, J., dissenting).

58 Footnote Ibid. at 2218, citations omitted.

61 Footnote Ibid. at 2219.

62 Footnote Ibid. at 2214.

63 David Gray, “The Fourth Amendment Categorical Imperative” (2017) 116 Michigan Law Review Online 14 at 31–34.

64 Johnson v. United States, note 50 above, at 13–14.

65 Laura K. Donohue, “Original Fourth Amendment” (2016) 83:3 University of Chicago Law Review 1181; Laura K. Donohue, “The Fourth Amendment in a Digital World” (2017) 71:4 NYU Annual Survey of American Law 553.

66 Akhil Reed Amar, The Constitution and Criminal Procedure: First Principles (London, UK: Yale University Press, 1998) [Constitution and Criminal Procedure] at 3–20.

67 William Blackstone, Commentaries on the Laws of England: A Facsimile of the First Edition of 1765–1769, vol. 4 (Chicago, IL: University of Chicago Press, 1979) at 286–290.

68 Constitution and Criminal Procedure, note 66 above, at 12; William Stuntz, “The Substantive Origins of the Fourth Amendment” (1995) 105:2 Yale Law Journal 393 [“Substantive Origins”] at 401. See also James Otis, “In Opposition to Writs of Assistance” in William Jennings Bryan (ed.), The World’s Famous Orations (New York, NY: Funk & Wagnalls, 1906) 27 [“In Opposition”] at 29 (describing common law cases “in which the complainant has before sworn that he suspects his goods are concealed” providing grounds for “warrants to search such and such houses, specially named”).

69 Bell v. Clapp, 10 Johns R. 263 (NY 1813) at 269; Grumon v. Raymond, 1 Conn. 40 (1814) [Grumon v. Raymond] at 44 (reporting on Smith v. Bouchier, 2 Stra. 993, in which “[t]he question arose upon a custom, that a plaintiff making oath that he has a personal action against any person with the precinct, and that he believes the defendant will not appear, but run away, the judge may award a warrant to arrest him, and detain him until the security is given for answering the complaint”).

70 “Substantive Origins”, note 68 above, at 401, n 36.

71 Grumon v. Raymond, note 69 above, at 45 (noting that in searches for stolen goods, “[t]here must be an oath by the applicant that he has had his goods stolen, and strongly suspects that they are concealed in such a place …”); Entick v. Carrington, note 41 above, at 817 (describing then-familiar cases of searches for stolen goods, in which “case the justice and the informer must proceed with great caution; there must be an oath that the party has had his goods stolen, and his strong reason to believe they are concealed in such a place …”).

72 Entick v. Carrington, note 41 above, at 817 (the common law “holds the property of every man so sacred, that no man can set his foot upon his neighbor’s close without his leave; if he does he is a trespasser, though he does no damage at all; if he will tread upon his neighbor’s ground, he must justify it by law”).

73 Wilkes v. Wood, note 41 above, at 497 (noting that Wood, a secretary to Secretary of State Lord Halifax, was “the prime actor in the whole affair”); William Cuddihy, The Fourth Amendment: Origins and Original Meaning (Oxford, UK: Oxford University Press, 2009) [Origins and Original Meaning] at 439–440 and 446–452 (discussing the conditions that led to the General Warrant cases and the British rejection of general warrants).

74 Entick v. Carrington, note 41 above, at 817 (“we can safely say there is no law in this country to justify the defendants in what they have done; if there was, it would destroy all the comforts of society”).

75 “Substantive Origins”, note 68 above, at 396–397. See also Origins and Original Meaning, note 73 above, at 39–87; Telford Taylor, Two Studies in Constitutional Interpretation (Columbus, OH: Ohio State University Press, 1969) at 24–44; Nelson B. Lasson, The History and Development of the Fourth Amendment to the United States Constitution (Baltimore, MD: Johns Hopkins University Press, 1937) at 13–78; Tracey Maclin, “The Central Meaning of the Fourth Amendment” (1993) 35:1 William & Mary Law Review 197 at 223–228. But see Constitution and Criminal Procedure, note 66 above, at 11 (allowing that the general warrants cases were “familiar to every schoolboy in America,” but contending that the writs of assistance case was “almost unnoticed in debates over the federal Constitution and Bill of Rights”).

76 Wilkes v. Wood, note 41 above.

77 Entick v. Carrington, note 41 above.

78 Wilkes v. Wood, note 41 above, at 498; Entick v. Carrington, note 41 above, at 817.

79 “In Opposition”, note 68 above, at 27–37.

80 Footnote Ibid. at 28.

81 Mark Graber, “Seeing, Seizing, and Searching Like a State: Constitutional Developments from the Seventeenth Century to the End of the Nineteenth Century” in David Gray & Stephen Henderson (eds.), The Cambridge Handbook of Surveillance Law (Cambridge, UK: Cambridge University Press, 2017) 395 [“Seeing, Seizing, and Searching”] at 405–407.

82 See “Seeing, Seizing, and Searching”, note 81 above, at 405–407; Bernard Bailyn, The Ideological Origins of the American Revolution (Cambridge, MA: Harvard University Press, 1967) at 117.

83 Frisbie v. Butler, 1 Kirby 213 (Conn. 1787); Grumon v. Raymond, note 69 above, at 42–44 (condemning “a warrant to search all suspected places, stores, shops and barns in [town]” because the discretion granted the officers “would open a door for the gratification of the most malignant passions”).

84 Massachusetts Constitution, US, Declaration of Rights (1780), Art. XIV; Vermont Constitution, US, Declaration of Rights (1786), Art. XII; New Hampshire Constitution, US, Bill of Rights (1784), Art. XIX; North Carolina Constitution, US, Declaration of Rights (1776), Art. XI; Maryland Constitution, US, Declaration of Rights (1776), Art. XXIII; Pennsylvania Constitution, US, Declaration of Rights (1776), Art. X; Delaware Constitution, US, Declaration of Rights (1776), Art. XVII; Virginia Constitution, US, Declaration of Rights (1776), Art. X.

85 Department of State, Documentary History of the Constitution of the United States of America (Washington, DC: Department of State, 1894) at 193, 268, and 379 (reproducing reservations filed by New York, North Carolina, and Virginia).

86 Wilkes v. Wood, note 41 above, at 498.

87 “In Opposition”, note 68 above, at 30–32.

88 Footnote Ibid. at 32.

89 Philip B. Kurland & Ralph Lerner (eds.), The Founders’ Constitution, 5th ed. (Chicago, IL: University of Chicago Press, 2000) at 226 (quoting from the text of the writ at issue in Paxton).

90 Footnote Ibid. (noting “writs [of assistance] issued by King Edward I. to the Barons of the Exchequer, commanding them to aid a particular creditor to obtain a preference over other creditors …”).

91 “In Opposition”, note 68 above, at 32.

93 Entick v. Carrington, note 41 above, at 818.

94 One might object to this historical analysis by citing Shelley v. Kraemer, 334 U.S. 1 (1948), the landmark case prohibiting judicial enforcement of racially restrictive covenants, as grounds for concluding that private agents acting in the shadow of judicial sanction are state agents. Of course, that conclusion does not follow. As the Court noted in Shelley v. Kraemer, its holding bore on the “judicial enforcement of [racially restrictive covenants],” not the validity “of the private agreements as such.”

95 Carpenter v. United States, note 30 above, at 2220.

96 Footnote Ibid. at 2217.

97 Kyllo v. United States, 533 U.S. 27 (2001) at 32, n. 1: “When the Fourth Amendment was adopted, as now, to ‘search’ meant ‘[t]o look over or through for the purpose of finding something; to explore; to examine by inspection …’”

98 Carpenter v. United States, note 30 above, at 2246–2250.
