This chapter covers civil rights under international human rights law. It includes the right to legal personality, the right to a name, the right to family life, the right to marry, the right to privacy, and the right to respect for home and correspondence. The chapter discusses the legal standards and protections for these rights, the obligations of states to respect and fulfill them, and the role of international bodies in monitoring compliance. It also highlights the challenges in implementing civil rights protections and the importance of adopting comprehensive measures to address violations and ensure effective remedies for victims.
This chapter scrutinizes the operation of public sector privacy and data protection laws in relation to AI data in the United States, the United Kingdom, and Australia, to assess the potential for utilizing these laws to challenge automated government decision-making. Government decision-making in individual cases will almost inevitably involve the collection, use, or storage of personal information, and may also involve drawing inferences from data already collected. At the same time, the increased use of automated decision-making encourages the large-scale collection and mining of personal data. Privacy and data protection laws provide a useful chokepoint for limiting discrimination and other harms that arise from misuses of personal information.
The implementation of the General Data Protection Regulation (GDPR) in the EU, rather than the regulation itself, is holding back technological innovation. The EU’s data protection governance architecture is complex, leading to contradictory interpretations among Member States. This situation is prompting companies of all kinds to halt the deployment of transformative projects in the EU. The case of Meta is paradigmatic: both the UK and the EU broadly have the same regulation (GDPR), but the UK swiftly determined that Meta could train its generative AI model using first-party public data under the legal basis of legitimate interest, while in the EU, the European Data Protection Board (EDPB) took months to issue an Opinion that national authorities must still interpret and implement individually, leading to legal uncertainty. Similarly, the case of DeepSeek has demonstrated how some national data protection authorities, such as the Italian Garante, have moved to ban the AI model outright, while others have opted for investigations. This fragmented enforcement landscape exacerbates regulatory uncertainty and hampers the EU’s competitiveness, particularly for startups, which lack the resources to navigate an unpredictable compliance framework. For the EU to remain competitive in the global AI race, strengthening the EDPB’s role is essential.
There is no doubt that AI systems, and the large-scale processing of personal data that often accompanies their development and use, have put a strain on individuals’ fundamental rights and freedoms. Against that background, this chapter aims to walk the reader through a selection of key concerns arising from the application of the GDPR to the training and use of such systems. First, it clarifies the position and role of the GDPR within the broader European data protection regulatory framework. Next, it delineates its scope of application by delving into the pivotal notions of “personal data,” “controller,” and “processor.” Lastly, it highlights some friction points between the characteristics inherent to most AI systems and the general principles outlined in Article 5 GDPR, including lawfulness, transparency, purpose limitation, data minimization, and accountability.
Public administrations are increasingly deploying algorithmic systems to facilitate the application, execution, and enforcement of regulation, a practice that can be denoted as algorithmic regulation. While their reliance on digital technology is not new, both the scale at which they automate administrative acts and the importance of the decisions they delegate to algorithmic tools are on the rise. In this chapter, I contextualize this phenomenon and discuss the implementation of algorithmic regulation across several public sector domains. I then assess some of the ethical and legal conundrums that public administrations face when outsourcing their tasks to such systems and provide an overview of the legal framework that governs this practice, with a particular focus on the European Union. This framework encompasses not only constitutional and administrative law but also data protection law and AI-specific law. Finally, I offer some takeaways for public administrations to consider when seeking to deploy algorithmic regulation.
The actors active in the financial world process vast amounts of information, ranging from customer data and account movements, through market trading data, to credit underwriting and money-laundering checks. It is one thing to collect and store these data, yet another challenge to interpret and make sense of them. AI helps with both, for example by checking databases or crawling the Internet in search of relevant information, by sorting it according to predefined categories, or by finding its own sorting parameters. It is hence unsurprising that AI has started to fundamentally change many aspects of finance. This chapter takes AI scoring and creditworthiness assessments as an example of how AI is employed in financial services (Section 16.2), of the ethical challenges this raises (Section 16.3), and of the legal tools that attempt to adequately balance the advantages and challenges of this technique (Section 16.4). It closes with a look at scoring beyond the credit situation (Section 16.5).
Strategic litigation plays a crucial role in advancing human rights in the digital age, particularly in cases where data subjects, such as migrants and protection seekers, experience significant power imbalances. In this Article, we consider strategic litigation as part of broader legal mobilization efforts. Although some emerging studies have examined contestation against digital rights and migrant rights separately using legal mobilization frameworks, scholarship on legal mobilization concerning the use of automated systems on migrants and asylum seekers is scarce. This Article aims to address this gap by investigating the extent to which EU law empowers strategic litigants working at the intersection of technology and migration. Through an analysis of five specific cases of contestation and in-depth interviews, we explore how EU data protection law is leveraged to protect the digital rights of migrants and asylum seekers. This analysis takes a socio-legal perspective, analyzing the opportunities presented by EU data protection law and how civil society organizations (CSOs) utilize them in practice. Our findings reveal that the pre-litigation phase is particularly onerous for strategic litigants in this field, requiring a considerable investment of resources and time before even reaching the litigation stage. We illustrate this phase as akin to “climbing a wall,” characterized by numerous hurdles that CSOs face and the strategies they employ to overcome them.
This paper examines the rise of monitoring schemes to coordinate supervisors and market authorities in addressing the cross-industry challenges posed by the deployment of large language models. As artificial intelligence (AI) intersects with the core mandate of market authorities dealing with financial stability, data protection, intellectual property, competition and telecommunications, effective oversight requires collaboration and information sharing. Using examples such as the Canadian Digital Regulators Forum, the UK’s Digital Regulation Cooperation Forum and the European Union’s AI Act implementation process, the paper illustrates how national and international institutional coordination can help operationalize the high-level principles on AI governance currently under discussion in international fora. Ultimately, this approach aims to ensure responsible AI development while addressing risks and maximizing its societal benefits.
Chapter 12 analyses Irish law on police access to digital evidence. It outlines the domestic legal framework regarding data retention, interception of communications and access to stored data. It then considers the law governing cross-border requests for data. It assesses the extent to which these rules are adequate for law enforcement purposes and whether these rules are compatible with the European Convention on Human Rights, the Charter of Fundamental Rights and data protection standards.
The Conclusion describes how, while the handbook started with the main technological and legal challenges regarding collection of digital evidence, the research shows that even though the challenges are shared by legal systems across the globe, the answers are not. Legal solutions to similar problems are fragmented, disparate and often unsatisfactory. Even if technology-neutral solutions are preferable to make sure hard-fought EU legislation and international agreements can stand the test of time, the legal reality appears to be quite different. Despite positive recent legal developments at EU and international levels, future approximation of national approaches seems highly desirable to enable law enforcement authorities (LEAs) to conduct effective criminal investigations to protect society and its citizens from new criminal phenomena. At the same time, protection of citizens’ fundamental rights should be reinforced, not just at the national level but in a cross-border context, considering that many criminal investigations now reach beyond national borders. Global initiatives are, however, hampered by tensions between democratic and non-democratic states, making a one-size-fits-all solution inadequate.
Chapter 3 explores how EU data protection law relates to public–private direct cooperation on digital evidence in criminal investigations. It asks if a neat prima facie separation of the GDPR and the LED matches the realities of private-to-public data transfers for criminal investigations, and if that legal framework is harmonious enough to warrant description as an EU data protection acquis. It distinguishes scenarios of formal (and informal) direct cooperation, viewed through the conceptual prism of data controllership. It applies that frame to the European Commission’s 2018 ‘e-Evidence package’, along with co-legislators’ competing visions, before looking at the final 2023 compromise text from a data protection perspective. It discusses how far CJEU case law illuminates theoretical blind spots and if the ongoing strengthening of enforcement powers is likely to herald not only greater legal certainty on the supply of digital evidence but also meaningful, workable data subject rights. Last, it reflects on the future place of EU data protection standards within the Council of Europe’s own new direct cooperation mechanism – the Second Additional Protocol to the Budapest Convention.
Generative artificial intelligence (AI) has catapulted into the legal debate through popular applications such as ChatGPT, Bard, Dall-E and others. While the predominant focus has hitherto centred on issues of copyright infringement and regulatory strategies, particularly in the context of the AI Act, a critical but often overlooked issue lies in the friction between generative AI and data protection laws. The rise of these technologies highlights the unresolved tension between safeguarding fundamental data protection rights and the vast, almost universal, scale of data processing required for machine learning. Large language models, which scrape nearly the whole Internet, rely on and may even generate personal data falling under the GDPR. This tension manifests across multiple dimensions, encompassing data subjects’ rights, the foundational principles of data protection and the fundamental categories of data protection. Drawing on ongoing investigations by data protection authorities in Europe, this paper undertakes a comprehensive analysis of the intricate interplay between generative AI and data protection within the European legal framework.
Authored by leading scholars in the field, this handbook delves into the intricate matter of digital evidence collection, adopting a comparative and intra-disciplinary approach. It focuses specifically on the increasingly important role of online service providers in criminal investigations, which marks a new paradigm in the field of criminal law and criminal procedure, raising particular challenges and fundamental questions. This scholarly work facilitates a nuanced understanding of the multi-faceted and cross-cutting challenges inherent in the collection of digital evidence, as it navigates the contours of current and future solutions against the backdrop of ongoing European and international policy-making. As such, it constitutes an indispensable resource for scholars and practitioners alike, offering invaluable insights into the evolving landscape of digital evidence gathering.
As other chapters in this volume show, the EU remedies system is difficult to employ when it comes to EU fundamental right violations. When discussing (im)possibilities of procedural rules and how these encourage or discourage litigation, socio-legal scholars have referred to the concept of legal opportunity structures. In relation to this concept, the EU is a system with closed procedural legal opportunities: rules on directly accessing the CJEU severely limit the possibilities to pursue strategic litigation. At the same time, the EU has opened up legal opportunities as well, by bringing litigants a new catalogue of rights to invoke. In the context of fundamental rights accountability, strategic litigation is used extensively. This raises the question: how are actors (NGOs, lawyers, individuals) making use of the (partially) closed EU system and what lessons can be drawn therefrom? This chapter delves into several cases of mobilisation of the EU remedies system and describes the way in which the actors involved worked with or around EU legal opportunity structures, both inside and outside the context of formal legal procedures. The lessons drawn from these actions can inform future action in this field.
This contribution examines the possibilities for individuals to access remedies against potential violations of their fundamental rights by EU actors, specifically EU agencies’ deployment of artificial intelligence (AI). Presenting the intricate landscape of the EU’s border surveillance, the chapter sheds light on the prominent role of Frontex in developing and managing AI systems, including automated risk assessments and drone-based aerial surveillance. These two examples are used to illustrate how the EU’s AI-powered conduct endangers fundamental rights protected under the EU Charter of Fundamental Rights. Risks emerge for privacy and data protection rights, non-discrimination, and other substantive rights, such as the right to asylum. In light of these concerns, the chapter then examines the possibilities to access remedies by first considering the impact of AI uses on the procedural rights to good administration and effective judicial protection, before clarifying the emerging remedial system under the AI Act in its interplay with the EU’s existing data protection framework. Lastly, the chapter sketches the evolving role of the European Data Protection Supervisor, pointing out the key areas demanding further clarification in order to fill the remedial gaps.
This article examines the National Health Data Network (RNDS), the platform launched by the Ministry of Health in Brazil as the primary tool for its Digital Health Strategy 2020–2028, including its innovation aspects. The analysis is made through two distinct frameworks: the right to health and personal data protection in Brazil. The first approach is rooted in the legal framework shaped by Brazil’s trajectory on health since 1988, marked by the formal acknowledgment of the right to health and the establishment of the Unified Health System, Brazil’s universal access health system, encompassing public healthcare and public health actions. The second approach stems from the repercussions of the General Data Protection Law, enacted in 2018, and the inclusion of the right to personal data protection in the Brazilian Constitution. This legislation, akin to the EU’s General Data Protection Regulation, addressed the gap in personal data protection in Brazil and established principles and rules for data processing. The article begins by explaining the two approaches, then provides a brief history of health informatics policies in Brazil, leading to the current Digital Health Strategy and the RNDS. Subsequently, it delves into an analysis of the RNDS through the lenses of the two aforementioned approaches. In the final discussion sections, the article draws lessons from the analyses, particularly in light of ongoing discussions on the secondary use of data for innovation and differing interpretations of innovation policies.
Society needs to influence and mould our expectations so that AI is used for the collective good. We should be reluctant to throw away hard-won (and recently won) consumer rights and values on the altar of technological developments.
By establishing a common data governance mechanism across the EU, the Regulation on the European Health Data Space (EHDS) aims to enhance the reuse of electronic health data for secondary use (e.g. public health, policy-making, scientific research) purposes and realise associated benefits. However, the EHDS requires health data holders to make available vast amounts of personal and non-personal electronic health data, including electronic health data subject to intellectual property (IP) rights, for secondary use, which may pose risks for stakeholders (patients, healthcare providers and manufacturers alike). This paper highlights some conceptual legal problems which need to be addressed in order to provide clearer regulatory requirements to ensure effective and consistent implementation of key data minimisation measures (anonymisation or pseudonymisation) and data management safeguards (secure processing environments). The paper concludes that the EHDS has been drafted ambiguously (for example, its definition of “electronic health data” or the list of “minimum categories of electronic data for secondary use”), which could lead to inconsistent data management practices and may impair the rights and legitimate interests of data subjects and rights holders. To address legal uncertainties, prevent fragmentation and mitigate or eliminate risks, the EHDS requires closely coordinated implementation and legislative fine-tuning.
Non-fungible tokens (NFTs) introduce unique concerns related to the privacy of personal data. To create an NFT, users upload data to publicly accessible and searchable databases. This data can encompass information essential for the creation, transfer, and storage of the NFT, as well as personal details pertaining to the creator. Additionally, users might inadvertently engage with technology crafted to gather personal data. Traditional paradigms of privacy have not evolved in tandem with advancements in NFT and blockchain technology. To pinpoint where current privacy paradigms falter, this chapter delves into an introduction of NFTs, elucidating their foundational technical mechanisms and processes. Subsequently, the chapter juxtaposes current and historical privacy frameworks with NFTs, underscoring how these models may be either overly expansive or excessively restrictive for this emerging technology. This chapter suggests that Helen Nissenbaum’s concept of “contextual integrity” might offer the requisite flexibility to cater to the distinct attributes of NFTs. In conclusion, while there is a pronounced societal drive to safeguard citizen data and privacy, the overarching aim remains the enhancement of the collective good. Balancing this objective, governments should be afforded the latitude to equate society’s privacy interests with its imperative for transparency.
Global digital integration is desirable and perhaps even inevitable for most States. However, there is currently no systematic framework or narrative to drive such integration in trade agreements. This article evaluates whether community values can offer a normative foundation for rules governing digital trade. It uses the African Continental Free Trade Area (AfCFTA) Digital Trade Protocol as a case study and argues that identifying and solidifying the collective needs of the African region through this instrument will be key to shaping an inclusive and holistic regional framework. These arguments are substantiated by analysis of the regulation of cross-border data flows, privacy and cybersecurity.