I. Introduction
The rapid advancement of artificial intelligence (AI) has ushered in a transformative era, impacting diverse aspects of society and individuals’ lives. It already influences the way we interact, work, learn and do business. Nevertheless, algorithms pose several societal, economic and legal challenges. In this context of change, the European Union (EU) has taken action to regulate this technology and the risks it entails. On the one hand, the ambition is to create a trustworthy, human-centric, secure and ethical AI. On the other, the ambition is to prevent fragmentation of the European market by ensuring legal certainty.
The purpose of this article is to explore those proposed amendments to the AI Act that introduce the notion of the group or “groups of persons” as parties potentially adversely affected by an AI-powered system. This is a major novelty that has the potential to shift the current data protection approach in a new direction. The review of the proposed amendments referring to “groups of persons” shows that the changes are concentrated in three main categories. According to the remedy they intend to provide to an identified concern, those categories are adverse effects, public trust and redress mechanisms. Despite these intentions, the lack of a definition of the notion of “group”, the challenge of providing a description of the AI’s underlying logic and the unclear redress mechanism for harms suffered by groups of persons are challenges that need to be addressed by the legislator.
The methodology employed in this analysis involves a comprehensive review and critical evaluation of the AI Act’s proposed amendments, incorporating legal analysis, historical review and consideration of technological advancements. The structure of this research article includes an introduction to the topic (Section I), followed by a brief review of the societal impacts of AI (Section II) and, within this context, the AI Act proposal (Section III). Next, I discuss concretely the categories and challenges around the amendments envisaging groups of persons (Section IV) in order to provide specific recommendations for improvement of the proposed texts (Section V). Section VI concludes, and Section VII lists some limitations of this analysis.
II. AI in context
AI promises more efficient supply chains and workflows, faster and more customised services, optimised administration, better healthcare and new professional opportunities.Footnote 1 Despite the promising future AI technology paints for us, some have voiced concerns about the pace and scope of this technology.Footnote 2 Algorithm-based technologies pose important social, economic and ethical challenges.
First, in terms of social interaction, AI has already changed the ways we interact on the Internet, relate to others or choose our leaders.Footnote 3 Take, for example, platforms like Facebook, Twitter (now X),Footnote 4 LinkedInFootnote 5 and Tinder,Footnote 6 which use AI for content moderation, creation and analysis as well as for advertising.
Second, algorithms influence the ways we work and trade due to their integration into business operations and commercial strategies.Footnote 7 One of their main advantages, but also one of the main concerns, is their deployment in customer monitoring and online behaviour tracking. Although it is true that AI enables companies to personalise clients’ experiences, this poses critical pitfalls when it comes to the misuse of this technology and possible discrimination based on biased data, amongst other potential issues.
Third, AI raises several ethical challenges related to the human implications of the decision-making processes of algorithm-based services and to individual privacy. Algorithms learn from historical data, which more often than not contain the inherent biases present in society. When these biased datasets are used to train new AI models, they can perpetuate and even amplify societal prejudices. Combined with the lack of adequate due-diligence mechanisms ensuring transparency and accountability in an AI’s output,Footnote 8 this can lead to discriminatory outcomes in areas such as recruitment, lending and law enforcement. Privacy is yet another significant ethical concern.Footnote 9 AI systems often require access to vast amounts of personal data to function effectively. The collection, storage and use of these data raise questions as to the adequacy of the current understanding of privacy. AI’s inference capabilities defy the individual-centred approach of data protection and, therefore, the very foundation of current legislation.Footnote 10
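To make the bias-perpetuation mechanism described above more concrete, the following minimal Python sketch (using the scikit-learn library) trains a model on deliberately biased historical hiring decisions. The scenario, feature names and figures are hypothetical assumptions introduced here purely for illustration; they are not drawn from the AI Act or from the cited sources.

```python
# Illustrative sketch only: a model trained on biased historical decisions
# reproduces the bias in its own predictions.
from sklearn.linear_model import LogisticRegression

# Hypothetical past hiring records: [years_experience, group_membership],
# where group_membership encodes a protected characteristic (0 or 1).
X_hist = [
    [5, 0], [6, 0], [4, 0], [7, 0],
    [5, 1], [6, 1], [4, 1], [7, 1],
]
# Historically, equally experienced candidates from group 1 were rejected.
y_hist = [1, 1, 1, 1, 0, 0, 0, 0]

model = LogisticRegression().fit(X_hist, y_hist)

# Two new candidates with identical experience, differing only in group:
print(model.predict([[6, 0], [6, 1]]))  # the learned rule favours group 0
```

Because the historical decisions correlated with the protected characteristic, the trained model reproduces the same pattern for new candidates with identical experience.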
In this context of economic, social and ethical uncertainty, the proposal for a Regulation on Artificial IntelligenceFootnote 11 (AI Act) is the European legislative response to these challenges.
III. The AI Act: context, novelties and purpose
1. Political context
Not many jurisdictions have taken action to regulate algorithm-powered technologies and the risks they entail.Footnote 12 Some of the reasons for this lagging behind on the part of legislators are probably the backlash from investors, the novelty of the technology and the uncertainty around the scope of AI’s implications, as well as its consequences for governments.Footnote 13
The political commitment that brought about the current proposal for the Regulation dates back to 2017, as the explanatory memorandum of the proposal states,Footnote 14 when the European Council issued its Conclusions,Footnote 15 which were followed by a series of statement documents in 2019 and 2020.Footnote 16 The European Parliament (EP) engaged in this process and supported the adoption of a number of resolutions outlining what would take shape as the future AI Regulation framework. While those documents call for a comprehensive approach based on ethical principles and fundamental rights protection, addressing the risks of AI systems, it is the 2020 EP Resolution on a Framework of Ethical Aspects of Artificial Intelligence, Robotics and Related Technologies that consolidates the principles guiding the European approach into a specific proposal for regulation.Footnote 17
On 21 April 2021, the European Commission made public the proposal for a Regulation on Artificial Intelligence,Footnote 18 known as the AI Act. This piece of legislation is part of the larger EU strategy in the digital domain.Footnote 19 It builds on the documents adopted by the European Council and the EP and is embedded in a series of revisions and adaptations of legislation intended to respond to the challenges posed by AI development and use.Footnote 20
The ambition of the draft Regulation, as stated in its explanatory memorandum, is threefold: first, to create a framework for a trustworthy, human-centric, secure and ethical AI; second, to prevent fragmentation of the European market by harmonising the requirements for “AI products and services, their marketing, their use, the liability and the supervision by public authorities”;Footnote 21 and third, to ensure legal certainty and to facilitate investment and innovation.Footnote 22
Thus, the present AI Act proposal is the legal tool that the European legislator chose in order to address the risks emerging from disparate national legislation and enforcement as well as from the production and commercial use of AI tools.
2. Novelties
With this intention in mind, the text of the proposed Regulation establishes a harmonised set of rules, limitations and requirements for the creation, placing on the market and use of AI.Footnote 23 The Regulation applies to providers placing AI systems on the market or putting them into service, regardless of whether those providers are established in the Union. It applies to all users located in the EU as well as in all those cases where providers or users located elsewhere apply the output of an AI system within the Union. Thus, the wording of Articles 1 and 2 of the Regulation leaves no doubt as to the universal scope of application of the proposed document, as well as the ambition of its drafters to set and lead the global standard in the field of AI regulation.Footnote 24
The proposed AI Act follows a risk-based approach built on three levels of AI impact on individuals, namely unacceptable risk, high risk and low risk. First, the Regulation prohibits AI practices that are considered particularly harmful and that pose an unacceptable risk with regard to human dignity, freedom, equality, democracy and the rule of law as well as fundamental rights (Recital 15). Specifically, AI systems intended to distort human behaviour (Recital 16) and to provide social scoring of individuals (Recital 17), as well as systems enabling “real-time” remote biometric identification (Recital 18), are expressly forbidden in Article 5.
Second, particular requirements are laid down for high-risk AI systems, which have the potential to significantly impact fundamental rights and to decrease safety. The Regulation mandates the establishment of a risk management system (Article 9), which should ensure that the minimum but compulsory requirements concerning training data quality (Article 10), documentation (Article 11), record-keeping (Article 12), transparency (Article 13), human oversight (Article 14) and accuracy and robustness (Article 15) are respected. In addition, a conformity assessment mechanism (Article 43) is foreseen as well. The Regulation lays down obligations for providers and deployers of high-risk AI systems as well as other involved parties (Articles 26–29).
Finally, other low-risk algorithms are bound by less strict rules but still have to comply with transparency obligations. In addition, the AI Act sets forth an institutionalised governance system at the Member State level and creates a European Artificial Intelligence Board.
3. Purpose
The yet-to-be-enacted AI Act is one of the major legislative undertakings of the current EU executive. The AI Act strikes a balance between, on the one hand, the promises of a still largely unexplored AI technology and, on the other, the issues it poses for fundamental rights. Precisely there lies the core of the debate around the Regulation on AI.
The criticism around the AI Act points out that the approach adopted seems overly paternalistic.Footnote 25 In this vein, some EU governments considered that raising the compliance costs for an emerging industry whose full potential is yet to be explored may hinder European competitiveness in a global market driven by powerful big tech companies established outside of Europe.Footnote 26
However, it seems that the AI Act’s critics do not take account of EU competition policy. The European approach boils down to one main objective, namely to achieve market integration by reducing barriers to private initiativesFootnote 27 and by protecting “competitors rather than competition”.Footnote 28 This understanding of the EU’s economy substantiates the prohibition that “the competition may not be eliminated for a substantial part of the relevant market as one of the four cumulative conditions to grant an exemption from the cartel prohibition”.Footnote 29 This is why EU competition policies are based on the idea of a market of small and medium-sized companies that should be protected and whose initiative should be unhindered.Footnote 30 Therefore, EU competition regulations are not intended to favour big conglomerates, as is the case for other players such as the USA and China, where, for different reasons, companies are able to amass huge market power. The European approach to the AI industry aligns with its own competition doctrine. It is not intended to create companies able to capture a whole market in order to exert influence on the other market players. Furthermore, the framework of the AI Act aims to provide a level playing field for those who use AI or would like to enter the market, following the EU’s competition policy. The Regulation aims to provide clarity and certainty for companies operating in the industry and, through this, to encourage innovation and investment in the development of new AI technologies that comply with the General Data Protection Regulation (GDPR) and other data protection regulations. In that sense, the AI Act aims to establish accountability and transparency rules that could eventually translate into increased trust in AI systems and, consequently, greater adoption of the technology.Footnote 31
While some express uncertainty about the Regulation’s success,Footnote 32 its drafters are confident that the AI Act will become a beacon for other jurisdictions in matters of regulating AI and repeat the game-changing impact of the GDPR. Doubts persist, however, over the end result, as some challenge the potential of the AI Act to reaffirm the power of the Brussels effect.Footnote 33
While the previous paragraphs aimed to provide a picture of the context, novelties and envisaged general purpose of the AI Act, in the following I focus particularly on the EP’s proposed amendments to the Regulation,Footnote 34 especially concerning the notion of “groups of persons” and its challenges.
IV. The amendments: groups of persons
The proposed amendments introduce the notion of the group or “groups of persons”, alongside that of the individual, as parties potentially adversely affected by an AI-powered system. This is a major novelty that has the potential to shift the current data protection approach in a new direction. The current approach is entirely centred on the role and place of the individual user in relation to the processing party. This individual-based approach owes its central place to the technological and legislative reality in which data protection law evolved over recent decades.Footnote 35
1. Historical background
In the following paragraphs, I briefly review the origins and conceptualisation of the current approach with regard to the technological reality of the time as well as the legal understanding of the matter.
First,Footnote 36 individual data were collected directly from single users, relying mainly on consent. Consent has a long history, dating back to antiquity,Footnote 37 as a tool expressing someone’s will and knowledge,Footnote 38 which reflects an individual’s autonomy to decide. The core idea behind consent is for it to be a tool empowering individuals in a power-asymmetrical relationship, for example between a patient and a physician or a data subject and a data controller. Putting it into perspective, consent as a lawful basis for data processing should be regarded as a component of the development of data protection and privacy policies.Footnote 39 Although the notion of privacy as a separate right was already being discussed in the nineteenth century,Footnote 40 it was not until the late 1970s and 1980s that privacy and data protection acts were first adopted in some European countries.Footnote 41 In 1983, the German Constitutional Court adopted a landmark decisionFootnote 42 establishing a right to “informational self-determination”.Footnote 43 This decision is based on the idea that dignity, privacy and freedom to decide for oneself should be legally guaranteed in the digital environment as well.Footnote 44 This case plays a fundamental role in understanding the present-day data protection landscape and its focus on the individual as the holder, manager and controller of their data.
Second, since the census case, technology has evolved exponentially, and with it the amount of data collected and stored. In the early 2000s, the development of Big Data technologies made it possible to collect, process and store larger volumes of data in a faster and more efficient way. This led to the emergence of predictive analytics and machine learning, which shifted the value of the data, which “no longer resides solely in its primary purpose for which it was collected, but in its secondary uses”.Footnote 45 Datasets are now interactive repositories that have the potential to allow AI systems to infer information regarding the subjects included in the dataset as well as those outside of it, build upon this information and take a decision or predict human behaviour through the combination of various sources of data. This unchains Big Data, or any other AI tool, from the necessity of collecting data on all of the members of a given target group in order to achieve successful results. Furthermore, AI systems can affect individuals whose data are not present in the dataset that is used to produce the output of the AI system.Footnote 46 Thus, individuals’ voluntary participation (eg through consent) is no longer crucial or necessary for the performance of an algorithm. This in turn translates into a lack of legal protection for those individuals who never consented to their data being collected or processed and who therefore cannot rely on the rights enshrined in the GDPR. Hence, they are left with little to no legal protection or possibility of seeking redress.
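The following minimal Python sketch is offered purely to illustrate the inference capability described above. It uses the scikit-learn library, and the lending-style scenario, feature names and figures are hypothetical assumptions introduced here for illustration; they are not drawn from the AI Act or from the cited literature. The point is simply that a model trained on data from a few consenting users can still generate a prediction about a person who never contributed any data.

```python
# Illustrative sketch only: a model trained on data from consenting
# participants can still produce predictions about non-participants.
from sklearn.linear_model import LogisticRegression

# Hypothetical training data collected (with consent) from a few users:
# features = [age, postcode_area, weekly_app_usage_hours]
X_train = [
    [25, 1, 30],
    [47, 2, 5],
    [33, 1, 22],
    [52, 3, 3],
    [29, 2, 18],
    [61, 3, 2],
]
# Hypothetical target attribute the provider wants to predict,
# eg "likely to default on a loan" (1) or not (0).
y_train = [0, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(X_train, y_train)

# A person who never consented and whose data were never collected for
# training can still be scored, based only on superficially similar traits.
non_participant = [[55, 3, 4]]
print(model.predict(non_participant))        # inferred label
print(model.predict_proba(non_participant))  # inferred probability
```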
In sum, this dramatic change in the volume, velocity and varietyFootnote 47 of the data collected and processed over recent years challenges the current state of data protection and privacy legislation.Footnote 48 In particular, when it comes to individual control over personal data, the existing model of collection on the basis of consent seems inadequate, as do the mechanisms to prevent identification, such as anonymisation or differential privacy.Footnote 49
In this context, the notion of groups of persons as employed in the EP’s amendments to the proposed AI Regulation seems promising, and it is explored further in this article.
2. The AI Act’s amendments on groups of persons: categories and challenges
The EP amends the AI Act proposal in such a way that it introduces “groups of persons” as potentially affected entities of high-risk AI systems. The new Amendment 167 to Article 3, paragraph 1, point 1b defines “significant risk” as the ability to “affect an individual, a plurality of persons or to affect a particular group of persons”. Furthermore, Amendment 174 to Article 3, paragraph 1, point 8a (new) establishes that the “affected person” may include “any natural person or group of persons”, and Amendment 191 to Article 3, paragraph 1, point 34 adds groups of persons to those possibly affected by an “emotion recognition system”. These amendments testify to the purposeful and coherent intention to extend the AI Act’s protected parties further, to those who may be affected adversely by an algorithm-based decision on the grounds of their belonging to a particular group. In order to understand how these changes fit within the AI Act, the remainder of this subsection is divided into two parts: categories of amendments and the challenges they pose.
a. Categories
In this respect, the changes related to groups that the European legislator introduces could be clustered into three main categories, namely adverse effects, public trust and redress. In the following, this article explores those categories.
i. Adverse effects
Discrimination based on groups’ characteristics can ensue from several technical and social contexts related to the development of an AI system and its uses.Footnote 50 Its development and learning models, based predominantly on historical data and records, play a crucial role.Footnote 51
First, Amendment 50 to Recital 26a (new) highlights the particular risk of discrimination against individuals and groups of people. AI used by law enforcement authorities to make predictions, profiles or risk assessments based on “personality traits and characteristics, including the person’s location, or past criminal behaviour of natural persons or groups of persons for the purpose of predicting the occurrence or reoccurrence of an actual or potential criminal offence(s)” may lead to a violation of human dignity and of the presumption of innocence.Footnote 52 Moreover, the use of AI systems involves a specific power imbalance between the respective authority and the suspect(s), who may face insurmountable obstacles to obtaining meaningful information and challenging the results.Footnote 53 Therefore, those systems should be prohibited, pursuant to Amendment 224, Article 5, paragraph 1, point da (new).
Second, AI systems that influence decisions in the domain of education are considered high-risk “since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood”.Footnote 54 The EP underlines the potential consequences of these systems for specific groups within society, such as “women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation”.Footnote 55
Third, the same reasoning applies in the professional domain as well as in the process of recruitment according to Amendment 66, Recital 36, “since those systems may appreciably impact future career prospects, livelihoods of these persons and workers’ rights”.
Fourth, AI may adversely influence and perpetuate historical discriminatory patterns and biasesFootnote 56 as well as create new ones affecting persons or groups when it comes to their access to healthcare or other public and private services or benefits, following Amendment 67 to Recital 37.
ii. Public trust
Confidence in the decision-making process of an AI system, and therefore in the results ensuing from it, is of prime importance for the deployment and use of algorithm-based systems. The principles of lawful, fair, transparent, accurate and accountable data collection and processing are some of the cornerstones of data protection.Footnote 57 They contribute to greater confidence in this technology, which collects and processes huge amounts of personal data. This is why they find their place in the proposed AI Act (Articles 13, 15, 52) as well. The amendments introduce a collective perspective to several of those principles, particularly in terms of accuracy, risk assessment and transparency.
First, when it comes to the accuracy of the information represented in the dataset used by the AI, the right balance between technical excellence in terms of operational effectiveness and accuracy is indispensable in order to prevent any adverse impacts on individuals.Footnote 58 It is important to highlight the difference in the meaning of “accuracy” used in this context compared to the meaning of “accuracy” used in the GDPR. While in the latter accurate data under Article 5(1)(d) designates data that are up to date and correctly reflect users’ characteristics, in the AI Act “accuracy” signifies the correct output of the system. Updated personal data play a role in the correctness of AI predictions but are not the only factor that influences the result. In this sense, technical resilience and the security of the system from external attacks should not come at the expense of the quality of the data used to inform the predictions of the system.
Second, the role of developers and deployers of AI systems is essential to the risk assessment of the algorithm. A new Article 29a, under Amendment 413, is proposed in which a description of the fundamental rights impact assessment for high-risk AI systems is established. Points c) and f) specifically mention harm likely to impact groups. Moreover, in paragraph 4, Article 29a (new) mandates that representatives of the affected groups should be involved in the impact assessment procedure. Amendment 92, Recital 58a (new) establishes that the deployers of high-risk AI systems should create such governance mechanisms so that the potential adverse effects on “the people or groups of people likely to be affected, including marginalised and vulnerable groups” can be adequately foreseen. When it comes to assessing the harm posed by a high-risk AI system, a series of amendmentsFootnote 59 instruct that the impact on a “particular group of persons” should be taken into account as well.
The national supervisory authority on the matters contained in this Regulation should intervene whenever there is a suspicion that an “AI system exploits the vulnerabilities of vulnerable groups or violates their rights intentionally or unintentionally”.Footnote 60
Third, the group aspect has been embedded in the transparency mechanism foreseen in the AI Act. For example, Amendment 315, Article 10, paragraph 3 mandates that human oversight should be ensured “where decisions based solely on automated processing by AI systems produce legal or otherwise significant effects on the persons or groups of persons on which the system is to be used”.
Furthermore, the technical information required for the placement on the market of an AI should include “the categories of natural persons and groups likely or intended to be affected”Footnote 61 and “the overall processing or logic of the AI system … with regard to persons or groups of persons”,Footnote 62 as well as detailed information about “the degrees of accuracy for specific persons or groups of persons on which the system is intended to be used and the overall expected level of accuracy”.Footnote 63
iii. Redress
Importantly for the effectiveness of those changes, amendments envisaging redress mechanisms are foreseen. First, the proposed Recital 84a (new)Footnote 64 urges deployers to “provide internal complaints mechanisms to be used by natural and legal persons or groups of natural persons”. Second, a new Article 68a is introduced aiming to empower users and “groups of natural persons” by providing them with a right to lodge a complaint before the respective national authority.Footnote 65
b. Challenges
The proposal to introduce groups as protected parties in the AI Act is a major innovation in the data protection landscape. Its main objective is to prevent negative effects of algorithmic decision-making on parties whose data have not been collected but who are nonetheless affected by the processing. The potential of the collective dimension to fill this gap is not negligible. However, the place and role of groups within the larger data protection landscape, and particularly in the context of the AI Act, do not seem to be without pitfalls. In the following paragraphs, I explore the potential challenges that the notion of “group” would entail.
i. Challenge 1: definition of “group”
The proposed AI Act and its amendments do not include a definition of “group” or “group of persons”, unlike the detailed and layered description of individuals,Footnote 66 their role and their place within the architecture of the Regulation. This lack of a legal definition is not unique to this Regulation, as nowhere in data protection legislation can a definition of collective entities or groups as protected data subjects be found. With no legal reference from the domain of data protection, it seems challenging to determine the applications and implications of the proposed provisions.
ii. Challenge 2: “plurality of notions”
The wording of the text and its interpretation further increase this uncertainty. On the one hand, in three instances the text of the proposed amendments mentions the effect of AI on “a plurality of persons”.Footnote 67 The lack of clarity on the definition of a plurality of persons, however, seems surmountable when applying a textual analysis of the Articles in question. While groups of persons are separated by “or”, individuals and pluralities of persons are clustered together. Thus, it could be concluded that the legislator envisaged that harm could be inflicted on an individual or on more than one person. This raises the question of the threshold at which a number of individuals becomes a group and, subsequently, of the distinction between a “group of persons” and a “plurality of persons”, given that both consist of more than one person. Therefore, a plausible interpretation is that the legislator refers to two distinct notions. While “group of persons” designates a concrete entity identified in the AI Act, such as vulnerable groups, “plurality of persons” is the term used for individuals collectively affected by an AI’s predictions.
Furthermore, this lack of clarity feeds on the apparent dichotomy between vulnerable groups of people and other groups. First, in multiple instances the text defines certain groups as “vulnerable”. In particular, this reference can be found in provisions referring to prohibited AI techniques that could distort human behaviour,Footnote 68 the training models used,Footnote 69 the implementation of a risk management system,Footnote 70 the role of the supervisory body in this contextFootnote 71 and the elaboration of codes of conduct.Footnote 72 Second, groups of persons could be harmed in the context of education and work as well as in the provision of public and private services. Although the amendments seem to avoid defining them as “vulnerable” and instead delimit their belonging to particular “minority subgroups”Footnote 73 in society usually associated with higher rates of discrimination, such as “persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation”,Footnote 74 those subgroups happen to overlap with the groups that present the highest degree of vulnerability. Therefore, these amendments should be interpreted as an attempt to establish the features that certain groups must display in order to be characterised as such. Third, this conclusion is supported by the wording of Amendment 256, Article 7, paragraph 2a (new) and Amendment 413, Article 29a (new), which mandate that the AI impact assessment should include representatives of “people or groups of people likely to be affected, including marginalised and vulnerable groups”.Footnote 75 Nevertheless, it remains uncertain how these groups’ representatives would be identified and on what grounds they would be entitled to represent diverse and unrelated members of a group. In addition, this raises doubts as to the provisions’ real effects unless those groups are institutionalised under some form of union or association. Workers’ unions are an example thereof.
Other parts of the text suggest that vulnerable groups or marginalised minorities are not the only ones who could potentially be adversely affected by AI systems. The best example of this can be found in Amendment 413, Article 29a (new). While Article 29a (new) f) mandates that the fundamental rights impact assessment for high-risk AI systems shall include “specific risks of harm likely to impact marginalised persons or vulnerable groups”, point c) establishes that the assessment shall include “categories of natural persons and groups likely to be affected by the use of the system”. Therefore, the proposed amendments set forth a distinction between vulnerable groups and groups impacted by AI in general. This approach is similar to the one adopted by the GDPR concerning vulnerable data subjects.Footnote 76
A possible critique of the conclusion drawn in the previous paragraph could stem from the fact that the proposed Article 29a (new) mandates that the impact assessment should establish categories of groups and, after that, delimit the specific risks of harm likely to affect vulnerable groups. Hence, the only groups envisaged by the legislator are those that bring together vulnerable and marginalised communities. While this might be the actual intention of the drafters of the text, it is plausible to expect that an impact assessment involving the identification of the affected groups would witness a surge of other unexpected groups of persons equally affected, although not “vulnerable”, pursuant to the indications of the Regulation. It is uncertain, however, what action should be taken in such cases. Although these critiques might be grounded in the particular wording of the amendments, groups falling outside of the definitions of “vulnerable” or “marginalised” exist and could be equally influenced by an AI system’s decisions. This is so because of the particular ways in which algorithms perform tasks. They can categorise and cluster users’ data based on multiple indicators, which could give rise to the creation of random groups that are unaware of their existence as such. The difference between stable (eg marginalised communities) and unstable groups stems from the level of awareness, institutionalisation and stability of the group. Stable groups,Footnote 77 also called active groups,Footnote 78 are usually aware of their belonging to the group, either because they proclaim themselves as such or because they are externally designated as such (take, for example, linguistic, cultural, professional or urban subgroups and minorities). Their most important characteristic is that they are static and do not undergo easy, rapid and random shifts in their morphology. Unstable groups, on the other hand, are usually externally proclaimed and unaware of their existence and relatedness. They are also highly malleable and thus can be created and dissolved rapidly. In addition, their structure and membership can evolve quickly. For example, while the students of a university may regard themselves as members of one community (and therefore a stable group), an algorithm could cluster some of them based on their postcode, gender or transportation used in order to infer correlations about the safety of the campus, as sketched below.
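The campus example can be made concrete with a short, purely illustrative Python sketch; the student records, feature encoding and number of clusters are hypothetical assumptions introduced here for illustration only. A standard clustering algorithm partitions individuals into groups on the basis of a few unrelated indicators, producing exactly the kind of unstable, externally proclaimed group whose members are unaware of belonging to it.

```python
# Illustrative sketch only: an algorithm can create "unstable groups"
# by clustering people on indicators they do not perceive as group-defining.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical student records: [postcode_area, gender_code, transport_mode]
students = np.array([
    [10, 0, 1],
    [10, 1, 1],
    [42, 0, 2],
    [42, 1, 2],
    [10, 0, 3],
    [77, 1, 1],
])

# The algorithm assigns each student to one of three clusters.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(students)
print(labels)

# Each cluster is, in effect, an externally created group: its members are
# unaware of belonging to it, and the grouping can change whenever the
# features, the number of clusters or the underlying data are modified.
```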
Therefore, while the Regulation envisages some protection for particular groups based on specific or general vulnerabilities, there could be groups arising from an AI system’s analytical capacities that fall outside of the intended protected groups. Hence, individuals whose data are not included in the dataset of the algorithm or who are not identified in the Regulation would not be able to rely on effective protection against the adverse effects of a particular automated decision.
iii. Challenge 3: explainability 2.0
Related to this, a third challenge stems from the technical information required pursuant to Article 11(a) in Annex IV of the Regulation. Amendment 742 stipulates that, before the deployment of an AI system, a general description of the AI system has to include the categories of “natural persons and groups likely to be affected” by the algorithm. While for some groups, such as the previously mentioned stable ones, this is possible, AI’s capacities allow the grouping of people based on many different, often unrelated indicators in order to reach a decision or to provide a particular output. Therefore, this Amendment raises doubts as to the feasibility of providing a clear-cut general projection of the groups possibly affected by an AI system’s output and, consequently, as to the ability of AI developers and deployers to comply with the Regulation itself.
Moreover, Amendment 752 to Annex IV, paragraph 1, point 2, point b mandates that a description of the architecture, including the logic of the AI system, its rationale and its assumptions, has to be provided “also with regard to persons or groups of persons on which the system is intended to be used”. This means that an ex-ante description of the architecture of the algorithm is needed. A developer may be able to design, create and successfully launch an AI system, but this does not imply an understanding of what data determined its results, nor of the specific process behind its determination. Hence, this text poses a challenging task for developers due to the nature of AI systems, which are intended to harness huge amounts of data and infer correlations that humans are otherwise unable to draw. Providing understandable information regarding the purposes, functioning and expected outcomes of algorithmic data processing is often a fiction. This is because in some cases it is very difficult to describe the purposes of the data collection: data might be collected for one reason but later prove useful for different, previously unspecified purposes.
iv. Challenge 4: redress mechanism
Finally, the fourth challenge posed by the wording of the amendments concerns the redress mechanism proposed in the amendments to the Regulation. A new Article 68a stipulates that “every natural persons or groups of natural persons shall have the right to lodge a complaint with a national supervisory authority”. Furthermore, another amendment recommends that “it is essential that natural and legal persons or groups of natural persons have meaningful access to reporting and redress mechanisms and to be entitled to access proportionate and effective remedies”. In the previous paragraphs, I have argued that the lack of clarity on the definition of the notion of groups of persons, as well as on the scope of the term, prevents its effective application and performance as a separate subject of data protection. Those same pitfalls also account for the uncertainty around the mechanism for redress when groups of persons claim to have been harmed by an AI system’s decision. The proposed new Articles do not clarify the requirements, procedural or substantive, that groups should meet in order to be able to lodge a complaint. It is not clear what types of groups could claim to have suffered harm. Should only vulnerable or marginalised groups be able to do this, or should any other group that has been targeted and treated as a group, and has suffered harm as a result, be able to lodge a complaint? In addition, who should prove that a passive groupFootnote 79 (created externally, eg by an algorithm) is actually a group? Furthermore, it is unclear whether there should be a mechanism to represent the group as such or whether it would be represented by an appointee.
V. Recommendations
With regard to the discussion above, concrete recommendations can be made that could serve as inspiration for the legislator as well as for further discussion and analysis of this matter.
1. Clarify the definition of the terms “plurality of persons” and “group of persons”
If the AI Act is enacted with the current amendments, the Regulation should include definitions of those terms in order to clarify their scope and their intended addressees, if they are indeed two separate entities. Data protection law has not developed a notion of “vulnerability”Footnote 80 or of the groups affected by algorithm-based decision-making. Therefore, the dichotomy between groups particularly affected because of their specific social status and groups generally affected because of the logic of the algorithm reinforces the uncertainty around the introduction of the notion of groups. In addition to the clarification of these terms, the AI Act should foresee the implications thereof when it comes to the representation of those groups in the impact assessment mechanism.
2. Clarify how AI developers and deployers can provide general and detailed descriptions of the AI architecture concerning groups of persons
Further improvement of the Regulation proposal should envisage the need to provide a more detailed description of the information that those groups should expect. EU legislators should take a stance on the question as to which groups are entitled to an explanation of the functioning of the algorithm as a system and how this description of the logic of the algorithm would be reasonably achieved.
3. Provide guidelines on the redress mechanism for groups of persons
Provisions on the mechanism for redress for groups of persons would empower not only vulnerable groups but also those individuals whose data were not collected but who are affected by the AI system’s inference capabilities. A common framework for lodging collective complaints, including the requirement to prove harm based on defined requirements, would provide legal certainty for users and AI providers and ultimately extend the legal coverage of data protection.
VI. Conclusion
The proposed amendments introducing the notion of “groups of persons” into the AI Act are a significant step forward in addressing the challenges posed by AI systems that influence individuals whose data have not been directly collected. These changes have the potential to broaden the data protection framework and adapt it to the evolving technological landscape. By recognising and providing protection for groups of persons, the amendments attempt to bridge the gaps in current data protection approaches and ensure that the rights of individuals affected by AI are upheld. Therefore, the notion of “groups of persons” is relevant and necessary. However, these amendments are not without their challenges. The lack of a clear legal definition of “groups” raises uncertainty regarding the scope and application of the proposed changes. Additionally, understanding and providing a detailed description of the involved AI system’s logic concerning groups of persons can be technically challenging. It is necessary to carefully consider how to effectively explain AI systems in a meaningful and understandable way. Moreover, establishing a comprehensive and fair redress mechanism for groups of persons is a complex task that requires further clarification and guidelines to ensure its effectiveness and accessibility. Not addressing these challenges risks undermining the AI Act’s intended purpose of protecting users, reducing it simply to a group of words.
VII. Limitations
This analysis is not without its limitations, which stem from the evolving nature of the legislation in question. The provisions discussed above are part of the EP’s proposed amendments to the AI Act’s text, which entails some uncertainty as to the final text of the Regulation. This is the first ever piece of legislation on the matter of AI; hence, there is no reference jurisprudence that this analysis could rely on. In addition, the notions of group or group privacy represent underexplored terrain when it comes to their application to data protection or AI as well as their practical consequences. This is why this article envisages a further revision that would take into account the final and definitive text of the AI Act once enacted. Despite these limitations, my hope is that this article inspires further investigations into the matter, because I believe it opens new horizons in data protection and privacy scholarship.
Competing interests
The author declares none.