I. Introduction
On 21 April 2021, the European Commission published its package of measures on a European approach to artificial intelligence (AI), consisting of a communication,Footnote 1 accompanied by an updated Coordinated Plan on AIFootnote 2 and a proposal for a horizontal regulation (Artificial Intelligence Act, AIA)Footnote 3 with nine annexes. This package is the first of three inter-related legal initiatives announced by the Commission with the aim of making Europe a safe and innovation-friendly environment for the development of AI. This first initiative aims to establish a European legal framework for AI to address fundamental rights and safety risks specific to AI systems. The second initiative is the revision of sectoral and more horizontal safety legislation. A proposal for a new Machinery RegulationFootnote 4 with eleven annexes was published on the same day as the AI package, addressing an important aspect of AI usually referred to as ‘robotics’, and a proposal for a new General Product Safety RegulationFootnote 5 followed soon after. Parliament and Council are currently preparing both files for the trilogues. Finally, the third initiative announced is the introduction of EU rules to address liability issues related to new technologies, including AI systems. The public consultation for this initiative has already closed, and a proposal is planned for the third quarter of 2022.Footnote 6 This third initiative will comprise measures adapting the liability framework to the challenges of new technologies, including AI, to ensure that victims who suffer damage to their life, health, or property as a result of new technologies have access to the same compensation as victims of other technologies. In the Inception Impact Assessment, a revision of the Product Liability Directive (PLD)Footnote 7 and a legislative proposal with regard to liability for certain AI systems are identified as policy options.Footnote 8
Given that liability for AI and other emerging digital technologies had been on the agenda for some time, it may come as a surprise that liability legislation figures last on the agenda. An Expert Group on Liability and New Technologies was established in 2018. It was divided into two formations, one dealing specifically with the PLD and largely dominated by stakeholders, the other – the so-called New Technologies Formation (EG-NTF) – having a broader mandate and consisting mainly of academics.Footnote 9 Only the EG-NTF ever published an official written report,Footnote 10 which then served, inter alia, as a basis for the European Commission’s report on the safety and liability implications of AI, the Internet of Things (IoT), and roboticsFootnote 11 of 19 February 2020, which formed part of the 2020 AI package and accompanied the Commission White Paper on AI.Footnote 12
A major driver of activities in the field of liability has certainly been the European Parliament. After its first resolution in 2017,Footnote 13 which included the much-quoted and much-criticised plea for electronic personhood,Footnote 14 the European Parliament passed another resolution on 20 October 2020 that includes a full-fledged ‘Proposal for a Regulation of the European Parliament and of the Council on liability for the operation of AI systems’.Footnote 15 This proposal is certainly much more mature than the 2017 resolution and bears a striking resemblance to policy considerations made within parts of the European Commission.
Whether the Commission will follow the recommendations of Parliament or take a different approach remains to be seen. Because AI liability is a subject matter that might be addressed within different regulatory and legal frameworks, for which different Directorates General of the Commission and different Committees within the Parliament are responsible, the matter remains highly controversial. This paper analyses the different risks posed by AI and why AI challenges existing liability regimes. It also explains the main solutions put forward so far and evaluates them, concluding that different solutions may be appropriate for different types of risk.
II. Dimensions of AI and Corresponding Risks Posed
The challenges posed by AI and modern digital ecosystems in general – such as opacity (‘black box effect’), complexity, and partially ‘autonomous’ and unpredictable behaviour – are similar, irrespective of where and how AI is deployed. However, at a somewhat lower level of abstraction, the potential risks associated with AI usually fall into one of two dimensions: ‘safety risks’ and ‘fundamental rights risks’.Footnote 16 These two types of risk are simply the downside of our expectations of AI and of the promises made by those developing and deploying the technology: that AI will help by improving health, saving lives and protecting the climate, and will assist us in making better decisions, enhancing fairness, and developing into a better society (Figure 12.1).
1. Traditional (Physical) Safety Risks
Traditionally, death, personal injury, and damage to property have played a special role within safety and liability frameworks. These traditional types of risks can more specifically be described as ‘physical’ safety risks, but are normally referred to simply as ‘safety risks’. These risks continue to play their very special role in the digital era, but the concept must be understood more broadly to include not only death, personal injury, and damage to property in the traditional sense, but also damage to data and to the functioning of other digital systems. Where, for example, the malfunctioning of software causes the erasure of important customer data stored by the data holder in some cloud space, this should have the same legal effect as the destruction of a hard disk drive or of paper files with customer data (which is not to say that all data should automatically be treated in exactly the same way as tangible property in the tort liability context).Footnote 17 Likewise, where tax management software causes the victim’s customer management software to collapse, this must be considered a safety risk, irrespective of whether the customer management software was run on the victim’s hard disk drive or somewhere in the cloud within a SaaS scheme. While this is unfortunately still disputed under national tort law,Footnote 18 any attempt to draw a line between data stored on a physical medium owned by the victim and data stored otherwise seems to be completely outdated and fails to recognise the functional equivalence of different forms of storage.
2. Fundamental Rights Risks
‘Fundamental rights risks’ are associated with the social dimension of AI. They include discrimination, exploitation, manipulation, humiliation, oppression, and similar undesired effects that are – at least primarily – non-economic (non-material) in nature and that are not just the result of physical harm (as the latter would be dealt with under traditional regimes of compensation for pain and suffering, etc). Such risks have traditionally been dealt with primarily by special legal regimes, such as data protection law, anti-discrimination law or, more recently, law against hate speech on the Internet and similar legal regimes.Footnote 19 There is also a growing body of tort law that deals specifically with the infringement of personality rights.Footnote 20 Even though the concept of ‘fundamental rights’ is focused on individual rights, the term ‘fundamental rights risks’ should be understood more broadly as encompassing also risks of a more collective nature, for example, risks for the rule of law, democracy, and freedom of expression in general.Footnote 21
While the fundamental rights aspect and, therefore, the non-economic aspect of such risks is in the foreground, these risks can, of course, entail economic risks for the affected individual or for society as a whole. For instance, AI systems used for recruitment that favour male applicants create a social risk for female applicants by discriminating against them, but this also leads to adverse economic effects for the affected women.
3. Overlaps and In-Between Categories
The division between safety risks and fundamental rights risks is not always clear-cut and should not be overestimated. There are not only clear overlaps, but also a considerable grey area comprising a number of important risks. For instance, adverse psychological effects can be a very traditional safety risk,Footnote 22 where the effect is a diagnosed illness according to WHO criteria (such as depression), but also a fundamental rights risk associated with the social dimension of AI where the effect is not a diagnosed illness but, for example, just stress or anxiety. It is not always easy to draw a line between the two.Footnote 23
a. Cybersecurity and Similar New Safety Risks
Digitalisation has given rise to a number of very special risks that are not easy to classify. They are essentially safety risks, albeit safety risks of a nature that is somewhat in a grey zone between ‘physical’ and ‘intangible’. Such special safety risks include the ‘data security’ aspect of data protection and privacy (i.e. prevention of data leaks), cybersecurity and harm to the network, and fraud or illegal collusion, to name but a few. They are recognised as relevant safety risks under selected pieces of safety legislation, in particular the Radio Equipment Directive (RED)Footnote 24 and the Medical Device Regulation (MDR).Footnote 25 Digital risks are also recognised in the Proposal for a Regulation on Machinery ProductsFootnote 26 and the Proposal for a Regulation on General Product Safety,Footnote 27 which are intended to replace the Directives currently in force. However, these (digital) risks will often primarily relate to the ‘physical’ dimension of safety, because data theft and manipulation or the breakdown of networks and other essential infrastructures will indirectly, at least in most cases, lead to damage to property in the broader sense or even threaten the health and life of persons.
b. Pure Economic Risks
Pure economic risksFootnote 28 are economic risks that are not merely the result of the realisation of physical risks, such as personal injury or property damage. Where medical AI causes a surgery to fail, resulting in personal injury and consequently in hospitalisation, the costs of hospitalisation are an economic harm, but not a ‘pure’ economic harm, because they result from the personal injury. Where, however, AI manipulates consumers and makes them buy overpriced products, the financial loss caused is not in any way connected with a safety risk and, therefore, qualifies as a pure economic risk (the resulting harm is commonly referred to as ‘pure economic loss’). For pure economic risks to be considered legally relevant outside the realm of contractual liability, most legal systems require additional elements, such as fraud or other illegal behaviour, or conduct that is considered socially unacceptable.Footnote 29 Pure economic risks, at least when legally relevant, might, therefore, be closer to fundamental rights risks.
III. AI As a Challenge to Existing Liability Regimes
1. Classification of Liability Regimes
While extra-contractual liability law has – beyond product liability law and a few specific areas – so far largely been a matter for the Member States, and while there exists a broad variety of liability regimes at national level, it is still possible to group liability regimes according to their general characteristics.
a. Fault Liability
Fault liability has been the most important pillar of extra-contractual liability in a majority of European jurisdictions.Footnote 30 Liability always requires a sufficient justification for shifting loss from the person who originally suffered the damage (the victim) to a person who caused the damage (the tortfeasor). In the case of fault liability, that justification is the tortfeasor’s fault, usually either intent or negligence, with many different shades and gradations such as gross negligence or recklessness. If damage is caused by mere negligence, further conditions must usually be met, otherwise liability could potentially escalate indefinitely. Jurisdictions use different tools in order to keep liability within reasonable boundaries. Often, there is a requirement that the potential tortfeasor’s conduct was somehow objectionable, that is, that it either violated the law or public policy, or infringed rights and legally protected interests whose absolute integrity is so vital that any kind of infringement must, per se, be considered presumably unlawful. The latter is usually the case where human life, health, or bodily integrity are at stake or where the infringement concerns clearly defined property rights.Footnote 31
b. Non-Compliance Liability
Liability may also be triggered by the infringement of particular laws or particular standards whose purpose includes the prevention of harm of the type at hand. We find this type of liability regime both at EU level and at national level. An example for non-compliance liability at EU level is Article 82 of the General Data Protection Regulation (GDPR),Footnote 32 which attaches liability to any infringement of the requirements set out by the GDPR. Further, yet very different, examples can be found in EU non-discrimination legislation such as Council Directive 2004/113/EC.Footnote 33 Non-discrimination law obliges Member States to introduce into their national legal systems the legal measures necessary to ensure real and effective compensation for loss and damage sustained by a person injured as a result of discrimination, in a way which is dissuasive and proportionate to the damage suffered. In this context, Member States must ensure that, when a plaintiff establishes facts from which it may be presumed that there has been direct or indirect discrimination, it shall be for the respondent to prove that there has been no breach of anti-discrimination law.Footnote 34 Another example of non-compliance liability can be found in the financial sector. Where issuers of a financial instrument do not publicly disclose inside information concerning them, they become liable for any damage caused by the failure to do so.Footnote 35
At the national level, there may be both general clauses attaching liability to the infringement of protective statutory provisionsFootnote 36 and specific liability regimes attaching liability to non-compliance with very particular standards. Non-compliance liability is always of an accessory nature; in other words, there needs to be a basic regime setting out in some detail the duties and obligations to be met in order to be considered compliant. It should also be noted that, in a number of national jurisdictions, efforts are being made to impose non-compliance liability only in cases where the potential tortfeasor was at fault.Footnote 37
c. Defect and Mal-Performance Liability
A number of different liability regimes in jurisdictions in Europe may be described as types of ‘defect liability’ (or, in the case of services, ‘mal-performance liability’), although this is certainly not a common technical term. In the extra-contractual realm, the most important form of defect liability is product liability, which has been harmonised by the Product Liability Directive (PLD).Footnote 38 Product liability does not require fault on the part of the producer, but it still requires a particular shortcoming in the producer’s sphere, in that it requires that the product put into circulation was defective at the time when it left that sphere. The development risk defence (i.e. the defence relying on the fact that the defect, according to the state of the art in science and technology, could not have been detected when the product was put into circulation), which Member States were free to implement or not, moves product liability somewhat into the vicinity of fault liability.Footnote 39
Product liability is only the most conspicuous form of defect liability and the one where the term ‘defect’ is in fact used. However, when looking more closely at liability regimes in national jurisdictions, it becomes apparent that there is a panoply of different forms of liability that are all based on the unsafe or otherwise objectionable state of a particular object within the liable person’s sphere of control. Many of these forms of liability are somewhat at the borderline between fault liability and defect liability, as they are based on a presumption of fault, which the liable person is free to rebut under particular circumstances. Even some forms of vicarious liability under national law may be qualified, at a closer look, as forms of defect or mal-performance liability. For example, vicarious liability may be based on the generally ‘unfit’ nature of the relevant auxiliary in terms of personality or skills,Footnote 40 or on the fact that the human auxiliary failed to meet a particular objective standard of care.
d. Strict Liability
The term ‘strict liability’, although often used with a broader meaning, should be reserved for such forms of liability that do not require any kind of defect or mal-performance but are more or less based exclusively on causation. At a closer look, some further requirements beyond causation may have to be met, such as that the risk that ultimately materialised was within the range of risks covered by the relevant liability regime, and there may possibly be defences, such as a force majeure defence.Footnote 41
Strict liability is usually imposed only in situations where significant and/or frequent harm may occur despite the absence of any fault or any identifiable defect, mal-performance, or other non-compliance. It is also imposed where such elements would be so difficult for the victim to prove that requiring such proof would lead to massive under-compensation or inefficiency. Paradigm cases are the operation of aircraft, railways, ships, or motor vehicles, although solutions in the EU Member States differ, as does the attitude towards a ‘general clause’ of strict liability for unforeseen but parallel cases.Footnote 42 While there are also examples in national law where something close to strict liability is extended to all objects,Footnote 43 this is more or less exceptional and often narrowed down by case law.
2. Challenges Posed by AI
The mass rollout of AI and related technologies poses numerous challenges to existing liability regimes. Some of these challenges have their origin in interconnectedness, which is not strictly related to AI, but to digital ecosystems more generally. Other challenges are truly specific to AI.
a. Liability for the Materialisation of Safety Risks
(i) ‘Complexity’, ‘Openness’, and ‘Vulnerability’ of Digital Ecosystems
With enhanced connectivity and data flows in the Internet of Things (IoT), everything potentially affects the behaviour of everything, and it may become close to impossible for a victim to prove what exactly caused the damage (‘complexity’Footnote 44). For example, where a smart watering system for the garden floods the premises, this may be the effect of the watering system itself being unsafe, but there might also have been an issue with a humidity sensor bought separately, or with the weather data supplied by another provider.
‘Openness’Footnote 45 refers to the fact that components are not static but dynamic and are subject to frequent or even continuous change. Products change their safety-relevant features after being put into circulation, for example through the online provision of updates as well as through a variety of different data feeds and cloud-based digital services. This, in fact, means that a victim may not get compensation under liability regimes such as the PLD, which exclusively refer to the point in time when a product was first put into circulation.Footnote 46
Connectivity also gives rise to increased ‘vulnerability’,Footnote 47 due to cyber security risks and privacy risks as well as a number of related risks, such as risks of fraud. However, as has been demonstrated by the short survey of existing liability regimes, such risks are not necessarily covered by liability because of a general focus on risks of a ‘physical’ nature such as death, personal injury, or property damage.
(ii) ‘Autonomy’ and ‘Opacity’
AI adds further challenges to an already challenging picture through the features of ‘autonomy’ and ‘opacity’. The term ‘autonomy’, whose use with regard to machines has often been criticised because of its inextricable link with the free human will, refers to a certain lack of predictability as far as the reaction of the software to unseen instances is concerned. It is in particular when coding of the software has occurred wholly or partially with the help of machine learningFootnote 48 that it is difficult to predict how the software will react to each and every situation in the future.Footnote 49
While unpredicted behaviour in new situations nobody had ever thought about may also occur with software of a traditional kind, algorithms created with the help of machine learning cannot easily be analysed, especially not when sophisticated methods of deep learning have been used. This ‘opacity’ of the codeFootnote 50 (‘black box effect’) means that it is not easy to explain why an AI behaved in a particular manner in a given situation, and even less easy to trace that behaviour back to any feature which could be called a ‘defect’ of the code or to any shortcoming in the development process.
Both autonomy and opacity make it difficult to trace harm back to any kind of intent or negligence on the part of a human actor, which is why fault liability is not an ideal response to risks posed by AI. However, it is also clear that emerging digital technologies, notably AI, make it increasingly difficult to identify a defect due to the autonomy of software and software-driven devices as well as the opacity of the code, which means that defect liability may not be a wholly satisfactory response either.
(iii) Strict and Vicarious Liability as Possible Responses
As the ‘autonomy’ and ‘opacity’ of AI may give rise to exactly the kind of difficulties strict liability is designed to overcome,Footnote 51 the further extension of strict liability to AI applications is increasingly being discussed. This would, at the same time, solve some of the problems associated with ‘complexity’, ‘openness’, and ‘vulnerability’ that come with the IoT. For instance, where it is unclear whether the flooding of the premises was due to a defect of the watering system itself, a humidity sensor, or a data feed, it is still clear that the water itself came from the pipes. Thus, if the legislator introduced strict liability for smart watering systems, this would mean that whoever is the addressee of this strict liability (e.g. the operator or the producer of the watering system) would have to compensate victims for harm suffered from water spread by the system. There have been extensive discussions as to who is the right addressee of liability, and as to which types of risks should ultimately be covered.Footnote 52
Similar effects may be achieved by extending vicarious liability to situations where sophisticated machines are used in lieu of human auxiliaries. Otherwise, parties could escape liability by outsourcing a particular task to a machine rather than to a human auxiliary.Footnote 53
For some time, there has been a debate whether to recognise that highly sophisticated robots and software agents may themselves be the addressees of liability. The idea of ‘electronic personhood’ was fuelled by a 2017 European Parliament resolution,Footnote 54 but the proposal has since been met with a great deal of resistance.Footnote 55 Some of the resistance had its roots in ethical considerations,Footnote 56 but there are also practical flaws. As addressees of liability, AI systems would have to be equipped with funds or with equivalent insurance, which means that electronic personhood is more an additional complication than a solution.Footnote 57 Another radical solution proposed is that of replacing liability schemes altogether with insurance or funds, so that those suffering harm from AI would be compensated by a general compensation scheme to which, in particular, producers and perhaps professional users would contribute.Footnote 58 However, it is now broadly accepted that such schemes could realistically only be implemented for very particular applications and fields, such as connected driving, but not across the board for a general purpose technology such as AI.Footnote 59
b. Liability for the Materialisation of Fundamental Rights Risks
The main challenge to existing liability schemes is that, due to their focus on safety risks, they are largely inadequate to address fundamental rights risks posed by AI. Where fundamental rights risks posed by AI materialise, there is often no fault on the part of those deploying the AI, and it may be close to impossible for a victim to prove that there was fault on the part of the producer. Defect liability, at least as it currently exists under the PLD and under national legal regimes, is entirely focussed on traditional safety risks. This holds true to an even greater extent for strict liability, which, for the time being, is almost exclusively restricted to physical risks. Extending vicarious liability to situations where sophisticated machines are deployed in lieu of human auxiliariesFootnote 60 may also help with regard to fundamental rights risks, as long as there is a basis for liability of the hypothetical human auxiliary. Non-compliance liability might possibly be an option, but beyond non-discrimination law, the GDPR, and unfair commercial practices law there is currently not much of a general compliance regime that could serve as a ‘backbone’ for AI liability. Of course, this ‘backbone’ could theoretically be created by the emerging AI safety legislation. This is why it is essential to analyse this legislation.
IV. The Emerging Landscape of AI Safety Legislation
While the debate on challenges posed by AI to existing liability regimes is still ongoing, the landscape of AI-relevant product safety law is already changing rapidly, as illustrated by the proposals for a new Machinery Regulation and for the AIA. It is important to understand the emerging safety regimes, because it is only against their background that liability regimes specifically tailored to AI can be properly designed.
1. The Proposed Machinery Regulation
a. General Aims and Objectives
The proposed Machinery Regulation aims at modernising the existing machinery safety regime harmonised by the Machinery Directive,Footnote 61 in particular with regard to new technologies. This concerns potential risks originating from direct human-robot collaboration, risks originating from connected machinery, the phenomenon that software updates affect the ‘behaviour’ of the machinery after its placing on the market, and the problems associated with the risk assessment of machine learning applications before the product is placed on the market. Also, the current regime harmonised by the Machinery Directive still envisages a driver or an operator responsible for the movement of a machine, but fails to set out requirements for autonomous machines. Needless to say, there were also developments to consider and inconsistencies to fix that were not directly related to software and AI. The current list of high-risk machines in Annex I to the Directive was elaborated 15 years ago and is urgently in need of an update.
b. Qualification As High-Risk Machinery
Within the product safety framework for machinery, the qualification of machinery products as high-risk machinery plays an important role. Amongst other changes, all software ensuring safety functions, including AI systems, and all machinery embedding AI systems ensuring safety functions have been added to the list of high-risk machinery in Annex I.Footnote 62 The fact that all safety components that are software components, and all machinery embedding AI for the purpose of ensuring safety functions, are now included in the list of high-risk machinery automatically means, under the proposed Machinery Regulation, that for this kind of machinery only third party certification will be accepted, even when manufacturers apply the relevant harmonised standards.
A machinery product is included in the list of high-risk machinery products if it poses a particular risk to human health. The notion of ‘safety’ therefore seems to refer exclusively to risks of a physical nature. The risk posed by a certain machinery product is, according to Article 5(3) of the Proposal, established based on the combination of the probability of occurrence of harm and the severity of that harm. Factors to be considered in determining the probability and severity of harm include the degree to which each affected person would be impacted by the harm, the number of persons potentially affected, the degree of reversibility of the harm, and indications of harm that have been caused in the past by machinery products which have been used for relevant purposes. However, there are also factors that go more in the direction of ‘fundamental rights risks’, such as the degree to which potentially affected parties are dependent on the outcome produced by the machinery product, and the degree to which potentially affected parties are in a vulnerable position vis-à-vis the user of the machinery product.
c. Essential Health and Safety Requirements
The essential health and safety requirements that must be met for conformity of high-risk machinery are listed in Annex III. Where machinery uses AI for safety functions, the conformity assessment must consider hazards that may be generated during the lifecycle of the machinery as an intended evolution of its fully or partially evolving behaviour or logic.Footnote 63 As far as human-machine collaboration is concerned, a machinery product with fully or partially evolving behaviour or logic that is designed to operate with varying levels of autonomy must be adapted to respond to people adequately and appropriately; this must occur verbally through words or nonverbally through gestures, facial expressions, or body movement. It must also communicate its planned actions (what it is going to do and why) to operators in a comprehensible manner.Footnote 64
Largely, however, AI-specific aspects are referred to in the future AIA, that is, where the machinery product integrates an AI system, the machinery risk assessment must consider the risk assessment for that AI system that has been carried out pursuant to the AIA.Footnote 65
2. The Proposed Artificial Intelligence Act
a. General Aims and Objectives
The AIA Proposal of 21 April 2021 aims at ensuring that AI systems placed on the Union market and used in the Union are safe and respect existing law on fundamental rights and Union values, and at enhancing governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems. At the same time, efforts are being made to ensure legal certainty in order to facilitate investment and innovation in AI and to facilitate the development of a single market for AI applications and prevent market fragmentation. The AIA is complementary to existing data protection law (in particular the GDPR and the Law Enforcement DirectiveFootnote 66), non-discrimination law, and consumer protection law.
As regards high-risk AI systems that are safety components of products, the AIA will be integrated into existing and future product safety legislation. For high-risk AI systems related to products covered by New Legislative Framework (NLF) legislation (e.g. machinery, medical devices, toys), the requirements for AI systems set out in the AIA will be checked as part of the existing conformity assessment procedures under the relevant NLF legislation.Footnote 67 The latter may, at the same time, include further AI-specific requirements relevant only in a particular sector. AI systems related to products covered by relevant ‘old approach’ legislation (e.g. aviation, motor vehicles),Footnote 68 however, are not directly covered by the AIA.Footnote 69
b. The Risk-Based Approach
The AIA Proposal follows a risk-based approach, differentiating between uses of AI that create an unacceptable risk, a high risk, a limited risk, and a low or minimal risk.
(i) Prohibited AI Practices
Title II lists some narrowly defined AI systems whose use is considered unacceptable as contravening EU values and violating fundamental rights, such as manipulation through subliminal techniques or exploitation of group-specific vulnerabilities (e.g. children) in a manner that is likely to cause affected persons psychological or physical harm. The Proposal also prohibits general-purpose social scoring by public authorities and, subject to a range of exceptions, the use of ‘real time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes.Footnote 70
(ii) High-Risk AI Systems
Title III contains mandatory essential requirements for AI systems qualified as ‘high-risk’ AI systems, defined as systems that create a high risk to the health and safety or fundamental rights of natural persons. There are two main categories of high-risk AI systems: AI systems used as a safety component of products that are subject to third party ex ante conformity assessment under NLF legislation listed in Annex II; and other stand-alone AI systems explicitly listed in Annex III. The systems listed in Annex III, as it currently stands, more or less exclusively address fundamental rights risks. This includes biometric identification and categorisation of natural persons; education and vocational training; employment, workers management and access to self-employment; access to, and enjoyment of, essential private services, public services, and benefits; law enforcement; migration, asylum and border control management; and administration of justice and democratic processes. The only exception is the ‘management and operation of critical infrastructure’Footnote 71 as the latter poses a systemic risk of a more physical nature rather than a fundamental rights risk.
The Commission may, from time to time, expand the list of high-risk AI systems used within certain pre-defined areas, by applying a set of criteria and a risk assessment methodology. The risk assessment criteria listed in Article 7(2) are similar to those listed in the relevant Article of the proposed Machinery Regulation,Footnote 72 with two main exceptions: reference is made not only to risks for the health of persons, but also to risks for the ‘health and safety or … fundamental rights’. Also, an additional criterion to consider is the extent to which existing Union legislation already provides for effective measures of redress in relation to the risks posed by an AI system (with the exclusion of claims for damages) and the existence of effective measures to prevent or substantially minimise those risks. For the purpose of future classification of additional AI systems as ‘high-risk’ systems, safety risks and fundamental rights risks are treated in the same manner and are not dealt with separately.
(iii) AI Systems Subject to Specific Transparency Obligations
Title IV is devoted to AI systems that are subject to enhanced transparency obligations. This concerns, for example, AI systems that may be mistaken for human actors, deep fakes, emotion recognition systems, and biometric categorisation systems.Footnote 73 It is important to note, though, that Titles III and IV are not mutually exclusive, i.e. an AI system that qualifies as a ‘high-risk’ system for the purpose of Title III may still fall under Title IV as well.
c. Legal Requirements and Conformity Assessment for High-Risk AI Systems
Legal requirements set out in Title III for high-risk AI systems address data and data governance, documentation and record keeping, transparency and provision of information to users, human oversight, robustness, accuracy, and security. By and large, the same requirements apply to the AI system irrespective of whether it is a safety component of a toy robot or of a connected household device falling under the RED, or a system intended to be used for selecting and evaluating applicants in the course of a recruitment procedure. This may not be particularly convincing, because the safety requirements for the toy robot or the connected household device are very different from those for the recruitment software. However, due to the general nature of the requirements and obligations listed in the Proposal, it may still be the better choice to deal with the two risk categories under identical provisions.
Obligations with regard to these requirements are largely placed on producers (called ‘providers’) of high-risk AI systems, but proportionate obligations are also placed on (professional) users and other participants across the AI value chain (such as importers, distributors, and authorised representatives) consistent with other modern product safety legislation. The Proposal sets out a framework for notified bodies to be involved as independent third parties in conformity assessment procedures. AI systems used as safety components of products regulated under the NLF, such as machinery or toys, are subject to the same compliance and enforcement mechanisms of the products of which they are a component, but in the course of applying these mechanisms the requirements imposed by the AIA must be ensured as well. New ex ante re-assessments of the conformity will be needed in case of substantial modifications to the AI systems.
As regards stand-alone high-risk AI systems, which are currently not covered by product safety legislation, a new compliance and enforcement mechanism is established along the lines of existing NLF legislation. However, with the exception of remote biometric identification systems, such high-risk AI systems are only subject to self-assessment of conformity by the providers. The justification provided in the explanatory notesFootnote 74 is that the combination with strong ex post enforcement would be an effective and reasonable solution, given the early phase of the regulatory intervention and the fact that the AI sector is very innovative and expertise for auditing is only now being accumulated.Footnote 75
V. The Emerging Landscape of AI Liability Legislation
While Commission proposals on AI liability, which were initially planned for the first quarter of 2022, have meanwhile been postponed to the third quarter of 2022, a draft Regulation by the European Parliament has been on the table since October 2020.Footnote 76 It was prepared in parallel with the Commission’s White Paper on AI and the preparatory work for the AIA Proposal and has clearly been influenced by work at Commission level.
1. The European Parliament’s Proposal for a Regulation on AI Liability
The cornerstone of the EP Proposal for the regulation of AI liability is a strict liability regime for the operators of ‘high-risk’ AI systems enumeratively listed in an Annex, accompanied by an enhanced regime of fault liability for the operators of other AI systems.
a. Strict Operator Liability for High-Risk AI Systems
According to Article 4 of the EP Proposal, operators of AI systems shall be strictly liable for any harm or damage that was caused by a physical or virtual activity, device, or process driven by an AI system. The EP Proposal ultimately adopted the division into ‘frontend operator’ (i.e. the person deploying the AI system) and ‘backend operator’ (i.e. the person that continuously controls safety-relevant features of the AI system, such as by providing updates or cloud services) that had been developed by the author of this paper and included in the 2019 EG-NTF report.Footnote 77 According to the final version of the EP Proposal, not only the frontend operator, but also the backend operator may become strictly liable. However, the backend operator is subject to strict liability only insofar as its liability is not already covered by the PLD.Footnote 78 The only defence available to the operator is force majeure.Footnote 79 For the AI systems subject to strict liability, mandatory insurance is being proposed.Footnote 80
‘High-risk’ AI systems for the purpose of the proposed Regulation are to be exhaustively listed in an Annex. Interestingly, the final version of the Proposal was published with the Annex left blank. The Annex attached to the first published draft from April 2020 had met with heavy resistance due to its many inconsistencies, and it may have proved too difficult to agree on a better version. Also, it seemed opportune to wait for the list of ‘high-risk’ AI applications that would be attached to the AIA. In any case, given the rapid technological developments and the required technical expertise, the idea is that the Commission should review the Annex without undue delay, but at least every six months, and if necessary, amend it through a delegated act.Footnote 81
b. Enhanced Fault Liability for Other AI Systems
The EP Proposal includes not only a strict liability regime for ‘high-risk’ applications, but also a harmonised regime of rather strict fault liability for all other AI systems. Article 8 provides for fault-based liability for ‘any harm or damage that was caused by a physical or virtual activity, device or process driven by the AI-system’, and fault is presumed (i.e. it is for the operator to show that the harm or damage was caused without his or her fault).Footnote 82 In doing so, the operator may rely on either of the following grounds: The first ground is that the AI-system was activated without his or her knowledge while all reasonable and necessary measures to avoid such activation outside of the operator’s control were taken. The second ground is that due diligence was observed by performing all the following actions: selecting a suitable AI-system for the right task and skills, putting the AI-system duly into operation, monitoring the activities, and maintaining the operational reliability by regularly installing all available updates. It looks as if these two grounds are the only grounds by means of which operators can exonerate themselves, but Recital 18 also allows for a different interpretation, namely, that the two options listed in Article 8(2) should just facilitate exoneration by establishing ‘counter-presumptions’.
The proposed fault liability regime is problematic not only because of the lack of clarity in drafting, but also because Article 8(2)(b) might be unreasonably strict, as it seems that the operator must demonstrate due diligence in all aspects mentioned, even if it is clear that lack of an update cannot have caused the damage. More importantly, in the absence of any restriction to professional operators, even consumers would face this type of enhanced liability for any kind of AI device, from a smart lawnmower to a smart kitchen stove. This would mean burdening consumers with obligations to ensure that updates are properly installed, irrespective of their concrete digital skills, and possibly confronting them with liability risks they would hardly ever have had to bear under national legal systems.
c. Liability for Physical and Certain Immaterial Harm
Article 2(1) of the Proposal declares the proposed Regulation to apply where an AI system has caused ‘harm or damage to the life, health, physical integrity of a natural person, to the property of a natural or legal person or has caused significant immaterial harm resulting in a verifiable economic loss’. Article 3(i) provides for a corresponding definition of ‘harm or damage’. While life, health, physical integrity, and property were clearly to be expected in such a legislative framework, the inclusion of ‘significant immaterial harm resulting in a verifiable economic loss’ came as a surprise. If immaterial harm or the economic consequences resulting from it – such as loss of earnings due to stress and anxiety that do not qualify as a recognised illness – is compensated through a strict liability regime whose only threshold is causation,Footnote 83 the situations where compensation is due are potentially endless and difficult to cover by way of insurance.Footnote 84
This is so because there is no general duty not to cause significant immaterial harm of any kind to others, unless it is caused by way of non-compliant conduct (such as by infringing the law or by intentionally acting in a way that is incompatible with public policy). For instance, where AI used for recruitment procedures leads to a recommendation not to employ a particular candidate, and if that candidate, therefore, suffers economic loss by not receiving the job offer, full compensation under the EP Proposal for a Regulation would be due even if the recommendation was absolutely well-founded and if there was no discrimination or other objectionable element involved. While some passages of the report seem to choose somewhat more cautious formulations, calling upon the Commission to conduct further research,Footnote 85 Recital 16 explains very firmly that ‘significant immaterial harm’ should be understood as meaning harm as a result of which the affected person suffers considerable detriment, an objective and demonstrable impairment of his or her personal interests and an economic loss calculated having regard, for example, to annual average figures of past revenues and other relevant circumstances.
2. Can the EP Proposal be Linked to the AIA Proposal?
The 2020 White Paper on AI, the EP’s 2020 Proposal for an AI Liability Regulation, and the 2021 Commission Proposals for an AIA and for a new Machinery Regulation clearly have a number of parallels. They range from some identical terminology (e.g. ‘AI system’, ‘high-risk’) to the legislative technique of exhaustively listing ‘high-risk’ AI systems in an Annex, combined with the option for the European Commission to amend the Annex in a rather flexible procedure through delegated acts. So the question arises whether it would be possible to link an AI liability regime along the lines of the EP Proposal with the AIA Proposal in a way that the legal requirements and obligations perspective matches the liability perspective.
a. Can an AI Liability Regulation Refer to the AIA List of ‘High-Risk’ Systems?
The first question that arises is whether the list of ‘high-risk’ AI systems in the AI Liability Regulation can be identical to the list of ‘high-risk’ AI systems under the AIA. However, as tempting as it may be to simply refer to the AIA, it would lead to overreaching and inappropriate results. The justification for imposing strict liability, namely that the relevant product or activity leads to significant and/or frequent harm despite the absence of any fault or of any identifiable defect, mal-performance, or non-compliance, does not coincide with the justification for imposing particular precautionary measures against unsafe products. While the AI systems for which strict liability is justified will most likely be a subset of the AI systems for which enhanced safety measures are justified, far from all AI systems of the latter type should be included in a strict liability regime, for example, when they are normally safe except when clearly defective. This is underlined by the fact that the relevant players are not identical. While safety requirements are primarily addressed at the level of producers (‘providers’ in the AIA terminology), the EP Proposal suggests imposing strict AI liability primarily on the frontend operators (‘users’ in the AIA terminology), but also on the backend operators (a concept missing in the AIA). So even if something along the lines of the EP Proposal became the law it would be imperative to draft a liability-specific Annex defining ‘high-risk’ AI systems specifically for liability purposes. This could, for example, include big AI-driven cleaning or lawnmower robots used in public spaces, but not a small vacuum cleaner or toy robot.
b. Can the AIA Keep Liability for Immaterial Harm within Reasonable Boundaries?
As concerns fundamental rights risks, the current approach taken by the EP Proposal, which considers strict liability (alongside fault liability) for ‘significant immaterial harm that results in a verifiable economic loss’, has already been discarded earlier in this chapterFootnote 86 because of its failure to keep liability within any reasonable boundaries. However, the question arises whether the AIA Proposal can now assist in solving this problem.
One way of attaching liability immediately to the AIA Proposal seems to be attaching liability to the engagement in any prohibited AI practice within the meaning of Title II of the AIA Proposal, which could lead to the compensation of both material and immaterial harm thereby caused. This would be a model of non-compliance liability and fit easily into existing non-discrimination, data protection, and consumer protection legislation, all of which provide for liability for damages where harm has been caused by the engagement in prohibited practices.
Another option would be to restrict liability for immaterial harm to cases of non-conformity with the legal requirements in Title III Chapter 2 of the AIA. For instance, where training, validation, or testing data for recruitment AI fail to be relevant, representative, free of errors, and complete, as required by Article 10(4) of the AIA Proposal, the provider could be liable if an applicant was falsely filtered out by the system despite being objectively better qualified. However, it soon transpires that the legal requirements included in Title III Chapter 2 of the AIA Proposal are not optimally suited as a basis for defect liability. This is because many of the requirements are not ends in themselves, whose breach would automatically mean that an AI system violates fundamental rights. Rather, some of them resemble due diligence standards that must be met during AI development, either as a quality-enhancing measure (e.g. data governance) or to facilitate monitoring (e.g. record-keeping). Non-conformity with such requirements could, therefore, justify a shift of the burden of proof, but should not in itself trigger liability. Thus, in the case of the recruitment AI system, non-conformity of training data with Article 10 should not lead to a final determination of liability but rather to the presumption that the resulting AI was defective.
VI. Possible Pillars of Future AI Liability Law
If the AIA Proposal as it currently stands is not optimally suited for functioning as a ‘backbone’ for AI liability, this does not mean that the AIA as such cannot fulfil this function. Upon a closer look, not much would have to be changed in the AIA to make it an appropriate basis for future legal regimes on AI liability. At the end of the day, liability for damages caused by AI systems may have to rest on different pillars, all of which would have to rely on, or at least be aligned with, provisions in the AIA and further product safety and other law.
1. Product Liability for AI
The first obvious link between the AIA (and other product safety law) on the one hand and liability law on the other could be established within product liability law, which relies on the PLD. Meanwhile, it is widely accepted that the PLD must in any case be adapted to the challenges of digital ecosystems at large.Footnote 87
a. Traditional Safety Risks
With regard to the reform of the PLD, the debate has so far been focused entirely on safety risks. Already with regard to these risks, the PLD as it currently stands is not fit to meet the challenges posed by digitalisation, not least in the light of uncertainties with regard to its scope (e.g. concerning self-standing software, including AI) and its focus on the point in time when a product is put into circulation, which fails to take into account updates, data feeds, and machine learning.Footnote 88 Where AI is involved, a victim may face particular difficulties showing that the AI system was defective. This is why no defect of the AI should have to be established by the victim for AI-specific harm caused by AI-driven products. Rather, it should be sufficient for the victim to prove that the harm was caused by an incident that might have something specifically to do with the AI (e.g. the cleaning robot making a sudden move in the direction of the victim) as contrasted with other incidents (e.g. the victim stumbling over the powered-off cleaning robot).Footnote 89
b. Product Liability for Products Falling Short of ‘Fundamental Rights Safety’?
As has been pointed out, the AIA Proposal also addresses fundamental rights risks. This raises the question whether product liability might also, in the future, include liability for products with a ‘fundamental rights defect’ or products falling short of ‘fundamental rights safety’.
The legal requirements described in Title III Chapter 2 of the AIA Proposal address some cloudy notion of ‘adverse impact on the fundamental rights’ of persons, including non-discrimination and gender equality, data protection and privacy, and the rights of the child. However, they fail to state – either in a positive or in a negative manner – what exactly the legal requirements are designed to achieve or to prevent. It is rather obvious that discrimination as far as prohibited by EU non-discrimination law, or data processing as far as prohibited by EU data protection law, is among the core effects to be prevented. However, given the much more ‘fuzzy’ nature of fundamental rights risks as compared with traditional safety risks, and given that there is a floating spectrum of beneficial or adverse impact on a broad variety of different fundamental rights, it is very difficult to impose liability for the materialisation of fundamental rights risks as such.
In order to achieve liability for the materialisation of fundamental rights risks as such, the first step must be to formulate an equivalent to the established concept of ‘safety’ in traditional product safety legislation. As far as traditional safety risks are concerned, Article 6(1) of the PLD can simply state: ‘A product is defective when it does not provide the safety which a person is entitled to expect, taking all circumstances into account […]’, implicitly referring to the bulk of existing product safety law that is designed to protect ‘the safety and health of persons’ and similar traditional notions of safety. A corresponding concept of ‘fundamental rights safety’ could theoretically be derived from the AIA, in particular from the requirements for high-risk AI systems listed in Chapter 2 of Title III of the current proposal. However, in order to make these requirements operational for purposes of liability law they would have to be divided into two groups. Requirements which constitute ‘AI-specific safety’ (which would, by and large, be the requirements listed in Articles 13 through 15 of the draft AIA) would have to be seen as clearly separated from the requirements that are about managing safety (mostly Article 9), increasing the likelihood of safety (selected aspects of which are listed in Article 10), or documenting safety (Articles 11 and 12). Shortcomings in the technical documentation or in logging capabilities, for instance, should not be seen as a lack of ‘fundamental rights safety’ as such, but should rather trigger proof-related consequences in the liability context. Where technical documentation or logging capabilities are missing, or where the producer withholds logging data that would be available and potentially relevant, there could be a presumption that the missing information would have been to the detriment of the producer.
Where, on the other hand, an AI system is not as accurate and robust as stated in its description or as could reasonably be expected from an AI system of the relevant kind, and harm occurs as a result (e.g. recruitment software assessing candidates has a strong gender bias, with the result that female applicants are discriminated against), this lack of accuracy or robustness might trigger liability of the provider under an extended scheme of product liability. Designing such an extended scheme of product liability would, without doubt, remain challenging.
2. Strict Operator Liability for ‘High-Physical-Risk’ Devices
As far as death, personal injury, or property damage caused by a ‘high-risk’ product that includes AI for safety-relevant functions is concerned, strict liability seems to be a proper response. Again, the question arises whether the AIA can be made operational for the purposes of liability law.
a. Why AI Liability Law Needs to be More Selective than AI Safety Law
As has already been pointed out,Footnote 90 not every product that qualifies as a ‘high-risk’ product under the AIA fulfils the requirements that should be met for justifying strict liability (and the accompanying burden of insurance). For instance, a small robot vacuum cleaner may, under the future Machinery Regulation (if the current draft were enacted as is), be automatically classified as ‘high-risk’ and be subject to third party conformity assessment. It would, therefore, at least if the AI component fulfils a safety function, automatically be classified as a ‘high-risk’ AI system under the AIA as well. Similarly, a toy robot vehicle for children using AI for a safety function would be qualified as ‘high-risk’ under the AIA in cases where that toy is subject to third party conformity assessmentFootnote 91 (e.g. in any case where no harmonised standards exist that cover all safety requirements, or the producer has deviated from the standard).Footnote 92
However, it would arguably be exaggerated to impose strict liability for harm caused by small toy robots or robot vacuum cleaners, in particular if that strict liability is imposed on operators. Those machines hardly ever cause significant physical harm by themselves, and if they do, it is usually because it was improper for the (frontend) operator to deploy them in the particular situation, such as where the operator of a retirement home uses an unsupervised cleaning robot in places and at times when elderly residents might stumble over it. Another possibility is that the machine is defective, for example, the vacuum cleaner, which is normally only used during the night in areas that are locked for residents, suddenly breaks loose and starts hoovering when elderly residents are leaving the dining room. The problem is not so much that it would be inappropriate in the case of the retirement home to make its operator strictly liable for damage caused by the cleaning robot. Rather, the problem is that if all operators of small vacuum cleaner robots (including the millions of businesses that use them for cleaning their office space during the night, or even consumers) had to face strict liability and had to take out corresponding insurance, this would be extremely inefficient and benefit no one but the insurance industry.
b. Differentiating ‘High-Risk’ and ‘High-Physical-Risk-As-Such’
The AIA could, therefore, be made fully operational as a ‘backbone’ to AI liability law if its Article 6 with Annex II drew a distinction between AI systems that are – for whatever inner logic the relevant sectoral NLF product safety legislation may follow – subject to third party conformity assessment, and AI systems that create a high physical risk as such. Needless to say, the two groups would not be mutually exclusive, as AI systems that create a high physical risk as such will often be subject to third party conformity assessments under the relevant product safety law. On the other hand, it will often be AI systems governed by ‘old approach’ legislationFootnote 93 that pose a high physical risk to the safety of persons as such. This means that the AIA could provide a better basis for AI liability law if these two groups of AI systems could be separated and better differentiated, either by way of restructuring and slightly redrafting Article 6 and Annex II or by drawing that distinction in a separate legal instrument on AI liability.
c. Avoiding Inconsistencies with Regard to Human-Driven Devices
However, it should also be borne in mind that strict liability for physical risks caused by AI-driven devices might create significant inconsistencies if not accompanied by strict liability for the same type of devices where those devices are not AI-driven but steered by humans or by technology other than AI. A victim run over by a vehicle does not care that much whether the vehicle was AI-driven or not. So if strict liability is found to be appropriate for a particular type of device of a certain minimum weight running at a certain minimum speed in public spaces (or other spaces where they typically come into contact with persons not involved in the operation), this will normally be the case irrespective of whether the device is human-driven or AI-driven. For instance, large cleaning machines, lawnmowers, or delivery vehicles in public spaces might generally have to be included in strict liability regimes even where, in the relevant jurisdiction, this is so far not the case. So a strict liability regime should, at the end of the day, not be restricted to AI systems.
3. Vicarious Operator Liability
Vicarious liability in the sense of liability for the acts and omissions of others, such as (human) auxiliaries, might be yet another pillar of future AI liability.
a. The ‘Accountability Gap’ that Exists in a Variety of Contexts
Part of the problem with existing liability regimes in Member States is associated with the absence, in most legal systems, of vicarious liability for the mal-functioning of machines. Where a human cleaner knocks over a person passing by, or where a human bank clerk miscalculates a customer’s credit score, there is usually fault liability of either the human auxiliary that was acting, or their employer, or both. Where, however, the person passing by is knocked over by a cleaning robot, or the credit score miscalculated by credit scoring AI, it is well possible that no one is liable at all. The AI system itself cannot be liable, but its operator may not be liable either if that operator can demonstrate that they have bought the AI system from a recognised provider and complied with all monitoring and similar duties. The producer will often not be liable as a defect in the AI system is often difficult to prove, and in any case product liability (unless it is significantly extended) only covers personal injury and property damage.
Vicarious liability would be a solution, but the rules on liability for acts or omissions of others differ vastly across the Member States and some courts insist that this kind of liability remains restricted to human auxiliaries.Footnote 94 Due to the fact that the application of vicarious liability, either directly or by analogy, is uncertain, an ‘accountability gap’ may exist, as very harmful activities could be conducted without anyone taking responsibility. This concerns both contexts where fault liability would normally apply and contexts where there would be non-compliance liability, and possibly other contexts.
b. Statutory or Contractual Duty on the Part of the Principal
Vicarious AI liability can only go as far as the operator of the AI would itself be liable, under national law, for violation of the same standard of conduct. This means that there must exist some statutory or contractual duty, in particular a duty of diligence, on the part of the operator. Such duties may exist in a variety of contexts, from professional care to recruitment to credit scoring to pricing, and vicarious liability may become relevant for a variety of legal frameworks, from traditional areas of tort law to non-discrimination law to data protection law to consumer and competition law.
Such duties could also follow from the AIA. It is, in particular, the engagement in prohibited AI practices that should lead to liability, irrespective of whether the operator was acting intentionally or negligently with regard to the fact that, for example, the AI was exploiting age-specific vulnerabilities. With an associated liability scheme in mind, it becomes even more apparent, though, that the very ‘pointillistic’ style of Title II of the AIA Proposal is a problem and that, if fundamental rights protection is taken seriously, it would have been necessary to have a more complete list of blacklisted AI practices plus ideally a general clause to cover unforeseen cases.
c. A Harmonised Regime of Vicarious Liability
A new European scheme of vicarious liability might restrict itself to ensuring that a principal that employs AI for a sophisticated task faces the same liability under existing Member State law as a principal that employs a human auxiliary.Footnote 95 For example, a professional user of an AI system would be liable for harm caused by any lack of accuracy or other shortcomings in the operation of the system to the same extent as that user would be liable (under the applicable national law) for the acts or omissions of a human employee mandated with the same task as the AI system. Where a human would not have been able to fulfil the same task, such as where the task requires computing capabilities exceeding those of humans, the point of reference for determining the required level of performance would be available comparable technology which the user could be expected to use.Footnote 96
However, the EU legislator could also go one step further and introduce a fully harmonised concept of vicarious liability that does not suffer from the outset from the shortcomings we see in existing national concepts. By and large, this new European scheme of vicarious liability could provide that a business or public authority is liable for damage caused by its human auxiliaries acting within the scope of their functions, or any AI employed by the business or public authority, where these auxiliaries or AI fail to perform – for whatever reason – at the standard that could reasonably be expected from them.Footnote 97 This comes close to strict liability insofar as it requires neither fault nor a defect (or general lack of reliability in the case of human auxiliaries), but some output that does not meet the standards of conduct to be expected from a business or public authority in the fulfilment of their functions. What this level of quality is depends on the task to be fulfilled. For instance, where the task is assessing the creditworthiness of a customer seeking credit, the duty would be to provide a proper assessment along the lines of any criteria prescribed by the law or stated by the business; where the task is assessing candidates for a vacant position, it is again about assessing them properly, without any prohibited discrimination and duly taking into account the qualifications required for the position. Vicarious liability would, in any case, cover both safety risks and fundamental rights risks.
4. Non-Compliance and Fault Liability
Last but certainly not least, non-compliance and fault liability can also play an important role in the future landscape of liability for AI. In very much the same manner as Article 82 of the GDPR provides for liability of a controller or processor where that controller or processor violates their obligations under the GDPR, there could be liability under the AIA, or in a separate piece of legislation, where a provider, user or other economic operator covered by the AIA fails to comply with relevant AIA provisions, thereby causing relevant harm. This non-compliance liability might complement general fault liability, which would continue to co-exist as a general baseline for extra-contractual liability. A breach of a duty of care that would constitute negligence could include deploying AI for a task it was not designed for, failing to provide for appropriate human oversight and other safeguards, or failing to provide for necessary long-term monitoring and maintenance. Non-compliance liability and fault liability could also be merged, such as by alleviating the burden of proof for the victim under fault liability, or even reversing that burden, where obligations under the AIA have failed to be complied with.
VII. Conclusions
The potential risks associated with AI appear as normally falling into one of two dimensions: (a) ‘safety risks’ (i.e. death, personal injury, damage to property etc.) caused by unsafe products and activities involving AI and (b) ‘fundamental rights risks’ (i.e. discrimination, total surveillance, manipulation, exploitation, etc.), including risks for society at large, caused by inappropriate decisions made with the help of AI or otherwise inappropriate deployment of AI. While safety risks are highly relevant also in the AI context, fundamental rights risks are much more AI-specific.
Existing extra-contractual liability regimes can essentially be divided into four categories: fault liability, non-compliance liability, defect or mal-performance liability, and strict liability in the narrower sense. Vicarious liability can normally also be analysed as falling into one of these categories. Three out of the four categories of liability regimes are either restricted to, or heavily focused on, traditional safety risks such as death, personal injury, or property damage. It is only non-compliance liability, such as can be found in the GDPR or as an annex to EU non-discrimination law or consumer protection law, that frequently addresses also harm resulting from fundamental rights risks. Despite the fact that fundamental rights risks are more AI-specific, liability for such risks seems to be largely uncharted territory, and the debate around liability for AI has largely been restricted to safety risks.
At the level of AI safety law, fundamental rights risks are now being addressed by way of prohibiting certain AI practices and by imposing mandatory legal requirements for other ‘high-risk’ AI systems, such as concerning data and data governance, transparency, and human oversight. While it is not impossible to use the emerging AI safety regime as a ‘backbone’ for the future AI liability regime, the AIA proposal, as it currently stands, is not optimally suited to help address liability for fundamental rights risks.
The future AI liability law could rest on several different pillars, such as: (a) a revised regime of product liability, which might even include liability for lack of ‘fundamental rights safety’; (b) strict operator liability for death, personal injury, property damage, and possibly further safety risks caused by ‘high-physical-risk’ devices; (c) vicarious operator liability for mal-performance of functions carried out in the course of business activities or activities of a public authority; and (d) fault and/or non-compliance liability for the operator's own negligence and/or failure to comply with obligations following from, in particular, the AIA.
While it would be desirable to have an AI safety regime that allows an AI liability regime to dock on, it becomes apparent that the AIA Proposal has, regrettably, not been drafted with liability law in mind. Further negotiations about the AIA Proposal and the preparatory work on a future AI liability regime as well as on a potential revision of the PLD should, for the sake of consistency of Union law and of legal certainty, be more closely aligned.
I. Introduction
On 2 October 1997, the Member States of the European Union (EU) signed the Treaty of Amsterdam and endowed the European legislature with a competence in the field of private international law that is now found in Article 81(2)(c) of the Treaty on the Functioning of the European Union.Footnote 1 In the following two decades, the EU created an expanding body of private international law.Footnote 2 In particular, the Rome II Regulation on the law applicable to non-contractual obligations was enacted on 11 July 2007.Footnote 3 Only eleven months later, the Rome I Regulation on the law applicable to contractual obligations was adopted.Footnote 4 Although both Regulations are already rather comprehensive, gaps as well as inconsistencies remain.Footnote 5 In light of the rapid technological development since 2009, the issue as to whether there is a need for specific rules on the private international law of artificial intelligence (AI) has to be addressed.Footnote 6 After the European Parliament’s JURI Committee had presented a proposal for a civil liability regime for AI in April 2020,Footnote 7 the European Parliament adopted – with a large margin – a pertinent resolution with recommendations to the Commission on 20 October 2020.Footnote 8 This resolution is part of a larger regulatory package on issues of AI.Footnote 9 The draft regulation (DR) proposed in this resolution is noteworthy not only with regard to the rules on substantive law that it contains,Footnote 10 but also from a choice-of-law perspective because it introduces new, specific conflicts rules for AI-related aspects of civil liability.Footnote 11 In the following contribution, I analyse and evaluate the European Parliament’s proposal against the background of the already existing European regulatory framework on private international law, in particular the Rome I and II Regulations.
II. The Current European Framework
1. The Goals of PIL Harmonisation
The basic economic rationale underlying the Rome II Regulation is succinctly captured in its Recital 6, which reads as follows:
The proper functioning of the internal market creates a need, in order to improve the predictability of the outcome of litigation, certainty as to the law applicable and the free movement of judgments, for the conflict-of-law rules in the Member States to designate the same national law irrespective of the country of the court in which an action is brought.
This Recital epitomises the basic tenet of the methodology developed by Friedrich Carl von Savigny in the nineteenth century, in other words, the goal of international decisional harmony.Footnote 12 The Commission’s explanation for its Rome II draft of 2003 is even more explicit with regard to the deterrence of forum shopping: unless conflicts rules for non-contractual obligations become unified, ‘[t]he risk is that parties will opt for the courts of one Member State rather than another simply because the law applicable in the courts of this State would be more favourable to them.’Footnote 13 The explanation for the draft of 2003 also makes clear that a unification of tort conflicts rests on a sound economic rationale, the reduction of transaction costs borne by the parties. A European Regulation on tort conflicts ‘allows the parties to confine themselves to studying a single set of conflict rules, thus reducing the cost of litigation and boosting the foreseeability of solutions and certainty as to the law.’Footnote 14 This rationale is particularly important for tort conflicts, because, contrary to contract conflicts, a choice of the applicable law ex ante was traditionally not available in many jurisdictions.Footnote 15 Even if the parties enjoy that possibility, they will frequently not be able to exercise this right because they do not anticipate an accident to happen.Footnote 16 Accordingly, clear objective conflicts rules have significantly greater weight in tort than in contract cases.Footnote 17 This is an important factor facilitating the emergence of new technologies with cross-border implications, such as driverless cars.Footnote 18
Moreover, the force of a practical example that would emanate from a successful codification of European conflicts rules on AI must not be underestimated. Although the initial American reaction towards the Rome II Regulation was rather critical, denouncing the final text as a ‘missed opportunity’ to transplant US doctrines to Europe,Footnote 19 there is a palpable transatlantic interest in recent European developments and the lessons that these may hold for the United States.Footnote 20 A well-known American conflicts scholar even recommended the European codification of tort conflicts as a model for further US legislation.Footnote 21 While the ‘end of history’ for private international law (i.e. a full convergence of US and European conflict of laws in torts)Footnote 22 is still a long way off, a successful EU legislation on the law applicable to liability issues of AI will certainly increase the prospects for creating harmonised conflicts rules in this area on a global level.
2. The Subject of Liability
Both the Rome I and II Regulations only address the liability of natural personsFootnote 23 and ‘companies and other bodies, corporate or unincorporated’.Footnote 24 Thus, the question arises as to whether an AI system could be classified as another ‘unincorporated body’ within the meaning of these provisions.Footnote 25 There is a parallel discussion about attributing legal personality to AI-systems in substantive private law.Footnote 26 Although the mere wording of the English version of the Rome I and II Regulations would arguably allow such an innovative interpretation, other linguistic versions suggest a narrower, more traditional reading of the Regulations (e.g. the German one, which speaks of ‘Gesellschaften, Vereine und juristische Personen’). Since the law applicable to legal personality is not yet determined by EU private international law, but remains subject to domestic choice-of-law rules within the boundaries of the freedom of establishment,Footnote 27 it would be unwise to burden the Rome I and II Regulations with a regulatory aspect that is, from the point of view of international contract and tort law, merely an incidental question. Thus, the law applicable to legal personality will have to be determined by other measures, e.g. by a regulation based on the draft presented by the European Group for Private International Law in 2016.Footnote 28
3. Non-Contractual Obligations: The Rome II Regulation
a. Scope
The Rome II Regulation determines the law applicable to non-contractual obligations, in particular torts. The notion of ‘non-contractual obligation’ must be interpreted as an autonomous concept.Footnote 29 It covers both strict and fault-based liability.Footnote 30 Generally speaking, all types of harm or damage are covered, such as physical damage to property, pure economic loss, and immaterial harm.Footnote 31 The Rome II Regulation is limited to civil and commercial matters;Footnote 32 notably, it does not cover the liability of the state for acts and omissions in the exercise of state authority.Footnote 33 Thus, the law applicable to a Member State’s liability for the use of AI for the purpose of international police surveillance or military operations, for example, is determined by domestic choice-of-law rules.Footnote 34 Moreover, the Rome II Regulation is not applicable to non-contractual obligations arising out of violations of privacy and rights relating to personality, including defamation.Footnote 35 Therefore, the law applicable to any kind of use of AI that violates a person’s right to privacy or causes damage to their reputation must still be determined by domestic choice-of-law rules, such as Articles 40–42 of the German EGBGB.Footnote 36 Finally, although the rules of the Rome II Regulation are of European origin, they shall be applied whether or not the law specified by them is the law of an EU Member State.Footnote 37 Thus, according to this principle of ‘universal application’, even if an AI system operated by a British company causes damage to a person in Switzerland, the court of an EU Member State will determine the law applicable to such a case pursuant to the Rome II Regulation.Footnote 38
b. The General Rule (Article 4 Rome II)
The basic rule for torts in general is found in Article 4(1) Rome II, which refers to the place of injury. Recital 15 Rome II acknowledges that ‘lex loci delicti is the basic solution for non-contractual obligations in virtually all the Member States’. Nevertheless, the diverging interpretations of this principle by various Member States’ legislatures and courts in complex cases (place of injury, place of acting, or even both under the so-called theory of ubiquity) had in the past led to considerable legal uncertainty.Footnote 39 The preference for the place of injury is justified because, generally speaking, it strikes ‘a fair balance’ between the interest of the person claimed to be liable to foresee the applicable law and the interests of the person sustaining the damage.Footnote 40 From an economic point of view, the place of injury will usually lead to a fair distribution of the costs for obtaining the relevant legal information: In most cases, the person claimed to be liable should be able to anticipate that his or her acts may cause harm in another country, whereas the victim should be able to rely on the legal standard of the environment to which he or she exposed his or her body or property.Footnote 41 While the tortfeasor is thus forced to internalise the costs for negative externalities arising in other countries,Footnote 42 the victim is given the opportunity to structure his or her insurance in accordance with the law to which he or she is presumably accustomed.Footnote 43 Since Article 4(1) Rome II is based on the idea of striking ‘a fair balance’ between the alleged tortfeasor and victim, this neutral provision must not be interpreted in a one-sided fashion that favours the plaintiff. The Rome II Regulation does not, as a general principle, embrace the plaintiff-friendly principle of ubiquity found in German or Italian private international law.Footnote 44
The Rome II Regulation contains a significant number of specific rules for special torts.Footnote 45 This considerably reduces the weight that the general rule has to carry, which applies only ‘unless otherwise provided for in this Regulation’.Footnote 46 The main group of cases of practical importance that are exclusively governed by the general rule instead of specific rules are traffic accidents.Footnote 47 However, even in this regard, the scope of application of Article 4 Rome II is limited in practice. The full communitarisation of private international law is impeded by the fact that there already exist two supranational instruments dealing with important areas of tort conflicts, namely, the Hague Convention on the law applicable to Traffic Accidents (HCTA) and the Hague Convention on the law applicable to Products Liability (HCP).Footnote 48 Both conventions count several EU Member States among their parties.Footnote 49 Those Member States were (and are) unwilling to withdraw from the respective conventions.Footnote 50 Since the EU could arguably not terminate their membership without their consent, rules governing the collision between EU conflicts rules and the Hague conventions had to be invented.Footnote 51 The solution finally codified in the Rome II Regulation provides that the Regulation does not prejudice the application of existing conventions that contain conflicts rules for non-contractual obligations.Footnote 52 The Rome II Regulation takes precedence, however, over conventions concluded exclusively between two or more Member States insofar as such conventions concern matters governed by the Regulation.Footnote 53 Since both pertinent Hague conventions have a sizeable number of non-EU state parties, this exception is of little practical use.Footnote 54 Even if a traffic accident is only connected with, for example, France and Germany, French courts have to apply the HCTA, whereas a German court must determine the applicable law under the Rome II Regulation.Footnote 55 Thus, in two of the most important areas of tort conflicts, traffic accidents and product liability, European private international law remains fragmented and continues to offer ample possibilities of forum shopping.Footnote 56 This situation is exacerbated by the fact that the Rome II Regulation excludes the possibility of renvoi.Footnote 57 Thus, cases involving driverless cars, for example, may be subject to different laws in various Member States.Footnote 58
The lex loci damniFootnote 59 is displaced in cases where the person claimed to be liable and the person sustaining the damage both have their habitual residence in the same country at the time when the damage occurs.Footnote 60 This rule had been familiar to many European codifications already before Rome II was enacted.Footnote 61 Again, it is a legitimate expression of the basic economic rationale underlying the Regulation: ‘[I]n most cases the common residence rule guarantees lower litigation costs, more efficient court administration, and international harmony of decisions’.Footnote 62 Usually, parties who share a common habitual residence will litigate in the country where they live; moreover, their insurance coverage will, in most cases, be structured according to the standards prevailing in this country.Footnote 63
Article 4(1) and (2) Rome II are coupled with an escape clause that is meant to provide for a sufficient degree of judicial discretion in the individual case.Footnote 64 The final paragraph, which is more an open-ended standard than a rule, combines a fairly general approach in its first sentence (manifestly closer connection) with a particular example of such a connection (relationship between the parties, for example, a contract) in its second sentence. As Recital 14 Rome II shows, the drafters of the Regulation were mindful of the tension between ‘the requirement of legal certainty’ on the one hand and the ‘need to do justice in individual cases’ on the other. The Recital explains that
this Regulation provides for a general rule but also for specific rules and, in certain provisions, for an ‘escape clause’ which allows a departure from these rules where it is clear from all the circumstances of the case that the tort/delict is manifestly more closely connected with another country. This set of rules thus creates a flexible framework of conflict-of-law rules. Equally, it enables the court seised to treat individual cases in an appropriate manner.
Finally, Article 14 Rome II provides for a modern and liberal approach to party autonomy for non-contractual obligations, allowing a choice of the applicable law both ex post and, provided certain conditions are met, ex ante.Footnote 65 The reasons for this liberal approach are spelled out in the first sentence of Recital 31: ‘To respect the principle of party autonomy and to enhance legal certainty, the parties should be allowed to make a choice as to the law applicable to a non-contractual obligation.’ Party autonomy enhances legal certainty in two ways.Footnote 66 First, the flexible approach of the Regulation, which is characterised by a rather generous array of escape clauses,Footnote 67 introduces a potential source of litigation that must be balanced by giving parties the possibility of quickly resolving any dispute on the applicable law.Footnote 68 Secondly, the substantive laws of the Member States are characterised by significant divergences as far as the proper boundaries between tort and contract law are concerned. This is particularly true for cases such as pre-contractual liability, liability for pure economic loss, and the protection of third persons who are not a party to an existing contract with the person claimed to be liable.Footnote 69 Thus, parties who want to avoid a protracted litigation on issues of classification are well advised to choose the law applicable not only to their contractual obligations, but also to their non-contractual obligations.Footnote 70
c. The Rule on Product Liability (Article 5 Rome II)
With regard to product liability, Article 5 Rome II strives to create a balance between an effective protection of the victim, who is often a consumer and typically regarded as the weaker party, on the one hand, and the producer’s interest in foreseeability of the applicable law, on the other.Footnote 71
Article 5(1) Rome II presupposes a damage ‘caused by a product’. The notion of ‘product’ must be interpreted autonomously;Footnote 72 the Commission’s Explanatory Memorandum of 2003Footnote 73 refers to the definition found in the EU Directive on Product Liability.Footnote 74 The substantive EU law on product liability so far only applies to physical goods.Footnote 75 Thus, strict liability for data processing cannot be based on the current Product Liability Directive.Footnote 76 A working group hosted by the European Law Institute has recently published a paper on giving the Product Liability Directive a digital ‘update’, but this reform process is still in its first stages.Footnote 77 Although the rules of the current Product Liability Directive may be extended to cover standard software delivered on a DVD, for example,Footnote 78 it is controversial whether software that was designed to meet the specific needs of the customer could be classified as a ‘product’.Footnote 79 Those delineations are generally transferred to Article 5(1) Rome II.Footnote 80 In cases of autonomous driving, however, the software will be sold as an integral part of a car. In cases where software is embedded in a physical good, both the Product Liability Directive and Article 5(1) Rome II apply.Footnote 81
The cascade of connections found in Article 5 Rome II is structured as follows: first, parties may choose the law applicable to product liability claims under the general provision on party autonomy.Footnote 82 Likewise, the Rome II Regulation provides for an accessory connection of product liability claims to a pre-existing relationship, such as a contract, between the parties.Footnote 83 Both steps constitute major improvements compared to the Hague Convention on the law applicable to product liability,Footnote 84 which failed to include such rules.
Secondly, if both parties have their habitual residence in the same country, the law of that state applies.Footnote 85
Thirdly, if none of the above applies, Article 5(1) Rome II basically refers to the law of the state where the product was marketed, provided that the place of marketing coincides with one of three other territorial factors (the victim’s habitual residence, the place where the product was acquired, the place of injury) and that the person claimed to be liable (usually the producer) could reasonably foresee the marketing of the product or a product of the same type in this country. Contrary to specific provisions on product liability, for example in ItalyFootnote 86 or Switzerland,Footnote 87 Article 5(1) Rome II is not an alternative connection, but ranks the connecting factors in a hierarchical order. First, the law applicable is that of the victim’s habitual residence, provided that (1) it coincides with the place of marketing and (2) the producer does not succeed at proving that he could not foresee the marketing of this or a similar product in this country.Footnote 88 If one of those conditions (marketing, foreseeability) is not met, the law of the country in which the product was acquired applies, again subject to a coincidence with the place of marketing and the test of foreseeability.Footnote 89 If the applicable law cannot be determined at this stage, the law of the country in which the ‘damage [read: injury] occurred’ applies, if at least in this country the two additional requirements (marketing, foreseeability) are met.Footnote 90 If none of the three countries enumerated in Article 5(1) Rome II passes the test of foreseeability, the applicable law is that of the producer’s habitual residence.
This rather unwieldy ‘cascade system of connecting factors’Footnote 91 fails to achieve wholly convincing results. First, even after the Rome II Regulation has been in force for more than a decade, it has not induced a single Member State that is a party to the HCP to denounce this convention. On the contrary, under Article 28 Rome II, the HCP takes precedence over the Rome II Regulation. The result is that, since 2009, Europeans have two different regimes on product liability conflicts which are both influenced by a similar methodology (grouping of contacts), but which do not yield uniform results in practice.
While Recital 20 explains that the ‘conflict-of-law rule in matters of product liability should meet the objectives of fairly spreading the risks inherent in a modern high-technology society, protecting consumers’ health, stimulating innovation, securing undistorted competition and facilitating trade,’ it must be kept in mind that Article 5(1) Rome II is not limited to business-to-consumer (B2C) cases, but applies to business-to-business (B2B) cases as well.
Since the connecting factor that enjoys primacy in the basic ruleFootnote 92 is relegated to the last rung of the ladder in cases of product liability,Footnote 93 drawing the line between general tortious liability and product liability is decisive in traffic accidents involving autonomous cars.Footnote 94 Thus, one may argue that there is a need for a special conflicts rule for those cases. A further complication arises from the above-mentioned fact that, in quite a number of member states, the law applicable to traffic accidents or product liability is still not determined by the Rome II Regulation, but by the pertinent Hague Conventions of the early 1970s (see Sub-section II.3(b)). Therefore, even an amendment to the Rome II Regulation would not create European legal unity in this regard.
d. Special Rules in EU Law (Article 27 Rome II)
Pursuant to Article 27 Rome II, special EU conflicts rules take precedence over Rome II. In particular, the conflicts rules of the General Data Protection RegulationFootnote 95 may be relevant in cases involving AI.Footnote 96 In the course of the preparation of the Rome II Regulation, industry lobbies argued for codifying the ‘country of origin’-approach as a choice-of-law rule.Footnote 97 While those attempts failed, Article 27 Rome II explicitly states that ‘provisions of Community law which, in relation to particular matters, lay down conflict-of-law rules relating to non-contractual obligations’ take precedence over the Regulation. Moreover, Recital 35 Rome II adds that the Regulation:
should not prejudice the application of other instruments laying down provisions designed to contribute to the proper functioning of the internal market insofar as they cannot be applied in conjunction with the law designated by the rules of this Regulation. The application of provisions of the applicable law designated by the rules of this Regulation should not restrict the free movement of goods and services as regulated by Community instruments, such as … [the] Directive on electronic commerce[Footnote 98].
The precise reach of this exhortation is hard to define because the Directive on electronic commerce itself takes the somewhat schizophrenic position that it does not contain conflict-of-law rules,Footnote 99 while at the same time laying down the country-of-origin principle in its Article 3(1) and (2).Footnote 100 With regard to violations of rights of personality, a field not covered by Rome II, the CJEU tried to clarify matters as follows:Footnote 101
Article 3 of Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (“Directive on electronic commerce”), must be interpreted as not requiring transposition in the form of a specific conflict-of-laws rule. Nevertheless, in relation to the coordinated field, Member States must ensure that, subject to the derogations authorized in accordance with the conditions set out in Article 3(4) of Directive 2000/31, the provider of an electronic commerce service is not made subject to stricter requirements than those provided for by the substantive law applicable in the Member State in which that service provider is established.
If the European legislature were to codify special conflicts rules on AI, such a regulation would not only supersede the Rome II Regulation pursuant to its Article 27, but arguably also take precedence over the Hague Conventions. The respective Articles 15 of the HCTA and the HCP state that the Hague Conventions shall not prevail over other Conventions ‘in special fields’ to which the contracting states are or may become parties. Although an EU Regulation is surely not a ‘convention’ within the technical meaning of those provisions, one may argue that Article 15 HCTA/HCP should apply by way of an analogy to any EU Regulation dealing with the law applicable to autonomous driving, for example.
4. Contractual Obligations: The Rome I Regulation
a. Scope
Complementing Rome II, the Rome I Regulation determines the law applicable to contractual obligations.Footnote 102 As under the Rome II Regulation,Footnote 103 the notion of contractual obligation must be interpreted as an autonomous concept.Footnote 104 Thus, the Rome I Regulation designates the law applicable to so-called smart contracts, for example.Footnote 105 Like Rome II, the Rome I Regulation is of universal application.Footnote 106
b. Choice of Law (Article 3 Rome I)
Party autonomy is largely permitted by Article 3 Rome I.Footnote 107 Consumers, however, must not be deprived of the protection accorded to them by the law of their habitual residence.Footnote 108
c. Objective Rules (Articles 4 to 8 Rome I)
Usually, the habitual residence of the service provider determines the law applicable to contracts for services.Footnote 109 With regard to consumers, the law of the consumer’s habitual residence applies under the conditions set out in Article 6(1) Rome I.Footnote 110
d. Special Rules in EU Law (Article 23 Rome I)
Special conflicts rules in other EU legal instruments prevail over the Rome I Regulation.Footnote 111 There are occasional conflicts rules in older consumer directives;Footnote 112 however, the more recent directive on digital content and services does not contain any such rule.Footnote 113 On the contrary, Recital 80 of said directive explicitly states that ‘[n]othing in this Directive should prejudice the application of the rules of private international law, in particular Regulations (EC) No 593/2008 and (EU) No 1215/2012 of the European Parliament and of the Council’.
III. The Draft Regulation of the European Parliament
1. Territorial Scope
With regard to substantive law, the draft regulation distinguishes between legally defined high-risk AI-systemsFootnote 114 and other AI-systems involving a lower riskFootnote 115. For high-risk AI-systems, the draft regulation would introduce an independent set of substantive rules providing for strict liability of the system’s operator.Footnote 116 Further provisions deal with the amount of compensation,Footnote 117 the extent of compensationFootnote 118 and the limitation period.Footnote 119 The spatial scope of those autonomous rules on strict liability for high-risk AI-systems is determined by Article 2 DR, which reads as follows:
1. This Regulation applies on the territory of the Union where a physical or virtual activity, device or process driven by an AI-system has caused harm or damage to the life, health, physical integrity of a natural person, to the property of a natural or legal person or has caused significant immaterial harm resulting in a verifiable economic loss.
2. Any agreement between an operator of an AI-system and a natural or legal person who suffers harm or damage because of the AI-system, which circumvents or limits the rights and obligations set out in this Regulation, concluded before or after the harm or damage occurred, shall be deemed null and void as regards the rights and obligations laid down in this Regulation.
3. This Regulation is without prejudice to any additional liability claims resulting from contractual relationships, as well as from regulations on product liability, consumer protection, anti-discrimination, labour and environmental protection between the operator and the natural or legal person who suffered harm or damage because of the AI-system and that may be brought against the operator under Union or national law.
The unilateral conflicts rule found in Article 2(1) DR would prevail over the Rome II Regulation on the law applicable to non-contractual obligations pursuant to Article 27 Rome II.Footnote 120 However, the Rome II Regulation still applies to the additional liability claims mentioned in Article 2(3) DR. Moreover, Article 2(1) DR seems to limit the applicability of the draft regulation to cases where the harm was suffered on the territory of the European Union.Footnote 121 This stands in stark contrast with the principle of universal application that is one of the cornerstones of the Rome II Regulation.Footnote 122 If a high-risk AI-system operated in Freiburg, Germany, for example, caused damage in Basel, Switzerland, the preconditions set out in Article 2(1) DR would not be met; thus, one would have to resort to the Rome II Regulation to determine the law applicable to the Swiss victim’s claims.
2. The Law Applicable to High-Risk Systems
Furthermore, it must be noted that Article 2(1) DR deviates considerably from the choice-of-law framework of Rome II. While Article 2(1) DR reflects the lex loci damni approach enshrined as the general conflicts rule in the Rome II Regulation,Footnote 123 one must not overlook the fact that product liability is subject to a special conflicts rule, namely Article 5 Rome II, which is considerably friendlier to the victim of a tort than the general conflicts rule.Footnote 124 The cascade of connecting factors in Article 5 Rome II is evidently influenced by the desire to protect mobile consumers from being confronted with a law that may be purely accidental from their point of view. The lex loci damniFootnote 125 may have a relationship neither with the legal environment that consumers are accustomed toFootnote 126 nor with the place where they decided to expose themselves to the danger possibly emanating from the product.Footnote 127 The rule reflects the presumption that a defective product will affect most consumers in the country where they are habitually resident. In this respect, Article 2(1) DR is, in comparison with the Rome II Regulation, friendlier to the operator of a high-risk AI-system than to the consumer.
Even if one limits the comparison between Article 2(1) DR and the Rome II Regulation to the latter’s general rule,Footnote 128 it is striking that the DR does not adopt familiar approaches that allow for deviating from a strict adherence to the lex loci damni. Contrary to Article 4(2) Rome II, Article 2 DR does not provide for the application of the law of the country in which both the person claimed to be liable and the person sustaining damage have their habitual residence at the time when the damage occurs. Moreover, an escape clause along the lines of Article 4(3) or Article 5(2) Rome II is missing from Article 2 DR. Finally, and importantly, Article 2(2) DR bars any party autonomy with regard to strict liability for a high-risk AI-system, which deviates strongly from the liberal approach found in Article 14 Rome II.
3. The Law Applicable to Other Systems
Apart from the operator’s strict liability for high-risk AI-systems, the draft regulation would introduce a fault-based liability rule for other AI-systems.Footnote 129 In principle, the spatial scope of the latter liability rule would also be determined by Article 2 DR as already described.Footnote 130 However, unlike the comprehensive set of rules on strict liability for high-risk systems, the draft regulation’s model of fault-based liability is not completely autonomous. Rather, this liability regime contains important carve-outs regarding the amounts and the extent of compensation as well as the limitation period. Pursuant to Article 9 DR, those issues are left to the domestic laws of the Member States. More precisely, Article 9 DR states: ‘Civil liability claims brought in accordance with Article 8(1) shall be subject, in relation to limitation periods as well as the amounts and the extent of compensation, to the laws of the Member State in which the harm or damage occurred.’ Thus, we find a lex loci damni approach with regard to fault-based liability as well. Again, the principle of universal applicationFootnote 131 is discarded; contrary to the rules of Rome II, Article 9 DR is a unilateral conflicts rule that only refers to ‘the laws of the Member State in which the harm or damage occurred’. Moreover, all the modern approaches codified in the Rome II Regulation – the cascade of connecting factors for product liability claims, the common habitual residence rule, the escape clause, and party autonomy – are strikingly absent from Article 9 DR as well.
Finally, and importantly, Article 9 DR leads to a split between the law applicable to the basis of liability, on the one hand, and the law applicable to limitation periods as well as the extent of compensation, on the other. This dépeçage stands in stark contrast with the general scope that Article 15 Rome II assigns to the lex causae. Pursuant to Article 15(a) Rome II, the law applicable to a non-contractual obligation under the Rome II Regulation covers both the basis and the extent of liability.Footnote 132 In addition, Article 15(h) Rome II provides that the law designated by the Rome II Regulation also applies to rules of prescription and limitation.Footnote 133 As Axel Halfmeier explains, ‘the general tendency of the [Rome II] Regulation is to expand the reach of the lex causae and limit the role of the lex fori [because] the goal of the Rome Regulations is to produce harmony in results among the Member States’ courts’Footnote 134 – the classic Savignyan goal of international decisional harmony mentioned above.Footnote 135 Of course, one has to take into account that Article 9 DR does not refer to the lex fori, but to the lex loci damni. To that extent, the rule does not offer any incentive for forum shopping.
However, the unitary approach underlying Article 15 Rome II also serves the goal of ‘avoiding the risk that the tort or delict is broken up into several elements, each subject to a different law’.Footnote 136 In this respect, Article 15 Rome II aims at preventing the ‘legal uncertainty’ associated with applying different laws to a single case.Footnote 137 Particularly with regard to Article 15(h) Rome II, the Court of Justice of the EU (CJEU) ‘pointed out that, in spite of the variety of national rules of prescription and limitation, Article 15(h) of the Rome II Regulation expressly makes such rules subject to the general rule on determining the law applicable’.Footnote 138 Creating a dépeçage between an autonomous rule on the conditions of liability, on the one hand, and the law applicable to the extent of damages and prescription issues, on the other, may lead to difficult questions of characterisation and adaptation. For example, the question may arise as to which particular prescription rule of the lex loci damni applies if that law comprises various types of fault-based liability or calibrates the length of the prescription period depending on the degree of fault. In such a scenario, the court seised would have to determine which domestic type of liability most closely corresponds to the model found in Article 8 DR – a task that may not be easy to fulfil. With regard to legal policy, it is hardly convincing to subject the issue of prescription to domestic laws because the periods codified in the Member States’ laws have been criticised as being too short in light of the complexities of international cases.Footnote 139
4. Personal Scope
The draft regulation, in principle, limits its personal scope to the liability of the operator alone.Footnote 140 Recital 9 of the resolution explains that the European Parliament
[c]onsiders that the existing fault-based tort law of the Member States offers in most cases a sufficient level of protection for persons that suffer harm caused by an interfering third party like a hacker or for persons whose property is damaged by such a third party, as the interference regularly constitutes a fault-based action; notes that only for specific cases, including those where the third party is untraceable or impecunious, does the addition of liability rules to complement existing national tort law seem necessary.
Thus, for third parties, the conflicts rules of Rome II would continue to apply.
IV. Evaluation
At first sight, it seems rather strange that a regulation on a very modern technology – AI – should deploy a conflicts approach that has more in common with Joseph Beale’s First Restatement of the 1930sFootnote 141 than with the modern and differentiated set of conflicts rules codified by the EU itself at the beginning of the twenty-first century (i.e. the Rome II Regulation). While the European Parliament’s resolution, in its usual introductory part, diligently enumerates all EU regulations and directives dealing with substantive issues of liability, the Rome II Regulation is not mentioned once in the Recitals. One wonders whether the members of Parliament were aware of the European Union’s acquis in the field of private international law at all.
V. Summary and Outlook
In April 2020, the JURI Committee of the European Parliament presented a draft report with recommendations to the Commission on a civil liability regime for AI (see Sub-section I). The draft regulation proposed therein is noteworthy from a private international law perspective because it introduces new conflicts rules for AI. In this regard, the proposed regulation distinguishes between a rule delineating the spatial scope of its autonomous rules on strict liability for high-risk AI-systems (Article 2 DR), on the one hand (see Sub-section III.2), and a rule on the law applicable to fault-based liability for low-risk systems (Article 9 DR), on the other hand (see Sub-section III.3). The latter rule refers to the domestic laws of the Member State in which the harm or damage occurred. In this chapter, I have analysed and evaluated this proposal against the background of the already existing European regulatory framework on private international law, in particular the Rome II Regulation. In sum, compared with Rome II, the conflicts approach of the draft regulation would be a regrettable step backwards in many ways. On 21 April 2021, the European Commission presented its proposal for an ‘Artificial Intelligence Act’.Footnote 142 However, this proposal contains neither rules on civil liability nor provisions on the pertinent choice-of-law issues. Thus, it remains to be seen how the relationship between the European Parliament’s draft regulation and Rome II will be designed and fine-tuned in the further course of legislation.