I. Introduction
In May 2018, the European Commission tasked a designated Expert Group (EG) with advising on the applicability of the Product Liability Directive (PLD) to traditional products, new technologies and new societal challenges (Product Liability Directive Formation) and with providing principles for possible adaptations of applicable laws related to new technologies (New Technologies Formation).Footnote 1
In its November 2019 Report,Footnote 2 the EG claimed that, in principle, the liability framework provided by non-harmonised contractual and non-contractual liability ensures basic protection against damages caused by new technologies, and yet, due to certain characteristics of the latter, victims may struggle to obtain compensation, resulting in an unfair allocation of the social costs of technological development.Footnote 3
Thus, the EG suggests that civil liability rules should follow a two-tiered approach based on the level of risk brought about by the use of a given technology:
(1) If the latter does not pose serious risks of harm to others, the user should abide by duties to carefully select, operate, monitor and maintain said technology, and be held liable under a fault-based regime for any breach thereof. When “autonomous” applications are involved, liability should be no less severe than that provided for damages caused by human auxiliaries.
(2) If the technology carries an increased risk of harm to others, the operator should be held strictly liable for the damages derived from its operation and possibly be subject to compulsory insurance. Said “operator” ought to be identified as the subject who is most in control of the technology and benefits from it. When multiple operators can be identified, strict liability should lie with the “subject who has more control over the risks of the operation”, who – depending on the circumstances – may be the final user (eg the “driver” of a self-driving car) or another subject, which the EG designates as the “back-end operator” (eg the service provider).Footnote 4
These regimes should have general application and would not repeal the regime established by the PLD.Footnote 5 In both scenarios, it is the “manufacturer of products or digital content incorporating emerging digital technologies”Footnote 6 who ought to be held liable if damages are caused by a defective product, even where the defect results from changes made after the product was put into circulation, provided that the producer was in control of those changes.Footnote 7
Moreover, the EG claims that if a technological application increases the difficulties of proving liability beyond what can be reasonably expected, the victim’s evidentiary position should be eased.Footnote 8 The design of the device should include logging features, and whenever the operator fails to log, or to give access to, logged data, the burden of proof should be reversed.Footnote 9
The Report offers an important analysis of liability for damages caused by new technologies, and many of the considerations regarding the assessment of the existing framework are commendable. However, when assessed against the Report’s main objective – proposing a liability regime that ensures a fair and efficient allocation of the damages caused by new technologies – some of the suggestions made by the EG seem problematic and require further consideration. To this purpose, the current article will focus on those stances that may be deemed questionable and deserve reconsideration in order to substantiate the policy initiatives at the European level. These are: the excessive diversity of applications encompassed through the notion of “artificial intelligence and other emerging technologies” (Section II); the distinction between high- and low-risk applications as a potential source of legal uncertainty (Section III); the primary reliance on evidentiary rules over substantive ones (Section IV); the problematic role attributed to safety rules (Section V); the unclear relationship between the PLD and other ad hoc liability regimes (Section VI); the exclusion of electronic personhood as a way of addressing liability issues (Section VII); and the limited contextualisation of compulsory insurance and no-compensation funds (Section VIII).Footnote 10
II. The excessive diversity of applications encompassed through the notion of “artificial intelligence and other emerging technologies”
The EG was asked to assess the liability framework applicable to new technologies and formulate policy proposals. This was not only a broad task, but also a dauntingly indeterminate one.
Faithful to its mandate, the Report identifies and assesses extant rules and suggests their reform by referring to “artificial intelligence and other emerging technologies”Footnote 11 (AI&ET) as if the latter were an identifiable class of applications (CoA), for which liability should be reconsidered.
However, such a choice may be criticised under two intertwined yet logically distinct profiles.Footnote 12
Firstly, the notion of AI and even more so that of AI&ET – including the Internet of Things (IoT), platforms, smart-home applications and robotics,Footnote 13 when considered as a separate field from AI – encompasses a very broad and heterogeneous spectrum of applications, which prevents the formulation of a definition with real selective capacity from both a technical and a legal perspective. Indeed, “AI” is used to indicate ground-breaking technologies, regardless of the specific mechanisms applied, so that what is considered “AI” today might not be classified as such tomorrow.Footnote 14 Thus, reference to AI&ET fails to identify precisely the boundaries of the enquiry and, when used to elaborate policy proposals, impedes a precise assessment of the desirability of the suggested rules and leads to unworkable uncertainties over their very application.
Secondly, addressing the liability rules applicable to AI&ET as a general category makes it impossible to account for the different societal concerns that each CoA gives rise to, leading to questionable “one-size-fits-all” solutions. And yet the resulting rules would most likely have a limited practical application: they would be concretely used only in those cases where claimants have adequate incentives to do so, while alternative legal frameworks would apply whenever they were more convenient.
The PLD’s jurisprudence is paradigmatic of such a tendency: despite its alleged technologically neutral character, it proves fit for purpose only for non-complex products (eg raw materials), in cases of great economic losses, typically associated with fundamental rights infringements (eg in pharmaceutical and medical device-related litigation, where the health, bodily integrity and life of the user are at stake), and/or when sophisticated players are involved (eg litigation associated with nuclear reactors or mechanical appliances).Footnote 15 Damages arising from many categories of products, such as the more technologically advanced ones, are not as frequently litigated due to the more limited stakes (in terms of the amount of damages compared to litigation costs and the relevance of the right infringed), and are also often actioned under national contract or tort law,Footnote 16 typically to overcome the constraints and limitations (with respect to compensable damages and defences) put forth by the Directive.Footnote 17
Given the impossibility of defining AI&ET unitarily – absent any clear unifying trait or lowest common denominator – regulation ought to focus on those qualifying elements that differentiate a specific CoA from the others. This entails identifying both their functioning and technological peculiarities, the ways and settings in which they are used and how they interact – or interfere – with existing rights and interests.Footnote 18 For example, autonomy is not common to all advanced technologies (eg it is not pursued in biorobotic prostheses, which are conceived to respond smoothly to the commands of the wearer), it comes in a wide spectrum of degrees (ranging from that of a vacuum cleaner to that of an expert system), and it gives rise to profoundly different concerns – from a societal perspective – according to the settings in which the system is deployed (eg capital markets, medical consulting, consumer applications). Similar considerations could be drawn for other elements typically deemed to define advanced technologies (eg the ability of a system to learn and self-modify).Footnote 19
For such reasons, sound regulation would do better to focus on said differences, for even the solutions required for an identical legal issue (eg liability) might differ due to the specificities of the given CoA. These include the possibility of insuring (entailing the existence of a relevant private insurance market for the given CoA), the relevance of the interests affected, the size and severity of potential damages (possibly requiring proportionate capping), the possibility of identifying one single entry point for litigation, as well as the policy choice about whom to burden and to what extent (at least among the owner, user, operator or service provider). Indeed, such is already the case when the numerous fields of law are considered, where medical malpractice is regulated separately from the liability of intermediaries in capital markets, traffic accidents, the responsibility of parents and owners of things (or animals) in custody and that for nuclear accidents.
Indeed, up until today, the sector-specific approach has been the prevalent one in identifying and addressing the numerous sub-branches of liability within Member States’ (MSs) private law frameworks. Hence, just because a doctor, a financial intermediary, an autonomous driving (car) or flying (drone) system, a platform or a consumer might use AI-based applications, this does not imply that they should all be subject to an identical liability regime. Indeed, a unitary regulation of advanced technologies, even with respect to liability alone, is unjustifiable from a technological and legal perspective, and is a potential source of market failures and distortions (see Section III below). This is due to the absence of any clear and narrowly defined common technological feature that would justify assimilating such diverse applications, to the profoundly distinct settings in which they are used and deployed, and to the different incentive structures that are currently deemed necessary to regulate those domains efficiently and effectively (shielding some possible defendants and burdening others). Indeed, even the efforts made to date to regulate advanced technologies at the European and national levels, such as in the case of dronesFootnote 20 and autonomous vehicles,Footnote 21 have rightly pursued a technology-specific approach.
Significantly, the Report itself criticises one-size-fits-all solutions.Footnote 22 However, the broad definition of the object of enquiry and the lack of a functional, bottom-up, CoA-by-CoA analysis ultimately contradict this very stance. Indeed, the only classification the EG envisages is that between low- and high-risk applications, where the lack of significant criteria according to which the dichotomy should be concretely constructed brings technology neutrality in through the backdoor.
III. The distinction between high- and low-risk applications as a potential source of legal uncertainty
As noted above, the Report suggests that when AI&ET “are operated in non-private environments and may typically cause significant harm”,Footnote 23 due to the “interplay of [its] potential frequency and the severity”,Footnote 24 a strict liability regime should apply. Significant harms would likely be caused by “emerging digital technologies which move in public spaces, such as vehicles, drones, or the like”, and “objects of a certain minimum weight, moved at a certain minimum speed … such as AI-driven delivery or cleaning robots, at least if they are operated in areas where others may be exposed to risk”. On the contrary, “[s]mart home appliances will typically not be proper candidates for strict liability”, and the same is said for “merely stationary robots (eg surgical or industrial robots) even if AI-driven, which are exclusively operated in a confined environment, with a narrow range of people exposed to risk, who are also protected by a different – including contractual – regime …”.Footnote 25
This proposal is highly questionable.Footnote 26
First, the distinction between high- and low-risk applications – despite echoing the liability regime envisaged by some MSs for dangerous things and activitiesFootnote 27 – does not specify when harm should qualify as “severe” or “potentially frequent”. It thus results in a circular definition, devoid of any selective meaning.Footnote 28
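The objection can be made concrete with a simple formalisation (an illustrative reconstruction, not a formula contained in the Report). If risk is understood as expected harm,

$$R = p \times s$$

where $p$ denotes the potential frequency (probability) of the harm and $s$ its severity, then operating the dichotomy requires a threshold $\bar{R}$ such that an application is high-risk if and only if $R \geq \bar{R}$. The Report quantifies neither $p$ nor $s$, let alone $\bar{R}$, so the classification cannot be performed ex ante.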
On the one hand, the distinction does not offer any guidance to policymakers, who need not predetermine the criteria according to which they will decide whether to regulate a specific application, use or domain. Risk – as defined by the EG – ought not to be the sole criterion justifying intervention. Indeed, reform might be argued for on other grounds, such as social desirability, the need to ensure access to justice in cases of very small claims that would otherwise go uncompensated (causing externalities and market failures of various kinds) and the need to provide positive incentives towards the adoption of a specific solution. Said policy arguments could also vary from one case to another, and uniformity and consistency of criteria are not desirable per se. Quite to the contrary, this could substantially limit the spectrum of considerations to be taken into account with respect to different emerging technologies when deciding whether to intervene.
On the other hand, were the dichotomy to be adopted and used – absent a clear enumeration of which applications fall into which category (and are hence classified as high- or low-risk, ultimately rendering the definition itself superfluous) – it would give rise to unacceptable ex ante uncertainty about the applicable standard of liability (strict or fault-based) in each case. Somewhat recalling the Learned Hand formula,Footnote 29 it would not allow operators of a specific technology to determine beforehand what standard of liability they would be subject to. This would open the floodgates to litigation – most likely at both the national and the European level – potentially causing progressive divergence among MSs. In particular, even if only low-risk applications were to be identified a contrario – while high-risk ones were clearly and strictly enumeratedFootnote 30 – considering the pervasiveness of AI and the broad notion of “AI-system” adopted,Footnote 31 it would most likely remain uncertain which applications fall under this special liability regime. If a piece of software used in medical diagnosis – not classified as high-risk – were to be considered a low-risk application (thus still falling under the special liability regime for advanced technologies), this would only be ascertained before a judge once harm had already occurred. The medical doctor would not know beforehand what standard of liability applied to their case, and, from a normative perspective, this would heavily interfere with national tort law systems. All of this while – upon a careful assessment of the applicable legal framework – many prospective low-risk applications would simply not need to be regulated at all.
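For reference, the Learned Hand formula – recalled here only to illustrate the analogy, in its classic formulation from United States v. Carroll Towing Co. (2d Cir. 1947) – treats an actor as negligent when

$$B < P \times L$$

where $B$ is the burden of adequate precautions, $P$ the probability of harm and $L$ the gravity of the resulting loss. Like the EG’s dichotomy, it turns on quantities ($P$ and $L$) that the parties can rarely establish before a court assesses them ex post, which is precisely the source of the ex ante uncertainty described above.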
Second, elevating the degree of risk to the main criterion for apportioning liability is problematic. Often, very limited data concerning a technology’s safety and security are available. Moreover, the adaptive, customisable and ever-updating character of advanced applications (eg industrial robots, modified by system integrators to fit business users’ needsFootnote 32 ) leads to structural uncertainties about the risks associated with them (“unknown unknowns”).Footnote 33 If envisaging and measuring risks is in itself difficult, using different measures of risk as criteria for attributing different types of liability seems impractical to say the least.
Third, even if we could calculate the average amount of damages that a given technology might cause, there is no reason to limit strict liability to severe harms, leaving non-severe harms under-protected. On the contrary, a comprehensive reading of the literature on tort law and regulation shows that various elements contribute to determining the preferred liability regime, including:
(1) The need to simplify the overlap of different regimes, easing the identification of the ex ante, first-instance responsible party (“one-stop shop”);
(2) The need to ease access to justice when victims would not otherwise have sufficient incentive to sue;
(3) The need to prevent manufacturers, designers and/or operators from externalising the costs of a given technology, leading to unfair and anti-competitive outcomes;
(4) The technology’s characteristics, social desirability and potential diffusion;
(5) The overall incentives that the legal system offers to the different parties involved.Footnote 34
While the EG considers some of these criteria, the primary focus it reserves for the distinction between high- and low-risk applications, and the fact that this distinction is presented, in and of itself, as justifying a different regime of responsibility, ultimately lead to an oversimplified analysis and make the proposed solution incapable of accounting for these heterogeneous rationales and objectives. Moreover, in focusing on the high- versus low-risk distinction and considering other criteria only incidentally, the Report’s approach may be criticised – from a normative perspective – for not being systematic enough.
Fourth, the examples provided to illustrate this dichotomy are questionable, as they seem to link the prediction of severe harms to three criteria: whether the technology (1) operates in public spaces and (2) moves autonomously; and whether (3) other compensatory regimes are available.Footnote 35
Point (3) is to be welcomed: where applicable legislation already ensures an adequate level of protection, the use of advanced technology does not per se require different standards of liability. Yet the same cannot be said for the other criteria. The need for a different kind of liability is unrelated to the nature – public or private – of the place where the harm occurs: a smart-home application might set an entire private house on fire and kill its occupants, including guests who have not willingly accepted the risks posed by it.
Likewise, even assuming that the capacity to move autonomously is relevant for qualifying the types of risks – which is questionable – it is unclear whether only the ability to move “at a certain minimum speed” would trigger a strict liability regime, or whether any moving capacity would do so, as long as it is “autonomous”. In the latter case, a definition of “autonomy” ought to be provided: a machine could be remotely controlled by a human operator, be merely supervised or be completely independent. Indeed, the Report grants excessive relevance to the kind of movement performed by the machine instead of considering other elements that typically increase the risk of the operation, such as the different control systems used.Footnote 36
Moreover, replicating the very distinction between products and services that the EG criticises,Footnote 37 Points (1) and (2) presuppose a corporeal notion of advanced technologies that marginalises non-embedded AI applications, despite their ever-increasing relevance. Such applications could very well cause severe harm, both pecuniary and non-pecuniary, potentially affecting individuals’ fundamental rights. Severe harms may derive from AI-based applications engaged in high-frequency trading or from software used for diagnostic purposes.Footnote 38
Finally, the relative weight of each criterion is unclear. Would the operation of a movable industrial robot within a factory’s restricted environment qualify as “high risk”, simply because of its ability to move autonomously? If so, the legal system may indirectly promote the diffusion of fixed robots and applications over movable ones, regardless of the actual desirability of the technology in question.
IV. Primary reliance upon evidentiary rules and logging by design
The Report deals at length with evidentiary issues.
Technological complexity and opacity may make it difficult to reconstruct the causal nexus leading to the harm, as well as to apportion liability when multiple subjects cooperate in providing a service or good (producer, programmer, service provider, infrastructure manager, etc.).Footnote 39 Moreover, producers and operators enjoy an evidentiary advantage vis-à-vis (especially non-professional) claimants, as they have the knowledge and information necessary to establish the existence of defects, causal nexuses and faults of any of the parties involved, while the victim may only seek to acquire them through expensive technical consultancy, with no guarantee of success. Thus, the EG claims, “where a particular technology increases the difficulties of proving the existence of an element of liability beyond what can be reasonably expected” – especially causal nexus and fault or defectiveness of the product under the PLDFootnote 40 – “victims should be entitled to a facilitation of proof”,Footnote 41 following the doctrinal or judicial solutions already envisaged in some MSs.Footnote 42
Moreover, to facilitate reconstructing the elements of liability in case of advanced and opaque technologies, the EG suggests that, when appropriate and proportionate, producers should be required to add logging features, and that “the absence of logged information or failure to give the victim reasonable access to the information should trigger a rebuttable presumption that the condition of liability to be proven by the missing information is fulfilled”.Footnote 43
Despite these agreeable premises, this solution is open to criticism.Footnote 44
The reversal is conditioned upon a vague requirement – “beyond what can be reasonably expected” and “disproportionate difficulties or costs” for the proof of defectiveness under Article 4 PLD – that, if granted normative value, would be applied in an uncertain and inconsistent manner. Moreover, if interpreted as referring to individual elements of liability, the solution envisaged therein may be insufficient. Indeed, were the claimant released from proving the defect, they might still be required to demonstrate the causal nexus. It is in fact uncertain whether both elements would be covered by the informational asymmetry for which access to information should be made mandatory. While an alternative reading of the EG’s intentions – whereby both elements would automatically be covered – is possible, it seems less plausible (given the overall approach maintained towards the reversal of the burden of proof) as well as more problematic, since the two elements are separate and distinct and depend upon different factual circumstances.
Indeed, the EG claims that causation should in principle be proven by the claimant, but it suggests that other factors may justify the alleviation of this burden, such as: the likelihood that the technology contributed to the harm; the risk of a known defect; the degree of ex post traceability and intelligibility of the technology’s processes; the degree of ex post accessibility and comprehensibility of the data collected and generated by it; and the kind and degree of harm.Footnote 45
However, these considerations only “offer guidance on the further development and approximation of laws, and … allow for a more coherent and comparable line of reasoning” among MSs, without “promoting any specific measure [that could] run the risk of interfering with national rules of procedure”. Moreover, the Report does not specify whether the alleviation of the claimant’s burden of proof should be realised through uniform European rules or whether its implementation should be left to the MSs’ discretion.Footnote 46 This uncertainty is problematic: if MSs voluntarily implemented this suggestion, the vagueness of the proposal and the very exercise of the MSs’ procedural autonomy would most likely lead them to adopt very different solutions (eg regarding the circumstances under which the reversal should occur, or the threshold at which liberating circumstances would allow the defendant to escape liability). While the EG implicitly assumes that the mere reversal of the burden of proof would suffice to ensure convergence and predictability in the application of the law, its differing implementations at the national level would bring about the very legal and economic fragmentation that the Report is supposed to avoid.
Finally, there are no grounds to radically exclude a European competence on evidentiary aspects strictly connected to substantive ones.
As for the logging features, if the designer/operator allows access to the information, no reversal of the burden of proof occurs. Yet the claimant would most likely not be capable of proving their right to compensation simply by accessing the data. Indeed, the information would have to be selected – identifying the portion relevant to the specific accident – and interpreted, an exercise that is costly and complex, requiring the intervention of an expert engineer or technician. The producer/operator could easily comply, thereby avoiding the reversal of the burden of proof. Yet such a solution would deter – potentially numerous – small(er)-value claims, ensuring insufficient legal protection and favouring the externalisation of some of the costs of technological innovation.
Most importantly, it is debatable whether the burden of proof over elements such as the causal nexus and the defendant’s fault should rest upon the claimant in the first place.Footnote 47 Even if this is the standard approach of liability rules, it is neither a mandatory feature of the latter nor is it consistent with the functions attributed thereto in this case. Leaving the economic consequences of the harmful event on the victim is both unfair and inefficient, as it favours opportunistic behaviours by some players, potentially distorting competition.Footnote 48 Instead of tinkering with evidentiary issues, we may need to reframe the substantive rules of liability so as to (1) ensure effective protection of victims and a fair allocation of both the costs and benefits of innovation, while (2) incentivising technological development, especially in the case of socially desirable products or services. In particular, this may require releasing the claimant from proving the defect and the causal nexus, while allowing the producer or operator to rebut the resulting presumption (eg by identifying additional information or alternative interpretations that could free them from liability).
Any such consideration, however, would require a careful assessment of the specific CoA considered, its qualifying technological traits, its implications from a societal perspective – primarily its desirability – as well as the characteristics of its potential market (size and relevance) and of connected markets (eg the insurance market), and their potential failures, in order to conceive the desirable incentive structure. In other words, there is a need to proceed on a CoA-by-CoA basis, as has been done to date with drones and driverless cars, at both the European and the national level. Any coarser distinction, including the mere articulation between high- and low-risk applications, is for these reasons inadequate.
V. Safety rules
According to the EG, “where the damage is of a kind that safety rules were meant to avoid, failure to comply with such safety rules, including rules on cybersecurity, should lead to the reversal of the burden of proving (a) causation, and/or (b) fault, and/or (c) the existence of a defect”.Footnote 49 Such rules, the Report clarifies, are those put forth by “the lawmaker, such as those adopted under the ‘New Regulatory Approach’, and not mere standards developing in practice”.Footnote 50
Again, the proposed solution is questionable.Footnote 51
Firstly, it does not effectively ease the claimant’s position, since demonstrating the violation of safety rules would require complex technical ascertainment, to which the same considerations made above apply. Indeed, certification costs are often deemed excessive by smaller enterprises seeking to have their innovative products certified.Footnote 52 Requiring a private claimant to bear substantially equivalent expenditures in order to demonstrate that the device falls short of said requirements is quite unrealistic.
Secondly, the suggestion seems to reflect a misleading reading of the relationship between two separate fields of law: compliance with safety rules is necessary to commercialise a product within the European Union (EU), but it does not shield the producer from liability should damages nonetheless arise from its use. If anything, failure to comply should automatically establish the responsibility of the manufacturer or operator, without further enquiry, given the violation of mandatory regulation.
Moreover, the expression “safety rules” adopted by the Report leaves some uncertainty as to whether that notion includes the harmonised technical standards (hEN, adopted by the European Standardization Organizations upon request of the European Commission),Footnote 53 with which the producer may voluntarily decide to comply in order to facilitate compliance with the former (exploiting the associated presumption of conformity). If this broader interpretation were adopted, manufacturers would need to comply with such standards in order to avoid the reversal of the burden of proof, thus altering the standards’ normative value.
VI. The relationship between the producer’s and the operator’s liability
As for the regime set out in the PLD, the EG replicates the critiques of the evidentiary aspects made with respect to general liability and discussed above, and suggests overcoming both the development risk defence and the rigid distinction between products and services.Footnote 54
It then proposes that, in a series of cases to be better specified, the operator’s strict liability might prove a desirable alternative to that of the producer, yet it does not explain how these two regimes should coexist.Footnote 55 The operator’s liability does not, in fact, exclude the application of the PLD. Hence, considering the breadth of the notion of operator – allowing multiple parties to qualify as operators of a complex AI-based service – and the persistence of the producer’s liability, whenever an accident occurs the additional liability rule would not simplify the existing framework by allowing the clear identification of a single entry point for primary litigation. Quite the contrary, it would add one or more parties amongst whom liability could theoretically be apportioned, requiring an even more complex assessment of individual responsibilities and contributions. In the case of an autonomous vehicle crash, for instance, responsibility could theoretically be traced back to the producer, the back-end operator (eg the smart-road manager, the mobility-as-a-service provider, etc.), the owner and the driver (both of whom could possibly qualify as front-end operators). Adopting the operator’s liability therefore neither reduces the number of potential defendants nor rationalises – by simplifying – the current regulatory framework. Indeed, when advanced technologies are considered, one of the major concerns is precisely the coexistence of multiple potentially responsible parties, whose exact liability is hard to assess and apportion. This, in turn, increases litigation costs, potentially to the point of materially denying access to litigation to certain victims or in cases of small(er)-value claims – which could nonetheless be numerous and relevant in the aggregate – and may ultimately channel litigation towards the weaker party (eg the owner of the vehicle as opposed to the producer).Footnote 56
The primary purpose of any legal reform in this domain should therefore be to achieve simplification by identifying a single entry point for all primary litigation, and hence a subject who may also insure themself – and possibly distribute compensation costs across all users of the given technology through price mechanisms.
The EG’s proposal, by adding further potentially responsible parties without clarifying their relationship with the producer – still subject to the PLD – does not achieve said result.
One solution could be to apply the joint and several liability proposed by the EG to deal with alternative causation scenarios, extending it to the producer as well, since doing so incentivises legal arrangements on the apportionment of liability between the parties that collaborate in providing a given product or service, leading to positive risk-management practices.Footnote 57 Even in such cases, however, while primary litigation – and possibly victims’ compensation – would be simplified, the mechanism could still favour the suing of the weaker defendant (eg the owner of the vehicle), who may not possess sufficient economic resources and incentives to pursue secondary litigation, for the reasons already discussed. This, again, is a consequence of the broad notion of “operator” adopted, and of the choice to pursue a general liability framework applicable to an extremely broad spectrum of devices rather than a technology-specific one, which could identify the optimal responsible party for each CoA and scenario of use. For example, the operator who has material “control” over the operation of the technology may not be the party best positioned to manage the costs of the resulting damage, both ex ante and ex post.
VII. The legal personality of the machine
The Report rejects the necessity of attributing legal personhood to AI&ET for liability purposes.Footnote 58
Although for many applications this is probably the adequate solution, the claim against “electronic personhood”, if made in general and apodictic terms, is unjustified and should be rejected.
While, from an ontological perspective, advanced technologies are mere objects, and there are thus no reasons to consider them legal subjects and hold them responsible, from a functional perspective there may be reasons for granting a fictitious legal personality to a specific CoA, similarly to corporations. Attributing legal personhood to a given technology – requiring its registration, compliance with public disclosure duties, minimum capital and possibly insurance coverage – would turn it into the entry point for all litigation, easing the claimants’ position. Those who contribute to its creation and operation could hold shares in the legal entity, be entitled to the cash flows it generates and bear the economic consequences of liability in proportion. At the same time, the operation of the system could provide revenues that would integrate its assets, to be used to meet obligations of all sorts, whether compensatory, contractual or statutory.Footnote 59
VIII. Insurance and no-fault compensation funds
In addition to other commendable considerations – such as the need to extend the duties of care of the producer, seller or provider of the technology to the post-contractual phaseFootnote 60 – the EG suggests the imposition of compulsory insurance duties and envisages the possibility of establishing compensation funds.Footnote 61
While this is mostly agreeable, we must highlight that reliance on insurance mechanisms – especially compulsory ones – might prove inadequate when the relevant market is not sufficiently developed. This could happen when the risks of a given technology are too high, or difficult to restrain or even to calculate,Footnote 62 or when there are not enough users in proportion to the risks identified.Footnote 63
In said cases, no-fault compensation fundsFootnote 64 might constitute a more efficient alternative. Indeed, they could be funded through taxation or publicly subsidised – when a fundamental right is at stake and deserves protectionFootnote 65 – thereby circumventing potential (insurance) market failures. Alternatively, they could be managed through private mechanisms (first-party insurance) or be associated with liability caps (proportionate to the potential harm). If correctly conceived, they would minimise litigation, both in small-value claims – where no internalisation of costs would otherwise be achieved – and in claims where relevant interests are at stake but risks are otherwise difficult to manage (eg where the rights of people with disabilities are concerned).Footnote 66 The role of such an instrument should thus be significantly expanded,Footnote 67 whereas the Report only suggests its use against damages caused by unidentified or uninsured tortfeasors.
IX. Conclusions
Although the EG’s Report constitutes an important assessment of the status quo, and many of its criticisms of the existing legal framework are to be welcomed, some of the proposed solutions are problematic and have therefore been critically discussed here.
First, the Report adopts a general notion of AI&ET, which is inadequate for regulatory purposes. Indeed, the diversity across the potential spectrum of applications falling under that notion is so broad that they cannot be regulated unitarily, not even with respect to civil liability alone.
Second, applying a fault-based liability regime to low-risk applications and strict liability to high-risk ones is largely questionable. Excluding strict liability for smaller claims – falling under the notion of low-risk – leads to inefficiencies and denies justice to many parties. At an operational level, the very distinction between low- and high-risk is counterproductive: it is impossible to determine due to a lack of data regarding the amount and frequency of expected damages, and it constitutes a meta-criterion lacking any normative purpose at either the regulatory or adjudicatory level. Moreover, it rests on questionable criteria, such as the claim’s value and the environment in which the technology operates; yet these considerations are not of primary importance in determining the level of risk such technologies give rise to, most notably in the case of non-embedded AI applications.
Third, relying on the reversal of the burden of proof at the national level is insufficient. Logging by design does not ensure that the victim is capable of establishing liability: elaborating and interpreting data are complex and costly tasks, and a claimant lacking the capacity to perform them may be disincentivised from suing. Likewise, compliance with harmonised standards is not relevant for establishing liability and should thus not interfere with its ascertainment.
Fourth, by adopting a strict ontological perspective, the Report excludes the possibility of considering technologically advanced applications as legal subjects. On the contrary, for specific CoAs and given purposes, the functional use of legal personality may be appropriate and, thus, should not be radically excluded.
Differently from what the Report suggests, the EU should pursue continuity in its sectoral approach to regulation. AI will be used in diverse fields – from capital markets to medicine – where liability is currently regulated separately, and these fields should continue to be regulated separately even when AI-based solutions are implemented.
Finally, to avoid market fragmentation, reform should occur at the European level and through substantive norms, reducing the possibility for non-uniform application across MSs.
In particular, from a regulatory perspective, all applications that – according to a complex assessment of the overall legal framework – need intervention should be addressed through ad hoc solutions. To this end, the level of risk is but one of the relevant elements of evaluation.
Overall, preference should be given to strict liability rules that clearly identify the party who is to be deemed responsible, as the one who benefits from the use or commercialisation of the technology and is in control of the risks posed by it, being best positioned to manage them.Footnote 68