Liability for Artificial Intelligence

A new proposal to address liability for harm caused by AI

The European Commission proposed new rules[1] on 28 September 2022 that would require developers of artificial intelligence-powered software and goods to compensate victims harmed by their creations.

In 2020, the European Parliament had urged[2] the Commission to adopt rules ensuring that victims of harmful AI can obtain compensation, notably requesting that creators, providers, and users of high-risk autonomous AI could be held legally liable for accidental harm. However, the EU executive opted for a “pragmatic” approach that is weaker than this strict liability regime, arguing that the facts did not “justify” such a regime.

AI Liability Directive

The new AI Liability Directive would make it simpler to claim compensation when a person or organization is injured or suffers damage as a result of AI-powered drones, robots, or software such as automated hiring algorithms.

The proposed legislation is the most recent attempt by European officials to govern AI and establish a global standard for controlling the burgeoning technology. It comes at a time when the EU is in the midst of negotiating the AI Act,[3] the world’s first bill to restrict high-risk uses of AI, such as facial recognition, “social score” systems, and AI-enhanced software for immigration and social benefits.

Under the proposed new AI Liability Directive, the presumption of causality will only apply if claimants satisfy three core conditions: first, that the fault of an AI system provider or user has been demonstrated, or at least presumed to be so by a court; second, that it can be considered reasonably likely, based on the circumstances of the case, that the fault influenced the output produced by the AI system or the failure of the AI system to produce an output; and third, that the output produced by the AI system (or the failure of the AI system to produce an output) gave rise to the damage.

Some of the strategies that DeepMind’s AlphaGo used to defeat a human champion were ones that expert Go players had never considered. This exemplifies both the benefits and the challenges presented by artificial intelligence: as humans, we may not be able to comprehend the reasoning behind an AI’s decision or how one thing led to another. This is one of the reasons why AI liability laws may need to make a policy judgment based on an assumption. Therefore, subject to certain conditions and in limited circumstances, national courts would be required to presume, for the purposes of applying liability rules to a claim for damages, that the output produced by the AI system (or its failure to produce an output) was caused by, for instance, the AI provider’s fault.
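
Because the three conditions are cumulative, one way to picture the test is as a checklist that a court works through. The sketch below is purely illustrative and is not part of the directive; the class and function names, such as CausalityClaim and presumption_of_causality, are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class CausalityClaim:
    """Hypothetical record of the facts a national court would weigh."""
    fault_demonstrated_or_presumed: bool   # condition 1: provider/user fault shown, or presumed by the court
    fault_likely_influenced_output: bool   # condition 2: fault plausibly influenced the output (or missing output)
    output_gave_rise_to_damage: bool       # condition 3: the output (or its absence) gave rise to the damage


def presumption_of_causality(claim: CausalityClaim) -> bool:
    """Return True only if all three cumulative conditions hold.

    Illustrative only: the actual assessment is left to the court, and the
    presumption remains rebuttable by the provider or user.
    """
    return (
        claim.fault_demonstrated_or_presumed
        and claim.fault_likely_influenced_output
        and claim.output_gave_rise_to_damage
    )


# Example: fault is shown and plausibly linked to the output, but the causal
# link between the output and the damage is not established -> no presumption.
print(presumption_of_causality(CausalityClaim(True, True, False)))  # False
```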

For instance, claimants would need to demonstrate that providers or users of high-risk AI systems had not complied with the obligations imposed by the AI Act. For providers, these obligations include the training and testing of data sets, human oversight, and the accuracy and robustness of the system. For users, who under the AI Act may be an organization or a consumer, the obligations include monitoring the AI system and using it in accordance with the accompanying instructions.

To aid claimants in establishing fault, the proposed AI liability regime permits courts to order providers or users of high-risk AI systems to preserve and disclose evidence pertaining to those systems. The proposal further encourages disclosure by allowing the presumption of causality to be rebutted where a provider or user can demonstrate that sufficient evidence and expertise are reasonably accessible for the claimant to prove the causal link. When a conventional product malfunctions, it may be readily apparent what went wrong; with AI, this may not be the case. A court can therefore, under certain conditions, order a provider of a high-risk AI system (as defined by the EU AI Act) to disclose relevant and necessary evidence concerning its product, and in limited instances third parties may also submit such requests.

Product Liability Directive

The Commission proposed the AI Liability Directive alongside a separate, but related, proposal for a new Product Liability Directive.[4]

Under the proposed Product Liability Directive, AI systems and AI-enabled goods would be classified as “products” and consequently subject to the directive’s liability framework. As with any other product, compensation is available when defective AI causes harm, and the affected party is not required to prove the manufacturer’s negligence. In addition, producers may be held liable for changes they make to products they have already placed on the market, for instance when these changes are triggered by software updates or machine learning. In some instances, a person who modifies a product that has been placed on the market or put into service may be regarded as the product’s manufacturer and consequently liable for any resulting damage. Other parties, such as importers, authorised representatives, fulfilment service providers, and distributors, may be held liable for defective products manufactured outside the EU or when the producer cannot be identified.

In addition, the plan makes it clear that not only hardware makers but also software providers and providers of digital services, such as a navigation service in an autonomous vehicle, might be held accountable.

In addition, the burden of proof will be eased where the court determines that, owing to technical or scientific complexity, a claimant would face excessive difficulty in proving the defect or the causal link between the defect and the damage (though the defendant can contest this). This mechanism aligns the planned Product Liability Directive with the AI Liability Directive by shifting the burden of proof.

Finally, claim thresholds and compensation ceilings are eliminated. Under the existing rules, for a claim to be considered, the harm caused by a defective product must amount to at least EUR 500, and Member States may cap the producer’s total liability for damage resulting from death or personal injury (this cap cannot be lower than EUR 70 million).
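
To make concrete what removing these limits changes for claim eligibility, here is a minimal, purely illustrative sketch comparing the existing thresholds with the proposed regime; the function names are invented for illustration, and the EUR 500 and EUR 70 million figures are the ones cited above.

```python
# Illustrative comparison of claim eligibility under the existing Product
# Liability Directive thresholds versus the proposal, which removes them.
# Function names and structure are hypothetical, not part of either directive.

CURRENT_MIN_CLAIM_EUR = 500                 # existing lower threshold per claim
CURRENT_MIN_LIABILITY_CAP_EUR = 70_000_000  # lowest total cap Member States may set


def eligible_under_current_rules(damage_eur: float) -> bool:
    """A claim is only considered if the damage reaches the EUR 500 floor."""
    return damage_eur >= CURRENT_MIN_CLAIM_EUR


def eligible_under_proposal(damage_eur: float) -> bool:
    """The proposal removes the lower threshold (and the compensation cap)."""
    return damage_eur > 0


for damage in (300, 500, 1_000_000):
    print(damage, eligible_under_current_rules(damage), eligible_under_proposal(damage))
```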

The need for further refinement

One of the major criticisms leveled against the proposed AILD is that, even with the disclosure of relevant evidence, it can be exceedingly difficult to prove fault in complex systems, especially since certain AI systems behave autonomously and have functions so complex that their outputs cannot be easily explained. This challenge relates to the ‘black box’ problem, which arises when the intricacies of an AI system make it difficult to comprehend how a given input leads to a certain output.

While the proposed AILD cites autonomy as a barrier to understanding the system in its Recitals 3, 27, and 28, it does little to aid injured parties in establishing the presumption of causality. An injured party still carries a heavy burden of proof under the AILD, from providing evidence in support of the plausibility of the claim (Article 3(1)), to identifying non-compliance with AIA requirements (Article 4(1)(a)), to demonstrating a connection between the output of the AI system and the damage sustained (Article 4(1)(c)). It may also be difficult to demonstrate non-compliance with the proposed AIA’s rules; for example, proving that the datasets used in the creation of an AI system, or the accuracy levels of a given system, are insufficient. The proposed AILD therefore offers injured parties only minimal procedural relief at best. More must be done to improve the effectiveness of the recourse available to victims of AI-caused harm.

One strategy for addressing this issue is to rely on the updated PLD’s defect-based remedy, which requires no proof of wrongdoing. However, Article 4(6) of the PLD stipulates that compensation may only be sought for material damage. This means that AI systems used in, for example, credit scoring by a financial institution, which could harm people in a non-material way, cannot be challenged on the basis of being defective; to obtain compensation, the injured party would instead need to demonstrate fault under the AILD.

The Deputy Director of the European Consumer Organization (BEUC), Ursula Pachl, has already raised this concern when discussing the draft directives.[5] BEUC is an umbrella consumer group that unites 46 independent consumer organizations from 32 countries and has advocated for years for an update to EU liability rules to account for the growing applications of artificial intelligence and to ensure that consumer protection laws are not outpaced. In its view, the EU’s proposed package, consisting of modifications to the existing Product Liability Directive (PLD) so that it covers software and AI systems (among other changes) and a new AI Liability Directive (AILD) addressing a broader range of potential harms stemming from automation, falls short of the more comprehensive reform it had called for. In contrast to traditional product liability rules, a consumer harmed by an AI service provider would be required to prove that the provider was at fault; given the opacity and complexity of AI systems, BEUC argues, this will make it practically impossible for consumers to exercise their right to compensation.

In this context, the Commission’s approach is more advantageous for developers, some of whom opposed strict liability for non-material damages during the development of the proposal.[6] In addition to this barrier, the PLD requires claimants to demonstrate the likelihood that the damage was caused by the product’s malfunction in order to rely on the presumption of defectiveness (Article 9). This raises questions about the threshold of likelihood a claimant must meet, particularly when a given system is difficult to comprehend.

A month ago, the U.K.’s data protection watchdog issued a blanket warning[7] against pseudoscientific AI systems that claim to perform ‘emotional analysis’, urging that such technology not be used for anything other than pure entertainment. This is illustrative of the types of AI-driven harms and risks that may be driving demands for robust liability protections. In the public sector, a Dutch court ruled[8] in 2020 that an algorithmic risk assessment of social security claimants violated human rights law. In recent years, the United Nations has also warned[9] about the risks that automating public service delivery poses to human rights. In addition, the use of ‘black box’ AI systems by US courts to make sentencing decisions, which can opaquely bake in bias and discrimination, has drawn criticism for years.

In conclusion, the Commission’s proposed AI liability directives largely facilitate the gathering of information about AI systems in order to establish liability. In a sense, this codifies the transparency requirement for AI systems that is also included in the proposed AIA. At the same time, the proposed liability directives still saddle claimants with difficult obstacles to surmount, such as having to demonstrate fault or invoke the presumptions of defectiveness and causality, as well as establishing the connection between the harm and the defect or fault, as outlined in the AILD and PLD. The Commission’s proposals are at an early stage of the legislative process and will likely undergo a number of adjustments prior to their final adoption. Moreover, possible changes to the proposed AIA could also affect the implementation of the proposed liability directives, as the directives depend on the AIA’s definitions and standards. The legislative measures to establish AI liability are a significant step towards effectively regulating AI. However, caution is still warranted in regard to the intricacies of AI systems, especially when the objective is to protect injured parties.

Sources

  1. https://ec.europa.eu/commission/presscorner/detail/en/ip_22_5807
  2. https://www.europarl.europa.eu/doceo/document/TA-9-2020-0276_EN.html
  3. https://oeil.secure.europarl.europa.eu/oeil/popups/ficheprocedure.do?reference=2021/0106(COD)&l=en
  4. https://single-market-economy.ec.europa.eu/document/3193da9a-cecb-44ad-9a9c-7b6b23220bcd_en
  5. https://www.euractiv.com/section/digital/podcast/the-new-liability-rules-for-ai/
  6. https://www.ccianet.org/wp-content/uploads/2022/08/2022.08.24-Joint-Industry-Letter-on-the-PLD-and-AI-Directive.pdf
  7. https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2022/10/immature-biometric-technologies-could-be-discriminating-against-people-says-ico-in-warning-to-organisations/
  8. https://uitspraken.rechtspraak.nl/inziendocument?id=ECLI:NL:RBDHA:2020:1878
  9. https://www.ohchr.org/en/statements/2018/11/statement-visit-united-kingdom-professor-philip-alston-united-nations-special?LangID=E&NewsID=23881