By Evangelia Manika Associate, LL.B MSc
In September 2022, the European Commission proposed the AI Liability Directive, which aims to establish a harmonised legislative framework for dealing with consumer liability claims for damage caused by AI products and services. The proposal applies to damage caused by any type of AI system, both high-risk and non-high-risk. Its purpose is to improve the functioning of the internal market by establishing new and uniform rules for certain aspects of non-contractual civil liability for damage caused by AI systems, and to reduce the legal uncertainty surrounding liability claims for AI-related damage.
In particular, the proposal applies to non-contractual fault-based civil law claims for damages in cases where damage is caused by an AI system. It does not, however, apply to criminal liability. The new rules will ensure that any type of victim, whether an individual or a business, has a fair chance of obtaining compensation for harm caused by an AI system. The proposal provides for a statutory responsibility to compensate for damage caused intentionally or by a negligent act or omission.
For the time being, national liability rules are ill-equipped to handle claims for damage caused by AI-enabled products and services. When AI is involved, it is excessively difficult for the victim to identify whom to sue and to prove the fault, the damage, and the causal link between the two, since AI systems are often highly complex and opaque. As a result, it is almost impossible for the victim to meet this burden of proof.
In this context, the proposal introduces two main safeguards to ensure that victims can seek effective redress for AI-related damage. First, the AI Liability Directive alleviates the victims' burden of proof by introducing a 'presumption of causality', which addresses the difficulties victims face in having to explain in detail how harm was caused by a specific fault or omission, thus offering broader protection for victims. Second, victims will have more tools to seek legal reparation: where high-risk AI is involved, the AI Liability Directive introduces a right of access to evidence held by companies and suppliers, which will allow victims to identify the person who could be held liable and to find out what went wrong.
This proposal is part of a package of measures to support the roll-out of AI in Europe by fostering excellence and trust. In particular, together with the revision of the Product Liability Directive, it complements the Commission's effort to make liability rules fit for the green and digital transition. Together, these rules will promote trust in AI (and other digital technologies) by ensuring that victims are effectively compensated when damage occurs, while the AI Liability Directive makes compliance with the AI Act (AIA) even more crucial.