Who Is Liable for AI-Driven Accidents? The Law Is Still Emerging

To establish negligence, a plaintiff needs to prove causation.

June 21, 2023
The Autonom, an electric self-driven car by Navya, is unveiled at the Cité du cinéma in Saint-Denis, France, November 7, 2017. (Aurore Marechal/ABACA via REUTERS)

The challenge of co-existence with artificial intelligence (AI) is a growing concern for governments worldwide. The environmental, societal and personal harms that such systems can cause, and have caused, are prompting governments to intervene with policy and legislation. Guardrails are being quickly developed.

Part of this work includes developing legal responses to AI-driven harms and accidents, which have become increasingly prevalent. A federal agency in the United States reports that self-driving cars were involved in almost 400 car crashes in 2021 alone. Yet the complexity and inscrutability of AI make it difficult to enforce established legal norms.

In September 2022, the European Commission proposed two remedial directives — the AI Liability Directive and the Product Liability Directive. The aim was to adapt tort law to the distinctive characteristics of accidents caused by AI systems. Tort law deals with acts or omissions that cause harm or injury for which the court imposes liability. It is integral to remedying loss or harm caused by accidents, whether the harm is physical, financial, reputational or emotional. Through tort-based litigation, victims of accidents can claim financial compensation for harm caused by intentional conduct or the failure to meet a duty of care. In most cases of AI accidents, such as injury caused by a self-driving vehicle, it is the tort of negligence that would apply.

To establish negligence, a plaintiff needs to prove causation. In other words, the plaintiff must substantiate that the defendant’s actions or omissions caused injury. To do so, the plaintiff must show that the harm caused was a foreseeable consequence of the defendant’s conduct. But AI systems have raised fundamental doubts as to the viability of this test, due to the opacity of their internal decision-making processes and the distribution of responsibility among the various actors, entities and automated processes involved in the development and deployment of AI systems.

Establishing Liability under Tort Law for AI Injuries

Andrew Selbst, assistant professor of law at the University of California, Los Angeles, observes that the most common current use of AI is in “decision assistance” to humans, rather than in fully autonomous robots. The use of AI tools, he argues, replaces or augments “human decision processes with inscrutable, unintuitive, statistically derived, and often secret code.” While replacing human decision processes may in certain cases enhance safety, it can also obscure the foreseeability of harm, as automation may dull human intuition and experiential learning. Aircraft autopilots, for instance, have been linked to the de-skilling of pilots, which can hamper their ability to handle crisis situations.

Further, AI systems have complex internal workings. Their decision-making processes are inscrutable, effectively making them black boxes. Users of such systems may not know, or be able to detect, when an error occurs. Owners of proprietary AI compound this impenetrability by enforcing intellectual property rights. This can make it extremely difficult for victims of AI accidents or other harms to gather evidence.

To address the challenges presented by AI systems to litigating torts, the European Union has proposed revisions to tort liability frameworks. Broadly, the proposals aim to alleviate the burden on victims of AI accidents to establish causation.

European Union’s Proposed Directives

For example, the European Commission’s new proposal for non-contractual civil liability — the AI Liability Directive — introduces a rebuttable presumption of causality in the case of injuries caused by AI systems. As commentary on the directive has noted, the presumption is a limited one, drawn “between the breach of a duty of care by the defendant and the AI system’s output.”

In addition, the directive seeks to help victims access evidence in the defendant’s possession by giving national courts the power to order disclosure of evidence pertaining to so-called high-risk AI systems. The latter have been defined by the proposed AI Act as those systems that “pose significant risks to the health and safety or fundamental rights of persons.”

Amendments have also been recommended to the product liability regime to expressly cover AI products. (Product liability is a kind of tort that protects consumers from injuries caused by manufacturing and design defects in products.) The proposed directive would allow national courts in the European Union to presume a product’s defectiveness, or the causal link between defect and injury, if they deem that the claimant “faces excessive difficulties, due to technical or scientific complexity,” in proving these elements.

By simplifying disclosure, the directives make it easier to shine a light into black-box AI systems and obtain insights into the system’s decisions. When an AI system’s process or output is explainable, foreseeability and defectiveness can be contested. However, the mere availability of data is not enough. Presenting the data in a manner that is easy to understand and interpretable by the plaintiff is also vital.

Who Should Be Held Liable for AI Accidents?

The unique features of AI systems also complicate the determination of who is liable. The development and the deployment of AI involve numerous actors, including hardware manufacturers, software developers and data trainers, leading to a diffusion of responsibility, commonly referred to as the “problem of many hands.” It can result in a situation where no one, or only the actor with the lowest position in the chain of command, is held accountable for a harm. Moreover, as Filippo Santoni de Sio and Giulio Mecacci observe, the opacity of the outputs produced by these systems can make it “more difficult for individual persons to satisfy the traditional conditions for moral and legal culpability: intention, foreseeability, and control.”

In 2017, the European Parliament considered establishing a compensation fund for accidents, which would apply to either all smart, autonomous and adaptive robots or specific robotic categories. The fund would be financed by manufacturers, programmers, owners and users of robotic systems and used to compensate victims in case an accident occurred. The proposal also suggested that those who contributed to the fund would face only a limited liability in case of an accident.

A major benefit of the proposal was that it would guarantee compensation and obviate the difficult task of finding the entity responsible for the accident. However, the trade-off between contributing to the fund and limiting liability needs to be considered carefully. Granting manufacturers, programmers and other actors immunity from liability merely because they contributed to a fund could disincentivize them from proactively incorporating safety measures in their AI products.

Controversially, the 2017 proposal also suggested exploring the creation of a separate legal status, or “electronic personality,” for robots. The idea drew considerable backlash, with critics accusing manufacturers of seeking to absolve themselves of liability, and the proposals for both the electronic personality and the compensation fund were ultimately dropped.

The discussion of who is legally responsible for AI accidents cannot and should not occur in isolation. Instead, it’s crucial that we adapt tort law, the traditional foundation of accident litigation, so that it both encourages producers, manufacturers and users to make AI products safer and empowers victims of AI accidents to obtain compensation. The European Union has been at the forefront of legal advancements aimed at adapting tort frameworks to the digital age and holds important lessons for other jurisdictions. Chief among these advancements are the alleviation of the burden of proof on victims and the enabling of their access to evidence.

The responsible development and deployment of AI requires justice and accountability for victims of AI accidents. These objectives can only be achieved if the legal complexities of dealing with this rapidly developing technology are addressed.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Amrita Vasudevan is a CIGI fellow and an independent researcher focusing on the political economy of regulating digital technologies and investigating the impact of these technologies through a feminist lens.