Artificial Intelligence Liability, Nonreciprocal Risks and Network Theory / Anat Lior

Date: 

Wed, 29/05/2019 - 15:00 to 16:30

Location: 

I-CORE – The Center for Empirical Studies of Decision Making and the Law, the Faculty of Law, Mt. Scopus, The Hebrew University of Jerusalem


Anat Lior

This article focuses on the field of AI-inflicted torts and the question of who should be held liable when damage occurs. I argue that the combination of Fletcher's nonreciprocal paradigm[1] with the study of network theory[2] establishes a well-grounded basis for applying a strict liability regime when AI-inflicted damages transpire. This paper discusses how this combination can be used in the context of robot-human interactions[3] to identify the appropriate liability regime and the liable party.

The nonreciprocal approach states that "a victim has a right to recover for injuries caused by a risk greater in degree and different in order from those created by the victim and imposed on the defendant", i.e., for injuries and damages that result from a "nonreciprocal risk". If the defendant has generated a "disproportionate, excessive risk of harm, relative to the victim's risk-creating activity", she will be found liable under this approach. Fletcher argues that this approach stems from a notion of fairness: all individuals in a given society are entitled to roughly the same degree of security from risk. A prominent critique of this approach claims that measuring nonreciprocal risks is subjective and turns arbitrarily on analogies to, and the novelty of, an activity.[4] This is where network theory can be helpful.

Network theory is the study of the symmetric or asymmetric relations between discrete objects; its focus is on the relationships between objects rather than on the objects themselves. The theory describes the objects as nodes (in the case of AI, a node can be a robot or an algorithm) and the relationships between these nodes as edges. Together, these two elements present a view of the world as a vast web of connections (edges) between nodes. To analyze a network and its features, network theory examines, inter alia, its degree of connectivity, its overall structure (centralized, decentralized, or distributed), and its properties (robust or critical).
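
To make these terms concrete, here is a minimal sketch (an assumed illustration using the Python library networkx, not part of the article) of a small mixed human-AI network, showing how its nodes, edges, and degree of connectivity can be represented and measured:

```python
# A minimal sketch (an assumed illustration, not from the article): modeling
# a small mixed human-AI network with the networkx library and inspecting
# the structural features discussed above.
import networkx as nx

G = nx.Graph()

# Nodes: the discrete objects -- here, three humans and two AI creatures.
G.add_nodes_from(["alice", "bob", "carol"], kind="human")
G.add_nodes_from(["news_bot", "hft_algo"], kind="ai")

# Edges: the relationships between nodes (messages, trades, interactions).
G.add_edges_from([
    ("news_bot", "alice"), ("news_bot", "bob"), ("news_bot", "carol"),
    ("hft_algo", "bob"), ("alice", "bob"),
])

# Degree of connectivity: how many edges each node participates in.
print(dict(G.degree()))  # the bot node connects to the most neighbors

# Overall structure: density is one cue for how centralized the network is.
print(nx.density(G))
```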

AI machines, robots, agents, and algorithms (hereinafter: "AI creatures"[5]) are able to connect more intensely than humans can. The ability to communicate significantly faster, repetitively, and across a vast number of platforms simultaneously is unique to online networks and the AI creatures that act upon them. This leads us to focus on AI creatures' level of activity, which is significantly higher than that of their human counterpart nodes and thus creates excessive risks.[6] These high levels of activity, combined with their repetitive and all-encompassing nature, lead to the creation of nonreciprocal risks even when the desired level of care is maintained. Examples of this notion can be found in AI bots spreading fake news, high-frequency trading (HFT) algorithms, and Distributed Denial of Service (DDoS) attacks carried out by networks of AI bots.

Reducing these harmful levels of activity is better incentivized by a strict liability regime, under which the injurer must take her level of activity into account, knowing that she will be liable for any damage that might occur. A negligence regime cannot fully account for an injurer's level of activity in deciding whether she has acted negligently; it focuses solely on her level of care. A negligence regime therefore provides weaker incentives to reduce these harmful levels of activity.[7]
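
To illustrate the incentive gap, here is a minimal numeric sketch (with assumed, illustrative numbers, in the spirit of the standard law-and-economics model Shavell describes): once due care is taken, a negligent injurer bears no liability and so ignores the harm her activity imposes, while a strictly liable injurer internalizes that harm and scales her activity back.

```python
# A minimal sketch with assumed numbers (not from the article): the injurer
# picks an activity level; her benefit is concave, and each unit of activity
# imposes a fixed expected harm on others.

HARM_PER_UNIT = 4.0  # assumed expected harm per unit of activity

def private_benefit(activity: int) -> float:
    """Concave private benefit: each extra unit of activity is worth less."""
    return 10 * activity - activity ** 2

def best_activity(payoff) -> int:
    """The activity level in 0..10 that maximizes the injurer's payoff."""
    return max(range(11), key=payoff)

# Negligence: having met the due-care standard, the injurer pays nothing,
# so she maximizes raw benefit and ignores her activity's external harm.
under_negligence = best_activity(private_benefit)

# Strict liability: the injurer pays for all harm she causes, so she weighs
# benefit against expected harm and chooses a lower level of activity.
under_strict = best_activity(lambda s: private_benefit(s) - HARM_PER_UNIT * s)

print(under_negligence)  # 5 -- activity pushed to the private optimum
print(under_strict)      # 3 -- activity scaled back to the social optimum
```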

Examining the features of networks composed of both human and AI nodes, and analyzing those networks' characteristics, will help uncover the risks AI nodes impose. If they create nonreciprocal risks, a strict liability regime should be in place. Network theory helps us identify the liable node, unmask its true role in the network, and force it to take responsibility for the nonreciprocal risks it imposes. Utilizing network theory can offer insights into how policies should be designed around these systems in a manner that takes their unique characteristics into consideration. These features yield a more uniform set of rules that will help us decide which liability regime should apply and who should be held liable.
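
As a hypothetical sketch of this "unmasking" (again an assumed illustration, not the article's method), a centrality measure can flag the node through which most of a network's interactions flow:

```python
# A hypothetical sketch: betweenness centrality flags the node that most
# shortest paths pass through, exposing a hub whose position in the network
# amplifies the risks it imposes.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("bot", "alice"), ("bot", "bob"), ("bot", "carol"),
    ("alice", "dave"), ("carol", "erin"),
])

# Betweenness: the share of shortest paths that pass through each node.
scores = nx.betweenness_centrality(G)
hub = max(scores, key=scores.get)
print(hub, round(scores[hub], 2))  # 'bot' sits at the center of the network
```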



[1] George P. Fletcher, Fairness and Utility in Tort Theory, 85 Harv. L. Rev. 537 (1972).

[2] Mark Newman, Networks: An Introduction (2010).

[3] Calo's social valence. See Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 Calif. L. Rev. 513, 545 (2015).

[4] Guido Calabresi & Jon T. Hirschoff, Toward a Test for Strict Liability in Torts, 81 Yale L. J. 1055 (1972).

[5] This term is chosen for its relative neutrality; AI creatures are not always robots, nor are they always algorithms.

[6] See also Leon E. Wein, The Responsibility of Intelligent Artifacts: Toward an Automation Jurisprudence, 6 Harv. J. L. & Tech. 103, 107 (1992) ("[…] automated devices generate liability of a different order or degree than humans performing an equivalent task […]").

[7] Steven Shavell, Liability for Accidents, Handbook of Law and Economics 139, 146–147 (2007).