May 2019

Liability for Artificial-Intelligence-Based Robots

By: Omri Rachum-Twaig.

In March 2016, Microsoft shut down its artificial-intelligence (AI)-based chatbot, Tay, which had been developed to autonomously interact with users via Twitter and provide data for research on conversational understanding. Tay was supposed to adapt and teach itself conversational skills by analyzing Twitter tweets. The shutdown came after a series of racist and misogynistic tweets by Tay, which surprised both users and Microsoft.[1] A little over a year later, Facebook had to shut down its own AI-based chatbot experiment. After launching an experiment intended to develop autonomous bargaining skills between chatbots, the Facebook developers noticed that the two bots, Alice and Bob, had begun interacting in an unintelligible manner. Unable to decipher the code the two bots were using, Facebook shut down the project.[2] These two bot cases had relatively harmless implications, but they suggest a potential pattern regarding the use and deployment of AI-based technology.

What if, in the (near) future, Tay were able to exert substantial influence, leading to the actual formation of hate groups that inflict physical harm? In the near future – if this has not already occurred – robots could conspire to take over Facebook accounts and retrieve sensitive personal or financial information about individuals.

The idea of robots with autonomous capabilities and intelligence ungoverned by human direction or supervision dates back several decades.[3] These once-futuristic ideas have steadily been incorporated into very real, present-day technology, first with respect to digital communications and cyberspace and, mainly in recent years, with respect to the physical world as well.

In the age of connected devices and robotics, cyberspace is no longer limited to bits and bytes. Connected devices, personal-use machines, and robots allow activities in cyberspace to affect the physical world more directly and concretely than ever before, not only with respect to critical infrastructure but also in our homes, at our workplaces, and on our roads. Combined with AI technology, products and machines disrupt the idea of agency and the involvement of human beings in the provision of services and the manufacture of consumer products. This trend is coupled with the unpredictable manner in which such robots behave and the inability to foresee the risks they may inflict.

In light of these technological advancements, important questions relating to liability arise. How should liability be constructed in the absence of any apparent agency or personhood, or when actions are almost inherently unforeseeable? More specifically, in the context of AI-based robots, do models of products liability or other tort liability fit the new framework? Should designers of AI-based robots be strictly liable for damages inflicted by their creations? Should programmers of autonomous robots be liable for all the robots’ expected and unexpected future conduct and actions? Current forms of liability seem to be insufficient to capture the entire spectrum of possibilities and nuances that arise in the context of AI-based robots.

My main argument is that the lack of personhood and agency, and the impossibility of foreseeing and explaining certain behaviors the robots exhibit, are two key factors that substantially disrupt current tort doctrines in the context of AI-based robots. For example, products liability doctrines (as well as other tort doctrines) are commonly restricted to physical injuries and damage to property and cannot necessarily account for other types of damages such as privacy violations, pure economic harm, denial of critical services, and the like. Moreover, they are generally limited to harm caused by design and manufacturing defects, a concept that does not easily fit the idea of AI if we believe that part of its purpose is to generate unexpected and inexplicable results. In addition, other general forms of liability in torts are inadequate for several reasons. Tort law generally requires agency as a precondition. However, in the age of AI and autonomous machines, the question of agency may pose challenges to which, in the absence of the legal accountability of robots, tort law cannot necessarily respond. Negligence is also insufficient because the duty of care and the standards for reasonable precautions depend on a baseline that is constantly changing in these technological fields, and they are disrupted by new types of unexpected harms and a general lack of foreseeability, which undermine both the concept of breach of duty and the general concept of causation.

Zooming out from specific doctrines, I argue that three main tort liability regimes cannot solve the AI challenges. Strict liability regimes may impose an excessive burden on persons utilizing AI-based robots, since the ultimate purpose of such products is to function in an unpredictable manner that the manufacturer cannot necessarily foresee. Thus, manufacturers are not necessarily better situated to assess the risks and the ways to prevent them. Negligence as a liability regime appears to be inadequate as well, due to the difficulty courts are expected to face in setting the optimal level of care in the context of AI. Even full no-fault mandatory insurance schemes cannot necessarily overcome these shortcomings, due to the difficulty of setting premiums and assessing potential risk, as well as the cross-jurisdictional nature of AI robotics.

Ultimately, I suggest employing supplementary rules that, together with existing liability models, could provide better legal structures that fit these business and technological requirements, at least for the near future and in the absence of the legal liability of robots. Such supplementary rules would function as quasi-safe harbors or predetermined levels of care. Meeting them would grant immunity from specific doctrines, such as products liability, and would shift the burden of proving negligence back to potential plaintiffs. Failing to adhere to such rules would lead to liability. Such supplementary rules may include a monitoring duty, built-in emergency brakes, and ongoing support and patching duties. The argument is that these supplementary rules could be used as a basis for presumed negligence that complements the existing liability models. If adopted, they could establish clear rules or best practices that determine the scope of potential liability of designers, operators, and end-users of AI-based robots. Such a model of presumed negligence and quasi-safe harbors may fit those circumstances in which harms caused by AI-based robots disrupt current tort doctrines. Naturally, AI-based robots will function in various ways, some of which may not raise such difficulties. Thus, different liability models would apply to different phenomena associated with AI-based robots.


[1] Sarah Perez, Microsoft silences its new A.I. bot Tay, after Twitter users teach it racism, TechCrunch (March 24, 2016), https://techcrunch.com/2016/03/24/microsoft-silences-its-new-a-i-bot-tay...; James Vincent, Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day, The Verge (March 24, 2016), https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist.

[2] Andrew Griffin, Facebook's artificial intelligence robots shut down after they start talking to each other in their own language, The Independent (July 31, 2017), https://www.independent.co.uk/life-style/gadgets-and-tech/news/facebook-....

[3] See, e.g., Isaac Asimov, I, Robot (1950); Arthur C. Clarke, 2001: A Space Odyssey (1968).
