August 2019

Insurability of Artificial Intelligence Algorithms and Robots – A Different Version of the Same Policy

By: Anat Lior.


The discussion about Artificial Intelligence and the insurance market is divided into two paths which operate within an influence loop. The first, more popular one, concerns the way AI influences, and will continue to influence, actuarial science and the insurance market. On this facet, AI enhances insurance companies' ability to better define insurance policies, allegedly increasing efficiency by calculating accurate premiums and marching the insurance market and actuarial science into a more precise future.[1] We will refer to this as the AI for insurance path. The second path, which has yet to receive the attention it deserves, is the discussion about the insurability of AI machines, robots, agents and algorithms (hereinafter: "AI creatures"[2]). This path concerns instances in which AI creatures cause harm to humans or damage to property. We will refer to this as the insurance for AI path.

Insurance for AI, which is the focus of this blog, raises important legal questions at the intersection of tort and insurance law. The discussion about AI liability has been prominent over the last couple of years, as various scholars have suggested different liability regimes that should apply to AI-inflicted damages. Some call for the application of negligence, while others call for a strict liability regime, or a combination of both, for example via the adoption of safe harbors.[3] The intersection between tort law and AI is not complete without discussing the important part that insurance plays in damages inflicted by AI creatures.[4]

One aspect of insurance for AI that is more developed is the discussion about insuring autonomous vehicles (AVs). In October 2015, Volvo's CEO declared that the company would take full responsibility for all accidents caused by its AVs,[5] thereby freeing the car owner/user from liability. Distinct from the position of Volvo's CEO, regulators in the UK have opted for a different liability scheme. In 2018, Parliament enacted the Automated and Electric Vehicles Act.[6] The Act states that if an accident is caused by an insured AV, the insurer is liable for that damage; if the AV is not insured, the owner of the AV is liable. Scholarly suggestions in this avenue are largely based on the existing no-fault liability insurance scheme, which has been commonplace in the auto-insurance industry for the last couple of decades. Other suggestions offer different insurance schemes, such as creating a fund which will provide remedies to victims, or declaring a sole manufacturer liability rule.[7] Despite these suggestions, it seems more likely and efficient that the future insurance market of AI creatures will be largely based on existing insurance infrastructure and will, in fact, extend the mandatory insurance scheme to other AI creatures, much like the UK Act.

A no-fault accident compensation scheme for AI creatures (i.e., a mandatory third-party insurance scheme) may help avoid legal questions of liability and blame-placing; provide some predictability with regard to the identity of the entity compensating for the damages; and channel the behavior of insurance purchasers toward minimizing the risk of harm, by requiring them to meet certain standards in order to maintain the validity of their policy.

This scheme has a couple of drawbacks, however. First, as in all insurance contracts, the insurer can choose to exclude unpredictable damages, thus undermining our attempt to use insurance as a tool to avoid difficult questions of foreseeability in the AI context.[8] Second, insurance policies inherently create moral hazards that undermine the important goal of deterrence.[9] Third, in the context of some AI creatures, such as the autonomous vehicle, mandatory insurance is difficult to enforce: the lack of physical borders and the political differences among countries may impede the application of a unified insurance scheme. Fourth, cost allocation and premium estimation are more difficult in the AI context than in other fields. These difficulties include questions such as who will be responsible for purchasing an insurance policy, and how premiums can be determined given the unpredictability of AI creatures.[10]

Despite these drawbacks, I believe a mandatory insurance scheme should be adopted, as it is a valid and beneficial way to regulate AI creatures. Most of the challenges presented above are not unique to AI and have arisen in the past when new technologies, such as aviation and motor transportation, emerged. Questions about who should purchase the insurance policy and how the premium should be set will be resolved as our usage of these AI creatures becomes clearer. Insurance companies will be able to harness AI itself, i.e. AI for insurance, to set accurate premiums and to identify the entity best positioned to be responsible for purchasing liability insurance. This is the influence loop between the two paths: AI for insurance enhances actuarial capacity, which in turn helps to better regulate insurance for AI.[11]

Creating a tailor-made insurance scheme for AI creatures seems impractical at the moment. The only genuinely new policy created in recent decades is the terrorism policy,[12] and it is unlikely that new forms of technology will lead to the creation of new policies, at least not as they first enter the commercial market. The solution for AI-inflicted damages will therefore have to stem from our existing insurance practices and infrastructure. It is up to us to best utilize this infrastructure to ensure that the vital tool of insurance helps regulate the current and future use of AI creatures.


[1] See e.g. McKinsey's report, Insurance 2030—The Impact of AI on the Future of Insurance (April 2018), www.mckinsey.com/industries/financial-services/our-insights/insurance-20....

[2] This phrase was chosen for its relative neutrality: AI creatures are not always robots, nor are they always algorithms.

[3] For a discussion in favor of strict liability see e.g. David C. Vladeck, Machines Without Principals: Liability Rules and Artificial Intelligence, 89 Wash. L. Rev. 117, 146–147 (2014); for a discussion in favor of negligence see e.g. Bryan Casey, Robot Ipsa Loquitur, Geo. L.J. (forthcoming 2019); for a discussion about safe harbors see e.g. Omri Rachum-Twaig, Whose Robot Is It Anyway?: Liability of Artificial-Intelligence-Based Robots, 2020 U. Ill. L. Rev. (forthcoming).

[4] For the reciprocal influence insurance law and tort law have on each other, see e.g. Tom Baker, Liability Insurance as Tort Regulation: Six Ways that Liability Insurance Shapes Tort Law in Action, 12 Conn. Ins. L. J. 1 (2005).

[5] Kirsten Korosec, Volvo CEO: We Will Accept All Liability When our Cars are in Autonomous Mode, Fortune (Oct. 7, 2015), fortune.com/2015/10/07/volvo-liability-self-driving-cars/.

[6] Automated and Electric Vehicles Act 2018, c. 18 (UK).

[7] See e.g. Carrie Schroll, Splitting the Bill: Creating a National Car Insurance Fund to Pay for Accidents in Autonomous Vehicles, 109 Nw. U. L. Rev. 803 (2015); Curtis E.A. Karnow, Liability for Distributed Artificial Intelligences, 11 Berkeley Tech. L. J. 147, 181 (1996); Kenneth S. Abraham & Robert L. Rabin, Automated Vehicles and Manufacturer Responsibility for Accidents: A New Legal Regime for a New Era, 105 Va. L. Rev. 127, 139 (2019); Omri Rachum-Twaig, supra note 3.

[8] Jacob Turner, Robot Rules: Regulating Artificial Intelligence in the 21st Century 117 (2018).

[9] Moral hazard is an inseparable and important part of insurance law, but delving into it exceeds the scope and purpose of this blog. For more on moral hazards see Tom Baker, On the Genealogy of Moral Hazard, 75 Tex. L. Rev. 237 (1996); Louis Kaplow, An Economic Analysis of Legal Transitions, 99 Harv. L. Rev. 509 (1986); Steven L. Schwarcz, Systemic Risk, 97 Geo. L.J. 193 (2008); Daniel Keating, Pension Insurance, Bankruptcy and Moral Hazard, 1991 Wis. L. Rev. 65.

[10] Omri Rachum-Twaig, supra note 3, at 28–31.

[11] A caveat for this statement is that actuarial science might actually fail us in the AI context because of AI's unpredictability, bias, and the lack of sufficient claims and historical data. See e.g. Ronald Richman, AI in Actuarial Science, available on SSRN, papers.ssrn.com/sol3/papers.cfm?abstract_id=3218082; Jean Lemaire, Challenges to Actuarial Science in the 21st Century, 34 Astin Bulletin 271 (2004).
