In the age of connected devices and robotics, cyberspace is no longer limited to bits and bytes. Connected devices and personal-use robots allow activities in cyberspace to affect physical space more directly than ever before, not only with respect to critical infrastructure but also in our homes, workplaces and roads. Maintaining a physical-digital dichotomy becomes even more challenging in the face of developing artificial intelligence technologies that disrupt traditional notions of agency and human involvement in the provision of services and the manufacturing of consumer products.
In light of these technological advancements, important questions arise concerning liability in general and product liability in particular. How should liability be constructed in this context? More specifically, do existing models of product liability fit the realities of connected products and artificial intelligence? Should manufacturers of connected devices be strictly liable for cybersecurity breaches and any resulting damage? Should programmers of machine-learning robots be liable for every expected and unexpected future action of the robot? Alternatively, should we turn to other models of liability, such as service provider liability, operator liability or even end-user liability? Current forms of liability seem insufficient to capture the entire spectrum of possibilities and nuances that arise in the context of connected devices and machine learning.
This project seeks to explain why current law and doctrine cannot accommodate these technological advancements. Product liability regimes are commonly restricted to physical damage and cannot account for other types of harm, such as privacy violations, monetary losses, denial of critical services and the like. General forms of tort liability are also inadequate, for several reasons. Tort law requires agency as a precondition, yet in the age of artificial intelligence and machine learning the question of agency becomes extremely challenging, and in the absence of legal accountability for robots and computers, tort law cannot necessarily respond to such challenges. Moreover, strict liability regimes may impose an excessive burden on manufacturers or distributors of connected devices and machine-learning products, since the ultimate purpose of such products is to function in ways the manufacturer cannot necessarily foresee. Negligence is likewise insufficient, because the duty of care and the standards for reasonable precautions depend on a baseline that is constantly shifting in these technological fields.
The project will further review the factors that must be weighed to produce a sustainable liability model. Choosing the most suitable model depends on understanding the risks involved, the technology itself, the protected individual rights and their underlying justifications, the roles of the various actors involved, and the potential outcomes and market effects of each alternative. Only by weighing these factors will it be possible to strike a balance that provides proper deterrence and compensation mechanisms, on the one hand, without restricting the technological activity altogether and thereby impeding innovation, on the other.
Ultimately, this project aims to suggest supplementary rules that, together with existing liability models, could provide adequate legal structures suited to these business and technological requirements. Such supplementary rules may include a duty of oversight, built-in emergency backdoors, duties of instruction and various insurance-based solutions. The argument is that these supplementary rules could operate as statutory duties or self-regulatory safe harbors that complement existing liability models such as negligence or strict liability. If adopted, they could establish clear rules or best practices defining the scope of potential liability for manufacturers, distributors and operators of connected devices and machine-learning products. Actors who employ such tools could be exempted from liability in tort, giving them the certainty required to establish sustainable lines of business.