The idea of robots possessing autonomous capabilities and intelligence, ungoverned by human direction or supervision, dates back several decades. These once-futuristic ideas have steadily been incorporated into very real, current technology. Artificial Intelligence (AI) is increasingly embedded in everyday life, from autonomous cars and medical diagnostics to manufacturing and service provision, and one can easily imagine a future in which humans interact with AIs as frequently as they do with other humans, if not more.
Interactions breed disputes, and legal scholars are now envisioning the legal response to the infusion of AI into everyday life. Most questions remain open. Conceptually, we may think about the definition and legal status of AIs. Normatively, the very desirability of AIs, and the type and extent of risk they should be allowed to pose to humans, remain unsettled. Practically, the use of AIs produces a host of evidentiary and remedial questions.
Alongside these questions, the embeddedness of AIs in everyday life raises questions about our social lives and norms. AIs gain their ability to perform complex functions in various settings through machine learning. Machine learning comes in various forms, but however one trains an AI, the question arises of which behavioral and social norms we wish AIs to learn from and imitate. Should and can AIs be taught to follow the social norms humans follow, or to ignore them altogether, or does the answer depend on the particular context and type of interaction? How will frequent human-AI interactions affect the norms governing relationships between individuals? And will the answers to these questions facilitate (or undermine) individuals’ trust in AIs and willingness to relinquish control to them?
The workshop aims to provide different perspectives on these and similar questions concerning AI law, offering those engaged with AI law the opportunity to present, hear, and discuss cutting-edge research, new projects, and innovative ideas.
(More information to come)