AI-based software provides new avenues for managing operations in areas once dominated by human discretion, or perceived as too complex for any computerized solution. These unique AI capabilities offer great potential for societal benefit, yet at the same time present multiple risks of a legal, ethical, and social nature, challenging the existing regulatory regimes governing software development and deployment. Acknowledging these potential harms, the draft EU "Artificial Intelligence Act" is an initial attempt to address the risks associated with AI-based technologies. Left unattended by EU regulation, and by specific regulation in distinct domains such as autonomous cars, are broad social impacts that cannot be easily translated into direct infringements of rights, or into safety or health risks for concrete individuals. AI-based technologies produce widespread socio-political transformation, or what is commonly called in Silicon Valley jargon "disruption". Such effects of new and emerging technologies, which can affect distributive justice, economic activity, power relations, civic structures, and physical infrastructure, are left relatively unattended by contemporary and evolving regulation.
Should we regulate these types of disruptive societal impacts, and if so, how? What are the appropriate vocabulary, methods, and metrics for examining and evaluating these risks and integrating them into ethical or legal analysis? Who should examine them, and at what stage of development, if at all?
In this research project we aim to map societally disruptive risks, examine the unique normative and methodological challenges associated with such risks, and explore the regulatory challenges they raise, especially in the context of early intervention at the development and introduction-to-use stages.
In the upcoming brainstorming workshop we will raise these questions and examine together what research tools, vocabulary, and normative frameworks should be used to formulate a specific research agenda on this subject.