AI-based software provides new avenues for managing operations in areas once dominated by human discretion or perceived as too complex for any computerized solution. AI's unique capabilities offer great potential for societal benefit, yet at the same time present multiple legal, ethical, and social risks that challenge existing regulatory regimes. Acknowledging these potential harms, several new legislative initiatives, as well as private ethical principles, seek to address some of these risks. Left relatively untouched by these initiatives are AI's broad societal impacts, which cannot easily be translated into direct infringements of the rights or safety of concrete individuals. AI-based technologies produce widespread socio-political transformation, or what Silicon Valley jargon commonly calls "disruption". Such effects of new and emerging technologies, which can bear on distributional justice, economic activity, political and power relations, civic structures, and physical infrastructure, remain relatively unattended by contemporary and evolving regulation.
Should we regulate these types of disruptive societal impacts, and if so, how? What are the appropriate vocabularies, methods, and metrics for examining and evaluating these risks and integrating them into ethical or legal analysis? Who should examine them, and at what stage of development, if at all?
In this research project we aim to map societal disruptive risks, examine the unique normative and methodological challenges associated with such risks, and explore the regulatory challenges they raise, especially in the context of early intervention at the development and deployment stages.
This meeting will be in-person, in Beit-Maiersdorf.