Interview with Noam Kolt

1. Can you tell us about yourself and how you found your way to academia?

I am deeply fascinated by the interaction between AI, law, and society. My interest began after moving to Israel from Australia and working in national security and corporate law, where I first encountered the far-reaching impact of emerging technologies. My transition from legal practice to academia, somewhat fortuitously, coincided with dramatic advances in language model technology, which has become a focal point of my research. As a doctoral candidate at the University of Toronto, I have been fortunate to be at the forefront of many exciting developments in AI - gaining early access to frontier systems and exploring their broader societal implications.

2. What is your main research focus?

My research centers on examining the impact of AI on social and legal systems, and on designing regulatory frameworks that unlock the technology’s benefits and mitigate its risks. In an empirical project, I studied the ability of a large language model to read and understand consumer contracts. In addition to uncovering a systematic anti-consumer bias in the model, the dataset I created has been integrated into a recent collaborative benchmark for evaluating language models on legal tasks, led by researchers at Stanford University.

On the regulatory front, my research has focused on tackling large-scale societal risks from AI. In a forthcoming article, I argue that policymakers should address “algorithmic black swans” - that is, catastrophic tail events from AI. As the underlying technology advances, the prospect of automated systems manipulating our information environment, distorting societal values, and destabilizing political institutions is increasingly palpable. Current regulatory proposals, however, primarily target the immediate risks from AI and overlook these broader and longer-term risks. To fill this gap, I propose several regulatory principles to address the prospect of algorithmic black swans and mitigate the harms they pose to society.

Across these projects, I strive to engage with cutting-edge technical AI research. For example, I participated in testing and red-teaming OpenAI’s GPT-4 model prior to its public release. More recently, I collaborated with researchers at Google DeepMind to develop procedures for evaluating risks posed by frontier AI systems. As a legal researcher, I find that the insights and skills gained from technical projects offer valuable perspectives on the opportunities and challenges in governing this transformative technology.

3. Why did you choose this area of research and why is it important?

The release of ChatGPT in late 2022 marked a watershed in the public discourse around AI. The impact of AI is increasingly felt across industries, geographies, and aspects of daily life. The case for critically analyzing the implications of this technology and steering it in prosocial directions is stronger than ever. Ensuring that regulation keeps pace with technology is difficult at the best of times - and AI is no exception. Playing a part in that effort is, for me, an exciting and formidable challenge.