Interview with Omri Rachum-Twaig

In a few words, can you tell us about yourself and how you found your way to the academic world?
I started studying law immediately after a career as a performing musician. This transition naturally drew me towards legal fields such as intellectual property, which, in turn, pushed me to learn more about information law and technology law. At a certain point, I understood that my interest lies in combining practice and academia in cutting-edge legal fields such as cyber law, privacy, and intellectual property. I believe that these fields always require both a practical and an academic perspective.

What is the main core of your research? Can you give an example or two?
My current research focuses on understanding how the law would (or should) impose liability in the face of highly disruptive technologies such as AI-based robotics. An example would be a smart personal assistant that, as part of performing an everyday task, decides to sell the user's personal data because this is the most efficient way to complete the task (without being so instructed by the user or the designer of the robot). This is highly related to cyber security because cyberspace (and to a great extent the physical space too) will soon be populated by many AI-based robots. If we have no tools to regulate them and to impose legal liability for misconduct, then cyberspace will surely be less secure.

Why choose this area over all others? Did your personal or professional background lead you to it?
New technologies have always been an interest of mine. Specifically, I find interest in technologies or phenomena that require us to return to basic legal doctrines such as property law, and in this case tort law. This is an opportunity to revisit well-established doctrines and see whether they are still applicable and what we can do to adjust them (or even change them altogether).

Do you think that in this cyber age these issues are even more complex compared to other times in history? If so – in what ways?
The issues are not necessarily more complex. Tort law basics have been an acute question in every significant sociological change (e.g., the industrial revolution). The difference is that technological change is becoming more rapid, and thus the law is revisited more often and with higher degrees of required knowledge and interdisciplinarity. This is not necessarily a qualitative difference, but it is definitely a quantitative one.

After explaining the main core of your research, what do you think is the solution? What is the proper model for that? Is it applicable?
What I suggest in my current research is enhancing the current (failing) tort law doctrines with supplementary rules that would apply only to the operation and design of AI-based robots. For example, I suggest that any robot should be monitored for unexpected behavior. Thus, even if we cannot predict what will go wrong and when, we can detect irregularities and react in a timely manner. In addition, I suggest applying "emergency brakes" to robots so that users will always have the ability to suspend a robot's autonomous capabilities and reduce expected harm in extreme situations.

What is the next phase in your professional life?
Probably more of the same. I strive to continue balancing my law and technology practice with my academic initiatives, and to focus on the new challenges that the law faces in light of new technologies and sociological changes. In the near future, I plan to keep studying the field of artificial intelligence and to suggest a more robust, contextualized approach to regulating AI-based robots.

What is your message to the public?
We do not have to fear new technologies, but we do have to be very knowledgeable about them even if we are not their designers. This will make us smart consumers of technology and will allow us to enjoy its benefits more efficiently and safely. This is true for users and consumers, as well as for academics and lawyers.