In a few words, can you tell us about yourself and how you found your way to the academic field?
My PhD focused on the role of the parties' emotions in contract law. Emotions work in concert with norms - the violation of norms often triggers an emotional response, and emotions provide the motivation to uphold norms. This led me to my current research, which focuses on the interaction between AIs and social norms.
What is the main core of your research? Can you give an example or two? How is it related to cyber security?
My current research focuses on AI and social norms. The research begins by asking whether AI can and should be taught to follow the social norms that guide human behavior and, from the opposite perspective, how frequent human-AI interaction would affect social norms and later interpersonal interaction. In light of these questions, the research will then offer policy recommendations for the regulation of AI. AIs and their regulation, of course, are integral to cyber security. For example, requiring designers and producers to build adherence to (certain) social norms into the machine learning phase may be invaluable in ensuring that AIs do not threaten, but in fact advance, cyber security.
Why did you choose this area over all others? Did your personal or professional background lead you to it?
I believe that AI technology allows us to discuss anew the role, place, and desirability of social norms in their relation to law.
Do you think that in this cyber age these issues are even more complex compared to other times in history? If so – in what ways?
In the particular context I am working on, I think that the issue is not more complex, but that this age opens a new opportunity for society to renegotiate old compromises. Indeed, I believe that a similar discussion could have been held (and might have actually taken place) when it comes to corporations. I am not sure, however, that the compromises reached there are still desirable today.
After explaining the main core of your research, what do you think is the solution? What is the proper model for that? Is it applicable?
The research is still in its early days, so it is too soon to say for sure. My co-researchers' intuition and mine is that the answer is context-dependent. For example, AIs should follow (some or all) social norms when interacting with humans, but should not be required to do so when interacting with other AIs. In fact, frequent AI-AI interaction might produce new norms of behavior, some of which may prove relevant for humans as well.
What is the next phase in your professional life?
I am currently focused on the present project, which I hope will continue for the next few years.
What is your message to the public?
Make AIs human again