Contextualizing Robot Behavior: Should Robots Become Human Again?

By: Omri Rachum-Twaig, Ohad Somech, Eric Hilgendorf

The idea of robots that possess autonomous capabilities and intelligence, ungoverned by human direction or supervision, dates back several decades. These once-futuristic ideas have steadily been incorporated into very real, current technology.

AI technology disrupts the involvement of human beings in manufacturing and in the provision of services. In the absence of a human touch, how should society determine the “personality” of such technology and its interactions with humans and other machines, and how should it enforce such behavioral rules? Should robots behave as humanly as possible? Should they follow social conventions? Or should they be efficient instruments for accomplishing computational goals?

This requires a multi-faceted study. It requires first understanding the different ways in which AI technology is implemented and used, as well as its goals (engineering and computer science). It further requires understanding what potential behavioral rules could apply to such technological “behavior” (behavioral sciences). Then, a normative and ethical study is required to determine which rules we, as a society, would choose to apply (law and ethics). Finally, it requires studying the technological possibilities and constraints of embedding such rules into the learning process and operation of AI-robots (multi-disciplinary).

Such studies, each separately and combined, could have significant implications for the way AI technology is used and publicly received. For example, they could suggest that the most desirable way to use autonomous vehicles is on driverless-only roads, and that in such an environment AI-based cars could drive in an ultra-human manner. In contrast, they could suggest that AI-based customer service agents should be taught the nuances of human behavior when engaging with human users, but could follow other behavioral standards when performing tasks that do not require human engagement. In other contexts, AI-robots that interact with humans for a strictly instrumental purpose, such as financial advisors, could require a different set of behavioral rules.

As an interdisciplinary undertaking, the proposed project is a fusion of independent research in different fields. First, engineering and computer science will provide an overview of AI learning methods and their implications. This framework, based on existing literature and industrial practices, is needed to ground the project in the genuine possibilities and constraints of AI’s learning abilities.

Second, given a focus on social behavior and norms, the project seeks to unearth their existence and prominence in relevant environments. Using surveys as well as field and laboratory experiments, the project will identify prevailing behavioral norms in areas likely to be dominated by social robots and AIs. These include, for instance, driving, customer and citizen service provision, and cyber as well as physical AI-controlled warfare. Furthermore, the project will analyze the impact of (socially) agentic robotic systems on social processes and on humans, i.e., on their social competencies and their relation to social robots.

Third, and in a related fashion, the project will use interviews and field studies to identify how the behavioral norms identified are learned, disseminated, and differentiated. This will facilitate assessing the norms and implementing learning methods when ‘educating’ AI.

Fourth, the project will assess the desirability of behavioral norms. Desirability will be assessed on several levels: the extent to which the norm is ingrained in society; whether AI-based robots are likely to interact with other robots or with individuals; the social context and the purpose the AI is meant to achieve; and the consequences of norms for behavior, social interactions, and the people affected by them. All of these will combine a theoretical-ethical study with empirical knowledge. The latter will be based on interviews on the perception and desirability of norms. For instance, in the context of customer service, surveys will be conducted with managers, employees, and customers to elicit their views. The sociological and behavioral findings will then be infused into a legal and normative discussion, which will allow reaching conclusions with respect to the most appropriate modes of regulation.

Fifth, the above insights will be combined to determine how AI-based robots are to be educated in different contexts. That determination will take into account the desirability of the norms; the context and purpose of the use of AI-based robots; with whom robots are expected to engage; and the ability to educate robots to follow certain norms and to differentiate these norms from others.