How to make robots transparent and explainable for everyday users

This week I received notice that our paper on explainable robots has been accepted for presentation at the AAAI Fall Symposium ‘AI-HRI’, November 9th-11th, 2017, in Arlington, VA, USA. In the paper, Bertram Malle and I present a strategy for making autonomous intelligent systems, such as robots, transparent and explainable for everyday users.


To make Autonomous Intelligent Systems (AIS), such as virtual agents and embodied robots, “explainable,” we need to understand how people respond to such systems and what expectations they have of them. Our thesis is that people will regard most AIS as intentional agents and apply the conceptual framework and psychological mechanisms of human behavior explanation to them. We present a well-supported theory of how people explain human behavior and sketch what it would take to implement the underlying framework of explanation in AIS. The benefits would be considerable: when an AIS can explain its behavior in ways that people find comprehensible, people are more likely to form correct mental models of the system and to calibrate their trust in it.