For the project “How people use theory of mind to explain robot behavior”, I am investigating whether people infer mental states such as beliefs, desires, and intentions to explain robotic agents’ behavior, just as they do for humans. As a first step in our research program, we ran a series of pretests. The goal was to create a pool of stimulus behaviors that are judged similarly on the behavior properties that influence people’s explanations, regardless of whether those behaviors were performed by humans or by robots. We succeeded in developing a stimulus behavior pool that can be used to rigorously examine whether people explain robot and human behaviors in similar or distinct ways. However, in the course of identifying this robust set of behaviors, we also discovered several behaviors that elicited markedly discrepant judgments on these properties (i.e., intentionality, surprisingness, and desirability). These results were published as a Late-Breaking Report at the HRI 2018 conference in Chicago. The full set of behavior stimuli can be found here.
Abstract of the paper
The emergence of robots in everyday life raises the question of how people explain the behavior of robots—in particular, whether they explain robot behavior the same way they explain human behavior. However, before we can examine whether people’s explanations differ for human and robot agents, we need to establish whether people judge basic properties of behavior similarly regardless of whether the behavior is performed by a human or a robot. We asked 239 participants to rate 78 behaviors on the properties of intentionality, surprisingness, and desirability. While establishing a pool of robust stimulus behaviors (whose properties are judged similarly for humans and robots), we detected several behaviors that elicited markedly discrepant judgments for humans and robots. Such discrepancies may result from norms and stereotypes people apply to humans but not robots, and they may present challenges for human-robot interactions.