Organizing a Workshop at RO-MAN 2016

This year, I co-organized the workshop ‘Challenges of HRI in Real-World Contexts’ together with Somaya Ben Allouch and Astrid Rosenthal-von der Pütten during the RO-MAN conference held in New York. The aim of our workshop was to bring together researchers from both industry and academia to discuss best practices as well as pitfalls of HRI research in real-world settings, and to provide the HRI community with guidelines to inform future developments of their robotic systems. The interactive character of our workshop gave room to many extended discussions on the challenges of doing HRI research outside the lab. What methodological issues arise when we try to replicate findings earlier established in lab settings? What role does the research context play? And what kind of technological challenges do we face when performing research in the wild? Based on the presentations and the follow-up discussions after each presentation, we listed four topics for our main extended discussion session at the end of the morning.

Take what you can and give nothing back

The first discussion topic was guided by the following question: What can we learn from other fields about research in real-world contexts? One of the main points addressed in the discussion on this topic is that the HRI community may rely too much on theories and methods from (social) psychology. The assumption that human-robot interactions (should) follow the guidelines of human-human interaction takes a very prominent place, especially in the way social behaviors of robots are developed and evaluated. However, many other fields have relevant theories, methods and approaches. For example, the fields of medicine and health research offer alternatives that can be useful when studying (long-term) effects on human behavior. The second main point addressed in this discussion was that ethical issues become more prominent when conducting research in the wild.

Everything inside out

The second discussion topic was guided by the following question: What does it mean to replicate results found in lab studies in real-world contexts? One of the main points addressed was that all researchers should acknowledge that there is a divide between lab studies and research in the wild, not only with regard to the conclusions that can be drawn from the different types of studies, but also concerning the methods one could (or should) apply. But perhaps an even more important divide, in terms of its impact on the ability of the HRI field to move forward, is that the two types of research seem to be conducted by different sub-groups within the field. The HRI community should be more open to different types of methodologies and approaches in order to build a strong research field. This leads to the second main topic addressed in this discussion, which involves the necessity of including qualitative data when performing research in the wild. Quantitative data on its own cannot capture all the complex phenomena occurring in real-world contexts. A third and final point addressed in this discussion, linked to the previous one, is that almost any type of HRI research currently differs from the ultimate real-world context, since robots are not yet fully disseminated within our society, and participants in research are not the same as real end-users. Only when robots become mainstream and people start using robots on a regular basis can we begin to unravel the sustained effects of our interactions with these artificial others.

Houston, we have a problem

A third discussion topic revolved around the technological challenges of conducting research in the wild. HRI researchers often encounter technical problems with the utility of the system, but also with regard to the collection of user data. The first main point addressed here is that researchers often choose the best solution given their technical constraints rather than the perfect solution, as a result of limited resources, the infeasibility of the perfect solution, or other practical reasons. This does not have to be a problem, but its implications for the conclusions drawn from such studies should be properly addressed. Another main point addressed in this discussion is that innovation research often strives for patent registration, which has a negative effect on sharing progress with others in the field.

Long time no see

The fourth topic discussed was long-term research. One of the main points addressed was that the definition of what makes a study long-term should not depend solely on the user’s perspective, but should be linked to the (cognitive) development of the robot as well. Not only do users change their (use) behaviors over time; the robot will also develop over time as it learns to master its necessary skills. Another main point addressed in this discussion was that each user will have his or her own interpretation of the robot, resulting in different (social) roles they assign to it. Even with the same robot, each user will establish his or her own use behaviors. One person may have daily social chit-chats with the robot, while another may use it merely as a tool. A final point addressed in the discussion on long-term research is that we may need to define classifications for both the technology and the user. For example, we could stereotype the technology based on its functionalities, in a similar way to how we write personas for each user group, and we could stereotype user groups on several aspects, such as (not) wanting to interact socially with robots.

I was awarded the NWO Rubicon grant

This week, I received the news that the Netherlands Organization for Scientific Research (NWO) has decided to award me a Rubicon grant, a grant that offers talented researchers who have completed their doctorates in the past year the chance to gain experience at a top research institution outside the Netherlands. The grant enables me to continue my research at Brown University’s Humanity Centered Robotics Initiative, supervised by Professor Bertram Malle. The goal of the research project is to investigate if, when and how people use theory of mind to explain robot behavior. The results will contribute to technology design and policy direction of acceptable robot behavior.


Project Summary

Many emerging applications of robotic technology involve humans and robots in assistive, pedagogical, and collaborative interactions. People interact increasingly with robots that display basic features of intentional agency (e.g., approaching, looking, grasping, listening, speaking, fetching). The question therefore arises how people conceptualize such robot behaviors, in particular, whether they interpret them by way of mental states such as beliefs, desires, intentions, and so on, just as they do for other humans [2]. Such interpretations constitute what is typically referred to as ‘theory of mind’, which is the ability to infer and reason about other people’s mental states [1], [3]. Because robot designers are expected to optimize such sophisticated human-robot interactions, they need to examine how people interpret robot behaviors and determine whether interactions are more acceptable, satisfying, and effective when humans indeed apply their theory of mind to robot behaviors. However, the conditions and functional benefits of this theory of mind in human-robot interactions (HRI) are currently unknown, and detailed insights into the scope and limits of people’s humanlike treatment of robots are needed.

The present research uses the study of behavior explanations —a core component of human social cognition— as a novel technique for examining people’s readiness to infer mental states from robot behavior and to manage a robot’s social standing. Research on theory of mind in the area of human-robot interaction will yield novel insights into the way people explain robot behaviors as compared to human behaviors. Moreover, the results of this research will inform design requirements for robotic systems to optimize social interactions between robots and humans.


[1] Baron-Cohen, S. (1988). Without a theory of mind one cannot participate in a conversation. Cognition, 29, 83–84.

[2] Malle, B.F., and Hodges, S.D. (2005). Other minds: How humans bridge the divide between self and other. Guilford Press: New York, NY, USA.

[3] Premack, D., & Woodruff, G. (1978). Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 1, 515–526.

2 papers accepted at RO-MAN 2016

Both our papers were accepted as full papers for publication and presentation at the RO-MAN conference to be held in August in New York. Below are the titles and their abstracts.

Anticipating Our Future Robotic Society: The Evaluation of Future Robot Applications From A User’s Perspective

With an expected growth of robots in our future society, we believe that potential implications for robot applications should be addressed. Therefore, we conducted an online questionnaire among the general Dutch population (n= 1162) to map the societal impact of robots by identifying potential benefits and disadvantages of future robot applications. People differentiate between several applications, and more realistic applications were also rated more positively. Overall, people associate a future robot society with the positive consequences of efficiency, decrease of casualties, and convenience, and the negative consequences of job loss and robots’ lack of social skills. Our qualitative approach provides an in-depth evaluation of potential future robot applications, which could prompt guidelines for the development of acceptable robots.

What are People’s Associations of Domestic Robots?: Comparing Implicit and Explicit Measures

The acceptability of robots in homes does not depend solely on the practical benefits they may provide, but also on complex relationships between the cognitive, affective and emotional components of people’s associations with and attitudes towards robots. This important area of research relies mainly on explicit measures, while alternative measures remain rather unexplored. We therefore studied both implicit and explicit associations with robots, and found inconsistencies between the implicit and explicit measures. Our findings speak in favor of the proposition that people are actually more negative about robots than they consciously express. Since associations play an important role when people form attitudes towards robots, we stress caution when researchers and designers rely solely on explicit measures in their research.

The Unwanted Sociability of Robots?

People do not want robots to be social?

One of the studies during my PhD research involved an online survey among the Dutch population (n = 1168) investigating the anticipated acceptance of social robots in domestic environments. An interesting finding of this survey was that, overall, the robot’s social behaviors were not appreciated. The participants in the survey negatively evaluated the sociability and the companionship possibilities of future robot scenarios (i.e., a butler, a companion, and an information-source robot). Thus, these data suggest that, at least at this stage of social robot diffusion in society, people do not want robots to behave socially. There may be several explanations for these results. One explanation is that people simply do not prefer robots that behave socially and that the development of such robots should not be pursued. The results of the survey reveal that potential future users seem to have a higher intention to use a social robot in their own homes when the robot is less sociable. Additionally, the participants indicated that they believed that a social robot could better adapt to their needs when it provided less companionship. In this light, it could be argued that people do not want robots to behave socially or provide companionship, and that the development of these types of robots appears undesirable.

No need to start panicking (yet)

Luckily for those pursuing the development of social robots, the participants in the survey provided some inconsistent assessments of social robots by indicating that a more sociable robot could better adapt to their needs, i.e., increase its adaptability. Thus, a second explanation for the more negative evaluation of the robot’s social behavior could be that people fear, or are not yet familiar with, social interactions and companionship with social robots. Examining the average scores of the acceptance variables in the survey shows that the participants had very high concerns about their privacy when using a social robot in their own homes. Additionally, the results show that when the participants believed that they were more competent to interact with a social robot and could better trust a social robot, they perceived the robot’s behavior as more sociable. Furthermore, the results indicate that when participants believed that they were more competent in their own skills to properly interact with social robots, they expected to feel less fear when talking to a social robot. And when the participants expected to feel safer in the presence of a social robot, they believed that a social robot could provide more companionship. Privacy concerns may thus play a role, and people may fear the sociability of future social robots that are capable of providing companionship. This fear, then, may be caused by people’s privacy concerns, their lack of competence in properly interacting with social robots, their expected fear of talking to robots, or the expected lack of safety in the presence of a social robot. Above all, the participants indicated that when a social robot is more expensive and increases the user’s status, they expect such a robot to provide more companionship.

Social interaction with robots violates current social norms

A third explanation for the more negative evaluations of sociability and companionship is that admitting to treating social robots as companions is perceived as not socially desirable by the participants. Just as depending on television for companionship has been characterized as an inappropriate motivation for use (Rubin, 1983), it is possible that using a robot for companionship is not acceptable according to prevailing social norms. Social desirability is the tendency of participants to answer questions in a manner that will be viewed with favor by others (Paulhus, 1991), which causes over-reporting of ‘good’ behavior and under-reporting of ‘bad’ or ‘undesirable’ behavior. From the social sciences, it is known that a social desirability bias may occur in self-reported data, including data collected from questionnaires (Huang, Liao, & Chang, 1998), especially when inquiring about sensitive topics (King & Brunner, 2000). In an online study measuring both people’s implicit and explicit associations with domestic robots (de Graaf, Ben Allouch, & Lutfi, submitted), it was found that these two measures had conflicting outcomes, which may have been due to social desirability. Although people explicitly reported that they have positive associations with robots, the implicit measures revealed that they had negative associations. Furthermore, people’s implicit associations negatively correlated with their attitudes towards robots and positively correlated with their anxiety towards robots. Yet, people’s explicit associations did not significantly correlate with their attitudes towards robots and negatively correlated with anxiety towards robots.

Based on these combined results, de Graaf, Ben Allouch, and Lutfi (submitted) concluded that people implicitly hold opinions about robots that differ from what they are willing to reveal explicitly. The difference between people’s implicit and explicit associations with robots may arise because people feel a social pressure (at least when completing a scientific survey on the topic) to be positive towards robot technology when, in fact, they are not. The study on implicit and explicit associations with robots was also an online-based study without any real-world human-robot interactions. Future research should investigate the predictive power of implicit and explicit measures in relation to actual behavior in human-robot interaction scenarios to draw further conclusions concerning the explanatory power of implicit and explicit associations with robots. Observational methods may yield different findings because they are less sensitive to social desirability. Such studies will become increasingly important as robotic technology advances, becomes widespread in society and is employed in home environments over the long run.
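The scoring pipeline of the de Graaf, Ben Allouch, and Lutfi study is not reproduced here, but the logic of relating an implicit reaction-time measure to an explicit rating can be sketched. The snippet below is a minimal, hypothetical illustration: `iat_d_score` follows the general shape of a Greenwald-style IAT D-score (latency difference between incompatible and compatible blocks, scaled by the pooled standard deviation), and `pearson_r` correlates the resulting per-participant scores with explicit attitude ratings. All function names and data values are invented for illustration.

```python
from statistics import mean, stdev

def iat_d_score(compat_rts, incompat_rts):
    """Simplified IAT D-score: mean latency difference between the
    incompatible and compatible blocks, divided by the pooled SD of
    all trials. A positive D means slower (more effortful) responses
    in the incompatible pairing, i.e. a stronger implicit association
    in the compatible direction."""
    pooled_sd = stdev(compat_rts + incompat_rts)
    return (mean(incompat_rts) - mean(compat_rts)) / pooled_sd

def pearson_r(xs, ys):
    """Plain Pearson correlation between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-participant D-scores and explicit attitude ratings
# (5-point scale, higher = more positive).
d_scores = [0.41, 0.28, 0.55, 0.10, 0.62]
explicit = [4.2, 4.5, 3.8, 4.9, 4.1]

print(pearson_r(d_scores, explicit))
```

A negative correlation here would mirror the pattern described above: stronger negative implicit associations accompanying more positive explicit reports.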

Social interaction with robots is a process of familiarization

To further explore why the participants in the online acceptance survey indicated that they did not want robots to behave socially or provide companionship, we must turn to other methods, such as observations and interviews, to determine how people actually interact with social robots. In contrast to the results of the online survey, the results from my long-term home study indicate that people do in fact behave socially with robots in their own homes, despite their skepticism about perceiving robots as social actors and companions. In this long-term home study, I deployed 70 Karotz robots into people’s homes (n = 168) and collected both qualitative and quantitative data about their acceptance process over a period of up to six months. When confronted with an actual robot in their private spaces, the participants engaged in social interaction with the robot, talked to it, gave it a name and interpreted the robot’s behavior in a social way. Furthermore, some participants indicated that they would appreciate it if future robots were able to interact more socially with their users, and some attempted to increase social interactions with the Karotz robot used in this study. However, not all participants appreciated the robot’s social behavior. Some experienced feelings of uncanniness when the robot initiated unsolicited conversations, and those participants reduced the social features of the robot to a minimum. Taking the findings from the acceptance survey and the long-term home study together, the social behavior of robots still has a long way to go with respect to its proper development and its full social acceptance by potential future users.

My ICSR paper “What makes robots social?”

My full paper called “What makes robots social?: A user’s perspective on characteristics for social human-robot interaction” has been accepted as a poster presentation at ICSR 2015, Paris. Social robots are supposed to interact with us in a “humanlike way”. What does that mean? Additionally, we all know that social robots are still far away from ideal social behavior. So how can we make better social robots? Moreover, social robots will always be programmed machines. Thus, can robots actually be social?

Our paper provides a set of social behaviors and specific features social robots should possess based on users’ experiences in a longitudinal home study, discusses whether robots can actually be social, and presents some recommendations for building better social robots.

Addressing the Ethics of Human-Robot Relationships at the New Friends Conference

This week, I will attend the New Friends Conference on Social Robots in Therapy and Education in Almere, The Netherlands. I will present my first thoughts on the ethics of human-robot relationships. A short paper will be available in the conference proceedings, and I was asked to submit a full paper for a special issue to be published in the International Journal of Social Robotics (which was eventually accepted for publication).

Abstract of my paper “The Ethics of Human-Robot Relationships”

Currently, human-robot interactions are constructed according to the rules of human-human interactions, inviting users to interact socially with robots. Is there something morally wrong with deceiving humans into thinking they can foster meaningful interactions with a technological object? Or is this just a logical next step in our technological world? Would it be possible for people to treat robots as companions? What implications does this have for future generations, who will grow up in the everyday presence of robots? Is companionship between humans and robots desirable? This paper fosters a discussion on the ethical considerations of human-robot relationships.

Attending RO-MAN 2015, Kobe, Japan


This year’s RO-MAN conference took place in Kobe, Japan. I felt very excited about this trip, because when else would one get the opportunity to travel to such an interesting country in the far, far East of the world. And an interesting country it was! The acceptance rate at RO-MAN this year was extremely high (84%), which resulted in 4 parallel sessions, and, I believe, had an effect on the overall quality of the research presented.

On the first day, I attended the workshop “From temporal interactions to sustainable relationships”, which included some very interesting talks. I especially liked the talk by Masashi Kasaki, entitled “Philosophical reflections on trustworthy machines: What is it to trust machines?”. In his talk, he provided an overview of trust in philosophy and related these analyses to trustworthy machines. In addition, he discussed what moral ramifications can enhance the trustworthiness of machines in a future robot society.

During the main conference, there were two talks that stayed in my memory. The first was presented by Christoph Bartneck: “Meta-analysis of the usage of the Godspeed questionnaire series”. It struck me that, due to the poor way analyses and results are reported in the HRI community, it was impossible to perform a proper meta-analysis. This means that we, as a research community, cannot build upon each other’s findings. This actually inspired me to write a paper on what is “wrong” with current HRI research and how we can improve our research methods, data analyses and ways of reporting our results.
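Bartneck’s point about reporting can be made concrete: a meta-analysis needs each paper to report group means, standard deviations and sample sizes, because those are exactly the inputs to a standardized effect size and its inverse-variance pooling. The sketch below, with invented numbers for two hypothetical Godspeed-subscale studies, illustrates standard fixed-effect meta-analysis; it is not Bartneck’s actual analysis, and all names and values are assumptions for illustration.

```python
import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference computed from the summary
    statistics (means, SDs, ns) a paper must report to be usable
    in a meta-analysis."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def d_variance(d, n1, n2):
    """Approximate sampling variance of Cohen's d."""
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

def fixed_effect_pool(effects_and_variances):
    """Inverse-variance weighted pooled effect (fixed-effect model)."""
    weights = [1.0 / v for _, v in effects_and_variances]
    total = sum(w * d for (d, _), w in zip(effects_and_variances, weights))
    return total / sum(weights)

# Two hypothetical studies, each reporting (m1, sd1, n1, m2, sd2, n2).
studies = [(3.8, 0.9, 30, 3.2, 1.0, 30),
           (3.5, 1.1, 25, 3.3, 1.0, 25)]
ds = [cohens_d(*s) for s in studies]
vs = [d_variance(d, s[2], s[5]) for d, s in zip(ds, studies)]
print(fixed_effect_pool(list(zip(ds, vs))))
```

If a paper reports only a p-value or a bar chart without SDs, none of these quantities can be recovered, which is exactly why such studies cannot be aggregated.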

The second talk that inspired me was the one by Bertram Malle: “When will people regard robots as morally competent social partners?”. In his talk, he offered clear guidelines for social moral behavior for robots and discussed which elements of moral competence robots need for people to treat them as moral agents.

I successfully defended my thesis on June 26th

I successfully defended my thesis entitled ‘Living with robots: Investigating the user acceptance of social robots in domestic environments’ on June 26th. Below is a short summary of the content of my thesis.

Over the most recent decades, the field of social robotics has advanced rapidly. There are a growing number of different types of robots, and their roles within society are expanding. This dissertation has argued that investigating the long-term acceptance of social robots in home environments is necessary for the successful diffusion of these types of robots within society.

User acceptance

The findings of this dissertation indicate that usefulness is a requisite for social robot acceptance and that certain additional important acceptance variables may further explain why people continue to use a social robot in their own homes. These additional acceptance variables show that the acceptance of a social robot for domestic use increases when future users believe that they possess the necessary skills to use a social robot, when they perceive that having such a robot enhances their status, and when they expect that such a robot provides more enjoyable interactions, behaves less sociably, and causes fewer privacy concerns. However, when examining the long-term use of social robots in home environments, it appears that the importance of the acceptance variables in explaining social robot acceptance changes over time, shifting from control beliefs to attitudinal beliefs. It is believed that the importance of the acceptance variables depends on the development stage in which the technology is located (Peters, 2011). When people gain experience with a social robot, different acceptance variables explain their intention to continue using it than those that explained their initial adoption.

Human-robot relationships

Concerning human-robot relationships, the studies presented in this dissertation indicate that people are initially reluctant to build a relationship with a robot and deny the possibility that such a relationship will occur for them. However, after people have adopted a social robot and have begun to use it in their own homes, some acknowledge that they have established some type of relationship with the robot. Yet not all people seem to appreciate a robot’s social behavior; people remain unfamiliar with the possibilities of human-robot relationships, and it is the actual use of and interaction with social robots that reveals what types of relationships people are willing to establish with these robots.

The findings of this dissertation may help both researchers and developers of social robots to further develop an integrated theory or model of social robot acceptance that can describe and explain this acceptance in more real-world contexts, such as the home.

A full PDF version of my thesis can be accessed through our university library.

My post-doc proposal on the ethics of human-robot relationships has been accepted

After the recent merger of our faculty of Behavioral Sciences with the faculty of Management and Public Administration, the University of Twente initiated the Tech4People initiative to stimulate collaborations between the existing departments. After two rounds of reviews, my post-doc proposal on the ethics of human-robot relationships was one of those granted. Below is a summary of the submitted proposal.

Human-Robot Relationships and the Good Life

Demographic projections for our Western societies estimate that by 2060, 30% of the EU population will be over 65 years old (European Commission, 2012). This results in fewer young people in the labor market to support the health care system and an increasing need for the care of older people. With the prices of robot manufacture falling, robots are believed to be a solution to this problem by following and monitoring elderly people who live alone and performing caregiving tasks. Today, robots are increasingly built to interact socially with humans. These socially interactive robots are perceived by their users as social entities, and users tend to assign humanlike characteristics to them (Kerepesi et al., 2006). As lonely people more easily attribute social characteristics to robots (Eyssel & Reich, 2013), the permanent presence of socially interactive robots in the homes of older persons living alone might serve as fertile ground for human-robot relationships. Despite the pervasive role of technology in modern society and contemporary life, very little research on well-being has focused on technology (Brey, 2012). This proposal will investigate if and how companionship with socially interactive robots affects the psychological well-being of elderly people.

The ‘good life’ refers to the physical and psychological well-being of people. Although traditionally being a philosophical topic, in recent decades well-being, or positive psychology, has become an important concern in psychology (Brey, 2012). Positive psychology focuses on finding and nurturing talent and making normal life more fulfilling (Seligman & Csikszentmihalyi, 2000). Research in positive psychology aims at developing positive practices that enhance human well-being, and focuses on supporting positive experiences, positive individual qualities, or positive social processes and institutions.

Research on human-robot interaction provides evidence that people can establish some kind of emotional or social bond with socially interactive robots (de Graaf, Ben Allouch, & Klamer, in press), especially when these robots are perceived as advanced technologies (de Graaf & Ben Allouch, 2014). People might even benefit from these relationships with robots in particular situations (Broadbent et al., 2009). Socially interactive robots are robots for which social interaction plays a key role in their interactions with users; they aim to exhibit the following characteristics: social learning and imitation, dialog, learning and developing social competencies, exhibiting a distinctive personality, and establishing and maintaining social relationships (Fong et al., 2003). People interact with robots similarly to how they interact with other people (Kerepesi et al., 2006). In addition, the fundamental human motivation of the ‘need to belong’ not only induces the desire for meaningful and enduring relationships with other humans (Baumeister & Leary, 1995), but also increases the probability that people will form emotional attachments to artificial beings as well (Krämer, Eimler, von der Pütten, & Payr, 2011). This bonding with non-human objects is most likely to be strengthened when these objects possess lifelike abilities and are endowed with humanlike capacities, such as socially interactive robots.

Ethical concerns related to socially interactive robots, especially those developed for care settings, are increasingly gaining attention (Sharkey & Sharkey, 2012). The perception of life largely depends on the observation of intelligent behavior, and the more intelligent a being is, the more rights we tend to grant it (Bartneck et al., 2007). People, thus, may fall prey to accepting robotic companionship without the moral responsibilities that real, reciprocal relationships involve (Kahn et al., 2013). Therefore, it is argued that any benefits gained from interactions with robots are the consequences of deceiving people into thinking they could establish relationships with robots. People might feel happy when interacting with robots and forming relationships with them. However, some scholars (Sparrow & Sparrow, 2006) argue that this is a delusion, because those people mistakenly believe that robots have properties which they do not, and a failure to apprehend the world accurately is a moral failure. This calls for an evaluation of how human-robot relationships affect the psychological well-being of elderly people when they bond with socially interactive robots.

New published article: Sharing a Life with Harvey: Exploring the acceptance of and relationship building with a social robot

My new article is in press at the journal of Computers in Human Behavior.


Social robots will become ubiquitous in our everyday environments. These robots could potentially extend life expectancy and improve the health and quality of life of an aging population. A long-term explorative study was conducted by installing a social robot for health promotion in older people’s own homes. Content analysis of interviews provided an in-depth understanding of the factors that influence the acceptance of, and relationship-building with, social robots in domestic environments. The permanent presence of a robot in users’ own homes reveals the vital challenges social robots must overcome to be successfully accepted by their users. These acceptance challenges are unlikely to be revealed in one-day laboratory human-robot interaction studies or even in multiple observations of short interactions between humans and robots.