Smart robotic companions to assist seniors

Funded by a $1 million NSF grant, Brown’s Humanity Centered Robotics Initiative is teaming up with Hasbro to add artificial intelligence capabilities to the Joy for All Companion Pets so they can support seniors with everyday tasks. This includes help with finding lost objects, medication reminders, and other tasks that can become challenging, especially for those who may have mild dementia. Over the next three years, we plan to perform a variety of user studies to understand how the smart pets might best assist older adults, and we will work on developing and integrating a variety of artificial intelligence technologies that meet the needs identified in those studies.
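As a toy illustration of the kind of assistive behavior we have in mind, here is a minimal sketch of a medication-reminder routine. All class, function, and schedule names are hypothetical; the actual capabilities of the companion pets will be shaped by the user studies.

```python
# Hypothetical sketch of a medication-reminder routine for a companion pet.
# Names and the schedule format are illustrative only; the real system's
# design will follow from the user studies described above.
import datetime

class CompanionPet:
    def __init__(self, schedule):
        # schedule: list of (time-of-day, reminder text) pairs
        self.schedule = sorted(schedule)

    def due_reminders(self, now=None):
        """Return the reminders whose time has already passed today."""
        now = now or datetime.datetime.now().time()
        return [text for t, text in self.schedule if t <= now]

pet = CompanionPet([
    (datetime.time(8, 0), "Time for your morning blood-pressure pill."),
    (datetime.time(20, 0), "Please take your evening medication."),
])

for reminder in pet.due_reminders(datetime.time(8, 30)):
    print(reminder)  # e.g., spoken aloud by the pet or sent to a caregiver
```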

The official press release can be found here.


Listed on Robohub’s list of “25 women in robotics you need to know about – 2017”

Today I was honored to find myself among 24 other amazing female roboticists who, according to Robohub, everyone needs to know about in 2017. The list features women from all over the world, from all disciplines related to robotics, in both academia and industry. Each year, Robohub identifies 25 women based on their inspirational stories, their enthusiasm, their fearlessness, and their vision, ambition, and accomplishments. As stated on their website, I too hope to inspire other young women and girls to go into STEM, or robotics specifically.

Robohub’s 25 Women in Robotics – 2017 list

How to make robots transparent and explainable for everyday users

This week I received notice that our paper on explainable robots has been accepted for presentation at the AAAI Fall Symposium ‘AI-HRI’, November 9–11, 2017, in Arlington, VA, USA. In the paper, Bertram Malle and I present a strategy for making autonomous intelligent systems, such as robots, transparent and explainable for everyday users.

Abstract

To make Autonomous Intelligent Systems (AIS), such as virtual agents and embodied robots, “explainable” we need to understand how people respond to such systems and what expectations they have of them. Our thesis is that people will regard most AIS as intentional agents and apply the conceptual framework and psychological mechanisms of human behavior explanation to them. We present a well-supported theory of how people explain human behavior and sketch what it would take to implement the underlying framework of explanation in AIS. The benefits will be considerable: When an AIS is able to explain its behavior in ways that people find comprehensible, people are more likely to form correct mental models of such a system and calibrate their trust in the system.
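To make the idea concrete, here is a minimal, hypothetical sketch of what a reason explanation from a robot could look like, phrased in the folk-psychological vocabulary of desires and beliefs that the paper argues people apply to intentional agents. The class and function names are illustrative and are not the paper’s actual framework.

```python
# Hypothetical sketch of a reason-based explanation generator. The fields
# mirror the folk-psychological concepts (beliefs, desires) discussed in
# the abstract; all names here are illustrative, not from the paper.
from dataclasses import dataclass

@dataclass
class IntentionalAction:
    action: str  # what the robot did
    desire: str  # the goal the action served
    belief: str  # the belief that made the action seem suitable

def explain(act: IntentionalAction) -> str:
    """Render a reason explanation of the kind people give for human behavior."""
    return (f"I {act.action} because I wanted to {act.desire}, "
            f"and I believed that {act.belief}.")

act = IntentionalAction(
    action="put the pill box on the kitchen table",
    desire="remind you to take your morning medication",
    belief="you usually have breakfast at the kitchen table",
)
print(explain(act))
```

An explanation of this form is one way a system could support the comprehensible, trust-calibrating behavior explanations the abstract calls for.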

Best paper award at HRI 2017

Who would have thought

Our paper entitled ‘Why do they refuse to use my robot?: Reasons for non-use derived from a long-term home study’ received the Best Paper Award at HRI 2017 in Vienna, in the HRI User Studies track.

Abstract of the paper

Research on why people refuse or abandon the use of technology in general, and robots specifically, is still scarce. Consequently, the academic understanding of people’s underlying reasons for non-use remains weak. Thus, vital information about the design of these robots, including their acceptance, refusal, or abandonment by their users, is needed. We placed 70 autonomous robots within people’s homes for a period of six months and collected reasons for refusal and abandonment through questionnaires and interviews. Based on our findings, the challenge for robot designers is to create robots that are enjoyable and easy to use, to capture users in the short term, and functionally relevant, to keep those users in the longer term. Understanding the thoughts and motives behind non-use may help to identify obstacles to acceptance, and therefore enable developers to better adapt technological designs to the benefit of the users.

Photo: presenting our paper at HRI 2017.

Other interesting talks at HRI 2017

Obviously, mine was not the only interesting presentation at the conference. Here is a list of my favorites. All papers in the full program can be found on the conference website.

‘Threatening flocks and mindful snowflakes: How group entitativity affects perceptions of robots’ presented by Marlena Fraune.

‘Steps towards participatory design of social robots: Mutual learning with older adults with depression’ presented by Hee Rin Lee.

‘Affective grounding in human-robot interaction’ presented by Malte Jung.

‘Staking the ethical limits of HRI’ presented by Thomas Arnold.


Organizing a Workshop at RO-MAN 2016

This year, I co-organized the workshop ‘Challenges of HRI in Real-World Contexts’, together with Somaya Ben Allouch and Astrid Rosenthal-von der Pütten, during the RO-MAN conference held in New York. The aim of our workshop was to bring together researchers from both industry and academia to discuss best practices as well as pitfalls of HRI research in real-world settings, and to provide the HRI community with guidelines to inform future developments of their robotic systems. The interactive character of our workshop gave room to many extended discussions on the challenges of doing HRI research outside the lab. What methodological issues arise when we try to replicate findings previously established in lab settings? What role does the research context play? And what kind of technological challenges do we face when performing research in the wild? Based on the presentations and the follow-up discussions after each presentation, we listed four topics for our main extended discussion session at the end of the morning.

Take what you can and give nothing back

The first discussion topic was guided by the following question: what can we learn from other fields about research in real-world contexts? One of the main points addressed in this discussion was that the HRI community may rely too much on theories and methods from (social) psychology. The assumption that human-robot interactions (should) follow the guidelines of human-human interaction takes a very prominent place, especially in the way social behaviors of robots are developed and evaluated. However, many other fields have relevant theories, methods, and approaches. For example, the fields of medicine and health research offer alternatives that can be useful when studying (long-term) effects on human behavior. The second main point addressed in this discussion was that ethical issues become more prominent when conducting research in the wild.

Everything inside out

The second discussion topic was guided by the following question: what does it mean to replicate results found in lab studies in real-world contexts? One of the main points addressed was that all researchers should acknowledge that there is a divide between lab studies and research in the wild, not only with regard to the conclusions that can be drawn from the different types of studies, but also concerning the methods one could (or should) apply. But perhaps an even more important divide, in terms of its impact on the ability of the HRI field to move forward, is that the two types of research seem to be conducted by different sub-groups within the field. The HRI community should be more open to different types of methodologies and approaches in order to build a strong research field. This leads to the second main point addressed in this discussion: the necessity of including qualitative data when performing research in the wild. Quantitative data on its own cannot capture all the complex phenomena going on in real-world contexts. A third and final point addressed in this discussion, linked to the previous one, is that almost any type of HRI research currently differs from truly real-world contexts, since robots are not yet fully disseminated within our society, and participants in research are not the same as real end-users. Only when robots become mainstream and people start using them on a regular basis can we begin to unravel the sustained effects of our interactions with these artificial others.

Houston, we have a problem

A third discussion topic revolved around the technological challenges of conducting research in the wild. HRI researchers often encounter technical problems with the utility of the system, but also with regard to the collection of user data. The first main point addressed here was that researchers often choose the best solution given their technical constraints rather than the perfect solution, whether because of limited resources, the infeasibility of the perfect solution, or other reasons. This does not have to be a problem, but its implications for the conclusions drawn from such studies should be properly addressed. Another main point addressed in this discussion was that innovation research often strives for patent registration, which has a negative effect on sharing progress with others in the field.

Long time no see

The fourth topic discussed was long-term research. One of the main points addressed was that the definition of what makes a study long-term should not depend solely on the user’s perspective, but should be linked to the (cognitive) development of the robot as well. Not only do users change their (use) behaviors over time; the robot will also develop over time as it learns to master its necessary skills. Another main point addressed in this discussion was that each user will have his or her own interpretation of the robot, resulting in different (social) roles they assign to it. Even with the same robot, each user will establish his or her own use behaviors: one person may have daily social chit-chats with the robot, while another may use it merely as a tool. A final point addressed in the discussion on long-term research was that we may need to define classifications for both the technology and the user. For example, we could stereotype the technology based on its functionalities, much as we write personas for user groups, and we could stereotype user groups on several aspects, such as whether or not they want to interact socially with robots.

I was awarded the NWO Rubicon grant

This week, I received the news that the Netherlands Organisation for Scientific Research (NWO) has decided to award me a Rubicon grant, a grant that offers talented researchers who have completed their doctorates in the past year the chance to gain experience at a top research institution outside the Netherlands. The grant enables me to continue my research at Brown University’s Humanity Centered Robotics Initiative, supervised by Professor Bertram Malle. The goal of the research project is to investigate whether, when, and how people use theory of mind to explain robot behavior. The results will inform technology design and policy directions for acceptable robot behavior.


Project Summary

Many emerging applications of robotic technology involve humans and robots in assistive, pedagogical, and collaborative interactions. People increasingly interact with robots that display basic features of intentional agency (e.g., approaching, looking, grasping, listening, speaking, fetching). The question therefore arises of how people conceptualize such robot behaviors; in particular, whether they interpret them by way of mental states such as beliefs, desires, and intentions, just as they do for other humans [2]. Such interpretations constitute what is typically referred to as ‘theory of mind’: the ability to infer and reason about other people’s mental states [1], [3]. Because robot designers are expected to optimize such sophisticated human-robot interactions, they need to examine how people interpret robot behaviors and determine whether interactions are more acceptable, satisfying, and effective when humans indeed apply their theory of mind to robot behaviors. However, the conditions and functional benefits of this theory of mind in human-robot interaction (HRI) are currently unknown, and detailed insights into the scope and limits of people’s humanlike treatment of robots are needed.

The present research uses the study of behavior explanations, a core component of human social cognition, as a novel technique for examining people’s readiness to infer mental states from robot behavior and to manage a robot’s social standing. Research on theory of mind in the area of human-robot interaction will yield novel insights into the way people explain robot behaviors as compared to human behaviors. Moreover, the results of this research will inform design requirements for robotic systems to optimize social interactions between robots and humans.

References

[1] Baron-Cohen, S. (1988). Without a theory of mind one cannot participate in a conversation. Cognition, 29, 83–84.

[2] Malle, B. F., & Hodges, S. D. (2005). Other minds: How humans bridge the divide between self and other. New York, NY: Guilford Press.

[3] Premack, D., & Woodruff, G. (1978). Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 1, 515–526.

2 papers accepted at RO-MAN 2016

Both our papers were accepted as full papers for publication and presentation at the RO-MAN conference, to be held in August in New York. Below are the titles and their abstracts.

Anticipating Our Future Robotic Society: The Evaluation of Future Robot Applications From A User’s Perspective

With an expected growth of robots in our future society, we believe that potential implications for robot applications should be addressed. Therefore, we conducted an online questionnaire among the general Dutch population (n = 1162) to map the societal impact of robots by identifying potential benefits and disadvantages of future robot applications. People differentiate between several applications, and more realistic applications were also rated more positively. Overall, people associate a future robot society with the positive consequences of efficiency, decrease of casualties, and convenience, and the negative consequences of job loss and robots’ lack of social skills. Our qualitative approach provides an in-depth evaluation of potential future robot applications, which could prompt guidelines for the development of acceptable robots.

What are People’s Associations of Domestic Robots?: Comparing Implicit and Explicit Measures

The acceptability of robots in homes does not depend solely on the practical benefits they may provide, but also on complex relationships between cognitive, affective, and emotional components of people’s associations with and attitudes towards robots. This important area of research mainly relies on explicit measures, and alternative measures remain rather unexplored. We therefore studied both implicit and explicit associations with robots, and found inconsistencies between the implicit and explicit measures. Our findings speak in favor of the proposition that people are actually more negative about robots than they consciously express. Since associations play an important role when people form attitudes towards robots, we stress caution when researchers and designers rely solely on explicit measures in their research.

The Unwanted Sociability of Robots?

People do not want robots to be social?

One of the studies during my PhD research involved an online survey among the Dutch population (n = 1168) investigating the anticipated acceptance of social robots in domestic environments. An interesting finding of this survey was that, overall, the robot’s social behaviors were not appreciated. The participants in the survey negatively evaluated the sociability and the companionship possibilities of future robot scenarios (i.e., a butler robot, a companion robot, and an information-source robot). Thus, these data suggest that, at least at this stage of social robot diffusion in society, people do not want robots to behave socially. There may be several explanations for these results. One explanation is that people simply do not prefer robots that behave socially and that the development of such robots should not be pursued. The results of the survey reveal that potential future users seem to have a higher intention to use a social robot in their own homes when the robot is less sociable. Additionally, the participants indicated that they believed a social robot could better adapt to their needs when it provided less companionship. This suggests that people do not want robots to behave socially or provide companionship, and that the development of these types of robots appears undesirable.

No need to start panicking (yet)

Luckily for those pursuing the development of social robots, the participants in the survey provided some inconsistent assessments of social robots by indicating that a more sociable robot could better adapt to their needs, i.e., increase its adaptability. Thus, a second explanation for the more negative evaluation of the robot’s social behavior could be that people fear, or are not yet familiar with, social interactions and companionship with social robots. Examining the average scores of the acceptance variables in the survey shows that the participants had very high concerns about their privacy when using a social robot in their own homes. Additionally, the results show that when the participants believed that they were more competent to interact with a social robot and could better trust a social robot, they perceived the robot’s behavior as more sociable. Furthermore, the results indicate that when participants believed that they were more competent in their own skills to properly interact with social robots, they expected to feel less fear when talking to a social robot. And when the participants expected to feel safer in the presence of a social robot, they believed that a social robot could provide more companionship. Privacy concerns may thus play a role, and people may fear the sociability of future social robots that are capable of providing companionship. This fear, then, is caused by people’s privacy concerns, their lack of competence in properly interacting with social robots, their expected fear of talking to robots, or the expected lack of safety when in the presence of a social robot. Above all, the participants indicated that when a social robot is more expensive and increases the user’s status, they expected such a social robot to provide more companionship.

Social interaction with robots violates current social norms

A third explanation for the more negative evaluations of sociability and companionship is that admitting to treating social robots as companions is perceived as not socially desirable by the participants. Just as depending on television for companionship has been characterized as an inappropriate motivation for use (Rubin, 1983), it is possible that using a robot for companionship is not acceptable according to prevailing social norms. Social desirability is the tendency of participants to answer questions in a manner that will be viewed with favor by others (Paulhus, 1991), which causes over-reporting of ‘good’ behavior and under-reporting of ‘bad’ or ‘undesirable’ behavior. From the social sciences, it is known that a social desirability bias may occur in self-reported data, including data collected from questionnaires (Huang, Liao, & Chang, 1998), especially when inquiring about sensitive topics (King & Brunner, 2000). In an online study measuring both people’s implicit and explicit associations with domestic robots (de Graaf, Ben Allouch, & Lutfi, submitted), it was found that these two measures had conflicting outcomes, which may have been due to social desirability. Although people explicitly reported that they have positive associations with robots, the implicit measures revealed that they had negative associations. Furthermore, people’s implicit associations negatively correlated with their attitudes towards robots and positively correlated with their anxiety towards robots. Yet, people’s explicit associations did not significantly correlate with their attitudes towards robots and negatively correlated with anxiety towards robots.
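To illustrate the kind of analysis involved, here is a minimal sketch of how implicit scores (an IAT-style D-score computed from response latencies) might be correlated with explicit self-report measures. The data are synthetic and the simplified scoring is an assumption on my part; the actual study may use a different procedure (e.g., the improved D-algorithm).

```python
# Hypothetical sketch: correlating implicit (IAT-style) and explicit measures.
# All data are synthetic and the scoring is simplified for illustration.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 200  # hypothetical sample size

# Mean response latencies (ms) per participant in two pairing blocks.
rt_robot_unpleasant = rng.normal(750, 90, n)
rt_robot_pleasant = rng.normal(820, 110, n)
pooled_sd = np.std(np.concatenate([rt_robot_unpleasant, rt_robot_pleasant]))

# Simplified D-score: under this block labeling, higher values mean
# participants were faster to pair robots with 'unpleasant' words,
# i.e., a more negative implicit association.
d_score = (rt_robot_pleasant - rt_robot_unpleasant) / pooled_sd

# Explicit self-reports, e.g., on 5-point scales (also synthetic here).
explicit_attitude = rng.normal(3.8, 0.6, n)
anxiety = rng.normal(2.2, 0.7, n)

for label, measure in [("attitude", explicit_attitude), ("anxiety", anxiety)]:
    r, p = pearsonr(d_score, measure)
    print(f"implicit D-score vs. explicit {label}: r = {r:.2f}, p = {p:.3f}")
```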

Based on these combined results, de Graaf, Ben Allouch, and Lutfi (submitted) concluded that people implicitly hold opinions about robots that differ from what they are willing to reveal explicitly. The difference between people’s implicit and explicit associations with robots may arise because people feel social pressure, at least when completing a scientific survey on the topic, to be positive towards robot technology when in fact they are not. The study on implicit and explicit associations with robots was also an online study without any real-world human-robot interaction. Future research should investigate the predictive power of implicit and explicit measures in relation to actual behavior in human-robot interaction scenarios in order to draw further conclusions about the explanatory power of implicit and explicit associations with robots. Observational methods may yield different findings because they are less sensitive to social desirability. Such studies will become increasingly important as robotic technology advances, becomes widespread in society, and is employed in home environments over the long run.

Social interaction with robots is a process of familiarization

To further explore why the participants in the online acceptance survey indicated that they did not want robots to behave socially or provide companionship, we must turn to other methods, such as observations and interviews, to determine how people actually interact with social robots. In contrast to the results of the online survey, the results from my long-term home study indicate that people do behave socially with robots in their own homes despite their skepticism about perceiving robots as social actors and companions. In this long-term home study, I deployed 70 Karotz robots in people’s homes (n = 168) and collected both qualitative and quantitative data about their acceptance process over a period of up to six months. When confronted with an actual robot in their private spaces, the participants engaged in social interaction with the robot: they talked to it, gave it a name, and interpreted its behavior in a social way. Furthermore, some participants indicated that they would appreciate it if future robots were able to interact more socially with their users, and some attempted to increase their social interactions with the Karotz robot used in this study. However, not all participants appreciated the robot’s social behavior. Some experienced feelings of uncanniness when the robot initiated unsolicited conversations, and those participants reduced the social features of the robot to a minimum. Taking the findings from the acceptance survey and the long-term home study together, the social behavior of robots still has a long way to go with respect to its proper development and its full acceptance by potential future users.

My ICSR paper “What makes robots social?”

My full paper called “What makes robots social?: A user’s perspective on characteristics for social human-robot interaction” has been accepted as a poster presentation at ICSR 2015 in Paris. Social robots are supposed to interact with us in a “humanlike way”. What does that mean? Additionally, we all know that social robots are still far from ideal social behavior. So how can we make better social robots? Moreover, social robots will always be programmed machines. Thus, can robots actually be social?

Our paper provides a set of social behaviors and specific features that social robots should possess, based on users’ experiences in a longitudinal home study; it discusses whether robots can actually be social, and presents some recommendations for building better social robots.

Addressing the Ethics of Human-Robot Relationships at the New Friends Conference

This week, I will attend the New Friends Conference on Social Robots in Therapy and Education in Almere, The Netherlands, where I will present my first thoughts on the ethics of human-robot relationships. A short paper will be available in the conference proceedings, and I was asked to submit a full paper for a special issue in the International Journal of Social Robotics (which was eventually accepted for publication).

Abstract of my paper “The Ethics of Human-Robot Relationships”

Currently, human-robot interactions are constructed according to the rules of human-human interaction, inviting users to interact socially with robots. Is there something morally wrong with deceiving humans into thinking they can foster meaningful interactions with a technological object? Or is this just a logical next step in our technological world? Would it be possible for people to treat robots as companions? What implications does this have for future generations, who will grow up in the everyday presence of robots? Is companionship between humans and robots desirable? This paper fosters a discussion on the ethical considerations of human-robot relationships.