Our group covered in I/O Magazine

Our group, the Human Centered Computing group led by Judith Masthoff, was covered in I/O Magazine with some nice pictures of us and our labs.

The research of our group focuses on computing to improve people’s wellbeing and adopts a user perspective when studying the interactions between systems and humans. Our computing research is always inspired by what people do, what they want, what is good for them and what they think of the technology. Read more about our group in the article.

My 1st Year as Faculty – The Highlights

As you may have noticed, it has been somewhat quiet on my social media outlets. That is because I was busy surviving my first year as Assistant Professor at Utrecht University. The first few months I was nearly drowning, the second half of the year I managed to keep my head above water at all times, and –now that I have also survived the first semester of my second year– I finally feel like I can safely remove my floaties and swim on my own. After all this craziness, I would still like to update you on the milestones of my past year.

 

Teaching

Most of my time adjusting to my new position was spent on getting a grasp of how teaching works at Utrecht University and getting acquainted with the courses. I am the responsible teacher for the bachelor course Human, Technology and ICT, which addresses the societal impact of information technology and has approximately 160 students enrolled. Given that the course has received much criticism in past years, my challenge is to take it to a higher level next academic year. The other exciting teaching task was the start of the new HCI master's program last Fall and the development of a whole new course from scratch. I am the responsible teacher for the master course Cognitive and Social Psychology for HCI, which introduces several main aspects of the field of psychology as relevant for human interactions with technological systems.

 

Academic Service

At Utrecht University, I teach in the program of Information Science, a program that is currently undergoing structural changes to adapt it to contemporary advancements in our technological society. I have contributed to this endeavor in multiple ways, most notably by rethinking the central vision of the Information Science program, advising in particular that students be equipped with a stronger background in experimental design and statistics and a deeper understanding of the ethical and societal implications of IS systems. That document served as the foundation for multiple working groups set up to further improve the program; I am currently involved in the one ensuring a consistent line of ethics throughout the bachelor program.

 

Other highlights under this header happened outside my university. Since early 2019, I have been one of the Associate Editors of the ACM Transactions on Human-Robot Interaction, the main journal of our field. Additionally, I am very excited to announce that I was elected as an At Large member of the HRI Steering Committee, a group of prominent members of the international research community. With my background in behavioral psychology and communication science, I aim to be the voice of people from the social sciences –a group of scientists whose involvement in the HRI community has become of crucial value, given the progressively ubiquitous presence of robotic applications in our everyday lives and their inevitable transformation of our societies. To further mature HRI as a field, my goal is to encourage the HRI community to apply the rigorous research methods and data analyses commonly practiced in the social sciences, and to include a wider range of theoretical and methodological approaches that better reflect the interdisciplinary nature of HRI research.

 

Starting my own research lab

In support of the further advancement of my research line, I was able to buy my very first robot, SoftBank's Pepper robot. I want to give it a gender-neutral name but cannot seem to land on one – suggestions? Currently, this robot is still residing in my office, but the construction work for my research lab should be finished soon. My lab will be called the "REsponses To RObots" (ReTRo) Lab, and it will document the affective (emotions), behavioral (anticipated and actual), and cognitive (beliefs) components of people's responses to robots. More news coming soon(ish).

And last Fall, my department allowed me to recruit a PhD student, Anouk Neerincx, who started in November 2019. In the next couple of months, she will have the opportunity to shape her own thesis topic within the scope of social human-robot interaction, taking a (social) psychological approach.

 

Awards and Grants

After the honor of being listed as one of Robohub's 25 women in robotics in 2017, it was my pleasure to be nominated for the VIVA400 as TechTalent and to be named one of the Inspiring 50 Netherlands in 2019. The VIVA400 is a list of 400 women who inspire others with their goals and achievements. Inspiring 50 is a non-profit that aims to increase diversity in tech by making female role models in tech more visible, thereby encouraging more girls and women to go into technology.

Within my university as well, I am taking part in initiatives to promote female role models in science, draw attention to the problems women face in their academic careers, and advise policy makers on how to retain female talent in academia. All of this happens through the WiCS (Women in Computing and Information Science) community, which won the Diversity & Inclusion award from Utrecht University.

Regarding grants, my year has not been that successful so far. I have twice submitted a personal grant proposal to the Dutch Research Council (NWO), the second time not even making it into the next round. I also unsuccessfully submitted an EU grant last Spring. My current hopes rest on the NWA grant, which we are about to submit later this month together with a highly multidisciplinary team of renowned researchers from seven Dutch universities and two universities of applied sciences, covering the entire country and building on many existing collaborations. Even more importantly, the applicants form a balanced mix of highly qualified senior researchers and very talented junior researchers, all with excellent track records. So, fingers crossed for this one.

 

Publications

In March 2019, I presented our research on robot behavior explanations at the HRI 2019 conference. The research documents how people's explanations of robot behavior resemble and differ from corresponding explanations of human behavior. We found that people use the same conceptual toolbox of behavior explanations for both human and robot agents, robustly indicating inferences of intentionality and mind. But people applied specific explanatory tools at somewhat different rates and in somewhat different ways for robots, revealing certain preferences and expectations people hold when thinking about robots. With these findings, we not only gain insight into people's perceptions of robots as intentional agents (and likely future members of human communities) but also offer a template for how robots could explain their own behavior in ways that are understandable to people.

I was invited to submit an article to the “Journal of Human Factors” in The Netherlands discussing what is needed to successfully introduce robots to the workplace. Given that my former colleague Suzanne Janssen recently received a personal grant to study employee motivations to work with robots, I contacted her to join me in this endeavor. The article outlines several steps of the adoption and appropriation process of robots in the workplace.

 

Workshops

At the HRI 2019 conference, I co-organized two workshops. One aimed to introduce communication science as a relevant field for better understanding the interactions between humans and machines; the other taught the HRI community critical thinking techniques to address societal issues related to robot design and applications.

 

My response to the EU’s intentions to grant robots ‘Electronic Personhood’ and the Open Letter to the European Commission

I am one of the scientists who signed the Open Letter to the European Commission in reaction to the EU's intentions to grant robots and other intelligent machines 'Electronic Personhood'. I agree with the suggestion that robots should not have moral status at this time and should not be considered capable of having rights. At the moment, there is no robot or other intelligent machine that is 'smart' enough to make the assignment of "personhood" logically acceptable.

However, I disagree with the blunt rejection of rights for robots, as drawn up in the Open Letter to the EU. One of the arguments in the letter says that granting human rights to robots would violate human rights. That is right if you look at the matter as "only humans can have human rights". Although this is correct, the problem lies in the bigoted nature of such formulations. In the past, many people were excluded from equal rights: the socially lower classes, women, people from ethnic minorities, and more recently the LGBTQ community. And the problem is that those who have already been granted the rights determine who else will or will not receive the same rights. If, on the other hand, the words in the letter mean that robot rights would conflict with the rights that people exercise, that holds true for all rights (e.g., freedom of speech vs. the right to equality and non-discrimination).

For many, the discussion on endowing robots with personhood revolves around liability and responsibility: who should be held responsible, or pay, for accidents, violations, and other illegal practices. But "personhood" is a multidimensional concept that includes a combination of rights and responsibilities. The issue is too complicated for the rigid "never" in the Open Letter to the EU. Perhaps there is a need for new legal categories that offer a more graded solution.

There are more examples of non-human entities having rights. Companies are legal entities, though they still have actual people behind the scenes who represent the company. Other examples are New Zealand's Wanganui River and the River Ganges in India; both are also legal entities and do not have people representing them. Granting these rivers the same legal rights as humans is the result of conservation measures implemented by activists using local legislation to protect nature. I am not saying that I agree with this, but it is a consequence of the ontological limitations of our current legislation.

The problem lies in the dominant anthropocentric concept of the word "personhood". This calls for multiple legal categories, which are still under discussion. A "never" as stated in the Open Letter is exceptional and exclusive, while there are still many debatable points to be resolved. The "never" shuts down such debates even before they have started. But it is also too soon to give robots and other autonomous technologies "electronic personhood". Now is the time to start discussing the many open issues of robot rights, not a time for final decisions.

When robots actually become self-conscious, it would be hypocritical and unfair of us humans to deny such robots the status of “personhood.” But to date we are still very far away from such a critical point in our history.

A robust set of human-robot behaviors and some discrepancies

For the project "How people use theory of mind to explain robot behavior", I am investigating whether people infer mental states such as beliefs, desires, and intentions to explain robotic agents' behavior, just as they do for humans. As a first step in our research program, we ran a series of pretests. The goal was to create a pool of stimulus behaviors that are judged similarly on the properties of behavior that influence people's explanations, regardless of whether these behaviors were performed by humans or by robots. We were successful in developing a stimulus behavior pool that can be used to rigorously examine whether people explain robot and human behaviors in similar or distinct ways. However, in the course of identifying this robust set of behaviors, we also discovered several behaviors that showed markedly discrepant judgments on these properties (i.e., intentionality, surprisingness, and desirability). These results were published as a Late-Breaking Report at the HRI 2018 conference in Chicago. The full set of behavior stimuli can be found here.

 

Abstract of the paper

The emergence of robots in everyday life raises the question of how people explain the behavior of robots—in particular, whether they explain robot behavior the same way as they explain human behavior. However, before we can examine whether people’s explanations differ for human and robot agents, we need to establish whether people judge basic properties of behavior similarly regardless of whether the behavior is performed by a human or a robot. We asked 239 participants to rate 78 behaviors on the properties of intentionality, surprisingness, and desirability. While establishing a pool of robust stimulus behaviors (whose properties are judged similarly for human and robot), we detected several behaviors that elicited markedly discrepant judgments for humans and robots. Such discrepancies may result from norms and stereotypes people apply to humans but not robots, and they may present challenges for human-robot interactions.
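
As a minimal, illustrative sketch of the underlying logic (this is not the analysis code from the paper, and the data, column names, and threshold below are hypothetical), one way to separate robust from discrepant stimulus behaviors is to compare mean property ratings between the human and robot framings of each behavior:

```python
# Illustrative sketch: flag stimulus behaviors whose mean property ratings
# diverge between human and robot agents. Toy data and threshold only.
import pandas as pd

# Each row is one participant's judgment of one behavior, performed by either
# a "human" or a "robot" agent, rated on a 1-7 scale for three properties.
ratings = pd.DataFrame({
    "behavior":       ["greets a stranger"] * 4 + ["hides a mistake"] * 4,
    "agent":          ["human", "human", "robot", "robot"] * 2,
    "intentionality": [6, 7, 6, 6, 7, 6, 3, 2],
    "surprisingness": [2, 1, 2, 3, 3, 4, 6, 7],
    "desirability":   [6, 6, 5, 6, 2, 1, 4, 5],
})

properties = ["intentionality", "surprisingness", "desirability"]

# Mean rating per behavior, per agent type, for each property.
means = ratings.groupby(["behavior", "agent"])[properties].mean()

# Absolute human-robot difference per behavior and property.
diff = (means.xs("human", level="agent") - means.xs("robot", level="agent")).abs()

# A behavior counts as "robust" if all properties are judged similarly for
# human and robot, and "discrepant" if any property differs by more than a
# chosen cutoff (hypothetical value on the 1-7 scale).
THRESHOLD = 1.0
discrepant = diff[(diff > THRESHOLD).any(axis=1)]
robust = diff[(diff <= THRESHOLD).all(axis=1)]

print("Discrepant behaviors:\n", discrepant)
print("Robust behaviors:\n", robust)
```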

 

Smart robotic companions to assist seniors

Funded by a $1 million NSF grant, Brown's Humanity Centered Robotics Initiative is teaming up with Hasbro to add artificial intelligence capabilities to the Joy for All Companion Pets so they can support seniors with everyday tasks. This includes help with finding lost objects, medication reminders, or other tasks that sometimes become challenging, especially for those who may have mild dementia. Over the next three years, we plan to perform a variety of user studies to understand how the smart pets might best assist older adults, and we will work on developing and integrating a variety of artificial intelligence technologies that meet the needs identified in the user studies.

The official press release can be found here.

Listed on Robohub’s list of “25 women in robotics you need to know about – 2017”

Today I was honored to see myself among 24 other amazing female roboticists who, according to Robohub, everyone needs to know about in 2017. The list features women from all over the world, from all disciplines related to robotics, in both academia and industry. Robohub yearly identifies 25 women based on their inspirational stories, enthusiasm, fearlessness, vision, ambition, and accomplishments. As said on their website, I too hope to inspire other young women and girls to go into STEM, or robotics specifically.

Robohub’s 25 Women in Robotics – 2017 list

How to make robots transparent and explainable for everyday users

This week I received notification that our paper on explainable robots has been accepted for presentation at the AAAI Fall Symposium 'AI-HRI', November 9-11, 2017, in Arlington, VA, USA. In the paper, Bertram Malle and I present a strategy for making autonomous intelligent systems, such as robots, transparent and explainable for everyday users.

Abstract

To make Autonomous Intelligent Systems (AIS), such as virtual agents and embodied robots, “explainable” we need to understand how people respond to such systems and what expectations they have of them. Our thesis is that people will regard most AIS as intentional agents and apply the conceptual framework and psychological mechanisms of human behavior explanation to them. We present a well-supported theory of how people explain human behavior and sketch what it would take to implement the underlying framework of explanation in AIS. The benefits will be considerable: When an AIS is able to explain its behavior in ways that people find comprehensible, people are more likely to form correct mental models of such a system and calibrate their trust in the system.

Best paper award at HRI 2017

Who would have thought

Our paper entitled 'Why do they refuse to use my robot?: Reasons for non-use derived from a long-term home study' received the Best Paper Award at HRI 2017 in Vienna, in the HRI User Studies track.

Abstract of the paper

Research on why people refuse or abandon the use of technology in general, and robots specifically, is still scarce. Consequently, the academic understanding of people's underlying reasons for non-use remains weak. Thus, vital information about the design of these robots, including their acceptance and refusal or abandonment by their users, is needed. We placed 70 autonomous robots within people's homes for a period of six months and collected reasons for refusal and abandonment through questionnaires and interviews. Based on our findings, the challenge for robot designers is to create robots that are enjoyable and easy to use to capture users in the short term, and functionally relevant to keep those users in the longer term. Understanding the thoughts and motives behind non-use may help to identify obstacles for acceptance, and therefore enable developers to better adapt technological designs to the benefit of the users.


Other interesting talks at HRI 2017

Obviously, mine was not the only interesting presentation at the conference. Here is a list of my favorites. All papers of the full program can be found on the conference website.

‘Threatening flocks and mindful snowflakes: How group entitativity affects perceptions of robots’ presented by Marlena Fraune.

‘Steps towards participatory design of social robots: Mutual learning with older adults with depression’ presented by Hee Rin Lee.

‘Affective grounding in human-robot interaction’ presented by Malte Jung.

‘Staking the ethical limits of HRI’ presented by Thomas Arnold.

 

Organizing a Workshop at RO-MAN 2016

This year, I co-organized the workshop 'Challenges of HRI in Real-World Contexts', together with Somaya Ben Allouch and Astrid Rosenthal-von der Pütten, during the RO-MAN conference held in New York. The aim of our workshop was to bring together researchers from both industry and academia to discuss best practices as well as pitfalls of HRI research in real-world settings, and to provide the HRI community with guidelines to inform future developments of their robotic systems. The interactive character of our workshop gave room to many extended discussions on the challenges of doing HRI research outside the lab. What methodological issues arise when we try to replicate findings earlier established in lab settings? What role does the research context play? And what kind of technological challenges are we facing when performing research in the wild? Based on the presentations and the follow-up discussions after each presentation, we listed four topics for our main extended discussion session at the end of the morning.

Take what you can and give nothing back

The first discussion topic was guided by the following question: What can we learn from other fields about research in real-world contexts? One of the main points addressed was that the HRI community may rely too much on theories and methods from (social) psychology. The assumption that human-robot interactions (should) follow the guidelines of human-human interaction takes a very prominent place, especially in the way social behaviors of robots are developed and evaluated. However, many other fields have relevant theories, methods, and approaches. For example, the fields of medicine and health research offer alternatives that can be useful when studying (long-term) effects on human behavior. The second main point addressed in this discussion was that ethical issues become more prominent when conducting research in the wild.

Everything inside out

The second discussion topic was guided by the following question: What does it mean to replicate results found in lab studies in real-world contexts? One of the main points addressed was that all researchers should acknowledge that there is a divide between lab studies and research in the wild, not only with regard to the conclusions that can be drawn from the different types of studies, but also concerning the methods one could (or should) apply. Perhaps an even more important divide, in terms of its impact on the ability of the HRI field to move forward, is that both types of research seem to be conducted by different sub-groups within the field. The HRI community should be more open to different types of methodologies and approaches in order to build a strong research field. This leads to the second main point addressed in this discussion: the necessity of including qualitative data when performing research in the wild. Quantitative data on its own cannot capture all the complex phenomena going on in real-world contexts. A third and final point addressed in this discussion, linked to the previous one, is that almost any type of HRI research currently differs from the ultimate real-world context, since robots are not yet fully disseminated within our society, and participants in research are not the same as real end-users. Only when robots become mainstream and people start using them on a regular basis can we begin to unravel the sustained effects of our interactions with these artificial others.

Houston, we have a problem

A third discussion topic revolved around the technological challenges of conducting research in the wild. HRI researchers often encounter technical problems with the utility of the system, but also with regard to the collection of user data. The first main point addressed here is that researchers often choose the best solution within their technical constraints rather than the perfect solution, as a result of limitations in resources, the infeasibility of the perfect solution, or other reasons. This does not have to be a problem, but its implications for the conclusions drawn from such studies should be properly addressed. Another main point addressed in this discussion is that innovation research often strives for patent registration, which has a negative effect on sharing progress with others in the field.

Long time no see

The fourth topic discussed was long-term research. One of the main points addressed was that the definition of what makes a study long-term should not solely depend on the user's perspective, but should be linked to the (cognitive) development of the robot as well. Not only do users change their (use) behaviors over time; the robot will also develop over time as it learns to master its necessary skills. Another main point addressed in this discussion was that each user will have his or her own interpretation of the robot, resulting in different (social) roles being assigned to it. Even when interacting with the same robot, each user will establish their own use behaviors: one person may have daily social chit-chats with the robot, while another may use it just as a tool. A final point addressed in the discussion on long-term research is that we may need to define classifications for both the technology and the user. For example, we could stereotype the technology based on its functionalities in a similar way as we do with users, writing personas for each user group, and we could stereotype user groups on aspects such as (not) wanting to interact socially with robots.

I was awarded the NWO Rubicon grant

This week, I received the news that the Netherlands Organization for Scientific Research (NWO) has decided to award me a Rubicon grant, a grant that offers talented researchers who have completed their doctorates in the past year the chance to gain experience at a top research institution outside the Netherlands. The grant enables me to continue my research at Brown University’s Humanity Centered Robotics Initiative, supervised by Professor Bertram Malle. The goal of the research project is to investigate if, when and how people use theory of mind to explain robot behavior. The results will contribute to technology design and policy direction of acceptable robot behavior.

 

Project Summary

Many emerging applications of robotic technology involve humans and robots in assistive, pedagogical, and collaborative interactions. People interact increasingly with robots that display basic features of intentional agency (e.g., approaching, looking, grasping, listening, speaking, fetching). The question therefore arises how people conceptualize such robot behaviors, in particular, whether they interpret them by way of mental states such as beliefs, desires, intentions, and so on, just as they do for other humans [2]. Such interpretations constitute what is typically referred to as ‘theory of mind’, which is the ability to infer and reason about other people’s mental states [1], [3]. Because robot designers are expected to optimize such sophisticated human-robot interactions, they need to examine how people interpret robot behaviors and determine whether interactions are more acceptable, satisfying, and effective when humans indeed apply their theory of mind to robot behaviors. However, the conditions and functional benefits of this theory of mind in human-robot interactions (HRI) are currently unknown, and detailed insights into the scope and limits of people’s humanlike treatment of robots are needed.

The present research uses the study of behavior explanations —a core component of human social cognition— as a novel technique for examining people’s readiness to infer mental states from robot behavior and to manage a robot’s social standing. Research on theory of mind in the area of human-robot interaction will yield novel insights into the way people explain robot behaviors as compared to human behaviors. Moreover, the results of this research will inform design requirements for robotic systems to optimize social interactions between robots and humans.

References

[1] Baron-Cohen, S. (1988). Without a theory of mind one cannot participate in a conversation. Cognition, 29, 83–84.

[2] Malle, B.F., and Hodges, S.D. (2005). Other minds: How humans bridge the divide between self and other. Guilford Press: New York, NY, USA.

[3] Premack, D., & Woodruff, G. (1978). Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 1, 515–526.