People do not want robots to be social?
One of the studies during my PhD research involved an online survey among the Dutch population (n = 1168) investigating the anticipated acceptance of social robots in domestic environments. An interesting finding of this survey was that, overall, the robot’s social behaviors were not appreciated. The participants negatively evaluated the sociability and companionship possibilities of the future robot scenarios presented to them (i.e., a butler robot, a companion robot, and an information-source robot). Thus, these data suggest that, at least at this stage of social robot diffusion in society, people do not want robots to behave socially. There are several possible explanations for these results. One explanation is that people simply do not want robots that behave socially and that the development of such robots should not be pursued. The survey results show that potential future users reported a higher intention to use a social robot in their own homes when the robot was less sociable. Additionally, the participants indicated that they believed a social robot could better adapt to their needs when it provided less companionship. Taken together, these results suggest that people do not want robots to behave socially or provide companionship, and that the development of such robots is undesirable.
No need to start panicking (yet)
Luckily for those pursuing the development of social robots, the participants provided some inconsistent assessments of social robots by also indicating that a more sociable robot could better adapt to their needs, i.e., would have higher adaptability. Thus, a second explanation for the more negative evaluation of the robot’s social behavior could be that people fear, or are simply not yet familiar with, social interactions and companionship with social robots. The average scores of the acceptance variables in the survey show that the participants had strong concerns about their privacy when using a social robot in their own homes. Additionally, the results show that when participants believed they were more competent to interact with a social robot and could better trust it, they perceived the robot’s behavior as more sociable. Furthermore, when participants believed they were more competent to properly interact with social robots, they expected to feel less fear when talking to one. And when participants expected to feel safer in the presence of a social robot, they believed it could provide more companionship. Privacy concerns may thus play a role, and people may fear the sociability of future social robots that are capable of providing companionship. This fear may stem from people’s privacy concerns, their perceived lack of competence in properly interacting with social robots, their expected fear of talking to robots, or an expected lack of safety in the presence of a social robot. Notably, the participants also indicated that when a social robot is more expensive and increases the user’s status, they expect it to provide more companionship.
Social interaction with robots violates current social norms
A third explanation for the more negative evaluations of sociability and companionship is that admitting to treating social robots as companions was perceived by the participants as socially undesirable. Just as depending on television for companionship has been characterized as an inappropriate motivation for use (Rubin, 1983), it is possible that using a robot for companionship is not acceptable according to prevailing social norms. Social desirability is the tendency of participants to answer questions in a manner that will be viewed favorably by others (Paulhus, 1991), which causes over-reporting of ‘good’ behavior and under-reporting of ‘bad’ or ‘undesirable’ behavior. From the social sciences, it is known that a social desirability bias may occur in self-reported data, including data collected with questionnaires (Huang, Liao, & Chang, 1998), especially when inquiring about sensitive topics (King & Brunner, 2000). In an online study measuring both people’s implicit and explicit associations with domestic robots (de Graaf, Ben Allouch, & Lutfi, submitted), these two measures yielded conflicting outcomes, which may have been due to social desirability. Although people explicitly reported positive associations with robots, the implicit measures revealed negative associations. Furthermore, people’s implicit associations correlated negatively with their attitudes towards robots and positively with their anxiety towards robots, whereas their explicit associations did not significantly correlate with their attitudes towards robots and correlated negatively with their anxiety towards robots.
Based on these combined results, de Graaf, Ben Allouch, and Lutfi (submitted) concluded that people implicitly hold opinions about robots that differ from what they are willing to reveal explicitly. The difference between people’s implicit and explicit associations with robots may arise because people feel a social pressure, at least when completing a scientific survey on the topic, to be positive towards robot technology when in fact they are not. The study on implicit and explicit associations with robots was also an online study without any real-world human-robot interaction. Future research should investigate the predictive power of implicit and explicit measures in relation to actual behavior in human-robot interaction scenarios to draw further conclusions about the explanatory power of implicit and explicit associations with robots. Observational methods may yield different findings because they are less sensitive to social desirability. Such studies will become increasingly important as robotic technology advances, spreads through society, and is employed in home environments over the long term.
Social interaction with robots is a process of familiarization
To further explore why the participants in the online acceptance survey indicated that they did not want robots to behave socially or provide companionship, we must turn to other methods, such as observations and interviews, to determine how people actually interact with social robots. In contrast to the results of the online survey, the results of my long-term home study indicate that people do behave socially towards robots in their own homes, despite their skepticism about perceiving robots as social actors and companions. In this long-term home study, I deployed 70 Karotz robots in people’s homes (n = 168) and collected both qualitative and quantitative data about the acceptance process over a period of up to six months. When confronted with an actual robot in their private spaces, the participants engaged in social interaction with the robot: they talked to it, gave it a name, and interpreted its behavior in a social way. Furthermore, some participants indicated that they would appreciate it if future robots were able to interact more socially with their users, and some attempted to increase their social interactions with the Karotz robot used in this study. However, not all participants appreciated the robot’s social behavior. Some experienced feelings of uncanniness when the robot initiated unsolicited conversations, and these participants reduced the robot’s social features to a minimum. Taking the findings of the acceptance survey and the long-term home study together, social robots still have a long way to go, both in the proper development of their social behavior and in their full acceptance by potential future users.