Conference Agenda

Session Overview
Session 09: Emotional Responses towards Robots
Time: Thursday, 07/Sep/2017, 2:00pm - 3:00pm

Session Chair: Constanze Schreiner
Location: Room CIV 260

Presentations

Engaging with Humanoid Robots in Perspective of Human Communication and Emotional Responsiveness

Elly A. Konijn, Johan F. Hoorn

Vrije Universiteit Amsterdam, The Netherlands

Robots are being introduced into our society at a rapid pace, not only as housekeeping support such as robotic vacuum cleaners, but increasingly as humanlike social entities for communicative and social purposes. Examples include social robots that keep elderly people with dementia company (e.g., Paro), fulfill a social function for mentally healthy elderly people (e.g., Alice; Hoorn et al., 2015; van Kemenade et al., 2015), or teach children elementary school tasks (e.g., Nao; Kennedy et al., 2015; Leyzberg et al., 2014; Konijn et al., 2016). Due to an aging society and a lack of the financial and human resources needed to sustain sufficient levels of social welfare and healthcare (Allen & Wearden, 2013), an increasing need for supportive social robots is foreseen (Asaro, 2006; Pew, 2014; Stafford et al., 2014). However, social robots have thus far hardly been studied in communication science, even though they are increasingly designed to communicate with human beings in ways that resemble face-to-face communication.

The current study focuses on the communicative aspects of human-robot interaction. Humanoid robots that are designed to look like humans for communicative purposes (e.g., Nadine in Singapore, Jia Jia in China, or Erica in Japan) are the most likely to fulfil various communication needs in the future. Quintessential to the rise of humanoid robots in service-oriented professions such as healthcare, hospitality, and education is emotional responsiveness (Broadbent, 2017; Gockley et al., 2005; Fong et al., 2003). Emotional responsiveness contributes to humanness; a lack of it is generally perceived as less human, not humane, and uncanny – indeed, 'robotic'. Likewise, when humanoid robots find a way to connect emotionally to their interlocutor, they may be more successful. By evoking emotional responsiveness, cognitive reflections along the lines of 'it is just a robot' become less salient. Research has indicated that fulfilling emotional needs and emotional responsiveness toward social robots then take control precedence over reflective cognitions about the robot's fabricated nature – knowing it is not a real human being (van Kemenade et al., 2015; Konijn & Hoorn, 2017). Such 'biased' processing may likewise occur with an empathic response (following Study 2 in Konijn et al., 2009), and empathy underlies appropriate social communication. Thus, the following questions guided our research: 1) Are people emotionally responsive to social robots? 2) Do they empathize with robots that are 'in pain' or maltreated? 3) Do people attribute emotions to social robots? And 4) do these responses differ between more and less detailed facial expressions of a human and of social robots?

Previous research concluded that humans feel empathy for social robots in pain in similar ways as they do for human beings in pain (Rosenthal-von der Pütten et al., 2014; Suzuki et al., 2015). These studies provide important insights into human responsiveness to robots in pain, yet they did not compare human faces with robot faces. Rosenthal-von der Pütten's study compared a human dressed in black and filmed from the back with the baby toy-animal robot Pleo. Suzuki's study compared cutting a human versus a robot finger with a knife. Because facial expressions are a primary means of conveying social information, and observing another's facial expression is essential in evoking an emotional and empathetic response (Misselhorn, 2009), we focused on the face and included two types of humanoid robots that differed in facial expressiveness. Both the robots and the human being were filmed from the front while either being caressed or maltreated.

Participants were adults (N=265; M=31.5; SD=12.7; 47% male) who were randomly assigned to one of six conditions in a 3 (social entity: Robokind Alice vs. Aldebaran Nao vs. human actress) by 2 (treatment: caressed vs. maltreated) between-subjects design. Dependent variables were emotional responsiveness, empathy, and emotion projection. The three 'social entities' (Figure 1) were featured in one-minute video clips in which the robot/human was either treated nicely (caressed) or maltreated, resulting in six video clips. A manipulation check (3x2 MANOVA) confirmed the intended treatments (p<.001).

Measures (5-point Likert-type scales) assessed whether people A) respond emotionally toward robots as they do toward humans (PANAS; Watson et al., 1988; 20 items, 10 negative and 10 positive feelings; Cronbach's α>.82); B) respond empathetically when the robot/human is maltreated (combining items from Rosenthal-von der Pütten et al., 2014, and other scales; 14 items; Cronbach's α>.80); and C) project emotions onto robots as they do onto humans (from Rosenthal-von der Pütten et al., 2014; Cronbach's α>.72). Relatedly, D) to what extent do detailed facial expressions of robots make a difference?
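
For readers unfamiliar with the internal-consistency figures above, the following is a minimal sketch of how Cronbach's α is typically computed for a set of Likert items. The data frame, item names, and sample values are hypothetical illustrations, not the study's materials.

import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # items: one row per participant, one column per Likert item
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# hypothetical example: 10 positive-affect items rated 1-5 by 265 participants
ratings = pd.DataFrame(np.random.randint(1, 6, size=(265, 10)),
                       columns=[f"item_{i}" for i in range(1, 11)])
print(cronbach_alpha(ratings))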

Results (two-way MANOVA) showed main effects for both factors (p's<.001) and a significant interaction effect (p<.001). Post-hoc analyses showed that, for human and robots alike, participants' emotional responses were in accordance with how the human/robot was treated (caressed vs. maltreated). Furthermore, a significant interaction effect for empathy (p<.001) indicated that the violent treatment raised more empathy than the caressing treatment, more strongly for the human but also for the robots. Additionally, a significant interaction occurred for the projection of feelings (p<.001), indicating that participants projected feelings onto the robots as they did onto the human, in accordance with expectations given the treatment, yet at higher intensities for the human. Particularly for the maltreatment, the level of projected emotions followed the level of detailed facial expressiveness: highest for the human, then Alice, and lowest for Nao.
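
As a rough illustration of the kind of analysis reported here (not the authors' actual code), a 3 x 2 MANOVA on the three dependent variables could be set up as follows; the data file, column names, and factor codings are assumptions.

import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# hypothetical long-format data: one row per participant with columns
# entity (Alice / Nao / Human), treatment (caressed / maltreated),
# emotional_response, empathy, emotion_projection
df = pd.read_csv("responses.csv")  # assumed file name

maov = MANOVA.from_formula(
    "emotional_response + empathy + emotion_projection ~ entity * treatment",
    data=df)
print(maov.mv_test())  # multivariate tests for both main effects and the interaction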

In all, the results showed that participants did respond emotionally and showed empathy toward the robots, and also endowed them with emotions, yet (a little) less intensely than they did toward the human. Notably, emotional responsiveness toward the robots was above the midpoint of the scale. Results further indicated slightly stronger responsiveness toward the more expressive robot face (i.e., robot Alice) for some types of emotional responsiveness. Moreover, responses to the robots followed the general expectations for humans: maltreatment raised more intense negative emotions and more 'feeling sorry for' than the caressing treatment. Likewise, people attributed 'feeling pain' to the robots as they do to humans. Thus, people do respond affectively to humans and robots alike. At the conference, we will discuss this new line of research within a larger framework of how and why people respond to humanoid robots the way they do, and of their tendency to project human schemata onto robotic experiences.


I’ll Show You How I Feel – Emotional Facial Reactions Towards a Robot

Isabelle M. Menne, Birgit Lugrin

University of Wuerzburg, Germany

Robots are moving from industrial halls into our private homes, in the shape of lawn-mowing robots, vacuum-cleaning robots, or entertainment robots. The appearance of robots outside of an industrial context leads to new challenges and questions, such as: How do people react emotionally towards this new "form of life"? Can humans "feel something" for an artificial being? Can their reaction be observed directly? The answers to these questions are important not only for ethical considerations but also for advances in the design of natural human-robot interaction. A natural interaction includes the ability to infer the affective state of one's communication partner from externally observable cues. Facial expressions are cues that can be observed unobtrusively, and research indicates that facial expressions are associated with emotions (e.g., Ekman et al., 2005). Although there is a large body of research on emotion and facial expressions of emotion, most of it focuses only on human-human interaction. Systematic research on spontaneous facial (emotional) expressions towards robots remains rather scarce, and almost the same applies to systematic research on emotional reactions towards robots. However, there are some exceptions, and one particular study (Rosenthal-von der Pütten et al., 2013), which systematically investigated emotional responses towards robots, inspired the present study. Its authors studied participants' physiological arousal as well as their self-reported emotions towards a robot shown in different situations, and showed that humans indeed react emotionally towards robots.

As emotions are a complex, multilevel phenomenon, their measurement can profit from a multi-method approach to increase the validity of results. We believe facial expressions could serve as an important input channel for further investigations of emotional responses towards robots, especially given their value for natural, unobtrusive human-robot interaction. Thus, we studied whether a human's emotional reaction towards a robot can be observed in the face. We used the Facial Action Coding System (Ekman, Friesen & Hager, 2002), as it is the most widely and frequently used method for facial expression analysis across multiple fields (e.g., Lien et al., 2000). As stimulus material we used the anthropomorphic social robot Reeti by robopec and showed participants videos of Reeti either being tortured or being treated in a friendly way. Participants displayed more facial expressions (Action Units 12+25+38) associated with pleasantness and joy (Scherer & Ellgring, 2007) when watching Reeti being treated nicely. They also displayed more facial expressions (Action Units 9, 10, 15, 39) associated with unpleasantness and fear/disgust (Scherer & Ellgring, 2007) when watching Reeti being tortured. The findings also indicate a possible match with participants' self-reported emotions, as they reported feeling more positive after the friendly video and more negative after the torture video. With this and future work we hope to inform the design of natural and intuitive human-robot interaction, and to answer questions such as: which Action Units are most suitable for further investigations and applications?
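
A minimal sketch of how FACS-coded responses like these might be compared across the two video conditions; the file, column names, and use of simple t-tests are assumptions for illustration, not the authors' coding pipeline.

import pandas as pd
from scipy import stats

# hypothetical coding output: one row per participant with columns
# condition ("friendly" / "torture"), joy_aus (counts of AU 12, 25, 38 events),
# negative_aus (counts of AU 9, 10, 15, 39 events)
df = pd.read_csv("au_counts.csv")  # assumed file name

friendly = df[df["condition"] == "friendly"]
torture = df[df["condition"] == "torture"]

# more joy-related Action Units when Reeti is treated nicely?
print(stats.ttest_ind(friendly["joy_aus"], torture["joy_aus"]))
# more unpleasantness-related Action Units when Reeti is tortured?
print(stats.ttest_ind(torture["negative_aus"], friendly["negative_aus"]))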


Distal and Proximal Paths Leading into the Uncanny Valley of Mind

Jan-Philipp Stein, Peter Ohler

Chemnitz University of Technology, Germany

From the golems of Jewish mythology to Frankenstein's monster, from Goethe's brooms disobeying the sorcerer's apprentice to the malignant computer H.A.L. in Kubrick's "2001" – man-made entities as a threat to their creators have long been a central theme in cultural lore, arts, and media. However, cultural science indicates that it is especially Western societies in which deeply embedded anthropocentric values (i.e., those emphasizing the dominance of humans above all creation) lead to pessimistic views of intelligent machinery (Fuller, 2014); in contrast, East Asian cultures seem to attach less importance to the uniqueness of the "human soul" (Borody, 2013).

Inspired by such observations as well as the constant advancement of technology, media psychologists, too, have explored the way people interact with – and feel anxious about – sophisticated technology. Apart from aversions towards abstract computers and robots (e.g., Bartneck et al., 2007; Gray & Wegner, 2012), a growing body of research has focused on human-like androids and embodied conversational agents whose anthropomorphic features or emotional behavior directly challenge the idea of human distinctiveness. In an extension of Masahiro Mori's influential "Uncanny Valley" theory (1970), recent research suggests that these entities might fall into an "Uncanny Valley of Mind" if their mental capabilities approach those of a human being. While a groundbreaking experiment by Gray and Wegner (2012) highlighted the uncanniness of machines possessing their own feelings, we conducted a study focusing on machines recognizing the mind and mood of others. We suggest that the corresponding psychological concepts – social cognition, empathy, and having a theory of mind – present themselves as particularly unique human traits, as indicated by findings from various scientific areas (e.g., Gallagher & Frith, 2012; Iacoboni, 2009; Pagel, 2012). In consequence, machines conquering these domains should appear unpleasant, or even frightening, to observers.

We designed a fictitious Virtual Reality chat software that could be accessed via a head-mounted display and asked 92 participants to watch a standardized conversation between two 3D characters. Keeping the characters' appearance and behavior constant, we manipulated both the supposed controller (human avatar vs. computer-controlled agent) and its presumed level of autonomy (completely scripted vs. autonomous) in a between-subjects design. While the dialogue always featured empathic behavior, the varying instructions thus resulted in different attributions of mind to the depicted entities. Statistical analyses revealed that participants who ascribed the displayed empathy to an autonomous artificial intelligence experienced significantly more uncanniness than all other groups (F(1,88) = 5.68, p = .019, ηp² = 0.06).
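
For reference, the effect size reported above can be recovered from the F value and its degrees of freedom via ηp² = (F · df1) / (F · df1 + df2). The short check below simply reproduces the reported value; it involves no study data.

# partial eta squared from an F statistic and its degrees of freedom
F, df1, df2 = 5.68, 1, 88
eta_p_sq = (F * df1) / (F * df1 + df2)
print(round(eta_p_sq, 3))  # ~0.061, matching the reported 0.06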

Follow-up statements by several participants and observations during the experiment indicate that perceptions of (loss of) control over the intelligent artificial entity might act as a strong moderator of the reported effect. We therefore propose and discuss an extended model that addresses the interplay between more cognitive aspects in the form of human uniqueness concerns – i.e., a distal path – and the immediate, affective experience of threat posed by an emotionally aware computer – a proximal path.


Mechanisms and Positive Effects of Narrative Introductions in Human-Robot-Interactions

Martina Mara2, Astrid Rosenthal-von der Pütten1, Carolin Straßmann1, Markus Appel3, Nicole Krämer1

1University of Duisburg-Essen, Germany; 2Ars Electronica Center, Linz; 3University of Koblenz-Landau

The presented work deals with the question of whether a positive narrative introduction is beneficial for robots independently of their appearance. Initial studies revealed the power of narrative communication prior to human-robot interaction and showed that a narrative introduction of a robot leads to more positive interactions and evaluations (Mara & Appel, 2015). It is, however, unclear whether this positive framing of robots through narratives works equally well for different robot design approaches and appearances. Moreover, it is unclear which underlying mechanism (transportation into the narrative story or clarification of the concept 'service robot') is at work. We present two online experiments addressing these open research questions.

In a 2x6 between-subjects online experiment (n=249), we varied the introduction (a narrative describing a young woman interacting with a robot vs. an instruction manual, with an equal number of words) and the appearance of the robot (6 different robot appearances, cf. Rosenthal-von der Pütten & Krämer, 2014). The goal of this study was to examine whether a narrative introduction of a robot affects its evaluation and whether there are interaction effects with different robot appearances. We calculated a two-factorial MANOVA and found a significant main effect of robot appearance on ratings of likability, dominance, uncanniness, human-likeness and mechanicalness (all p<.002, ηp² between .079 and .126), but not on autonomy and intelligence. These results replicate previous findings on evaluation differences based on appearance (Rosenthal-von der Pütten & Krämer, 2014). Moreover, we found a significant main effect of narration: when the robot was introduced by a narrative story, it was evaluated as more likable, more autonomous, more intelligent, more humanlike, and less mechanical than when it was introduced by the informational leaflet (all p<.006). The MANOVA revealed no significant interaction effects between robot appearance and narration on users' evaluations.

In a second 2x3 between-subjects online experiment (narration vs. instruction manual; 3 different robots; n=152), we additionally administered scales assessing participants' sense of transportation into the narrative story (Appel, Gnambs, Richter & Green, 2015) and clarification of the concept 'service robot' (Appel, Krause, Gleich & Mara, 2016). We found a significant main effect of narration on evaluation (likability, autonomy, intelligence, human-likeness, mechanicalness; all p<.001). However, differences in evaluations based on appearance were not replicated, and there were no interaction effects. Regarding the mechanism of narration, we found that a narrative introduction led to a higher sense of transportation (narration M=3.96; instruction leaflet M=3.44; F(5/146)=13.74, p<.001, ηp²=.086), but not to better concept clarification. The effect of narration on the robots' evaluation was not mediated by participants' experience of being transported into the narrative story.
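
A minimal sketch of the kind of mediation check described above (narration → transportation → evaluation), using simple OLS regressions; the file, column names, coding of the conditions, and choice of likability as the outcome are assumptions for illustration, not the authors' materials or method.

import pandas as pd
import statsmodels.formula.api as smf

# hypothetical data: narration coded 0 = instruction leaflet, 1 = narrative story,
# plus continuous transportation and likability scores
df = pd.read_csv("study2.csv")  # assumed file name

# path a: does narration predict transportation?
a = smf.ols("transportation ~ narration", data=df).fit()
# path b (and direct effect c'): does transportation predict likability, controlling for narration?
b = smf.ols("likability ~ narration + transportation", data=df).fit()
# total effect c: narration on likability
c = smf.ols("likability ~ narration", data=df).fit()

print(a.params["narration"], b.params["transportation"], c.params["narration"])
# evidence for mediation would require both the a and b paths to be substantial;
# here the abstract reports that no such mediation was found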

In summary, the results indicate that robots introduced by a positively framed narrative story were evaluated as more likable, intelligent, autonomous, and humanlike. They were also perceived as less mechanical and less uncanny. However, there were no interaction effects between narration and robot appearance, suggesting that narration is beneficial for robots regardless of their appearance and is hence a strong mechanism for shaping positive expectations before actually interacting with a robot.



 