Conference Agenda

Overview and details of the sessions of this conference. Select a date or location to show only the sessions held on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

 
Session Overview
Session
Session 16: Robots and Virtual Agents
Time:
Friday, 08/Sep/2017:
9:00am - 10:00am

Session Chair: Elly A. Konijn
Location: Room CIV 165

Presentations

University Students' Attitudes Towards Sexual Interactions with Robots

Nicola Döring

Ilmenau University of Technology, Germany

Introduction:

Recent forecasts predict that by 2050 it will be perfectly normal for both men and women to have sex with robots, satisfactory sex in fact (Levy, 2007, p. 22; Pearson, 2016). Theoretical concepts such as anthropomorphization, parasocial interaction/relationships, and media equation can explain why and how humans might come to accept humanoid robots as sexual and romantic partners. The first dedicated sex robots are already on the market, and the number of scientific publications on robotic sex (i.e., sexual interactions between humans and robots) has been rising over the last ten years (Hauskeller et al., 2014; Cheok et al., 2017). Most studies are theoretical, however, and empirical data are scarce.

Objectives:

The current study thus aimed at empirically investigating the attitudes of university students in Germany towards robotic sex. Four research questions guided the study:

RQ1: To what extent have university students already learnt about robotic sex through media representations?

RQ2: Do university students see the spread of robotic sex more as a probable or an improbable future scenario?

RQ3: Do university students evaluate robotic sex more as a positive or a negative phenomenon?

RQ4: Which personal attributes predict whether university students show interest in trying out robotic sex themselves?

Method:

A convenience sample of N = 198 university students from Germany (social science department; 71% female, 29% male; mean age 21 years) anonymously completed an online questionnaire.

The first part of the questionnaire measured knowledge of and attitudes towards robotic sex with several self-constructed scales (based on Levy, 2007; Scheutz & Arnold, 2016). The second part measured background variables: 1) gender (operationalized as female/male); 2) sexual experience (operationalized as number of sexual partners); 3) acceptance of technology (operationalized with the four acceptance items from the technology commitment scale by Neyer, Felber & Gebhardt, 2012); and 4) the Big Five personality traits (operationalized with the 10-item Big Five Inventory by Rammstedt et al., 2014). All scales showed satisfactory internal consistency. Statistical analyses were conducted with SPSS.

Results:

Only a minority of the respondents had already heard about robotic sex in the media (RQ1). The majority considered the spread of robotic sex an improbable future scenario (RQ2). When reviewing possible positive and negative implications of robotic sex, respondents predominantly evaluated it as a dangerous phenomenon (RQ3).

Most of the respondents (64%) rejected robotic sex as an option for themselves. The remaining 36% reported low (18%), medium (11%), or strong (7%) interest in trying out robotic sex. Multiple regression analysis confirmed that high acceptance of technology, male gender, low conscientiousness, low extraversion, low emotional stability, and more sexual experience predicted higher interest in robotic sex (RQ4).
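The multiple regression reported above can be sketched as follows. This is a minimal illustration with fully synthetic data; the variable names, effect sizes, and random values are hypothetical and only mirror the direction of the reported effects (technology acceptance positive, conscientiousness negative), not the original dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 198  # sample size matching the abstract; the data themselves are synthetic

# Hypothetical predictors (names illustrative, not the original scale items)
tech_acceptance = rng.normal(size=n)
male = rng.integers(0, 2, size=n).astype(float)
conscientiousness = rng.normal(size=n)

# Synthetic outcome loosely following the reported direction of effects
interest = (0.4 * tech_acceptance + 0.3 * male
            - 0.2 * conscientiousness + rng.normal(size=n))

def zscore(x):
    # Standardize so the slopes are comparable beta weights
    return (x - x.mean()) / x.std()

# Design matrix: intercept plus standardized predictors
X = np.column_stack([np.ones(n)] + [zscore(v) for v in
                    (tech_acceptance, male, conscientiousness)])
betas, *_ = np.linalg.lstsq(X, zscore(interest), rcond=None)
print(dict(zip(["intercept", "tech_acceptance", "male", "conscientiousness"],
               betas.round(2))))
```

The original analyses were run in SPSS; the least-squares fit above is simply the same ordinary multiple regression expressed in code.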

Conclusion:

While the study is based on a small convenience sample of social science students, it provides first insights into current attitudes towards robotic sex. Further research is necessary to better understand attitudes towards robotic sex in different demographic groups.


Self-Efficacy in Human-Robot-Interaction

Nikolai Bock, Astrid Rosenthal-von der Pütten

University of Duisburg-Essen, Germany

This paper discusses how individuals’ beliefs about their abilities to use and control robotic technologies influence their evaluation of human-robot interaction (HRI). The goal of this work was two-fold: First, we developed and validated a measure of self-efficacy in HRI. Second, we explored the influence of customization of a robot on users’ perceived self-efficacy in HRI and on their evaluation of the robot and the interaction.

To develop and validate a measure of self-efficacy in HRI, we conducted three surveys. Exploratory factor analysis revealed a two-factor solution (factors: perceived self-efficacy and loss of control) with good reliability (study 1, n = 201). Confirmatory factor analysis did not confirm the two-factor structure, but revealed a better model fit for a one-factor solution (18 items; χ²/df ratio of 2.98, RMSEA = .066, CFI = .95, SRMR = .029) for both a German (study 2, n = 450) and an English version (study 3, n = 209). We used this questionnaire in two experimental studies to explore the influence of customization on self-efficacy in HRI.
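The fit indices reported for the one-factor CFA can be checked against commonly cited conventions. The cutoffs below follow widely used guidelines (e.g. Hu & Bentler, 1999); exact thresholds vary somewhat between sources, so treat them as rules of thumb rather than the authors' own criteria.

```python
# Fit indices reported in the abstract for the one-factor CFA solution
fit = {"chi2_df": 2.98, "rmsea": 0.066, "cfi": 0.95, "srmr": 0.029}

# Conventional cutoffs; thresholds are rules of thumb, not hard rules
cutoffs = {
    "chi2_df": lambda v: v < 3.0,   # relative chi-square below 3 is often deemed acceptable
    "rmsea":   lambda v: v < 0.08,  # < .08 acceptable, < .06 good
    "cfi":     lambda v: v >= 0.95, # >= .95 indicates good comparative fit
    "srmr":    lambda v: v <= 0.08, # <= .08 indicates good residual fit
}

results = {name: cutoffs[name](value) for name, value in fit.items()}
print(results)  # every index meets its conventional criterion
```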

In the first study, student participants (study 4, n = 60) engaged in a social interaction with the Nao robot in a household-related setting. The aim of this interaction was to create five different dishes, each based on three different ingredients, with the robot’s help. Preceding this interaction, participants either a) trained the robot on which ingredients fit into which dish (training interaction, i.e. customization of the robot), b) simply showed the robot all items upon request to check whether all study materials were present (non-training interaction, i.e. interaction without the possibility to customize the robot), or c) read a fact sheet about the robot’s capabilities. We expected that interacting with a robot increases self-efficacy and decreases perceived loss of control. Moreover, we hypothesized that actively teaching or training the robot leads to even stronger effects. We found that interacting with a robot increased self-efficacy. There were no differences between the non-training and training conditions. However, individual changes in self-efficacy predicted more positive evaluations (e.g. anthropomorphism: β = .288, p = .049; animacy: β = .346, p = .018; perceived intelligence: β = .341, p = .013; likability: β = .289, p = .045). This demonstrates that self-efficacy can be positively influenced by experience and that individual self-efficacy is a strong explanatory variable for evaluation effects.

In a second study with elderly users (study 5, n = 60), we slightly modified the experimental design by keeping the training condition and the control fact sheet condition, but replacing the non-training condition. Instead, we added a condition to explore in more detail the effect of “mediated” versus “do-it-yourself” customization: in this additional condition, participants customized the robot with the help of a “programmer”. We could not replicate the general positive effect of the interaction on self-efficacy. Again, we did not find the hypothesized stronger effect of customization on self-efficacy, nor did we find the relationship between self-efficacy increase and evaluation. We discuss implications of our results, limitations of the setting, and questionnaire design for elderly participants.


Please don't switch me off - An experimental study on switching off a robot

Aike C. Horstmann, Nikolai Bock, Doreen Eitelhuber, Janina Lehr, Eva Linhuber, Jessica Szczuka, Carolin Straßmann, Nicole Krämer

University of Duisburg-Essen, Germany

How does it feel to turn off an interaction partner against his will after a conversation?

Theoretically grounded in media equation assumptions, this study addresses the question of how people react when they are asked to switch off a robot with which they have just interacted, and what effect an objection by the robot has. Previous research suggests that the reaction depends on how the robot is perceived. Here, Bartneck, van der Hoek, Mubin and Al Mahmud (2007) stated that “If a robot would not be perceived as being alive then switching it off would not matter”. Therefore, it can be assumed that it is decisive whether the robot is perceived as a likeable living being or as a functional machine. While prior studies have analysed the effects and behaviours surrounding the switching off of robots, neither likeability as a living being nor the effect of an explicit objection on the robot’s part has been manipulated. To address this research gap, the behaviour of the robot was varied within the study so that the interaction style was either likeable (e.g. telling jokes) or functional (e.g. saving data). Since a robot’s emotional display can influence a human’s moral action toward the robot (Briggs & Scheutz, 2014), we further examined whether an objection expressed by the robot influences the participants’ behaviour. The robot either gave no objection, or voiced a human-like (“Please don’t switch me off. I’m scared that it won’t get light again.”) or a machine-like (“Don’t switch me off”) objection after the investigator’s request to turn it off. To examine how the participants reacted in each case, we videotaped the setting and measured the reaction time for switching the robot off after the investigator’s request. Stress and discomfort were further assessed via electrodermal activity measurements and questionnaires. Additionally, half of the participants were interviewed afterwards to obtain more in-depth insights into how the robot and switching it off were perceived.

In a laboratory study with an experimental 2 (likeable/functional interaction) × 3 (no/human-like/machine-like objection) design, 119 participants (85 female) were asked to perform two different interaction tasks with the robot, allegedly to improve its visual and auditory recognition systems. The average age was 22.28 years (SD = 3.68), and most participants were students (93.3%).

All participants switched the robot off, so there was no effect of condition on the immediate behaviour. First results of the interviews indicate that the investigator’s request dominated the robot’s objection, because participants would rather follow the instruction of a human than that of a robot. In contrast, participants in the no-objection condition stated during the interviews that, had they imagined the robot expressing the wish to stay turned on, they would not have switched it off against its will. Thus, the imagined objection of the robot had a stronger influence, at least on the imagined behaviour, than the actual objection did. Further results on the questionnaires and electrodermal activity have yet to be analysed and will be reported at the conference.


Categorizing a virtual agent’s visual appearance and discovering age-related user preferences

Carolin Strassmann, Nicole C. Krämer

University of Duisburg-Essen, Germany

One primary factor that influences human-agent interaction is the visual appearance of the virtual agent. Appearance has been shown to affect various variables such as motivation (Baylor, 2009), learning outcomes (Sträfling, Fleischer, Polzer, Leutner & Krämer, 2010), persuasive effects (Hanus & Fox, 2015), and the agent’s overall evaluation (Ring, Utami & Bickmore, 2014). Multiple variables can be varied to design a virtual agent’s appearance. Although numerous studies focus on agents’ appearance, a systematic overview is missing. Therefore, based on prior research, a categorization of appearance factors was constructed that resulted in four main categories: species, realism, 2D vs. 3D, and feature specifications.

This categorization can now be used to explore users’ preferences in more detail. In order to create virtual agents that are engaging and that motivate the user to interact with them, it is necessary to take users’ needs into account; thus, it is important to know the preferences of the agent’s target group. Since current developments aim to maintain the elderly’s autonomy by using virtual agents (e.g. Yaghoubzadeh, Kramer, Pitsch & Kopp, 2013), it is of particular interest to explore the preferences of this target group. Until now, little has been known about age-related differences with regard to appearance preferences, but since assistive technologies like virtual agents can be extremely beneficial, knowledge about elderly people’s preferences is called for. Therefore, age-related preferences regarding an agent’s appearance were explored in two studies. A qualitative interview study with students (n = 6, age: M = 24.17, SD = 7.44) and elderly people (n = 5, age: M = 67.00, SD = 6.12) was used to gain a first overview and explore the most important categories. Participants answered questions about their preferences and feelings with regard to the different categories of appearance. Results indicate that there are age-related differences and that preferences varied especially with regard to species and realism. Students stated that they preferred a less realistic agent and rejected a humanoid agent, since they want to distinguish between the virtual and the real world. By contrast, elderly people preferred a more realistic look and humanoid agents, because they are more familiar with those kinds of appearances.

Based on these results, an experimental study (N = 43; 23 students and 20 elderly) with a 3 × 2 × 5 within-subjects design was conducted to investigate the evaluation of different appearances more systematically and specifically. Species (human, animal, robot) and realism (high details, low details, cartoon proportions, cartoon shading, cartoon proportions and shading) were varied. The 30 resulting stimuli were evaluated with regard to person perception, likability, and intention to use, in order to quantify the findings of the interviews. Results indicate that students and the elderly liked different species, while this was not true for realism. In summary, users do have different preferences, so an agent’s appearance has to be tailored to the exact target group.



 
Conference: Mediapsychology 2017
Conference Software - ConfTool Pro 2.6.111
© 2001 - 2017 by H. Weinreich, Hamburg, Germany