Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session 8: Methodological and Conceptual Challenges and Approaches
Time: Wednesday, 17 September 2025, 2:15pm - 3:45pm

Session Chair: Lisa Weidmüller
Location: BAR/0I88/U (Barkhausen-Bau, Haus D, Georg-Schumann-Str. 11, first floor)

Presentations

When Tomorrow is Too Far Away: Methodological Considerations in Futuristic Human-Machine Communication

Andrew Prahl

Nanyang Technological University, Singapore

This paper reflects on methodological challenges in conducting experimental human-machine communication (HMC) research on futuristic AI applications within the constraints of large interdisciplinary AI grant programs. It discusses the difficulty of translating abstract, unfamiliar technologies into effective experimental stimuli, identifies reasons for null findings, and argues for methodological adaptability in navigating the demands of technical collaborations and funding expectations.



From human experts to artificial authorities? Reconsidering the concept of epistemic authority in the context of generative AI

Esther Greussing, Evelyn Jonas, Monika Taddicken

Technische Universität Braunschweig, Germany

This research explores the role of generative AI (GenAI) as an epistemic authority by integrating perspectives from HMC and epistemology. Drawing on the relevant literature as well as on qualitative interviews, the submission pursues three key objectives: 1) discussing GenAI's technological capabilities (i.e., what speaks for and against considering GenAI an epistemic authority at all), 2) exploring users' perceptions of its authority across different contexts, and 3) distinguishing between epistemic authority and trust. Ultimately, this research seeks to enrich discussions of AI's role as an authority, particularly regarding how users accept or contest AI-generated information. By introducing (artificial) epistemic authority into the HMC debate and offering initial reflections on it, the submission highlights the need for a deeper understanding of how GenAI shapes the dynamics of authority in digital spaces.



Trust in Voice-Based Assistants: A User-Centered Perspective

Katharina Frehmann, Marc Ziegele

Universität Düsseldorf, Germany

Voice-Based Assistants (VBAs) like Alexa are increasingly integrated into daily life, serving as technical tools, interaction partners, and information sources. While trust plays a key role in their adoption, existing research relies on predefined frameworks that may not fully capture user perspectives. This study uses qualitative interviews to explore how users themselves define and experience trust in VBAs, what that trust affects, and how their trust in their own VBA compares to their trust in ChatGPT's new voice mode. Results show that, to users, social features are less relevant than technical reliability and informational accuracy. Trust is function-specific, affecting engagement only within certain limits. Users prioritize prior experience over technological sophistication, favoring familiar VBAs over the more capable voice mode of ChatGPT. These findings challenge traditional trust models for conversational AI and show how users weigh different influences on trust. Future research should integrate informational trustworthiness and account for users' limited usage and engagement.



Measuring AI Literacy: Developing a Comprehensive Self-Assessment Tool

Danilo Harles, Ulrike Mendel, Antonia Schlude, Christian Stumpf, Roland A. Stürz

Bayerisches Forschungsinstitut für Digitale Transformation, Germany

Given the growing presence of AI in professional settings, the ability to interact competently with generative AI systems is becoming increasingly important. This development raises questions about the relevance of AI literacy in the context of socio-economic divides.

Against this background, this study explores how AI literacy can be effectively measured with a concise self-assessment questionnaire. The study followed a three-step process: (1) a scoping review of the research literature on AI literacy, which identified over 250 relevant components of AI literacy; (2) the synthesis of these components into 60 questionnaire items; (3) an empirical reduction of the item set based on data from a survey of 1,500 internet users in Germany. This resulted in a final 11-item scale that ensures broad coverage of AI literacy while maintaining structural consistency with an established digital literacy instrument (DigCompSAT).
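
As a purely illustrative aside, the sketch below shows one common way such an empirical item reduction can be carried out: selecting items by corrected item-total correlation on synthetic Likert-type data. The data, the selection criterion, and all variable names are assumptions for illustration; the abstract does not specify the authors' actual reduction procedure.

```python
# Illustrative sketch of empirical item reduction (NOT the authors' procedure):
# keep the 11 items whose corrected item-total correlation is highest.
import numpy as np

rng = np.random.default_rng(42)
n_respondents, n_items = 1500, 60          # pool size as described in the abstract

# Synthetic Likert-type responses (1-5) driven by one latent "AI literacy" factor
latent = rng.normal(size=(n_respondents, 1))
loadings = rng.uniform(0.2, 0.9, size=(1, n_items))
raw = latent @ loadings + rng.normal(scale=0.8, size=(n_respondents, n_items))
responses = np.clip(np.round(2.5 * raw + 3), 1, 5)

def corrected_item_total(X):
    """Correlate each item with the sum of all *other* items."""
    rest = X.sum(axis=1, keepdims=True) - X   # total score excluding the item itself
    return np.array([np.corrcoef(X[:, j], rest[:, j])[0, 1] for j in range(X.shape[1])])

r_it = corrected_item_total(responses)
keep = np.argsort(r_it)[::-1][:11]            # retain the 11 best-discriminating items
print("Retained items:", sorted(keep.tolist()))
```

In practice, such a purely statistical cut would be combined with content considerations so that the retained items still cover all dimensions of the construct.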



Real-Time Response Measurement in HMC Research: An Example Based on Trustworthiness Evaluations of Empathic and Humorous Communicative AI as an Intermediary for Science-related Information

Evelyn Jonas, Monika Taddicken

Technische Universität Braunschweig, Institut für Kommunikationswissenschaft, Germany

Our submission addresses the growing complexity of researching human-AI interactions, particularly regarding perceptions and evaluations of communicative AI systems. Traditional post-reception surveys are prone to biases, prompting the use of real-time response (RTR) measurements to capture shifts in perception as they occur. The study presents a two-stage mixed-methods approach investigating the trustworthiness of a voice-based AI agent that incorporates empathic and humorous expressions in a dialogue about science-related information on nutritional supplements. The first study took place in a laboratory (n = 36), followed by an online study (n = 503) in Germany. Combining RTR measurements with guided interviews, the study identifies distinct evaluation patterns through hierarchical cluster analysis. Findings reveal that humorous statements can decrease perceived trustworthiness, whereas higher attributions of empathy, humor, and human-likeness increase it, supporting the CASA hypothesis. This approach offers valuable insights into how communicative AI is perceived and evaluated in real-time interactions.
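
To make the clustering step concrete, the sketch below applies Ward's hierarchical clustering to synthetic RTR traces and cuts the dendrogram into three groups. The simulated response patterns, the linkage method, and the cluster count are assumptions for illustration, not details reported in the abstract.

```python
# Illustrative sketch: finding evaluation patterns in real-time response (RTR)
# traces with hierarchical cluster analysis. All data are synthetic; Ward
# linkage and k = 3 are assumptions, not the study's reported setup.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(7)
n_participants, n_timepoints = 503, 120      # e.g., one RTR reading per second

# Simulate three response patterns: steady, rising, and dipping evaluations
t = np.linspace(0, 1, n_timepoints)
patterns = np.stack([
    np.full_like(t, 0.6),                    # steady
    0.3 + 0.5 * t,                           # rising
    0.7 - 0.4 * np.sin(np.pi * t),           # dipping mid-dialogue
])
membership = rng.integers(0, 3, size=n_participants)
traces = patterns[membership] + rng.normal(scale=0.08, size=(n_participants, n_timepoints))

# Ward's method on the raw traces; fcluster cuts the dendrogram into k clusters
Z = linkage(traces, method="ward")
clusters = fcluster(Z, t=3, criterion="maxclust")

for k in range(1, 4):
    mean_trace = traces[clusters == k].mean(axis=0)
    print(f"Cluster {k}: n = {(clusters == k).sum():3d}, "
          f"start = {mean_trace[0]:.2f}, end = {mean_trace[-1]:.2f}")
```

Qualitative interview material can then be used to interpret what distinguishes the recovered clusters substantively.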