Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session 6: The Machine Actor as an Opportunity for Improving Our Skills and Attitudes?
Time: Wednesday, 17/Sept/2025, 11:00am - 12:00pm

Session Chair: Jihyun Kim
Location: BAR/0I88/U
Barkhausen-Bau, Haus D, Georg-Schumann-Str. 11, first floor

Presentations

Imagining Sustainable Futures (With ChatGPT): Can a Climate Fiction Writing Exercise Foster Environmental Cognitive Alternatives and Creative Self-Efficacy?

Julia Winkler1, Tanja Messingschlager1, Wojciech Małecki2, Markus Appel1

1University of Würzburg, Germany; 2University of Wrocław, Poland

This study examines how writing climate fiction may foster the imagination of desirable climate futures (environmental cognitive alternatives) and creative self-efficacy. It also examines how collaborative use of generative AI shapes the creative writing process, depending on individual differences in perceived creative writing competence and climate change knowledge. In a two-group lab experiment (N = 144), participants wrote a climate fiction story either independently or with ChatGPT-4. The task increased environmental cognitive alternatives regardless of AI use. Flow experience positively affected creative self-efficacy but not environmental cognitive alternatives, and was itself predicted by perceived creative writing competence and baseline creative self-efficacy. Surprisingly, the writing task overall decreased creative self-efficacy, although ChatGPT use had a positive effect for participants with below-average subjective climate knowledge. Within the ChatGPT group, a stronger sense of leadership in the writing process was linked to a greater increase in environmental cognitive alternatives.



Persuasive AI: How Multimodal Large Language Models (MLLMs) Improve the Design and Evaluation of Digital Health Messages

Sijia Yang, Yibing Sun, Luhang Sun

University of Wisconsin-Madison, United States of America

Through three empirical studies, this research investigates the dual roles of multimodal large language models (MLLMs) as "AI research assistants" and "AI campaign designers" in improving digital health message design and evaluation. Our findings reveal that MLLMs effectively accelerate feature measurement in HPV vaccination tweets through in-context learning methods (Study 1). In a follow-up online experiment (Study 2), we found that AI-generated visual exemplars showed modest but significant benefits in correcting misperceptions by reducing psychological reactance. Furthermore, in the context of tobacco control messages, while MLLMs demonstrate limitations in simulating human message evaluations, they excel at multimodal annotation and feature discovery, identifying specific features that enhance or diminish message effectiveness (Study 3). This research illustrates how MLLMs can improve both deductive approaches (augmenting literature review, feature measurement, and stimuli generation) and inductive approaches (automating message annotation, interpretation, and feature identification) to health communication research, potentially increasing the efficiency and scalability of digital health campaigns.



The Impact of Explainable AI on Fairness Perceptions in Academic Performance Prediction: The Role of AI Literacy and Trust in Algorithmic Decision-Making

Laurids Blume, Marco Lünich

Heinrich-Heine-Universität Düsseldorf, Germany

This study examines the impact of Explainable Artificial Intelligence (XAI) on fairness perceptions in academic performance prediction (APP), focusing on the roles of AI literacy and trust in algorithmic decision-making. As AI-driven assessments become more prevalent in higher education, ensuring transparency and fairness is crucial for student acceptance and institutional trust. The research distinguishes between informational and distributive fairness, analyzing how faulty or dysfunctional decision trees affect these perceptions. Findings indicate that high trust in AI reduces the critical evaluation of erroneous decisions. In contrast, greater AI literacy enhances sensitivity to errors, highlighting the need for educational initiatives to foster AI literacy. The study underscores the importance of explainable AI systems and calls for balanced trust in AI to prevent over-reliance on flawed decisions. It recommends that AI developers enhance transparency while educational institutions integrate AI literacy training to prepare students for critical engagement with algorithmic decision-making.



Reducing Political Polarization Through Conversations with Artificial Intelligence

Timon M. J. Hruschka, Markus Appel

Universität Würzburg, Germany

We hypothesized that conversations with AI can be leveraged to alleviate polarization, and that communicative strategies successfully tested in human-human communication can be implemented in AI chatbots to increase the benefits of human-machine interactions. To evaluate these propositions, we conducted two experiments in which participants (N = 1036) communicated with AI chatbots in real time. Across both experiments, engaging with a counterarguing AI chatbot led to significant issue depolarization. Implementing high (vs. low) conversational receptiveness and active listening in the chatbot’s communication strategy led to stronger affective depolarization and to a greater willingness to engage in future conversations with holders of opposing opinions (AI or human). Intellectual humility was a consistent mediator between experiencing active listening and issue depolarization. Positivity resonance mediated the effects of positive communication on affective depolarization. Our studies show that LLMs can be powerful tools for individual depolarization.



Immerse & Befriend: The Role of Synthetic Relationship Perception and Narrative Transportation in a Mental Health App

Stefanie Helene Klein1, Marisa Tschopp2, Henrik Skaug Sætra3, Sonja Utz1,4

1Leibniz-Institut für Wissensmedien, Germany; 2scip AG; 3University of Oslo; 4Eberhard Karls Universität Tübingen

The global mental health crisis has spurred the development of apps that use conversational agents (CAs) to supplement or replace human professional support. While CA-based apps mostly use established therapeutic approaches, the psychological processes underlying the relationship between CA characteristics and user engagement and well-being are not fully understood. To shed light on this issue, we conducted an exploratory cross-sectional survey with 349 active users of the mental health app Betwixt. The app incorporates a CA that guides users through stages of self-development within a fantasy narrative. We tested how perceived synthetic relationships with the CA and narrative transportation relate to app engagement and perceived stress. Narrative transportation and friend-like CA perception positively predicted hedonic benefits and future usage intentions. However, these factors were not significantly associated with perceived stress. Our results contribute to the emerging field of synthetic relationships and highlight the potential of narrative transportation for developers of mental health CAs.