Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session: Session 2: Technological Perspectives on the Machine Actor
Time: Tuesday, 16 September 2025, 11:30am - 12:30pm
Session Chair: Esther Greussing
Location: BAR/0I88/U (Barkhausen-Bau Haus D, Georg-Schumann-Str. 11, first floor)

Presentations

Metacognitive Illusions – Rhetoric between AI Strategies and AI Literacies

Fabian Erhardt¹٬², Markus Gottschling¹٬²

¹Eberhard Karls Universität Tübingen; ²Center for Rhetorical Science Communication Research on Artificial Intelligence (RHET AI)

Generative AI systems use “metacognitive illusions” to interact with users in a dialogical way. At the center of this illusion is a communicative performance in which the system stages itself as a reflective instance, one in which processes of deliberation and reflection seem to unfold much as they do in mental systems. This enables an intuitive and effective user experience. At the same time, metacognitive illusions conceal the statistical-probabilistic functioning of the systems. Their simulated reflexivity is based on classical rhetorical devices. In our paper “Metacognitive Illusions – Between Rhetorical Strategies and Rhetorical AI Literacy”, we systematize such devices and examine the opportunities and challenges of metacognitive illusions for communication practice. The aim is both to provide an innovative description of the interaction between users and text-generating systems from a rhetorical point of view and to outline the observation and production competencies that these systems seem to require.



Talking machines, not saying anything, shaping communication: Reflecting on “artificial intelligence” and fake-human/human interactions

Annekatrin Bock

Universität Vechta, Germany

Against the background of the rapid evolution of large language models (LLMs), and drawing on critical data studies, this contribution weaves together actor-centered approaches with research on technology-driven social transformations. Asking how chatbots can reveal and reconstruct self-evident aspects of interpersonal communication, the paper methodologically employs a walkthrough approach to critically analyze the interactions between a user persona and the MyAI chatbot within the Snapchat app. With these findings, the research 1) problematizes what we understand about the nature of interpersonal communication in conversations with machines; 2) addresses questions for communication science, such as how human communication will evolve in a machine-mediated future; and 3) reflects on the often invisible impact of technology on communication. The paper aims to contribute thoughtfully to the ongoing research into the complex, often hidden power dynamics within digital communication and to actively shape future research agendas.



Are Bots More Biased Than Humans? Assessing Gender Stereotypes of Warmth and Competence in AI-Generated Ratings, Stories, and Images

Alexandra Wölfle, Desiree Schmuck

Universität Wien, Austria

Generative artificial intelligence is increasingly shaping our everyday lives, offering considerable advantages but also posing risks such as bias. Based on the stereotype content model, we explore gender stereotypes of warmth and competence in AI-generated compared to human-generated content. Drawing on ratings and stories created by ChatGPT-4o-mini, images generated by DALL·E 3, ratings and stories collected through a survey experiment, and a corpus of Google Images, we conducted a set of preregistered automated content analyses. Findings showed that women are portrayed as warmer yet less competent than men in AI-generated ratings and images, and that these stereotypes are amplified compared to human-generated content. In stories, however, women are described as warmer and more competent than men. By showing that generative AI not only reproduces but amplifies gender stereotypes, our findings have crucial implications for policymakers, developers, and users of generative AI alike.



Social Bots in Online Climate Change Discourse: Evidence from COP27 Tweets

Junyi Han, Sonja Utz

Leibniz-Institut für Wissensmedien, Germany

Social media platforms have become one of the most important venues for public discussion about climate change. The emerging scholarship on online climate change discourse tends to depict it as a plural sphere encompassing scientific professionals, politicians, journalists, stakeholders, and laypeople. According to the few empirical observations available so far, non-human actors, namely social bots, are also active in this sphere. Social bot research, consisting mainly of political case studies, has generally portrayed social bots as malicious actors responsible for spreading misinformation and online propaganda and for distorting online public opinion. Yet the role social bots play in online climate change discourse remains unclear. We therefore examined social bot tweets in online climate change discourse to characterize social bots’ activity in comparison with human accounts.