Session 1: Institutional and Societal Perspectives on the Machine Actor
Time:
Tuesday, 16/Sept/2025:
10:00am - 11:15am
Session Chair: Daniela Mahl
Location: BAR/0I88/U
Barkhausen-Bau Haus D, Georg-Schumann-Str. 11, first floor
Presentations
The materiality of HMC: Imagining sustainable AI infrastructures
Anne Mollen, Sigrid Kannengießer, Anastasia Glawatzki
Universität Münster, Germany
We propose a materiality perspective on the study of human–AI communication to understand how the emergence of machines as communicators matters in the context of the current sustainability crisis. An infrastructural perspective conceptualizes how relevant actors and their practices, as well as the technology’s materiality, interrelate in shaping the sustainability of generative AI. Based on a qualitative content analysis of documents and websites, we investigated the socio-technical imaginaries put forward by organizations that develop generative AI. The results show that sustainability is rarely mentioned explicitly. When it is mentioned, it is typically imagined as a means of supporting resource savings, without consideration of the sustainability of the AI’s own infrastructures. Organizations tend to imagine efficient and open AI systems as technological progress while neglecting the environmental and social impacts of their broader infrastructures. This narrow focus limits the potential for sustainable development of generative AI, underscoring the need for a more comprehensive approach.
Value Frictions: Competing Priorities in the Public and Private Governance of Generative Artificial Intelligence
Rebecca Scharlach1, CJ Reynolds2, Vasilisa Kuznetsova1, Christian Katzenbach1, Blake Hallinan2
1University of Bremen, Germany; 2Hebrew University of Jerusalem, Israel
With the sudden rise of digital communication with machines, technology companies increasingly shape everyday communication and public debate. Although researchers are only beginning to explore the impacts of generative AI, concerns are already mounting about its effects on communication and culture. Regulatory bodies such as the European Commission have raced to develop policy frameworks for the governance of (generative) AI. However, the alignment between the priorities set by regulators and the principles that guide commercial tech development remains unclear. Previous research has examined how platform companies invoke values to mediate conflicts and shape power structures, yet little is known about how AI companies navigate similar challenges, even though some of these players are the very same companies that dominate the social media landscape. This mixed-methods study explores value frictions within and between public and private AI governance frameworks to provide a foundation for informed debate over which principles should govern the systems underpinning communication and creativity.
Negotiating norms of ChatGPT use: a cyclical perspective on the expanding appropriation of a generative AI chatbot among university students
Thilo von Pape2, Angèle Devantéry2, Veronika Karnowski1, Étienne Yerly2
1TU Chemnitz, Germany; 2Université de Fribourg, Switzerland
This study examines how university students negotiate social norms around ChatGPT use through a cyclical appropriation lens, applying Wirth et al.’s Mobile Phone Appropriation model to interviews with 18 students. We identified two distinct metacommunication patterns: peer discussions focused on practical AI applications (e.g., prompt optimization and academic shortcuts), while authority figures and media emphasized risks like academic dishonesty and existential AI threats.
The analysis reveals how initial academic uses form a foundational appropriation cycle that expands into professional, organizational, and social contexts. However, conflicting norms from different social spheres (permissive peers vs. restrictive institutions) and from media narratives hamper stable appropriation patterns. The findings suggest that transparent institutional communication about generative AI could facilitate more coherent societal appropriation, as early academic experiences significantly influence broader adoption trajectories across life domains.
Hesitation to Report: Understanding Why Bystanders and Victims Refrain from Reporting Offensive Content
Steliyana Doseva1, Hannah Schmid-Petri2
1Bayerisches Forschungsinstitut für Digitale Transformation (bidt), Germany; 2Universität Passau
Offensive content is deeply embedded in online debates, particularly on social media. Previous research has shown that effectively removing such content fosters more respectful interactions among discussion participants. Our study examines whether bystanders and victims of offensive content online differ in their reporting behavior and which factors contribute to their reluctance to report. To address these questions, we conducted a quantitative online survey in Germany (N = 5000), focusing on clear cases of offensive content that is illegal under German law. We found that victims report offensive content more frequently than bystanders. When analyzing the reasons for non-reporting, we found no substantial differences between the two groups. The most common reason cited by both was the perception that reporting would make no difference. Consistent with previous research, a lack of trust in platforms and authorities also emerged as a key barrier to reporting.
Building a Better World Through AI Journalism: Global Solutions from Gen Z
Aurora Alliegro, Elena Campo, Alexis von Mirbach
LMU, Germany
In a time of polycrisis, the global system is destabilized, plunging into a harmful state of disequilibrium. This instability underscores an urgent need for innovation in the field of communication. It is essential to consider the role of emerging technologies, specifically artificial intelligence (AI), and the voices of those at the forefront of these changes. Generation Z, the cohort whose future is most immediately affected, has made clear demands for their perspectives to be acknowledged. The present research draws on 105 interviews with young people from 15 countries across three continents. It investigates how Gen Z perceives the impact of AI on journalism. This Global Public Scholarship project, designed around qualitative interviews with Gen Z, aims to gather ideas, proposals, and actionable or utopian solutions for leveraging AI journalism to shape a better world.