Conference Agenda

Overview and details of the sessions of this conference.

 
 
Session Overview
Session 5.2: AI and Society
Time: Friday, 12/Sept/2025, 9:00am - 10:30am

Session Chair: Julian BORNEMEIER
Location: LK051


Presentations

Conversational AI as extended mind - does it matter whether AI is perceived as tool or actor?

Sonja UTZ

Leibniz-Institut für Wissensmedien, Germany

More and more people use conversational AI such as ChatGPT to obtain information. Previously, people used search engines for this purpose, and research has shown that they misattribute the internet's knowledge as their own, reporting higher cognitive self-esteem than people who do not have access to a search engine when answering questions (Ward, 2021). Follow-up research showed that this overestimation of cognitive self-esteem does not occur when people use a conversational AI (Hamilton et al., 2024), likely because it is clearer that the knowledge comes from another entity. We extend this work by examining whether it also matters whether people perceive conversational AI as a tool or as an actor. Prior work was experimental; we additionally explore whether there are within-person effects of AI use on cognitive self-esteem over time. We conducted a longitudinal study (6 waves, 2 months between waves; September 2024 - July 2025, n at Wave 1 = 1,008) and will test the following hypotheses:

H1: People who perceive conversational AI as a social actor show lower cognitive self-esteem than people who perceive AI as a tool.

H2: The effect postulated in H1 is stronger the more people interact with the AI via voice.

H3: People who perceive AI as a social actor (vs. tool) at time t show lower cognitive self-esteem at time t+1.

Planned analysis (H1 & H2): multilevel regression (time points nested in participants) with cognitive self-esteem as the dependent variable and, as predictors, wave, perception as tool vs. social actor (H1), modality, the interaction between perception and modality (H2), and the interactions of these predictors with time.

H3: the same multilevel model, with lagged perception as tool vs. social actor added as a predictor.

Cognitive self-esteem is measured with the scale by Ward (2013); perception as tool vs. actor with a single self-developed item.
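To make the planned models concrete, the following sketch shows how they could be specified in Python with statsmodels (mixed-effects models with random intercepts per participant); it is a minimal illustration, and the file and column names (waves_long.csv, cse, actor, voice, wave, pid) are hypothetical placeholders rather than the study's actual variables.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant (pid) and wave, with
# cognitive self-esteem (cse), perception as tool (0) vs. actor (1) (actor),
# and share of voice interaction (voice).
df = pd.read_csv("waves_long.csv")

# H1 & H2: multilevel regression with random intercepts for participants;
# predictors are wave, perception, modality, the perception x modality
# interaction, and the interactions of both predictors with time (wave).
m12 = smf.mixedlm("cse ~ wave*actor + wave*voice + actor*voice",
                  data=df, groups="pid").fit()
print(m12.summary())

# H3: the same model, with the perception reported at the previous wave
# (lagged predictor) added.
df = df.sort_values(["pid", "wave"])
df["actor_lag"] = df.groupby("pid")["actor"].shift(1)
m3 = smf.mixedlm("cse ~ wave*actor + wave*voice + actor*voice + actor_lag",
                 data=df.dropna(subset=["actor_lag"]), groups="pid").fit()
print(m3.summary())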



AI vs. Search Engines: How Generative Chatbots Change Our Interaction with Information

Jakob KAISER1, Carolin KAISER1, Rene SCHALLNER1, Sabrina SCHNEIDER2

1Nuremberg Institute for Market Decisions, Germany; 2Vorarlberg University of Applied Sciences, Austria

Finding information online is an essential part of learning and everyday life. For a long time, keyword-based search engines like Google have dominated online searches. However, generative AI chatbots such as ChatGPT are emerging as increasingly popular alternatives for information retrieval. Despite their rapid adoption, it remains unclear how this new search paradigm influences human engagement with online sources of information. To investigate this, we conducted a large-scale experiment (N = 1,526) with a diverse U.S. sample, using custom-built software to track participants’ behaviour during a practical online search task. Participants were randomly assigned to use either Google (as the currently prevalent search engine) or ChatGPT (as the currently prevalent AI chatbot) as their primary search tool. They were tasked with finding either a holiday destination or a smartphone that met specific criteria. Importantly, participants were allowed to visit additional websites beyond their assigned search tool to gather the necessary information. After completing the task, participants rated their satisfaction with the assigned search tool and completed standardised personality assessments, including the Big Five personality traits. This allowed us to address the following key questions: (1) Do AI chatbots, compared to traditional search engines, enhance search effectiveness and/or efficiency? (2) Do AI chatbots reduce engagement with primary sources, such as other websites, when searching for information? (3) Does the preference for AI chatbots versus traditional search engines depend on users’ personality traits? By answering these questions through a large-scale empirical experiment, we contribute to a deeper understanding of the transformative effects of AI technologies on online search behaviour and media use.



Voices of Science: An Experimental Study on the Influence of Human vs AI Authorship on Trustworthiness, Credibility, and Knowledge of Podcast Listeners

Bianca Nowak1,2, Antonia Rosada1,3, Gina Goebel3, Nicole Krämer1,3

1Research Center for Trustworthy Data Science and Security, University of Duisburg-Essen, Germany; 2Human Understanding of Algorithms and Machines, University of Duisburg-Essen, Germany; 3Social Psychology: Media and Communication, University of Duisburg-Essen, Germany

With virtually all information now instantly accessible online, individuals can quickly familiarise themselves with complex topics. However, this ease of access raises questions about how potentially simplified information shapes recipients' understanding of scientific issues. AI tools like Google's Illuminate automatically transform research papers into engaging podcasts. While such tools have undeniable potential for science communication, they also add a new layer of complexity to the challenge of identifying legitimate sources of information and, consequently, to their potential impact on recipients. Informed by the conceptual framework of Bartsch et al. (2025) on epistemic authority (i.e., the acceptance of others' knowledge), we examine how different knowledge brokers in podcasts are perceived and how they subsequently affect recipients' cognitions. Specifically, the authority of traditionally legitimate sources, such as science and journalism, is increasingly negotiated in the digital media sphere by the public and transferred to new actors, such as influencers and - now - generative AI. Unlike traditional sources, both operate without institutional backing yet play an increasing role in shaping public understanding of science. Using a between-subjects design (N = 1,200), we compare four types of sources (AI, scientists, journalists, influencers) discussing various scientific topics in podcasts. In doing so, we aim to examine the effects of these sources on recipients' perceptions of epistemic empathy (i.e., the ability to adjust to the listener) and trustworthiness (comprising expertise, integrity, and benevolence). Further, we investigate their potentially mediating role for information credibility alongside subjective and objective knowledge changes, to explore how different knowledge brokers in the evolving media landscape affect our perceptions of science. Thus, this study examines the influence of authorship on trust in various actors, provides crucial insights into the role of epistemic authority in the digital sphere, and informs the responsibilities associated with AI usage in science communication for researchers and the public.



Experimental Investigation of Socio-Cognitive Processes During Action Coordination in Hybrid Intelligence Teams

Noëlle BENDER1,2, Nicole Krämer1,3

1Social Psychology: Media and Communication, University of Duisburg-Essen; 2WisPerMed; 3Research Center for Trustworthy Data Science and Security, University of Duisburg-Essen, Germany

Recent advances in Generative Artificial Intelligence (GenAI) have introduced a novel, less familiar partner into collaborative decision-making, raising questions about how humans develop mental models of artificial agents. These collaborations are described as Hybrid Intelligence: a team in which both partners contribute, but the human stays in control. In humans, engaging in joint action is based on automatic co-representation of the partner's perspective (Sebanz & Knoblich, 2021). This socio-cognitive process during the action coordination phase forms the foundation for more sophisticated processes such as mentalizing (the ability to infer others' mental states). Whether disembodied AI similarly triggers these early processes during action coordination remains unclear. Further, the influence of task interactivity (joint goal vs. competitive) on co-representation, on the perception of the partner's errors, and on attributed agency in hybrid intelligence teams has yet to be explored.

We investigate these questions with a factorial between-subjects design that manipulates the believed entity of the partner (human vs. AI) and task interactivity (joint vs. competitive). Our experiment uses the Joint Simon task to index whether co-representation occurs when sharing a reaction-time-based task. This is combined with an Intentional Binding task (an index of implicit agency) and a measure of post-error slowing following the believed partner's errors (an index of social error monitoring).
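To illustrate the behavioural indices described above, the sketch below shows one plausible way they could be computed from trial-level data in Python; the file and column names (trials.csv, go_trial, correct, compatible, rt, actual_interval_ms, judged_interval_ms, partner_error) are hypothetical and do not represent the authors' actual pipeline.

import pandas as pd

trials = pd.read_csv("trials.csv")  # hypothetical: one row per trial, in presentation order

# Joint Simon effect: reaction-time cost on spatially incompatible vs. compatible
# go trials, taken here as an index of co-representation of the partner's action.
go = trials[trials["go_trial"] & trials["correct"]]
simon_effect = go.loc[~go["compatible"], "rt"].mean() - go.loc[go["compatible"], "rt"].mean()

# Intentional binding: compression of the judged action-outcome interval relative
# to the actual interval, as an implicit index of agency.
binding = (trials["actual_interval_ms"] - trials["judged_interval_ms"]).mean()

# Post-error slowing: own reaction time after trials on which the believed partner
# erred, minus reaction time after the partner's correct trials (social error monitoring).
after_error = trials["partner_error"].shift(1, fill_value=False).astype(bool)
pes = trials.loc[after_error, "rt"].mean() - trials.loc[~after_error, "rt"].mean()

print(simon_effect, binding, pes)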

With this research, we aim to gain insights into fundamental differences and commonalities of the socio-cognitive mechanisms of action coordination in hybrid intelligence teams. Our investigation intends to extend theories of joint action, with wide-ranging implications for the design of Hybrid Intelligence teams, specifically in scenarios in which the human needs to stay in control, such as medical decision-making or autonomous driving. These findings can be particularly informative for interventions that target the development of accurate mental models of disembodied AI.