Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session
1.3: Beliefs and Attitudes Towards AI
Time:
Thursday, 11/Sept/2025:
8:45am - 10:15am

Session Chair: Jakob KAISER
Location: LK052


Presentations

Depolarization through AI: Do effects of active listening and conversational receptiveness translate to AI conversations?

Timon Manfred Joachim Hruschka, Markus Appel

Julius-Maximilians-Universität Würzburg, Germany

Across many democracies, scholars have raised concerns about affective and attitudinal polarization (e.g., Finkel et al., 2020; Iyengar et al., 2019; Voelkel et al., 2022). Increasing polarization is often viewed as a threat to democracy. At the same time, the rise of artificial intelligence (AI) has amplified concerns about increasing misinformation and polarization. We report the results of two experiments (N = 1200) that take a more positive angle by examining the use of AI to reduce attitudinal and affective polarization: In these experiments, participants who reported extreme views on a contentious political topic in the U.S. interacted with one of four different AI chatbots on this topic. In three conditions of the experiment, participants were presented with counterarguments to their view by the AI chatbots. The chatbots varied in how they presented those counterarguments: Participants interacted with a chatbot that practiced conversational receptiveness (Yeomans et al., 2019), a chatbot that combined conversational receptiveness with active listening (Itzchakov et al., 2023), or a chatbot that presented counterarguments unreceptively. A fourth group interacted with a chatbot about a different, unrelated topic and served as a control group. We measured attitude extremity on four different political topics in the U.S. (budget deficit, U.S. involvement regarding the war in Ukraine, gun regulation, energy policy) as well as affective polarization before and after the AI conversations. We additionally investigated whether experiencing AI conversational receptiveness would carry over to human-to-human conversations. Intellectual humility and attitudes towards AI were investigated as moderators of the effects; positivity resonance, intellectual humility, and perceived AI intellectual humility were investigated as mediators. We interpret the results in light of the computers are social actors paradigm (CASA; Nass et al., 1994) as well as the Human-AI Interaction framework (HAII; Sundar, 2020).
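To make the design concrete, the following is a minimal illustrative sketch (not the authors' analysis plan) of how pre/post attitude-extremity change across the four chatbot conditions could be compared; all variable names, scale ranges, cell sizes, and effect sizes are assumptions.

```python
# Minimal sketch: simulate a pre/post attitude-extremity measure in four chatbot
# conditions and compare the change scores with a one-way ANOVA.
# All names, scale ranges, and effect sizes are assumptions for illustration only.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(42)
conditions = ["receptive", "receptive_active_listening", "unreceptive", "control"]
assumed_shift = {"receptive": -0.6, "receptive_active_listening": -0.8,
                 "unreceptive": -0.2, "control": 0.0}  # hypothetical depolarization effects
n_per_cell = 150  # hypothetical cell size

rows = []
for cond in conditions:
    pre = rng.normal(5.5, 0.8, n_per_cell)   # extremity on an assumed 1-7 scale
    post = pre + assumed_shift[cond] + rng.normal(0, 0.7, n_per_cell)
    rows.append(pd.DataFrame({"condition": cond, "pre": pre, "post": post}))

df = pd.concat(rows, ignore_index=True)
df["change"] = df["post"] - df["pre"]  # negative values indicate depolarization

f_stat, p_val = stats.f_oneway(*[g["change"].to_numpy() for _, g in df.groupby("condition")])
print(df.groupby("condition")["change"].mean().round(2))
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")
```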



How Values and Attitudes Shape Evaluations of Biased AI-Generated Content

Julian Bornemeier1,2, Jan-Philipp Stein1

1Department of Media Psychology, TU Chemnitz; 2Department of Cognitive Psychology and Human Factors, TU Chemnitz

Building on media effects theories and dual-process models of perception, this study examines how de facto biases in AI-generated imagery are perceived and evaluated by audiences, with particular interest in pre-existing values and attitudes. Focusing on the context of gender equality (and related sexist biases), our research investigates whether the evaluation of AI outputs with uneven gender distributions is solely determined by their contents and the presumed objectivity of AI, or whether viewers' perceptions are also influenced by their predispositions toward inequality, that is, their social dominance orientation and level of ambivalent sexism. In an additional theoretical contribution, we further scrutinize to what extent people's initial perceptions (in terms of basic bias recognition) align with their subsequent, more contemplative evaluation of AI fairness. To address our research propositions, a between-subjects online experiment was conducted. A total of 550 participants were randomly assigned to one of four conditions that varied both the gender distribution (balanced versus male-dominated) and the supposed creator (AI versus human) of STEM marketing images. Each participant viewed a series of ten images depicting individuals in STEM occupations; in the balanced condition, five images showed male workers and five showed female workers, while in the male-dominated condition, eight images featured male individuals. In addition, the cover story was manipulated so that participants were informed the images were either created by a generative AI system or captured by a human photographer. After viewing all images, participants rated the perceived bias with regard to gender representation and the fairness of the viewed imagery. Afterwards, established scales were used to measure social dominance orientation and ambivalent sexism.

The study extends existing research on algorithmic bias and media effects, highlighting that the perception and evaluation of AI outputs are shaped not only by content characteristics, but also by the recipient's personal ideology.
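As a purely illustrative sketch of such a 2 × 2 design with a continuous moderator (not the authors' analysis code), the following simulates data and fits an OLS model with interaction terms; all effect sizes, scales, and the assumed response patterns are invented.

```python
# Minimal sketch: simulate the 2 x 2 between-subjects design (gender distribution
# x presumed creator) with social dominance orientation (SDO) as a continuous
# moderator, then fit an OLS model with interaction terms.
# Effect sizes and scales are assumptions, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 550  # total sample size reported in the abstract

df = pd.DataFrame({
    "distribution": rng.choice(["balanced", "male_dominated"], n),
    "creator": rng.choice(["AI", "human"], n),
    "sdo": rng.normal(0, 1, n),  # standardized moderator score (assumed)
})

male_dom = (df["distribution"] == "male_dominated")
# Assumed pattern: male-dominated image sets are rated as more biased, the "AI"
# label dampens this (machine heuristic), and higher SDO attenuates bias recognition.
df["perceived_bias"] = (
    3.0
    + 1.2 * male_dom
    - 0.4 * (male_dom & (df["creator"] == "AI"))
    - 0.3 * df["sdo"] * male_dom
    + rng.normal(0, 0.8, n)
)

model = smf.ols("perceived_bias ~ distribution * creator + distribution * sdo", data=df).fit()
print(model.summary().tables[1])
```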



Users' awareness, attitudes, and capabilities in addressing algorithmic bias in generative AI: Insights from interviews with three user groups

Tanja Veronika MESSINGSCHLAGER, Markus Appel

Universität Würzburg, Germany

In recent years, generative AI has transformed creative media production, presenting both opportunities and challenges for individuals and societies. Studies have pointed towards biases in AI-generated content in which women or minorities are underrepresented or portrayed in a stereotypical manner. Despite these biases, AI-generated content might be perceived as less biased due to machine heuristics. We focus on the potential of AI users to counter bias in AI-generated output during the process of co-creation. Awareness of biases allows users to evaluate whether outputs are affected by bias and to reflect on ways to diversify the content if deemed necessary. However, little is known about users' awareness of such biases and their skills in detecting and adequately reacting to bias in AI-generated content. Hence, we plan to assess and document users' perspectives on the issue of algorithmic bias.

We will conduct semi-structured interviews with three user groups with different levels of experience with text- and/or image-generating AI: professional/heavy users (usage: at least once a week), experienced users (usage: at least once every month), and non-users (no active use but likely exposure to AI-generated content). We will interview 12 users per group (36 interviews in total), transcribe the interviews, and code them. The results will be analyzed using qualitative content analysis (Kuckartz, 2019; Mayring, 2021). The interviews will focus on users' trust or mistrust that generative AI and its products are unbiased. To account for the possibility that users are unaware of potential bias in generative AI, the questions will start broadly and gradually lead to the topic of algorithmic bias. The interviews will also cover measures users take, or non-users believe could be taken, to counter algorithmic bias in AI-generated text and images. In addition, we will ask about participants' perception of roles and responsibilities when interacting with AI during the creation process.



Not what, but who: The role of artificial sources in reducing conspiracy beliefs

Paul Ballot1,2, Philipp Schmid1

1Centre for Language Studies, Radboud University, Nijmegen, The Netherlands; 2iHub, Radboud University, Nijmegen, The Netherlands

Conspiracy beliefs are highly resistant to change, with most interventions yielding negligible or even adverse outcomes. This persistence is frequently attributed to underlying identity needs. Yet, recent findings on the effectiveness of reducing conspiracy beliefs by debating them with a chatbot challenge the role of such social motivations for holding inaccurate beliefs: Maybe previous interventions simply lacked the level of depth and personalization necessary to trigger belief change – something Large Language Models (LLMs) are particularly well suited for. However, what if the observed effects do not stem from the quality of the message but rather from the non-human nature of the sender? Perhaps, for conspiracy correction, LLMs allow for favourable communication outcomes precisely because they are not seen as human. While computers are often perceived as social actors, they are perceived as less social than humans and therefore evoke weaker social responses. For LLM-driven interventions, this could decrease identity threats triggered by the rebuttal and therefore weaken defense motivations, reactance, and resistance to change. This effect may be further facilitated by how we perceive computational sources: Following machine heuristics, artificial agents are often seen as more neutral, unbiased, and non-judgemental. So, while individuals challenging the conspiracy might be automatically perceived as part of a rival outgroup, artificial agents could instead be categorized as a non-rival, less competitive outgroup and as such induce even less identity threat. This should render them especially effective in reducing conspiracy beliefs. To investigate these claims, we invite participants to debate their individual conspiracy theories with an LLM while manipulating the perceived source ('human' vs 'AI' label). Measuring pre- and post-treatment confidence levels and attitudes towards the source and analysing conversation transcripts, we intend to deepen our understanding of the role of source characteristics. Should artificial sources outperform alleged human sources, this would also challenge the benefits of anthropomorphic design in human-computer interaction.
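For illustration only (not the study's materials or data), the following is a minimal sketch of how the pre- to post-treatment change in conspiracy-belief confidence could be compared between the two source-label conditions; group sizes, the confidence scale, and the assumed effect sizes are invented.

```python
# Minimal sketch: compare the pre-to-post drop in conspiracy-belief confidence
# between the 'AI'-labelled and 'human'-labelled source conditions.
# Group sizes, scale, and effect sizes are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_per_group = 200  # hypothetical cell size

pre_ai = rng.normal(80, 10, n_per_group)                # assumed 0-100 confidence scale
post_ai = pre_ai - rng.normal(12, 8, n_per_group)       # assumed larger drop for the AI label
pre_human = rng.normal(80, 10, n_per_group)
post_human = pre_human - rng.normal(7, 8, n_per_group)  # assumed smaller drop for the human label

change_ai = post_ai - pre_ai
change_human = post_human - pre_human

t, p = stats.ttest_ind(change_ai, change_human)
print(f"mean change (AI label):    {change_ai.mean():.1f}")
print(f"mean change (human label): {change_human.mean():.1f}")
print(f"t = {t:.2f}, p = {p:.4f}")
```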



The impact of AI-generated images in political advertising on credibility, emotions, and political mobilization

Sabine Reich, Stephanie Geise, Anna Ricarda Luther, Michael Linke

Universität Bremen, Germany

Following the dissolution of the German government, political parties face the challenge of campaigning on a tight schedule and budget. Migration is a central issue in the 2025 election, with anti-migration campaigns leveraging fear appeals to shape public attitudes and policy support (Scheller, 2019; Widmann, 2021). The growing use of AI-generated content in this context raises concerns about its psychological and political effects (Reveland, 2025).

Despite their increasing presence in political advertising, the effects of AI-generated images remain underexplored. Visuals, in general, are perceived as more immediate and credible than text (Sundar, 2008). They are particularly effective at eliciting emotional responses (Brader, 2006) and mobilizing voters (Kühne et al., 2011). However, individuals struggle to distinguish AI-generated visuals from real photographs, which raises ethical concerns (Köbis et al., 2021).

Still, AI-generated images possess distinct visual properties that audiences are sensitive to, inducing feelings of uncanniness and negative affect even in naïve users (Eiserbeck et al., 2023; Wu et al., 2024). Labeling such images reduces engagement and credibility (Wittenberg et al., 2024) while psychologically distancing viewers from the content, though it may also erode trust in authentic media (Arango et al., 2023).

This study examines how AI-generated political ads influence message and source evaluations (RQ1), affective responses (RQ2), issue perception (RQ3), and political mobilization (RQ4) compared to traditional photographic ads. Additionally, we investigate whether priming AI-awareness alters these effects and affects trust in authentic media (RQ5). We conduct a 2 (AI-generated vs. original photograph) × 2 (AI-prime vs. no prime) between-subjects survey experiment with a quota sample of ~1,000 German online users. Respondents view a social media feed that includes one post containing their assigned treatment. The pre-registration of our hypotheses and materials is in preparation. Because the stimulus is a fictitious party ad against illegal migration, every participant is debriefed directly after the study.
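A minimal illustrative sketch of the 2 × 2 design described above, analysed as a two-way ANOVA on one hypothetical outcome (credibility); all cell means and noise values are assumptions rather than study results.

```python
# Minimal sketch: the 2 x 2 between-subjects design (image origin x AI-awareness
# prime) analysed as a two-way ANOVA on one hypothetical outcome (credibility).
# Cell means and noise below are assumptions, not study results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(11)
n = 1000  # approximate quota-sample size from the abstract

df = pd.DataFrame({
    "image": rng.choice(["ai_generated", "photograph"], n),
    "prime": rng.choice(["ai_prime", "no_prime"], n),
})

ai_image = (df["image"] == "ai_generated")
# Assumed pattern: AI-generated images are rated as less credible, especially
# when respondents were primed to think about AI-generated content.
df["credibility"] = (
    4.0
    - 0.5 * ai_image
    - 0.3 * (ai_image & (df["prime"] == "ai_prime"))
    + rng.normal(0, 0.9, n)
)

model = smf.ols("credibility ~ image * prime", data=df).fit()
print(anova_lm(model, typ=2))
```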