Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session 2.3: AI in Education and Learning Environments
Time: Thursday, 11/Sept/2025, 10:30am - 12:00pm

Session Chair: Bianca Nowak
Location: LK052


Presentations

Better Together? Investigating Willingness to Delegate Writing Tasks to AI

Teresa Luther1, Joachim Kimmerle1,2

1Leibniz-Institut für Wissensmedien, Germany; 2Department of Psychology, Eberhard Karls University Tübingen, Germany

The widespread adoption of artificial intelligence (AI) is reshaping human-AI collaboration, with AI systems increasingly augmenting human capabilities by automating tasks and optimizing workflows (Langer & Landers, 2019). AI delegation, as a distinct aspect of human-AI collaboration, has the potential to reduce human effort on time-consuming tasks and improve overall efficiency (Westphal et al., 2024). A key question is which factors influence the likelihood that people delegate tasks to AI. Prior research highlights user-related factors (AI literacy, Pinski et al., 2023; trust in AI, Lubars & Tan, 2019) and context-related factors (subjective vs. objective tasks, Castelo et al., 2019). However, most existing studies have examined delegation across various tasks rather than focusing on one specific task type. Considering recent meta-analytic evidence of performance gains from human-AI synergy in content-creation tasks (Vaccaro et al., 2024), and given the increasing use of AI for writing, understanding what influences people’s decisions to delegate to AI in this context is crucial for effective human-AI collaboration. Our study therefore investigated (a) user-related factors, including AI experience, knowledge about AI, and attitudes towards AI; (b) perceptions of AI, including trustworthiness, anthropomorphism, and intelligence; and (c) task-related factors, including type of task, perceived difficulty, and trust in AI’s capabilities to perform the task, as determinants of preferences for delegating writing tasks to AI. The study was conducted online with a US-based sample (N = 1007). Participants answered questionnaires and were presented with eight writing tasks covering personal and professional contexts. For each task, they indicated their preferred level of AI assistance on a four-point scale adapted from Lubars and Tan (2019). The tasks were presented in randomized order. Multiple linear regressions were performed to identify significant predictors of willingness to delegate the tasks to AI.
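As a rough illustration of the analysis named above (a sketch, not the authors' code), a pooled multiple linear regression of delegation preference on the user-, perception-, and task-related predictors could be specified with statsmodels as follows; all column names and the data file are hypothetical placeholders.

```python
# Illustrative sketch only, not the authors' code: regressing delegation
# preference on the user-, perception-, and task-related predictors named
# in the abstract. All column names and the file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("delegation_study.csv")  # hypothetical data file

# Pooled model across the eight writing tasks, with task type as a factor.
model = smf.ols(
    "delegation_preference ~ ai_experience + ai_knowledge + ai_attitude"
    " + trustworthiness + anthropomorphism + intelligence"
    " + C(task_type) + perceived_difficulty + task_trust",
    data=df,
).fit()
print(model.summary())
```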



A Longitudinal Study on Evolving Attitudes, Perceptions, and Knowledge Regarding AI

Angelica Lermann Henestrosa, Gerrit Anders

Leibniz-Institut für Wissensmedien (IWM), Germany

With the increasing integration of AI-supported assistants, a growing number of individuals have access to generative AI. This technology is becoming embedded in various applications and permeating different aspects of daily life, often unnoticed. Therefore, investigating interactions with this technology, particularly people’s attitudes and perceptions, is essential in light of its unique strengths and limitations, as well as critical misconceptions (Lermann Henestrosa & Kimmerle, 2024).

A crucial question is how such attitudes and perceptions evolve and whether exposure to information leads individuals to adopt more extreme positions (Sunstein, 2002). Furthermore, as people use these tools for science-related information searches (Greussing et al., 2025) and have a basic trust in the output (Lermann Henestrosa et al., 2023), which in turn predicts usage intention (Potinteu et al., 2023), the importance of understanding the usage of AI tools becomes evident.

In a longitudinal study with six planned waves, starting in September 2024 and continuing until July 2025 at two-month intervals, we investigate how trust, attitudes, risk-opportunity perceptions (ROP), and knowledge regarding AI tools evolve over time. Specifically, we hypothesize that attitudes toward AI and ROP will polarize over time – that is, initially positive values will become more positive and initially negative values more negative – reflecting a process of belief polarization (Taber & Lodge, 2006). Additionally, we hypothesize that objective knowledge about AI correlates only slightly with usage but moderately with subjective knowledge. Our analyses include linear regression models, Pearson correlations, and latent growth curve modeling to examine changes over time and relationships among variables.
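A minimal Python sketch of the correlational part of these hypotheses is given below (the latent growth curve models would require dedicated SEM tooling); the variable names and data file are hypothetical, not taken from the study materials.

```python
# Illustrative sketch only: Pearson correlations for the hypothesized pattern
# that objective AI knowledge relates weakly to usage but moderately to
# subjective knowledge. Column names and the file are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("wave1.csv")  # hypothetical wave-1 data

r_usage, p_usage = pearsonr(df["objective_knowledge"], df["ai_usage"])
r_subj, p_subj = pearsonr(df["objective_knowledge"], df["subjective_knowledge"])

print(f"objective knowledge x usage:      r = {r_usage:.2f}, p = {p_usage:.3f}")
print(f"objective x subjective knowledge: r = {r_subj:.2f}, p = {p_subj:.3f}")
```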

In the first wave, 1,000 US participants were invited to complete an online questionnaire, with the goal of retaining 400 participants through the sixth wave. This project is part of a comprehensive, collaborative effort that captures several other measures to monitor human-AI interactions over the long term during a critical era of rapidly evolving technologies.



"Digital Companions?" – University Students' Heuristic Perceptions of Chatbots

Eileen Plagge, Regina Jucks

University of Münster, Germany

Theoretical background

Higher education must establish ethical guidelines for AI use while refining integration strategies to optimize learning. Research shows that chatbots significantly impact student outcomes, especially when designed with human-like avatars and emotional intelligence (Wu & Yu, 2024). Additionally, factors such as politeness or benevolence might influence these outcomes (Brummernhenrich et al., 2025; Jucks et al., 2018). Core human traits like perceived warmth (Asch, 1946) further shape interactions with AI, affecting perception and learning success. Understanding these dynamics is crucial for designing chatbots that enhance education while maintaining ethical and pedagogical integrity.

Research Questions

We examined how a chatbot is perceived as "warm" or "cold" in the sense of Asch (1946) and how its description, either anthropomorphic or technical, affects its evaluation. We investigated this in two studies.

We expect university students to perceive differences in the textual representations of a chatbot as a communication partner, transferring human communication phenomena to chatbots. Specifically, we assume that a chatbot presented with kind cues or depicted as more anthropomorphic (Study 1 & 2) will be perceived as warmer (Study 1 & 2) and either more benevolent (Study 1) or more human (Study 2). Conversely, when presented in a more technical manner (Study 1 & 2) without warm cues (Study 1) or with cold cues (Study 2), it is likely to be perceived as less warm (Study 1 & 2) and either less benevolent (Study 1) or less human (Study 2).

Methodology

The research comprised two 2×2 experiments examining chatbot perception. Study 1 varied description type (anthropomorphic vs. technical) and kindness cue (present vs. absent) among German-speaking university students. Study 2 refined this design by manipulating language style as in Study 1 and a cue to emotional background (warm vs. cold), using open-ended questions to reduce priming. Methodologically, both studies were otherwise largely identical.
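As an illustration of how such a 2×2 between-subjects design is typically analyzed (a sketch under assumed variable names, not the authors' analysis plan), a two-way ANOVA on perceived warmth could look like this:

```python
# Illustrative sketch only: a 2x2 between-subjects ANOVA on perceived warmth
# for the Study 1 design (description type x kindness cue). Column names and
# the data file are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("study1.csv")  # hypothetical data file

# Main effects of both manipulated factors plus their interaction.
model = smf.ols("perceived_warmth ~ C(description) * C(kindness_cue)", data=df).fit()
print(anova_lm(model, typ=2))
```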



ChatGPT in the Classroom: Bridging or Widening the Digital Divide?

Ceciley Zhang, Laurent Wang, Ronald Rice

University of California, Santa Barbara, United States of America

Theoretical Background:

The rapid diffusion of ChatGPT in educational settings has various implications. One question is whether existing social inequalities manifest in this new technological context. Digital divide research argues that inequalities manifest in physical and material access to devices (i.e., the first-level divide), in skills, knowledge, and use (i.e., the second-level divide), and in tangible outcomes (i.e., the third-level divide). Underlying this literature are two theoretical perspectives. The normalization model argues that disparities in access will eventually disappear as device ownership saturates. Conversely, the stratification model suggests that technology fundamentally amplifies underlying human and institutional intent and capacity; as such, socioeconomically advantaged groups will benefit from technology to a larger extent.

Drawing from digital divide theory, this study seeks to investigate inequalities in U.S. college students’ attitudes about ChatGPT for educational purposes. Specifically, we aim to understand whether socioeconomic and demographic differences shape perceived acceptability and perceived benefits of ChatGPT use for educational activities and whether previous adoption experience may amplify or weaken digital inequalities in ChatGPT attitudes. Findings extend digital divide research to a new context, provide guidance for educators, and inform policymakers and stakeholders.

Research Question:

RQ1. Among U.S. college student ChatGPT users, how are SES, gender, and race/ethnicity associated with their (a) perceived acceptability and (b) perceived benefits of ChatGPT use for educational activities?

RQ2. How does ChatGPT adoption moderate the associations proposed in RQ1?

Methodological Approach:

We collected survey data from U.S. college student adopters and non-adopters of ChatGPT in 2023. The analyses proceeded in two phases: measurement reliability and validity (N = 360) and structural equation modeling (N = 1,267). Measurement models were tested using exploratory and confirmatory factor analyses, and structural models were tested using structural equation modeling (SEM). Moderation effects were tested using multi-group SEM.
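As a rough stand-in for the multi-group SEM step (a sketch, not the authors' model), one could fit the same measurement and structural model separately for adopters and non-adopters and compare the estimates; formal invariance testing needs dedicated multi-group machinery. The semopy library is one Python option, and all variable names below are hypothetical placeholders.

```python
# Illustrative sketch only: fitting one structural model per adoption group
# as a rough stand-in for multi-group SEM. Variable names are hypothetical.
import pandas as pd
from semopy import Model

MODEL_DESC = """
# measurement part (hypothetical indicators)
acceptability =~ acc1 + acc2 + acc3
benefits =~ ben1 + ben2 + ben3
# structural part: SES and demographics predicting ChatGPT attitudes
acceptability ~ ses + gender + race_ethnicity
benefits ~ ses + gender + race_ethnicity
"""

df = pd.read_csv("chatgpt_survey.csv")  # hypothetical data file

for label, group in df.groupby("adopter"):  # e.g., 0 = non-adopter, 1 = adopter
    model = Model(MODEL_DESC)
    model.fit(group)
    print(f"--- adopter = {label} ---")
    print(model.inspect())
```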



Talking to AI: A Qualitative Investigation into Children’s Use of LLMs

Antonia Rosada1,2, Nicole Krämer1,2

1Social Psychology: Media and Communication, University of Duisburg-Essen, Germany; 2Research Center Trustworthy Data Science and Security, University of Duisburg-Essen, Germany

The increasing use of Large Language Models (LLMs) is shaping various aspects of daily life. One in six interactions with LLMs appears to involve learning-related activities, from information retrieval to AI-driven tutoring. With these technologies becoming more common in educational contexts, understanding how children perceive, evaluate, and engage with LLMs is crucial.

This study focuses on the roles of AI literacy, trust, and anthropomorphism in children’s interactions with LLMs. AI literacy is conceptualized here not only as the ability to operate LLMs but also as an understanding of their functioning and potential benefits and risks—an area that remains under-researched among young users. We therefore aim to explore how children comprehend and reflect on the nature of LLMs.

The effect of anthropomorphism is also examined by investigating children’s perception of social cues. While attributing human-like qualities to LLMs may increase acceptance, it can also lead to misconceptions about the LLMs’ actual capabilities.

Additionally, trust is a crucial determinant in children’s interaction with LLMs due to the novelty and associated uncertainties of AI technologies, especially in educational settings where reliable and safe information exchange is paramount. Trust is investigated through its dimensions of ability, integrity, and benevolence, with the goal of identifying which facets influence children’s trust in the context of education.

Methodologically, we conduct semi-structured interviews with elementary school children (N = 25). Open-ended questions allow for a comprehensive understanding of the children’s perspectives. During each session, children interact directly with an LLM, enabling real-time observation of their spontaneous behaviors, reflections, and overall experiences.