GeNeMe 2025
Communities in New Media
17 - 19 September 2025 in Dresden, Germany
Conference Agenda
Overview and details of the sessions of this conference.
Session Overview

Session
Digital Education: AI (engl.)
Presentations
ID: 1120 / DigEd - AI: 1
Research Paper
Topics: Track - Digital Education
Keywords: Emotion Recognition, AI Mentoring, Animated Avatars, FER2013, Multimodal Interaction, OpenGL, Responsible AI, Design-Based Research

Emotion-Aware AI Avatars for Personalized Learning Support Using Multimodal Interaction
ScaDS.AI, Technische Universität Dresden, Germany

This contribution presents a human-centered AI mentoring system that combines animated avatars, real-time emotion detection, and dialogically grounded learner support to foster emotionally intelligent interaction in digital education. The system addresses a key challenge in online learning: the absence of socio-emotional feedback loops between learners and educators (Salloum et al., 2025; D’Mello & Graesser, 2013). At its core is a 2D avatar rendered in real time via OpenGL and controlled through a Python-based framework. The avatar adapts facial expressions and gestures based on multimodal emotion recognition using CNNs trained on FER2013. Cues derived from facial and audio signals are used to represent and respond to learner affect, enhancing trust and engagement through social presence. The mentoring process begins with a narrative-based onboarding procedure that captures learners’ experiences and goals across 14 biographical dimensions (Hummel & Donner, 2024). The resulting profile combines formal, informal, and transversal competencies (Crasovan, 2016; Egger & Hummel, 2020), forming the basis for personalized mentoring. A Retrieval-Augmented Generation (RAG) pipeline and a Learning Record Store (LRS) allow adaptive feedback, content suggestions, and timely interventions. Grounded in relational learning theory (Prange, 2005; Koller, 2023), the system frames mentoring as a co-constructed process. The avatar functions not as an instructor but as a reflective, supportive learning companion, promoting agency and self-regulation over performance optimization. Unlike conventional avatar-based tutor bots, our system integrates narrative goal setting and affective responsiveness rooted in educational theory. Methodologically, the contribution builds on a design-based research framework. A formative study with 20 higher education students explores user trust, perceived support, and acceptance via interaction data and qualitative feedback. To ensure ethical alignment, the system integrates federated learning, transparent consent protocols, and privacy-aware data use (de Witt et al., 2023). It deliberately avoids surveillance-based logic, emphasizing human dignity and autonomy in digital mentoring contexts. Future iterations aim to extend the system beyond higher education and examine its transferability across diverse learning domains and populations.
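To illustrate the emotion-recognition component described in this abstract, the sketch below shows a compact FER2013-style CNN in Keras. The architecture, the emotion labels, and the avatar.set_expression call are illustrative assumptions for readers, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): a compact CNN for FER2013-style
# emotion recognition, one plausible building block of the avatar pipeline.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# The seven FER2013 emotion categories.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def build_emotion_cnn(input_shape=(48, 48, 1), num_classes=len(EMOTIONS)):
    """Small CNN for 48x48 grayscale face crops (FER2013 format)."""
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def predict_emotion(model, face_crop):
    """Map one 48x48 grayscale face crop to an emotion label and confidence."""
    x = face_crop.astype("float32").reshape(1, 48, 48, 1) / 255.0
    probs = model.predict(x, verbose=0)[0]
    return EMOTIONS[int(np.argmax(probs))], float(np.max(probs))

# The predicted label could then drive the avatar's expression in the OpenGL
# rendering loop, e.g. avatar.set_expression(label) -- a hypothetical API call.
```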
ID: 1152 / DigEd - AI: 2
Research Paper
Topics: Track - Digital Education
Keywords: Lesson preparation, artificial intelligence, digital skills, teacher training

“Hey AI, how can I design my lesson for tomorrow?” First insights into pre-service teachers’ use of artificial intelligence for lesson planning.
1 TUD Dresden University of Technology, Chair of Educational Technology; 2 TUD Dresden University of Technology, Center for Open Digital Innovation and Participation; 3 TUD Dresden University of Technology, Chair of Media Education

The rapid development and spread of artificial intelligence (AI) are currently transforming a multitude of occupational areas and are consequently also increasingly impacting professional activities in the field of education. Especially for school teachers, who are facing mounting challenges due to student behavior, large and heterogeneous classes, and increasing bureaucracy (e.g., Sichma & Wolf, 2023), AI not only poses new demands but also offers a variety of opportunities to facilitate their work, in particular during lesson planning and implementation.

According to a recent pilot study, teachers are already using AI for certain professional activities such as brainstorming ideas, adapting and summarizing text-based materials, and creating tasks for both formative and summative assessments (Pettera et al., 2024). However, we still know very little about how prospective teachers currently use basic and complex features of AI specifically to plan and prepare lessons. General competency models such as the Artificial Intelligence Competence Model (AIComp; Ehlers et al., 2023) describe basic AI competencies, but such models have so far not been specifically adapted and operationalized for concrete professional application contexts in the education sector - for example, lesson planning.

To provide empirical evidence on this issue, a qualitative study was conducted to investigate how three pre-service teachers used AI while planning a 90-minute lesson on the topic of “Coffee as a cultivated plant”. For this purpose, a research instrument was first developed that maps, on the basis of the AIComp model, the activities relevant for operationalizing (levels of) AI competence specifically for lesson planning. It was then used to observe participants’ visible actions and to analyze their chat histories. In a subsequent qualitative content analysis, the students’ activities were coded according to Klafki’s (2007) dimensions of lesson planning.

The study’s findings suggest that pre-service teachers mainly use AI during the conceptual phase of lesson planning, in particular to generate ideas and to methodically design their lessons (e.g., by defining learning objectives and meaningfully integrating digital media). More complex tasks, however, such as creating handouts or multimedia learning materials, were rarely performed using AI.

The results indicate that the AI skills of trainee teachers are still limited and underline the need for further research into the causes as well as for targeted media-didactic training within teacher education. The measurement instrument used provides a suitable basis for structuring learning content. In future studies it can be used both to assess learners’ current level of knowledge and to provide targeted support, and it also offers a starting point for the well-founded development of an instrument for assessing AI skills in this professional field.
ID: 1131 / DigEd - AI: 3
Research Paper
Topics: Track - Digital Education
Keywords: Prompt Engineering, Educational Chatbots, Generative AI, LLM Evaluation, Pedagogical Coherence, Responsible AI, GPT-4, Instructional Design

Prompt Engineering in Educational AI: A Benchmark Study on the Pedagogical Performance of Large Language Models
ScaDS.AI, Technische Universität Dresden, Germany

Prompt engineering plays an increasingly central role in shaping the instructional value of generative AI systems in education (Bartok et al., 2023; Bates et al., 2020; Hummel & Donner, 2023). Yet empirical evidence on how different prompt strategies influence the pedagogical quality of AI-generated responses remains scarce (Cain, 2023; White et al., 2023; Bender et al., 2021). Addressing this gap, the present study conducts a controlled benchmarking experiment comparing three prompt formats - generic, role-based, and iteratively optimized - within a GPT-4-powered educational chatbot. A total of n = 56 prompts derived from authentic learner questions served as the basis for a multidimensional evaluation framework that integrates semantic coherence assessed via BERTScore (Zhang et al., 2020), readability and fluency via METEOR (Banerjee & Lavie, 2005), contextual appropriateness through LLM-based meta-evaluation (Fu et al., 2023), and conceptual depth via structured, double-blind expert ratings (n = 5). While generic prompts often produce fluent surface-level responses, iteratively optimized prompts consistently achieve higher scores across all dimensions, particularly in terms of pedagogical alignment and instructional depth. Theoretically, the study is grounded in constructivist and relational learning theories (Koller, 2023; Prange, 2005), viewing prompts not merely as input strings but as didactic mediators that shape instructional interaction, dialogical quality, and learner agency. Drawing on educational psychology and instructional design (Crasovan, 2016; Zawacki-Richter et al., 2019), we argue that deliberate prompt formulation enables more meaningful, responsive, and personalized AI-assisted learning (Zhang & Aslan, 2021). Ethical dimensions - including transparency, contextual sensitivity, and bias mitigation - are operationalized in accordance with Responsible AI principles (de Witt et al., 2023), ensuring that evaluation processes remain aligned with pedagogical and normative concerns. The study contributes a replicable framework for prompt benchmarking in educational AI, empirically substantiates the relationship between prompt structure and pedagogical quality, and reframes prompt engineering as an instructional design practice situated at the intersection of educational intention and technical realization.
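To illustrate the kind of multidimensional scoring described in this abstract, the sketch below evaluates one answer per prompt format against a reference answer with BERTScore and METEOR, using the open-source bert-score and nltk packages. The example texts, labels, and printout are assumptions for illustration, not the study's actual benchmark pipeline (which additionally includes LLM-based meta-evaluation and expert ratings).

```python
# Illustrative sketch (not the study's benchmark code): scoring one answer per
# prompt format against a reference answer with BERTScore and METEOR.
from bert_score import score as bert_score            # pip install bert-score
from nltk.translate.meteor_score import meteor_score  # pip install nltk
from nltk.tokenize import word_tokenize               # needs nltk 'punkt' data

reference = ("Photosynthesis converts light energy into chemical energy "
             "that the plant stores as glucose.")

# Hypothetical responses to the same learner question under the three formats.
responses = {
    "generic": "Plants make food from sunlight.",
    "role-based": "As your biology tutor: photosynthesis turns light energy into glucose.",
    "optimized": ("Photosynthesis converts light energy into chemical energy stored "
                  "as glucose; let's work through the light-dependent steps together."),
}

for prompt_format, answer in responses.items():
    # Semantic coherence: BERTScore F1 against the reference answer.
    _, _, f1 = bert_score([answer], [reference], lang="en", verbose=False)
    # Readability/fluency proxy: METEOR alignment with the tokenized reference.
    meteor = meteor_score([word_tokenize(reference)], word_tokenize(answer))
    print(f"{prompt_format:>10}: BERTScore-F1 = {f1.item():.3f}  METEOR = {meteor:.3f}")
```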
