Conference Agenda
Overview and details of the sessions of this conference. Please select a date or location to show only sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).
Please note that all times are shown in the time zone of the conference.
Agenda Overview

Session
D342: LLM-SUPPORTED USER RESEARCH
Presentations
From online reviews to Kano model: a large language model method and case study
Università di Pisa, Italy
We introduce a method that turns online customer reviews into design insights. By analysing smartphone reviews, we extract the product features customers talk about and identify the sentiment linked to them. The approach combines Large Language Models (LLMs) with the Kano model, showing how specific features influence satisfaction or dissatisfaction. The results are coherent with the dimensions of the Kano model. The work demonstrates that LLMs can be informed and constrained by established design frameworks, bridging LLMs and design reasoning to provide theory-grounded insights.

Can LLM-driven synthetic participants help user research? A case study in designing augmented reality for education
1University of Bath, United Kingdom; 2SENAI Innovation Institute for Information and Communication Technologies, Brazil; 3Universidade de Pernambuco, Brazil; 4University of Oxford, United Kingdom
Conventional user research with human participants faces significant challenges, including substantial time and resource requirements and limited scalability. In response, this study presents an efficient, cost-effective workflow driven by large language models (LLMs) for simulating user research with synthetic participants (SPs) at scale. In a case study on designing augmented reality for education, SPs’ open-ended answers were plausible and comprehensive, yet their responses to semi-open and closed items diverged from those of humans. SPs can augment early qualitative work, but cannot replace human studies.

LLM-based voice chatbot surveys as an alternative to post-experience questionnaires: probe-controlled, ultra-short field interviews
Institute of Science Tokyo, Japan
Chatbot-based surveys offer low-burden, in-situ data collection, yet unconstrained LLMs often drift from research aims. We conducted 359 ultra-short, post-experience voice interviews in a public venue to compare a framework-guided LLM, an unconstrained LLM, and fixed questions. The guided approach produced significantly longer responses than fixed questions and yielded the richest diversity of process-specific accounts. These findings show that probe control is essential for eliciting actionable, experience-grounded feedback in real-world, time-limited settings.

On using LLM reasoning to support reflection in design thinking
1University of Thessaly, Greece; 2University of the Aegean, Greece
Design thinking fosters creativity but is susceptible to cognitive biases. We propose a rule-based framework supported by large language models that uses a Prompt-Reflection-Reframe loop to identify bias mechanisms in designers’ verbal reasoning and generate theory-grounded reflective prompts. Through scenario-based evaluations, we validate the framework’s theoretical foundations and establish a methodological basis for supporting bias-aware design practice.

