Examining Therapists’ and Counselors’ Attitudes on Griefbots: A Qualitative Study on Expert Insight into Grieving with Technology
Marco Dehnert
University of Arkansas, United States of America
Large language models are pushing into the space of death and grief, through the development of so-called griefbots, which are chatbots trained on a deceased loved one’s digital footprint (their messages and texts). Griefbots are primarily designed as tools to support grieving individuals after the loss of a loved one by simulating the deceased loved one’s style of communication, yet there exists little to no empirical research on griefbots. The majority of griefbot research is theoretical and ethical (e.g., Bao & Zeng, 2024; Brescó de Luna et al., 2024; Henrickson, 2023), lacking evidence-based insights into how griefbots affect grieving individuals and what benefits and risks they offer.
To address this gap, this study reports findings based on interviews with experts who encounter grief routinely in their everyday work: relational therapists and grief counselors. Their expert insights, experience-driven recommendations, and potential worries about using griefbots with grieving individuals offer a much-needed starting point for an evidence-based discussion around the use of griefbots. This project addresses the following research questions:
RQ1: What attitudes (including benefits and risks) do therapists have toward griefbots?
RQ2: In what ways do therapists consider integrating griefbots into grief therapy, if at all?
RQ3: What recommendations do therapists have for users and developers of griefbots?
In this IRB-approved study, I follow a qualitative design with semi-structured interviews (conducted via Zoom) with relational therapists and counselors who regularly work with grief and grieving individuals (target N = 25-35). Interview questions focus on participants’ experience with griefbots, their professional insights, and recommendations for or against use. Upon transcription, interview data will be analyzed following the phronetic iterative qualitative data analysis (PIQDA) process (Tracy, 2025), where the researcher iterates between the collected data and existing literature. Findings will show what grief experts recommend regarding the use and effects of griefbots.
AidMax - AI Chatbot Integration to Fight Teen Depression on Instagram
Korbinian Raphael ZACHERL, Markus Endres
University of Applied Sciences Munich, Germany
Teenagers today face increasing rates of depression and suicidal ideation, often exacerbated by social stigma and limited access to professional mental health care. Digital platforms have become both a source of distress and a potential channel for intervention. This study explores the integration of a chatbot, AidMax, into Instagram as a novel approach to assisting adolescents. AidMax builds on established therapeutic approaches such as Cognitive Behavioral Therapy (CBT), Dialectical Behavior Therapy (DBT), and Behavioral Activation (BA) to provide immediate guidance within a platform that teenagers already use daily. Emerging research suggests that generative AI and interactive digital environments can reshape learning and emotional coping mechanisms by providing adaptive responses. However, gaps remain in understanding how young users interact with AI support tools and how these tools influence their emotional resilience and perceptions of psychological assistance.
The study is guided by three research questions: (1) How do individuals aged 15-25 engage with an AI-based mental health chatbot within a social media platform? (2) What are the limitations of current AI models in interpreting and responding to the language used by depressed users? (3) To what extent can AI-driven interventions contribute to reducing the risk of suicidal ideation among young social media users?
To address these questions, a mixed-methods research design was employed. It combined qualitative expert interviews with professionals in psychology, crisis intervention, and AI, and a quantitative survey of young users after their interaction with AidMax. The methodological approach applied systematic qualitative content analysis to expert insights, following an inductive coding framework, alongside statistical evaluation of chatbot interactions to assess engagement trends.
By embedding AI assistance in the familiar environment of social media, this research contributes to the discourse on the transformative potential of generative AI in fostering digital resilience, shaping new forms of media literacy, and promoting applications for psychological well-being.
Motivated by Machines? How (Dis)Embodied Artificial Assistants Influence Human Motivation and Task Performance
Hannah Morag BÖCKMANN, Laura KUNOLD
Ruhr-Universität Bochum, Germany
Previous research compared different artificial assistants to investigate the influence of the degree of embodiment on user experience and behaviour (e.g. Lusk & Atkinson, 2007; Nakanishi et al., 2020; Seaborn et al., 2023). The results are mixed, highlighting the importance of considering moderating variables such as the context. For example, Hoffmann and Krämer (2013) found that a physically embodied robot was perceived as more competent in a task-oriented interaction, while a virtual agent appeared more competent to users in a conversational scenario. To explain differences resulting from interaction with differently embodied artificial entities, Hoffmann et al. (2018) established the Embodiment and Corporeality of Artificial Entities framework. They postulate that the abilities of artificial entities as perceived by users mediate the relationship between embodiment and the resulting effects. Given the rise of voice assistants that do not possess a human-like embodiment but are capable of using spoken language to persuade humans, the question arises whether interactions based on voice only can effectively motivate humans, or whether physical embodiment adds something to the experience. To answer this question, we conducted a laboratory experiment with a 3x2 mixed design. Participants interacted with either a disembodied speaker, a physically embodied but static robot, or a physically embodied and gesticulating robot (between factor). Additionally, all participants solved a physical and a conversational task (within factor). We measured motivation during the tasks via the time voluntarily spent on them, and performance via the number of tasks completed and the number of errors made. In addition, we assessed the evaluation of the artificial entity (i.e., perceived capabilities and attributes) after each task via questionnaires.
We hypothesized that the embodied and gesticulating robot would yield the highest evaluation, motivation, and performance scores, followed by the embodied but static robot, and finally the disembodied entity.
Talk to Me: How Active Constructive Responding and Embodiment in Voice Assistants Reduce Academic Stress
Aminou Memoko, Jana Figge, Nicklas Frochte, Mohammad Azizi, Rosika Kumar, André Helgert, Carolin Straßmann
University of Applied Sciences Ruhr West, Germany
University students increasingly struggle with mental health issues, often exacerbated by academic stress (Fayda-Kinik, 2023). A promising approach to supporting students' well-being is leveraging everyday technology (Yurayat & Seechaliao, 2021), such as voice assistants, which promote positive emotions through conversations about gratitude (Helgert & Straßmann, 2022). One effective strategy for enhancing positive emotions and reducing stress is sharing positive experiences (Fredrickson, 2001). Research suggests Active Constructive Responding (ACR)—a style where a conversation partner acknowledges and elaborates on positive experiences—amplifies their emotional benefits (Gable et al., 2004), creating a stronger effect than just sharing experiences. However, in everyday life, students rarely encounter conversation partners skilled in ACR.
Integrating ACR into voice assistants could offer students a daily companion for reflecting on positive academic experiences. Its effectiveness may depend on embodiment—the assistant’s physical presence and anthropomorphic features, which enhance social engagement and emotional responses (Kim et al., 2019; Pollmann et al., 2020; Thellman et al., 2016). This study examines whether ACR-based feedback reduces academic stress (RQ1), whether embodiment amplifies these effects (RQ2), and how ACR affects perceptions of voice assistants (RQ3).
A 2x2 mixed experimental design was used, manipulating embodiment (a robot with anthropomorphic features, Cozmo, vs. a voice assistant, Alexa) and feedback type (neutral vs. ACR). Participants completed a stress-inducing academic task to establish baseline stress levels, then interacted with the assigned assistant (Cozmo or Alexa) by sharing three positive academic experiences. Depending on the condition, the device responded with either ACR (expressing interest and asking follow-up questions) or neutral feedback (acknowledging without elaboration). Measurement tools included the PANAS, Distress/Eustress, RoMo, and RoSAS scales.
This research explores how embodiment and conversational style influence the effectiveness of voice-assistant-based mental health interventions.