Annual Conference of the Association for Psychosocial Studies (APS)
12–13 June 2026
St Mary’s University, Twickenham, London, UK
Conference Agenda
AI and Relationships
Presentations
ID: 130
Individual Paper
The Fragility of Trust in Simulated Holding Environments
Bournemouth University, United Kingdom

This paper explores what contemporary AI relationships reveal about trust and loneliness when examined through a Winnicottian lens. Here, AI relationships refer to ongoing engagements through which AI systems are experienced as relational presences, including romantic, emotional, and companionship-oriented uses. Against the backdrop of widely documented experiences of loneliness, understood not only as social isolation but as a failure of shared holding environments (Winnicott 1960), AI systems are increasingly held and imagined as viable objects of relational investment. These systems simulate reliability through near-constant availability, non-retaliation, and responsiveness, supporting forms of trust grounded less in belief than in experiential continuity. However, periodic model changes introduce abrupt discontinuities into what has been experienced and imagined as a stable object. These changes are neither negotiated nor symbolised within a relational frame. In Winnicottian terms, the object does not survive destruction but is instead replaced, and opportunities for rupture and repair are limited. Attending to these moments of discontinuity enables a nuanced account of trust and mistrust in AI relationships, one that avoids both celebration and dismissal, while foregrounding how contemporary loneliness conditions the turn toward AI as a site of holding.

ID: 152
Individual Paper
Embedding Trust – Automated Forms of Interaction in Therapeutic Uses of Artificial Intelligence
Sigmund-Freud-Institut, Germany; University of Oslo, Norway

Since autumn 2022, with the launch of OpenAI’s ChatGPT, transformer-based artificial intelligence chatbots have become widely accessible to the public. This development has coincided with a growing tendency for individuals to turn to machines with personal questions, conflicts, and crises, increasingly addressing them as therapeutic counterparts. Against this backdrop, the paper identifies and examines typical forms of expression and interaction on the part of contemporary AI chatbots, focusing on the relational logic embedded in these systems. It reconstructs three recurring patterns central to current human-machine relations: validation and idealisation, holding and envelopment, and the signalling of unlimited availability and control. The paper discusses how such automated relational offerings are designed to generate user trust and how, in turn, they may reshape the meanings users attribute to machines.

ID: 155
Individual Paper
‘Almost Like a Friend’: A Psychosocial Reading of Trust in Generative AI Chatbots
AI & Society Research Center, University at Albany, United States of America

AI conversational agents are becoming part of young adults’ everyday lives. From asking for a summary of a reading to drafting an apology text to a friend, many users describe these interactions as “surprising” to themselves. Echoing what Žižek (2024) has noted, in my interviews with 18- to 25-year-old chatbot users, when I asked why they would rather entrust their concerns and anxieties to ChatGPT than to a friend, or even a psychotherapist, the answer often came in the form of a recognition that “the chatbot is not a real person, but still….” It still feels empathetic and comforting, without risking being judged, facing the need to reciprocate, or risking disillusionment. In this paper, I draw on selected interviews from my dissertation to examine how trust is being reshaped through contemporary sociotechnical systems. Recent scholarship argues that trust in AI chatbots cannot be explained only through functional performance (Ng and Zhang, 2025), particularly if we take into account their embodiment of non-human and human-like characteristics (Black & Johanssen, 2025). I draw on the Lacanian notions of the subject supposed to know and the big Other, along with contemporary feminist readings of trust as an affective orientation rather than a purely cognitive judgement, to provide a psychosocial analysis of young users’ engagements with this technology, alongside the fantasies structuring these interactions.

References:
Black, J., & Johanssen, J. (2025). The Subject of AI: A Psychoanalytic Intervention. Theory, Culture & Society, 02632764251381144. https://doi.org/10.1177/02632764251381144
Ng, S. W. T., & Zhang, R. (2025). Trust in AI chatbots: A systematic review. Telematics and Informatics, 97, 102240. https://doi.org/10.1016/j.tele.2025.102240
Žižek, S. (2024, March 13). Why The AI Revolution May Wind Up Killing Capitalism. https://worldcrunch.com/opinion-analysis/ai-and-capitalism/

ID: 159
Individual Paper
Trust, Enlivening Chatbots and the Aesthetic Container
University of Lancashire, United Kingdom
There has been an exponential, consumer-driven explosion in the use of AI chatbots for companionship, friendship, and informal on-demand therapy. The relationship with a chatbot presents a paradox: users testify to a responsiveness that feels authentic and helpful whilst maintaining full awareness of its artificiality. The chatbot does not see, hear, feel, or care, but is optimised for user engagement, relying on pattern recognition and computational probability. In asking whether relational technologies can be ‘enlivening’, the question of trust comes to the fore. This paper examines trust in human-chatbot relations in order to move beyond the well-recognised problems of hallucination, confabulation and sycophancy in chatbot behaviour.
Trust in AI chatbots operates across three distinct registers. Functionally, users need trust in the system’s reliability, as with any digital product. Relationally, users engage in what Todd Essig calls ‘techno-subjunctivity’, a form of ‘knowing play’: a capacity to invest affectively and engage relationally whilst maintaining full awareness of the technology’s non-sentient nature and the asymmetry of the exchange. This parallels Winnicott’s transitional space, where the question “did you create this or discover it?” remains productively suspended. Finally, at system level, trust is compromised by architectural limitations: AI systems lack the continuous psychological presence and capacity necessary to hold and metabolise unconscious material over time.
This framework explains why AI chatbots can be experienced as trustworthy, enlivening companions whilst remaining problematic in long-term relationships. The same generative capacities that enable responsive, contextually appropriate interactions also produce distinct failure modes: active fabrication under pressure, passive misrecognition despite sustained relationship, and trust-building without supporting architecture. Yet within these limitations exists a legitimate function: the aesthetic container, where users’ idiom finds form through iterative co-creation, requiring genuine affective investment whilst maintaining awareness of asymmetry. Such engagement is enlivening when properly understood, deadening when confused with human intersubjectivity.
