Conference Agenda
Session
PS 9b: Digital self-exclusion and Human-AI collaboration

Presentations
Digital disengagement: Exploring the complex context of the right to digital self-exclusion
1Mykolas Romeris University, Lithuania; 2New Bulgarian University; 3University of the Aegean

RECONNECT, the Interdisciplinary Research on People Inclusion in Technology-Dependent Societies Cluster, explores the complex dynamics determining the inclusion and exclusion of individuals and groups in increasingly digitalized societies. The nuances between these concepts are multifaceted, and we approach them as complex systems in which human, environmental, and technological factors are interdependent and co-evolve. Digital inclusion refers to actions ensuring that all individuals have access to digital technologies and the skills and means necessary to use them effectively. Digital exclusion highlights the barriers preventing people from accessing digital technologies due to socioeconomic status, geographic barriers, and disability. Increasing inclusion while reducing exclusion is a priority in the European Union (EU), which holds that “the digital world should be based on European values – where no one is left behind, everyone enjoys freedom, protection and fairness.” But what happens with individuals or groups that do not want to be fully part of a digitalized world? The right to self-exclusion is a less-studied aspect of the digital inclusion and exclusion debate. We argue that the concept should take a central place, not only because it might represent a considerable legal and regulatory stalemate, but also because it highlights other important interdependencies such as privacy, cybersecurity, societal resilience, and sustainability. Can these concerns increase the number of self-exclusions in Europe?
This research presents our first results, providing a state of the art on the subject of digital self-exclusion, including lessons learned on international governance best practices and recommendations for policy, highlighting the need for new interdisciplinary methods of inquiry and for new narratives.

The Innovation Imperative: Redesigning Coaching Through Human–AI Collaboration
Mykolas Romeris University, Lithuania

Purpose: This study explores the evolving landscape of hybrid coaching models that integrate human expertise with artificial intelligence (AI) technologies. It aims to examine how AI can complement human coaches to enhance developmental outcomes while preserving relational and psychological depth.
Originality/Value: Although AI is becoming more common in coaching, most research focuses on comparing AI coaches to human coaches. This study takes a different approach by exploring how human coaches and AI can work together in a supportive partnership. It brings together ideas from coaching, technology, and psychology to offer a fresh perspective on how hybrid coaching models can be designed to combine the strengths of both humans and AI.
Methodology/Approach: This study employs a structured literature analysis. The review focuses on existing theoretical frameworks, case studies, and conceptual models that inform the design, implementation, and effectiveness of hybrid coaching systems.
Findings: The literature shows that hybrid coaching models can be highly effective when AI is used to support, not replace, the human coach. AI can help by providing structure and feedback and by tracking progress, while the human coach focuses on empathy, relationship-building, and complex decision-making. Trust in technology, clear roles, and ethical design are key to making these models work well.
Practical Implications: The study provides conceptual guidance for coaching professionals, platform designers, and organizational leaders seeking to integrate AI ethically and effectively into coaching ecosystems.

Socratic AI: Enhancing Reflection and Dialogue for Inclusive Interdisciplinary Collaboration
European University Viadrina Frankfurt (Oder), Germany

Socrates famously stated, "One cannot teach anybody anything. One can only make them think." This concept of learning through reflection and dialogue is still highly relevant in today’s AI-driven work environments. As AI continues to influence the way we work, it presents a unique opportunity to go beyond simple automation and to use AI to encourage self-reflection, critical thinking, inclusivity, and greater engagement in team settings and learning environments. This paper explores how AI can act as both a sparring partner and a catalyst for facilitating Socratic Dialogue within tools like ChatGPT, optimizing group processes at various stages. Interdisciplinary teams often encounter communication barriers, differing knowledge paradigms, and unconscious biases. While traditional AI applications prioritize efficiency, they often neglect the social learning processes that are crucial for fostering innovation. Using prompts, we demonstrate that AI, when designed to support Socratic Dialogue, can enhance collaboration by encouraging teams to critically examine assumptions, refine their reasoning, and explore diverse perspectives. By prompting reflective engagement, AI can improve communication and decision-making and foster more effective teamwork across different development phases. Through case studies and experimental findings, this work illustrates how AI-driven Socratic Dialogue can reduce groupthink, encourage flexible thinking, and promote deeper exploration of ideas, all while fostering open-mindedness and collaboration.

Illuminating the dark forest: Grassroots initiatives reshaping narratives of AI
SWPS University, Poland

This paper explores how grassroots communities in Europe are reshaping AI narratives, challenging perceptions of the technology as opaque and controlled from the top down. Using the “dark forest internet” theory as a lens, we highlight how grassroots initiatives promote ethical, community-driven AI solutions. Yancey Strickler’s essay on dark forest theory suggests that users retreat into private spaces to escape online harassment and surveillance, seeking safety and authentic communication. Similarly, grassroots communities develop counter-narratives emphasising human-centered values, inclusivity, and local needs in AI development. One example is the Poland-based community behind Bielik, a Polish large language model designed to reflect local languages, cultures, and contexts. Such efforts challenge the dominance of corporate-controlled AI, ensuring technology aligns with cultural and ethical priorities rather than prioritising efficiency and profit. Other counter-infrastructures, like the University of Chicago’s Glaze and Nightshade, undermine AI models to protect creative communities from exploitation. Artistic and cultural expressions also shape AI discourse, as digital art, storytelling, and community exhibitions amplify marginalized voices. Grassroots movements further advocate for accessibility and democratic oversight in AI governance, ensuring policies serve the public good over corporate or state interests. To illustrate these dynamics, we examine case studies of European grassroots AI initiatives, showing how they challenge dominant narratives through innovation, advocacy, and creative expression. By amplifying these voices, we contribute to a more inclusive, ethical understanding of AI’s societal impact, ensuring its development remains accountable to diverse communities.