This presentation offers a critical, interdisciplinary examination of artificial intelligence through a philosophical and social care lens, responding to the conference theme by interrogating the “hereness” and “not-yetness” of AI in educational and human-centred domains. While AI is increasingly embedded in our digital campuses, classrooms, and care systems, I argue that the ethical, epistemological, and human implications of these technologies remain under-explored.
Framing AI not as a neutral tool but as a socio-technical system imbued with power, the session draws on Foucault’s (1977) concepts of the panopticon and power/knowledge to examine how AI surveillance technologies, predictive algorithms, and data analytics reconfigure human behaviour and institutional control. In this “digital panopticon,” students, educators, and service users may internalise the presence of invisible AI observers, altering their choices, creativity, and agency.
The presentation further explores how AI systems shape cognition and epistemology—particularly through personalised content, search, and recommendation engines that exploit confirmation bias. These tools may inadvertently dull critical thinking, reduce exposure to diverse perspectives, and entrench polarisation. The implications for pedagogy are profound: in the age of GenAI, are we educating students to think critically, or merely to echo themselves?
Anthropomorphism is also examined through philosopher Daniel Dennett’s (1987) intentional stance, which helps explain why humans so readily ascribe belief, intention, and emotion to LLMs, chatbots, and embodied AI. Using examples like the film Her and emotionally responsive AI such as ChatGPT, the presentation questions whether these systems should ever be treated as relational agents. What are the consequences when students or vulnerable users mistake simulated empathy for authentic care, or when educators begin to trust AI’s authority over human insight?
These philosophical questions are grounded in real-world case studies from social care and education. The presentation examines AI’s use in child protection, such as predictive risk-scoring tools, and raises concerns about bias, transparency, and the erosion of due process. It also considers how health metrics can miss cultural nuance, for example hydration measures that fail to account for dietary diversity among Cantonese elders. In both examples, the “not-yetness” of AI’s cultural and ethical sophistication is stark.
Crucially, the presentation argues for the central role of philosophy in shaping the future of educational technology. Philosophy fosters the ethical reflection, epistemic humility, and conceptual clarity needed to navigate AI’s seductive yet unsettled terrain. It challenges us to ask: What kinds of intelligence and labour are being outsourced? What moral responsibilities are being deferred? And how do we maintain space for ambiguity, empathy, and care in increasingly automated environments?
Ultimately, this presentation invites educators and technologists to move beyond utility toward ethical inquiry, teaching students not only to interrogate their engagement with AI, but also to question what kinds of human thinking we are outsourcing, overlooking, or losing. As generative AI becomes ubiquitous, our challenge is not only technical integration but human preservation. To answer “Are we there yet?” we must first ask: Where exactly are we going, and who do we become when we get there?