AI Writing Stories of Pride – Can Readers Experience Queer Perspectives Through AI-Generated Narratives?
Tanja Messingschlager, Sarah Tomiczek, Markus Appel
Universität Würzburg, Germany
Large language models can be used to tell diverse stories with the potential to foster understanding of minorities such as people with an LGBTQ identity. However, this requires readers to be transported into the narrative. In an online experiment (N = 214), we investigate whether the label "AI-generated" (vs. information about a human author) reduces transportation into a story that addresses queer perspectives. In addition, we include the mind perception ascribed to AI as a mediator of this potential effect. Although we find no total effect of the AI author information on transportation, we find an indirect effect: an AI author is ascribed less mind than a human author, which in turn reduces transportation into the narrative.
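An indirect effect of this kind is commonly estimated as a simple mediation model with a bootstrapped a × b path. The following minimal Python sketch uses simulated data and illustrative variable names (author_ai, mind, transport); it is an assumption-laden illustration, not the authors' actual analysis or data.

```python
# Minimal sketch of a simple mediation model (X -> M -> Y) with a
# bootstrapped indirect effect. All variable names and effect sizes
# are hypothetical; the data below are simulated for illustration.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(42)
n = 214
author_ai = rng.integers(0, 2, n)                         # 0 = human author, 1 = AI author
mind = 4 - 0.8 * author_ai + rng.normal(0, 1, n)          # mind perception (mediator)
transport = 3 + 0.5 * mind + rng.normal(0, 1, n)          # transportation (outcome)

df = pd.DataFrame({"author_ai": author_ai, "mind": mind, "transport": transport})

# Indirect effect = a * b, tested with a percentile bootstrap CI
print(pg.mediation_analysis(data=df, x="author_ai", m="mind",
                            y="transport", n_boot=5000, seed=42))
```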
Chatbot or Scientist? Evaluating AI vs. Human Source Credibility in Digital Climate Communication
Shirley S. Ho, Chang He
Nanyang Technological University, Singapore
Climate change misinformation threatens public understanding and evidence-based policymaking, a challenge intensified by generative AI's integration into scientific communication. While AI can democratize knowledge, risks like "hallucinations" (plausible but false content) and algorithmic opacity undermine trust. The gap between scientific consensus and public perception is often exploited by misinformation campaigns. Generative AI, as a potential "neutral intermediary," could bridge trust deficits if its outputs are anchored in verifiable sources and transparent processes. This study explores how AI chatbot-generated climate information is perceived compared with content provided by human scientists, and whether source attribution mitigates credibility deficits. The HAII-TIME model suggests that transparent AI systems can trigger positive heuristics, enhancing perceived credibility and user engagement, while the AAA framework explains how individuals rely on internal and external authentication to assess information. This study integrates the two models, emphasizing transparency and psychological processing in AI-based communication.
Beauty and the Bot: Perceptions of Human and AI Fitness Influencers on Instagram
Chad Edwards1,2, Autumn Edwards1,2, Varun Rijhwani2,3, Hamza Mostafa1,2, Daniel Ebo1, Dorcas Doku1
1Western Michigan University, United States of America; 2Communication and Social Robotics Labs; 3Indian Institute of Management (IIM), Indore
This study examines user perceptions of human versus AI fitness influencers on Instagram, focusing on how agent type, gender, and age influence credibility, social presence, and interpersonal attraction. In a 2 × 2 × 2 experimental design, 161 participants evaluated mock Instagram posts. Results indicated that human influencers were rated significantly higher in social attraction, caring credibility, and social presence. Younger influencers were perceived as more physically attractive, while older influencers were seen as more credible. Female influencers were rated higher in physical attractiveness than male influencers. These findings highlight the continued importance of human connection in digital fitness engagement and the limitations of AI influencers in forming deep emotional connections with users.
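A between-subjects design like this is typically analyzed with a full-factorial ANOVA. The sketch below simulates illustrative data and fits the three-way model with statsmodels; the factor names and effect sizes are assumptions, not the study's materials or results.

```python
# Minimal sketch of a 2 x 2 x 2 between-subjects ANOVA on one dependent
# variable. Factor levels, effect sizes, and data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 161
df = pd.DataFrame({
    "agent":  rng.choice(["human", "AI"], n),
    "gender": rng.choice(["female", "male"], n),
    "age":    rng.choice(["younger", "older"], n),
})
# Simulated social-presence ratings with a main effect of agent type only
df["social_presence"] = 4 + 0.6 * (df["agent"] == "human") + rng.normal(0, 1, n)

# Full factorial: main effects plus all two- and three-way interactions
model = smf.ols("social_presence ~ C(agent) * C(gender) * C(age)", data=df).fit()
print(anova_lm(model, typ=2))
```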
Bot or Not? The Impact of Textual Imperfections on Students’ Perceptions of AI Humanness in Educational Forums
Patric Spence1, Jihyun Kim1, David Westerman2
1University of Central Florida, United States of America; 2North Dakota State University, United States of America
Westerman et al. (2019) found that frequent typos reduced a chatbot's perceived humanness and attraction, whereas capitalization had minimal negative effects. Building on this, the current study examines how subtle textual features (minor typos and selective capitalization) influence student perceptions of an online course assistant within a simulated LMS discussion board. Capitalized words might enhance emotional immediacy or seem academically inappropriate, while small typos could either humanize assistants by suggesting relatable imperfection or undermine credibility. Drawing on Social Presence Theory (Short et al., 1976) and Expectancy Violations Theory (Burgoon, 1993), the study employs a 2 (typos present/absent) × 2 (capitalization present/absent) experimental design. Participants evaluate posts for humanness, credibility, attraction, social presence, and affect toward both the assistant and the course. Results clarify whether nuanced textual imperfections increase authenticity or harm professionalism in educational contexts, helping educators and designers balance language effectively to enhance student engagement and learning outcomes.
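For the simpler 2 × 2 case, a two-way ANOVA on each dependent variable is the standard analysis. Here is a minimal Python sketch using pingouin on one hypothetical outcome (humanness); the data are simulated and the effect direction is purely illustrative.

```python
# Minimal sketch of a balanced 2 x 2 between-subjects ANOVA.
# Cell sizes, variable names, and the simulated effect are assumptions.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(7)
cell_n = 40  # hypothetical participants per cell
df = pd.DataFrame({
    "typos": np.repeat(["present", "absent"], 2 * cell_n),
    "caps":  np.tile(np.repeat(["present", "absent"], cell_n), 2),
})
# Simulated humanness ratings: typos slightly raise perceived humanness
df["humanness"] = 4 + 0.4 * (df["typos"] == "present") + rng.normal(0, 1, len(df))

# Two-way ANOVA: main effects of typos and capitalization plus their interaction
print(pg.anova(data=df, dv="humanness", between=["typos", "caps"], detailed=True))
```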
Talk, Listen, Connect: Navigating Empathy in Human-AI Interactions
Mahnaz Roshanaei1, Rezvaneh Rezapour2, Magy ElNasr3
1Stanford University, United States of America; 2Drexel University; 3UC Santa Cruz
Research consistently demonstrates that engaging in social interactions, in particular face-to-face interactions, promotes well-being. However, in-person engagement is not always accessible due to constraints such as time limitations, geographical distance, and mental health conditions. In response to these barriers, AI-driven chatbots have emerged as supplementary tools for facilitating social interaction, offering non-judgmental and accessible support for social, emotional, and relational needs. There are significant efforts to enhance the conversational fluency and naturalness of chatbots as well as their emotional responsiveness. However, questions have been raised about the effectiveness and psychological impact of these interactions, particularly as they become more emotionally complex. This research explores how such systems convey emotional support, in particular how they perceive and express empathy compared with humans, and investigates factors such as personality traits, shared experiences, and the impact of fine-tuned AI models on an AI's ability to comprehend empathy.
How Anthropomorphism and Belief in Positive Machine Heuristic Shape Perceived Privacy Risk and Disclosure Intention to Machines: A Moderated-Moderation Analysis
Shirley S. Ho2,1, Junru Huang1
1CNRS@CREATE, Singapore; 2Nanyang Technological University, Singapore
In this study, we examined the role of anthropomorphism and belief in the positive machine heuristic in people's privacy decision-making in AI-powered digital twin city projects through an online survey in Singapore (N = 1,000). Results showed that anthropomorphism and belief in the positive machine heuristic were positively associated with individuals' disclosure intention, while also diluting the negative association between perceived privacy risk and disclosure intention. Notably, anthropomorphism mitigated the negative effect of perceived privacy risk particularly when users held a strong belief in the positive machine heuristic. When this belief was weak, the moderating effect of anthropomorphism became nonsignificant, suggesting that anthropomorphic design alone may not be sufficient to alleviate privacy concerns in human-machine communication unless users already hold a positive heuristic about machines.
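A moderated-moderation model of this kind can be expressed as an OLS regression containing all lower-order terms plus the three-way risk × anthropomorphism × heuristic product, in the spirit of Hayes' PROCESS Model 3. The sketch below uses simulated data and hypothetical variable names, not the study's survey data.

```python
# Minimal sketch of a moderated-moderation (three-way interaction) model.
# Variable names (risk, anthro, heuristic, disclose) and coefficients
# are hypothetical; the data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "risk":      rng.normal(0, 1, n),   # perceived privacy risk (centered)
    "anthro":    rng.normal(0, 1, n),   # anthropomorphism (centered)
    "heuristic": rng.normal(0, 1, n),   # belief in positive machine heuristic
})
# Disclosure intention: risk lowers it; the anthro x heuristic combination
# weakens that negative effect (the pattern described in the abstract)
df["disclose"] = (
    4 - 0.5 * df["risk"]
    + 0.3 * df["anthro"] + 0.3 * df["heuristic"]
    + 0.2 * df["risk"] * df["anthro"] * df["heuristic"]
    + rng.normal(0, 1, n)
)

# "risk * anthro * heuristic" expands to all main effects and interactions;
# the three-way term tests whether heuristic moderates the risk x anthro moderation
model = smf.ols("disclose ~ risk * anthro * heuristic", data=df).fit()
print(model.summary().tables[1])
```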