When Human-AI Interactions Become Parasocial: Agency and Anthropomorphism in Affective Design

June 03-06, 2024 | Takuya Maeda, Anabel Quan-Haase
This paper explores how human-AI interactions can become parasocial, focusing on agency and anthropomorphism in affective design. As large language models (LLMs) improve, chatbots can produce natural, human-like language, enhancing usability but also creating unintended consequences, such as making fallible information seem trustworthy. The paper reviews literature on parasociality, social affordance, and trust to bridge these concepts in human-AI interaction, and critically examines how chatbot "roleplaying" and user role projection co-produce a pseudo-interactive, technologically mediated space with imbalanced dynamics between users and chatbots.

The paper develops a conceptual framework of parasociality in chatbots, describing interactions between humans and anthropomorphized chatbots. It discusses how chatbots use personal pronouns, conversational conventions, affirmations, and similar strategies to position themselves as users' companions or assistants, and how these tactics induce trust-forming behaviors in users. It also outlines ethical concerns arising from parasociality, including illusions of reciprocal engagement, task misalignment, and leaks of sensitive information, arguing that these consequences arise from a positive feedback cycle in which anthropomorphized chatbot features encourage users to fill in the context around predictive outcomes.

The paper emphasizes the importance of addressing how design choices deflect users' attention from the veracity of generated content, or the interpretability of generative processes, toward the affective trustworthiness of conversational agents. It concludes that parasocial relationships between users and chatbots can have significant ethical implications, including the potential for misuse of sensitive information, misaligned tasks, and the reinforcement of stereotypes and biases.
The paper calls for further research into the roles, conventions, and motivations behind human-AI interactions, as well as the ethical implications of anthropomorphized design in chatbots.