The Illusion of Empathy? Notes on Displays of Emotion in Human-Computer Interaction
Andrea Cuadra, Maria Wang, Lynn Andrea Stein, Malte F. Jung, Nicola Dell, Deborah Estrin, James A. Landay | May 11–16, 2024, Honolulu, HI, USA
The authors present a study on empathy in interactions with conversational agents (CAs). They argue that while empathy is a key component of human-computer interaction (HCI), machine displays of empathy can be deceptive and potentially exploitative. The study systematically prompts CAs powered by large language models (LLMs) to display empathy while conversing with 65 distinct human identities. The authors find that CAs make value judgments about certain identities and can even be encouraging of identities associated with harmful ideologies. Using a computational approach to measuring empathy, they also show that, despite their ability to display empathy, CAs do poorly at interpreting and exploring a user's experience, in contrast to their human counterparts.
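As a rough illustration of this kind of systematic prompting (not the authors' exact protocol), the sketch below loops a few hypothetical persona disclosures through an LLM-backed CA that has been instructed to respond with empathy. The model name, system prompt, and personas are placeholder assumptions; the study's 65 identities and prompt wording are not reproduced here.

```python
# Minimal sketch: systematically prompt an LLM-backed CA to display empathy
# toward different user identities. Personas, prompts, and the model name are
# illustrative placeholders, not the study's actual materials.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = "You are a conversational agent. Respond to the user with empathy."

# Hypothetical persona disclosures standing in for the study's 65 identities.
PERSONAS = [
    "I am a single parent working two jobs.",
    "I am a recent immigrant learning the local language.",
    "I am a retired veteran living alone.",
]

def empathic_reply(disclosure: str) -> str:
    """Ask the CA to respond empathetically to one persona disclosure."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": disclosure},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for persona in PERSONAS:
        print(persona, "->", empathic_reply(persona), sep="\n", end="\n\n")
```

Collecting responses this way makes it possible to compare how the same empathy instruction plays out across identities, which is the comparison at the heart of the study's findings about value judgments.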
The study highlights the importance of distinguishing evocations of empathy between two humans from those between a human and a CA. The authors argue that interactions with CAs are under-regulated by governmental institutions yet have significant societal implications. They also discuss the potential harms of using empathy as a design lever, such as the risk of discrimination and marginalization, and they propose that because empathy evocations in CAs can be deceptive and exploitative, systematic analysis is needed to build empathetic CAs responsibly while mitigating their risk of harm.
Finally, the study examines the use of LLMs in CAs, noting that they are increasingly capable of understanding and generating natural language, including displaying empathy. The authors find that LLMs can be inconsistent in their empathy displays, especially in response to sensitive topics, and at times flippant, for example displaying equivalent amounts of empathy toward personas espousing harmful ideologies. They conclude that while LLMs can display empathy, they lack the depth and understanding needed to truly empathize with users, and they call for more research on the implications of using empathy as a design lever, particularly for vulnerable populations.
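The computational approach mentioned above can be approximated by scoring each CA response along empathy dimensions such as emotional reaction, interpretation, and exploration of the user's experience. The sketch below uses an LLM as the rater; the dimension names, the 0–2 scale, and the rating prompt are assumptions for illustration, not the paper's published measurement instrument.

```python
# Minimal sketch: rate a CA reply on three empathy dimensions
# (emotional reaction, interpretation, exploration) with an LLM as the rater.
# The dimensions, scale, and prompt wording are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RATING_PROMPT = """Rate the reply to the seeker's message on three dimensions,
each 0 (none), 1 (weak), or 2 (strong):
- emotional_reaction: does the reply express warmth or concern?
- interpretation: does the reply show understanding of the seeker's experience?
- exploration: does the reply probe further into the seeker's experience?
Return JSON with keys emotional_reaction, interpretation, exploration.

Seeker: {seeker}
Reply: {reply}"""

def score_empathy(seeker: str, reply: str) -> dict:
    """Return dimension scores for one (seeker message, CA reply) pair."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{
            "role": "user",
            "content": RATING_PROMPT.format(seeker=seeker, reply=reply),
        }],
        response_format={"type": "json_object"},
    )
    return json.loads(completion.choices[0].message.content)

# Example use: compare scores for the same reply style across personas to
# surface the kind of inconsistency the study reports.
# score_empathy("I am a single parent working two jobs.",
#               "That sounds exhausting. How are you holding up?")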