Designing for Human-Agent Alignment: Understanding what humans want from their agents

May 11-16, 2024 | Nitesh Goyal, Minsuk Chang, Michael Terry
This paper presents findings from a qualitative study on designing for human-agent alignment, focusing on understanding what humans want from their agents. The study involved 10 participants who were asked to imagine they had an autonomous agent selling a used camera on their behalf. Participants were shown transcripts of fictional negotiations between their agent and a potential buyer and asked to think aloud about how the situations could have been handled appropriately.
The study identified six key dimensions of human-agent alignment: 1) Knowledge Schema Alignment, 2) Autonomy and Agency Alignment, 3) Operational Alignment and Training, 4) Reputational Heuristics Alignment, 5) Ethics Alignment, and 6) Human Engagement Alignment. These findings extend previous work on process and specification alignment, and on the need for values and safety in human-AI interactions. The study also highlights the importance of aligning on how agents should interact with humans, including when and how to communicate, and what constitutes ethically appropriate behavior. The findings suggest that designing agents requires attention to these alignment dimensions to ensure effective human-agent collaboration. The paper closes with three design directions for designers of human-agent collaborations, emphasizing the need for human-centered research in the design of agents.