When Do People Want an Explanation from a Robot?

March 11-14, 2024 | Lennart Wachowiak, Andrew Fenn, Haris Kamran, Andrew Coles, Oya Celiktutan, Gerard Canal
Lennart Wachowiak, Andrew Coles, and colleagues conducted a study to determine when users want explanations from robots during human-robot interaction (HRI). In the study, 186 participants watched videos of robots in a variety of situations, including successful interactions, errors, and social norm violations, and rated how they would like the robot to respond after each interaction.

The results showed that users most often want explanations when the robot encounters unforeseen circumstances, is unable to complete a task, makes an error, or violates a social norm. Why-explanations were consistently among the top two preferred responses, except when the robot acted normally and successfully. Apologies were highly valued when the robot violated a social norm but less so when the robot was uncertain; conversely, asking for help was highly valued when the robot was uncertain but less so when it violated a norm. The study found no significant correlations between participants' attitudes towards robots, age, or gender and their response preferences, although some trends emerged that could inform future research.

The findings suggest that explanations are crucial for building trust and improving user perception of robots. They highlight the importance of designing robot interactions that are transparent and user-centered, enabling effective communication and trust-building in HRI scenarios.