March 11–14, 2024, Boulder, CO, USA | Gregory LeMasurier, Alvika Gautam, Zhao Han, Jacob W. Crandall, Holly A. Yanco
This paper explores the effectiveness of reactive and proactive systems in explaining robot failures to enhance trust and understanding. Reactive systems explain failures after they occur, while proactive systems predict and explain potential issues in advance. The two approaches are compared in a mixed online user study with 186 participants. The results show that proactive systems are perceived as more intelligent and trustworthy, and that their explanations are rated as more understandable and timely. The study also highlights the importance of explaining the reasons for failures in order to improve human assistance and team performance. The findings suggest that proactive systems can improve robot adoption and diagnostic capabilities, particularly in shared human-robot workspaces. The research provides insights for designing effective robot explanation systems and improving human-robot collaboration.
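To make the reactive/proactive distinction concrete, the minimal sketch below contrasts the two explanation strategies described in the abstract. It is illustrative only and is not the authors' implementation; the function names, the `failure_likelihood` input, and the 0.7 threshold are hypothetical assumptions.

```python
from typing import Optional

# Hypothetical sketch: a reactive explainer speaks only after a failure has
# occurred, while a proactive explainer warns before acting when a predicted
# failure likelihood exceeds a threshold. All names and values are assumed.


def reactive_explanation(task_outcome: str) -> Optional[str]:
    """Explain a failure only after it has already happened."""
    if task_outcome == "failed":
        return "I could not complete the task; something went wrong during execution."
    return None


def proactive_explanation(failure_likelihood: float, threshold: float = 0.7) -> Optional[str]:
    """Warn about a likely failure before attempting the action, so a human can assist."""
    if failure_likelihood >= threshold:
        return "I might not be able to complete this task; could you assist before I try?"
    return None


if __name__ == "__main__":
    print(reactive_explanation("failed"))   # explanation arrives after the failure
    print(proactive_explanation(0.85))      # explanation arrives before acting
```

Under this framing, the proactive variant gives the human collaborator an opportunity to intervene before the failure occurs, which is consistent with the study's finding that proactive explanations were rated as more timely.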