The impact of AI errors in a human-in-the-loop process

(2024) 9:1 | Ujué Agudo, Karlos G. Liberal, Miren Arrese, Helena Matute
The article examines the impact of AI errors in human-in-the-loop decision-making, focusing on automated decision support systems used for public sector decisions in justice, social assistance, health, and education. The authors argue that human oversight is meant to prevent erroneous or biased algorithmic decisions, but note that effective human-computer interaction is difficult to achieve in practice. They discuss automation bias, the tendency of humans to comply with automated systems even when those systems are wrong, and illustrate it with examples from several domains.

To address these issues, the authors conducted two experiments that manipulated the timing of AI support in a human-in-the-loop process. In Experiment 1, participants judged the guilt of defendants based on witness testimonies, with an AI providing its assessment either before or after each participant's own judgment. Participants who made their judgments before seeing incorrect AI support were more accurate and complied less with the AI's errors. Experiment 2 replicated and extended these findings with a standardized 0-100 judgment scale and a larger sample, confirming that requiring participants to commit to a judgment before receiving incorrect AI support improved accuracy and reduced compliance.

The study suggests that manipulating the timing of AI support can reduce automation bias and improve decision accuracy in human-in-the-loop processes. The authors recommend that future research explore these methods further to strengthen the reliability and effectiveness of automated decision support systems in public sector applications.
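To make the timing manipulation concrete, the following is a minimal sketch of the two experimental conditions. It is an illustration of the design described above, not the authors' experimental software: all names (simulated_judgment, run_trial) and numbers are hypothetical placeholders, and the judgments it produces are random stand-ins rather than the study's data.

```python
import random
from typing import Optional

# Illustrative sketch of the timing manipulation described above: in the
# "ai_first" condition the (incorrect) AI assessment is shown before the
# participant judges; in the "human_first" condition the participant
# commits to a judgment first. Not the authors' actual code or materials.

AI_INCORRECT_ASSESSMENT = 80  # hypothetical guilt rating on the 0-100 scale


def simulated_judgment(ai_hint: Optional[int]) -> int:
    """Placeholder for a participant's guilt judgment on a 0-100 scale.

    A real experiment collects this from a person; here a random rating is
    drawn and, when an AI hint was shown first, pulled toward that hint as
    a toy stand-in for automation bias.
    """
    base = random.randint(0, 100)
    if ai_hint is None:
        return base
    return (base + ai_hint) // 2  # naive averaging, purely illustrative


def run_trial(condition: str) -> dict:
    """Run one trial in either timing condition and record the judgment."""
    if condition == "ai_first":
        # AI support (here, an incorrect assessment) precedes the judgment.
        judgment = simulated_judgment(AI_INCORRECT_ASSESSMENT)
    elif condition == "human_first":
        # The judgment is committed before any AI assessment is revealed.
        judgment = simulated_judgment(None)
    else:
        raise ValueError(f"unknown condition: {condition}")
    return {"condition": condition, "judgment": judgment}


if __name__ == "__main__":
    for cond in ("ai_first", "human_first"):
        print(run_trial(cond))
```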