March 18-21, 2024 | Chun-Wei Chiang, Zhuoran Lu, Zhuoyan Li, Ming Yin
This paper explores how introducing LLM-powered devil's advocates can enhance AI-assisted group decision making. The study investigates whether and how LLM-powered devil's advocates, which challenge AI recommendations or majority opinions, can help groups better utilize AI assistance and improve their decision-making processes. The research uses a randomized human-subject experiment to evaluate the impact of different LLM-powered devil's advocate designs on group decision-making outcomes.
The study finds that LLM-powered devil's advocates that challenge AI recommendations can promote groups' appropriate reliance on AI, particularly in in-distribution decision-making cases. Interactive devil's advocates are perceived as more collaborative and of higher quality than non-interactive ones, and introducing devil's advocates does not significantly increase the perceived workload of completing group decision-making tasks. Interestingly, participants who interacted with dynamic devil's advocates challenging AI recommendations reported the lowest self-perceived decision-making performance and teamwork quality, despite achieving the highest actual decision-making performance.
The study also examines how different designs of LLM-powered devil's advocates affect groups' utilization of AI assistance in in-distribution and out-of-distribution decision-making cases. The results show that interactive devil's advocates can reduce over-reliance on AI in cases where AI recommendations are incorrect. The findings suggest that LLM-powered devil's advocates can help groups better utilize AI assistance by encouraging critical thinking and diverse perspectives, while also highlighting the importance of designing these advocates to be interactive and collaborative.
The study contributes to the field of human-AI interaction by demonstrating the potential of LLM-powered devil's advocates to enhance group-AI interactions in decision-making scenarios. It also highlights the need for further research into the design and implementation of such advocates to ensure they are effective and ethical in their use. The study's results have practical implications for improving the reliability and fairness of AI-assisted decision-making processes in group settings.