2024 | Tom Lawton, Phillip Morgan, Zoe Porter, Shireen Hickey, Alice Cunningham, Nathan Hughes, Ioanna Iacovides, Yan Jia, Vishal Sharma, Ibrahim Habli
The article examines the risks and challenges of integrating artificial intelligence (AI) into healthcare, focusing on the liability issues clinicians may face. The authors argue that although AI is often cast as a savior for healthcare, it can become a "liability sink" for clinicians if they are unfairly held responsible for errors and adverse outcomes over which they have limited control. Under the standard model of AI-supported decision-making, in which the clinician either accepts or overrides an AI recommendation, clinicians can feel disenfranchised and are burdened with additional cognitive and practical demands. The article highlights the difficulty of attributing liability in socio-technical systems, where many people are involved in the design, commissioning, and operation of an AI system, and it argues that holding clinicians solely responsible for AI-related harms may be neither fair nor just. The authors propose alternative models that emphasize patient-centered decision-making and return the clinician to a more traditional role of integrating diverse data and opinions to reach a decision. They also suggest that AI systems could offer predictions or highlight relevant data rather than issue direct recommendations, which could reduce the clinician's liability exposure. The article concludes by advocating reforms to liability law to better address the unique challenges AI poses in healthcare.