A Survey on Explainable Artificial Intelligence (XAI): towards Medical XAI


Erico Tjoa and Cuntai Guan, Fellow, IEEE
This paper presents a survey of Explainable Artificial Intelligence (XAI), focusing on its application in the medical field. The authors discuss the importance of interpretability in AI systems, particularly in domains like medicine where accountability and transparency are crucial.

They categorize interpretability into three types: perceptive interpretability, interpretability via mathematical structures, and verbal interpretability. Perceptive interpretability covers methods whose explanations are directly perceivable by humans, such as saliency maps and feature maps. Interpretability via mathematical structures covers methods that explain decisions through mathematical analysis, including pre-defined models, feature extraction, and sensitivity analysis. Verbal interpretability covers methods that express explanations as logical statements or rule sets. Illustrative sketches of each category follow below.

The authors also discuss the challenges and future prospects of interpretability in AI, emphasizing the need for standardized evaluation frameworks and for human studies that verify whether proposed methods are actually interpretable to their intended users. Turning to the medical field, the survey weighs the risks and potential benefits of deploying interpretable AI in diagnosis and treatment. The authors conclude that, despite growing interest in XAI, further research is needed to develop more effective and reliable methods for explaining AI decisions in complex domains like medicine.
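To make the taxonomy concrete, the sketches below illustrate one representative method from each category. They are illustrative only, not code from the survey; the models, data, and parameter values are placeholder assumptions.

A minimal gradient-based saliency map (perceptive interpretability), assuming a PyTorch classifier; the tiny linear model here stands in for a trained medical-imaging network:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in classifier; in practice this would be a
# trained medical-imaging network.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # dummy input

scores = model(image)      # class scores, shape (1, 10)
top = scores.argmax()      # index of the predicted class
scores[0, top].backward()  # backprop the top score to the pixels

# The saliency map is the magnitude of the input gradient: pixels
# whose perturbation most changes the predicted class score.
saliency = image.grad.abs().squeeze()
print(saliency.shape)      # torch.Size([28, 28])
```

A minimal finite-difference sensitivity analysis (interpretability via mathematical structures), assuming the model can be called as a plain function on a tensor:

```python
import torch

def sensitivity(f, x, eps=1e-3):
    """One-at-a-time sensitivity: nudge each input feature by eps and
    record how much the model output moves (a simple, model-agnostic
    finite-difference form of sensitivity analysis)."""
    base = f(x)
    out = torch.zeros_like(x)
    flat = x.flatten()
    for i in range(flat.numel()):
        bumped = flat.clone()
        bumped[i] += eps
        out.view(-1)[i] = (f(bumped.view_as(x)) - base).abs().sum() / eps
    return out

# Usage with the model above: sensitivity(model, torch.rand(1, 1, 28, 28))
```

A minimal rule-set extraction from a shallow decision tree (verbal interpretability), using scikit-learn's export_text:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

# A shallow tree keeps the extracted rule set short and readable.
data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(data.data, data.target)

# export_text renders the tree as nested if/then rules.
print(export_text(tree, feature_names=list(data.feature_names)))
```

Restricting the tree depth is what keeps the printed rules humanly readable; deeper trees trade that verbal interpretability for accuracy, which is the central tension the survey discusses.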