Erico Tjoa and Cuntai Guan, Fellow, IEEE
This paper provides a comprehensive survey of Explainable Artificial Intelligence (XAI) in the context of medical applications. It highlights the importance of interpretability and explainability in machine learning, especially in critical fields such as medicine, where the reliability and accountability of AI systems are crucial. The paper groups existing interpretability methods into two major categories: perceptive interpretability and interpretability via mathematical structures. Perceptive interpretability methods produce explanations that humans can perceive directly, such as heatmaps and other visualizations, whereas mathematical-structure methods rely on mathematical formulations to uncover the mechanisms underlying an algorithm. The paper also discusses the challenges and future prospects of these methods, emphasizing the need for a unified framework that integrates the different approaches. It then applies the categorization to medical research, aiming to guide clinicians and practitioners in selecting appropriate interpretability methods and to promote specialized education in the medical sector. The survey includes a detailed overview of interpretability techniques, such as saliency methods, signal methods, verbal interpretability, and sensitivity analysis, along with their applications in medical contexts.
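To make the "perceptive interpretability" category concrete, the sketch below shows a vanilla gradient-based saliency map, one of the simplest heatmap-style explanations the survey covers. This is a minimal illustration, not the paper's own implementation: the toy model, input shape, and variable names are all assumptions chosen for brevity.

```python
# A minimal sketch of gradient-based saliency (a perceptive
# interpretability method). The model and input here are illustrative
# stand-ins, not taken from the surveyed paper.
import torch
import torch.nn as nn

# Toy classifier standing in for a medical-imaging model.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 28 * 28, 2),
)
model.eval()

# Dummy 28x28 single-channel "scan"; requires_grad lets us ask which
# pixels the prediction is most sensitive to.
x = torch.randn(1, 1, 28, 28, requires_grad=True)

# Forward pass, then back-propagate the score of the predicted class.
scores = model(x)
predicted = scores.argmax(dim=1).item()
scores[0, predicted].backward()

# The saliency map is the magnitude of the input gradient: large values
# mark pixels whose perturbation most changes the class score, which is
# exactly what a heatmap explanation visualizes.
saliency = x.grad.abs().squeeze()
print(saliency.shape)  # torch.Size([28, 28])
```

In practice the resulting map is overlaid on the input image so a clinician can check whether the model attends to clinically plausible regions; the same gradient machinery also underlies the sensitivity-analysis methods mentioned above.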