Causability and explainability of artificial intelligence in medicine


Andreas Holzinger, Georg Langs, Helmut Denk, Kurt Zatloukal, Heimo Müller

Received: 19 November 2018 | Revised: 26 January 2019 | Accepted: 24 February 2019
The article discusses the importance of causability and explainability in artificial intelligence (AI) within the medical field. While explainable AI (XAI) focuses on making AI systems transparent and interpretable, the authors argue that true *explainable medicine* requires a deeper understanding of causality. Causability, unlike explainability, is a property of a person rather than of a system: it concerns how well a given explanation allows a human to understand the reasons behind an AI decision, which is crucial if medical professionals are to trust and use AI systems effectively.

Explainability in AI has a long history, dating back to early systems built on logical and symbolic reasoning. Those systems were inherently interpretable but limited in their practical applicability. Modern AI, particularly deep learning (DL), has achieved remarkable success yet often lacks transparency, making it difficult to understand how decisions are made. The article therefore highlights the need for XAI systems that not only produce accurate results but also provide clear, interpretable explanations that medical professionals can understand; one common family of post-hoc techniques is sketched below.
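To make this concrete, here is a minimal sketch of one widely used post-hoc explanation technique, gradient saliency, applied to a stand-in image classifier. It illustrates the general idea only, not the specific method discussed in the article; the model architecture, input, and class labels are placeholder assumptions.

```python
import torch
import torch.nn as nn

# Stand-in classifier (placeholder): in practice this would be a trained
# diagnostic model, e.g. a CNN over histopathology image patches.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),  # two hypothetical classes, e.g. benign vs. malignant
)
model.eval()

# Dummy input (batch of 1, RGB, 64x64); a real pipeline would load and
# normalize an actual tissue-slide patch here.
image = torch.rand(1, 3, 64, 64, requires_grad=True)

# Forward pass, then backpropagate the score of the predicted class.
logits = model(image)
predicted_class = logits.argmax(dim=1).item()
logits[0, predicted_class].backward()

# The per-pixel gradient magnitude is the saliency map: it highlights
# pixels whose perturbation would most change the prediction score.
saliency = image.grad.abs().max(dim=1)[0]  # shape: (1, 64, 64)
print(saliency.shape)
```

Saliency maps like this are one way to surface what a network attends to, but, as the authors stress, such attributions show association, not causation.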
The authors emphasize that in medicine, where decisions are often based on complex and uncertain data, explainability is essential. Medical professionals must be able to understand how AI systems arrive at their conclusions, as this is necessary for trust, safety, and effective decision-making. They also note that current AI systems often cannot provide causal explanations, which are needed to understand the true nature of medical conditions and treatments (a toy simulation of this distinction closes this summary). The article presents examples of explainable AI in histopathology, where AI systems analyze tissue samples and provide explanations that help pathologists make accurate diagnoses. It also discusses the challenges of interpreting AI decisions, particularly in the context of deep learning, and the need for methods that yield meaningful, interpretable explanations.

The authors conclude that developing causability as a scientific field is essential for advancing AI in medicine. This involves creating new methodologies and tools to measure the quality of explanations and to ensure that AI systems are not only accurate but also trustworthy and understandable. The ultimate goal is *explainable medicine*: AI systems that provide not just accurate predictions but also clear, causal explanations that help medical professionals make informed decisions.
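As a closing illustration of why the authors separate associational explanations from causal ones, here is a small self-contained simulation (the variables and coefficients are entirely hypothetical, not data from the article). A confounder, disease severity, drives both a biomarker and the outcome; a purely observational fit over-credits the biomarker, while an interventional estimate in the spirit of Pearl's do-operator recovers its true direct effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy structural causal model (illustrative assumptions only): disease
# severity confounds both a biomarker and the clinical outcome.
severity = rng.normal(size=n)
marker = 0.8 * severity + rng.normal(scale=0.5, size=n)
outcome = 1.5 * severity + 0.2 * marker + rng.normal(scale=0.5, size=n)

# Observational association: regressing outcome on the marker alone mixes
# the marker's direct effect (0.2) with the confounding path via severity.
obs_slope = np.cov(marker, outcome)[0, 1] / np.var(marker)

# Interventional estimate: setting the marker from outside (a do-operation)
# cuts its dependence on severity; only the direct effect remains.
marker_do = rng.normal(size=n)
outcome_do = 1.5 * severity + 0.2 * marker_do + rng.normal(scale=0.5, size=n)
int_slope = np.cov(marker_do, outcome_do)[0, 1] / np.var(marker_do)

print(f"observational slope: {obs_slope:.2f}")   # ~1.55, inflated by confounding
print(f"interventional slope: {int_slope:.2f}")  # ~0.20, the causal effect
```

An XAI method that only reports feature importance behaves like the observational fit; causability, in the authors' sense, asks whether an explanation lets a clinician reach the causal reading.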