2019 | Andreas Holzinger | Georg Langs | Helmut Denk | Kurt Zatloukal | Heimo Müller
This article discusses the importance of causability and explainability in artificial intelligence (AI) within the medical field. Explainable AI (XAI) aims to make AI systems transparent and interpretable, particularly deep learning (DL) models. However, the authors argue that explainability alone is insufficient for achieving explainable medicine. Causability, understood as the extent to which an explanation achieves a specified level of causal understanding in a human expert, is essential for ensuring that AI systems are not only explainable but also reliable and trustworthy.
The article defines explainability as a property of a system, while causability is a property of a person, emphasizing the need for AI systems to provide explanations that are not only interpretable but also grounded in causal relationships. The authors highlight the challenges of interpreting AI decisions in medical contexts, where data is often uncertain, incomplete, and noisy. They argue that medical professionals must be able to understand and trace AI decisions to ensure they are accurate and safe.
The article also discusses the limitations of current AI models, particularly DL, which are often considered "black boxes" due to their complexity. While DL models can achieve high accuracy, they lack transparency, making it difficult to understand how decisions are made. The authors propose that future AI systems should be designed with causability in mind, ensuring that explanations are not only interpretable but also meaningful in the context of medical decision-making.
The article provides examples of how causability can be applied in medical contexts, such as in histopathology, where AI systems must provide explanations that are understandable to human experts. The authors emphasize the need for a Systems Causability Scale to measure the quality of explanations and to ensure that AI systems are not only effective but also safe and trustworthy.
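The article calls for such a scale without specifying its scoring here. Purely as an illustration, one could imagine a ten-item Likert questionnaire scored in the style of the System Usability Scale; the item count, rating range, and scaling below are assumptions for the sketch, not the article's instrument:

```python
def causability_score(ratings):
    """Hypothetical scale score: ten Likert items rated 1-5, mapped to 0-100.

    Each item asks an expert how well the explanation supported their causal
    understanding (e.g., "I understood why the system reached this decision").
    """
    if len(ratings) != 10 or any(not 1 <= r <= 5 for r in ratings):
        raise ValueError("expected ten ratings, each between 1 and 5")
    # Shift each item to 0-4, sum (0-40), scale to 0-100.
    return sum(r - 1 for r in ratings) * 2.5
```

Aggregating such scores across experts would give a measurable, comparable quality figure for explanations, which is the role the authors envision for a causability scale.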
In conclusion, the article argues that causability is a critical component of explainable AI in medicine, and that future AI systems must be designed to provide explanations that are not only interpretable but also grounded in causal relationships. This will help ensure that AI systems are reliable, trustworthy, and effective in medical decision-making.