Explainable and interpretable artificial intelligence in medicine: a systematic bibliometric review

27 February 2024 | Maria Frasca, Davide La Torre, Gabriella Pravettoni, Ilaria Cutica
This review explores the growing impact of machine learning and deep learning algorithms in the medical field, focusing on the critical issues of explainability and interpretability associated with black-box algorithms. The authors analyze the challenges and solutions presented in the literature, offering an overview of recent techniques and definitions of interpretability and explainability. The analysis, based on 448 articles, reveals an exponential growth in this field over the last decade.

The review highlights the psychological dimensions of public perception and the necessity for effective communication regarding AI capabilities and limitations. Researchers are developing techniques to enhance interpretability, such as visualization methods and reducing model complexity, but the challenge remains in balancing high performance and interpretability. The paper emphasizes the importance of transparency, ethical considerations, and interdisciplinary collaboration to ensure responsible use of AI in medicine. The review also discusses the role of regulatory frameworks like the General Data Protection Regulation (GDPR) and the need for accountability in data processing. The paper concludes by emphasizing the importance of establishing enduring trust between clinicians and patients and addressing emerging challenges to facilitate the informed adoption of advanced technologies in medicine.
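To make the "reducing model complexity" idea concrete, here is a minimal sketch of an inherently interpretable model: a one-feature decision stump whose entire decision process is a single human-readable rule. The data, feature, and threshold search are illustrative assumptions, not taken from the review.

```python
def fit_stump(xs, ys):
    """Pick the threshold on a single feature that minimizes
    misclassifications for the rule: predict positive if x >= t."""
    best_t, best_errors = None, float("inf")
    for t in sorted(set(xs)):
        errors = sum((x >= t) != y for x, y in zip(xs, ys))
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

# Toy data: a hypothetical lab value paired with a binary outcome.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [False, False, False, True, True, True]

threshold = fit_stump(xs, ys)
# The whole "model" is one rule a clinician can read and audit:
print(f"predict positive if value >= {threshold}")  # → 4.0
```

A black-box model may fit the same data more flexibly, but the stump's behavior can be stated in one sentence, which is exactly the performance-versus-interpretability trade-off the review discusses.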