27 February 2024 | Maria Frasca, Davide La Torre, Gabriella Pravettoni, Ilaria Cutica
This review explores the growing impact of machine learning and deep learning algorithms in medicine, focusing on the critical issues of explainability and interpretability of black-box algorithms. While these algorithms are increasingly used for medical analysis and diagnosis, their complexity underscores the need to understand how they explain and interpret data to make informed decisions. The review comprehensively analyzes challenges and solutions presented in the literature, offering an overview of the most recent techniques used in this field. It provides precise definitions of interpretability and explainability, aiming to clarify the distinctions between these concepts and their implications for decision-making. The analysis, based on 448 articles and addressing seven research questions, reveals exponential growth in this field over the last decade. The psychological dimensions of public perception highlight the necessity for effective communication regarding the capabilities and limitations of artificial intelligence. Researchers are actively developing techniques to enhance interpretability, employing visualization methods and reducing model complexity. However, the persistent challenge lies in finding the delicate balance between achieving high performance and maintaining interpretability. Given the growing significance of AI in aiding medical diagnosis and therapy, the creation of interpretable AI models is considered essential. In this dynamic context, an unwavering commitment to transparency, ethical considerations, and interdisciplinary collaboration is imperative to ensure the responsible use of AI.
This collective commitment is vital for establishing enduring trust between clinicians and patients, addressing emerging challenges, and facilitating the informed adoption of these advanced technologies in medicine.