An Explainable AI Paradigm for Alzheimer’s Diagnosis Using Deep Transfer Learning

5 February 2024 | Tanjim Mahmud, Koushick Barua, Sultana Umme Habiba, Nahed Sharmen, Mohammad Shahadat Hossain, Karl Andersson
This paper presents an explainable AI (XAI)-based approach for the diagnosis of Alzheimer's disease, leveraging deep transfer learning and ensemble modeling. The study aims to enhance the interpretability of deep learning models by incorporating XAI techniques, such as saliency maps and Grad-CAM, to provide clinicians with visual insights into the neural regions influencing the diagnosis. The proposed framework uses popular pre-trained convolutional neural networks (CNNs) such as VGG16, VGG19, DenseNet169, and DenseNet201. Extensive experiments were conducted on a comprehensive dataset, and the results showed that the proposed ensembles, Ensemble-1 (VGG16 and VGG19) and Ensemble-2 (DenseNet169 and DenseNet201), achieved superior accuracy, precision, recall, and F1 scores compared to individual models, reaching up to 95%. A novel model incorporating XAI techniques achieved an impressive accuracy of 96%. The integration of saliency maps and Grad-CAM not only enhanced the model's accuracy but also provided valuable visual insights into the neural regions influencing the diagnosis. The findings highlight the potential of combining deep transfer learning with explainable AI in Alzheimer's disease diagnosis, paving the way for more interpretable and clinically relevant AI models in healthcare.
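For readers who want a concrete picture of the two core ideas in the abstract, the sketch below shows (1) a soft-voting ensemble that averages the class probabilities of two ImageNet-pretrained backbones (VGG16 and VGG19, mirroring Ensemble-1) and (2) a standard Grad-CAM heatmap for one of them. This is a minimal illustration under assumed settings (TensorFlow/Keras, 224x224 inputs, a four-class label set, the `block5_conv3` layer name); it is not the paper's actual head architecture, training procedure, or dataset.

```python
# Minimal sketch of transfer-learning ensemble + Grad-CAM.
# Assumptions (not from the paper): TF/Keras, 4 classes, frozen backbones,
# probability averaging as the fusion rule, VGG16's last conv layer name.
import tensorflow as tf
from tensorflow.keras.applications import VGG16, VGG19

NUM_CLASSES = 4            # assumed label set, e.g. dementia severity stages
IMG_SHAPE = (224, 224, 3)

def build_transfer_model(base_cls):
    """Frozen ImageNet backbone + a small classification head."""
    base = base_cls(weights="imagenet", include_top=False, input_shape=IMG_SHAPE)
    base.trainable = False
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return tf.keras.Model(base.input, out)

model_a = build_transfer_model(VGG16)
model_b = build_transfer_model(VGG19)

def ensemble_predict(x):
    """Soft voting: average the two models' predicted class probabilities."""
    return (model_a.predict(x) + model_b.predict(x)) / 2.0

def grad_cam(model, image, conv_layer_name="block5_conv3"):
    """Grad-CAM: weight the last conv feature maps by the gradient of the
    top class score, then ReLU and normalize to a [0, 1] heatmap."""
    grad_model = tf.keras.Model(
        model.input,
        [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        top_class = int(tf.argmax(preds[0]))
        score = preds[:, top_class]
    grads = tape.gradient(score, conv_out)            # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))   # global-average-pool grads
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)
    cam = tf.nn.relu(cam)
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

# Usage (after fine-tuning the heads on labeled MRI slices, not shown):
# probs = ensemble_predict(batch)        # (N, 4) averaged probabilities
# heat  = grad_cam(model_a, batch[0])    # (14, 14) heatmap to upsample and
#                                        # overlay on the input scan
```

Soft voting (probability averaging) is one common way to fuse CNNs; the abstract does not spell out the paper's exact fusion rule, so treat the averaging step above as an assumption for illustration.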