(2024) 23:8 | Yan Tang, Xing Xiong, Gan Tong, Yuan Yang, Hao Zhang
This study presents a multimodal diagnosis model for Alzheimer's disease (AD) based on an improved Transformer and a 3D Convolutional Neural Network (3DCNN). The model integrates structural magnetic resonance imaging (sMRI) and positron emission tomography (PET) data to enhance the accuracy of AD diagnosis. The 3DCNN extracts deep features from the sMRI and PET images, while the improved Transformer learns global correlation information among these features; the fused multimodal information is then used for classification. The model achieved a classification accuracy of 98.1% on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Visualization methods were employed to explain the model's decisions and to identify brain regions associated with AD, such as the left parahippocampal region. The study demonstrates the effectiveness of the proposed model in AD diagnosis and offers insight into the underlying pathogenesis of the disease.
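To make the described architecture concrete, the following is a minimal sketch of a two-branch 3DCNN feeding a Transformer encoder that fuses sMRI and PET features, in the spirit of the abstract. It is not the authors' implementation: the layer widths, depths, input resolution, fusion strategy, and the use of a standard nn.TransformerEncoder in place of the paper's "improved Transformer" are all assumptions for illustration.

```python
# Sketch only: assumed layer sizes and a standard Transformer encoder stand in
# for the paper's improved Transformer; the real model's details differ.
import torch
import torch.nn as nn


class CNN3DBranch(nn.Module):
    """Small 3D CNN that maps one modality volume to a sequence of patch features."""

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=3, stride=2, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1), nn.BatchNorm3d(64), nn.ReLU(),
            nn.Conv3d(64, embed_dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (B, 1, D, H, W) -> (B, C, d, h, w) -> (B, d*h*w, C): one token per spatial cell
        f = self.features(x)
        return f.flatten(2).transpose(1, 2)


class MultimodalADClassifier(nn.Module):
    """Concatenates sMRI and PET token sequences so the Transformer can model
    global correlations across both modalities before classification."""

    def __init__(self, embed_dim: int = 128, num_classes: int = 2):
        super().__init__()
        self.mri_branch = CNN3DBranch(embed_dim)
        self.pet_branch = CNN3DBranch(embed_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=4, dim_feedforward=256, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, mri: torch.Tensor, pet: torch.Tensor) -> torch.Tensor:
        tokens = torch.cat([self.mri_branch(mri), self.pet_branch(pet)], dim=1)
        fused = self.transformer(tokens)           # cross-modality self-attention
        return self.classifier(fused.mean(dim=1))  # pooled fused representation


if __name__ == "__main__":
    model = MultimodalADClassifier()
    mri = torch.randn(2, 1, 64, 64, 64)   # synthetic sMRI volumes (assumed size)
    pet = torch.randn(2, 1, 64, 64, 64)   # synthetic PET volumes (assumed size)
    print(model(mri, pet).shape)           # torch.Size([2, 2])
```

The key design point illustrated here is that each modality keeps its own 3D convolutional feature extractor, while fusion happens at the token level inside the Transformer, which attends across sMRI and PET features jointly rather than concatenating pooled vectors.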