High Throughput Deep Learning Detection of Mitral Regurgitation


February 12, 2024 | Amey Vrudhula, B.S.E. a,b, Grant Duffy, B.S. a, Milos Vukadinovic, B.S. a,c, David Liang, M.D., Ph.D. d, Susan Cheng, M.D., M.M.Sc., M.P.H. a, David Ouyang, M.D. a,e
This study presents the development and validation of a fully automated deep learning pipeline for identifying clinically significant mitral regurgitation (MR) from transthoracic echocardiography studies. The pipeline was trained and tested on a large dataset of 58,614 studies (2,587,538 videos) from Cedars-Sinai Medical Center (CSMC) and evaluated in an external cohort from Stanford Healthcare (SHC). The view classifier, which identifies apical-4-chamber view videos with color Doppler, demonstrated high accuracy (AUC of 0.998) and specificity (0.999) in CSMC and 99.6% sensitivity and 99.9% specificity in SHC. The MR severity model, which assesses the severity of MR, showed strong performance with AUCs of 0.916 for moderate or severe MR and 0.934 for severe MR in CSMC, and 0.951 and 0.969, respectively, in SHC. The model's performance was consistent across both institutions, indicating its generalizability. The study highlights the potential of deep learning in automating the detection and severity assessment of MR, which could aid in screening and surveillance in both clinical and low-resource settings.
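The abstract reports model performance as AUC, sensitivity, and specificity. For readers unfamiliar with these metrics, the sketch below shows how they are computed for a binary classifier; the labels and scores are synthetic illustrations, not data from the study.

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)
    for binary ground-truth labels and thresholded predictions (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    """AUC via the Mann-Whitney U formulation: the probability that a
    randomly chosen positive case scores higher than a randomly chosen
    negative case (ties count as half)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic example: 2 positive and 2 negative cases.
sens, spec = sensitivity_specificity([1, 1, 0, 0], [1, 0, 0, 0])
roc_auc = auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
print(sens, spec, roc_auc)  # 0.5 1.0 0.75
```

An AUC of 0.998 for the view classifier therefore means the model almost always ranks a true apical-4-chamber color Doppler video above a non-matching video, independent of any particular decision threshold.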