Translating color fundus photography to indocyanine green angiography using deep-learning for age-related macular degeneration screening

2024 | Ruoyu Chen, Weiyi Zhang, Fan Song, Honghua Yu, Dan Cao, Yingfeng Zheng, Mingguang He, Danli Shi
This study developed a deep-learning model based on generative adversarial networks (GANs) to translate color fundus photography (CF) into indocyanine green angiography (ICGA) images for age-related macular degeneration (AMD) screening. The model was trained on 99,002 CF-ICGA pairs from a tertiary center and evaluated on its ability to generate realistic ICGA images.

Objective evaluation with mean absolute error (MAE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and multi-scale structural similarity (MS-SSIM) indicated high-quality image generation, with SSIM ranging from 0.57 to 0.65, although MAE values still reflected measurable differences between real and generated images. Subjective evaluation by two experienced ophthalmologists rated the generated images favorably, with scores ranging from 1.46 to 2.74 on a five-point scale. The model's ability to improve AMD classification was assessed on an external dataset of 13,887 cases, where adding generated ICGA images increased the area under the ROC curve (AUC) from 0.93 to 0.97. These results suggest that CF-to-ICGA translation can serve as a valuable cross-modal data augmentation method, enhancing the accuracy of AMD screening models; however, further clinical validation is needed before widespread clinical application.
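To make the objective evaluation concrete, the sketch below shows how MAE, PSNR, and a simplified global SSIM can be computed between a real and a generated image. This is an illustrative NumPy implementation under standard metric definitions, not the authors' evaluation code; published results such as the paper's typically use windowed SSIM (e.g. scikit-image), and the synthetic arrays here are placeholders for real CF/ICGA data.

```python
import numpy as np

def mae(a: np.ndarray, b: np.ndarray) -> float:
    """Mean absolute error between two images."""
    return float(np.mean(np.abs(a - b)))

def psnr(a: np.ndarray, b: np.ndarray, data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB (higher is better)."""
    mse = np.mean((a - b) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def global_ssim(a: np.ndarray, b: np.ndarray, data_range: float = 255.0) -> float:
    """Simplified SSIM computed over the whole image (no sliding window),
    using the standard stabilization constants C1 and C2."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2))
                 / ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))

# Placeholder "real" and "generated" images: noise stands in for GAN output.
rng = np.random.default_rng(0)
real = rng.integers(0, 256, size=(256, 256)).astype(float)
generated = np.clip(real + rng.normal(0, 10, real.shape), 0, 255)

print(f"MAE:  {mae(real, generated):.2f}")
print(f"PSNR: {psnr(real, generated):.2f} dB")
print(f"SSIM: {global_ssim(real, generated):.3f}")
```

An identical image pair yields MAE 0 and SSIM 1, so these metrics give a direct scale for reading the paper's reported SSIM range of 0.57 to 0.65.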