2024 | Mohammad Talebzadeh, Abolfazl Sodagartoji, Zahra Moslemi, Sara Sedighi, Behzad Kazemi, Faezeh Akbari
This paper addresses the challenge of detecting retinal diseases using Optical Coherence Tomography (OCT) images, particularly in scenarios with limited data. The authors propose a novel deep triplet network that incorporates a conditional loss function to enhance the model's accuracy and address overfitting issues. The network is inspired by pre-trained models like VGG16 and is trained on a public OCT dataset containing 84,000 images categorized into four classes: choroidal neovascularization (CNV), diabetic macular edema (DME), drusen, and normal. The deep triplet network uses a Siamese neural network architecture to process pairs of images and a contrastive loss function to evaluate the similarity between feature embeddings. The conditional loss function introduces a penalty term for underperforming triplets and rewards optimal triplets, improving training efficiency and convergence. Experimental results show that the proposed model achieves an overall accuracy of 92.81%, outperforming state-of-the-art models such as DenseNet, InceptionV3, ResNet152, and ResNet50. The study demonstrates the effectiveness of the deep triplet network in handling limited data and improving the accuracy of retinal disease classification.
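The summary's core idea, a triplet loss augmented with a conditional penalty for underperforming triplets and a reward for optimal ones, can be sketched as follows. This is only an illustrative interpretation: the paper's exact formulation, margin, and penalty/reward values are not given here, so the `margin`, `penalty`, and `reward` parameters below are assumptions.

```python
import numpy as np

def conditional_triplet_loss(anchor, positive, negative,
                             margin=1.0, penalty=0.5, reward=0.1):
    """Hypothetical sketch of a triplet loss with conditional terms.

    anchor, positive, negative: 1-D embedding vectors (np.ndarray).
    Returns the standard triplet loss, plus a penalty when the triplet
    underperforms (loss exceeds the margin) and minus a small reward
    when the triplet is already optimal (loss is zero).
    """
    d_ap = np.linalg.norm(anchor - positive)   # anchor-positive distance
    d_an = np.linalg.norm(anchor - negative)   # anchor-negative distance
    base = max(d_ap - d_an + margin, 0.0)      # standard triplet loss
    if base > margin:     # underperforming triplet: add a penalty term
        return base + penalty
    if base == 0.0:       # optimal triplet: subtract a small reward
        return -reward
    return base
```

In this reading, the penalty pushes the optimizer harder on hard triplets while the reward lowers the aggregate loss once a triplet is well separated, which is one plausible mechanism for the faster convergence the authors report.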