Multi-feature Fusion Deep Network for Skin Disease Diagnosis


25 March 2024 | Ajay Krishan Gairola¹² · Vidit Kumar² · Ashok Kumar Sahoo¹ · Manoj Diwakar²³ · Prabhishek Singh³ · Deepak Garg⁴
This paper presents a multi-feature fusion deep network for skin disease diagnosis. The proposed model, the Fully Fused Network (FFN), incorporates an improved single block (ISB) and an improved fusion block (IFB) to achieve optimal performance. The model is built on a convolutional neural network (CNN) for multi-class recognition of skin images: the ISB segments diseased regions in skin images, while the IFB enhances the effectiveness of the fused network. The model is evaluated on a proprietary dataset (Skin_disease_v1) and on the publicly available ISIC2016, ISIC2017, and HAM10000 datasets. The highest accuracies, achieved on HAM10000, were 86% for the ISB (with ResNet101V2), 90% for the IFB (with ResNet50 + ResNet101V2), and 92% for the FFN (with ResNet50 + ResNet101V2) — improvements of 9.2%, 13.2%, and 15.2%, respectively, over existing methods.

The paper also reviews previous research on skin disease diagnosis, including the Fully Shared Fusion Network (MFF-Net), Hierarchy-Aware Contrastive Learning with Late Fusion (HAC-LF), and a co-attention block with a cross-modal attention mechanism. The authors propose an attention fusion (AF) block with an attentional feature fusion method to improve the fusion of dermoscopy and clinical image features; the AF block dynamically selects the best fusion ratios to enable pixel-wise multimodal fusion, yielding more detailed fused features. The paper further discusses a Gated Fusion Attention Network (GFANet) that uses a Gated Convolutional Fusion (GCF) module to improve segmentation performance and aid network convergence. Together, these methods aim to enhance the accuracy and efficiency of skin disease diagnosis through advanced deep learning techniques.
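The pixel-wise attentional fusion described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: it assumes two aligned feature maps (one from dermoscopy, one from a clinical image) and substitutes a simple sigmoid gate for the learned attention that would produce the per-pixel fusion ratios in the actual AF block.

```python
import numpy as np

def attention_fusion(f_derm, f_clin):
    """Blend two feature maps of shape (H, W, C) with per-pixel ratios.

    A learned attention module would normally produce the gate; here a
    sigmoid over the feature difference stands in for it, so each pixel
    gets its own fusion ratio in (0, 1).
    """
    gate = 1.0 / (1.0 + np.exp(-(f_derm - f_clin)))  # per-pixel ratio
    return gate * f_derm + (1.0 - gate) * f_clin     # convex combination

# Toy example: two 4x4 single-channel feature maps.
derm = np.ones((4, 4, 1))
clin = np.zeros((4, 4, 1))
fused = attention_fusion(derm, clin)  # every value equals sigmoid(1) ≈ 0.731
```

Because the gate is computed per pixel rather than as a single scalar weight, different image regions can favor whichever modality carries more information there — the property the abstract attributes to the AF block.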