Artificial intelligence-enabled deep learning model for multimodal biometric fusion

8 February 2024 | Haewon Byeon¹ · Vikas Raina² · Mukta Sandhu³ · Mohammad Shabaz⁴ · Ismail Keshta⁵ · Mukesh Soni⁶ · Khaled Matrouk⁷ · Pavitar Parkash Singh⁸ · T. R. Vijaya Lakshmi⁹
Recent advances in biometrics based on biomedical information include the development of an artificial intelligence-enabled deep learning model for multimodal biometric fusion. This model improves accuracy and generalization by integrating various fusion methods—pixel-level, feature-level, and score-level—through deep neural networks. At the pixel level, spatial, channel, and intensity fusion strategies are used to optimize the fusion process. At the feature level, modality-specific branches and jointly optimized representation layers establish robust dependencies between modalities through backpropagation. Finally, at the score level, intelligent fusion techniques, such as Rank-1 decision and modality evaluation, are used to blend matching scores. A virtual homogeneous multimodal dataset was constructed from simulated operational data to validate the model's effectiveness. Experimental results showed significant improvements over single-modal algorithms, with a 2.2 percentage point increase in accuracy through multimodal feature fusion. The score fusion method surpassed single-modal algorithms by 3.5 percentage points, achieving a retrieval accuracy of 99.6%.

Biometrics, including fingerprints, faces, DNA, and voiceprints, have become crucial tools in crime investigation due to their uniqueness and specificity. However, single-modal biometric recognition algorithms face challenges such as data quality issues and security limitations. To address these challenges, multimodal fusion recognition algorithms have emerged as a way to improve both recognition performance and security.

Multimodal big data have characteristics such as volume, diversity, velocity, and veracity. Among these, diversity is the most distinctive. Each modality has an independent distribution, and multimodal big data are composed of many modalities that each describe a portion of the same objects of interest.
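The score-level fusion described above can be sketched as a weighted blend of per-modality matching scores followed by a Rank-1 decision. This is a minimal illustration, not the paper's implementation: the score values, weights, and min-max normalization scheme are assumptions for the example.

```python
# Hedged sketch of score-level fusion: normalize each modality's raw
# matching scores, combine them with a weighted sum, then take the
# Rank-1 (highest fused score) candidate. Weights and scores below are
# illustrative only.

def min_max_normalize(scores):
    """Map raw matcher scores into [0, 1] so modalities are comparable."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(modality_scores, weights):
    """Weighted sum of per-modality normalized scores for each candidate.

    modality_scores: dict of modality name -> list of raw scores,
                     one score per enrolled candidate.
    weights:         dict of modality name -> fusion weight.
    """
    normalized = {m: min_max_normalize(s) for m, s in modality_scores.items()}
    n_candidates = len(next(iter(modality_scores.values())))
    return [
        sum(weights[m] * normalized[m][i] for m in modality_scores)
        for i in range(n_candidates)
    ]

def rank1_identity(fused):
    """Rank-1 decision: index of the candidate with the highest fused score."""
    return max(range(len(fused)), key=fused.__getitem__)

# Example: three enrolled candidates, two modalities on different scales.
scores = {
    "face":        [0.62, 0.91, 0.40],
    "fingerprint": [55.0, 80.0, 30.0],
}
weights = {"face": 0.6, "fingerprint": 0.4}
fused = fuse_scores(scores, weights)
print(rank1_identity(fused))  # candidate 1 scores highest in both modalities
```

Normalization matters here because raw matcher scores from different modalities live on different scales; without it, the fingerprint matcher's larger numbers would dominate the sum regardless of the weights.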
Intricate relationships exist between modalities, and many multimodal applications perform better when the fusion representations hidden within and across modalities are fully modeled. Several modalities, including fingerprints, retinas, and finger veins, are used in multimodal authentication systems to extract features. Both key creation and encryption employ the RSA algorithm.
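The text notes that RSA handles key creation and encryption in the authentication pipeline. The toy sketch below shows the mechanics with small textbook primes; it is not the paper's implementation, and a production system would use keys of at least 2048 bits with proper padding (e.g. OAEP) rather than raw textbook RSA.

```python
# Hedged sketch of textbook RSA key creation and encryption. The primes
# and the plaintext integer are illustrative only.

def rsa_keygen(p, q, e=17):
    """Create an RSA key pair from two primes (textbook version)."""
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)          # modular inverse of e mod phi (Python 3.8+)
    return (e, n), (d, n)        # (public key, private key)

def rsa_encrypt(m, public_key):
    e, n = public_key
    return pow(m, e, n)          # c = m^e mod n

def rsa_decrypt(c, private_key):
    d, n = private_key
    return pow(c, d, n)          # m = c^d mod n

# Example: round-trip a small integer (standing in for, say, a quantized
# template digest) through the key pair.
public, private = rsa_keygen(61, 53)     # n = 3233
cipher = rsa_encrypt(65, public)
print(rsa_decrypt(cipher, private))      # prints 65
```

The round-trip works because d is chosen so that e·d ≡ 1 (mod φ(n)), making m^(e·d) ≡ m (mod n) for messages smaller than n.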