Real-Time Arabic Sign Language Recognition Using a Hybrid Deep Learning Model

24 April 2024 | Talal H. Noor, Ayman Noor, Ahmed F. Alharbi, Ahmed Faisal, Rakan Alrashidi, Ahmed S. Alsaeedi, Ghada Alharbi, Tawfeeq Alsanoosy, Abdullah Alsaeedi
This paper addresses the significant shortage of sign language interpreters in Saudi Arabia, particularly for Arabic Sign Language (ArSL), by developing a hybrid deep learning model for real-time ArSL recognition. The model combines a Convolutional Neural Network (CNN), which extracts spatial features from images, with a Long Short-Term Memory (LSTM) network, which captures the spatio-temporal characteristics of videos. The dataset consists of 4000 images of static gesture words and 500 videos of dynamic gesture words. The CNN and LSTM classifiers achieve accuracy rates of 94.40% and 82.70%, respectively, demonstrating the model's effectiveness in enhancing communication accessibility for the hearing-impaired community. The paper highlights the importance of deep learning in improving the quality of life for the deaf community in Saudi Arabia.
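The CNN-LSTM hybrid described above can be sketched as follows: a CNN encodes each video frame into a feature vector, and an LSTM consumes the per-frame features to classify the dynamic gesture. This is a minimal illustrative sketch in PyTorch; the layer sizes, 30-frame clip length, and 10 gesture classes are assumptions for demonstration, not details taken from the paper.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Illustrative CNN-LSTM: per-frame spatial features -> temporal model."""

    def __init__(self, num_classes=10, hidden=64):
        super().__init__()
        # CNN backbone: extracts a 32-dim spatial feature vector per frame
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (batch*frames, 32)
        )
        # LSTM: models temporal dependencies across the frame features
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, clips):
        # clips: (batch, frames, channels, height, width)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1))   # fold frames into batch dim
        feats = feats.view(b, t, -1)            # (batch, frames, 32)
        out, _ = self.lstm(feats)
        return self.fc(out[:, -1])              # last time step -> class logits

model = CNNLSTM()
logits = model(torch.randn(2, 30, 3, 64, 64))   # 2 clips of 30 RGB frames
print(logits.shape)                             # (2, num_classes)
```

For the static gesture words, the paper's CNN classifier would correspond to using only a backbone like `self.cnn` plus a classification head on single images, with no LSTM stage.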