Available online 9 February 2024 | Abdul Qadir, Rabbia Mahum, Mohammed A. El-Meligy, Adham E. Ragab, Abdulmalik AlSalman, Muhammad Awais
This research presents an efficient deepfake video detection method based on a hybrid deep learning approach called ResNet-Swish-BiLSTM. The method identifies deepfake videos by extracting features from successive frames with a ResNet backbone that uses the Swish activation function, then modeling temporal dependencies across frames with a BiLSTM network. Trained on the DFDC and FF++ datasets, the model outperforms existing techniques, achieving 96.23% accuracy on FF++ and 78.33% accuracy on aggregated records from FF++ and DFDC, and it remains robust to visual manipulations such as compression, noise, blurring, and rotation. An evaluation of different activation functions finds that Swish outperforms the alternatives in both accuracy and efficiency. Tested on FF++, DFDC, and Celeb-DF, the model accurately detects multiple deepfake types, including FaceSwap (FS), Face2Face (F2F), and NeuralTextures (NT), and it surpasses other deep learning models in accuracy and robustness. The study concludes that the proposed method is a reliable and efficient approach to deepfake detection, with potential applications in digital forensics and security.
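The Swish activation evaluated in the study is defined as swish(x) = x · sigmoid(βx). The minimal sketch below (not the authors' implementation; function names and the β parameter default are illustrative) shows how Swish differs from ReLU: it is smooth and non-monotonic, passing small negative values instead of clipping them to zero.

```python
import math

def sigmoid(x: float) -> float:
    """Logistic sigmoid: 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-x))

def swish(x: float, beta: float = 1.0) -> float:
    """Swish activation: x * sigmoid(beta * x).

    Smooth and non-monotonic; for large positive x it approaches x
    (like ReLU), but near zero it lets small negative values through,
    which can improve gradient flow during training.
    """
    return x * sigmoid(beta * x)

def relu(x: float) -> float:
    """Standard ReLU for comparison: max(0, x)."""
    return max(0.0, x)

# Compare the two activations around zero, where they differ most.
for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f"x={x:+.1f}  relu={relu(x):+.4f}  swish={swish(x):+.4f}")
```

Note that swish(-0.5) is slightly negative while relu(-0.5) is exactly zero; this retained negative signal is one property often cited to explain Swish's accuracy gains over ReLU in deep networks.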