A lightweight feature extraction technique for deepfake audio detection is proposed. The method uses a modified ResNet50 to extract features from audio Mel spectrograms, followed by Linear Discriminant Analysis (LDA) for dimensionality reduction. The selected features are then used to train machine learning models, including Support Vector Machine (SVM), Random Forest (RF), K-Nearest Neighbour (KNN), and Naive Bayes (NB) classifiers. The ASVspoof 2019 Logical Access (LA) partition is used for training, the ASVspoof 2021 deepfake partition for evaluation, and the DECRO dataset for testing the model under unseen noisy conditions. The proposed method outperforms traditional feature extraction methods such as Mel Frequency Cepstral Coefficients (MFCC) and Gammatone Cepstral Coefficients (GTCC), achieving an Equal Error Rate (EER) of 0.4% and an accuracy of 99.7%.

The ASVspoof datasets have been widely used in audio deepfake detection tasks, but they lack audio generated by recent algorithms, such as modern text-to-speech systems, whose output sounds more human-like. The Audio Deep Synthesis Detection Challenge (ADD 2022) has provided more realistic data and more challenging scenarios for audio deepfake detection. Various feature extraction techniques have been used, including MFCC, Perceptual Linear Prediction (PLP), Linear Frequency Cepstral Coefficients (LFCC), Rectangular Filter Cepstral Coefficients (RFCC), and Constant-Q Cepstral Coefficients (CQCC). However, these traditional features are vulnerable to channel mismatch and additive noise, and GTCC has gained popularity due to its noise resilience. Hybrid or integrated features have been shown to improve automatic speaker verification (ASV) systems.

The classification model's task is to separate genuine speech features from the artifacts introduced by synthesis. Hidden Markov Models (HMM) have been used for classification in automatic speech recognition (ASR) systems, and since speaker verification can be cast as a classification problem, machine learning methods apply directly. Gaussian Mixture Models (GMM) and SVM classifiers have been investigated, with discriminative SVM classification used to model acoustic observations.
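The pipeline described above (deep embeddings, LDA reduction, classic classifiers, EER scoring) can be sketched as follows. This is a minimal illustration, not the paper's implementation: random class-shifted Gaussian vectors stand in for the modified-ResNet50 Mel-spectrogram embeddings, and the 2048-dimension choice simply mirrors ResNet50's pooled output size.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d = 400, 2048                       # 2048-dim, matching ResNet50's pooled output
y = rng.integers(0, 2, size=n)         # 0 = bona fide, 1 = spoofed
# Placeholder "embeddings": class-shifted Gaussians instead of real network output
X = rng.normal(size=(n, d)) + 0.5 * y[:, None]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

# For a two-class problem, LDA yields at most one discriminant component
lda = LinearDiscriminantAnalysis(n_components=1).fit(X_tr, y_tr)
Z_tr, Z_te = lda.transform(X_tr), lda.transform(X_te)

def compute_eer(labels, scores):
    """EER: the operating point where false-accept and false-reject rates meet."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    i = np.nanargmin(np.abs(fnr - fpr))
    return (fpr[i] + fnr[i]) / 2.0

svm = SVC().fit(Z_tr, y_tr)
print("SVM EER:", compute_eer(y_te, svm.decision_function(Z_te)))
for clf in (RandomForestClassifier(random_state=0), KNeighborsClassifier(), GaussianNB()):
    print(type(clf).__name__, "accuracy:", clf.fit(Z_tr, y_tr).score(Z_te, y_te))
```

On real data the embeddings would come from the network's penultimate layer; everything downstream of `X` stays the same.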
These models are suitable for processing speech-related data but are not effective for nonlinear data. Error-Correcting Output Codes (ECOC) are used to combine binary classifiers into a multi-class classifier.
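The ECOC scheme can be sketched with scikit-learn's OutputCodeClassifier, which trains several binary SVMs behind a code matrix and decodes their joint output into a class label. The four-class toy dataset below is purely illustrative (the detection task itself is binary):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import SVC

# Synthetic four-class problem standing in for a multi-class audio task
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# code_size=2.0 -> int(n_classes * 2.0) = 8 binary SVMs in the ensemble
ecoc = OutputCodeClassifier(estimator=SVC(), code_size=2.0, random_state=0)
ecoc.fit(X_tr, y_tr)
print("binary learners:", len(ecoc.estimators_))
print("test accuracy:", ecoc.score(X_te, y_te))
```

Each binary learner sees a different bipartition of the classes, and prediction picks the class whose codeword is nearest to the vector of binary outputs, which gives the ensemble some robustness to individual classifier errors.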