This paper introduces a deep learning-based method for distinguishing AI-generated fake videos (DeepFake videos) from real videos. The method exploits the distinctive artifacts left by the affine face warping step of the DeepFake generation pipeline, which is needed to match the synthesized face to the pose and size of the face in the source video. These artifacts, such as resolution inconsistencies between the warped face region and its surroundings, can be effectively captured by convolutional neural networks (CNNs). Unlike previous methods, which require large numbers of real and DeepFake images for training, this approach simulates the warping artifacts directly on real images using simple image processing operations, saving the time and resources otherwise needed to generate DeepFake training examples. The method is evaluated on two DeepFake datasets, UADFV and DeepfakeTIMIT, where it outperforms other state-of-the-art methods, particularly on videos drawn from different sources and captured under varied conditions. The authors plan to further improve the method's robustness and efficiency by exploring dedicated network structures and by evaluating performance under multiple levels of video compression.
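To make the simulation step concrete, below is a minimal sketch of how such negative (fake-like) training examples could be produced from real images. It assumes OpenCV and a known face bounding box; the function name and the specific scale and blur parameters are illustrative choices, not the paper's exact pipeline, which additionally aligns the face with an affine transform and randomizes the degradation parameters.

```python
import cv2
import numpy as np

def simulate_warp_artifacts(image: np.ndarray,
                            face_box: tuple,
                            scale: float = 0.25,
                            blur_sigma: float = 3.0) -> np.ndarray:
    """Create a pseudo-fake training example by degrading the face region.

    The face is downscaled, Gaussian-blurred, upscaled back, and pasted
    into the original frame, mimicking the resolution mismatch that the
    affine warping step of DeepFake generation leaves behind.
    Note: face_box, scale, and blur_sigma are hypothetical parameters
    chosen for illustration.
    """
    x, y, w, h = face_box
    face = image[y:y + h, x:x + w]

    # Downscale then re-upscale to introduce a resolution inconsistency
    # between the face region and the surrounding context.
    small = cv2.resize(face, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_AREA)
    degraded = cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)

    # Gaussian smoothing further softens the region, as warping does.
    # ksize=(0, 0) lets OpenCV derive the kernel size from the sigma.
    degraded = cv2.GaussianBlur(degraded, (0, 0), blur_sigma)

    # Paste the degraded face back; the rest of the frame keeps its
    # original resolution, producing the artifact the CNN learns to detect.
    out = image.copy()
    out[y:y + h, x:x + w] = degraded
    return out
```

In a training loop, examples like these could be generated on the fly from real face images and labeled as fake, so no actual DeepFake videos are needed to supply negative samples.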