This paper introduces a novel threat to face forgery detection posed by backdoor attacks. The authors propose a clean-label backdoor attack framework called Poisoned Forgery Face (PFF), which enables attackers to deceive face forgery detectors into misclassifying forged faces as real. The framework addresses two key challenges, backdoor label conflicts and trigger stealthiness, by generating translation-sensitive trigger patterns and embedding them with a landmark-based relative embedding scheme (both illustrated in the hedged sketches below).

The proposed method outperforms existing backdoor attack baselines in attack effectiveness (+16.39% BD-AUC) and trigger visibility (-12.65% L∞), and it remains effective under existing backdoor defenses. The experiments demonstrate that PFF succeeds against both deepfake detectors and blending-artifact detectors. The attack is evaluated on three datasets, FaceForensics++ (FF++), Celeb-DF-v2 (CDF), and DeepFakeDetection (DFD), and achieves high BD-AUC values across all of them, indicating that triggered forgeries reliably evade detection.

Stealthiness is evaluated as well: the proposed method achieves the highest PSNR and the lowest L∞ and IM-Ratio values among the compared attacks, indicating the least visible perturbations. The paper concludes that backdoor attacks pose a significant threat to face forgery detection systems and highlights the need for robust defenses against them.
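To make the translation-sensitive trigger concrete, below is a minimal PyTorch-style sketch of one way such a pattern could be optimized. The objective (pushing the trigger away from its shifted copies), the shift range `max_shift`, and the visibility budget `eps` are assumptions for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def translation_sensitivity_loss(trigger: torch.Tensor, max_shift: int = 3) -> torch.Tensor:
    """Push the trigger away from its spatially translated copies.

    trigger: (C, H, W) tensor. The negative mean-squared distance to each
    shifted copy is an illustrative stand-in for the paper's objective.
    """
    loss = torch.zeros((), dtype=trigger.dtype)
    for dx in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            if dx == 0 and dy == 0:
                continue
            shifted = torch.roll(trigger, shifts=(dy, dx), dims=(1, 2))
            # Minimizing -MSE maximizes the distance to the shifted copy.
            loss = loss - F.mse_loss(trigger, shifted)
    return loss

# Optimize the trigger under an L-infinity visibility budget (eps is assumed).
eps = 8 / 255
trigger = torch.empty(3, 32, 32).uniform_(-eps, eps).requires_grad_(True)
opt = torch.optim.Adam([trigger], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    translation_sensitivity_loss(trigger).backward()
    opt.step()
    with torch.no_grad():
        trigger.clamp_(-eps, eps)  # keep the perturbation nearly invisible
```

The intuition, per the summary, is that a trigger which degrades under translation avoids contaminating the fake samples synthesized by blending-based detectors, which is what resolves the backdoor label conflict.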
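The landmark-based relative embedding can likewise be pictured as blending the trigger only in windows anchored to detected facial landmarks, so the perturbation follows the face. The patch layout, the blending strength `alpha`, and the landmark detector below are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

def embed_trigger_at_landmarks(image: np.ndarray,
                               trigger: np.ndarray,
                               landmarks: np.ndarray,
                               alpha: float = 0.05) -> np.ndarray:
    """Blend a small trigger patch around each facial landmark.

    image:     (H, W, 3) uint8 face crop
    trigger:   (h, w, 3) float pattern in [-1, 1]
    landmarks: (N, 2) array of (x, y) coordinates from any face-landmark
               detector (e.g. dlib); the detector choice is an assumption
    alpha:     blending strength relative to the 8-bit pixel range (assumed)
    """
    out = image.astype(np.float32)
    H, W = image.shape[:2]
    h, w = trigger.shape[:2]
    for x, y in landmarks.astype(int):
        # Clip each patch window to the image bounds before blending.
        y0, x0 = max(0, y - h // 2), max(0, x - w // 2)
        y1, x1 = min(H, y0 + h), min(W, x0 + w)
        out[y0:y1, x0:x1] += alpha * 255.0 * trigger[: y1 - y0, : x1 - x0]
    return np.clip(out, 0, 255).astype(np.uint8)
```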
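For the stealthiness comparison, PSNR and L∞ are standard image-distance measures and can be computed directly as below; IM-Ratio is specific to the paper's evaluation protocol and is not reproduced here.

```python
import numpy as np

def psnr(clean: np.ndarray, poisoned: np.ndarray) -> float:
    """Peak signal-to-noise ratio between a clean and a poisoned uint8 image."""
    mse = np.mean((clean.astype(np.float64) - poisoned.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def linf(clean: np.ndarray, poisoned: np.ndarray) -> float:
    """Maximum absolute per-pixel difference (the L-infinity norm)."""
    return float(np.max(np.abs(clean.astype(np.int16) - poisoned.astype(np.int16))))
```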