The paper "Poisoned Forgery Face: Towards Backdoor Attacks on Face Forgery Detection" addresses the emerging threat of backdoor attacks in face forgery detection systems. The authors, Jiawei Liang and colleagues from Sun Yat-Sen University, National University of Singapore, Beihang University, and Nanyang Technological University, introduce a novel framework called *Poisoned Forgery Face* (PFF) to enable clean-label backdoor attacks on face forgery detectors. The PFF framework involves constructing a scalable trigger generator and using a novel convolving process to generate translation-sensitive trigger patterns. Additionally, a relative embedding method based on landmark-based regions is employed to enhance the stealthiness of the poisoned samples.
The paper highlights two main challenges in mounting such attacks: backdoor label conflict and trigger pattern stealthiness. To address them, PFF maximizes the discrepancy between the original trigger and its transformed (translated) versions, ensuring that the backdoor remains effective against both deepfake artifact detection and blending artifact detection methods. Extensive experiments show that PFF outperforms existing backdoor attacks in attack success rate while keeping the trigger less visible, and it remains effective against several backdoor defenses, making it a significant contribution to the security of face forgery detection.
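The summary does not spell out how the translation-sensitivity objective is computed. The PyTorch sketch below is an illustrative approximation only: it enumerates small spatial shifts directly rather than using the convolution-based formulation described by the authors, and `max_shift` and the L1 discrepancy measure are assumptions.

```python
import torch
import torch.nn.functional as F

def translation_sensitivity_loss(trigger, max_shift=4):
    """Encourage the trigger to differ strongly from its translated copies.

    trigger   : 1x3xHxW tensor produced by the trigger generator
    max_shift : largest pixel offset considered (assumed value)

    Returns the negative mean L1 discrepancy between the trigger and its
    shifted versions; minimizing this loss maximizes that discrepancy.
    """
    discrepancies = []
    for dx in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            if dx == 0 and dy == 0:
                continue
            # Shift the trigger by (dy, dx) with zero padding, then crop back.
            shifted = F.pad(trigger, (max_shift, max_shift, max_shift, max_shift))
            shifted = torch.roll(shifted, shifts=(dy, dx), dims=(2, 3))
            shifted = shifted[..., max_shift:-max_shift, max_shift:-max_shift]
            discrepancies.append((trigger - shifted).abs().mean())
    return -torch.stack(discrepancies).mean()
```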