Unified Physical-Digital Face Attack Detection


31 Jan 2024 | Hao Fang, Ajian Liu, Haocheng Yuan, Junze Zheng, Dingheng Zeng, Yanhong Liu, Jiankang Deng, Sergio Escalera, Xiaoming Liu, Jun Wan, Zhen Lei
This paper proposes a unified physical-digital attack detection framework, UniAttackDetection, and a corresponding dataset, UniAttackData, to address the challenge of detecting both physical and digital face attacks in face recognition systems. The main obstacle is the lack of a unified dataset that covers both attack types with consistent identity information, which limits the development of effective detection models. To close this gap, the authors collect a large-scale dataset, UniAttackData, containing 1,800 subjects with 2 physical and 12 digital attacks, for a total of 29,706 videos. The dataset offers a comprehensive set of attack types, advanced forgery methods, and diverse evaluation protocols.

The proposed UniAttackDetection framework is built on Vision-Language Models (VLMs) and comprises three modules: the Teacher-Student Prompts (TSP) module, which extracts unified and specific knowledge; the Unified Knowledge Mining (UKM) module, which captures a comprehensive feature space; and the Sample-Level Prompt Interaction (SLPI) module, which grasps sample-level semantics. Together, these modules form a robust unified attack detection framework (an illustrative sketch of the underlying prompt-learning idea is given below).

Extensive experiments on UniAttackData and three other datasets demonstrate the superiority of the approach for unified face attack detection: it outperforms existing methods in accuracy, AUC, and EER, and it learns a compact, continuous feature space for attack categories that improves generalization. The experiments further show that the method detects both physical and digital attacks even when the attack types are unseen during training. The dataset and framework together constitute a significant contribution to face recognition and anti-spoofing research.
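To make the prompt-learning idea behind the TSP and UKM modules more concrete, here is a minimal, self-contained PyTorch sketch. It is not the authors' implementation: the encoder, feature dimensions, number of prototypes, and the 0.1 loss weight are all illustrative assumptions. The "teacher" features stand in for frozen text-encoder embeddings of hand-written class descriptions (unified knowledge), while the learnable "student" prompts adapt to the training data (specific knowledge) and are regularized to stay close to the teacher.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptLearnerSketch(nn.Module):
    """Toy stand-in for teacher-student prompt learning (not the paper's code)."""

    def __init__(self, feat_dim=512, num_classes=2):
        super().__init__()
        # Frozen "teacher" prototypes: in a real system these would be
        # text-encoder features of hand-written class descriptions.
        self.register_buffer(
            "teacher", F.normalize(torch.randn(num_classes, feat_dim), dim=-1)
        )
        # Learnable "student" prompts, one prototype per class (live / attack).
        self.student = nn.Parameter(torch.randn(num_classes, feat_dim) * 0.02)
        # Placeholder image encoder; a real system would use a frozen ViT backbone.
        self.image_encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim)
        )
        self.logit_scale = nn.Parameter(torch.tensor(4.0))

    def forward(self, images):
        img = F.normalize(self.image_encoder(images), dim=-1)
        stu = F.normalize(self.student, dim=-1)
        # Cosine-similarity classifier between image features and student prompts.
        logits = self.logit_scale.exp() * img @ stu.t()
        # Keep the student prompts close to the teacher's unified knowledge.
        distill = (1.0 - (stu * self.teacher).sum(dim=-1)).mean()
        return logits, distill


# One training step on random data, only to show how the two losses combine.
model = PromptLearnerSketch()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 2, (8,))  # 0 = live face, 1 = attack
logits, distill = model(images)
loss = F.cross_entropy(logits, labels) + 0.1 * distill  # 0.1 is an assumed weight
loss.backward()
optim.step()
```

The SLPI module additionally models sample-level interactions between image features and prompts; that part is omitted here because its exact design is not described in this summary.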
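The results above are reported in terms of accuracy, AUC, and EER. For readers unfamiliar with the Equal Error Rate, the sketch below shows one standard way to compute AUC and an approximate EER from per-sample attack scores; it is a generic metric computation, not code or data from the paper.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def auc_and_eer(labels, scores):
    """Compute AUC and an approximate Equal Error Rate.

    labels: 1 for attack, 0 for live; scores: higher means "more likely attack".
    The EER is taken at the ROC point where the false-positive rate and the
    false-negative rate are closest to each other.
    """
    auc = roc_auc_score(labels, scores)
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    eer = fpr[np.argmin(np.abs(fpr - fnr))]
    return auc, eer

# Toy example with made-up scores (not results from the paper).
labels = np.array([0, 0, 0, 1, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2])
print(auc_and_eer(labels, scores))
```

Higher AUC and lower EER indicate better separation between live faces and attacks.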