Gradient Alignment for Cross-Domain Face Anti-Spoofing


12 Mar 2024 | Binh M. Le, Simon S. Woo*
This paper introduces GAC-FAS, a novel learning objective for cross-domain face anti-spoofing (FAS) that encourages the model to converge towards an optimal flat minimum without requiring additional learning modules. Unlike conventional sharpness-aware minimizers, GAC-FAS identifies ascending points for each domain and regulates the generalization gradient updates at these points so that they align coherently with the empirical risk minimization (ERM) gradient update. This ensures that the model converges to an optimal flat minimum and remains robust against domain shifts.

The method is motivated by recent advances in Sharpness-Aware Minimization (SAM), which offers a promising alternative to ERM for seeking generalizable minima. Our objective for domain generalization (DG) in FAS is carefully modulated in light of the limitations of current SAM variants: when SAM is applied to an entire dataset, it may produce biased updates because a particular domain dominates, and when applied to individual domains, it may generate inconsistent gradients. Moreover, SAM's generalization gradient updates tend to yield a model that is robust to many forms of noise, including label noise and adversarial noise, whereas our primary focus is on addressing domain shifts in the context of DG for FAS.

We therefore propose two essential conditions for DG in FAS. First, the objective should seek an optimal minimum that is both flat and low in training loss. Second, the SAM generalization gradient updates, derived at ascending points (defined in Sec. 3.2) for each domain, should be coherently aligned with each other and with the ERM gradient update, as illustrated in Fig. 1. This dual approach enables the model to settle into a more stable local minimum and become more robust to domain shifts across different face spoofing datasets.
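The two ingredients described above, per-domain ascending points and alignment of the resulting gradients with the ERM gradient, can be sketched as follows. This is a minimal illustration on hypothetical toy quadratic losses (the matrices `A_k` and centers `c_k` are assumptions for demonstration, not from the paper), not the paper's actual implementation.

```python
import numpy as np

def domain_loss_grad(w, A, c):
    """Gradient of a toy quadratic domain loss L_k(w) = 0.5 (w - c)^T A (w - c)."""
    return A @ (w - c)

def sam_ascending_point(w, grad, rho=0.05):
    """SAM's ascent step: perturb w along the normalized gradient by radius rho."""
    return w + rho * grad / (np.linalg.norm(grad) + 1e-12)

def cosine(u, v):
    """Cosine similarity, used here as a simple alignment measure."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

rng = np.random.default_rng(0)
w = rng.normal(size=3)  # current model parameters (toy, 3-dimensional)

# Two hypothetical source "domains", each with its own loss landscape.
domains = [(np.diag([1.0, 2.0, 3.0]), np.zeros(3)),
           (np.diag([2.0, 1.0, 1.0]), 0.1 * np.ones(3))]

# ERM gradient: average of per-domain gradients at the current point w.
erm_grad = np.mean([domain_loss_grad(w, A, c) for A, c in domains], axis=0)

# Per-domain SAM generalization gradients, each evaluated at that
# domain's own ascending point.
sam_grads = []
for A, c in domains:
    g = domain_loss_grad(w, A, c)
    w_asc = sam_ascending_point(w, g)
    sam_grads.append(domain_loss_grad(w_asc, A, c))

# Alignment scores in [-1, 1]; the objective described above encourages
# these to be high, i.e. updates coherent across domains and with ERM.
alignments = [cosine(g, erm_grad) for g in sam_grads]
print([round(a, 3) for a in alignments])
```

In a real training loop the gradients would come from backpropagation through a network, and the alignment term would enter the loss rather than just being measured; the sketch only shows the geometry of the two conditions.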
Our comprehensive experiments on benchmark datasets under various settings, including leave-one-out, limited source domains, and performance upon convergence, demonstrate the superiority of our method over current state-of-the-art (SoTA) baselines. The main contributions of our work are summarized as follows: 1) We offer a new perspective for cross-domain FAS, shifting the focus from learning domain-invariant features to finding an optimal flat minimum, which significantly improves generalization and robustness to domain shifts. 2) We propose a novel training objective that self-regulates the generalization gradient updates at ascending points to coherently align with the ERM gradient update, benefiting DG in FAS. 3) We demonstrate that our approach outperforms well-known baselines in both snapshot and convergence performance across popular FAS evaluation protocols.