27 Feb 2024 | Li Lin, Xinan He, Yan Ju, Xin Wang, Feng Ding, Shu Hu
This paper addresses fairness generalization in deepfake detection. Recent deepfake detection models exhibit performance disparities across demographic groups, leading to unfair targeting or exclusion of certain groups. The proposed method preserves fairness across domains by jointly considering features, loss, and optimization: disentanglement learning extracts demographic features and domain-agnostic forgery features, which are then fused to encourage fair learning over a flattened loss landscape. The paper discusses the challenges of maintaining fairness in cross-domain detection, analyzes the roles of feature disentanglement and loss-landscape flattening in achieving fairness generalization, and presents a framework combining disentanglement learning, fair learning, and flatness-aware optimization. Extensive experiments on prominent deepfake datasets show that the method surpasses state-of-the-art approaches in both fairness generalization and detection accuracy during cross-domain deepfake detection. The code is available at https://github.com/Purdue-M2/Fairness-Generalization.
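To make the three ideas in the abstract concrete, the sketch below is a hypothetical, simplified PyTorch outline of (1) two encoder heads that disentangle demographic features from domain-agnostic forgery features, (2) fusion of the two branches for the real/fake classifier, and (3) a sharpness-aware (loss-landscape-flattening) update step. It is not the released code at the GitHub link above; the names (DisentangledDetector, sam_like_step), the backbone, the auxiliary group head, and the use of a SAM-style weight perturbation as the flattening mechanism are illustrative assumptions.

```python
# Hypothetical sketch, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DisentangledDetector(nn.Module):
    def __init__(self, feat_dim=128, num_groups=4):
        super().__init__()
        # Shared backbone; a real system would use a pretrained CNN (e.g., Xception).
        self.backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(256), nn.ReLU())
        # Two heads disentangle the shared representation.
        self.demographic_enc = nn.Linear(256, feat_dim)  # demographic attributes
        self.forgery_enc = nn.Linear(256, feat_dim)      # domain-agnostic forgery cues
        # Auxiliary head forcing the demographic branch to carry group information.
        self.group_head = nn.Linear(feat_dim, num_groups)
        # Real/fake classifier operates on the fused representation.
        self.detector_head = nn.Linear(2 * feat_dim, 2)

    def forward(self, x):
        h = self.backbone(x)
        z_dem = self.demographic_enc(h)
        z_forg = self.forgery_enc(h)
        fused = torch.cat([z_dem, z_forg], dim=-1)
        return self.group_head(z_dem), self.detector_head(fused)


def sam_like_step(model, loss_fn, x, y, group, base_opt, rho=0.05):
    """One flat-minimum update: perturb weights toward the local worst case,
    then descend from the perturbed point. rho is the perturbation radius."""
    base_opt.zero_grad()
    # First pass: gradients at the current weights.
    group_logits, det_logits = model(x)
    loss = loss_fn(det_logits, y) + F.cross_entropy(group_logits, group)
    loss.backward()
    grad_norm = torch.norm(torch.stack(
        [p.grad.norm() for p in model.parameters() if p.grad is not None]))
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)          # climb to the nearby worst-case point
            eps.append(e)
    base_opt.zero_grad()
    # Second pass: gradients at the perturbed weights drive the actual update.
    group_logits, det_logits = model(x)
    loss = loss_fn(det_logits, y) + F.cross_entropy(group_logits, group)
    loss.backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)      # restore the original weights before stepping
    base_opt.step()
    base_opt.zero_grad()
    return loss.item()
```

The fairness losses and exact disentanglement objectives would follow the paper; the point of the sketch is the control flow: disentangle, fuse, then take the descent step from a perturbed (worst-case) point so the optimizer settles in a flatter region of the loss surface, which is what the abstract means by a flattened loss landscape.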