CF-DAN: Facial-expression recognition based on cross-fusion dual-attention network

June 2024 | Fan Zhang, Gongguan Chen, Hua Wang, Caiming Zhang
The paper "CF-DAN: Facial-expression recognition based on cross-fusion dual-attention network" by Fan Zhang, Gongguan Chen, Hua Wang, and Caiming Zhang addresses the challenges of facial-expression recognition (FER) in real-world environments, such as face occlusion and image blurring. The proposed method, CF-DAN, integrates a cross-fusion dual-attention network, a novel $C^2$ activation function, and a closed-loop operation between self-attention distillation and residual connections. The cross-fusion dual-attention network consists of three parts: (1) a cross-fusion grouped dual-attention mechanism to refine local features and obtain global information, (2) a $C^2$ activation function construction method to improve flexibility and recognition abilities, and (3) a closed-loop operation to suppress redundant information and enhance model generalization. The model achieves recognition accuracies of 92.78%, 92.02%, and 63.58% on the RAF-DB, FERPlus, and AffectNet datasets, respectively. The study also includes ablation experiments to validate the effectiveness of each component of the model. The proposed method provides a more effective solution for FER tasks by improving recognition accuracy and reducing computational costs.The paper "CF-DAN: Facial-expression recognition based on cross-fusion dual-attention network" by Fan Zhang, Gongguan Chen, Hua Wang, and Caiming Zhang addresses the challenges of facial-expression recognition (FER) in real-world environments, such as face occlusion and image blurring. The proposed method, CF-DAN, integrates a cross-fusion dual-attention network, a novel $C^2$ activation function, and a closed-loop operation between self-attention distillation and residual connections. 
The cross-fusion dual-attention network consists of three parts: (1) a cross-fusion grouped dual-attention mechanism to refine local features and obtain global information, (2) a $C^2$ activation function construction method to improve flexibility and recognition abilities, and (3) a closed-loop operation to suppress redundant information and enhance model generalization. The model achieves recognition accuracies of 92.78%, 92.02%, and 63.58% on the RAF-DB, FERPlus, and AffectNet datasets, respectively. The study also includes ablation experiments to validate the effectiveness of each component of the model. The proposed method provides a more effective solution for FER tasks by improving recognition accuracy and reducing computational costs.
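To make the "grouped dual-attention" idea concrete, the sketch below splits a feature map's channels into groups and applies spatial self-attention to some groups and channel attention to the others before re-concatenating. This is a minimal illustration under assumed details, not the paper's actual mechanism: the group count, the channel-gating scheme, and the fusion-by-concatenation step are all simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_self_attention(x):
    # x: (N, C) — N spatial positions, C channels.
    # Plain scaled dot-product self-attention over positions.
    scores = x @ x.T / np.sqrt(x.shape[1])
    return softmax(scores, axis=-1) @ x

def channel_attention(x):
    # x: (N, C). Reweight channels with a squeeze-style gate
    # computed from the per-channel mean activation.
    gate = softmax(x.mean(axis=0))            # (C,)
    return x * gate

def grouped_dual_attention(x, groups=2):
    # Split channels into groups and alternate spatial / channel
    # attention per group; concatenating the results stands in for
    # the paper's cross-fusion step (an assumption of this sketch).
    parts = np.split(x, groups, axis=1)
    out = [spatial_self_attention(p) if i % 2 == 0 else channel_attention(p)
           for i, p in enumerate(parts)]
    return np.concatenate(out, axis=1)

feat = np.random.rand(16, 8)                  # 16 positions, 8 channels
fused = grouped_dual_attention(feat)
print(fused.shape)                            # (16, 8): shape is preserved
```

The two attention branches see complementary views of the same tensor: the spatial branch relates positions to one another, while the channel branch reweights feature dimensions, which is the intuition behind combining local refinement with global context.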