8 Jul 2024 | Dimitrios Kollias, Stefanos Zafeiriou, Irene Kotsia, Abhinav Dhall, Shreya Ghosh, Chunchang Shao, and Guanyu Hu
The 7th Affective Behavior Analysis in-the-wild (ABAW) Competition, held in conjunction with ECCV 2024, addresses challenges in understanding human expressions and behaviors, which are crucial for human-centered technologies. The competition includes two challenges: Multi-Task Learning (MTL) and Compound Expression Recognition (CE). The MTL challenge involves jointly estimating valence and arousal, recognizing facial expressions, and detecting Action Units (AUs); the CE challenge focuses on recognizing seven compound expressions. The MTL challenge is based on the s-Aff-Wild2 database, while the CE challenge uses a subset of the C-EXPR-DB database. The MTL challenge requires participants to use pre-trained models and the s-Aff-Wild2 database, whereas the CE challenge allows the use of any database. The evaluation metrics for the MTL challenge are the Concordance Correlation Coefficient (CCC) for valence and arousal, the macro F1 score for expressions, and the binary F1 score for AUs; the CE challenge is evaluated by the average F1 score across the seven compound expressions. The baseline system for the MTL challenge is a VGG16 network with pre-trained weights, which provides a reference combined score on the validation set. The competition aims to promote interdisciplinary collaboration and advance human-centered technologies through research on affective behavior analysis.
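To make the MTL evaluation concrete, the sketch below computes the per-task metrics and a combined score, assuming (as in earlier ABAW rounds) that the overall MTL measure is the sum of the mean valence/arousal CCC, the macro expression F1, and the mean binary AU F1. The function names, array shapes, and the exact combination rule are illustrative assumptions, not the official scoring code.

```python
import numpy as np
from sklearn.metrics import f1_score

def ccc(x, y):
    """Concordance Correlation Coefficient between predictions x and labels y."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

def mtl_score(va_pred, va_true, expr_pred, expr_true, au_pred, au_true):
    """Assumed combined MTL metric: mean CCC over valence/arousal
    + macro F1 over expression classes + mean binary F1 over AUs."""
    # va_*: (N, 2) arrays with valence in column 0 and arousal in column 1
    ccc_va = 0.5 * (ccc(va_pred[:, 0], va_true[:, 0]) +
                    ccc(va_pred[:, 1], va_true[:, 1]))
    # expr_*: (N,) integer class labels
    f1_expr = f1_score(expr_true, expr_pred, average="macro")
    # au_*: (N, num_AUs) binary matrices; average the per-AU binary F1 scores
    f1_au = np.mean([f1_score(au_true[:, i], au_pred[:, i], average="binary")
                     for i in range(au_true.shape[1])])
    return ccc_va + f1_expr + f1_au
```

Under this formulation the CE challenge score is simpler: it is the unweighted average of the per-class F1 scores over the seven compound expressions, i.e. a macro F1 over seven classes.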