SemEval-2024 Task 3: Multimodal Emotion Cause Analysis in Conversations
2024 | Fanfan Wang, Heqing Ma, Jianfei Yu, Rui Xia, Erik Cambria
SemEval-2024 Task 3 focuses on Multimodal Emotion Cause Analysis in Conversations, aiming to extract emotion-cause pairs from conversations. The task includes two subtasks: Textual Emotion-Cause Pair Extraction in Conversations (TECPE) and Multimodal Emotion-Cause Pair Extraction in Conversations (MECPE).

A dataset named ECF 2.0, sourced from the sitcom Friends, was created for the task. It contains 1,715 conversations and 16,720 utterances, with 12,256 emotion-cause pairs annotated across three modalities. The task attracted 143 registrations and 216 submissions. Participants used various approaches, including large language models (LLMs) and multimodal techniques, to extract emotion-cause pairs. Evaluation was based on weighted average F1 scores, with the main metric for Subtask 1 being the weighted average proportional F1 score. The top teams included Samsung Research China-Beijing, petkaz, NUS-Emo, and SZTU-MIPS.

The task highlights the importance of emotion cause analysis in real-world applications and the challenges of annotating multimodal data. The results show that LLMs can be effective for this task, but further research is needed to improve their performance in zero-shot and few-shot settings. The study also discusses the potential of multimodal information in emotion cause analysis and the need for more diverse datasets to enhance model generalization. The task contributes to the field of affective computing by providing a benchmark for emotion cause analysis in conversations.
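To make the evaluation metric concrete, the weighted average proportional F1 can be sketched roughly as follows. This is not the official scorer: it assumes a simplified representation where each emotion-cause pair is an (emotion utterance id, emotion label, cause span) tuple, that "proportional" means a prediction earns credit proportional to its token-level overlap with the gold cause span, and that per-emotion F1 scores are averaged weighted by each emotion's share of the gold pairs. All names and the data layout are hypothetical.

```python
def span_overlap_ratio(pred_span, gold_span):
    """Fraction of the gold (start, end) token span covered by the
    predicted span; 0.0 when they do not overlap."""
    ps, pe = pred_span
    gs, ge = gold_span
    overlap = max(0, min(pe, ge) - max(ps, gs))
    return overlap / (ge - gs) if ge > gs else 0.0

def weighted_proportional_f1(pred_pairs, gold_pairs):
    """Sketch of a weighted average proportional F1.

    pred_pairs / gold_pairs: lists of
    (emotion_utterance_id, emotion_label, cause_span) tuples,
    where cause_span is a (start, end) token-index pair.
    """
    emotions = {emo for _, emo, _ in gold_pairs}
    total_gold = len(gold_pairs)
    weighted = 0.0
    for emo in emotions:
        preds = [p for p in pred_pairs if p[1] == emo]
        golds = [g for g in gold_pairs if g[1] == emo]
        # Proportional true positives: each prediction earns credit
        # equal to its best span overlap with a gold pair anchored
        # at the same emotion utterance.
        tp = 0.0
        for utt, _, span in preds:
            ratios = [span_overlap_ratio(span, gspan)
                      for gutt, _, gspan in golds if gutt == utt]
            tp += max(ratios, default=0.0)
        precision = tp / len(preds) if preds else 0.0
        recall = tp / len(golds) if golds else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        # Weight each emotion's F1 by its share of the gold pairs.
        weighted += f1 * len(golds) / total_gold
    return weighted
```

For example, a prediction that covers only half of a gold cause span earns 0.5 credit toward both precision and recall for that emotion, rather than counting as a full miss or a full hit.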