SemEval-2016 Task 5: Aspect Based Sentiment Analysis

June 16-17, 2016 | Maria Pontiki, Dimitrios Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Mohammad AL-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orphée De Clercq, Véronique Hoste, Marianna Apidianaki, Xavier Tannier, Natalia Loukachevitch, Evgeny Kotelnikov, Nuria Bel, Salud María Jiménez-Zafra, Gülşen Eryiğit
SemEval-2016 Task 5 focused on Aspect Based Sentiment Analysis (ABSA), continuing the ABSA tasks of 2014 and 2015. It provided 19 training and 20 testing datasets covering 8 languages and 7 domains, together with a common evaluation procedure. The task included both sentence-level and text-level ABSA annotations, with text-level annotations introduced for the first time. In total, 245 submissions were received from 29 teams.

The task comprised three subtasks: sentence-level ABSA (SB1), text-level ABSA (SB2), and out-of-domain ABSA (SB3). SB1 required identifying aspect categories, opinion target expressions, and sentiment polarity. SB2 aimed to summarize opinions at the text level. SB3 tested systems on domains unseen during training.

The datasets covered 7 domains and 8 languages and contained 70,790 manually annotated ABSA tuples. Annotation processes varied by language and used tools such as BRAT and TURKSENT. The datasets were provided in XML format and are available through META-SHARE. Evaluation measures included F1 scores for the different slots, with baselines built on SVM classifiers.

The task attracted significant participation, with most submissions for SB1. SB2 received fewer submissions but showed higher performance for some slots. The results showed that unconstrained systems outperformed constrained ones. SB1 yielded higher scores for Slot1 and Slot2, while SB2 yielded higher scores for Slot1 but lower scores for Slot3, owing to the difficulty of determining the dominant sentiment of a whole text. The task highlighted the importance of cross-lingual approaches and the need for more diverse datasets and annotation schemas in future work.
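To make the evaluation setup concrete, the sketch below shows how the aspect-category tuples in the XML datasets could be read and scored with micro-averaged F1 for Slot1 (aspect category detection). It is a minimal illustration, assuming the sentence/Opinions/Opinion element structure with a "category" attribute described in the task paper; the file names are hypothetical placeholders, and the official scorer may differ in details.

```python
# Minimal sketch: reading SemEval-2016 ABSA XML and computing micro-averaged F1
# for aspect-category detection (Slot1). File names and the exact schema
# (sentence/Opinions/Opinion elements with a "category" attribute) are
# assumptions based on the format described in the task paper.
import xml.etree.ElementTree as ET

def load_categories(xml_path):
    """Return a set of (sentence_id, category) tuples from an ABSA XML file."""
    tree = ET.parse(xml_path)
    pairs = set()
    for sentence in tree.getroot().iter("sentence"):
        sid = sentence.get("id")
        for opinion in sentence.iter("Opinion"):
            category = opinion.get("category")  # e.g. "RESTAURANT#GENERAL"
            if category is not None:
                pairs.add((sid, category))
    return pairs

def micro_f1(gold, predicted):
    """Micro-averaged F1 over sets of (sentence_id, category) tuples."""
    tp = len(gold & predicted)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    # Hypothetical file names; the official data is distributed via META-SHARE.
    gold = load_categories("restaurants_gold.xml")
    pred = load_categories("restaurants_pred.xml")
    print("Slot1 micro-F1: %.4f" % micro_f1(gold, pred))
```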