The CoNLL-X shared task on multilingual dependency parsing converted treebanks for 13 languages into a common dependency format and evaluated parsing performance across them. Its goal was to compare parsing systems across languages and to identify the challenges specific to multilingual parsing. Participants parsed the test data with a variety of approaches, including rule-based and statistical methods. The evaluation metric was the labeled attachment score (LAS): the proportion of tokens for which both the predicted head and the predicted dependency label are correct. Results varied widely across languages, with Japanese achieving the highest scores (91.7%) and Turkish the lowest (37.8%).

The task highlighted the importance of data quality, annotation consistency, and language-appropriate parsing strategies. Challenges included handling non-projective dependencies, managing multiword tokens, and dealing with language-specific phenomena. The shared task provided a standardized framework for evaluating parsing systems and identified directions for future research, such as improving annotation practices, refining parsing algorithms, and exploring new data sources. The results underscored the complexity of multilingual parsing and the need for further investigation into language-specific characteristics and parsing techniques.
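To make the LAS definition concrete, here is a minimal sketch of how the score can be computed. The function name and the tuple representation (one `(head, label)` pair per token) are illustrative assumptions, not the official evaluation script:

```python
def labeled_attachment_score(gold, pred):
    """LAS: fraction of tokens whose predicted head AND label match gold.

    gold, pred: equal-length lists of (head_index, dependency_label) tuples,
    one per token in the sentence.
    """
    correct = sum(1 for g, p in zip(gold, pred) if g == p)
    return correct / len(gold)

# Toy 3-token sentence: the parser attaches everything correctly
# but mislabels the third token's relation.
gold = [(2, "nsubj"), (0, "root"), (2, "obj")]
pred = [(2, "nsubj"), (0, "root"), (2, "nmod")]
print(labeled_attachment_score(gold, pred))  # 2 of 3 tokens correct
```

The unlabeled attachment score (UAS), also reported in the shared task, is the same computation comparing only the head indices.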
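Non-projectivity, one of the challenges named above, means that dependency arcs cross when drawn above the sentence. A sketch of a crossing-arcs check, assuming the common 1-indexed convention where `heads[i]` gives the head of token `i+1` and 0 marks the root:

```python
def is_projective(heads):
    """Return True if no two dependency arcs cross.

    heads: list where heads[i] is the head of token i+1 (0 = root).
    Two arcs cross if one starts strictly inside the other's span
    and ends strictly outside it.
    """
    arcs = [(min(h, d), max(h, d)) for d, h in enumerate(heads, start=1)]
    for i in range(len(arcs)):
        for j in range(i + 1, len(arcs)):
            (a1, b1), (a2, b2) = arcs[i], arcs[j]
            if a1 < a2 < b1 < b2 or a2 < a1 < b2 < b1:
                return False
    return True

print(is_projective([2, 0, 2]))        # simple projective tree
print(is_projective([3, 4, 0, 3]))     # arcs (1,3) and (2,4) cross
```

Parsers restricted to projective trees cannot produce such crossing structures directly, which is why languages with freer word order (e.g. Czech, Dutch, Turkish in the task) posed extra difficulty.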