7 Apr 2024 | Libo Qin, Qiguang Chen, Yuhang Zhou, Zhi Chen, Yinghui Li, Lizi Liao, Min Li, Wanxiang Che, Philip S. Yu
This paper provides a comprehensive survey of multilingual large language models (MLLMs), addressing the lack of a systematic review in the field. The authors introduce a new taxonomy that categorizes MLLMs by two alignment types: parameter-tuning alignment and parameter-frozen alignment. They trace the evolution of MLLMs over the past five years and highlight emerging frontiers, including hallucination detection, knowledge editing, safety, fairness, language extension, and multi-modal extension. The paper also collects a wealth of open-source resources, including relevant papers, data corpora, and leaderboards, to facilitate further research. Its main contributions are the first dedicated survey of MLLMs, the new taxonomy, a discussion of emerging frontiers, and a curated list of resources.