21 May 2024 | Libo Qin, Qiguang Chen, Xiachong Feng, Yang Wu, Yongheng Zhang, Yinghui Li, Min Li, Wanxiang Che, Philip S. Yu
This paper provides a comprehensive survey of Large Language Models (LLMs) in Natural Language Processing (NLP), addressing the gaps in current literature by exploring their applications, capabilities, and future prospects. The authors introduce a unified taxonomy of LLMs in NLP, categorized into parameter-frozen and parameter-tuning applications. They discuss advancements in various NLP tasks, including sentiment analysis, information extraction, dialogue understanding, table understanding, text summarization, code generation, machine translation, and mathematical reasoning. The paper highlights the effectiveness of zero-shot and few-shot learning approaches, as well as the benefits of full-parameter and parameter-efficient tuning. It also identifies emerging research areas such as multilingual LLMs, multi-modal LLMs, tool usage, X-of-thought reasoning, hallucination evaluation, and safety concerns. The authors aim to provide valuable insights and resources for researchers and practitioners, fostering further development in LLM-based NLP.
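As a rough illustration of the survey's parameter-frozen branch, the sketch below contrasts zero-shot and few-shot prompting for a sentiment-analysis task. The helper names, prompt wording, and the `query_llm` stand-in are illustrative assumptions, not taken from the paper; `query_llm` represents whatever completion API or local model a reader would actually call.

```python
# Illustrative sketch: parameter-frozen use of an LLM (zero-shot vs. few-shot).
# `query_llm` is a hypothetical placeholder, not a real library call.

def query_llm(prompt: str) -> str:
    """Placeholder for a call to a hosted or local LLM."""
    raise NotImplementedError("Wire this to your model of choice.")

def zero_shot_sentiment(text: str) -> str:
    # Zero-shot: the task is described by the instruction alone; no examples given.
    prompt = (
        "Classify the sentiment of the following review as positive or negative.\n"
        f"Review: {text}\nSentiment:"
    )
    return query_llm(prompt)

def few_shot_sentiment(text: str, examples: list[tuple[str, str]]) -> str:
    # Few-shot: a handful of labeled demonstrations precede the query,
    # letting the frozen model adapt in context without any weight updates.
    demos = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in examples)
    prompt = (
        "Classify the sentiment of each review as positive or negative.\n"
        f"{demos}\nReview: {text}\nSentiment:"
    )
    return query_llm(prompt)
```

In the parameter-tuning branch, by contrast, the same task would be handled by updating model weights, either fully or through parameter-efficient methods such as adapter- or low-rank-based tuning, rather than by prompt construction alone.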