A Survey of Large Language Models

24 Nov 2023 | Wayne Xin Zhao, Kun Zhou*, Junyi Li*, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie and Ji-Rong Wen
This survey provides an overview of recent advances in Large Language Models (LLMs), pre-trained language models (PLMs) with parameters in the hundreds of billions or more. The survey covers four main aspects: pre-training, adaptation tuning, utilization, and capacity evaluation. It traces the evolution of language modeling from statistical language models (SLMs) to neural language models (NLMs) and PLMs, emphasizing the effect of scaling on model capacity. Key findings include the emergent abilities of LLMs, such as in-context learning, instruction following, and step-by-step reasoning. The survey also discusses the technical evolution of the GPT-series models, from GPT-1 to GPT-4, and the challenges and opportunities in developing and using LLMs. Additionally, it addresses the alignment of LLMs with human values and the potential risks associated with their use. The survey concludes by summarizing major findings and discussing future research directions.
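In-context learning, one of the emergent abilities mentioned above, means the model solves a new task by conditioning on a few input-output demonstrations in the prompt, with no parameter updates. A minimal sketch of how such a few-shot prompt is assembled is shown below; the sentiment-classification task, labels, and prompt format are illustrative assumptions, not taken from the survey.

```python
def build_icl_prompt(demonstrations, query):
    """Concatenate (input, label) demonstrations followed by the final query.

    The LLM is expected to continue the text after the last 'Sentiment:'
    marker, inferring the task from the demonstrations alone.
    """
    lines = []
    for text, label in demonstrations:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # The query repeats the demonstration format but leaves the label blank.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

demos = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
prompt = build_icl_prompt(demos, "A masterpiece of quiet storytelling.")
print(prompt)
```

The resulting string would be sent as-is to an LLM; increasing the number of demonstrations typically improves task accuracy, which is part of the scaling behavior the survey examines.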