June 2024 | Junhao Zheng, Shengjie Qiu, Chengming Shi, and Qianli Ma, South China University of Technology, China
The paper "Towards Lifelong Learning of Large Language Models: A Survey" by Junhao Zheng, Shengjie Qiu, Chengming Shi, and Qianli Ma explores the challenges and strategies for enabling large language models (LLMs) to adapt to ongoing changes in data, tasks, and user preferences. Traditional training methods with static datasets are inadequate for the dynamic nature of real-world information, leading to the need for lifelong learning (or continual learning). The authors categorize strategies into two groups: Internal Knowledge, where LLMs absorb new knowledge through full or partial training, and External Knowledge, which incorporates new knowledge as external resources without updating model parameters.
Key contributions of the survey include:
1. Introducing a novel taxonomy to categorize lifelong learning into 12 scenarios.
2. Identifying common techniques across all lifelong learning scenarios and classifying existing literature into various technique groups.
3. Highlighting emerging techniques such as model expansion and data selection, which were less explored in the pre-LLM era.
The paper discusses the problem formulation, evaluation metrics, common techniques, benchmarks, and datasets for lifelong learning. It also delves into specific methodologies such as continual pretraining and continual finetuning, detailing various techniques and their applications in different scenarios. The authors emphasize the importance of efficient and cost-effective lifelong learning strategies, particularly for large language models, and address challenges like catastrophic forgetting and temporal adaptation.
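To make the evaluation discussion concrete, the following sketch computes two metrics that are standard in the continual learning literature and are commonly used to quantify catastrophic forgetting: average accuracy after the final task and backward transfer, derived from a matrix of per-task accuracies. The accuracy numbers are invented for illustration and do not come from the survey.

```python
import numpy as np

# R[i, j] = accuracy on task j after the model has finished training on task i.
# The values below are made-up illustrative data, not results from the survey.
R = np.array([
    [0.90, 0.20, 0.10],   # after training on task 1
    [0.75, 0.88, 0.15],   # after task 2: task-1 accuracy drops (forgetting)
    [0.70, 0.80, 0.85],   # after task 3
])
T = R.shape[0]

# Average accuracy over all tasks after the final training stage.
avg_acc = R[-1].mean()

# Backward transfer: how much later training changed earlier-task accuracy.
# Negative values indicate catastrophic forgetting.
bwt = np.mean([R[-1, j] - R[j, j] for j in range(T - 1)])

print(f"average accuracy = {avg_acc:.3f}")   # 0.783
print(f"backward transfer = {bwt:.3f}")      # -0.140
```

A strongly negative backward transfer signals that later training stages overwrote earlier capabilities, which is precisely the failure mode that replay-, regularization-, and architecture-based techniques aim to mitigate.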
The survey provides a comprehensive overview of the current state of lifelong learning for LLMs, offering insights into future directions and innovations in the field.