This paper presents a survey of continual learning for large language models (LLMs), organized around three key stages: continual pre-training (CPT), continual instruction tuning (CIT), and continual alignment (CA). Unlike traditional adaptation methods for smaller models, continual learning for LLMs aims to enhance overall linguistic and reasoning capabilities rather than merely refining domain-specific knowledge. The paper discusses challenges such as catastrophic forgetting and surveys mitigations including experience replay, regularization, and dynamic-architecture methods. CPT covers updating factual knowledge, domains, and languages; CIT covers task-, domain-, and tool-incremental learning; and CA aims to keep LLMs aligned with evolving human values and preferences. The paper also reviews benchmarks and evaluation metrics for assessing continual learning performance, including forward and backward transfer rates. Open challenges include computational efficiency, social responsibility, automatic learning, and controllable forgetting, and future directions involve improving model adaptability, ensuring ethical alignment, and developing more efficient continual learning techniques. The survey provides a comprehensive overview of current research and identifies key areas for further exploration in continual learning for LLMs.
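To make the replay-based mitigation concrete, below is a minimal sketch of experience replay in PyTorch: a fixed-size buffer of past examples (filled by reservoir sampling) is mixed into each new-task batch so that earlier behavior keeps receiving gradient signal. The buffer capacity, mix ratio, and training loop are illustrative assumptions, not the specific method of any surveyed paper.

```python
# Minimal experience-replay sketch (assumed generic PyTorch setup;
# buffer size and replay_k are illustrative hyperparameters).
import random
import torch


class ReservoirBuffer:
    """Fixed-size store of past (input, target) pairs via reservoir sampling."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data: list = []
        self.seen = 0  # total examples offered to the buffer so far

    def add(self, example) -> None:
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Keep each seen example with probability capacity / seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k: int) -> list:
        return random.sample(self.data, min(k, len(self.data)))


def train_task(model, optimizer, loss_fn, task_loader, buffer, replay_k=8):
    """One task's training pass with replayed examples from earlier tasks."""
    model.train()
    for inputs, targets in task_loader:
        replayed = buffer.sample(replay_k)
        if replayed:
            # Concatenate old examples onto the current batch.
            inputs = torch.cat([inputs, torch.stack([x for x, _ in replayed])])
            targets = torch.cat([targets, torch.stack([y for _, y in replayed])])
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()
        # Stash the fresh examples so later tasks can replay them.
        for x, y in zip(inputs[: len(targets) - len(replayed)], targets):
            buffer.add((x.detach(), y.detach()))
```

Regularization-based methods (e.g., penalizing drift in parameters important to old tasks) and dynamic-architecture methods (allocating new parameters per task) trade the memory cost of such a buffer for compute or model growth.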
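The forward and backward transfer metrics mentioned above are commonly formalized as follows; this sketch follows the widely used definitions of Lopez-Paz and Ranzato (2017), and the survey's exact variants may differ. Let $R_{i,j}$ denote test performance on task $j$ after training sequentially through task $i$, with $T$ tasks and $\bar{b}_j$ the performance of a randomly initialized model on task $j$:

\[
\mathrm{BWT} = \frac{1}{T-1} \sum_{j=1}^{T-1} \left( R_{T,j} - R_{j,j} \right),
\qquad
\mathrm{FWT} = \frac{1}{T-1} \sum_{j=2}^{T} \left( R_{j-1,j} - \bar{b}_j \right).
\]

Negative BWT quantifies catastrophic forgetting (performance on earlier tasks degrades after later training), while positive FWT indicates that training on earlier tasks helps on tasks not yet seen.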