This paper presents a comprehensive survey of the latest advancements in Continual Learning (CL) using Pre-Trained Models (PTMs). CL aims to enable learning systems to absorb new knowledge as data evolves, while overcoming the issue of catastrophic forgetting. Whereas traditional CL methods train models from scratch, PTMs, pre-trained on extensive datasets, offer robust and generalizable representations for downstream tasks. The paper categorizes existing methodologies into three distinct groups: prompt-based, representation-based, and model mixture-based methods, and provides a comparative analysis of their similarities, differences, and respective strengths and weaknesses. Extensive experiments on seven benchmark datasets evaluate the performance of these methods; the results highlight the strong representation ability of PTMs and the effectiveness of the different approaches. The paper also discusses challenges and future directions in PTM-based CL, including the need for more challenging benchmarks, the extension beyond single-modality recognition, and the development of computationally efficient algorithms for resource-constrained settings. The source code for reproducing the evaluations is available at: https://github.com/sun-hailong/LAMDA-PILOT.