This paper introduces a unified and general framework for continual learning (CL) that brings various existing CL methods under a single optimization objective. The framework incorporates both output-space and weight-space regularization via Bregman divergences, allowing established CL approaches to be recovered as special cases. On top of this framework, the paper proposes refresh learning, a novel mechanism that unlearns the current data before relearning it in order to improve CL performance. The idea is inspired by neuroscience, where the brain selectively forgets outdated information to retain crucial knowledge and facilitate new learning. Refresh learning is designed as a simple plug-in that can be seamlessly integrated with existing CL methods to improve their performance. A theoretical analysis shows that refresh learning approximately minimizes the Fisher Information Matrix (FIM) weighted gradient norm of the loss, which encourages a flatter loss landscape and better generalization. Extensive experiments on standard CL benchmarks demonstrate the effectiveness of the method, with consistent gains in both task-incremental learning (Task-IL) and class-incremental learning (Class-IL) over existing methods. The results indicate that refresh learning is an efficient and effective approach to knowledge retention in CL scenarios.
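To make the plug-in nature of the unlearn-then-relearn idea concrete, the sketch below shows one way such a step could wrap an existing CL training loop. This is a minimal illustration, not the paper's actual algorithm: the unlearn phase is approximated here by a few small gradient-ascent steps on the current batch, and the names `refresh_step`, `unlearn_steps`, and `unlearn_lr` are hypothetical.

```python
import torch

def refresh_step(model, loss_fn, batch, optimizer,
                 unlearn_steps=1, unlearn_lr=1e-3):
    """Illustrative unlearn-then-relearn update for one batch (a sketch,
    not the paper's exact refresh-learning procedure)."""
    inputs, targets = batch

    # Unlearn: a few small gradient-ascent steps on the current batch,
    # discarding part of what the model has just fit to this data.
    for _ in range(unlearn_steps):
        loss = loss_fn(model(inputs), targets)
        grads = torch.autograd.grad(loss, list(model.parameters()))
        with torch.no_grad():
            for p, g in zip(model.parameters(), grads):
                p.add_(unlearn_lr * g)  # ascend the loss, i.e. "forget"

    # Relearn: the usual descent step of the underlying CL method.
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the refresh step only wraps the per-batch update, it can in principle be dropped into replay- or regularization-based CL methods without changing their memory management or regularizers, which matches the paper's claim that refresh learning acts as a plug-in.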