The paper introduces a novel Interactive Continual Learning (ICL) framework that combines the strengths of fast and slow thinking to address the challenges of continual learning (CL) in machine learning. Inspired by Complementary Learning System (CLS) theory, ICL integrates a Vision Transformer (ViT) model (System1) and a multimodal Large Language Model (System2). System1, the fast thinker, is equipped with a Class-Knowledge-Task Multi-Head Attention (CKT-MHA) module to enhance memory retrieval and a CL-vMF mechanism to improve geometric representation. System2, the slow thinker, handles complex reasoning and collaborates with System1 through the von Mises-Fisher Outlier Detection and Interaction (vMF-ODI) strategy, which identifies hard examples. The proposed ICL framework demonstrates strong resistance to forgetting and superior performance compared to existing methods, as evidenced by comprehensive evaluations on various benchmarks, including the challenging ImageNet-R dataset. The framework's effectiveness is further validated through ablation studies and comparisons with state-of-the-art methods, highlighting its ability to mitigate catastrophic forgetting and maintain high accuracy across tasks.
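To make the System1/System2 interaction concrete, the following is a minimal sketch of a vMF-ODI-style routing step: System1's normalized features are scored against class prototypes under a von Mises-Fisher model, and samples whose best score falls below a threshold are deferred to System2. The function names, the concentration parameter `kappa`, the threshold value, and the `mllm_system2` call are illustrative assumptions, not the paper's actual API.

```python
import torch.nn.functional as F

def vmf_log_likelihood(features, class_means, kappa=16.0):
    """Unnormalized vMF log-likelihood of L2-normalized features under each class prototype."""
    z = F.normalize(features, dim=-1)       # unit-norm System1 embeddings, shape (batch, dim)
    mu = F.normalize(class_means, dim=-1)   # unit-norm class prototypes, shape (classes, dim)
    return kappa * z @ mu.T                 # shape (batch, classes); kappa is an assumed concentration

def route_hard_examples(features, class_means, threshold, kappa=16.0):
    """vMF-ODI-style routing sketch: flag samples whose best vMF score is below a threshold."""
    scores = vmf_log_likelihood(features, class_means, kappa)
    best, preds = scores.max(dim=-1)
    is_hard = best < threshold              # low likelihood -> treat as a hard example for System2
    return preds, is_hard

# Usage sketch (hypothetical): System1 answers easy samples; hard ones go to System2.
# preds, is_hard = route_hard_examples(vit_features, prototypes, threshold=8.0)
# for i in is_hard.nonzero().flatten():
#     answer = mllm_system2(image=images[i], prompt="Which of the candidate classes is shown?")
```

The design point this illustrates is that System2 is invoked only for the minority of samples System1 is uncertain about, keeping the expensive multimodal LLM out of the common fast path.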