This paper introduces Consistent Prompting (CPrompt), a novel prompt-based method for rehearsal-free continual learning. The method addresses inconsistencies between training and testing that limit the effectiveness of existing prompt-based approaches. Two types of inconsistency are identified: classifier inconsistency, where test predictions are made over all classifiers learned so far rather than only the current task's classifier used during training, and prompt inconsistency, where the prompt selected during testing may not match the one used during training. CPrompt consists of two modules: Classifier Consistency Learning (CCL) and Prompt Consistency Learning (PCL). CCL trains the current task's prompt jointly with all previously learned classifiers to align training with the test-time prediction setting, while PCL improves prediction robustness and prompt selection accuracy by training the current classifier with prompts randomly selected from the pool. A multi-key mechanism is also introduced to further improve prompt selection accuracy. The proposed method achieves state-of-the-art performance on multiple continual learning benchmarks, demonstrating the importance of consistency between training and testing in prompt-based methods. Extensive analysis and ablation studies further validate the effectiveness of the proposed approach.
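To make the two consistency losses concrete, below is a minimal PyTorch sketch of how CCL and PCL could be realized. The class name `CPromptSketch`, the placeholder `encode` function, and the uniform classes-per-task layout are illustrative assumptions, not the authors' reference implementation; a real CPrompt model would condition a frozen pre-trained ViT on the selected prompt tokens.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CPromptSketch(nn.Module):
    """Toy model: one learnable prompt and one linear head per task."""

    def __init__(self, feat_dim=768, prompt_len=8, num_tasks=10, classes_per_task=10):
        super().__init__()
        self.classes_per_task = classes_per_task
        self.prompts = nn.ParameterList(
            [nn.Parameter(0.02 * torch.randn(prompt_len, feat_dim)) for _ in range(num_tasks)]
        )
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, classes_per_task) for _ in range(num_tasks)]
        )

    def encode(self, x, prompt):
        # Placeholder for a frozen, prompt-conditioned backbone (e.g. a ViT
        # with prompt tokens prepended); mixing in the mean prompt token
        # keeps the sketch self-contained and runnable.
        return x + prompt.mean(dim=0)

    def ccl_loss(self, x, y, task_id):
        # Classifier Consistency Learning: features from the CURRENT task's
        # prompt are scored by ALL heads seen so far, matching the test-time
        # setting where predictions come from every classifier.
        feats = self.encode(x, self.prompts[task_id])
        logits = torch.cat([self.heads[t](feats) for t in range(task_id + 1)], dim=1)
        # Shift within-task labels into the joint label space of all tasks.
        return F.cross_entropy(logits, y + task_id * self.classes_per_task)

    def pcl_loss(self, x, y, task_id):
        # Prompt Consistency Learning: the CURRENT head is trained on
        # features from a randomly drawn prompt, so predictions stay robust
        # when test-time prompt selection picks a mismatched prompt.
        t = torch.randint(0, task_id + 1, (1,)).item()
        feats = self.encode(x, self.prompts[t])
        return F.cross_entropy(self.heads[task_id](feats), y)


# Usage: combine both losses on a batch from the current task.
model = CPromptSketch()
x = torch.randn(4, 768)           # stand-in for image features
y = torch.randint(0, 10, (4,))    # within-task labels
loss = model.ccl_loss(x, y, task_id=2) + model.pcl_loss(x, y, task_id=2)
loss.backward()
```

In practice the two losses would be weighted against each other and the backbone kept frozen; drawing a random prompt in `pcl_loss` is one simple way to simulate imperfect prompt selection at test time.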