The paper "OVOR: OnePrompt with Virtual Outlier Regularization for Rehearsal-Free Class-Incremental Learning" addresses the challenges of class-incremental learning (CIL) in the absence of rehearsal data. The authors propose a regularization method based on virtual outliers to tighten the decision boundaries of the classifier, reducing inter-task confusion. This method is designed to work with prompt-based CIL methods, which use learnable prompts to prevent overwriting knowledge from previous tasks. The proposed method, named OVOR, combines the OnePrompt approach with virtual outlier regularization (VOR). OnePrompt uses a single prompt throughout the CIL process, while VOR generates virtual outliers to further regularize the classifier head.

The authors demonstrate that OVOR can achieve comparable or superior performance to state-of-the-art (SOTA) prompt-based methods on benchmarks like ImageNet-R and CIFAR-100, with reduced inference cost and fewer trainable parameters. The paper also includes experimental results showing the effectiveness of VOR and OnePrompt in improving the performance of different prompt-based methods.
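To make the virtual-outlier idea concrete, here is a minimal sketch of how such a regularizer can operate in feature space. Note the assumptions: this is not the paper's exact formulation; the mixup-style synthesis (interpolating embeddings from two different classes) and the max-softmax confidence penalty are illustrative stand-ins, and all function names (`make_virtual_outliers`, `outlier_confidence_loss`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_virtual_outliers(feats, labels, n_out, alpha=0.5):
    """Synthesize virtual outliers by interpolating embeddings drawn
    from two *different* classes (an illustrative stand-in for the
    paper's outlier-synthesis step, not its actual method)."""
    idx_a = rng.integers(0, len(feats), n_out)
    idx_b = rng.integers(0, len(feats), n_out)
    # Resample the second endpoint until every pair spans two classes,
    # so the mixtures fall between class clusters rather than inside one.
    same = labels[idx_a] == labels[idx_b]
    while same.any():
        idx_b[same] = rng.integers(0, len(feats), same.sum())
        same = labels[idx_a] == labels[idx_b]
    lam = rng.beta(alpha, alpha, size=(n_out, 1))
    return lam * feats[idx_a] + (1 - lam) * feats[idx_b]

def outlier_confidence_loss(logits):
    """Regularization term: mean max-softmax probability assigned to
    the virtual outliers. Minimizing it pushes the classifier to be
    uncertain off the data manifold, tightening decision boundaries."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return p.max(axis=1).mean()
```

In a training loop, this loss would be added (with a weight) to the usual cross-entropy on real examples, so the classifier head learns to assign low confidence to points between class clusters.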