OVOR: ONEPROMPT WITH VIRTUAL OUTLIER REGULARIZATION FOR REHEARSAL-FREE CLASS-INCREMENTAL LEARNING
This paper proposes a novel method for rehearsal-free class-incremental learning (CIL) called OVOR, which combines OnePrompt with virtual outlier regularization. The main idea is to use virtual outliers to regularize the decision boundaries of the classifier, thereby reducing inter-task confusion. The method is compatible with various prompt-based CIL methods and has been evaluated on ImageNet-R and CIFAR-100 benchmarks.
The paper first discusses the challenges of rehearsal-free CIL, chiefly inter-task confusion: without stored exemplars, the classifier never sees classes from different tasks together, so their decision boundaries overlap. It also examines the cost of maintaining a prompt pool, as most prior prompt-based methods do. It then introduces virtual outliers, synthetic data points that regularize the classifier: training against them tightens the decision boundaries around each class, leaving less room for cross-task confusion.
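To make the idea concrete, here is a minimal sketch of virtual outlier regularization. The construction below (cross-class feature mixing, plus an energy-style penalty that discourages confident predictions on outliers) is an illustrative assumption, not the paper's exact formulation; the function names `make_virtual_outliers` and `outlier_reg_loss` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_virtual_outliers(feats, labels, n_out, rng):
    """Synthesize outlier features by mixing pairs of real features
    drawn from *different* classes (one simple heuristic; the paper's
    actual construction may differ)."""
    i = rng.integers(0, len(feats), n_out)
    j = rng.integers(0, len(feats), n_out)
    lam = rng.uniform(0.3, 0.7, size=(n_out, 1))
    mixed = lam * feats[i] + (1 - lam) * feats[j]
    keep = labels[i] != labels[j]  # discard same-class mixes
    return mixed[keep]

def outlier_reg_loss(logits_out):
    """Penalize confident predictions on outliers via the log-sum-exp
    (energy) of their logits, pushing decision boundaries to be
    compact around the real classes."""
    return np.mean(np.log(np.exp(logits_out).sum(axis=1)))
```

In training, this loss would be added to the usual cross-entropy on real samples, so the classifier learns to assign low scores everywhere except near genuine class clusters.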
The paper also introduces OnePrompt, a simple prompt-based CIL method that uses a single prompt throughout the learning process. This method is more efficient in terms of computation and parameters compared to existing methods. The paper evaluates OnePrompt on ImageNet-R and CIFAR-100, showing that it achieves comparable results to state-of-the-art methods with fewer parameters and faster inference speed.
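The mechanism behind OnePrompt can be sketched as follows: a single set of learnable prompt tokens is prepended to the patch-token sequence of a frozen vision transformer, and the same prompt is reused for every task, so no per-input pool lookup is needed at inference. The class name `OnePromptSketch` and the shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class OnePromptSketch:
    """One shared learnable prompt, reused across all tasks
    (illustrative sketch; real training would update self.prompt
    by backpropagation through a frozen ViT backbone)."""

    def __init__(self, prompt_len, dim, rng):
        # A single learnable prompt of shape (prompt_len, dim).
        self.prompt = rng.normal(0.0, 0.02, size=(prompt_len, dim))

    def prepend(self, tokens):
        # tokens: (seq_len, dim) patch embeddings for one image.
        # The same prompt is prepended regardless of the task,
        # avoiding the query/key matching cost of a prompt pool.
        return np.concatenate([self.prompt, tokens], axis=0)
```

Because the prompt is shared, the parameter count and inference cost stay constant as tasks accumulate, which is the source of the efficiency gains the paper reports.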
The paper then proposes OVOR, which combines OnePrompt with virtual outlier regularization. Evaluated on ImageNet-R and CIFAR-100, OVOR matches or exceeds existing state-of-the-art methods in accuracy while using fewer parameters and less computation. Ablation results further show that virtual outlier regularization is not tied to OnePrompt: applying it to various other prompt-based methods improves their performance as well.
The paper concludes that OVOR is a promising method for rehearsal-free CIL, and that it is compatible with various prompt-based methods. The method is efficient in terms of computation and parameters, and it has been evaluated on multiple benchmarks. The paper also discusses the limitations of the method, including its applicability to domain-incremental learning and blurred boundary continual learning.