OrCo: Towards Better Generalization via Orthogonality and Contrast for Few-Shot Class-Incremental Learning


27 Mar 2024 | Noor Ahmed*, Anna Kukleva*, Bernt Schiele
The paper introduces the OrCo framework, designed to address the challenges of Few-Shot Class-Incremental Learning (FSCIL): learning new classes from limited data while preserving knowledge of previously learned classes. The framework rests on two core principles, orthogonality in the feature space and contrastive learning, and proceeds in three phases. In the pretraining phase, the model is trained with both supervised and self-supervised contrastive losses to improve feature separation and generalization. In the second phase, the pretrained model is aligned with mutually orthogonal pseudo-targets generated during pretraining, so that the feature space maximizes margins between classes and reserves space for classes that arrive later. The third phase, applied in each incremental session, uses the OrCo loss to align the model with these pseudo-targets, mitigating overfitting and catastrophic forgetting.

Experimental results on three benchmark datasets (mini-ImageNet, CIFAR100, and CUB) demonstrate state-of-the-art performance, outperforming previous methods. The framework's effectiveness is further validated through analyses of the impact of orthogonality, pseudo-target perturbations, and pretraining strategies.
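To make the two core ideas more concrete, below is a minimal, illustrative sketch of orthogonal pseudo-targets and a contrastive alignment loss. This is not the authors' implementation: the function names, the QR-based construction of the targets, and the temperature value are assumptions chosen purely for illustration.

```python
import torch
import torch.nn.functional as F

def make_orthogonal_pseudo_targets(num_targets: int, dim: int) -> torch.Tensor:
    """Generate mutually orthogonal unit vectors to serve as pseudo-targets.

    Illustrative construction: take the Q factor of a QR decomposition of a
    random Gaussian matrix. Requires num_targets <= dim.
    """
    assert num_targets <= dim, "cannot have more orthogonal targets than dimensions"
    random_matrix = torch.randn(dim, num_targets)
    q, _ = torch.linalg.qr(random_matrix)   # columns of q are orthonormal
    return q.T                              # shape: (num_targets, dim)

def pseudo_target_alignment_loss(features: torch.Tensor,
                                 labels: torch.Tensor,
                                 targets: torch.Tensor,
                                 temperature: float = 0.1) -> torch.Tensor:
    """Contrastive-style alignment of features to their assigned pseudo-targets.

    Each feature is pulled toward the pseudo-target of its class and pushed
    away from the others via cross-entropy over cosine similarities.
    """
    features = F.normalize(features, dim=-1)
    targets = F.normalize(targets, dim=-1)
    logits = features @ targets.T / temperature   # (batch, num_targets)
    return F.cross_entropy(logits, labels)

# Example: 64-dimensional feature space, 10 classes, batch of 8 embeddings
targets = make_orthogonal_pseudo_targets(num_targets=10, dim=64)
features = torch.randn(8, 64)
labels = torch.randint(0, 10, (8,))
loss = pseudo_target_alignment_loss(features, labels, targets)
```

The orthogonal targets keep class directions maximally separated, which is what allows margin maximization for current classes while leaving unoccupied directions for future incremental classes.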