Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning

18 Mar 2024 | Da-Wei Zhou, Hai-Long Sun, Han-Jia Ye, De-Chuan Zhan
EASE (Expandable Subspace Ensemble) is a method for class-incremental learning (CIL) built on pre-trained models (PTMs). It addresses the challenge of learning new classes without forgetting previously learned ones through two components: expandable subspaces and semantic-guided prototype complement. For each new task, EASE trains a lightweight adapter module on top of the frozen PTM, creating a task-specific subspace; predictions are then made jointly across all subspaces. Because the backbone stays frozen and only the adapters are trained, training and memory costs remain low. To resolve the incompatibility between old and new subspaces, EASE uses semantic similarity between classes to synthesize old-class prototypes in each new subspace without storing exemplars, and a subspace reweighting step emphasizes the most informative features during decision-making. Experiments on seven benchmark datasets show that EASE achieves state-of-the-art accuracy while remaining parameter-efficient and exemplar-free, making it practical for real-world CIL.
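The pipeline described above can be sketched at a toy scale: a frozen backbone, one lightweight adapter per task defining a subspace, class prototypes per subspace, and old-class prototypes synthesized in the new subspace from class similarities rather than stored exemplars. This is a minimal numpy illustration of the idea, not the authors' implementation; the dimensions, the dot-product similarity, and the softmax weighting are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
D, R = 8, 4  # frozen-feature dim and adapter (subspace) dim -- toy sizes

def frozen_backbone(x):
    # Stand-in for the frozen pre-trained model: identity features here.
    return x

class Adapter:
    """Lightweight task-specific projection into a subspace (illustrative)."""
    def __init__(self):
        self.W = rng.normal(size=(D, R)) / np.sqrt(D)
    def __call__(self, feats):
        return feats @ self.W

def class_prototype(samples, adapter):
    # Mean adapted feature over a class's samples.
    return adapter(frozen_backbone(samples)).mean(axis=0)

# Task 1 introduces classes 0,1; task 2 introduces classes 2,3 (toy data).
data = {c: rng.normal(loc=c, size=(20, D)) for c in range(4)}
adapters = [Adapter(), Adapter()]  # one adapter (subspace) per task

# Prototypes computed in each task's own subspace from that task's data.
protos = {0: {}, 1: {}}
for c in (0, 1):
    protos[0][c] = class_prototype(data[c], adapters[0])
for c in (2, 3):
    protos[1][c] = class_prototype(data[c], adapters[1])

# Semantic-guided complement: synthesize OLD-class prototypes in the NEW
# subspace as a similarity-weighted mix of new-class prototypes, where the
# similarity is measured in the old subspace (no old-class exemplars kept).
new_in_old = {c: class_prototype(data[c], adapters[0]) for c in (2, 3)}
for c_old in (0, 1):
    sims = np.array([protos[0][c_old] @ new_in_old[c] for c in (2, 3)])
    w = np.exp(sims) / np.exp(sims).sum()  # softmax weights (an assumption)
    protos[1][c_old] = sum(wi * protos[1][c] for wi, c in zip(w, (2, 3)))

def predict(x):
    # Joint decision: sum cosine similarities to each class's prototype
    # across every subspace in which that prototype exists.
    scores = {}
    for c in range(4):
        s = 0.0
        for t, adapter in enumerate(adapters):
            p = protos[t].get(c)
            if p is None:
                continue
            f = adapter(frozen_backbone(x))
            s += f @ p / (np.linalg.norm(f) * np.linalg.norm(p) + 1e-8)
        scores[c] = s
    return max(scores, key=scores.get)
```

The sketch omits EASE's subspace reweighting and uses random linear adapters in place of trained ones; its purpose is only to show how per-task subspaces and synthesized prototypes combine into one joint classifier.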