26 Mar 2024 | Huiping Zhuang, Run He, Kai Tong, Ziqian Zeng, Cen Chen, Zhiping Lin
DS-AL: A Dual-Stream Analytic Learning for Exemplar-Free Class-Incremental Learning
This paper proposes a Dual-Stream Analytic Learning (DS-AL) approach to address the challenge of class-incremental learning (CIL) under an exemplar-free constraint. Existing methods under this constraint suffer from catastrophic forgetting more severe than that of replay-based techniques, which retain access to past samples. DS-AL consists of a main stream that provides an analytical (i.e., closed-form) linear solution and a compensation stream that mitigates the inherent under-fitting limitation of the linear mapping. The main stream recasts the CIL problem as a Concatenated Recursive Least Squares (C-RLS) task, establishing an equivalence between CIL and its joint-learning counterpart. The compensation stream is governed by a Dual-Activation Compensation (DAC) module, which re-activates the embedding with a different activation function and seeks fitting compensation by projecting the embedding onto the null space of the main stream's linear mapping. Empirical results show that DS-AL, despite being an exemplar-free technique, delivers performance comparable to or better than replay-based methods across various datasets, including CIFAR-100, ImageNet-100, and ImageNet-Full. Additionally, the C-RLS equivalence allows DS-AL to execute CIL in a phase-invariant manner, as evidenced by a 500-phase ImageNet task that performs on par with a 5-phase one. Evaluated on these benchmark datasets, DS-AL outperforms existing AL-based techniques and most replay-based methods. Its compensation stream enhances both fitting and generalization abilities, and its phase-invariant property is demonstrated through large-phase experiments. Ablation studies show that the DAC and PLC modules contribute significantly to the performance improvements. The DS-AL procedure is summarized in an algorithm framework, and code is available at https://github.com/ZHUANGHP/Analytic-continual-learning.
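The main stream's closed-form recursion can be illustrated with a minimal regularized recursive-least-squares (RLS) classifier head, sketched below in NumPy. This is not the authors' implementation: the backbone feature extraction, the concatenated (C-RLS) formulation, and the compensation stream are omitted, and names such as `RLSHead` and `fit_phase` are illustrative. What it does demonstrate is the equivalence the abstract refers to: updating a regularized least-squares head phase by phase via the Woodbury identity yields exactly the weights that joint training on all phases would give, without storing any past samples.

```python
import numpy as np

class RLSHead:
    """Regularized recursive-least-squares classifier head, updated phase by
    phase without storing any past samples (illustrative sketch only)."""

    def __init__(self, feat_dim, gamma=1.0):
        # R tracks (X^T X + gamma*I)^(-1) over all features seen so far.
        self.R = np.eye(feat_dim) / gamma
        self.W = np.zeros((feat_dim, 0))  # grows by num_new_classes each phase

    def fit_phase(self, X, y, num_new_classes):
        # Pad the weight matrix with zero columns for this phase's new classes.
        self.W = np.hstack([self.W, np.zeros((self.W.shape[0], num_new_classes))])
        # One-hot targets over the expanded label space; y holds global class indices.
        Y = np.eye(self.W.shape[1])[y]
        # Woodbury update of R, then the standard RLS weight correction; the
        # result equals a ridge-regression fit on all phases seen jointly.
        K = np.linalg.inv(np.eye(X.shape[0]) + X @ self.R @ X.T)
        self.R -= self.R @ X.T @ K @ X @ self.R
        self.W += self.R @ X.T @ (Y - X @ self.W)

    def predict(self, X):
        return (X @ self.W).argmax(axis=1)
```

A phase call would look like `head.fit_phase(X_t, y_t, num_new_classes)` with `y_t` holding global class indices; because every update reproduces the joint closed-form solution, splitting the same data into 5 phases or 500 leaves the final weights unchanged, which is the intuition behind the phase-invariance claim.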
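The compensation stream can be sketched in a similarly hedged way. The snippet below is an assumption-laden illustration, not the DAC module itself: it re-activates the pre-activation embedding with a second non-linearity (here `np.tanh`, chosen arbitrarily), builds a projector onto the null space of the main-stream weight matrix so the compensation cannot disturb the main-stream output, and fits a closed-form ridge regression against a residual target (e.g., labels minus the main-stream prediction). The function name, the choice of activation, and the residual target are all hypothetical.

```python
import numpy as np

def fit_compensation(H_pre, Y_residual, W_main, gamma=1.0, act=np.tanh):
    """Illustrative compensation fit: re-activate the embedding, project it to
    the null space of the main-stream mapping, then solve a ridge regression."""
    H2 = act(H_pre)                                   # re-activated embedding (n x d)
    # Projector P with P @ W_main = 0: anything passed through P is invisible
    # to the main stream's linear classifier.
    P = np.eye(W_main.shape[0]) - W_main @ np.linalg.pinv(W_main)
    Z = H2 @ P                                        # embedding restricted to the null space
    # Closed-form ridge solution for the compensation weights (d x C).
    W_comp = np.linalg.solve(Z.T @ Z + gamma * np.eye(Z.shape[1]), Z.T @ Y_residual)
    return W_comp

# At inference (sketch): compensated logits = main-stream logits + act(H_pre) @ P @ W_comp
```

Because the projected features are annihilated by the main-stream weights, the added term can only correct what the linear main stream failed to fit, which matches the abstract's description of "fitting compensation" in the null space.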