InfLoRA: Interference-Free Low-Rank Adaptation for Continual Learning


3 Apr 2024 | Yan-Shuo Liang and Wu-Jun Li
The paper "InfLoRA: Interference-Free Low-Rank Adaptation for Continual Learning" by Yan-Shuo Liang and Wu-Jun Li introduces a new method called Interference-Free Low-Rank Adaptation (InfLoRA) for continual learning. The primary goal of InfLoRA is to address the issue of interference between new and old tasks, which hinders the model's performance in maintaining stability and adaptability. InfLoRA achieves this by reparameterizing pre-trained weights using a small number of parameters, specifically through a low-rank adaptation (LoRA)-like approach. The key contributions of InfLoRA are: 1. **Reparameterization**: InfLoRA injects a small number of parameters to reparameterize the pre-trained weights, making fine-tuning these parameters equivalent to fine-tuning the pre-trained weights within a subspace. 2. **Subspace Design**: The subspace is designed to eliminate the interference of new tasks on old tasks, ensuring that the model can adapt to new tasks without affecting its performance on old tasks. 3. **Performance**: Experimental results show that InfLoRA outperforms existing state-of-the-art continual learning methods on multiple datasets, demonstrating superior stability and plasticity. The method is particularly effective in the class-incremental scenario, where task identities are unknown during inference. InfLoRA uses a dimensionality reduction matrix and an expansion matrix to modify the forward propagation, ensuring that the updates for new tasks do not interfere with the performance of old tasks. The paper also includes a detailed analysis of the expanded parameters and a comparison with various baselines, further validating the effectiveness of InfLoRA.The paper "InfLoRA: Interference-Free Low-Rank Adaptation for Continual Learning" by Yan-Shuo Liang and Wu-Jun Li introduces a new method called Interference-Free Low-Rank Adaptation (InfLoRA) for continual learning. The primary goal of InfLoRA is to address the issue of interference between new and old tasks, which hinders the model's performance in maintaining stability and adaptability. InfLoRA achieves this by reparameterizing pre-trained weights using a small number of parameters, specifically through a low-rank adaptation (LoRA)-like approach. The key contributions of InfLoRA are: 1. **Reparameterization**: InfLoRA injects a small number of parameters to reparameterize the pre-trained weights, making fine-tuning these parameters equivalent to fine-tuning the pre-trained weights within a subspace. 2. **Subspace Design**: The subspace is designed to eliminate the interference of new tasks on old tasks, ensuring that the model can adapt to new tasks without affecting its performance on old tasks. 3. **Performance**: Experimental results show that InfLoRA outperforms existing state-of-the-art continual learning methods on multiple datasets, demonstrating superior stability and plasticity. The method is particularly effective in the class-incremental scenario, where task identities are unknown during inference. InfLoRA uses a dimensionality reduction matrix and an expansion matrix to modify the forward propagation, ensuring that the updates for new tasks do not interfere with the performance of old tasks. The paper also includes a detailed analysis of the expanded parameters and a comparison with various baselines, further validating the effectiveness of InfLoRA.