Chain of LoRA: Efficient Fine-tuning of Language Models via Residual Learning

2024 | Wenhan Xia, Chengwei Qin, Elad Hazan
The paper introduces Chain of LoRA (COLA), an iterative optimization framework inspired by the Frank-Wolfe algorithm and designed to close the gap between low-rank adaptation (LoRA) and full-parameter fine-tuning of large language models (LLMs). LoRA, a widely used parameter-efficient fine-tuning method, updates weights through low-rank matrices, but it often incurs a higher generalization error than full-parameter fine-tuning. COLA employs a residual learning procedure: each learned LoRA module is merged into the model parameters, and a fresh LoRA module is then initialized to learn the remaining residual. The method is analyzed theoretically and validated empirically across several models (OPT and Llama-2) and benchmark tasks, showing consistent improvements over LoRA without additional computational or memory cost. The paper also analyzes the convergence properties of COLA and discusses directions for applying it to further tasks and larger-scale LLMs.
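
To make the residual learning loop concrete, here is a minimal sketch of one way the train-merge-reinitialize cycle could be written, assuming the Hugging Face `peft` API; the `train_one_epoch` helper, the chain count, and the LoRA hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal COLA-style loop (sketch): train a LoRA module, merge it into the
# frozen base weights, then start a fresh module on the residual.
from peft import LoraConfig, get_peft_model

def cola_finetune(base_model, dataset, num_chains=3, rank=8):
    model = base_model
    for _ in range(num_chains):
        # 1. Attach a freshly initialized LoRA adapter to the current weights.
        config = LoraConfig(r=rank, lora_alpha=16, lora_dropout=0.05,
                            target_modules=["q_proj", "v_proj"])
        model = get_peft_model(model, config)

        # 2. Tune only the low-rank matrices; base weights stay frozen.
        train_one_epoch(model, dataset)  # hypothetical training helper

        # 3. Fold the learned low-rank update into the base weights and
        #    discard the adapter, so the next chain learns a residual.
        model = model.merge_and_unload()
    return model
```

Because each merge happens in place and every adapter has the same rank, the loop keeps the per-step parameter and memory footprint of plain LoRA while accumulating a higher-rank total update across chains, which is the intuition behind COLA's reported gains.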