A Survey on LoRA of Large Language Models

2024 | Yuren MAO, Yuhang GE, Yijiang FAN, Wenyi XU, Yu MI, Zhonghao HU, Yunjun GAO
This survey reviews recent advances in and applications of Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning method for large language models (LLMs). LoRA updates dense neural network layers with pluggable low-rank matrices, enabling efficient adaptation to downstream tasks while reducing computational cost and helping preserve data privacy. It has attracted significant attention for its effectiveness in downstream adaptation, cross-task generalization, efficiency, and privacy preservation. The survey categorizes progress on LoRA into five areas: (1) variants that improve downstream adaptation, (2) cross-task generalization via mixing of LoRA modules, (3) efficiency improvements, (4) LoRA in federated learning, and (5) applications. It also discusses future directions and provides a GitHub page for further discussion.

LoRA is parameter-efficient, pluggable, and compatible with various learning paradigms, including pre-training, continual learning, and Bayesian learning, and it has been shown to achieve performance comparable to or better than full fine-tuning on many tasks. The survey highlights methods that improve LoRA's performance, such as breaking the low-rank bottleneck, dynamic rank allocation, and optimizing the learning procedure. LoRA is also effective in federated learning, where localized parameter updates help protect data privacy. The survey concludes that LoRA is a promising approach for efficient and privacy-preserving adaptation of LLMs.
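To illustrate the core idea of updating a frozen dense layer with a pluggable low-rank matrix pair, here is a minimal sketch in PyTorch. It is not the survey's or the original LoRA paper's implementation; the class name `LoRALinear` and the `rank`/`alpha` parameters are illustrative choices.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative sketch: frozen dense layer plus a trainable low-rank update."""

    def __init__(self, base_layer: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base_layer
        # The pre-trained weights stay frozen; only the low-rank factors are trained.
        self.base.weight.requires_grad_(False)
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)

        in_features, out_features = base_layer.in_features, base_layer.out_features
        # Low-rank factors A (r x d_in) and B (d_out x r); B is zero-initialized
        # so the module starts as an identity-preserving, pluggable add-on.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W0 x + (B A) x * scaling, where the update B A has rank <= r.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


# Usage: wrap an existing dense layer; gradients flow only through lora_A and lora_B.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
x = torch.randn(2, 768)
print(layer(x).shape)  # torch.Size([2, 768])
```

Because the trainable parameter count scales with r(d_in + d_out) rather than d_in * d_out, a small rank keeps fine-tuning cheap, and the factors can be merged into or detached from the base weights after training.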