A Survey on LoRA of Large Language Models

2024 | Yuren MAO, Yuhang GE, Yijiang FAN, Wenyi XU, Yu MI, Zhonghao HU, Yunjun GAO
This survey provides a comprehensive overview of Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning paradigm for large language models (LLMs). LoRA updates dense neural network layers with pluggable low-rank matrices, offering significant advantages in cross-task generalization and privacy preservation. The survey categorizes and reviews progress in five areas: (1) variants that improve downstream adaptation, (2) cross-task generalization methods, (3) efficiency-improving methods, (4) data-privacy-preserving methods, and (5) applications. LoRA's effectiveness is discussed through theoretical analysis, practical efficiency, and applications beyond fine-tuning. The survey also highlights future directions and provides a GitHub page for updates and discussions. LoRA's pluggable nature and parameter efficiency make it suitable for a wide range of language, vision, and multimodal tasks, and it has been widely adopted in federated learning to protect client data privacy.
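To make the "pluggable low-rank matrices" idea concrete, below is a minimal PyTorch sketch of a LoRA-adapted linear layer: the pretrained weight is frozen, and only two small factors A and B are trained, so the effective weight becomes W + (alpha/r) * BA. The class name `LoRALinear` and the default values of `r` and `alpha` are illustrative choices, not part of the survey itself.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative sketch of a LoRA-adapted linear layer (not from the survey)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the dense pretrained weights; only the low-rank factors train.
        for p in self.base.parameters():
            p.requires_grad = False
        # A: (r, in_features), B: (out_features, r); rank r << min(in, out).
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B starts at zero so the adapter is a no-op at initialization.
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the pluggable low-rank correction B @ A @ x.
        return self.base(x) + self.scaling * (x @ self.lora_A.T) @ self.lora_B.T
```

Because the update is an additive term on top of a frozen base layer, the trained factors can be detached, swapped per task, or merged into the base weight, which is the property behind the cross-task generalization and serving-efficiency methods the survey reviews.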