Computational Limits of Low-Rank Adaptation (LoRA) for Transformer-Based Models

Last Update: June 6, 2024 | Jerry Yao-Chieh Hu, Maojiang Su, En-Jui Kuo, Zhao Song, Han Liu
This paper investigates the computational limits of Low-Rank Adaptation (LoRA) for fine-tuning transformer-based models through the lens of fine-grained complexity theory. The authors identify a phase-transition behavior in the efficiency of LoRA updates, proving that efficient (sub-quadratic) algorithms exist only when the norms of the input, pretrained, and adapter weight matrices fall below a specific threshold. They also demonstrate the existence of nearly linear-time algorithms for LoRA adaptation by exploiting the hierarchical low-rank structure of LoRA gradients and approximating them with a series of chained low-rank approximations. The analysis covers both partial and full adaptation of the attention weights, providing insights into the practical efficiency of LoRA in large transformer models. The findings thus highlight an "inefficiency threshold" on the norms of these matrices, below which efficient LoRA updates are possible.
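To make the setting concrete, here is a minimal sketch of the standard LoRA update that the paper analyzes: a frozen pretrained weight is adapted by a trainable rank-r product. The dimensions, rank, and initialization below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative LoRA-style update (assumed sizes; not the paper's experiments).
d, r = 64, 4                              # model dimension d and adapter rank r << d
rng = np.random.default_rng(0)

W0 = rng.standard_normal((d, d))          # frozen pretrained weight (e.g., an attention projection)
B = np.zeros((d, r))                      # adapter factor, zero-initialized so the update starts at 0
A = rng.standard_normal((r, d)) * 0.01    # adapter factor with small random initialization

# Adapted weight: W0 stays frozen; only the rank-r product B @ A is trained.
W = W0 + B @ A

# Trainable-parameter count of the adapter vs. full fine-tuning of W0:
print("full fine-tuning:", d * d, "parameters; LoRA adapter:", d * r + r * d, "parameters")
```

The paper's complexity results concern how fast forward and gradient computations for such updates can be performed as a function of the norms of the input and of W0, A, and B; the sketch above only fixes the notation.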