DoRA: Weight-Decomposed Low-Rank Adaptation


2024 | Shih-Yang Liu, Chien-Yi Wang, Hongxu Yin, Pavlo Molchanov, Yu-Chiang Frank Wang, Kwang-Ting Cheng, Min-Hung Chen
This paper introduces a parameter-efficient fine-tuning method called Weight-Decomposed Low-Rank Adaptation (DoRA), which aims to close the accuracy gap between LoRA and full fine-tuning (FT). DoRA decomposes each pre-trained weight matrix into a magnitude component and a direction component, applies LoRA to update the direction efficiently, and keeps the magnitude as a separately trainable parameter. This decomposition improves both the learning capacity and the training stability of LoRA without introducing any additional inference overhead, since the components can be merged back into a single weight matrix after training. Experiments on a range of downstream tasks, including commonsense reasoning, visual instruction tuning, and image/video-text understanding, show that DoRA consistently outperforms LoRA. The code for DoRA is available at <https://github.com/NVlabs/DoRA>.
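To make the decomposition concrete, below is a minimal sketch of a DoRA-style linear layer in PyTorch. It is not the official NVlabs implementation; the class name `DoRALinear`, the rank, the initialization constants, and the choice of a per-output-unit norm are illustrative assumptions. The merged weight is computed as `m * (W0 + B A) / ||W0 + B A||`, where only the magnitude `m` and the LoRA factors `A`, `B` are trainable.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DoRALinear(nn.Module):
    """Illustrative DoRA-style layer: W' = m * (W0 + B A) / ||W0 + B A||."""

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        out_f, in_f = base.weight.shape
        # Frozen pre-trained weight W0; only m, A, B receive gradients.
        self.weight = nn.Parameter(base.weight.detach().clone(), requires_grad=False)
        self.bias = base.bias
        # LoRA factors for the directional update (B starts at zero, so the
        # initial update is zero and the layer starts identical to the base layer).
        self.lora_A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_f, rank))
        # Trainable magnitude, initialized to the norm of each output row of W0.
        self.magnitude = nn.Parameter(self.weight.norm(p=2, dim=1, keepdim=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Direction: normalize the LoRA-updated weight; magnitude: learned scale.
        updated = self.weight + self.lora_B @ self.lora_A   # W0 + B A
        norm = updated.norm(p=2, dim=1, keepdim=True)        # per-output-unit norm
        merged = self.magnitude * updated / norm             # m * V / ||V||
        return F.linear(x, merged, self.bias)


# Usage sketch: wrap an existing linear layer and run a forward pass.
layer = DoRALinear(nn.Linear(768, 768), rank=8)
y = layer(torch.randn(2, 768))
```

Because `merged` is an ordinary weight matrix, it can be computed once after training and substituted back into the original layer, which is why DoRA adds no inference overhead relative to the pre-trained model.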