This paper introduces a novel parameter-efficient fine-tuning method called Weight-Decomposed Low-Rank Adaptation (DoRA), which aims to bridge the accuracy gap between LoRA and full fine-tuning (FT). DoRA decomposes the pre-trained weights into magnitude and direction components, using LoRA for efficient directional updates while keeping the magnitude component tunable. This approach enhances both the learning capacity and training stability of LoRA without introducing additional inference overhead. Experimental results on various downstream tasks, including commonsense reasoning, visual instruction tuning, and image/video-text understanding, demonstrate that DoRA consistently outperforms LoRA across these tasks. The code for DoRA is available at <https://github.com/NVlabs/DoRA>.
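To make the decomposition concrete, below is a minimal, illustrative PyTorch sketch of a DoRA-style linear layer (a hypothetical `DoRALinear` class, not the official NVlabs implementation). It assumes the merged weight takes the form m · (W0 + BA) / ||W0 + BA||, with the norm taken per output row of the `nn.Linear` weight and the magnitude m initialized from the pre-trained weight norms; consult the official repository for the authors' exact conventions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DoRALinear(nn.Module):
    """Illustrative DoRA-style linear layer (a sketch, not the official code).

    The frozen pre-trained weight W0 is split into a trainable magnitude
    vector m and a direction V = W0 + B @ A, where A and B are the usual
    low-rank LoRA factors. The effective weight is m * V / ||V||, so the
    adapter can be merged back into a plain nn.Linear after training,
    adding no inference overhead.
    """

    def __init__(self, base_linear: nn.Linear, rank: int = 8):
        super().__init__()
        out_features, in_features = base_linear.weight.shape
        # Frozen pre-trained weight and bias.
        self.weight = nn.Parameter(base_linear.weight.detach().clone(), requires_grad=False)
        self.bias = (nn.Parameter(base_linear.bias.detach().clone(), requires_grad=False)
                     if base_linear.bias is not None else None)
        # Low-rank directional update; B starts at zero so the layer initially
        # reproduces the pre-trained behaviour exactly.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        # Trainable magnitude, initialised to the per-row L2 norm of W0.
        self.magnitude = nn.Parameter(self.weight.norm(p=2, dim=1, keepdim=True))

    def merged_weight(self) -> torch.Tensor:
        # Direction component: pre-trained weight plus the low-rank update.
        direction = self.weight + self.lora_B @ self.lora_A
        # Normalise each output row of the direction and rescale by the
        # learned magnitude.
        return self.magnitude * direction / direction.norm(p=2, dim=1, keepdim=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.linear(x, self.merged_weight(), self.bias)


if __name__ == "__main__":
    base = nn.Linear(16, 32)
    dora = DoRALinear(base, rank=4)
    x = torch.randn(2, 16)
    # At initialisation the DoRA layer matches the frozen base layer.
    print(torch.allclose(dora(x), base(x), atol=1e-5))
```

In this sketch only `lora_A`, `lora_B`, and `magnitude` receive gradients, which is what gives DoRA its extra expressiveness over plain LoRA (the magnitude is learned separately from the direction) while keeping the trainable-parameter count low.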