Time-, Memory- and Parameter-Efficient Visual Adaptation


5 Feb 2024 | Otniel-Bogdan Mercea, Alexey Gritsenko, Cordelia Schmid, Anurag Arnab
The paper introduces Low-Rank Side Adaptation (LoSA), an efficient adaptation method for large, pre-trained models. Unlike existing methods that primarily focus on parameter efficiency, LoSA aims to reduce training time and memory usage. The method operates by learning a parallel network that refines features from a frozen backbone, without backpropagating gradients through the backbone. This approach achieves state-of-the-art accuracy-parameter trade-offs on the VTAB benchmark and demonstrates superior performance in terms of training time and memory usage. LoSA is further evaluated on large-scale image and video classification tasks, showing its scalability and efficiency with large models like ViT-e (4 billion parameters). The method outperforms prior works in accuracy, training speed, and memory consumption, making it a promising approach for efficient adaptation of large models.
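The core idea — a trainable low-rank side network that refines features from a frozen backbone, so no gradients ever flow through the backbone itself — can be illustrated with a minimal NumPy sketch. All names, shapes, and the exact update rule below are illustrative assumptions, not the authors' precise formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, rank, tokens, layers = 16, 4, 8, 3  # illustrative sizes

# Hypothetical per-layer features from a frozen backbone: these are
# treated as fixed inputs, so no gradient would pass through them.
backbone_feats = [rng.standard_normal((tokens, dim)) for _ in range(layers)]

# The only trainable state: one low-rank (down, up) projection pair
# per backbone layer — far fewer parameters than the backbone.
side_params = [
    (rng.standard_normal((dim, rank)) * 0.01,
     rng.standard_normal((rank, dim)) * 0.01)
    for _ in range(layers)
]

def side_forward(feats, params):
    """Run the parallel side network over frozen backbone features.

    A running side state `h` is refined at each layer using that
    layer's frozen features and a low-rank projection; in training,
    gradients would update only `params`, never the backbone.
    """
    h = np.zeros_like(feats[0])
    for x, (down, up) in zip(feats, params):
        h = h + np.tanh((x + h) @ down) @ up  # low-rank refinement step
    return h

refined = side_forward(backbone_feats, side_params)
print(refined.shape)  # (8, 16): refined features, same shape as one layer
```

Because the backbone activations are consumed as constants, the memory and time costs of training scale with the small side network rather than with the 4-billion-parameter backbone — which is the efficiency argument the paper makes.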