IMPROVING LoRA IN PRIVACY-PRESERVING FEDERATED LEARNING

18 Mar 2024 | Youbang Sun*, Zitao Li, Yaliang Li & Bolin Ding
This paper addresses the challenges of applying Low-Rank Adaptation (LoRA) in privacy-preserving federated learning (FL). LoRA is a popular parameter-efficient fine-tuning method for pre-trained language models, but it faces issues in FL due to data heterogeneity, multi-step local updates, and the amplification of noise from differential privacy (DP) enforcement. The authors propose Federated Freeze A LoRA (FFA-LoRA), which fixes the non-zero initialized low-rank matrices and only fine-tunes the zero-initialized matrices, reducing the number of trainable parameters by half. This approach alleviates the mentioned challenges and improves computational efficiency. Experiments demonstrate that FFA-LoRA consistently outperforms LoRA in various FL tasks, showing better performance and computational efficiency. The paper also provides theoretical insights and ablation studies to support the effectiveness of FFA-LoRA.
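The core idea described above can be illustrated with a minimal NumPy sketch (all names and dimensions below are hypothetical, not the paper's implementation): a LoRA-adapted layer computes W0·x + B·A·x, where A is randomly initialized and B starts at zero; FFA-LoRA freezes A and trains only B, halving the adapter parameters that must be updated and communicated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer dimensions and LoRA rank for illustration
d, k, r = 16, 16, 4

W0 = rng.normal(size=(d, k))        # pre-trained weight, frozen in both LoRA and FFA-LoRA
A = rng.normal(size=(r, k)) * 0.01  # LoRA "A": random (non-zero) init; FFA-LoRA freezes it
B = np.zeros((d, r))                # LoRA "B": zero init; the only trainable part in FFA-LoRA

def forward(x):
    # Adapted layer: W0 x + B (A x); with B = 0 at init the adapter adds nothing,
    # so the output matches the base model exactly.
    return W0 @ x + B @ (A @ x)

x = rng.normal(size=(k,))
assert np.allclose(forward(x), W0 @ x)  # zero-init B leaves the initial output unchanged

# Vanilla LoRA updates both A and B; FFA-LoRA updates only B,
# halving the trainable (and DP-noised, and communicated) parameters when d == k.
lora_trainable = A.size + B.size
ffa_trainable = B.size
print(lora_trainable, ffa_trainable)
```

Freezing A also sidesteps the interaction between the two matrices under noisy, multi-step federated updates: with A fixed, the update to the product BA is linear in the single trained matrix B.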