This paper proposes FFA-LoRA, an improved variant of Low-Rank Adaptation (LoRA) for privacy-preserving federated learning (FL). LoRA is a parameter-efficient fine-tuning method for pre-trained language models, but in FL it faces three challenges: data heterogeneity across clients, noise amplification under differential privacy (DP), and sensitivity to hyper-parameters. FFA-LoRA addresses these issues by freezing the randomly (non-zero) initialized low-rank matrices and fine-tuning only the zero-initialized ones, halving the number of trainable parameters and thereby reducing both computation and communication cost in federated learning. The key contributions are identifying these three discordances between LoRA and FL, proposing FFA-LoRA to mitigate them, and validating the method through extensive experiments. FFA-LoRA is theoretically motivated, empirically verified, and computationally more efficient than LoRA; the paper further discusses its noise resilience, compatibility with DP-SGD, and reduced dependence on hyper-parameters such as the scaling factor α. Experiments show that FFA-LoRA outperforms LoRA on a range of FL tasks, especially under privacy constraints, covering language understanding and generation benchmarks including the GSM-8K dataset as well as the Food-101 dataset. The paper concludes that FFA-LoRA is a promising solution for parameter-efficient fine-tuning in FL with privacy guarantees.
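The core mechanism summarized above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the layer sizes, rank, and scaling factor below are arbitrary placeholders. It shows a LoRA adapter with a randomly initialized matrix A and a zero-initialized matrix B, and the FFA-LoRA idea of freezing A so that only B is trainable, halving the adapter's parameter count when the input and output dimensions match.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 16  # hypothetical sizes and scaling factor

W = rng.normal(size=(d_out, d_in))  # frozen pretrained weight

# LoRA adapts W as W' = W + (alpha / r) * B @ A, where A is randomly
# (non-zero) initialized and B is zero-initialized, so the adapter is a
# no-op before training. Standard LoRA trains both A and B.
A = rng.normal(scale=0.01, size=(r, d_in))  # non-zero init
B = np.zeros((d_out, r))                    # zero init

def adapted_forward(x):
    """Forward pass through the adapted weight."""
    return (W + (alpha / r) * B @ A) @ x

# FFA-LoRA, as summarized above, freezes A at its random initialization
# and fine-tunes only B: trainable parameters drop from r*(d_in + d_out)
# to r*d_out, i.e. half when d_in == d_out.
lora_trainable = A.size + B.size
ffa_trainable = B.size
print(lora_trainable, ffa_trainable)  # 32 16
```

One consequence worth noting: with a shared frozen A, averaging the clients' B matrices under FedAvg yields exactly the average adapter update (mean(B_i) @ A), whereas averaging both A_i and B_i per client generally does not equal the average of the products B_i @ A_i, which is one source of the LoRA/FL discordance the paper identifies.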