FedImpro: Measuring and Improving Client Update in Federated Learning

2024 | Zhenheng Tang, Yonggang Zhang, Shaohuai Shi, Xinmei Tian, Tongliang Liu, Bo Han, Xiaowen Chu
Federated Learning (FL) suffers from client drift caused by heterogeneous data distributions across clients. This paper introduces FedImpro, a method that produces more consistent local models during client updates. FedImpro decouples each model into a low-level component (feature extractor) and a high-level component (classifier), and trains the high-level part on features reconstructed from an estimated, shared feature distribution rather than on raw local features alone. This improves generalization performance and reduces gradient dissimilarity across clients.

The theoretical analysis shows that the generalization contribution of local training is bounded by the conditional Wasserstein distance between clients' data distributions. By estimating and sharing feature distributions instead of raw data, FedImpro mitigates data heterogeneity while protecting privacy. The method is evaluated on four datasets under various FL settings, showing improved accuracy and reduced gradient dissimilarity. The paper also discusses related work, theoretical foundations, and experimental results, highlighting FedImpro's effectiveness in enhancing FL performance through generalization improvement and gradient dissimilarity reduction.
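The bound rests on the conditional Wasserstein distance between clients' data distributions. A standard way to write such a distance (our notation; the paper's exact definition may differ) is

$$
C(\mathcal{D}_i, \mathcal{D}_j) \;=\; \tfrac{1}{2}\,\mathbb{E}_{y}\!\left[\, W\!\big(\mathcal{D}_i(x \mid y),\; \mathcal{D}_j(x \mid y)\big) \,\right],
$$

where $W$ is the Wasserstein distance between the class-conditional input distributions of clients $i$ and $j$: the more the clients' per-class data overlaps, the more each client's local training transfers to the global model.

To make the decoupling idea concrete, below is a minimal PyTorch sketch of a FedImpro-style client update, assuming a per-class Gaussian approximation of the hidden feature distribution. All names (`ClientModel`, `local_step`, `shared_mu`, `shared_var`, `alpha`) are hypothetical; the paper's actual estimator, split point, and aggregation scheme may differ.

```python
# A minimal sketch, not the paper's implementation: the model is split into a
# low-level feature extractor and a high-level classifier; the high-level part
# is additionally trained on features sampled from a shared Gaussian estimate.
import torch
import torch.nn as nn

class ClientModel(nn.Module):
    def __init__(self, in_dim=32, feat_dim=64, num_classes=10):
        super().__init__()
        self.low = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())  # low-level
        self.high = nn.Linear(feat_dim, num_classes)                      # high-level

def local_step(model, x, y, shared_mu, shared_var, opt, alpha=0.5):
    """One client update: train the high-level part on both real features
    and features sampled from the shared (estimated) distribution."""
    criterion = nn.CrossEntropyLoss()
    feat = model.low(x)  # real features from local data
    # Sample synthetic features from the shared per-class Gaussian estimate
    # (purely illustrative; shared_mu/shared_var have shape [classes, feat_dim]).
    synth = shared_mu[y] + shared_var[y].sqrt() * torch.randn_like(feat)
    loss = criterion(model.high(feat), y) + alpha * criterion(model.high(synth), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Update the local estimate of the feature distribution; only these
    # statistics (never raw data or raw features) would be shared upstream.
    with torch.no_grad():
        for c in y.unique():
            f_c = feat[y == c]
            shared_mu[c] = 0.9 * shared_mu[c] + 0.1 * f_c.mean(0)
            shared_var[c] = 0.9 * shared_var[c] + 0.1 * f_c.var(0, unbiased=False)
    return loss.item()

# Toy usage: one client, one batch.
model = ClientModel()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
mu, var = torch.zeros(10, 64), torch.ones(10, 64)
x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
local_step(model, x, y, mu, var, opt)
```

In a full FL round, the server would aggregate these distribution parameters across clients along with the model weights; sharing only means and variances rather than raw features is what is intended to provide the privacy protection described above.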