15 Jul 2020 | Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, H. Vincent Poor
This paper addresses the issue of objective inconsistency in federated optimization, where clients' local datasets and computation speeds vary, leading to different numbers of local updates. Naive averaging of models can result in convergence to a stationary point of an inconsistent objective function, which may be significantly different from the true objective. The authors propose a general framework to analyze the convergence of federated heterogeneous optimization algorithms, providing insights into solution bias and convergence slowdown due to objective inconsistency. They introduce FedNova, a normalized averaging method that eliminates objective inconsistency while preserving fast error convergence. FedNova adjusts the aggregated weights and effective local steps according to the local progress, outperforming existing methods in simulations and experiments. The framework subsumes previous methods like FedAvg and FedProx, offering a principled understanding of their convergence behaviors.
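To make the normalized-averaging idea concrete, here is a minimal sketch of a FedNova-style server aggregation step. The function name, the `lr` parameter, and the sign convention for client deltas are illustrative assumptions, not the paper's exact notation; the key point is that each client's cumulative update is divided by its number of local steps before the weighted average, so clients that took more local steps do not pull the aggregate toward their local objective.

```python
import numpy as np

def fednova_aggregate(x_global, client_deltas, local_steps, weights, lr=1.0):
    """Sketch of FedNova-style normalized averaging (illustrative, for
    plain local SGD, where the normalization factor is simply tau_i).

    client_deltas[i]: client i's cumulative local update, with the
                      convention x_global - lr * delta moves downhill
    local_steps[i]:   number of local steps tau_i taken by client i
    weights[i]:       data fraction p_i (the weights sum to 1)
    """
    # Normalize each client's update by its own number of local steps,
    # so fast clients do not bias the aggregated objective.
    normalized = [d / t for d, t in zip(client_deltas, local_steps)]
    # Effective number of local steps: tau_eff = sum_i p_i * tau_i.
    tau_eff = sum(p * t for p, t in zip(weights, local_steps))
    # Weighted average of normalized updates, rescaled by tau_eff,
    # applied as a single server-side step.
    avg = sum(p * d for p, d in zip(weights, normalized))
    return x_global - lr * tau_eff * avg
```

For example, if one client takes 6 local steps and another takes 2, naive averaging of their raw deltas would weight the faster client's objective 3x as heavily; after normalization both contribute equally, and `tau_eff` restores the overall step size.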