30 Nov 2014 | WEI SHI, QING LING, GANG WU, AND WOTAO YIN
This paper introduces EXTRA, a novel decentralized exact first-order algorithm for solving consensus optimization problems over a multi-agent network. The algorithm converges exactly to an optimal solution without requiring a central server or long-distance communication, offering better load balance than centralized schemes. EXTRA uses a fixed, large step size that is independent of the network size, and its iterations are synchronized across agents. Each agent's local variable converges to the exact minimizer of the aggregate objective, and the agents reach consensus. Unlike decentralized gradient descent (DGD), which requires a diminishing step size to converge to the exact solution, EXTRA combines the gradients of the last two iterates, enabling convergence that is both faster and exact.
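Concretely, with $W$ the symmetric doubly stochastic mixing matrix of the network, $\alpha$ the step size, $x^k$ the stacked local variables, and $\nabla f$ the stacked local gradients, the two recursions can be sketched as follows (using the paper's canonical choice $\tilde{W} = (I+W)/2$):

```latex
% DGD: one round of neighbor mixing plus a local gradient step
x^{k+1} = W x^k - \alpha \nabla f(x^k)

% EXTRA: two mixing matrices, W and \tilde{W} = (I + W)/2,
% and the gradients of the last two iterates
x^{k+2} = (I + W)\, x^{k+1} - \tilde{W} x^k
          - \alpha \left( \nabla f(x^{k+1}) - \nabla f(x^k) \right)
```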
EXTRA achieves the best known convergence rates for decentralized consensus optimization with convex, Lipschitz-differentiable objectives. If the objective functions are convex with Lipschitz continuous gradients, EXTRA has an ergodic convergence rate of $ O(1/k) $ in terms of the first-order optimality residual. If the objective is also (restricted) strongly convex, EXTRA converges linearly at the rate $ O(C^{-k}) $ for some constant $ C > 1 $.
The algorithm is based on a modification of DGD: its update formula is obtained by taking the difference of two consecutive DGD-type updates, which use two different mixing matrices. This construction ensures that any fixed point of the iteration satisfies both the consensus and the optimality conditions. The convergence analysis shows that EXTRA reaches an optimal solution under standard assumptions (convexity and Lipschitz continuity of the gradients), and converges linearly when the objective is additionally (restricted) strongly convex.
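As an illustration, the EXTRA recursion can be sketched in a few lines of NumPy. The helper `extra` and the toy averaging setup below are hypothetical, not the authors' code; they assume a symmetric doubly stochastic mixing matrix $W$ and the canonical choice $\tilde{W} = (I+W)/2$:

```python
import numpy as np

def extra(grad, W, x0, alpha, num_iters):
    """Sketch of the EXTRA iteration (hypothetical helper).

    grad : maps the stacked local variables (row i held by agent i)
           to the stacked local gradients.
    W    : symmetric doubly stochastic mixing matrix of the network.
    """
    n = W.shape[0]
    I = np.eye(n)
    W_tilde = 0.5 * (I + W)               # canonical choice W~ = (I + W)/2
    x_prev = x0
    x_curr = W @ x0 - alpha * grad(x0)    # the first step is one plain DGD step
    for _ in range(num_iters):
        # EXTRA update: two mixing matrices, gradients of the last two iterates
        x_next = ((I + W) @ x_curr - W_tilde @ x_prev
                  - alpha * (grad(x_curr) - grad(x_prev)))
        x_prev, x_curr = x_curr, x_next
    return x_curr

# Toy usage: decentralized averaging on a fully connected 3-agent network,
# f_i(x) = (x - b_i)^2 / 2, so the exact consensus minimizer is mean(b) = 3.
b = np.array([1.0, 2.0, 6.0])
W = np.full((3, 3), 1.0 / 3.0)            # mixing matrix of the complete graph
x = extra(lambda x: x - b, W, np.zeros(3), alpha=0.5, num_iters=200)
```

With this fixed step size, all agents' variables agree and equal the exact average, in contrast to DGD with a fixed step size, which would stall at an inexact point.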
Numerical simulations demonstrate that EXTRA outperforms traditional methods in terms of convergence speed and accuracy. The algorithm is applicable to various consensus optimization problems, including decentralized averaging, learning, estimation, sparse optimization, and low-rank matrix completion. The key contributions of this paper include the development of EXTRA, its convergence analysis, and numerical validation of its performance.