ACHIEVING GEOMETRIC CONVERGENCE FOR DISTRIBUTED OPTIMIZATION OVER TIME-VARYING GRAPHS

20 Mar 2017 | Angelia Nedić, Alex Olshevsky, and Wei Shi
This paper presents distributed optimization algorithms that achieve geometric convergence over time-varying graphs. For undirected graphs, the DIGing algorithm combines a distributed inexact gradient method with a gradient tracking technique; it uses doubly stochastic mixing matrices and a fixed step-size, and it converges to a global minimizer. For directed graphs, the Push-DIGing algorithm employs column stochastic matrices (via a push-sum protocol), again with a fixed step-size, and likewise converges to a global minimizer.

Under strong convexity, both algorithms converge at R-linear (geometric) rates, and explicit convergence rate estimates are given; for undirected graphs, the rate estimate for DIGing scales polynomially in the number of agents. The analysis is based on the small-gain theorem and shows that the algorithms converge R-linearly whenever the fixed step-size lies within explicit bounds. The paper also discusses related algorithms such as EXTRA and Aug-DGM, highlighting their connections to DIGing, and numerical experiments validate the algorithms' effectiveness. The key contributions are linearly convergent methods for distributed optimization over time-varying graphs and the use of the small-gain theorem as a tool for convergence analysis.
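Concretely, each agent i in DIGing maintains an iterate x_i(k) and a gradient tracker y_i(k), initialized as y_i(0) = ∇f_i(x_i(0)), and updates them as

$$
\begin{aligned}
x_i(k+1) &= \sum_{j=1}^{n} W_{ij}(k)\, x_j(k) - \alpha\, y_i(k),\\
y_i(k+1) &= \sum_{j=1}^{n} W_{ij}(k)\, y_j(k) + \nabla f_i\big(x_i(k+1)\big) - \nabla f_i\big(x_i(k)\big),
\end{aligned}
$$

where W(k) is the doubly stochastic mixing matrix of the graph at time k and α is the fixed step-size; the tracker y_i(k) asymptotically follows the average gradient, which is what permits a fixed step-size and a geometric rate.

To make the iteration tangible, below is a minimal, self-contained sketch of DIGing on a toy decentralized least-squares problem. The problem data, the two alternating Metropolis-style mixing matrices (emulating a time-varying graph), and the step-size α = 0.01 are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the DIGing iteration on a toy decentralized problem.
# Assumptions (illustrative, not from the paper): n = 4 agents, local
# objectives f_i(x) = 0.5 * ||A_i @ x - b_i||^2, and two doubly stochastic
# mixing matrices alternated over time to mimic a time-varying graph.
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 3                                   # agents, decision dimension
A = [rng.standard_normal((5, d)) for _ in range(n)]
b = [rng.standard_normal(5) for _ in range(n)]

def grad(i, x):
    """Gradient of the local objective f_i at x."""
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic (symmetric) mixing matrices for a ring and a path,
# alternated across iterations to emulate a time-varying graph sequence.
W_ring = np.array([[0.50, 0.25, 0.00, 0.25],
                   [0.25, 0.50, 0.25, 0.00],
                   [0.00, 0.25, 0.50, 0.25],
                   [0.25, 0.00, 0.25, 0.50]])
W_path = np.array([[0.75, 0.25, 0.00, 0.00],
                   [0.25, 0.50, 0.25, 0.00],
                   [0.00, 0.25, 0.50, 0.25],
                   [0.00, 0.00, 0.25, 0.75]])

alpha = 0.01                                  # fixed step-size (assumed small enough)
x = np.zeros((n, d))                          # x[i] is agent i's iterate
y = np.array([grad(i, x[i]) for i in range(n)])   # trackers: y_i(0) = grad f_i(x_i(0))

for k in range(3000):
    W = W_ring if k % 2 == 0 else W_path      # time-varying mixing matrix W(k)
    x_new = W @ x - alpha * y                 # consensus step minus tracked gradient
    y = W @ y + np.array([grad(i, x_new[i]) - grad(i, x[i]) for i in range(n)])
    x = x_new

# All agents should (approximately) agree on the global minimizer of
# sum_i f_i, i.e. the least-squares solution of the stacked system.
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print("max deviation from minimizer:", np.max(np.abs(x - x_star)))
```

Push-DIGing follows the same gradient-tracking template, but replaces the doubly stochastic W(k) with column stochastic matrices plus push-sum scaling variables, which is what makes it applicable over directed time-varying graphs.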