April 12, 2011 | John C. Duchi, Alekh Agarwal, Martin J. Wainwright
The paper presents a distributed optimization algorithm, based on dual averaging of subgradients, for solving convex optimization problems over a network. The authors develop and analyze the algorithm, providing sharp bounds on its convergence rate as a function of network size and topology. Their analysis cleanly separates the convergence of the optimization procedure from the effects of communication constraints imposed by the network structure. The key contribution links the convergence rate to the spectral properties of the underlying graph: the number of iterations required scales inversely with the spectral gap of the network. The paper also covers both deterministic and stochastic settings, for the optimization as well as the communication, and includes simulations that validate the theoretical predictions. Compared with previous work, the results demonstrate improved convergence rates and a sharper understanding of how network topology affects convergence.
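To make the algorithm concrete, here is a minimal sketch of distributed dual averaging on a toy problem. The problem instance, network, and parameter choices below are illustrative assumptions, not taken from the paper: each of five nodes on a ring holds a private quadratic f_i(x) = ½(x − b_i)², and the network jointly minimizes their average, whose optimum is the mean of the b_i. Each node mixes its neighbors' dual variables through a doubly stochastic matrix, adds its local subgradient, and takes a proximal step (here with ψ(x) = ½x², so the step reduces to a rescaling).

```python
import numpy as np

n = 5                        # number of nodes, arranged in a ring
b = np.array([0., 1., 2., 3., 4.])   # node i privately holds f_i(x) = 0.5*(x - b[i])**2

# Doubly stochastic mixing matrix for the ring: 1/3 self-weight, 1/3 per neighbor
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 1 / 3
    P[i, (i - 1) % n] = 1 / 3
    P[i, (i + 1) % n] = 1 / 3

T = 5000
z = np.zeros(n)              # dual variables (accumulated, mixed subgradients)
x = np.zeros(n)              # primal iterates
x_avg = np.zeros(n)          # running averages, which the convergence theory bounds

for t in range(1, T + 1):
    g = x - b                # local subgradient of f_i at x_i
    z = P @ z + g            # mix neighbors' dual variables, add own subgradient
    alpha = 1.0 / np.sqrt(t) # diminishing step size
    x = -alpha * z           # proximal step: argmin_x <z,x> + (1/alpha)*0.5*x**2
    x_avg += (x - x_avg) / t # incremental running average of the iterates

print(x_avg)                 # every node's average iterate approaches mean(b) = 2.0
```

With the 1/√t step size, all five running averages converge toward the global optimum 2.0, illustrating how local subgradient steps plus neighbor-to-neighbor mixing reach consensus on the minimizer; how fast the off-consensus error dies out is governed by the spectral gap of P.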