Distributed optimization over time-varying directed graphs


16 Mar 2014 | Angelia Nedić and Alex Olshevsky
This paper presents subgradient-push, a distributed optimization algorithm for a network of nodes, each with access to a private convex function; the goal is to minimize the sum of these functions. Communication between nodes is described by a time-varying sequence of directed graphs that is uniformly strongly connected. Subgradient-push uses broadcast-based communication and requires each node to know only its own out-degree; it is fully distributed and needs no knowledge of the graph sequence or of the number of agents.

Under the assumption of bounded subgradients, the algorithm converges to an optimal solution at a rate of $O(\ln t/\sqrt{t})$, with constants that depend on the initial values, the subgradient norms, the speed of information diffusion in the network, and the imbalance of influence among the nodes. The algorithm combines subgradient steps with the push-sum protocol, which lets nodes compute averages and other aggregates over the network despite directed, time-varying communication. The paper compares subgradient-push with prior work on distributed optimization, shows that it handles time-varying directed graphs more efficiently, and supports the analysis with numerical simulations.
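The interplay of push-sum averaging and subgradient steps can be sketched on a toy problem. In the sketch below each node $i$ holds $f_i(x) = (x - a_i)^2$, so the sum is minimized at the mean of the $a_i$; the specific graph sequence, step-size schedule $\alpha(t) = 1/\sqrt{t}$, and variable names are illustrative assumptions, not taken from the paper. Each node maintains a value $x_i$, a push-sum weight $y_i$ (initialized to 1), and a de-biased estimate $z_i = w_i / y_i$; every round, nodes broadcast $x_j/d_j$ and $y_j/d_j$ to their out-neighbors, where $d_j$ is node $j$'s out-degree (counting a self-loop) — the only global quantity each node must know.

```python
import numpy as np

# Toy subgradient-push sketch (illustrative, not the paper's code).
# Minimize sum_i f_i(x) with f_i(x) = (x - a_i)^2; minimizer is mean(a).
n = 4
a = np.array([1.0, 2.0, 3.0, 4.0])   # node i privately holds f_i(x) = (x - a_i)^2
opt = a.mean()                       # minimizer of the sum, here 2.5

def subgrad(i, z):
    """Subgradient of f_i at z (f_i is differentiable here)."""
    return 2.0 * (z - a[i])

# A small time-varying sequence of directed graphs, each with self-loops.
# The union over any 2 consecutive steps is strongly connected, so the
# sequence is uniformly strongly connected. out_nbrs[t % 2][i] lists the
# out-neighbors of node i (including i itself).
graphs = [
    [[0, 1], [1, 2], [2, 3], [3, 0]],   # directed ring + self-loops
    [[0, 2], [1, 3], [2, 0], [3, 1]],   # cross links + self-loops
]

x = a.copy()       # x_i(0): arbitrary initial values
y = np.ones(n)     # push-sum weights, y_i(0) = 1

T = 2000
for t in range(T):
    out = graphs[t % len(graphs)]
    w_new = np.zeros(n)
    y_new = np.zeros(n)
    # Push step: node j broadcasts x_j/d_j and y_j/d_j to its out-neighbors,
    # where d_j is its out-degree including the self-loop.
    for j in range(n):
        d_j = len(out[j])
        for i in out[j]:
            w_new[i] += x[j] / d_j
            y_new[i] += y[j] / d_j
    z = w_new / y_new                   # de-biased estimates z_i(t+1)
    alpha = 1.0 / np.sqrt(t + 1)        # diminishing step size (assumed schedule)
    g = np.array([subgrad(i, z[i]) for i in range(n)])
    x = w_new - alpha * g               # subgradient step on the pushed values
    y = y_new

print(z)  # every node's estimate should approach opt
```

Dividing by the push-sum weight $y_i$ is what corrects for the influence imbalance a directed graph creates: the row sums of the mixing step are not uniform, but the ratio $w_i/y_i$ still tracks the network average, which is why only out-degrees need to be known.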