Federated Optimization: Distributed Machine Learning for On-Device Intelligence

October 11, 2016 | Jakub Konečný, H. Brendan McMahan, Daniel Ramage, Peter Richtárik
Federated optimization is a distributed machine learning setting in which training data is spread unevenly across a very large number of nodes, such as users' mobile devices. The goal is to train a single centralized model efficiently, with the number of communication rounds, rather than local computation, as the dominant cost. The setting is motivated by the desire to keep training data on users' devices instead of shipping it to a centralized data center, which improves privacy and reduces network bandwidth usage.

Federated learning, a key approach in this setting, trains a shared model by aggregating local updates computed on-device, so raw data is never stored on a server; privacy can be strengthened further using differential privacy techniques.

The paper introduces federated optimization as a new setting for distributed learning in which data is non-IID, unbalanced, and sparse, and it analyzes why existing algorithms struggle under these conditions. It proposes a new algorithm for sparse convex problems, reviews related work including stochastic gradient descent, randomized coordinate descent, and variance-reduced methods, and highlights the need for communication-efficient algorithms in distributed settings. The authors conclude that federated optimization is a promising direction for future research in distributed machine learning.
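To make the setting concrete, the global objective is typically a weighted average of per-node objectives, f(w) = sum_k (n_k / n) F_k(w), where node k holds n_k of the n total examples and F_k is the average loss over that node's local data. The sketch below is a minimal, hypothetical illustration in the spirit of weighted update averaging, not the paper's proposed algorithm; all function names and the least-squares loss are assumptions chosen for simplicity. It shows one communication round: each client improves the global model on its own data, and the server combines the results weighted by local dataset size.

```python
import numpy as np

# Hypothetical sketch of one federated round (not the paper's algorithm).
# Each of K clients holds n_k local examples, computes an update to the
# global model w on its own data, and the server averages the updated
# models weighted by n_k -- raw data never leaves the device.

def local_update(w, X, y, lr=0.1, epochs=1):
    """One client's update: a few epochs of gradient descent on a local
    least-squares objective F_k(w) = (1/n_k) * sum_i (x_i . w - y_i)^2."""
    w = w.copy()
    n_k = len(y)
    for _ in range(epochs):
        grad = (2.0 / n_k) * (X.T @ (X @ w - y))
        w -= lr * grad
    return w

def federated_round(w, client_data):
    """Server step: aggregate client models weighted by local dataset size."""
    total = sum(len(y) for _, y in client_data)
    new_w = np.zeros_like(w)
    for X, y in client_data:
        new_w += (len(y) / total) * local_update(w, X, y)
    return new_w

# Example: three clients with unbalanced local datasets (10, 50, 200 examples).
rng = np.random.default_rng(0)
w = np.zeros(5)
clients = [(rng.normal(size=(n, 5)), rng.normal(size=n)) for n in (10, 50, 200)]
for _ in range(20):  # communication rounds are the scarce resource
    w = federated_round(w, clients)
```

Because only model parameters cross the network, the per-round communication cost is independent of how much data each device holds, which is why the number of rounds, rather than local computation, is treated as the quantity to minimize.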