Federated Learning of Deep Networks using Model Averaging

17 Feb 2016 | H. Brendan McMahan, Eider Moore, Daniel Ramage, Blaise Agüera y Arcas
The paper "Federated Learning of Deep Networks using Model Averaging" by H. Brendan McMahan, Eider Moore, Daniel Ramage, and Blaise Agüera y Arcas from Google, introduces a decentralized approach to training deep neural networks on mobile devices. The authors advocate for a method that keeps the training data distributed on the devices and learns a shared model by aggregating locally computed updates, termed *Federated Learning*. They present the *FederatedAveraging* (FedAvg) algorithm, which combines local SGD training on each client with model averaging at the central server. This method is robust to unbalanced and non-IID data distributions and can significantly reduce the number of communication rounds needed to train a deep network, by one to two orders of magnitude. The paper discusses the privacy benefits of federated learning, particularly in handling sensitive data, and highlights its advantages for large datasets. Experimental results on image classification and language modeling tasks demonstrate the effectiveness of the proposed approach, showing substantial speedups and high-quality model performance.The paper "Federated Learning of Deep Networks using Model Averaging" by H. Brendan McMahan, Eider Moore, Daniel Ramage, and Blaise Agüera y Arcas from Google, introduces a decentralized approach to training deep neural networks on mobile devices. The authors advocate for a method that keeps the training data distributed on the devices and learns a shared model by aggregating locally computed updates, termed *Federated Learning*. They present the *FederatedAveraging* (FedAvg) algorithm, which combines local SGD training on each client with model averaging at the central server. This method is robust to unbalanced and non-IID data distributions and can significantly reduce the number of communication rounds needed to train a deep network, by one to two orders of magnitude. The paper discusses the privacy benefits of federated learning, particularly in handling sensitive data, and highlights its advantages for large datasets. Experimental results on image classification and language modeling tasks demonstrate the effectiveness of the proposed approach, showing substantial speedups and high-quality model performance.