Adaptive Federated Learning in Resource Constrained Edge Computing Systems

17 Feb 2019 | Shiqiang Wang, Tiffany Tuor, Theodoros Salonidis, Kin K. Leung, Christian Makaya, Ting He, Kevin Chan
This paper addresses the problem of efficiently utilizing limited computation and communication resources in edge computing systems to achieve optimal learning performance in federated learning. The focus is on gradient-descent-based federated learning algorithms, which apply to a wide range of machine learning models. The paper analyzes the convergence bound of distributed gradient descent from a theoretical perspective and proposes a control algorithm that dynamically adapts the frequency of global aggregation to minimize the learning loss under a given resource budget. The algorithm is evaluated on real datasets in both a networked prototype system and a larger-scale simulated environment, showing near-optimal performance for various machine learning models and data distributions. The paper also surveys related work in distributed machine learning and federated learning, highlighting the importance of adapting the global aggregation frequency in resource-constrained edge computing systems.

The main contributions are a theoretical analysis of convergence bounds for federated learning with non-i.i.d. data distributions, a control algorithm that adapts the global aggregation frequency in real time, and extensive experiments demonstrating the effectiveness of the proposed approach.
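To make the control loop concrete, below is a minimal sketch of federated gradient descent in which each node performs tau local updates between global aggregations, and training stops when a resource budget is exhausted. This is an illustration under stated assumptions, not the paper's implementation: the synthetic least-squares task, the cost constants, and names such as local_gradient are invented for the example, and the paper's actual adaptation rule, which re-estimates tau by minimizing its derived convergence bound from run-time parameter estimates, is replaced here by a simple budget check.

import numpy as np

# Sketch only: tau local gradient steps per node between aggregations,
# subject to a total resource budget. Constants and the data split are
# illustrative assumptions, not values from the paper.

rng = np.random.default_rng(0)

# Synthetic linear-regression data split across edge nodes. The paper
# studies non-i.i.d. splits; here the split is random for simplicity.
NUM_NODES, DIM, SAMPLES_PER_NODE = 5, 10, 100
true_w = rng.normal(size=DIM)
data = []
for _ in range(NUM_NODES):
    X = rng.normal(size=(SAMPLES_PER_NODE, DIM))
    y = X @ true_w + 0.1 * rng.normal(size=SAMPLES_PER_NODE)
    data.append((X, y))

def local_gradient(w, X, y):
    # Gradient of the mean-squared-error loss at one node.
    return 2.0 * X.T @ (X @ w - y) / len(y)

ETA = 0.01       # learning rate
C_LOCAL = 1.0    # assumed resource cost of one local update per node
B_AGG = 5.0      # assumed resource cost of one global aggregation
BUDGET = 2000.0  # total resource budget

w_global = np.zeros(DIM)
tau = 4          # local steps between aggregations (adapted below)
spent = 0.0

while spent + NUM_NODES * tau * C_LOCAL + B_AGG <= BUDGET:
    # Each node starts from the current global model and runs tau local steps.
    local_models = []
    for X, y in data:
        w = w_global.copy()
        for _ in range(tau):
            w -= ETA * local_gradient(w, X, y)
        local_models.append(w)

    # Global aggregation: equal-weight average (equal-sized local datasets).
    w_global = np.mean(local_models, axis=0)
    spent += NUM_NODES * tau * C_LOCAL + B_AGG

    # Placeholder for the paper's adaptation rule, which picks the tau that
    # minimizes the theoretical convergence bound using run-time estimates
    # of gradient divergence and loss smoothness. Here we only shrink tau
    # near the end of the budget so one more round still fits.
    remaining = BUDGET - spent
    while tau > 1 and NUM_NODES * tau * C_LOCAL + B_AGG > remaining:
        tau = max(1, tau // 2)

loss = np.mean([np.mean((X @ w_global - y) ** 2) for X, y in data])
print(f"final loss: {loss:.4f}, resources used: {spent:.0f}/{BUDGET:.0f}")

The key trade-off the sketch exposes is the one the paper optimizes: a larger tau spends more of the budget on cheap local computation but lets local models drift apart (especially with non-i.i.d. data), while a smaller tau aggregates more often at a higher communication cost.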