This survey provides a comprehensive overview of Federated Learning (FL) in mobile edge networks. With the increasing capabilities of mobile devices and the rise of Deep Learning (DL), FL has emerged as a promising solution to the privacy concerns and communication inefficiencies of traditional cloud-based Machine Learning (ML) approaches. FL enables collaborative training of ML models without sharing raw data: each participant trains locally and sends only model updates to a central server for aggregation. This approach enhances privacy, reduces communication costs, and lowers latency, making it well suited to mobile edge networks.
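To make the aggregation step concrete, the following is a minimal sketch of FedAvg-style weighted averaging of client updates; the function and variable names (federated_average, client_updates) are illustrative assumptions and not taken from the survey.

```python
import numpy as np

def federated_average(client_updates):
    """Aggregate local model updates by a data-size-weighted average.

    client_updates: list of (weights, num_samples) pairs, where weights is a
    list of numpy arrays (one per layer) produced by local training. Raw data
    never leaves the device; only these parameters are uploaded.
    """
    total = sum(n for _, n in client_updates)
    n_layers = len(client_updates[0][0])
    # Weight each client's parameters by its share of the total training data.
    return [
        sum(w[layer] * (n / total) for w, n in client_updates)
        for layer in range(n_layers)
    ]

# Example round with two clients and a single-layer model (illustrative values).
clients = [
    ([np.array([1.0, 2.0])], 100),  # client A: 100 local samples
    ([np.array([3.0, 4.0])], 300),  # client B: 300 local samples
]
new_global_weights = federated_average(clients)  # -> [array([2.5, 3.5])]
```

The server only ever sees model parameters; weighting by local sample count is the standard FedAvg heuristic of giving clients with more data proportionally more influence on the global model.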
FL is particularly attractive for mobile edge networks because it supports both collaborative model training and DL-based network optimization. However, implementing FL at scale presents challenges in communication cost, resource allocation, and privacy and security. The survey discusses these challenges, reviews existing solutions, and examines the applications of FL in mobile edge network optimization, highlighting FL's role in enabling privacy-preserving, collaborative ML model training.
The survey also discusses the unique characteristics of FL compared to traditional distributed ML approaches, including slow and unstable communication, heterogeneous devices, and privacy and security concerns. It reviews approaches to reduce communication costs, such as edge and end computation, model compression, and importance-based updating. Additionally, it explores the use of FL in mobile edge network optimization, including cell association, computation offloading, and vehicular networks.
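As one concrete illustration of importance-based updating, the sketch below uploads only the largest-magnitude entries of a local gradient each round; the name sparsify_top_k and the choice of k are illustrative assumptions rather than a specific scheme from the survey.

```python
import numpy as np

def sparsify_top_k(gradient, k):
    """Keep only the k largest-magnitude entries of a flattened gradient.

    Returns (indices, values); the client uploads these 2*k numbers instead
    of the full gradient, and the server scatters them back into a sparse
    update, cutting per-round communication.
    """
    idx = np.argpartition(np.abs(gradient), -k)[-k:]
    return idx, gradient[idx]

# A client transmits only the two most "important" components of its gradient.
local_grad = np.array([0.01, -0.90, 0.05, 0.70, -0.02])
indices, values = sparsify_top_k(local_grad, k=2)
# indices -> positions of -0.90 and 0.70; the remaining entries are not sent
```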
The survey concludes with a discussion of open challenges and future research directions in FL, emphasizing the need for further work on communication costs, resource allocation, and privacy and security, and underscores the role of FL as an enabling technology for ML model training and mobile edge network optimization.