24 May 2019 | Zhi Zhou, Xu Chen, En Li, Liekang Zeng, Ke Luo, Junshan Zhang
The paper "Edge Intelligence: Paving the Last Mile of Artificial Intelligence with Edge Computing" by Zhi Zhou, Xu Chen, En Li, Liekang Zeng, Ke Luo, and Junshan Zhang explores the integration of edge computing and artificial intelligence (AI) to address the challenges of processing large volumes of data generated by mobile and IoT devices at the network edge. The authors highlight the need for AI applications to be deployed closer to the data source to reduce latency, improve energy efficiency, and enhance privacy. They discuss the motivations, benefits, and definitions of edge intelligence, which involves running AI algorithms locally on end devices or in collaboration with cloud datacenters. The paper reviews various architectures, frameworks, and key technologies for training and inference of deep learning models at the edge, including federated learning, aggregation frequency control, gradient compression, DNN splitting, and knowledge transfer learning. The authors also propose a six-level rating system to evaluate the level of edge intelligence based on data offloading and its impact on latency, privacy, and communication costs. Finally, they discuss future research opportunities and challenges in the field of edge intelligence.The paper "Edge Intelligence: Paving the Last Mile of Artificial Intelligence with Edge Computing" by Zhi Zhou, Xu Chen, En Li, Liekang Zeng, Ke Luo, and Junshan Zhang explores the integration of edge computing and artificial intelligence (AI) to address the challenges of processing large volumes of data generated by mobile and IoT devices at the network edge. The authors highlight the need for AI applications to be deployed closer to the data source to reduce latency, improve energy efficiency, and enhance privacy. They discuss the motivations, benefits, and definitions of edge intelligence, which involves running AI algorithms locally on end devices or in collaboration with cloud datacenters. The paper reviews various architectures, frameworks, and key technologies for training and inference of deep learning models at the edge, including federated learning, aggregation frequency control, gradient compression, DNN splitting, and knowledge transfer learning. The authors also propose a six-level rating system to evaluate the level of edge intelligence based on data offloading and its impact on latency, privacy, and communication costs. Finally, they discuss future research opportunities and challenges in the field of edge intelligence.