Applications of Deep Reinforcement Learning in Communications and Networking: A Survey

18 Oct 2018 | Nguyen Cong Luong, Dinh Thai Hoang, Member, IEEE, Shimin Gong, Member, IEEE, Dusit Niyato, Fellow, IEEE, Ping Wang, Senior Member, IEEE, Ying-Chang Liang, Fellow, IEEE, Dong In Kim, Senior Member, IEEE
This paper provides a comprehensive literature review on the applications of deep reinforcement learning (DRL) in communications and networking. It highlights the challenges and advancements in modern networks, such as IoT and UAV networks, where network entities need to make local decisions to optimize performance under uncertain environments. The paper begins with an introduction to DRL, covering fundamental concepts, Markov Decision Processes (MDPs), and reinforcement learning techniques. It then reviews various DRL approaches for addressing issues in communications and networking, including dynamic network access, data rate control, wireless caching, data offloading, network security, connectivity preservation, traffic routing, and data collection. The paper also discusses advanced DRL models and their extensions, such as Double DQN, Prioritized Experience Replay, Dueling DQN, Asynchronous Multi-step DQN, Distributional DQN, Noisy Nets, and Rainbow DQN. Finally, it outlines future research directions and challenges in applying DRL to communications and networking.
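For readers new to the topic, the sketch below illustrates the tabular Q-learning update on a toy MDP, which is the foundation that the DQN family of methods surveyed in this paper builds on (DQN replaces the table with a neural network trained toward the same Bellman target). The toy MDP, its transition matrix P, rewards R, and all hyperparameters are illustrative assumptions, not examples taken from the survey.

```python
import numpy as np

# Toy MDP with 2 states and 2 actions (illustrative values, not from the survey).
# P[s, a, s'] is the probability of moving to state s' after taking action a in state s.
P = np.array([[[0.8, 0.2], [0.2, 0.8]],
              [[0.6, 0.4], [0.1, 0.9]]])
# R[s, a] is the immediate reward for taking action a in state s.
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

gamma, alpha, eps = 0.9, 0.1, 0.1      # discount factor, learning rate, exploration rate
n_states, n_actions = R.shape

Q = np.zeros((n_states, n_actions))    # tabular action-value estimates
rng = np.random.default_rng(0)
s = 0
for _ in range(5000):
    # Epsilon-greedy action selection.
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
    s_next = int(rng.choice(n_states, p=P[s, a]))
    r = R[s, a]
    # Q-learning (Bellman) update; DQN fits a neural network to the same
    # target r + gamma * max_a' Q(s', a') instead of updating a table entry.
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

print(Q)  # learned action values for the toy MDP
```

The extensions mentioned in the abstract (Double DQN, Dueling DQN, Prioritized Experience Replay, and so on) modify how this target is computed, how the network is structured, or how past transitions are sampled, rather than changing the underlying MDP formulation.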