Model-Assisted Learning for Adaptive Cooperative Perception of Connected Autonomous Vehicles

18 Jan 2024 | Kaige Qu, Member, IEEE, Weihua Zhuang, Fellow, IEEE, Qiang Ye, Senior Member, IEEE, Wen Wu, Senior Member, IEEE, and Xuemin Shen, Fellow, IEEE
This paper proposes an adaptive cooperative perception scheme for connected and autonomous vehicles (CAVs) in a mixed-traffic autonomous driving scenario. The scheme dynamically switches between cooperative perception (CP) and stand-alone perception (SP) modes for CAV pairs based on network conditions and perception task requirements. The goal is to maximize computing efficiency gain while satisfying perception delay constraints. A model-assisted multi-agent reinforcement learning (MARL) solution is developed to adaptively decide CAV cooperation and allocate communication and computing resources. The MARL approach integrates an adaptive CAV cooperation decision with an optimization model for resource allocation. Simulation results show that the proposed scheme achieves high computing efficiency gain compared to benchmark schemes.

The key contributions include: (1) an adaptive cooperative perception scheme for CAV pairs in a moving mixed-traffic vehicle cluster; (2) a joint adaptive CAV cooperation and resource allocation problem formulation; and (3) a model-assisted MARL solution for adaptive CAV cooperation and resource allocation.

The scheme considers dynamic perception workloads, channel conditions, and radio resource availability to optimize computing efficiency and delay satisfaction. The model-assisted MARL solution uses a centralized-training distributed-execution framework to learn adaptive cooperation decisions and optimize resource allocation. The algorithm is implemented at the cluster head, which collects the overall network dynamics in the vehicle cluster. The solution balances computing efficiency gain against switching cost to adaptively switch CAV pairs between SP and CP modes.
The scheme is evaluated through simulations, demonstrating its effectiveness in achieving high computing efficiency gain under dynamic network conditions.
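The core decision described above, switching a CAV pair between SP and CP by weighing computing efficiency gain against switching cost under a perception delay constraint, can be sketched as follows. This is an illustrative simplification, not the paper's actual MARL algorithm; all names, thresholds, and the hysteresis rule are assumptions made for clarity.

```python
# Illustrative sketch of the SP/CP mode-switching trade-off described
# in the paper. The paper learns this decision with model-assisted
# MARL; here we show only a hypothetical greedy rule for one CAV pair.
from dataclasses import dataclass


@dataclass
class PairState:
    mode: str               # current mode, "SP" or "CP"
    efficiency_gain: float  # estimated computing efficiency gain of CP over SP
    switching_cost: float   # one-time cost incurred when changing modes
    predicted_delay: float  # predicted perception delay (s) under CP
    delay_budget: float     # perception delay constraint (s)


def decide_mode(state: PairState) -> str:
    """Greedy mode decision for one CAV pair (hypothetical rule)."""
    cp_feasible = state.predicted_delay <= state.delay_budget
    if state.mode == "SP":
        # Switch to CP only if the gain outweighs the switching cost
        # and the delay constraint would still be satisfied.
        if cp_feasible and state.efficiency_gain > state.switching_cost:
            return "CP"
        return "SP"
    # Currently in CP: fall back to SP if CP becomes infeasible or the
    # gain vanishes; otherwise stay put (avoids frequent switching).
    if not cp_feasible or state.efficiency_gain <= 0.0:
        return "SP"
    return "CP"
```

In the paper, the cluster head would make such decisions jointly for all CAV pairs, with the learned policy replacing this fixed rule and an optimization model allocating the accompanying radio and computing resources.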