PACP: Priority-Aware Collaborative Perception for Connected and Autonomous Vehicles

21 Aug 2024 | Zhengru Fang, Senkang Hu, Haonan An, Yuang Zhang, Jingjing Wang, Hangcheng Cao, Xianhao Chen, Member, IEEE and Yuguang Fang, Fellow, IEEE
The paper introduces Priority-Aware Collaborative Perception (PACP), a framework for connected and autonomous vehicles (CAVs) to enhance surrounding perception. Traditional Bird's Eye View (BEV) systems suffer from limitations such as blind spots, and collaborative perception, which fuses data from multiple vehicles, can mitigate these issues. However, existing collaborative perception methods often assume a fully connected graph with equal transmission rates, neglecting the varying importance of individual vehicles caused by channel variations and perception redundancy. To address these challenges, PACP proposes a BEV-match mechanism that assigns priority levels based on the correlation between nearby CAVs and the ego vehicle, balancing communication overhead against enhanced perception accuracy.

The framework also employs submodular optimization to jointly optimize transmission rates, link connectivity, and compression metrics. Additionally, a deep learning-based adaptive autoencoder is integrated to modulate image reconstruction quality under dynamic channel conditions. Experimental results on the CARLA simulation platform with the OPV2V dataset show that PACP outperforms existing methods by 8.27% in utility value and 13.60% in Intersection over Union (IoU). The key contributions of the paper include the first implementation of a priority-aware collaborative perception framework, the application of submodular theory for joint optimization, and the integration of a deep learning-based adaptive autoencoder.
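The summary does not give the exact form of the BEV-match mechanism, but the idea of ranking collaborators by the correlation between their BEV representations and the ego vehicle's can be illustrated with a minimal sketch. The function name, cosine-similarity correlation proxy, and normalization into a priority distribution below are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def bev_match_priority(ego_bev, neighbor_bevs):
    """Hypothetical sketch of BEV-match priority scoring.

    Ranks neighboring CAVs by the correlation between their flattened
    BEV feature maps and the ego vehicle's, so that more relevant
    collaborators receive higher transmission priority.
    """
    ego = ego_bev.ravel()
    ego = ego / (np.linalg.norm(ego) + 1e-8)
    scores = []
    for bev in neighbor_bevs:
        v = bev.ravel()
        v = v / (np.linalg.norm(v) + 1e-8)
        # Cosine similarity used here as a simple correlation proxy.
        scores.append(float(ego @ v))
    s = np.array(scores)
    # Normalize the scores into a priority distribution over neighbors.
    if s.sum() > 0:
        return s / s.sum()
    return np.full(len(s), 1.0 / len(s))
```

A neighbor whose BEV view largely overlaps the ego vehicle's receives a higher priority and, under the framework's rate allocation, a larger share of the communication budget.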
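The joint optimization of transmission rates and link connectivity via submodular theory can be sketched with a standard greedy routine for monotone submodular maximization under a budget. The coverage-style utility, cost model, and function below are illustrative assumptions chosen to show why submodularity helps, not the paper's actual objective.

```python
import numpy as np

def greedy_link_selection(gains, costs, budget):
    """Illustrative greedy maximization of a monotone submodular utility.

    gains[i][j] is the perception utility of covering region j via link i;
    the utility of a link set is the sum over regions of the best gain,
    which is submodular. Greedy selection by marginal-gain-per-cost under
    a knapsack budget yields a constant-factor approximation.
    """
    selected = []
    covered = np.zeros(gains.shape[1])
    remaining = budget
    while True:
        best, best_ratio = None, 0.0
        for i in range(gains.shape[0]):
            if i in selected or costs[i] > remaining:
                continue
            # Marginal utility of adding link i to the current set.
            marginal = np.maximum(covered, gains[i]).sum() - covered.sum()
            ratio = marginal / costs[i]
            if ratio > best_ratio:
                best, best_ratio = i, ratio
        if best is None:
            break
        selected.append(best)
        covered = np.maximum(covered, gains[best])
        remaining -= costs[best]
    return selected, covered.sum()
```

The diminishing-returns property is what makes greedy selection effective here: once a region is well covered by one link, additional links covering the same region add little, so the budget is spent on complementary views.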