A Cognitive-Based Trajectory Prediction Approach for Autonomous Driving

29 Feb 2024 | Haicheng Liao*, Yongkang Li*, Zhenning Li†, Chengyue Wang, Zhiyong Cui, Shengbo Eben Li, Senior Member, IEEE, and Chengzhong Xu†, Fellow, IEEE
This paper introduces the Human-Like Trajectory Prediction (HLTP) model for autonomous driving, which integrates a teacher-student knowledge distillation framework inspired by human cognitive processes. The teacher model mimics human visual perception, processing visual data with an adaptive visual sector and a surround-aware encoder; the student model mirrors the functions of the prefrontal and parietal cortices, synthesizing the distilled visual information with spatial awareness to make real-time predictive decisions.

HLTP demonstrates superior performance on the Macao Connected and Autonomous Driving (MoCAD) dataset as well as the NGSIM and HighD benchmarks, particularly in complex environments and under incomplete data. Its vision-aware pooling mechanism adjusts dynamically to vehicle speed, improving how visual information is processed, while the teacher-student framework lets the student learn from the teacher, raising prediction accuracy and robustness.

The paper's contributions include a novel vision-aware pooling mechanism, a cognitive-inspired knowledge distillation framework, and the introduction of the MoCAD dataset. The model is trained with a combination of track loss and distillation loss, tuned with a multi-level, multi-task hyperparameter strategy. Experimental results show that HLTP outperforms state-of-the-art baselines in trajectory prediction accuracy, with significant gains at both short-term and long-term horizons, and its lightweight design makes it suitable for real-time autonomous driving applications.
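The summary does not reproduce the exact formulation of the vision-aware pooling mechanism, but the idea of a speed-adaptive visual sector can be sketched as follows. Everything here (the function name, the angle bounds, the linear interpolation with speed) is an illustrative assumption, not the authors' implementation: the sketch only captures the stated intuition that the attended sector contracts as the ego vehicle speeds up, much like a human driver's useful field of view narrows at highway speeds.

```python
import numpy as np

def visual_sector_mask(ego_speed, neighbor_xy,
                       min_half_angle=np.deg2rad(30),
                       max_half_angle=np.deg2rad(90),
                       speed_scale=20.0):
    """Illustrative speed-adaptive visual sector (not the paper's exact code).

    Higher ego speed -> narrower attended sector, mimicking how a human
    driver's useful field of view contracts at speed.
    neighbor_xy: (N, 2) positions of surrounding agents in the ego frame,
    with +x pointing in the direction of travel.
    Returns a boolean mask over the neighbors that fall inside the sector.
    """
    # Interpolate the half-angle: wide at low speed, narrow at high speed.
    t = np.clip(ego_speed / speed_scale, 0.0, 1.0)
    half_angle = max_half_angle - t * (max_half_angle - min_half_angle)

    # Keep only agents whose bearing lies within the forward sector.
    bearings = np.arctan2(neighbor_xy[:, 1], neighbor_xy[:, 0])
    return np.abs(bearings) <= half_angle

# Example: at 25 m/s the sector has contracted to its minimum width,
# so only the agent almost directly ahead remains in view.
neighbors = np.array([[10.0, 2.0], [5.0, 8.0], [-3.0, 1.0]])
print(visual_sector_mask(25.0, neighbors))  # [ True False False]
```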
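The combination of track loss and distillation loss can likewise be illustrated with a minimal PyTorch-style sketch. The weighting scheme, the use of MSE for both terms, and all tensor names are assumptions for illustration; the paper's multi-level, multi-task hyperparameter tuning is not reproduced here.

```python
import torch
import torch.nn.functional as F

def hltp_style_loss(student_pred, gt_traj, student_feat, teacher_feat,
                    alpha=0.5):
    """Minimal sketch of a combined track + distillation objective.

    student_pred: (B, T, 2) future trajectory predicted by the student.
    gt_traj:      (B, T, 2) ground-truth future trajectory.
    student_feat / teacher_feat: intermediate representations to align;
    matching them transfers the teacher's knowledge to the student.
    alpha and the MSE choices are illustrative, not the paper's exact setup.
    """
    track_loss = F.mse_loss(student_pred, gt_traj)  # fit the ground truth
    # Detach the teacher so gradients update only the student.
    distill_loss = F.mse_loss(student_feat, teacher_feat.detach())
    return alpha * track_loss + (1.0 - alpha) * distill_loss

# Toy usage with random tensors standing in for model outputs.
B, T, D = 4, 25, 2
student_pred = torch.randn(B, T, D, requires_grad=True)
student_feat = torch.randn(B, 64, requires_grad=True)
loss = hltp_style_loss(student_pred, torch.randn(B, T, D),
                       student_feat, torch.randn(B, 64))
loss.backward()  # gradients flow into the student tensors only
```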