LC-LLM: Explainable Lane-Change Intention and Trajectory Predictions with Large Language Models

2024 | Mingxing Peng, Xusen Guo, Xianda Chen, Meixin Zhu*, and Kehua Chen
This paper proposes LC-LLM, an explainable lane-change prediction model that leverages Large Language Models (LLMs) to predict lane-change intentions and trajectories in autonomous driving. The key idea is to reformulate lane-change prediction as a language-modeling problem: heterogeneous driving-scenario information is converted into natural-language prompts, and the LLM is fine-tuned with supervised learning to predict lane-change intentions and trajectories. Chain-of-Thought (CoT) reasoning is integrated to improve prediction transparency and reliability, and explanatory requirements are included in the prompts at inference time. As a result, the model not only predicts lane-change intentions and trajectories but also provides CoT reasoning and explanations for its predictions, enhancing interpretability.

Extensive experiments on the highD dataset demonstrate that LC-LLM outperforms existing methods, improving lane-change intention prediction by 17.7% and lateral and longitudinal trajectory prediction by 64.4% and 66.1%, respectively. The model also performs well in ablation studies and robustness evaluations, including out-of-distribution scenarios. These results indicate that LC-LLM delivers accurate and interpretable predictions, which is crucial for developing safe and transparent autonomous driving systems. Limitations include evaluation on the highD dataset only and slower inference than the baseline models.
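The core reformulation, turning scenario features into a natural-language prompt that ends with CoT and explanation requirements, can be sketched as follows. The field names and prompt wording here are illustrative assumptions, not the paper's actual template:

```python
def build_lane_change_prompt(scenario: dict) -> str:
    """Convert heterogeneous driving-scenario features into a natural-language
    prompt, mirroring the paper's idea of casting lane-change prediction as a
    language-modeling task. Keys like 'ego' and 'neighbors' are hypothetical."""
    ego = scenario["ego"]
    lines = [
        "You are an expert driving-behavior analyst.",
        f"The ego vehicle is traveling at {ego['speed_mps']:.1f} m/s "
        f"with a lateral offset of {ego['lateral_offset_m']:.2f} m from the lane center.",
    ]
    # Describe each surrounding vehicle in plain language.
    for nb in scenario.get("neighbors", []):
        lines.append(
            f"A {nb['relation']} vehicle is {nb['gap_m']:.1f} m away, "
            f"moving at {nb['speed_mps']:.1f} m/s."
        )
    # CoT reasoning and explanation requirements appended to the prompt,
    # as the paper does at inference time.
    lines.append(
        "Reason step by step about the surrounding traffic, then predict the "
        "ego vehicle's lane-change intention (left, right, or keep) and its "
        "future lateral and longitudinal positions, and explain your prediction."
    )
    return "\n".join(lines)
```

The resulting string would then be fed to the fine-tuned LLM; supervised fine-tuning pairs such prompts with ground-truth intention and trajectory labels.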
Future work includes extending the approach to urban driving scenarios, optimizing inference speed, and improving the model's ability to predict lane change intentions and trajectories for multiple vehicles simultaneously.
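Because the model emits predictions as free text alongside its reasoning and explanation, a downstream planner needs to recover the structured intention and trajectory values. A minimal parsing sketch, assuming a hypothetical response format (the real LC-LLM output format may differ):

```python
import re

def parse_lc_llm_output(text: str):
    """Extract the predicted intention and trajectory endpoint from a free-text
    LLM response. The expected phrasing ('Intention: ...', 'lateral = ... m')
    is an assumption for illustration."""
    intention = re.search(r"intention:\s*(left|right|keep)", text, re.I)
    traj = re.search(
        r"lateral\s*=\s*(-?\d+(?:\.\d+)?)\s*m.*?"
        r"longitudinal\s*=\s*(-?\d+(?:\.\d+)?)\s*m",
        text, re.I | re.S,
    )
    if not intention or not traj:
        return None  # fall back to a safe default if the response is malformed
    return {
        "intention": intention.group(1).lower(),
        "lateral_m": float(traj.group(1)),
        "longitudinal_m": float(traj.group(2)),
    }
```

Keeping the explanation as free text while parsing only the numeric fields preserves the interpretability benefit without sacrificing machine-readable outputs.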