Driving Everywhere with Large Language Model Policy Adaptation


10 Apr 2024 | Boyi Li, Yue Wang, Jiageng Mao, Boris Ivanovic, Sushant Veer, Karen Leung, Marco Pavone
This paper introduces LLaDA, a novel approach that enables both human drivers and autonomous vehicles (AVs) to adapt their driving behavior to local traffic rules in new environments. LLaDA leverages the zero-shot generalizability of large language models (LLMs) to interpret and follow the traffic rules in local driver handbooks. The system consists of three main components: (1) generating an executable policy, (2) extracting the relevant traffic rules from the local handbook with a Traffic Rule Extractor (TRE), and (3) adapting the original plan with a pre-trained LLM.

Evaluated on the nuScenes dataset, LLaDA shows significant improvements in motion planning under novel scenarios, and it outperforms baseline approaches on multiple metrics when adapting AV motion planning policies on real-world data. LLaDA can be applied to any autonomous driving stack to improve performance in new locations with different traffic rules. Extensive user studies and experiments show that it handles unexpected situations and provides accurate instructions to human drivers, making it particularly useful for tourists and for AVs operating in unfamiliar geographical areas. The system also supports multiple languages and has been tested on challenging scenarios such as heavy rain, pedestrian crossings, and varied driving conditions.

LLaDA has some limitations, including the need for an LLM in the control loop and sensitivity to the quality of scene descriptions. Future work includes improving GPT-4V's scene descriptions, developing an unexpected-scenario detector, and providing safety certifications for LLM outputs. Overall, LLaDA represents a significant advance in autonomous driving, enabling vehicles to adapt to new environments and traffic rules more effectively.
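To make the three-stage pipeline concrete, below is a minimal Python sketch of how a nominal plan, a local handbook, and an LLM could be wired together in the way the paper describes. All names (DrivingScene, TrafficRuleExtractor, nominal_planner, adapt_plan, llada_step) and the keyword-overlap retrieval are illustrative placeholders and assumptions, not the authors' released code or API.

```python
# Minimal sketch of the LLaDA pipeline described above. All names and the
# retrieval heuristic are illustrative placeholders, not the authors' code.

from dataclasses import dataclass


@dataclass
class DrivingScene:
    description: str   # e.g. a GPT-4V caption of the current scene
    country: str       # locale whose driver handbook applies


class TrafficRuleExtractor:
    """Retrieves the handbook rules relevant to the current scene (the TRE step)."""

    def __init__(self, handbook_paragraphs: list[str]):
        self.handbook = handbook_paragraphs

    def extract(self, scene: DrivingScene, top_k: int = 3) -> list[str]:
        # Placeholder relevance filter: rank paragraphs by word overlap with the
        # scene description. A real system would use an LLM or embedding retrieval.
        scene_words = set(scene.description.lower().split())
        ranked = sorted(
            self.handbook,
            key=lambda p: len(scene_words & set(p.lower().split())),
            reverse=True,
        )
        return ranked[:top_k]


def nominal_planner(scene: DrivingScene) -> str:
    """Stage 1: the existing policy's plan, produced without local-rule knowledge."""
    return "Proceed straight through the intersection at 40 km/h."


def adapt_plan(plan: str, rules: list[str], llm) -> str:
    """Stage 3: ask a pre-trained LLM to revise the plan so it obeys local rules."""
    prompt = (
        f"Original driving plan: {plan}\n"
        "Local traffic rules:\n- " + "\n- ".join(rules) + "\n"
        "Rewrite the plan so that it complies with these rules."
    )
    return llm(prompt)  # any chat-completion callable


def llada_step(scene: DrivingScene, handbook: list[str], llm) -> str:
    plan = nominal_planner(scene)                           # (1) executable policy
    rules = TrafficRuleExtractor(handbook).extract(scene)   # (2) TRE
    return adapt_plan(plan, rules, llm)                     # (3) LLM adaptation
```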