Grounding LLMs For Robot Task Planning Using Closed-loop State Feedback

15 Aug 2024 | Vineet Bhat, Ali Umut Kaypak, Prashanth Krishnamurthy, Ramesh Karri, Farshad Khorrami
This paper introduces BrainBody-LLM, an algorithm for robotic task planning that uses two large language models (LLMs): one for high-level planning and one for low-level control. The Brain-LLM decomposes a task into a high-level plan, while the Body-LLM translates that plan into executable actions. A closed-loop feedback mechanism lets the system learn from simulator errors and revise its plans accordingly. BrainBody-LLM achieves a 29% improvement in task-oriented success rate over competitive baselines in the VirtualHome simulation environment, and it also demonstrates effectiveness on real-world robotic tasks using the Franka Research 3 robotic arm.

The algorithm uses in-context learning examples to ground the LLMs in the environment, improving their ability to generate accurate, executable plans. BrainBody-LLM outperforms other methods in success rate and goal-condition recall, and it is designed to be adaptable to a variety of robotic environments, using feedback from simulator errors and human input to refine plans and improve task execution. The study highlights the potential of LLMs in robotic task planning, showing that they can learn from errors and adapt to real-world constraints. The results indicate that BrainBody-LLM is a robust and effective approach to robotic task planning, with room for further improvement as LLMs become more capable.
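The two-LLM architecture with closed-loop feedback can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `brain_llm`, `body_llm`, and `simulator_execute` functions are hypothetical stubs standing in for real model queries and the VirtualHome simulator, and the re-planning rule (dropping the failing step) is a deliberately simple placeholder for the paper's feedback-driven re-planning.

```python
# Hedged sketch of a BrainBody-LLM-style control loop.
# All three components below are stubs; in the actual system the Brain-LLM
# and Body-LLM are prompted language models and execution runs in a simulator.

def brain_llm(task, feedback=None):
    """Stub Brain-LLM: decompose a task into high-level steps.
    On feedback, re-plan; here we simply drop the failing step (illustrative)."""
    steps = ["walk to kitchen", "grab mug", "place mug on table"]
    if feedback:
        steps = [s for s in steps if s != feedback["failed_step"]]
    return steps

def body_llm(step):
    """Stub Body-LLM: translate a high-level step into an executable action."""
    words = step.split()
    return {"action": words[0], "args": words[1:]}

def simulator_execute(action):
    """Stub simulator: fail on 'grab' to exercise the feedback path."""
    if action["action"] == "grab":
        return False, "object not reachable"
    return True, ""

def brainbody_llm(task, max_retries=3):
    """Closed loop: plan, execute, and re-plan on simulator errors."""
    feedback = None
    for _ in range(max_retries):
        plan = brain_llm(task, feedback)          # high-level planning
        for step in plan:
            action = body_llm(step)               # low-level translation
            ok, err = simulator_execute(action)   # execute in simulator
            if not ok:
                feedback = {"failed_step": step, "error": err}
                break                             # re-plan with feedback
        else:
            return plan                           # all steps succeeded
    return None                                   # gave up after retries
```

The key design point mirrored here is the separation of concerns: the planner never emits raw actions, the controller never re-plans, and errors flow back only through the structured feedback passed to the planner.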