Understanding the planning of LLM agents: A survey

5 Feb 2024 | Xu Huang, Weiwu Liu, Xiaolong Chen, Xingmei Wang, Hao Wang, Defu Lian, Yasheng Wang, Ruiming Tang, Enhong Chen
This survey provides a comprehensive overview of LLM-based agent planning, categorizing existing methods into five directions: Task Decomposition, Plan Selection, External Module, Reflection and Refinement, and Memory-Augmented Planning, and analyzing each direction's approach, strengths, and challenges. Task Decomposition breaks a complex task into sub-tasks; Plan Selection generates multiple candidate plans and selects the best one; External Module methods integrate external planners to enhance planning efficiency; Reflection and Refinement improves plans through iterative self-correction based on feedback; and Memory-Augmented Planning stores and retrieves past experience to inform new plans. The survey discusses the advantages and limitations of each method and highlights open challenges, including hallucinations, the feasibility of generated plans, efficiency, multi-modal feedback, and fine-grained evaluation. It also evaluates representative methods on four benchmarks, showing that performance improves as more computational resources are spent on planning. Future directions include integrating multi-modal models, enhancing memory capabilities, and developing more realistic evaluation environments.
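
To make the five directions concrete, the following is a minimal illustrative Python sketch, not an implementation from the survey: `call_llm`, `execute`, and the prompt strings are hypothetical placeholders, each direction is reduced to its simplest form (sampling plus self-rating stands in for plan selection), and the External Module direction is omitted.

```python
# Illustrative sketch only: a planning loop combining task decomposition,
# plan selection, reflection/refinement, and a simple memory.
# `call_llm` and `execute` are hypothetical stand-ins for a chat-model API
# and an environment/tool step; they are not APIs defined by the survey.
from typing import Callable, List

def _score(text: str) -> float:
    """Parse a numeric self-rating from model output; default to 0 if unparseable."""
    try:
        return float(text.strip())
    except ValueError:
        return 0.0

def plan_and_act(task: str,
                 call_llm: Callable[[str], str],
                 execute: Callable[[str], str],
                 n_candidates: int = 3,
                 max_refinements: int = 2) -> List[str]:
    memory: List[str] = []  # memory-augmented planning: keep past steps and feedback

    # Task decomposition: ask the model to split the task into sub-tasks.
    subtasks = call_llm(f"Decompose into numbered sub-tasks: {task}").splitlines()

    final_plans: List[str] = []
    for sub in (s.strip() for s in subtasks):
        if not sub:
            continue

        # Plan selection: sample several candidate plans and keep the one the
        # model rates highest (a stand-in for voting or tree search).
        candidates = [call_llm(f"Plan for '{sub}' given memory {memory}")
                      for _ in range(n_candidates)]
        ratings = [_score(call_llm(f"Rate this plan from 0 to 10: {c}")) for c in candidates]
        plan = candidates[ratings.index(max(ratings))]

        # Reflection and refinement: execute, collect feedback, and revise on failure.
        for _ in range(max_refinements):
            feedback = execute(plan)
            memory.append(f"{sub}: {feedback}")
            verdict = call_llm(f"Feedback was: {feedback}. Reply OK if the sub-task "
                               f"succeeded, otherwise reply with a revised plan.")
            if verdict.strip().upper().startswith("OK"):
                break
            plan = verdict  # adopt the revised plan and try again

        final_plans.append(plan)
    return final_plans
```

In a real agent, `call_llm` would wrap a chat-model API and `execute` a tool or environment step; an External Module variant would instead translate each sub-task into a formal planning language and hand it to a dedicated solver rather than asking the LLM to produce the plan directly.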