Understanding the planning of LLM agents: A survey

5 Feb 2024 | Xu Huang, Weiwen Liu, Xiaolong Chen, Xingmei Wang, Hao Wang, Defu Lian, Yasheng Wang, Ruiming Tang, Enhong Chen
This survey provides a comprehensive overview of the planning capabilities of Large Language Models (LLMs) in autonomous agents, categorizing existing works into five main directions: Task Decomposition, Multi-Plan Selection, External Planner-Aided Planning, Reflection and Refinement, and Memory-Augmented Planning. Each direction is analyzed in detail, discussing its motivations, methods, and limitations. The survey also evaluates several representative methods on four benchmarks, highlighting the performance improvements that come with increased computational resources. Despite these advancements, challenges such as hallucinations, the feasibility of generated plans, efficiency, and multi-modal environment feedback remain significant issues. Future research directions include addressing these challenges and developing more realistic evaluation environments.
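To make the taxonomy more concrete, the sketch below illustrates how two of the surveyed directions, Task Decomposition and Reflection and Refinement, might fit together in a single agent loop. This is a minimal illustration only, not code from the survey or from any surveyed system; the function names (`call_llm`, `decompose`, `execute`, `reflect`, `run_agent`) are hypothetical stand-ins for whatever LLM API and tools an agent actually uses.

```python
# Minimal, hypothetical sketch of an LLM-agent planning loop combining
# task decomposition with reflection/refinement. Not the survey's code.

def call_llm(prompt: str) -> str:
    # Stand-in for any chat-completion API; plug in a real provider here.
    raise NotImplementedError("connect an LLM backend")

def decompose(task: str) -> list[str]:
    """Task decomposition: ask the model to split the task into sub-goals."""
    reply = call_llm(f"Break this task into numbered sub-goals:\n{task}")
    return [line.strip() for line in reply.splitlines() if line.strip()]

def execute(step: str) -> str:
    """Execute one sub-goal (here just another LLM call; real agents invoke tools)."""
    return call_llm(f"Carry out this step and report the result:\n{step}")

def reflect(step: str, result: str) -> str | None:
    """Reflection/refinement: critique the result; return a revised step or None if OK."""
    critique = call_llm(
        f"Step: {step}\nResult: {result}\n"
        "If the result is satisfactory, answer OK; otherwise describe a revised step."
    )
    return None if critique.strip().upper().startswith("OK") else critique

def run_agent(task: str, max_retries: int = 2) -> list[str]:
    results = []
    for step in decompose(task):
        result = execute(step)
        for _ in range(max_retries):
            revision = reflect(step, result)
            if revision is None:
                break
            result = execute(revision)  # refine and retry the sub-goal
        results.append(result)
    return results
```

The other directions discussed in the survey slot into the same loop: Multi-Plan Selection would sample several decompositions and pick one, External Planner-Aided Planning would replace `decompose` with a symbolic planner, and Memory-Augmented Planning would condition `decompose` and `reflect` on retrieved past experiences.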