Demystifying Chains, Trees, and Graphs of Thoughts

8 Feb 2025 | Maciej Besta¹, Florim Memedi¹, Zhenyu Zhang¹, Robert Gerstenberger¹, Guangyuan Piao², Nils Blach¹, Piotr Nyczki³, Marcin Copik¹, Grzegorz Kwaśniewski¹, Jürgen Müller⁴, Lukas Gianinazzi¹, Ales Kubicek¹, Hubert Niewiadomski³, Aidan O’Mahony², Onur Mutlu¹, Torsten Hoefler¹
This article surveys the evolution of reasoning topologies used in prompting large language models (LLMs) and introduces a general blueprint for designing effective and efficient LLM reasoning schemes built on structures such as chains, trees, and graphs. By analyzing existing prompting schemes, the authors clarify core concepts and construct a taxonomy of structure-enhanced LLM reasoning. They show that reasoning topologies can be modeled as graphs, and that the choice of topology strongly influences performance, cost, and efficiency. The article compares different prompting schemes with respect to their design choices, performance, and cost, outlines the field's theoretical underpinnings and open challenges, and emphasizes the need for further research. It also examines how chains, trees, and graphs of thoughts apply to tasks such as logical reasoning, planning, and creative writing, and concludes by discussing the effectiveness of existing prompting schemes and the potential for future improvements in LLM reasoning through the integration of more advanced structures and techniques.
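To make the topology distinction concrete, below is a minimal, illustrative sketch (not taken from the article) of how chain-style and tree-style reasoning can be modeled as graphs over "thought" nodes. The expand and score functions are hypothetical placeholders standing in for LLM calls; a graph-of-thoughts scheme would additionally allow merging or aggregating thoughts from different branches.

```python
# Illustrative sketch only: reasoning topologies as graphs over thought nodes.
# `expand` and `score` are hypothetical placeholders for LLM calls.

from dataclasses import dataclass, field


@dataclass
class Thought:
    text: str
    children: list["Thought"] = field(default_factory=list)


def expand(thought: Thought, k: int) -> list[Thought]:
    # Placeholder for an LLM call that proposes k follow-up thoughts.
    return [Thought(f"{thought.text} -> step {i}") for i in range(k)]


def score(thought: Thought) -> float:
    # Placeholder for an LLM- or heuristic-based evaluation of a thought.
    return float(len(thought.text))


def chain_of_thought(root: Thought, depth: int) -> Thought:
    # Chain: every thought has exactly one successor, i.e. a path graph.
    node = root
    for _ in range(depth):
        node.children = expand(node, k=1)
        node = node.children[0]
    return node


def tree_of_thoughts(root: Thought, depth: int, branch: int, beam: int) -> Thought:
    # Tree: each thought branches into several candidates; keep the best `beam`
    # candidates at every level (a simple beam search over the tree).
    frontier = [root]
    for _ in range(depth):
        candidates = []
        for node in frontier:
            node.children = expand(node, k=branch)
            candidates.extend(node.children)
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return max(frontier, key=score)


if __name__ == "__main__":
    print(chain_of_thought(Thought("task"), depth=3).text)
    print(tree_of_thoughts(Thought("task"), depth=2, branch=3, beam=2).text)
```

The trade-off the article analyzes shows up directly in this sketch: a chain issues one LLM call per step, while a tree issues branch times beam calls per level, buying broader exploration at higher cost.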