8 Feb 2025 | Maciej Besta¹, Florim Memedi¹, Zhenyu Zhang¹, Robert Gerstenberger¹, Guangyuan Piao², Nils Blach¹, Piotr Nyczki³, Marcin Copik¹, Grzegorz Kwaśniewski¹, Jürgen Müller⁴, Lukas Gianinazzi¹, Ales Kubicek¹, Hubert Niewiadomski³, Aidan O’Mahony², Onur Mutlu¹, Torsten Hoefler¹
This paper explores the evolution and application of reasoning topologies in large language models (LLMs), focusing on chain-of-thought (CoT), tree-of-thoughts (ToT), and graph-of-thoughts (GoT) structures. The authors analyze how these structures enhance LLM reasoning by guiding the model through intermediate steps, enabling more accurate and efficient task solving. They propose a general blueprint for effective LLM reasoning schemes, including a taxonomy of structure-enhanced reasoning topologies.
The study clarifies key concepts such as reasoning topologies, how they are represented, and the algorithms used to execute them. It discusses the performance and cost implications of different prompting schemes and outlines theoretical foundations and open research challenges. Using the proposed taxonomy, the paper compares existing prompting methods, highlighting how design choices affect performance and cost. It also identifies fundamental use cases of reasoning topologies, such as structuring in-context examples and solution steps, and discusses how these can be represented and processed. The authors conclude that integrating reasoning topologies with other parts of the LLM ecosystem, such as knowledge bases and external tools, can significantly improve LLM performance, and they offer a framework for future prompt engineering and LLM reasoning research.
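The distinction between the three topology classes can be made concrete by viewing each thought as a node and each derivation step as an edge. The sketch below is a minimal illustration, not code from the paper: the node names and the `has_merge` helper are hypothetical. A chain (CoT) is a single path, a tree (ToT) allows branching but every thought has at most one parent, and a graph (GoT) additionally allows branches to merge, i.e. some thought has more than one parent.

```python
# Hypothetical sketch: a reasoning topology as an adjacency list mapping
# each thought (node) to the thoughts derived from it. Names are illustrative.

cot = {"t1": ["t2"], "t2": ["t3"], "t3": []}                       # chain: one path
tot = {"t1": ["t2a", "t2b"], "t2a": [], "t2b": ["t3"], "t3": []}   # tree: branching
got = {"t1": ["t2a", "t2b"], "t2a": ["t3"], "t2b": ["t3"], "t3": []}  # graph: branches merge

def in_degrees(topology):
    """Count how many parents each thought has."""
    deg = {node: 0 for node in topology}
    for successors in topology.values():
        for s in successors:
            deg[s] += 1
    return deg

def has_merge(topology):
    """GoT-style topologies allow aggregating branches:
    some thought has more than one parent."""
    return any(d > 1 for d in in_degrees(topology).values())
```

Under this view, `has_merge` is false for the chain and the tree but true for the graph, which captures the key structural capability GoT adds over ToT: aggregating partial solutions from independent branches.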