3 Dec 2023 | Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan
The paper introduces a new framework called "Tree of Thoughts" (ToT) for language model (LM) inference, which generalizes the popular "Chain of Thought" (CoT) approach. ToT enables LMs to perform deliberate decision-making by considering multiple reasoning paths and self-evaluating choices, allowing for exploration and strategic lookahead. The framework is designed to address the limitations of current LM inference methods, which are often confined to token-level, left-to-right decision-making processes. ToT allows LMs to generate and evaluate coherent units of text ("thoughts") that serve as intermediate steps in problem-solving, enabling more sophisticated and flexible problem-solving abilities.
The authors conduct experiments on three novel tasks—Game of 24, Creative Writing, and Mini Crosswords—showing that ToT significantly enhances the problem-solving capabilities of LMs. For instance, while GPT-4 with CoT prompting only solved 4% of Game of 24 tasks, ToT achieved a success rate of 74%. The paper also discusses the benefits of ToT, including generality, modularity, adaptability, and convenience, and provides a detailed explanation of the ToT framework, including thought decomposition, generation, evaluation, and search algorithms.
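The search described above can be made concrete. Below is a minimal sketch of a breadth-first ToT search in the spirit of the paper: at each step the LM proposes candidate "thoughts" to extend each partial solution, self-evaluates the resulting states, and keeps only the best few. The `propose_thoughts` and `score_state` functions here are hypothetical toy stand-ins for actual LM calls (the example grows a digit string toward a target sum), not the authors' prompts.

```python
from typing import Callable, List


def tot_bfs(
    initial_state: str,
    propose_thoughts: Callable[[str], List[str]],  # LM stand-in: propose next thoughts
    score_state: Callable[[str], float],           # LM stand-in: self-evaluate a state
    steps: int,
    beam_width: int,
) -> str:
    """Breadth-first search over thoughts, keeping the top-b states per step."""
    frontier = [initial_state]
    for _ in range(steps):
        # Expand every state in the frontier with every proposed thought.
        candidates = [s + "\n" + t for s in frontier for t in propose_thoughts(s)]
        # Rank candidates by the evaluator and keep the best beam_width states.
        candidates.sort(key=score_state, reverse=True)
        frontier = candidates[:beam_width]
    return frontier[0]


# Toy stand-ins: each "thought" appends one digit; states closer to a
# digit sum of 24 score higher (a loose nod to the Game of 24 task).
def propose_thoughts(state: str) -> List[str]:
    return [str(d) for d in range(10)]


def score_state(state: str) -> float:
    digit_sum = sum(int(c) for c in state if c.isdigit())
    return -abs(24 - digit_sum)


best = tot_bfs("", propose_thoughts, score_state, steps=4, beam_width=3)
```

In the paper's setting the proposer and evaluator are both prompts to the same LM; swapping BFS for DFS with backtracking gives the variant the authors use on Mini Crosswords.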
The authors conclude that ToT provides a way to translate classical insights about problem-solving into actionable methods for contemporary LMs, enhancing their ability to solve complex problems that are not easily formalized. They also discuss potential limitations and future directions, emphasizing the need for better search and planning abilities in LMs for real-world applications.