Beyond Chain-of-Thought: A Survey of Chain-of-X Paradigms for LLMs

24 Apr 2024 | Yu Xia, Rui Wang, Xu Liu, Mingyan Li, Tong Yu, Xiang Chen, Julian McAuley, Shuai Li
This paper presents a comprehensive survey of Chain-of-X (CoX) methods for Large Language Models (LLMs), extending beyond the original Chain-of-Thought (CoT) prompting approach. CoT has been widely used to enhance LLM reasoning by breaking complex problems into intermediate steps. Inspired by this, various CoX methods have been developed, each introducing a different type of node to the chain structure: intermediates, augmentation, feedback, or models. The survey categorizes these methods by node type and by the tasks they are applied to, offering insights into their effectiveness and potential applications.

Several families of CoX methods are discussed: Chain-of-Intermediates, which decomposes problems into manageable subtasks; Chain-of-Augmentation, which incorporates additional knowledge to enhance reasoning; Chain-of-Feedback, which refines outputs through iterative feedback; and Chain-of-Models, which leverages multiple models to improve performance. These methods have been applied to a wide range of tasks, including multi-modal interaction, factuality and safety, multi-step reasoning, instruction following, LLMs as agents, and evaluation tools. The survey also highlights potential future directions for CoX methods, including causal analysis of intermediate steps, reducing inference costs, knowledge distillation, and end-to-end fine-tuning. It concludes that CoX methods offer a promising framework for enhancing LLM capabilities and open new avenues for research in this area.
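To make the chain structure concrete, the iterative refinement described for Chain-of-Feedback can be sketched as a simple loop over generate, critique, and refine steps. This is a minimal illustration, not an implementation from the survey; the `generate`, `critique`, and `refine` functions are hypothetical stand-ins for calls to an LLM.

```python
# Minimal sketch of a Chain-of-Feedback loop. All three functions below are
# hypothetical stand-ins for LLM calls; a real system would query a model.

def generate(prompt: str) -> str:
    """Produce an initial draft answer (stubbed here)."""
    return f"draft answer to: {prompt}"

def critique(answer: str) -> str:
    """Return feedback on the answer; empty string means it is acceptable."""
    return "" if answer.startswith("revised") else "needs more detail"

def refine(answer: str, feedback: str) -> str:
    """Revise the answer according to the feedback."""
    return f"revised ({feedback}): {answer}"

def chain_of_feedback(prompt: str, max_rounds: int = 3) -> str:
    """Refine an output through a chain of feedback nodes."""
    answer = generate(prompt)
    for _ in range(max_rounds):
        feedback = critique(answer)
        if not feedback:  # no remaining feedback: the chain terminates
            break
        answer = refine(answer, feedback)
    return answer
```

The same skeleton generalizes to the other paradigms: swapping the feedback node for a retrieval step yields a Chain-of-Augmentation pattern, and routing each round to a different model yields a Chain-of-Models pattern.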