SELF-CONSISTENCY IMPROVES CHAIN OF THOUGHT REASONING IN LANGUAGE MODELS

7 Mar 2023 | Xuezhi Wang†‡, Jason Wei†, Dale Schuurmans†, Quoc Le†, Ed H. Chi†, Sharan Narang†, Aakanksha Chowdhery†, Denny Zhou‡§
The paper introduces a new decoding strategy called *self-consistency* to enhance chain-of-thought reasoning in language models. Self-consistency replaces the naive greedy decoding used in chain-of-thought prompting: it samples a diverse set of reasoning paths and then selects the most consistent answer by marginalizing out the sampled paths. The approach leverages the intuition that a complex reasoning problem typically admits multiple valid paths to its correct answer. Extensive empirical evaluations show that self-consistency improves chain-of-thought prompting by a striking margin on arithmetic and commonsense reasoning benchmarks, including GSM8K (+17.9%), SVAMP (+11.0%), AQuA (+12.2%), StrategyQA (+6.4%), and ARC-challenge (+3.9%). The method is entirely unsupervised, works off-the-shelf with pre-trained language models, and requires no additional human annotation, training, or fine-tuning. Self-consistency is also robust to the choice of sampling strategy and to imperfect prompts, making it a reliable and broadly applicable way to improve the reasoning capabilities of language models.
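
The core procedure is simple enough to sketch in a few lines. The Python below is a minimal illustration under stated assumptions, not the authors' implementation: `sample_reasoning_path` is a hypothetical stand-in for drawing one chain-of-thought completion from a language model with temperature (or top-k / nucleus) sampling, and the `extract_answer` helper's "The answer is X" convention is an assumed prompt format.

```python
import re
from collections import Counter

def extract_answer(reasoning: str) -> str | None:
    """Pull the final answer out of one sampled reasoning path.
    The 'answer is X' pattern is an illustrative convention,
    not something the paper mandates."""
    match = re.search(r"answer is\s*(-?\d+(?:\.\d+)?)", reasoning, re.IGNORECASE)
    return match.group(1) if match else None

def self_consistency(sample_reasoning_path, prompt: str, num_samples: int = 40) -> str | None:
    """Sample diverse reasoning paths, then majority-vote on the final answer.

    `sample_reasoning_path(prompt)` is a hypothetical callable that returns one
    chain-of-thought completion drawn with temperature sampling rather than
    greedy decoding. The reasoning paths themselves are discarded (marginalized
    out); only their final answers are aggregated.
    """
    answers = [extract_answer(sample_reasoning_path(prompt)) for _ in range(num_samples)]
    answers = [a for a in answers if a is not None]
    if not answers:
        return None
    # The most frequent final answer across sampled paths wins.
    return Counter(answers).most_common(1)[0][0]
```

Greedy decoding corresponds to committing to a single path; self-consistency instead spends extra inference-time compute on sampling and lets agreement among independent paths filter out errors, with no change to the model itself. A quick sanity check with a stub sampler:

```python
import random

def stub_sampler(prompt: str) -> str:
    # Simulates a model whose sampled paths usually, but not always, agree.
    return random.choice([
        "3 + 5 = 8. The answer is 8.",
        "Adding 3 and 5 gives 8, so the answer is 8.",
        "3 * 5 = 15, so the answer is 15.",  # an occasional faulty path
    ])

print(self_consistency(stub_sampler, "What is 3 + 5?"))  # usually prints '8'
```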