LEAST-TO-MOST PROMPTING ENABLES COMPLEX REASONING IN LARGE LANGUAGE MODELS

16 Apr 2023 | Denny Zhou†*, Nathanael Schärli†, Le Hou†, Jason Wei†, Nathan Scales†, Xuezhi Wang†, Dale Schuurmans†, Claire Cui†, Olivier Bousquet†, Quoc Le†, Ed Chi†
The paper introduces a novel prompting strategy called *least-to-most prompting* to enable large language models to solve complex problems that are harder than those seen in the prompts. This approach involves decomposing a complex problem into a series of simpler subproblems and solving them sequentially, where the answer to each subproblem is facilitated by the answers to previously solved subproblems. The method is evaluated on tasks such as symbolic manipulation, compositional generalization, and math reasoning. Experimental results show that least-to-most prompting significantly outperforms standard and chain-of-thought prompting, achieving high accuracy on challenging tasks like the SCAN benchmark and long-list concatenation tasks. The approach does not require additional training or fine-tuning and demonstrates the potential for more efficient and effective learning through bidirectional interactions with language models.
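
As a rough illustration of the two-stage procedure described above, the following Python sketch wires together a decomposition stage and a sequential solving stage. The `call_llm` function, the exemplar strings, and the one-subproblem-per-line parsing are assumptions made for illustration; they are not the paper's exact prompt formats. Both stages operate purely through few-shot prompting of a frozen model, with no training or fine-tuning.

```python
from typing import Callable

# Placeholder few-shot exemplars; the paper uses task-specific prompts.
DECOMPOSE_EXEMPLARS = "<few-shot examples of breaking a problem into subproblems>"
SOLVE_EXEMPLARS = "<few-shot examples of answering a single subproblem>"


def least_to_most(problem: str, call_llm: Callable[[str], str]) -> str:
    """Two-stage least-to-most prompting (sketch).

    `call_llm` is any text-completion function (prompt -> completion);
    it is an assumption of this sketch, not an API defined by the paper.
    """
    # Stage 1: problem decomposition. Ask the model to list simpler
    # subproblems, assumed here to come back one per line.
    decomposition = call_llm(
        f"{DECOMPOSE_EXEMPLARS}\n\nQ: {problem}\n"
        "A: To solve this, we need to first answer:"
    )
    subproblems = [line.strip() for line in decomposition.splitlines() if line.strip()]
    subproblems.append(problem)  # finish by answering the original question itself

    # Stage 2: sequential subproblem solving. Each subproblem is answered with
    # the previously solved subproblems and their answers appended to the prompt,
    # so later answers can build on earlier ones.
    context = f"{SOLVE_EXEMPLARS}\n\n{problem}"
    answer = ""
    for sub in subproblems:
        answer = call_llm(f"{context}\n\nQ: {sub}\nA:").strip()
        context += f"\n\nQ: {sub}\nA: {answer}"

    return answer  # the answer to the final subproblem is the final answer
```

Because each subproblem's prompt contains the already-solved subproblems and their answers, the model can compose partial results, which is what lets it generalize to problems harder than the few-shot exemplars.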