Zero-Shot Chain-of-Thought Reasoning Guided by Evolutionary Algorithms in Large Language Models

8 Feb 2024 | Feihu Jin, Yifan Liu, Ying Tan
This paper introduces a novel zero-shot Chain-of-Thought (CoT) prompting method for large language models (LLMs) that leverages evolutionary algorithms to dynamically generate diverse prompts. The proposed method, named zero-shot EoT prompting, initializes two CoT prompts, applies evolutionary operations (crossover and mutation) via LLMs to produce a varied candidate set, and selects the CoT prompt best suited to a given problem. A rewriting operation guided by the selected prompt then deepens the LLM's understanding of the problem. Extensive experiments on ten reasoning datasets spanning arithmetic, commonsense, and symbolic reasoning show that zero-shot EoT prompting outperforms existing zero-shot CoT prompting methods on GPT-3.5-turbo and GPT-4. The gains are most pronounced on arithmetic and symbolic reasoning tasks, and on arithmetic reasoning the method achieves results comparable to few-shot CoT prompting. The paper also includes ablation studies and analyses validating the contribution of each component of the proposed method.
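To make the pipeline concrete, here is a minimal Python sketch of the EoT loop as described above. Everything in it is an assumption for illustration: the `llm` function is a placeholder for a chat-completion call (e.g., to GPT-3.5-turbo), and the meta-prompt wordings, population sizes, and LLM-based selection scheme are plausible stand-ins, not the paper's exact implementation.

```python
# Hedged sketch of zero-shot EoT prompting: seed prompts -> evolutionary
# expansion (crossover + mutation via the LLM) -> per-question selection ->
# rewriting -> final zero-shot CoT query.

import random

def llm(prompt: str) -> str:
    """Placeholder for an LLM API call; wire this to a real client."""
    raise NotImplementedError("Replace with your LLM provider's completion call.")

# Step 1: initialize two seed CoT prompts (common zero-shot CoT triggers).
SEED_PROMPTS = [
    "Let's think step by step.",
    "Let's work this out step by step to be sure we have the right answer.",
]

def crossover(p1: str, p2: str) -> str:
    # Ask the LLM to combine two parent prompts into a child prompt.
    return llm(
        "Combine the following two instructions into one new instruction:\n"
        f"1. {p1}\n2. {p2}\nNew instruction:"
    )

def mutate(p: str) -> str:
    # Ask the LLM for a semantically similar variant of a prompt.
    return llm(f"Rewrite the following instruction, keeping its meaning:\n{p}\nRewritten instruction:")

def evolve(population, generations=2, offspring_per_gen=4):
    # Step 2: grow a diverse candidate set via crossover and mutation.
    for _ in range(generations):
        children = []
        for _ in range(offspring_per_gen):
            p1, p2 = random.sample(population, 2)
            children.append(mutate(crossover(p1, p2)))
        population = population + children
    return population

def select_prompt(question: str, candidates: list[str]) -> str:
    # Step 3: let the LLM pick the candidate best suited to this question
    # (one plausible selection scheme; the paper's criterion may differ).
    listing = "\n".join(f"{i}. {c}" for i, c in enumerate(candidates))
    idx = llm(
        f"Question: {question}\n"
        "Which instruction below best guides solving it? Answer with its number only.\n"
        f"{listing}"
    )
    return candidates[int(idx.strip())]

def answer(question: str) -> str:
    prompts = evolve(list(SEED_PROMPTS))
    best = select_prompt(question, prompts)
    # Step 4: rewriting operation guided by the selected CoT prompt.
    rewritten = llm(f"{best}\nRewrite the problem to make it clearer:\n{question}")
    # Final zero-shot CoT query on the rewritten problem.
    return llm(f"Q: {rewritten}\nA: {best}")
```

The key design point the sketch captures is that the LLM itself performs the evolutionary operators, so crossover and mutation act on natural-language prompts rather than on a numeric genome, and selection happens per question instead of once globally.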