Zero-Shot Chain-of-Thought Reasoning Guided by Evolutionary Algorithms in Large Language Models

8 Feb 2024 | Feihu Jin, Yifan Liu, Ying Tan
This paper introduces a novel zero-shot prompting method, zero-shot EoT prompting, which leverages evolutionary algorithms to generate diverse chain-of-thought (CoT) prompts for large language models (LLMs). The method initializes two CoT prompts, applies crossover and mutation operations to produce a varied population of prompts, and then uses the LLM itself to select the most suitable CoT prompt for a given problem. Finally, a rewriting operation is applied to the selected prompt to deepen the LLM's understanding of the problem.

Evaluated across ten reasoning datasets, the approach outperforms existing zero-shot CoT prompting methods on GPT-3.5-turbo and GPT-4, and performs comparably to few-shot CoT prompting, particularly on arithmetic and symbolic reasoning tasks. These results indicate that the method is effective and adaptable across a range of reasoning tasks. The paper also presents extensive analytical experiments examining the contribution of each component of the method and the factors that influence EoT prompting. The findings suggest that using evolutionary algorithms to generate prompts can significantly enhance the reasoning capabilities of LLMs.
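The pipeline described above (initialize a small set of CoT prompts, evolve them via crossover and mutation, then select the best candidate for a given question) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the paper performs crossover, mutation, selection, and rewriting with the LLM itself, whereas here simple word-level operators and a pluggable `score_fn` callback stand in for those LLM calls. All function names are hypothetical.

```python
import random

random.seed(0)  # deterministic for illustration

# Two seed chain-of-thought (CoT) prompts, mirroring the paper's
# initialization step (the exact seed prompts here are assumptions).
SEED_PROMPTS = [
    "Let's think step by step.",
    "Let's work this out in a step by step way to be sure we have the right answer.",
]

def crossover(a: str, b: str) -> str:
    """Word-level one-point crossover between two parent prompts."""
    wa, wb = a.split(), b.split()
    if len(wa) < 2 or len(wb) < 2:
        return a  # too short to cut; fall back to a parent
    cut_a = random.randint(1, len(wa) - 1)
    cut_b = random.randint(1, len(wb) - 1)
    return " ".join(wa[:cut_a] + wb[cut_b:])

def mutate(prompt: str, rate: float = 0.2) -> str:
    """Randomly drop words to introduce variation (a crude stand-in
    for the paper's LLM-driven mutation operator)."""
    words = [w for w in prompt.split() if random.random() > rate]
    return " ".join(words) if words else prompt

def evolve(population: list[str], generations: int = 3, pop_size: int = 6) -> list[str]:
    """Grow a diverse pool of candidate CoT prompts over several generations."""
    for _ in range(generations):
        children = []
        while len(children) < pop_size:
            a, b = random.sample(population, 2)
            children.append(mutate(crossover(a, b)))
        population = population + children
    return population

def select_best(question: str, candidates: list[str], score_fn) -> str:
    """In the paper, the LLM picks the most suitable prompt for the
    question; here an arbitrary scoring callback stands in for that."""
    return max(candidates, key=lambda p: score_fn(question, p))
```

In practice `score_fn` would query the LLM to judge prompt suitability, and a further LLM-based rewriting step would refine the selected prompt before it is prepended to the question.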