February 2024 | Robert Tjarko Lange, Yingtao Tian, Yujin Tang
This paper presents EvoLLM, an evolution strategy that uses large language models (LLMs) for black-box optimization (BBO). The authors propose a prompting strategy that lets LLMs act as recombination operators in an evolutionary algorithm: candidate solutions are sorted by performance, the search space is discretized, and the LLM is queried to propose an improved mean for the search distribution. The resulting LLM-based evolution strategy, EvoLLM, outperforms baselines such as random search and Gaussian Hill Climbing on synthetic BBOB functions and small neuroevolution tasks.

The study shows that EvoLLM applies robustly across a variety of BBO tasks, and that its performance improves further when the base LLM is fine-tuned on BBO trajectories generated by teacher algorithms. The paper also investigates how different prompt strategies, discretization resolutions, and context lengths affect EvoLLM's performance. The results demonstrate that EvoLLM can perform BBO on a wide range of tasks, including classic control tasks and neuroevolution problems, and they highlight LLMs as a viable option for large-scale autonomous optimization, leveraging their ability to process and generate text-based information.
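To make the described loop concrete, here is a minimal sketch of the sort-discretize-query cycle. The discretization scheme, prompt format, and `mock_llm_query` placeholder are illustrative assumptions, not the paper's implementation: EvoLLM queries an actual LLM with its own prompt format, whereas the mock below simply returns the best row so the loop runs end to end.

```python
import numpy as np

def discretize(x, low, high, res):
    # Map continuous coordinates to integer tokens in [0, res - 1]
    # (hypothetical scheme; the paper's exact encoding may differ).
    z = np.clip((x - low) / (high - low), 0.0, 1.0)
    return np.round(z * (res - 1)).astype(int)

def undiscretize(tokens, low, high, res):
    # Inverse mapping from integer tokens back to the search space.
    return low + tokens / (res - 1) * (high - low)

def build_prompt(pop, fitness, low, high, res):
    # Sort worst-to-best (minimization) so the best pattern appears last,
    # then render each solution as a comma-separated token string.
    order = np.argsort(fitness)[::-1]
    rows = [",".join(map(str, discretize(pop[i], low, high, res)))
            for i in order]
    return "\n".join(rows)

def mock_llm_query(prompt):
    # Stand-in for the LLM call: return the last (best) row verbatim.
    # A real LLM would instead extrapolate the improvement trend
    # to propose a new mean beyond the best observed solution.
    return prompt.split("\n")[-1]

def evollm_step(mean, sigma, fn, pop_size=8, rng=None,
                low=-5.0, high=5.0, res=100):
    # One ask-evaluate-query iteration of the sketched strategy.
    rng = np.random.default_rng(0) if rng is None else rng
    pop = mean + sigma * rng.standard_normal((pop_size, mean.size))
    fitness = np.array([fn(x) for x in pop])
    prompt = build_prompt(pop, fitness, low, high, res)
    reply = mock_llm_query(prompt)
    tokens = np.array([int(t) for t in reply.split(",")])
    return undiscretize(tokens, low, high, res)

# Usage: minimize the sphere function from a distant starting mean.
sphere = lambda x: float(np.sum(x ** 2))
mean = np.full(3, 3.0)
rng = np.random.default_rng(42)
for _ in range(20):
    mean = evollm_step(mean, 0.5, sphere, rng=rng)
```

The `pop_size`, `res`, and number of rows in the prompt correspond loosely to the population size, discretization resolution, and context length that the paper ablates.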