13 Jan 2024 | Erik Hemberg*, Stephen Moskal and Una-May O'Reilly
This paper presents LLM_GP, a novel evolutionary algorithm that uses Large Language Models (LLMs) to evolve code. Unlike traditional Genetic Programming (GP), LLM_GP delegates its evolutionary operators to LLMs, relying on prompting and the LLM's pre-trained pattern-matching and sequence-completion capabilities. The algorithm uses LLMs to initialize populations, select parents, vary solutions through crossover and mutation, and replace old populations with new ones; it also includes an operator to designate the run's solution. The paper describes the design of LLM-based operators, prompt functions, and LLM-oriented preparatory steps, and it provides an implementation and demonstration of a simple LLM_GP variant, along with a discussion of the challenges and risks of using LLMs for genetic programming. The paper highlights the potential of LLMs in code generation, neural architecture search, game design, and prompt generation; it also discusses the differences between LLM_GP and traditional GP, the risks of using LLMs to evolve code, and the need for further research in this area. The paper concludes with a summary of its contributions and suggestions for future research directions.
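The evolutionary loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual implementation: the `query_llm` function is a hypothetical stand-in for a real LLM call, and the prompt strings are placeholders for the prompt functions the paper designs.

```python
import random

def query_llm(prompt):
    # Hypothetical stand-in for a real LLM call. Here it returns a random
    # "program" string so the loop is runnable end to end; in LLM_GP each
    # prompt would be built by a dedicated prompt function.
    return "solution_" + str(random.randint(0, 999))

def llm_gp(pop_size=4, generations=3):
    # Initialization: prompt the LLM for an initial population of candidates.
    population = [query_llm("Write a candidate program.") for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the LLM is prompted to pick promising parents.
        parents = query_llm("Select two parents from: " + "; ".join(population))
        # Variation: crossover and mutation are also delegated to the LLM.
        child = query_llm("Combine and mutate these parents: " + parents)
        # Replacement: form the next population from old and new candidates.
        population = population[1:] + [child]
    # Solution designation: a final prompt picks the run's solution.
    return query_llm("Choose the best program from: " + "; ".join(population))

best = llm_gp()
```

In a real run, `query_llm` would call an actual model and the fitness-relevant information would be carried in the prompts; the sketch only shows how each traditional GP operator is replaced by a prompted LLM call.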