Evolutionary Computation in the Era of Large Language Model: Survey and Roadmap


January 15, 2024 | Xingyu Wu, Sheng-hao Wu*, Member, IEEE, Jibin Wu*, Liang Feng*, Senior Member, IEEE, Kay Chen Tan, Fellow, IEEE
The paper "Evolutionary Computation in the Era of Large Language Models: Survey and Roadmap" by Xingyu Wu, Sheng-hao Wu, Jibin Wu, Liang Feng, and Kay Chen Tan provides a comprehensive review and forward-looking roadmap for the integration of evolutionary algorithms (EAs) and large language models (LLMs). The authors highlight the complementary strengths of EAs and LLMs in solving complex problems, particularly in optimization and generation tasks. EAs can enhance LLMs by providing flexible global search capabilities, while LLMs can offer domain knowledge and text processing capabilities to improve EA performance. The paper categorizes the reciprocal inspiration into two main avenues: LLM-enhanced EA and EA-enhanced LLM, and introduces integrated synergy methods for various applications, including code generation, software engineering, neural architecture search, and text generation. The authors identify challenges and future directions, aiming to unlock the full potential of this innovative collaboration in advancing optimization and artificial intelligence. The paper also includes a GitHub repository for relevant papers: https://github.com/wuxingyu-ai/LLM4EC.The paper "Evolutionary Computation in the Era of Large Language Models: Survey and Roadmap" by Xingyu Wu, Sheng-hao Wu, Jibin Wu, Liang Feng, and Kay Chen Tan provides a comprehensive review and forward-looking roadmap for the integration of evolutionary algorithms (EAs) and large language models (LLMs). The authors highlight the complementary strengths of EAs and LLMs in solving complex problems, particularly in optimization and generation tasks. EAs can enhance LLMs by providing flexible global search capabilities, while LLMs can offer domain knowledge and text processing capabilities to improve EA performance. The paper categorizes the reciprocal inspiration into two main avenues: LLM-enhanced EA and EA-enhanced LLM, and introduces integrated synergy methods for various applications, including code generation, software engineering, neural architecture search, and text generation. The authors identify challenges and future directions, aiming to unlock the full potential of this innovative collaboration in advancing optimization and artificial intelligence. The paper also includes a GitHub repository for relevant papers: https://github.com/wuxingyu-ai/LLM4EC.