29 Jun 2024 | Chao Wang, Jiaxuan Zhao, Licheng Jiao, Lingling Li, Fang Liu, Shuyuan Yang
This paper explores the parallels between large language models (LLMs) and evolutionary algorithms (EAs), highlighting their common characteristics such as token representation, position encoding, Transformer blocks, and model training. The authors analyze existing interdisciplinary research, focusing on evolutionary fine-tuning and LLM-enhanced EAs, and present future directions for advancing the integration of LLMs and EAs. Key challenges include managing selective pressures, integrating optimization experiences, and enhancing the generative capabilities of LLMs. The paper also discusses the potential of LLMs in evolutionary prompt tuning and self-tuning, emphasizing their cost-effectiveness and flexibility in language spaces. The integration of LLMs and EAs is seen as a promising approach to develop advanced artificial agents capable of learning from established knowledge while continuously exploring new knowledge.
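To make the LLM-enhanced EA idea concrete, here is a minimal sketch of an evolutionary loop over candidate prompts in which a language model plays the role of the variation (crossover/mutation) operator and selection keeps the fittest prompts. This is an illustration under stated assumptions, not the paper's method: `llm_propose` and `fitness` are hypothetical placeholders for an actual LLM call and a task-specific evaluation.

```python
import random

def llm_propose(parent_a: str, parent_b: str) -> str:
    """Hypothetical stand-in for an LLM call that recombines and mutates two
    parent prompts; a real system would query a chat model here."""
    words = parent_a.split() + parent_b.split()
    random.shuffle(words)
    return " ".join(words[: max(len(parent_a.split()), len(parent_b.split()))])

def fitness(prompt: str) -> float:
    """Hypothetical task score (e.g. validation accuracy of the prompt).
    Toy objective: prefer prompts of roughly twelve words."""
    return -abs(len(prompt.split()) - 12)

def evolve_prompts(seed_prompts, generations=10, population_size=8):
    population = list(seed_prompts)
    for _ in range(generations):
        # Variation: the LLM generates offspring from sampled parent pairs.
        offspring = [
            llm_propose(*random.sample(population, 2))
            for _ in range(population_size)
        ]
        # Selection: keep the best individuals, mirroring selective pressure in EAs.
        population = sorted(population + offspring, key=fitness, reverse=True)[
            :population_size
        ]
    return population[0]

if __name__ == "__main__":
    seeds = ["Summarize the article briefly", "Explain the text in simple words"]
    print(evolve_prompts(seeds))
```

The design choice mirrors the analogy drawn in the paper: the LLM supplies the generative step (producing new candidates in language space), while the surrounding EA loop supplies selection and population management.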