MAY 2024 | Chuanneng Sun, Student Member, IEEE, Songjun Huang, Student Member, IEEE, and Dario Pompili, Fellow, IEEE
This paper provides a comprehensive survey of Large Language Models (LLMs) in the context of Multi-Agent Reinforcement Learning (MARL). It highlights the challenges and potential of integrating LLMs into MARL systems, particularly in cooperative tasks and communication among agents. The authors discuss the current state of LLM-based MARL frameworks, including their ability to facilitate inter-agent communication and coordination. They also explore the integration of human-in-the-loop scenarios and the co-design of traditional MARL policies with LLMs. The paper identifies several open research problems, such as personality-enabled cooperation, language-enabled human-in/on-the-loop frameworks, traditional MARL and LLM co-design, and safety and security in MARL systems. The authors conclude by emphasizing the significant potential of LLMs in advancing multi-agent intelligence and call for further research to push the boundaries of this field.