23 May 2024 | Xudong Guo, Kaixuan Huang, Jiale Liu, Wenhui Fan, Natalia Vélez, Qingyun Wu, Huazheng Wang, Thomas L. Griffiths, Mengdi Wang
This paper presents a framework for embodied large language model (LLM) agents to learn and cooperate in organized teams. The study addresses the tendency of LLM agents to over-report and to comply with any instruction, which can lead to information redundancy and confusion in multi-agent cooperation. Inspired by human organizations, the framework introduces prompt-based organization structures to improve team efficiency and reduce communication costs. Through experiments with embodied LLM agents and human-agent collaboration, the study highlights the impact of designated leadership on team performance and the spontaneous cooperative behaviors of LLM agents.
The research explores two key questions: (1) What role do organizational structures play in multi-LLM-agent systems? (2) How can these structures be optimized for efficient multi-agent coordination? The framework leverages AutoGen, a multi-agent conversation framework, to study how best to organize embodied LLM agents for communication and collaboration in physical or simulated environments. The framework allows LLM agents to be flexibly prompted and organized into various team structures, facilitating versatile inter-agent communication.
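As a rough illustration of how such prompt-based team structures can be expressed on top of AutoGen's group-chat API, the sketch below designates one agent as leader and two as workers purely through system prompts. The agent names, prompt wording, task, and model configuration are illustrative assumptions, not the paper's actual prompts or environment.

```python
# Minimal sketch: organizing LLM agents into a leader-led team with AutoGen.
# Agent names, prompts, and the task are illustrative only.
from autogen import AssistantAgent, GroupChat, GroupChatManager

llm_config = {"config_list": [{"model": "gpt-4", "api_key": "..."}]}

# Organizational structure is induced purely through system prompts:
# one agent is designated leader, the others report to it.
leader = AssistantAgent(
    name="Leader",
    system_message=(
        "You are the team leader. Assign subtasks to Worker1 and Worker2, "
        "collect their observations, and avoid redundant communication."
    ),
    llm_config=llm_config,
)

workers = [
    AssistantAgent(
        name=f"Worker{i}",
        system_message=(
            f"You are Worker{i}, an embodied agent. Follow the Leader's "
            "instructions and report only new, task-relevant observations."
        ),
        llm_config=llm_config,
    )
    for i in (1, 2)
]

# GroupChat routes messages among the team; the manager selects the next speaker.
group_chat = GroupChat(agents=[leader, *workers], messages=[], max_round=12)
manager = GroupChatManager(groupchat=group_chat, llm_config=llm_config)

leader.initiate_chat(
    manager,
    message="Search the rooms and move all boxes to the target area.",
)
```

Changing the team structure (for example, a flat team with no leader, or an elected leader) amounts to rewriting these system prompts rather than changing any code.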
The study introduces a Criticize-Reflect framework based on LLMs to improve organizational prompts. This framework uses a dual LLM architecture to reflect on team performance and generate improved organizational prompts. Through this iterative process, LLM agents spontaneously form novel, effective team structures, leading to reduced communication costs and improved efficiency.
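The summary above does not spell out the implementation, but the iterative loop can be sketched roughly as follows: one LLM critiques a completed team episode, and a second LLM rewrites the organizational prompts in light of that critique. The prompt wording, the metrics passed in, and the `run_team_episode` helper are hypothetical placeholders, not the paper's implementation.

```python
# Rough sketch of a Criticize-Reflect style loop, assuming one LLM critiques
# a team episode and a second LLM rewrites the organizational prompts.
from openai import OpenAI

client = OpenAI()


def ask(system: str, user: str) -> str:
    """Single-turn call to a chat model."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content


def run_team_episode(org_prompts: dict[str, str]):
    """Hypothetical stand-in: run one episode in the embodied environment
    and return (dialogue_log, efficiency_metrics)."""
    raise NotImplementedError("environment rollout not shown here")


def criticize_reflect(org_prompts: dict[str, str], num_iters: int = 3) -> dict[str, str]:
    for _ in range(num_iters):
        # 1. Run the team with the current organizational prompts.
        log, metrics = run_team_episode(org_prompts)

        # 2. Critic LLM: diagnose inefficiencies (redundant reports, idle agents, ...).
        critique = ask(
            "You are a critic of multi-agent teamwork.",
            f"Team log:\n{log}\n\nMetrics: {metrics}\n"
            "Point out communication redundancy and coordination failures.",
        )

        # 3. Reflector LLM: rewrite the organizational prompts based on the critique.
        reflection = ask(
            "You improve organizational prompts for a team of LLM agents.",
            f"Current prompts: {org_prompts}\nCritique: {critique}\n"
            "Return improved prompts, one per line, as 'agent name: prompt'.",
        )
        org_prompts = {
            line.split(":", 1)[0].strip(): line.split(":", 1)[1].strip()
            for line in reflection.splitlines() if ":" in line
        }
    return org_prompts
```

Under this reading, the critic supplies the performance diagnosis and the reflector turns it into new organizational prompts, so the team structure itself is what gets optimized across iterations.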
The experiments demonstrate that hierarchical organization improves team efficiency, aligning with findings in human organization theory. The study also shows that LLM agents can elect their own leaders and adjust leadership dynamically. Human leaders were found to be more effective than AI agents at coordinating teams. The research also explores the emergence of cooperative behaviors in LLM agents, such as information sharing, leadership, and assistance, and how organizational prompts influence these behaviors.
The study concludes that a hierarchically organized team with a designated or elected leader achieves superior team efficiency, which can be further improved through the Criticize-Reflect framework. The research contributes a novel multi-LLM-agent architecture and a Criticize-Reflect framework for generating efficient organizational prompts. The findings highlight the potential of LLMs in multi-agent systems and the importance of organizational structures in enhancing team performance.