Improving Multi-Agent Debate with Sparse Communication Topology

17 Jun 2024 | Yunxuan Li, Yibing Du, Jiageng Zhang, Le Hou, Peter Grabowski, Yeqing Li, Eugene Ie
This paper studies improving multi-agent debate (MAD) through sparse communication topologies. It shows that sparse communication can significantly reduce computational cost while matching, and in some cases improving, performance on reasoning and alignment tasks. MAD is evaluated on text-only and multimodal reasoning tasks as well as alignment labeling tasks, demonstrating the approach's effectiveness and broad applicability.

In the MAD framework, multiple large language model (LLM) agents discuss a problem to generate and refine answers. In the standard setup, every agent communicates with all other agents, which incurs high computational cost. The study finds that a sparse communication topology, in which agents communicate only with their neighbors, achieves comparable or better performance at a substantially lower inference cost. For example, neighbor-connected MAD improves accuracy by +2% on the MATH dataset and matches accuracy on GSM8K, while reducing the average input-token cost on reasoning tasks by over 40%.

The study also extends MAD to alignment labeling tasks. On the Anthropic-HH dataset, sparse MAD improves helpfulness by +0.5% and harmlessness by +1.0%, while reducing costs by 50.0% and 53.3%, respectively.

The research further examines topology design when agents are backed by different LLMs, finding that assigning stronger LLMs to agents with higher centrality in the communication graph yields better performance. In a six-agent harmlessness labeling task, placing the stronger LLM at a node with higher centrality (degree 5) improves performance by +3.0% compared to placing it at a node with lower centrality (degree 1).

The study also provides insights into why sparse MAD works.
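As a rough illustration of why sparsity cuts input-token cost, the sketch below compares a neighbor-connected (ring) topology with a fully connected one for a single debate round. The `query_llm` callable is a hypothetical stand-in for an actual model call, not an API from the paper; message counts serve as a proxy for input cost.

```python
def ring_topology(n):
    """Neighbor-connected topology: agent i sees only agents i-1 and i+1."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def full_topology(n):
    """Fully connected topology: agent i sees all other agents."""
    return {i: [j for j in range(n) if j != i] for i in range(n)}

def debate_round(answers, topology, query_llm):
    """Each agent refines its answer using only its neighbors' answers."""
    return {
        agent: query_llm(answers[agent], [answers[j] for j in neighbors])
        for agent, neighbors in topology.items()
    }

# Cross-agent messages per round grow linearly for the ring topology
# but quadratically for the fully connected graph:
n = 6
ring_msgs = sum(len(v) for v in ring_topology(n).values())  # 12
full_msgs = sum(len(v) for v in full_topology(n).values())  # 30
```

With six agents, the ring exchanges 12 messages per round versus 30 for the fully connected graph, which is consistent with the reported ~40%+ reduction in average input-token cost.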
It finds that sparse communication allows more rounds of effective debate, supporting more extensive deliberation and in-depth discussion. In addition, because each agent receives fewer reference solutions, sparse MAD reduces the likelihood that agents are misled by incorrect answers, an effect that matters most when the number of reference solutions would otherwise be high. The results underscore the importance of communication topology design in multi-agent systems: sparse topologies can significantly improve the efficiency and effectiveness of the "society of minds" approach. The authors conclude that sparse MAD is a promising direction for improving multi-agent systems, with potential applications in a range of real-world scenarios.
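The centrality-based placement described above (stronger LLM at the highest-degree node) can be sketched as follows. The model names are placeholders, not the models used in the paper, and degree is used as a simple centrality measure; the star graph mirrors the six-agent example with one degree-5 node and five degree-1 nodes.

```python
def degree_centrality(edges, n):
    """Degree of each node in an undirected communication graph."""
    deg = {i: 0 for i in range(n)}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def assign_models(edges, n, strong="strong-llm", weak="weak-llm"):
    """Place the stronger LLM at the node with the highest degree."""
    deg = degree_centrality(edges, n)
    center = max(deg, key=deg.get)
    return {i: (strong if i == center else weak) for i in range(n)}

# Star graph over six agents: node 0 has degree 5, the rest degree 1.
star_edges = [(0, i) for i in range(1, 6)]
assignment = assign_models(star_edges, 6)
```

Under this assignment the stronger model occupies the degree-5 hub, the configuration the paper reports as +3.0% better than placing it at a degree-1 leaf.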