Chain of Agents: Large Language Models Collaborating on Long-Context Tasks

4 Jun 2024 | Yusen Zhang, Ruoxi Sun, Yanfei Chen, Tomas Pfister, Rui Zhang, Sercan Ö. Arik
The paper introduces Chain-of-Agents (CoA), a novel framework for addressing long-context tasks using multi-agent collaboration. CoA aims to overcome the limitations of input reduction and window extension methods by enabling information aggregation and context reasoning across multiple LLMs. The framework consists of worker agents that sequentially process different segments of the input text, followed by a manager agent that synthesizes the contributions into a coherent output. This approach mitigates the issue of focusing on irrelevant information and improves performance on various long-context tasks such as question answering, summarization, and code completion. Comprehensive experiments on nine datasets with six LLMs demonstrate that CoA achieves significant improvements over strong baselines, including RAG and Full-Context methods, by up to 10%. The paper also explores the benefits of multi-agent collaboration, including the ability to handle longer inputs and mitigate the "lost in the middle" phenomenon.
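The worker-then-manager flow described above can be illustrated with a minimal sketch. This is not the paper's implementation: `call_llm` is a hypothetical stand-in for any LLM API (stubbed here so the control flow runs without network access), and the chunking and prompt wording are illustrative assumptions.

```python
# Minimal sketch of a Chain-of-Agents-style pipeline: worker agents read
# segments in sequence, each passing its updated findings (a
# "communication unit") forward; a manager synthesizes the final answer.

def call_llm(prompt: str) -> str:
    # Hypothetical LLM call; replace with a real API client in practice.
    return f"summary({len(prompt)} chars)"

def split_into_chunks(text: str, chunk_size: int) -> list[str]:
    """Split the long input into worker-sized segments."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def chain_of_agents(long_text: str, query: str, chunk_size: int = 2000) -> str:
    """Run workers sequentially over chunks, then a manager over the result."""
    communication_unit = ""
    for chunk in split_into_chunks(long_text, chunk_size):
        worker_prompt = (
            f"Previous findings: {communication_unit}\n"
            f"Text segment: {chunk}\n"
            f"Question: {query}\n"
            "Update the findings with evidence from this segment."
        )
        # Each worker sees only its own segment plus the prior findings,
        # so no single model ever needs the full long context.
        communication_unit = call_llm(worker_prompt)
    manager_prompt = (
        f"Findings from workers: {communication_unit}\n"
        f"Question: {query}\n"
        "Write the final answer."
    )
    return call_llm(manager_prompt)
```

Because each worker's prompt holds one segment plus a bounded summary, the per-call context stays small regardless of total input length, which is how the sequential design sidesteps the context-window limit.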