Chain of Tools: Large Language Model is an Automatic Multi-tool Learner


26 May 2024 | Zhengliang Shi, Shen Gao, Xuyi Chen, Yue Feng, Lingyong Yan, Haibo Shi, Dawei Yin, Zhumin Chen, Suzan Verberne, Zhaochun Ren
The paper introduces the Automatic Tool Chain (ATC) framework, which enables large language models (LLMs) to act as multi-tool users by programmatically generating chains of tools to solve complex tasks. ATC addresses the limitations of existing tool learning methods, such as manually designed workflows and limited generalization to new tools. The framework lets LLMs learn input-output schemas and data-flow dependencies directly from tool protocols, so that they can generate and execute programs that chain multiple tools.

In addition, the paper proposes a black-box probing method that turns LLMs into active multi-tool learners, able to discover and document tool usage on their own. Extensive experiments on existing datasets and a new benchmark, ToolFlow, show that the framework outperforms the baselines, and that black-box probing extends the toolset by letting LLMs master new tools automatically. The paper concludes with a discussion of the impact of the iteration count in the attribution reflection mechanism, an efficiency analysis, and a case study illustrating the effectiveness of the framework.
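To make the chain-of-tools idea concrete, here is a minimal sketch of what a documented tool protocol, an LLM-generated program that chains two tools, and a black-box probe might look like. The tool names (search_flights, book_flight), their schemas, and the probe_tool helper are hypothetical illustrations for this summary, not the paper's actual toolset, ToolFlow tasks, or ATC implementation.

```python
# Illustrative sketch only: all tool names, schemas, and helpers are hypothetical.
import json
from typing import Any, Callable, Dict

# Toy "tool protocol": each tool documents its input and output schema,
# which is what the LLM reads to learn data-flow dependencies.
TOOL_PROTOCOLS: Dict[str, Dict[str, Any]] = {
    "search_flights": {
        "inputs": {"origin": "str", "destination": "str", "date": "str"},
        "outputs": {"flight_id": "str", "price_usd": "float"},
    },
    "book_flight": {
        "inputs": {"flight_id": "str", "passenger": "str"},
        "outputs": {"confirmation": "str"},
    },
}

# Toy backends standing in for real tool APIs.
def search_flights(origin: str, destination: str, date: str) -> Dict[str, Any]:
    return {"flight_id": f"{origin}-{destination}-001", "price_usd": 199.0}

def book_flight(flight_id: str, passenger: str) -> Dict[str, Any]:
    return {"confirmation": f"OK:{flight_id}:{passenger}"}

TOOLS: Dict[str, Callable[..., Dict[str, Any]]] = {
    "search_flights": search_flights,
    "book_flight": book_flight,
}

# A program of the kind an LLM might generate from the protocols above:
# it chains tools, feeding the output of one call into the next.
def generated_tool_chain(passenger: str) -> Dict[str, Any]:
    found = search_flights(origin="AMS", destination="QGD", date="2024-05-26")
    # Data-flow dependency: flight_id produced by search_flights feeds book_flight.
    return book_flight(flight_id=found["flight_id"], passenger=passenger)

# Black-box probing (sketch): call an undocumented tool with sample inputs
# and record the observed output fields as a draft usage document.
def probe_tool(name: str, sample_inputs: Dict[str, Any]) -> Dict[str, Any]:
    observed = TOOLS[name](**sample_inputs)
    return {"tool": name, "inputs": sample_inputs,
            "observed_output_keys": sorted(observed)}

if __name__ == "__main__":
    print(generated_tool_chain(passenger="A. Turing"))
    print(json.dumps(
        probe_tool("search_flights",
                   {"origin": "AMS", "destination": "QGD", "date": "2024-05-26"}),
        indent=2))
```

In this toy setup, the "generated" program is written by hand; in the paper's framework the LLM produces such a program from the tool protocols, and the probing step documents previously unseen tools so they can be added to the toolset.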
[slides and audio] Tool Learning in the Wild: Empowering Language Models as Automatic Tool Agents