Call Me When Necessary: LLMs can Efficiently and Faithfully Reason over Structured Environments

3 Jul 2024 | Sitao Cheng, Ziyuan Zhuang, Yong Xu, Fangkai Yang, Chaoyun Zhang, Xiaoting Qin, Xiang Huang, Ling Chen, Qingwei Lin, Dongmei Zhang, Saravan Rajmohan, Qi Zhang
This paper introduces Readi, a novel framework that enables Large Language Models (LLMs) to efficiently and faithfully reason over structured environments such as knowledge graphs and tables. Readi lets LLMs generate a reasoning path up front and edit it only when necessary, reducing the need for step-by-step interaction with the environment. The framework leverages the intrinsic planning ability of LLMs and incorporates dynamic feedback from the environment to refine the reasoning path.

The key idea of Readi is to generate an initial reasoning path and then instantiate it on the structured environment. If the instantiation gets stuck, the path is edited based on feedback. Readi collects reasoning logs as immediate feedback, including the position of stuck points, their associated relations, and partially instantiated paths; this dynamic guidance helps refine the reasoning path more effectively. The approach significantly reduces the number of LLM calls compared to previous methods while maintaining high accuracy.

The framework is evaluated on three knowledge graph question answering (KGQA) datasets and two table question answering (TableQA) datasets, demonstrating its effectiveness. Readi outperforms existing LLM-based methods and fine-tuned models in terms of both LLM calls and accuracy. For example, it achieves 67.0% Hit@1 on CWQ, 78.7% on WebQSP, and state-of-the-art results on MQA-1H, and it shows significant improvements over vanilla LLMs, with a 14.9% gain on CWQ. The paper also presents an extensive analysis of Readi's reasoning path generation and editing modules, showing that the framework achieves high performance with fewer LLM calls. The results demonstrate that Readi is effective for complex reasoning over large-scale structured environments and provides a practical solution for LLMs to interact with structured data.
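The plan-instantiate-edit loop described above can be sketched in a few lines of Python. This is a minimal illustration with a toy in-memory knowledge graph and mocked LLM calls; the function names, data layout, and repair heuristic are assumptions made for clarity, not the authors' actual implementation.

```python
# Minimal sketch of a Readi-style plan-instantiate-edit loop (illustrative only).
# Toy KG: subject -> relation -> list of objects.
KG = {
    "Inception": {"directed_by": ["Christopher Nolan"]},
    "Christopher Nolan": {"place_of_birth": ["London"]},
}

def generate_path(question):
    # In Readi, a single LLM call proposes an initial relation path.
    # Here we hard-code a deliberately wrong first guess to trigger editing.
    return ["director", "place_of_birth"]

def instantiate_on_kg(entities, path):
    """Greedily ground the path on the KG; return (answers, log). log is None on success."""
    frontier = list(entities)
    for i, rel in enumerate(path):
        next_frontier = [obj for e in frontier for obj in KG.get(e, {}).get(rel, [])]
        if not next_frontier:
            # Stuck: record where it failed, which relations are available, and the
            # partially instantiated path, as feedback for the editing call.
            candidates = sorted({r for e in frontier for r in KG.get(e, {})})
            return [], {"stuck_at": i, "candidate_relations": candidates,
                        "partial_instances": frontier}
        frontier = next_frontier
    return frontier, None

def edit_path(question, path, log):
    # In Readi, another LLM call repairs the path using the reasoning log.
    # Here we mock the repair by picking the first candidate relation at the stuck point.
    fixed = list(path)
    fixed[log["stuck_at"]] = log["candidate_relations"][0]
    return fixed

def readi_answer(question, topic_entities, max_edits=3):
    path = generate_path(question)                       # one up-front LLM call
    for _ in range(max_edits):
        answers, log = instantiate_on_kg(topic_entities, path)
        if log is None:                                   # instantiation succeeded
            return answers
        path = edit_path(question, path, log)             # extra LLM call only when stuck
    return []

print(readi_answer("Where was the director of Inception born?", ["Inception"]))
# -> ['London'] after one edit ("director" -> "directed_by")
```

The point of the sketch is the call pattern: the environment is traversed cheaply without the LLM, and the model is invoked again only when instantiation fails, which is why Readi needs far fewer LLM calls than step-by-step agent-style methods.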