Learning to Plan for Retrieval-Augmented Large Language Models from Knowledge Graphs


20 Jun 2024 | Junjie Wang, Mingyang Chen, Binbin Hu, Dan Yang, Ziqi Liu, Yue Shen, Peng Wei, Zhiqiang Zhang, Jinjie Gu, Jun Zhou, Jeff Z. Pan, Wen Zhang, Huajun Chen
This paper introduces Learning to Plan from Knowledge Graphs (LPKG), a framework that enhances the planning ability of large language models (LLMs) on complex question-answering (QA) tasks that involve retrieval. The framework grounds knowledge graph (KG) patterns to generate planning data, which is then used to fine-tune LLMs. The paper also presents a new benchmark, CLQA-Wiki, which covers a variety of complex logical QA questions and provides a more comprehensive and challenging evaluation for LLMs.

The LPKG framework consists of three main steps: (1) constructing planning data from KGs by grounding predefined patterns and verbalizing them into natural-language questions, (2) fine-tuning LLMs on this planning data to improve their planning capabilities, and (3) parsing and executing the generated plans to obtain the final answers. The sketches below illustrate steps (1) and (3).
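To make step (1) concrete, here is a minimal Python sketch of grounding a 2-hop KG pattern and verbalizing it into a question with a gold plan. The toy KG, the `REL_PHRASE` templates, and the `#k = Retrieve: ...` plan syntax are illustrative assumptions, not the paper's actual data or code.

```python
# Sketch of LPKG step (1): ground a 2-hop pattern over a toy KG and
# verbalize it into a natural-language question plus a gold plan.
# All names and templates here are illustrative assumptions.
from itertools import product

# Toy KG as (head, relation, tail) triples.
KG = [
    ("Inception", "directed_by", "Christopher Nolan"),
    ("Christopher Nolan", "born_in", "London"),
]

# Hypothetical verbalization templates for relations.
REL_PHRASE = {"directed_by": "director", "born_in": "birthplace"}

def ground_two_hop(kg):
    """Ground the 2-hop pattern (x, r1, y) AND (y, r2, z)."""
    for (h1, r1, t1), (h2, r2, t2) in product(kg, kg):
        if t1 == h2:
            yield h1, r1, r2, t2

def verbalize(h1, r1, r2, answer):
    """Turn a grounded pattern into a question and a step-by-step plan."""
    question = f"What is the {REL_PHRASE[r2]} of the {REL_PHRASE[r1]} of {h1}?"
    plan = [
        f"#1 = Retrieve: {REL_PHRASE[r1]} of {h1}",
        f"#2 = Retrieve: {REL_PHRASE[r2]} of #1",
    ]
    return {"question": question, "plan": plan, "answer": answer}

for grounding in ground_two_hop(KG):
    print(verbalize(*grounding))
```

Question/plan/answer records like these would then be formatted as instruction-tuning examples for step (2).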
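For step (3), the sketch below parses a generated plan and executes it step by step, substituting earlier answers into later sub-queries. The `retrieve()` stub stands in for whatever retrieval backend is used, and the plan syntax is the same assumed one as above; the paper's plans also cover comparison, intersection, and union operations, which would be executed analogously.

```python
# Sketch of LPKG step (3): parse a plan of "#k = Retrieve: ..." steps and
# execute it, resolving "#k" placeholders with earlier results.
import re

def retrieve(query: str) -> str:
    """Stub retriever; in practice this calls a search / RAG backend."""
    toy_index = {
        "director of Inception": "Christopher Nolan",
        "birthplace of Christopher Nolan": "London",
    }
    return toy_index.get(query, "UNKNOWN")

def execute_plan(plan_text: str) -> str:
    results = {}
    last_step = None
    for line in plan_text.strip().splitlines():
        m = re.match(r"#(\d+) = Retrieve: (.+)", line.strip())
        if not m:
            continue
        step_id, query = m.groups()
        # Substitute answers of earlier steps for "#k" placeholders.
        for ref, value in results.items():
            query = query.replace(f"#{ref}", value)
        results[step_id] = retrieve(query)
        last_step = step_id
    return results[last_step]  # answer of the final step

plan = """
#1 = Retrieve: director of Inception
#2 = Retrieve: birthplace of #1
"""
print(execute_plan(plan))  # -> London
```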
The framework is evaluated on multiple datasets, including the newly proposed CLQA-Wiki benchmark, and shows significant improvements over existing methods: LPKG outperforms baseline methods on both conventional complex QA datasets and on CLQA-Wiki. The paper also compares KG-sourced planning data with data produced by traditional distillation, showing that the KG-based data is more effective at improving LLMs' planning abilities. The framework is designed to be scalable and handles a wide range of complex logical questions, including multi-hop, comparison, intersection, and union types.

The study highlights the value of KGs as a source of planning data for LLMs: they provide accurate, structured supervision that strengthens the models' ability to reason and plan. The proposed framework and benchmark contribute to the field of complex QA by offering a more effective and comprehensive approach to improving LLM performance.