Graph Neural Network Enhanced Retrieval for Question Answering of LLMs

3 Jun 2024 | Zijian Li, Qingyan Guo, Jiawei Shao, Lei Song, Jiang Bian, Jun Zhang, Rui Wang
This paper proposes GNN-Ret and RGNN-Ret, two novel methods for enhancing retrieval in large language model (LLM) question answering. GNN-Ret leverages graph neural networks (GNNs) to capture the relatedness between passages, improving retrieval by considering structural and keyword-related connections. It constructs a graph of passages and uses GNNs to integrate semantic distances between related passages, enhancing retrieval coverage. RGNN-Ret extends this approach to handle multi-hop reasoning questions by using a recurrent graph neural network (RGNN) to integrate graphs of passages across retrieval steps, improving retrieval for complex questions.
Experiments on benchmark datasets show that GNN-Ret achieves higher accuracy than strong baselines with a single query, and RGNN-Ret further improves accuracy, achieving state-of-the-art performance on the 2WikiMQA dataset with up to a 10.4% accuracy improvement. The methods address the challenge of information asymmetry in complex questions by considering the relatedness between passages, leading to more accurate retrieval and better performance in multi-hop reasoning tasks. The results demonstrate the effectiveness of leveraging passage relatedness to enhance retrieval for LLMs in answering complex questions.
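To make the core idea concrete, here is a minimal, hypothetical sketch of graph-enhanced retrieval: query-passage semantic distances are smoothed over a passage graph, so a passage linked to a strong match is pulled up the ranking even if its own similarity to the query is weak. The function names, the simple neighbor-averaging update, and the toy vectors are illustrative assumptions, not the paper's actual GNN architecture or API.

```python
# Hypothetical sketch: smooth query-passage distances over a passage graph.
# A real GNN-Ret-style system would use learned GNN layers; here a single
# hand-written averaging step stands in for message passing.
import math

def cosine_distance(u, v):
    """1 - cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def smooth_distances(distances, adjacency, alpha=0.5, steps=1):
    """Mix each passage's distance with the mean distance of its graph
    neighbors, mimicking how a GNN layer integrates semantic distances
    of related passages."""
    d = list(distances)
    for _ in range(steps):
        new_d = []
        for i, di in enumerate(d):
            neigh = [d[j] for j in adjacency[i]]
            avg = sum(neigh) / len(neigh) if neigh else di
            new_d.append(alpha * di + (1 - alpha) * avg)
        d = new_d
    return d

def retrieve(query_vec, passage_vecs, adjacency, k=2, alpha=0.5):
    """Return indices of the k passages with the smallest smoothed distance."""
    d = [cosine_distance(query_vec, p) for p in passage_vecs]
    d = smooth_distances(d, adjacency, alpha=alpha)
    return sorted(range(len(d)), key=lambda i: d[i])[:k]
```

In a toy run with three passage embeddings where passage 1 is semantically far from the query but linked in the graph to passage 0 (a strong match), the smoothing step lifts passage 1 above the unlinked passage 2, illustrating the improved retrieval coverage the paper attributes to passage relatedness.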