Don't Forget to Connect! Improving RAG with Graph-based Reranking


28 May 2024 | Jialin Dong, Bahare Fatemi, Bryan Perozzi, Lin F. Yang, Anton Tsitsulin
This paper introduces G-RAG, a graph-based reranker for Retrieval Augmented Generation (RAG) that improves the performance of large language models (LLMs) in open-domain question answering (ODQA). G-RAG leverages graph neural networks (GNNs) to model connections between retrieved documents and to incorporate semantic information from Abstract Meaning Representation (AMR) graphs. By combining document-level connections with these semantic features, the method produces a more accurate reranking of retrieved documents, and it outperforms state-of-the-art approaches while requiring fewer computational resources.

The paper also evaluates PaLM 2 as a reranker and finds that it significantly underperforms G-RAG, underscoring the importance of dedicated reranking in RAG even when advanced LLMs are available. The proposed method uses document graphs to identify relevant documents by analyzing the connections between them, which leads to improved ranking performance. The paper further introduces new metrics for evaluating ranking scenarios, including those with tied scores, and demonstrates G-RAG's effectiveness in improving RAG performance.

The results show that G-RAG achieves competitive performance across various metrics and outperforms other methods, including those based on pre-trained language models. The study also compares different embedding models and finds that Ember performs well within the proposed framework. Overall, the paper emphasizes the importance of reranking in RAG and the potential of graph-based methods for improving document retrieval and ranking.
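To make the core idea concrete, here is a minimal sketch of graph-based reranking in the spirit of G-RAG. All data, features, and function names are illustrative assumptions, not the paper's implementation: the actual G-RAG derives document-graph edges from shared AMR concepts and scores documents with a trained GNN over learned embeddings, whereas this toy uses hand-made concept sets and a single untrained mean-aggregation message-passing step.

```python
# Toy illustration of graph-based document reranking (not the paper's code).

def build_document_graph(doc_concepts):
    """Connect two documents iff they share at least one (AMR-style) concept."""
    n = len(doc_concepts)
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if doc_concepts[i] & doc_concepts[j]:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def propagate(features, adj):
    """One round of mean-aggregation message passing (an untrained GNN layer)."""
    new = []
    for i, f in enumerate(features):
        neigh = adj[i]
        if not neigh:
            new.append(f)          # isolated document keeps its own features
            continue
        agg = [sum(features[j][d] for j in neigh) / len(neigh)
               for d in range(len(f))]
        # Average self features with aggregated neighbour features.
        new.append([(a + b) / 2 for a, b in zip(f, agg)])
    return new

def rerank(query_vec, features, adj):
    """Score each document against the query after graph propagation."""
    feats = propagate(features, adj)
    scores = [sum(q * x for q, x in zip(query_vec, f)) for f in feats]
    return sorted(range(len(feats)), key=lambda i: scores[i], reverse=True)

# Example: three retrieved documents; the first two share a concept,
# so information flows between them during propagation.
concepts = [{"einstein", "relativity"}, {"relativity", "physics"}, {"cooking"}]
adj = build_document_graph(concepts)
order = rerank([1.0, 0.0], [[0.9, 0.1], [0.7, 0.3], [0.1, 0.9]], adj)
print(order)  # the off-topic third document is ranked last
```

The point of the sketch is the pipeline shape: connections between documents (here, shared concepts standing in for AMR overlap) let related documents reinforce each other's scores before the final ranking, which is how a connected but lower-scoring document can be promoted over an isolated one.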