27 May 2024 | Xiaoxin He, Yijun Tian, Yifei Sun, Nitesh V. Chawla, Thomas Laurent, Yann LeCun, Xavier Bresson, Bryan Hooi
The paper introduces G-Retriever, a novel framework for enabling users to interact with textual graphs through a conversational interface. G-Retriever combines large language models (LLMs) and graph neural networks (GNNs) to enhance graph understanding and question answering. The key contributions include:
1. **GraphQA Benchmark**: Development of a comprehensive benchmark for graph question answering, tailored to real-world applications such as scene graph understanding, common sense reasoning, and knowledge graph reasoning.
2. **G-Retriever Architecture**: A flexible question-answering framework that integrates GNNs, LLMs, and retrieval-augmented generation (RAG). It addresses hallucination issues in graph LLMs by performing RAG over the graph using a Prize-Collecting Steiner Tree optimization problem.
3. **Efficiency and Scalability**: G-Retriever scales well with larger graphs by selectively retrieving relevant parts of the graph, reducing the number of tokens and nodes required, and speeding up training.
4. **Hallucination Mitigation**: Empirical evaluations show that G-Retriever significantly reduces hallucination, as evidenced by improved accuracy in referencing nodes and edges in graph-based contexts.
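The retrieval step described above can be illustrated with a simplified sketch: each node receives a "prize" (its relevance to the query, e.g. embedding similarity), and a connected subgraph is grown while the marginal prize outweighs an edge cost. Note that this greedy expansion is only a rough stand-in for the actual Prize-Collecting Steiner Tree optimization used in the paper; the function name, cost model, and toy embeddings are all illustrative assumptions.

```python
import numpy as np

def retrieve_subgraph(node_emb, edges, query_emb, edge_cost=0.5, max_nodes=4):
    """Toy prize-collecting retrieval (NOT the paper's exact PCST solver).

    Assigns each node a prize = cosine similarity to the query, seeds the
    subgraph with the highest-prize node, then greedily adds the best
    neighboring node while its prize minus the edge cost stays positive.
    Returns the set of selected node indices (always connected).
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    prizes = {i: cos(e, query_emb) for i, e in enumerate(node_emb)}

    # Build an undirected adjacency map.
    adj = {i: set() for i in range(len(node_emb))}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    selected = {max(prizes, key=prizes.get)}  # seed: most relevant node
    while len(selected) < max_nodes:
        frontier = {n for s in selected for n in adj[s]} - selected
        if not frontier:
            break
        best = max(frontier, key=lambda n: prizes[n])
        if prizes[best] - edge_cost <= 0:  # marginal gain no longer worth it
            break
        selected.add(best)
    return selected
```

For example, with a query embedding of `[1, 0]` and nodes embedded at `[1, 0]`, `[0.9, 0.1]`, `[0, 1]`, `[-1, 0]` on a path graph, only the first two (query-relevant) nodes are retrieved, which is exactly the token-and-node reduction behavior the paper attributes to its retrieval step.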
The paper also includes detailed experimental results, demonstrating G-Retriever's effectiveness across multiple datasets and configurations.
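Since G-Retriever ultimately feeds the retrieved subgraph to an LLM alongside the question, the interface between the two can be sketched as a textualization step. The format below is a hypothetical serialization for illustration only; the paper's actual graph-to-text encoding and its soft GNN prompt may differ.

```python
def graph_to_prompt(nodes, edges, question):
    """Serialize a retrieved (sub)graph into a flat text prompt for an LLM.

    nodes: dict mapping node id -> node text attribute
    edges: list of (source_id, target_id, relation_text) triples
    question: the user's natural-language question

    Grounding answers in an explicit node/edge listing like this is one
    plausible way to let the model reference real graph elements rather
    than hallucinate them.
    """
    node_lines = "\n".join(f"node {i}: {text}" for i, text in nodes.items())
    edge_lines = "\n".join(f"{u} -> {v}: {rel}" for u, v, rel in edges)
    return (
        "Graph nodes:\n" + node_lines + "\n"
        "Graph edges:\n" + edge_lines + "\n"
        "Question: " + question + "\nAnswer:"
    )
```

A usage example: `graph_to_prompt({0: "dog", 1: "ball"}, [(0, 1, "chases")], "What does the dog chase?")` yields a prompt listing both nodes, the `chases` edge, and the question, ready to be passed to the LLM.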