GNN-RAG is a novel method that combines the language understanding abilities of large language models (LLMs) with the reasoning abilities of graph neural networks (GNNs) in a retrieval-augmented generation (RAG) style. The method first uses a GNN to reason over a dense subgraph of a knowledge graph (KG) to retrieve answer candidates for a given question. It then extracts the shortest paths in the KG that connect the question entities to the GNN-predicted answers; these paths represent useful KG reasoning chains. The paths are verbalized and given as input for LLM reasoning with RAG. In this design, the GNN acts as a dense subgraph reasoner that extracts useful graph information, while the LLM leverages its natural language processing ability to perform the final KGQA step. In addition, a retrieval augmentation (RA) technique is developed to further boost KGQA performance with GNN-RAG. Experimental results show that GNN-RAG achieves state-of-the-art performance on two widely used KGQA benchmarks (WebQSP and CWQ), outperforming or matching GPT-4 with a 7B tuned LLM. GNN-RAG excels on multi-hop and multi-entity questions, outperforming competing approaches by 8.9–15.5 percentage points in answer F1. The code and KGQA results are available at https://github.com/cmavro/GNN-RAG.
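To make the retrieval step concrete, below is a minimal sketch of shortest-path extraction and verbalization between question entities and GNN-predicted answers. This is not the authors' released code: it assumes the KG is stored as a networkx DiGraph whose edges carry a "relation" attribute, and the helper names retrieve_reasoning_paths and verbalize are hypothetical.

```python
import networkx as nx

def retrieve_reasoning_paths(kg, question_entities, gnn_answers):
    """Extract shortest KG paths connecting each question entity to each
    GNN-predicted answer candidate (hypothetical helper, illustrating the
    retrieval step described in the abstract)."""
    paths = []
    for q in question_entities:
        for a in gnn_answers:
            try:
                # Shortest path over an undirected view of the KG, so that
                # paths may traverse relations in either direction.
                node_path = nx.shortest_path(
                    kg.to_undirected(as_view=True), q, a
                )
            except (nx.NetworkXNoPath, nx.NodeNotFound):
                continue
            paths.append(node_path)
    return paths

def verbalize(kg, node_path):
    """Turn a node path into a textual reasoning path that can be fed to
    an LLM, e.g. 'Jamaica -> language_spoken -> English'."""
    parts = [str(node_path[0])]
    for u, v in zip(node_path, node_path[1:]):
        # Assumes each edge has a 'relation' attribute; check both
        # directions because the path was found on an undirected view.
        data = kg.get_edge_data(u, v) or kg.get_edge_data(v, u)
        rel = data.get("relation", "related_to") if data else "related_to"
        parts += [f"-> {rel} ->", str(v)]
    return " ".join(parts)

# Usage on a toy KG:
kg = nx.DiGraph()
kg.add_edge("Jamaica", "English", relation="language_spoken")
for p in retrieve_reasoning_paths(kg, ["Jamaica"], ["English"]):
    print(verbalize(kg, p))  # Jamaica -> language_spoken -> English
```

The verbalized paths would then be concatenated with the question into the LLM prompt for RAG-style answering.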