Can GNN be Good Adapter for LLMs?

May 13–17, 2024, Singapore | Xuanwen Huang, Kaiqiao Han, Yang Yang, Dezheng Bao, Quanjin Tao, Ziwei Chai, Qi Zhu
This paper explores the use of Graph Neural Networks (GNNs) as adapters for Large Language Models (LLMs) to model Text-Attributed Graphs (TAGs). TAGs are graphs whose nodes carry textual features, and capturing the correlation between textual and structural information is crucial for modeling them effectively. The proposed method, GraphAdapter, uses a GNN as a lightweight adapter that injects graph structure information into an LLM. Key contributions include:

1. **Efficiency**: The GNN adapter introduces only a few trainable parameters and can be trained at low computational cost.
2. **Language-aware graph pre-training**: The pre-training process uses language to supervise the modeling of graph structure, enhancing the LLM's ability to understand both textual and structural information.
3. **Convenient tuning**: Once pre-trained, GraphAdapter can be fine-tuned for various downstream tasks.

Experiments on multiple real-world TAGs, including social and citation networks, show that GraphAdapter achieves an average improvement of roughly 5% on node classification compared to state-of-the-art methods. GraphAdapter can also be applied to other language models, such as RoBERTa and GPT-2, further supporting its effectiveness and scalability. Overall, the results indicate that GNNs can serve as effective adapters for LLMs in TAG modeling, combining the strengths of graph and language models.
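As a rough illustration of the adapter idea, the sketch below wires a small GNN module that mixes each node's frozen LLM representation with mean-aggregated neighbor states and projects the result back into the LLM's hidden space. This is a minimal sketch of the general pattern, not the paper's actual GraphAdapter implementation; the module names, dimensions, and mean-pooling aggregation are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GNNAdapter(nn.Module):
    """Illustrative GNN adapter sketch (not the paper's exact architecture).

    Aggregates neighbor states with mean pooling (GraphSAGE-style) and fuses
    the result with each node's frozen LLM hidden state via a residual update.
    """

    def __init__(self, llm_dim: int, hidden_dim: int = 256, num_layers: int = 2):
        super().__init__()
        self.proj_in = nn.Linear(llm_dim, hidden_dim)
        self.layers = nn.ModuleList(
            [nn.Linear(2 * hidden_dim, hidden_dim) for _ in range(num_layers)]
        )
        self.fuse = nn.Linear(hidden_dim + llm_dim, llm_dim)

    def forward(self, llm_states: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # llm_states: [num_nodes, llm_dim] frozen LLM representations of node text
        # adj: [num_nodes, num_nodes] row-normalized adjacency matrix
        h = F.relu(self.proj_in(llm_states))
        for layer in self.layers:
            neigh = adj @ h                                  # mean-aggregate neighbor states
            h = F.relu(layer(torch.cat([h, neigh], dim=-1)))
        # Residual fusion back into the LLM's representation space.
        return llm_states + self.fuse(torch.cat([h, llm_states], dim=-1))


if __name__ == "__main__":
    num_nodes, llm_dim = 8, 768                        # e.g., hidden size of a small LM
    llm_states = torch.randn(num_nodes, llm_dim)       # stand-in for frozen LLM outputs
    adj = torch.rand(num_nodes, num_nodes)
    adj = adj / adj.sum(dim=-1, keepdim=True)          # row-normalize adjacency

    adapter = GNNAdapter(llm_dim)
    fused = adapter(llm_states, adj)                   # structure-aware node states
    print(fused.shape)                                 # torch.Size([8, 768])
```

In language-aware pre-training as described above, fused node states of this kind would be fed back into the language model's prediction head so that graph structure is supervised by each node's own text; a task-specific head could then be attached for downstream fine-tuning.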