Can GNN be Good Adapter for LLMs?


May 13-17, 2024 | Xuanwen Huang, Kaiqiao Han, Yang Yang, Dezheng Bao, Quanjin Tao, Ziwei Chai, Qi Zhu
This paper proposes GraphAdapter, a method that uses a graph neural network (GNN) as an efficient adapter to help large language models (LLMs) model text-attributed graphs (TAGs). TAGs are graphs whose nodes carry textual features; they are common in social media, recommendation systems, and similar settings. Existing TAG methods are built around million-scale language models and face efficiency challenges when scaled to billion-scale LLMs. GraphAdapter addresses this by attaching a GNN adapter to the LLM, which introduces only a small number of trainable parameters and low computational cost. The adapter is trained via auto-regression on node text and can then be fine-tuned for various downstream tasks.

Across multiple real-world TAGs, GraphAdapter improves node classification by roughly 5% on average, and it also adapts to other language models such as RoBERTa and GPT-2. Extensive experiments and ablation studies further demonstrate its efficiency and performance across tasks and datasets, showing that GNNs can serve as effective adapters for LLMs in TAG modeling.
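To make the idea concrete, here is a minimal sketch (not the paper's actual implementation) of what "GNN as adapter" can mean: a small GNN produces a structure-aware embedding for a node, which is fused with a frozen LLM's hidden state to predict the next token of that node's text. All names, dimensions, and the gated-fusion design below are illustrative assumptions; only the adapter weights would be trained.

```python
import numpy as np

rng = np.random.default_rng(0)
HID = 16   # hypothetical LLM hidden size
V = 100    # hypothetical vocabulary size

def gnn_layer(node_feats, adj):
    """One mean-aggregation GNN layer (sketch): each node averages its
    neighbors' features, then applies a projection. W_gnn stands in for
    a trainable adapter parameter."""
    W_gnn = rng.standard_normal((HID, HID)) * 0.1
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    agg = adj @ node_feats / deg            # mean over neighbors
    return np.tanh(agg @ W_gnn)

def graph_adapter_logits(llm_hidden, node_feats, adj, node_id):
    """Fuse the frozen LLM's hidden state with the GNN embedding of the
    node whose text is being modeled, then predict next-token logits.
    The gate and output head are the only trainable parts here."""
    W_gate = rng.standard_normal((2 * HID, HID)) * 0.1
    W_out = rng.standard_normal((HID, V)) * 0.1
    g_emb = gnn_layer(node_feats, adj)[node_id]
    gate = 1 / (1 + np.exp(-np.concatenate([llm_hidden, g_emb]) @ W_gate))
    fused = gate * llm_hidden + (1 - gate) * g_emb
    return fused @ W_out                    # next-token logits over V

# Toy TAG: 3 nodes on a path; random stand-ins for LLM states / node features.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
node_feats = rng.standard_normal((3, HID))
llm_hidden = rng.standard_normal(HID)       # frozen LLM hidden state (stand-in)
logits = graph_adapter_logits(llm_hidden, node_feats, adj, node_id=1)
print(logits.shape)  # (100,)
```

In this sketch the LLM itself contributes no trainable parameters, which is what keeps the adapter's training cost low; an auto-regressive loss over node text would be applied to `logits` during pre-training.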