Can we Soft Prompt LLMs for Graph Learning Tasks?

Zheyuan Liu, Xiaoxin He, Yijun Tian, Nitesh V. Chawla | May 13–17, 2024, Singapore
The paper "Can we Soft Prompt LLMs for Graph Learning Tasks?" by Zheyuan Liu, Xiaoxin He, Yijun Tian, and Nitesh V. Chawla explores the integration of Large Language Models (LLMs) with Graph Neural Networks (GNNs) to enhance the understanding and processing of graph-structured data. The authors introduce GraphPrompter, a novel framework that aligns graph information with LLMs through soft prompts. GraphPrompter consists of two main components: a GNN for encoding complex graph information and an LLM for processing textual information. The framework aims to address the challenges of directly applying LLMs to graph modalities, such as the discrepancy between graph and text modalities. The authors conduct extensive experiments on various benchmark datasets under node classification and link prediction tasks, demonstrating the effectiveness of GraphPrompter. The results show that GraphPrompter consistently outperforms baseline methods, including traditional GNNs and soft prompting techniques. The study highlights the potential of using LLMs for graph learning tasks, particularly in scenarios requiring a combination of structural and textual data interpretation. The paper concludes by emphasizing the superior performance of GraphPrompter and suggesting future directions for extending the framework to more complex graph-level tasks.The paper "Can we Soft Prompt LLMs for Graph Learning Tasks?" by Zheyuan Liu, Xiaoxin He, Yijun Tian, and Nitesh V. Chawla explores the integration of Large Language Models (LLMs) with Graph Neural Networks (GNNs) to enhance the understanding and processing of graph-structured data. The authors introduce GraphPrompter, a novel framework that aligns graph information with LLMs through soft prompts. GraphPrompter consists of two main components: a GNN for encoding complex graph information and an LLM for processing textual information. The framework aims to address the challenges of directly applying LLMs to graph modalities, such as the discrepancy between graph and text modalities. The authors conduct extensive experiments on various benchmark datasets under node classification and link prediction tasks, demonstrating the effectiveness of GraphPrompter. The results show that GraphPrompter consistently outperforms baseline methods, including traditional GNNs and soft prompting techniques. The study highlights the potential of using LLMs for graph learning tasks, particularly in scenarios requiring a combination of structural and textual data interpretation. The paper concludes by emphasizing the superior performance of GraphPrompter and suggesting future directions for extending the framework to more complex graph-level tasks.