Bo Pan, Zheng Zhang, Yifei Zhang, Yuntong Hu, Liang Zhao | October 21–25, 2024, Boise, ID, USA
The paper "Distilling Large Language Models for Text-Attributed Graph Learning" addresses the challenge of training graph models on Text-Attributed Graphs (TAGs) using large language models (LLMs). The authors propose a novel framework that leverages the expressive outputs of LLMs to train an interpreter model, which then aligns with a student model. This approach aims to bridge the gap between LLMs and graph models by converting LLM-generated textual rationales into multi-level graph rationales and aligning the student model based on the features of TAGs. The framework is evaluated on four datasets, demonstrating significant improvements over baseline methods, with an average performance boost of 6.2%. The paper also includes a comprehensive ablation study and efficiency analysis, showing the effectiveness of the proposed method in various scenarios.The paper "Distilling Large Language Models for Text-Attributed Graph Learning" addresses the challenge of training graph models on Text-Attributed Graphs (TAGs) using large language models (LLMs). The authors propose a novel framework that leverages the expressive outputs of LLMs to train an interpreter model, which then aligns with a student model. This approach aims to bridge the gap between LLMs and graph models by converting LLM-generated textual rationales into multi-level graph rationales and aligning the student model based on the features of TAGs. The framework is evaluated on four datasets, demonstrating significant improvements over baseline methods, with an average performance boost of 6.2%. The paper also includes a comprehensive ablation study and efficiency analysis, showing the effectiveness of the proposed method in various scenarios.