UniGraph: Learning a Unified Cross-Domain Foundation Model for Text-Attributed Graphs

25 Aug 2024 | Yufei He, Yuan Sui, Xiaoxin He, Bryan Hooi
The paper introduces UniGraph, a foundation model designed for Text-Attributed Graphs (TAGs) to enable cross-domain generalization. UniGraph leverages text as a unifying medium to align the feature spaces of different graphs, addressing the challenge of transferring learned knowledge across diverse domains. The model employs a cascaded architecture combining Language Models (LMs) and Graph Neural Networks (GNNs) and proposes a novel pre-training algorithm based on Masked Graph Modeling (MGM) for large-scale self-supervised learning on TAGs. Additionally, UniGraph introduces graph instruction tuning using Large Language Models (LLMs) to enable zero-shot prediction ability. Extensive experiments across various graph learning tasks and domains demonstrate the model's effectiveness in self-supervised representation learning, few-shot in-context transfer, and zero-shot transfer, outperforming or matching the performance of supervised methods trained on target datasets. The paper also discusses the contributions, related work, preliminaries, and experimental results, highlighting the robustness and versatility of UniGraph in cross-domain graph learning.
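
To make the pre-training recipe concrete, below is a minimal PyTorch sketch of the cascaded design: a text encoder embeds each node's text, a fraction of the node embeddings is replaced with a learnable mask token, a GNN propagates over the graph, and the model is trained to reconstruct the masked nodes' text. Everything here is an illustrative assumption rather than the paper's implementation: the class names (CascadedTAGEncoder, BagOfTokensLM, MeanAggGNN), the bag-of-tokens stand-in for a pretrained LM, the one-layer mean-aggregation GNN, the 0.3 mask rate, and the simplified one-token reconstruction target.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BagOfTokensLM(nn.Module):
    """Stand-in text encoder; a real run would plug in a pretrained LM."""
    def __init__(self, vocab_size, hidden_dim):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, hidden_dim)

    def forward(self, token_ids):               # [num_nodes, tokens_per_node]
        return self.emb(token_ids)              # [num_nodes, hidden_dim]

class MeanAggGNN(nn.Module):
    """One-layer message-passing stand-in for the GNN component."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.lin = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, h, edge_index):
        src, dst = edge_index                   # [2, num_edges]
        agg = torch.zeros_like(h).index_add_(0, dst, h[src])  # sum over in-neighbors
        return F.relu(self.lin(h + agg))

class CascadedTAGEncoder(nn.Module):
    """LM encodes node text; GNN propagates; masked nodes are reconstructed."""
    def __init__(self, lm, gnn, hidden_dim, vocab_size, mask_rate=0.3):
        super().__init__()
        self.lm, self.gnn = lm, gnn
        self.mask_token = nn.Parameter(torch.zeros(hidden_dim))
        self.decoder = nn.Linear(hidden_dim, vocab_size)
        self.mask_rate = mask_rate

    def forward(self, token_ids, edge_index):
        h_text = self.lm(token_ids)                           # per-node text embedding
        masked = torch.rand(h_text.size(0)) < self.mask_rate  # choose nodes to mask
        h_in = torch.where(masked.unsqueeze(-1),
                           self.mask_token.expand_as(h_text), h_text)
        h_graph = self.gnn(h_in, edge_index)                  # neighbors fill in masked nodes
        return self.decoder(h_graph[masked]), masked          # logits over the vocabulary

# Toy pre-training step: predict one token of each masked node's text.
vocab, dim, n = 1000, 64, 8
model = CascadedTAGEncoder(BagOfTokensLM(vocab, dim), MeanAggGNN(dim), dim, vocab)
token_ids = torch.randint(0, vocab, (n, 16))                  # 16 text tokens per node
edge_index = torch.tensor([[0, 1, 2, 3, 4, 5, 6], [1, 2, 3, 4, 5, 6, 7]])
logits, masked = model(token_ids, edge_index)
if masked.any():
    loss = F.cross_entropy(logits, token_ids[masked][:, 0])  # simplified MGM objective
    loss.backward()
```

The design point the sketch preserves is that masked nodes can only be reconstructed from their neighbors' text features, which forces the GNN to fuse textual and structural information during pre-training.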
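
The zero-shot ability comes from graph instruction tuning: graph tasks are cast as natural-language instructions so an LLM can answer with a label name for categories it was never trained on. The prompt template and helper function below are hypothetical, intended only to show the general shape of such an instruction; the paper's actual templates and tuning procedure differ.

```python
# Hypothetical prompt builder for zero-shot node classification via an LLM.
def build_instruction_prompt(node_text: str, neighbor_texts: list[str],
                             label_names: list[str]) -> str:
    neighbors = "\n".join(f"- {t}" for t in neighbor_texts[:5])  # truncate context
    return (
        "You are given a node from a text-attributed graph.\n"
        f"Node text: {node_text}\n"
        f"Texts of sampled neighbors:\n{neighbors}\n"
        f"Choose the node's category from: {', '.join(label_names)}.\n"
        "Answer with the category name only."
    )

prompt = build_instruction_prompt(
    "Attention Is All You Need ...",
    ["BERT: Pre-training of Deep Bidirectional Transformers ..."],
    ["cs.CL", "cs.CV", "cs.LG"],
)
```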