InstructGraph: Boosting Large Language Models via Graph-centric Instruction Tuning and Preference Alignment

13 Feb 2024 | Jianing Wang¹²*, Junda Wu², Yupeng Hou², Yao Liu¹†, Ming Gao¹, Julian McAuley²
In this paper, the authors propose InstructGraph, a framework that enhances large language models (LLMs) with graph reasoning and generation capabilities through instruction tuning and preference alignment. The framework addresses the challenges of semantic gaps between graph and text data and hallucination issues in graph tasks. Specifically, InstructGraph introduces a structured format verbalizer to unify graph data into a code-like format, enabling LLMs to understand and generate graph structures more effectively. The graph instruction tuning stage guides LLMs to solve graph reasoning and generation tasks, while the graph preference alignment stage optimizes the LLM's preferences to reduce hallucinations. Extensive experiments on multiple graph-centric tasks demonstrate that InstructGraph outperforms GPT-4 and LLaMA2 by more than 13% and 38%, respectively, achieving the best performance in both graph-centric instruction and preference tasks. The framework's effectiveness is further validated through human evaluation and ablation studies, showing its robustness and generalization capabilities.
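To make the "code-like format" concrete, the Python sketch below shows one plausible way a structured format verbalizer could serialize a graph into text. The verbalize_graph helper and the exact output syntax are illustrative assumptions, not the paper's actual implementation, which may use a different serialization.

# A minimal sketch of a structured-format verbalizer (hypothetical: the
# exact code-like serialization used by InstructGraph is not reproduced
# here). The idea is to render a graph as a compact, code-like string
# that an LLM can both read in a prompt and emit as output.

def verbalize_graph(name, entities, triples):
    """Serialize a graph into a code-like block of entities and relation triples."""
    lines = [f'Graph[name="{name}"] {{']
    lines.append("    entity_list = [" + ", ".join(f'"{e}"' for e in entities) + "];")
    lines.append("    triple_list = [")
    for head, relation, tail in triples:
        lines.append(f'        ("{head}" -> "{tail}")[relation="{relation}"],')
    lines.append("    ];")
    lines.append("}")
    return "\n".join(lines)

# Example usage with a toy knowledge graph.
entities = ["Alan Turing", "Computer Science", "London"]
triples = [
    ("Alan Turing", "field of work", "Computer Science"),
    ("Alan Turing", "place of birth", "London"),
]
print(verbalize_graph("toy-knowledge-graph", entities, triples))

A serialization along these lines lets a code-pretrained LLM reuse its familiarity with programming syntax when reading or generating graph structures, which is the motivation the paper gives for the verbalizer.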