2024 | Yan Wang, Zhixuan Chu, Xin Ouyang, Simeng Wang, Hongyan Hao, Yue Shen, Jinjie Gu, Siqiao Xue, James Zhang, Qing Cui, Longfei Li, Jun Zhou, Sheng Li
LLMRG: Improving Recommendations through Large Language Model Reasoning Graphs
This paper proposes LLMRG, a novel approach that leverages large language models (LLMs) to construct personalized reasoning graphs for recommendation systems. These graphs link a user's profile and behavioral sequences through causal and logical inferences, representing the user's interests in an interpretable way. LLMRG consists of four components: chained graph reasoning, divergent extension, self-verification and scoring, and knowledge base self-improvement. The resulting reasoning graph is encoded with graph neural networks and used as an additional input to conventional recommender systems, without requiring extra user or item information.
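The four-stage pipeline above can be illustrated as composed functions. In the paper each stage is an LLM-driven step; the sketch below stubs them with plain functions, and every name and return shape is a hypothetical stand-in, not the paper's actual interface.

```python
# Illustrative sketch of LLMRG's four-stage pipeline; each stage would
# be an LLM call in the paper, stubbed here as plain Python functions.

def chained_graph_reasoning(profile, behavior_seq):
    # Stage 1: link the user profile to observed behaviors with
    # causal/logical edges (here: one "interest" edge per item).
    return [(profile, item, "interest") for item in behavior_seq]

def divergent_extension(graph):
    # Stage 2: extend the graph with plausible related interests
    # beyond what was directly observed.
    return graph + [(dst, dst + "_related", "divergent")
                    for _, dst, _ in graph]

def self_verify(graph):
    # Stage 3: score each inference for plausibility; a constant
    # stand-in score replaces the LLM's self-verification here.
    return [(edge, 1.0) for edge in graph]

def build_reasoning_graph(profile, behavior_seq):
    # Stages chained together; stage 4 (knowledge base
    # self-improvement) would cache verified graphs for reuse
    # across users and is omitted from this sketch.
    graph = chained_graph_reasoning(profile, behavior_seq)
    graph = divergent_extension(graph)
    return self_verify(graph)
```

The scored graph produced here is what would then be encoded by a graph neural network before reaching the recommender.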
The approach allows recommendations to benefit from both engineered recommendation systems and LLM-derived reasoning graphs. The paper demonstrates the effectiveness of LLMRG in enhancing base recommendation models on benchmarks and in real-world scenarios. The LLMRG framework includes an adaptive reasoning module with self-verification and a base sequential recommendation model. The adaptive reasoning module takes the user's interaction sequence and attributes as input, and constructs a reasoning graph and a divergent graph through chained graph reasoning, self-verification and scoring, and divergent extension. The base sequential recommendation model directly processes the input to produce an embedding. Finally, the embeddings from the adaptive reasoning module and the base model are concatenated into a fused embedding, which is used to predict the next item for the user.
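The fusion step described above can be sketched minimally as follows. All function names and the toy averaging encoders are assumptions for illustration; in the paper the two branches are a GNN over the reasoning graph and a learned sequential recommender.

```python
# Minimal sketch of the LLMRG embedding-fusion step: encode the
# interaction sequence and the reasoning graph separately, concatenate,
# then rank candidate next items. All names here are hypothetical.

def base_embed(interaction_seq):
    # Stand-in for the base sequential recommender's encoder:
    # a toy element-wise average of per-item vectors.
    dim = len(interaction_seq[0])
    return [sum(v[i] for v in interaction_seq) / len(interaction_seq)
            for i in range(dim)]

def graph_embed(node_vectors):
    # Stand-in for the GNN encoding of the reasoning graph:
    # a toy mean over node embeddings.
    dim = len(node_vectors[0])
    return [sum(v[i] for v in node_vectors) / len(node_vectors)
            for i in range(dim)]

def fused_embedding(interaction_seq, node_vectors):
    # Concatenate the two embeddings, mirroring the fusion step
    # described in the summary.
    return base_embed(interaction_seq) + graph_embed(node_vectors)

def score_next_items(fused, candidate_items):
    # Rank candidate items by dot product with the fused embedding.
    return sorted(candidate_items.items(),
                  key=lambda kv: -sum(a * b for a, b in zip(fused, kv[1])))
```

The concatenation means the base recommender's signal is preserved intact; the graph branch can only add information, which is consistent with LLMRG acting as an enhancement to existing models rather than a replacement.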
The paper also presents ablation studies that demonstrate the effectiveness of the proposed reasoning graph. The results show that the reasoning graph constructed by the proposed instructions is critical for performance, and that simple next-item prediction is insufficient. The ablation studies confirm the necessity and value of the reasoning graph in effectively leveraging the power of large language models for recommendation systems. The paper also presents a sensitivity analysis on the two most crucial parameters, τ and l_tru, which control the threshold for verification scoring and the sequence truncation length, respectively. The results show that larger τ and longer sequences both tend to improve performance. The paper concludes that LLMRG can effectively enhance multiple existing recommenders by using LLMs to construct personalized reasoning graphs.
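The roles of the two hyperparameters can be made concrete with a short sketch: τ gates which LLM-generated inferences survive self-verification, and l_tru bounds how much interaction history is fed in. Function names and data shapes below are illustrative assumptions, not the paper's API.

```python
# Hedged sketch of the two hyperparameters analyzed in the paper:
# tau filters reasoning-graph edges whose self-verification score is
# too low, and l_tru truncates the interaction sequence.

def filter_by_verification(scored_edges, tau):
    # Keep only inferences whose self-verification score meets the
    # threshold tau; a higher tau yields a stricter, cleaner graph.
    return [edge for edge, score in scored_edges if score >= tau]

def truncate_sequence(interaction_seq, l_tru):
    # Keep only the most recent l_tru interactions; a larger l_tru
    # gives the model a longer behavioral history to reason over.
    return interaction_seq[-l_tru:]
```

Under this reading, the reported trend is intuitive: a larger τ prunes unreliable inferences from the graph, and a longer truncation window retains more behavioral evidence, so both tend to help.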