Sequential Recommendation with Latent Relations based on Large Language Model

27 Mar 2024 | Shenghao Yang, Weizhi Ma, Peijie Sun, Qingyao Ai, Yiqun Liu, Mingchen Cai, Min Zhang
This paper introduces a novel framework for sequential recommendation that leverages Latent Relation Discovery (LRD) based on Large Language Models (LLMs). Traditional sequential recommendation methods rely on capturing implicit collaborative filtering signals among items, while recent relation-aware models have shown promising performance by explicitly incorporating item relations. However, these methods typically depend on manually predefined relations, which can be sparse and generalize poorly to diverse scenarios.

The proposed LRD framework addresses these limitations by using LLMs to discover new types of item relations. LLMs, with their rich world knowledge and semantic representation capabilities, provide language-based knowledge representations of items. These representations are then fed into a latent relation discovery module based on the discrete-state variational autoencoder (DVAE). The self-supervised relation discovery task and the recommendation task are jointly optimized, improving the model's ability to capture diverse user preferences. Experimental results on multiple public datasets demonstrate that LRD significantly enhances the performance of existing relation-aware sequential recommendation models by discovering reliable and useful item relations. Further analysis confirms the effectiveness and reliability of the discovered latent relations, supporting the benefits of LRD for both recommendation accuracy and interpretability.
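To make the pipeline concrete, the sketch below illustrates the general idea of the latent relation discovery module: LLM-derived item embeddings for an item pair are encoded into a distribution over a small set of discrete latent relations, and a reconstruction objective scores how well each relation explains the pair. All dimensions, weight matrices, and the TransE-style translation decoder here are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes (not taken from the paper)
D = 16   # dimensionality of the LLM item embeddings
K = 4    # number of discrete latent relation types

# Encoder: scores each latent relation from a concatenated (head, tail) pair
W_enc = rng.normal(size=(2 * D, K)) * 0.1
# Decoder: one translation vector per relation (a TransE-style assumption)
R = rng.normal(size=(K, D)) * 0.1

def discover_relation(h, t):
    """Infer q(relation | head item, tail item) over K latent relations."""
    logits = np.concatenate([h, t]) @ W_enc
    return softmax(logits)

def reconstruction_loss(h, t):
    """DVAE-style self-supervised objective: expected distance
    between the translated head (h + r_k) and the tail t,
    weighted by the inferred relation distribution."""
    q = discover_relation(h, t)                   # shape (K,)
    dists = np.linalg.norm((h + R) - t, axis=1)   # one distance per relation
    return float(q @ dists)                       # expectation under q

# Usage: score a random item pair
h, t = rng.normal(size=D), rng.normal(size=D)
q = discover_relation(h, t)
loss = reconstruction_loss(h, t)
```

In the full framework this self-supervised loss would be optimized jointly with the sequential recommendation loss, so the discovered relations are shaped by both language knowledge and user behavior.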