July 14-18, 2024 | Shenghao Yang, Weizhi Ma, Peijie Sun, Qingyao Ai, Yiqun Liu, Mingchen Cai, Min Zhang
This paper proposes a novel relation-aware sequential recommendation framework called Latent Relation Discovery (LRD), which leverages Large Language Models (LLMs) to discover latent relations between items. Traditional methods rely on predefined relations extracted from knowledge graphs, but these suffer from sparsity and limited generalization.

LRD uses LLMs to generate language knowledge representations of items, which are then used to discover latent relations through a self-supervised learning process. The framework includes a relation extraction module and an item reconstruction module, which are jointly optimized to discover latent relations and improve recommendation quality. The LRD module is integrated into existing relation-aware sequential recommendation models, and the joint optimization of the two tasks yields significant performance gains. Experimental results on multiple public datasets show that LRD enhances the performance of existing models and discovers reliable latent relations, capturing diverse user preferences and improving recommendation accuracy.

The key contributions are: the first use of LLMs for discovering latent relations in relation-aware sequential recommendation, a self-supervised learning framework for LRD, and experimental validation of LRD's effectiveness. The paper also discusses the importance of latent relations in recommendation systems and the potential of LLMs to supply rich world knowledge for item relation discovery.
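The two-module loop described above (extract a latent relation between an item pair, then reconstruct the target item from the source item and that relation) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the random item vectors stand in for LLM-derived language knowledge representations, and the DistMult-style bilinear scoring is an assumed instantiation of the relation scorer.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_relations, dim = 100, 8, 16

# Hypothetical stand-ins: in the paper these would be derived from
# LLM-encoded item descriptions; here they are random for illustration.
item_emb = rng.normal(size=(n_items, dim))
rel_emb = rng.normal(size=(n_relations, dim))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def extract_relation(head, tail):
    """Relation extraction step: score each latent relation for an
    (head, tail) item pair with a DistMult-style bilinear score and
    return a soft distribution over latent relations."""
    scores = np.array([np.sum(item_emb[head] * r * item_emb[tail])
                       for r in rel_emb])
    return softmax(scores)

def reconstruct_item(head, rel_dist):
    """Item reconstruction step: predict the tail item from the head item
    and the inferred relation mixture; self-supervised training would
    maximize the probability assigned to the true tail item."""
    r = rel_dist @ rel_emb                    # expected relation vector
    scores = item_emb @ (item_emb[head] * r)  # score every candidate tail
    return softmax(scores)

p_rel = extract_relation(3, 7)      # distribution over latent relations
p_tail = reconstruct_item(3, p_rel) # distribution over candidate items
```

In training, the reconstruction loss would flow back into both the relation scorer and the item representations, so the latent relations are discovered without any predefined knowledge-graph labels.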