Large Language Models Enhanced Sequential Recommendation for Long-tail User and Item

31 May 2024 | Qidong Liu, Xian Wu, Xiangyu Zhao, Yejing Wang, Zijian Zhang, Feng Tian, Yefeng Zheng
This paper addresses the challenges of long-tail users and items in sequential recommendation systems (SRS) by leveraging large language models (LLMs). The authors propose the Large Language Models Enhancement framework for Sequential Recommendation (LLM-ESR), which integrates semantic embeddings from LLMs to enhance SRS performance without increasing computational overhead. To tackle the long-tail item challenge, a dual-view modeling approach is introduced that fuses semantic information from LLMs with collaborative signals from traditional SRS. For the long-tail user challenge, a retrieval-augmented self-distillation technique refines user preference representations by incorporating richer interaction data from similar users. Extensive experiments on three real-world datasets with three widely used SRS models demonstrate the superior performance of the proposed framework over existing methods.

The contributions of the paper are:
1. Introducing a large language models enhancement framework (LLM-ESR) that alleviates both the long-tail user and long-tail item challenges in SRS.
2. Designing an embedding-based enhancement method that avoids extra inference burden and retains the original semantic relations.
3. Conducting comprehensive experiments to validate the effectiveness and flexibility of LLM-ESR.

The paper also reviews related work on sequential recommendation and LLMs for recommendation, providing a comprehensive overview of the field and highlighting the novelty of the proposed framework.
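The two ideas above can be sketched in a few lines of code. The summary does not specify embedding dimensions, the fusion operator, or the retrieval metric, so the shapes, the linear adapter, and cosine-similarity retrieval below are all illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_users, d_llm, d_rec = 100, 50, 32, 16

# Semantic view: item embeddings precomputed once by an LLM encoder
# and kept frozen, so no LLM call is needed at inference time.
llm_emb = rng.normal(size=(n_items, d_llm))
llm_emb /= np.linalg.norm(llm_emb, axis=1, keepdims=True)

# Collaborative view: trainable item embeddings from the base SRS model.
collab_emb = rng.normal(size=(n_items, d_rec))

# Hypothetical linear adapter projecting the semantic view into the SRS space.
W = rng.normal(size=(d_llm, d_rec)) * 0.1

def dual_view_item_embedding(item_ids):
    """Fuse the semantic (LLM) and collaborative (SRS) views by
    concatenating the adapted semantic embedding with the collaborative one."""
    sem = llm_emb[item_ids] @ W            # (batch, d_rec)
    col = collab_emb[item_ids]             # (batch, d_rec)
    return np.concatenate([sem, col], axis=-1)  # (batch, 2 * d_rec)

emb = dual_view_item_embedding(np.array([0, 5, 42]))
print(emb.shape)  # (3, 32)

# Retrieval step of the self-distillation idea: find users with similar
# preference representations; their representations can then serve as a
# distillation target for a long-tail user with few interactions.
user_reprs = rng.normal(size=(n_users, d_rec))

def retrieve_similar_users(query_repr, user_reprs, k=3):
    """Return indices of the k most cosine-similar users to the query."""
    sims = user_reprs @ query_repr / (
        np.linalg.norm(user_reprs, axis=1) * np.linalg.norm(query_repr) + 1e-8)
    return np.argsort(-sims)[:k]

neighbors = retrieve_similar_users(user_reprs[0], user_reprs, k=3)
```

Because the semantic embeddings are precomputed and frozen, the only added cost at serving time is the adapter projection, which is consistent with the paper's claim of avoiding inference overhead.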