This paper proposes a Large Language Models Enhancement framework for Sequential Recommendation (LLM-ESR) to address the long-tail user and long-tail item challenges in sequential recommendation systems (SRS). The long-tail user challenge refers to users with limited interaction history, while the long-tail item challenge refers to items with low popularity. Traditional SRS methods struggle with these challenges due to the scarcity of interactions, leading to suboptimal recommendations for these users and items. LLM-ESR leverages semantic embeddings from large language models (LLMs) to enhance SRS performance without increasing computational overhead.
To address the long-tail item challenge, LLM-ESR proposes a dual-view modeling approach that fuses semantic information from LLMs with collaborative signals from traditional SRS. To address the long-tail user challenge, it introduces a retrieval-augmented self-distillation technique that refines user preference representations by incorporating richer interaction data from similar users. The framework is model-agnostic and can be adapted to any sequential recommendation backbone.
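The two components described above can be sketched roughly as follows. This is an illustrative toy implementation, not the paper's exact design: the fusion by concatenation, the mean-pooling stand-in for a sequence encoder, the choice of 5 retrieved neighbors, and all dimensions are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, d_sem, d_collab = 100, 16, 8

# Semantic view: item embeddings assumed pre-computed once from an LLM
# (e.g. by encoding item titles/descriptions) and frozen, so no LLM call
# is needed at training or serving time.
sem_emb = rng.normal(size=(n_items, d_sem))
sem_emb /= np.linalg.norm(sem_emb, axis=1, keepdims=True)

# Collaborative view: learnable item embeddings, as in a standard SRS backbone.
collab_emb = rng.normal(size=(n_items, d_collab))

def dual_view_item_embedding(item_ids):
    """Fuse the two views; plain concatenation is one simple fusion choice."""
    return np.concatenate([sem_emb[item_ids], collab_emb[item_ids]], axis=-1)

seq = np.array([3, 17, 42])           # a (short) long-tail user's history
seq_emb = dual_view_item_embedding(seq)
user_repr = seq_emb.mean(axis=0)      # mean pooling stands in for the encoder

# Retrieval-augmented self-distillation (sketch): retrieve the most similar
# users by cosine similarity, average their representations into a "teacher",
# and add a loss pulling the long-tail user's representation toward it.
all_users = rng.normal(size=(50, d_sem + d_collab))  # other users' representations
sims = all_users @ user_repr / (
    np.linalg.norm(all_users, axis=1) * np.linalg.norm(user_repr) + 1e-8)
top_k = np.argsort(sims)[-5:]                        # 5 nearest users (assumed k)
teacher = all_users[top_k].mean(axis=0)
distill_loss = float(np.mean((user_repr - teacher) ** 2))  # MSE pull term
```

Because the semantic embeddings are computed offline and frozen, the online cost matches that of the underlying SRS model, which is consistent with the claim that LLM-ESR adds no extra inference overhead.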
Extensive experiments on three real-world datasets using three widely used SRS models demonstrate that LLM-ESR outperforms existing methodologies. The framework effectively enhances the performance of both long-tail users and long-tail items, showing the potential of using semantics to address long-tail challenges in SRS. The results indicate that LLM-ESR is flexible and effective in improving SRS performance.