CoRAL: Collaborative Retrieval-Augmented Large Language Models Improve Long-tail Recommendation

2024 | Junda Wu, Cheng-Chun Chang, Tong Yu, Zhankui He, Jianing Wang, Yupeng Hou, Julian McAuley
This paper proposes CoRAL, a collaborative retrieval-augmented large language model (LLM) framework that improves long-tail recommendation by incorporating collaborative evidence into prompts. Traditional recommender systems struggle with long-tail items due to data sparsity and imbalance, while LLMs excel at complex reasoning but often neglect collaborative information, leading to misalignment with task-specific user-item interaction patterns.

CoRAL addresses this by integrating collaborative evidence directly into prompts, enabling the LLM to analyze shared and distinct user preferences and to summarize the patterns that indicate which users would be attracted to certain items. The retrieved evidence aligns the LLM's reasoning with the user-item interaction patterns in the dataset. However, because prompt size is limited, finding a minimal yet sufficient set of collaborative information is challenging. CoRAL therefore trains a retrieval policy network with reinforcement learning (RL) to find the optimal interaction set, using collaborative filtering models learned on short-head data as initialization to improve policy-learning efficiency.

Experiments on multiple datasets show that CoRAL significantly improves LLMs' reasoning on long-tail recommendation tasks, with significant gains in AUC and F1 scores over baselines; analysis further shows that the RL policy explores collaborative information more efficiently.
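The prompt-augmentation idea described above can be illustrated with a short sketch: retrieved interaction triples are serialized as in-context evidence ahead of the recommendation question. All names here (`build_coral_prompt`, the triple format) are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of CoRAL-style prompt augmentation: collaborative
# evidence (other users' interactions) is serialized directly into the
# LLM prompt alongside the target user-item query.

def build_coral_prompt(target_user, target_item, evidence):
    """Compose a recommendation prompt that embeds retrieved
    collaborative evidence as in-context information.

    evidence: list of (user, item, liked) interaction triples,
    as selected by the retrieval policy.
    """
    lines = [f"User {u} {'liked' if liked else 'disliked'} item {i}."
             for (u, i, liked) in evidence]
    return (
        "Collaborative evidence:\n"
        + "\n".join(lines)
        + f"\n\nQuestion: Would user {target_user} enjoy item {target_item}? "
        "Answer yes or no, reasoning over the shared and distinct "
        "preferences shown above."
    )

prompt = build_coral_prompt(
    "u42", "tail-item-7",
    [("u17", "tail-item-7", True), ("u42", "item-3", True)],
)
print(prompt)
```

The prompt-size limit the paper mentions corresponds to capping the length of the `evidence` list, which is exactly what the retrieval policy must optimize for.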
CoRAL demonstrates the effectiveness of incorporating collaborative information into LLMs for long-tail recommendations.
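The retrieval-policy component can be sketched as a minimal REINFORCE-style loop: learn scores over candidate interactions so that sampling a small, budget-limited evidence set maximizes a downstream recommendation reward. The reward function below is a stand-in for querying the LLM, and every detail (candidate names, budget, learning rate, baseline) is an assumption for illustration only.

```python
# Minimal REINFORCE-style sketch of a retrieval policy over candidate
# interactions. The stub reward pretends the LLM answers correctly only
# when the truly informative interaction "inter_b" is in the evidence set.
import math
import random

random.seed(0)

CANDIDATES = ["inter_a", "inter_b", "inter_c", "inter_d"]

def reward(evidence):
    """Stand-in for the downstream LLM recommendation reward."""
    return 1.0 if "inter_b" in evidence else 0.0

scores = {c: 0.0 for c in CANDIDATES}  # policy logits
LR, K = 0.5, 2  # learning rate; evidence budget (prompt-size limit)

def sample_evidence():
    """Sample K interactions without replacement, proportional to softmax(scores)."""
    chosen, remaining = [], list(CANDIDATES)
    for _ in range(K):
        weights = [math.exp(scores[c]) for c in remaining]
        r, acc = random.random() * sum(weights), 0.0
        for c, w in zip(remaining, weights):
            acc += w
            if r <= acc:
                chosen.append(c)
                remaining.remove(c)
                break
    return chosen

# REINFORCE update: raise scores of sampled interactions in proportion
# to the reward, with a crude constant baseline of 0.5.
for _ in range(200):
    evidence = sample_evidence()
    r = reward(evidence)
    for c in CANDIDATES:
        grad = (1.0 if c in evidence else 0.0) - 0.5
        scores[c] += LR * r * grad

best = max(scores, key=scores.get)
print(best)
```

After training, the informative interaction dominates the policy's scores, mirroring how CoRAL's policy learns to spend its limited prompt budget on the most useful collaborative evidence; the paper's warm-start from short-head collaborative filtering models would replace the zero-initialized `scores` here.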