**CoRAL: Collaborative Retrieval-Augmented Large Language Models Improve Long-tail Recommendation**
**Authors:** Junda Wu
**Abstract:**
Long-tail recommendation is challenging due to data sparsity and imbalance. Large language models (LLMs) have shown promise in complex reasoning, but they often rely solely on the semantic meanings of items, neglecting the collaborative information in user-item interactions. To address this, we introduce CoRAL, a method that incorporates collaborative retrieval into LLM prompting: collaborative evidence is included directly in the prompt, enabling the LLM to analyze shared and distinct preferences among users and summarize interaction patterns for specific items. The retrieval policy, learned through reinforcement learning, finds the minimal-sufficient collaborative information for each user-item pair. Experimental results demonstrate that CoRAL significantly improves LLMs' reasoning on long-tail recommendation tasks, outperforming baselines while improving data efficiency through exploration.
**Keywords:** Large language models, Collaborative Filtering, Long-tail Recommendation
**Introduction:**
Traditional recommendation systems struggle with long-tail items due to data sparsity and imbalance: long-tail items have few interactions, making it hard to capture user-item patterns accurately. Causal debiasing and data augmentation methods aim to mitigate these issues, but they often settle on sub-optimal solutions or suffer from knowledge forgetting. LLMs, with their advanced reasoning capabilities, can help address these challenges by reasoning over fine-grained collaborative information.
**Related Work:**
Previous work has explored LLMs in recommendation systems, focusing on content augmentation and fine-grained reasoning. Without collaborative information, however, LLM predictions often misalign with real user-item interaction patterns.
**Problem Formulation:**
We formulate long-tail recommendation as a complex reasoning task for LLMs: predict a user's preference for a long-tail item by incorporating collaborative evidence. The retrieval policy learns to find the minimal-sufficient set of collaborative information that supports an accurate LLM prediction.
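To make "minimal-sufficient" concrete, one plausible formalization is a constrained subset-selection problem: retrieve the smallest evidence set whose inclusion lets the LLM predict the preference within a tolerated loss. The notation below is an assumption of this sketch, not fixed by the paper summary.

```latex
% Illustrative formalization (symbols are assumptions of this sketch):
% D_{u,i}: candidate pool of collaborative interactions for the pair (u, i)
% f_{LLM}(u, i, R): the LLM's preference prediction given retrieved set R
% y_{u,i}: the ground-truth preference;  \ell: a prediction loss
\begin{equation}
  R^{*}_{u,i} \;=\; \arg\min_{R \subseteq D_{u,i}} |R|
  \quad \text{s.t.} \quad
  \mathbb{E}\!\left[\,\ell\big(f_{\mathrm{LLM}}(u, i, R),\, y_{u,i}\big)\right] \le \epsilon
\end{equation}
```

The reinforcement-learning view in the next section turns this combinatorial search into a sequential decision process, with each retrieval step rewarded for the information it adds.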
**Proposed Framework:**
CoRAL uses collaborative prompting to construct prompts that embed collaborative information. The retrieval policy, learned through reinforcement learning, sequentially adds users and items to the prompt so as to maximize information gain. To improve learning efficiency, the policy is initialized from a model pre-trained on popular items, where interactions are plentiful. A minimal sketch of this pipeline follows.
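The sketch below illustrates collaborative prompting plus sequential retrieval under stated assumptions: every name here (`Interaction`, `build_prompt`, `retrieve_evidence`, `policy_score`, `budget`) is illustrative rather than taken from the paper, and a caller-supplied greedy scoring function stands in for the learned RL policy.

```python
from dataclasses import dataclass


@dataclass
class Interaction:
    """One observed user-item interaction used as collaborative evidence."""
    user: str
    item: str
    liked: bool


def build_prompt(target_user: str, target_item: str,
                 evidence: list[Interaction]) -> str:
    """Serialize retrieved collaborative evidence into a recommendation prompt."""
    lines = [
        f"- user {e.user} {'liked' if e.liked else 'disliked'} item {e.item}"
        for e in evidence
    ]
    return (
        "Observed user-item interactions:\n"
        + "\n".join(lines)
        + f"\n\nBased on these interactions, will user {target_user} "
        + f"like item {target_item}? Answer yes or no."
    )


def retrieve_evidence(pool: list[Interaction], policy_score,
                      budget: int = 8) -> list[Interaction]:
    """Sequentially add the interaction the policy scores highest.

    The paper trains this selection policy with reinforcement learning to
    maximize information gain; here `policy_score(candidate, chosen_so_far)`
    is a stand-in for that learned policy.
    """
    chosen: list[Interaction] = []
    remaining = list(pool)
    for _ in range(budget):
        if not remaining:
            break
        best = max(remaining, key=lambda e: policy_score(e, chosen))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

In use, `retrieve_evidence` would run once per long-tail user-item pair and the resulting prompt would be sent to the LLM; warm-starting the scoring policy from a model trained on popular items mirrors the initialization step described above.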
**Experiments:**
Extensive experiments on multiple datasets show that CoRAL significantly improves LLMs' performance on long-tail recommendation tasks. The retrieval policy reliably finds collaborative information that is both sufficient and minimal, enhancing the LLMs' ability to deduce user preferences accurately.