Integrating Large Language Models into Recommendation via Mutual Augmentation and Adaptive Aggregation


July 14-18, 2024 | Sichun Luo, Yuxuan Yao, Bowei He, Yinya Huang, Aojun Zhou, Xinyi Zhang, Yuanzhang Xiao, Mingjie Zhan, Linqi Song
This paper introduces Llama4Rec, a general and model-agnostic framework that integrates conventional recommendation models with large language models (LLMs) through mutual augmentation and adaptive aggregation. The framework leverages the complementary strengths of the two paradigms: conventional methods excel at mining collaborative information and modeling sequential behaviors, while LLMs are proficient at exploiting rich textual context. Both, however, face limitations such as data sparsity and the long-tail problem. Llama4Rec addresses these challenges with data augmentation and prompt augmentation strategies tailored to the conventional models and the LLM, respectively, followed by an adaptive aggregation module that combines the predictions of both to refine the final recommendations.

The data augmentation strategy leverages an instruction-tuned LLM to predict items that a user may like or dislike, which helps alleviate data sparsity and the long-tail problem. For sequential recommendation, the LLM predicts items the user is likely to prefer, and these predictions are inserted into the user's interaction sequence; for rating prediction, the LLM extracts valuable side information that is integrated into the training data.
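To make the sequential data augmentation concrete, here is a minimal sketch. The paper's exact prompt template and insertion policy are not reproduced here; `query_llm` is a hypothetical callable standing in for an instruction-tuned LLM, and the prompt wording and random insertion positions are illustrative assumptions.

```python
import random

def augment_sequence(history, candidates, query_llm, k=2, seed=0):
    """Sketch of LLM-based data augmentation for sequential recommendation:
    ask an (instruction-tuned) LLM for pseudo-liked items and splice them
    into the user's interaction sequence."""
    prompt = (
        "A user interacted with these items, in order: "
        + ", ".join(history)
        + ". From the candidates [" + ", ".join(candidates) + "], "
        + f"list the {k} items the user is most likely to prefer, comma-separated."
    )
    # Keep only predictions that are real candidate items the user hasn't seen.
    predicted = [t.strip() for t in query_llm(prompt).split(",")]
    pseudo_liked = [i for i in predicted if i in candidates and i not in history][:k]

    # Insert each pseudo-liked item at a random position, producing an
    # augmented sequence for training the conventional model.
    rng = random.Random(seed)
    augmented = list(history)
    for item in pseudo_liked:
        augmented.insert(rng.randrange(len(augmented) + 1), item)
    return augmented

# Toy usage with a stubbed LLM response.
stub_llm = lambda prompt: "item_c, item_e"
print(augment_sequence(["item_a", "item_b"], ["item_c", "item_d", "item_e"], stub_llm))
```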
Prompt augmentation for the LLM enriches the prompt with collaborative information from similar users and with prior knowledge from the conventional recommendation model, which helps the LLM better understand user preferences and generate more accurate recommendations.
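The paper does not publish its prompt format, so the template below is only an illustrative sketch of the idea: fold the target user's history, similar users' interactions (collaborative information), and the conventional model's top candidates (prior knowledge) into a single prompt.

```python
def build_augmented_prompt(user_history, similar_histories, model_candidates):
    """Sketch of prompt augmentation: combine the target user's history,
    similar users' interactions, and the conventional model's candidate
    ranking into one LLM prompt. The wording is an assumption."""
    lines = [
        "You are a recommendation assistant.",
        "Target user's interaction history: " + ", ".join(user_history) + ".",
    ]
    # Collaborative information drawn from similar users.
    for i, hist in enumerate(similar_histories, start=1):
        lines.append(f"Similar user {i} interacted with: " + ", ".join(hist) + ".")
    # Prior knowledge from the conventional recommendation model.
    lines.append(
        "A conventional recommender ranked these candidates highest: "
        + ", ".join(model_candidates) + "."
    )
    lines.append("Rank the candidates by how likely the target user is to enjoy them.")
    return "\n".join(lines)

print(build_augmented_prompt(
    ["item_a", "item_b"],
    [["item_a", "item_c"], ["item_b", "item_d"]],
    ["item_c", "item_d", "item_e"],
))
```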
Finally, the adaptive aggregation module merges the predictions of the LLM and the conventional model in an adaptive manner, combining their strengths to refine the final recommendation results.
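The exact form of the adaptive aggregation module is not reproduced here; as a stand-in, the sketch below blends the two models' softmax-normalized item scores with a mixing weight `alpha`, which in a full implementation would be adapted (for example, learned) rather than fixed.

```python
import math

def aggregate_scores(conv_scores, llm_scores, alpha=0.5):
    """Sketch of score aggregation: convex combination of the two models'
    softmax-normalized score distributions. The fixed `alpha` is a
    simplification of Llama4Rec's adaptive weighting."""
    def softmax(scores):
        m = max(scores.values())
        exp = {item: math.exp(s - m) for item, s in scores.items()}
        z = sum(exp.values())
        return {item: v / z for item, v in exp.items()}

    p_conv, p_llm = softmax(conv_scores), softmax(llm_scores)
    items = set(conv_scores) | set(llm_scores)
    # Items missing from one model's output contribute zero probability there.
    fused = {i: alpha * p_conv.get(i, 0.0) + (1 - alpha) * p_llm.get(i, 0.0)
             for i in items}
    return sorted(items, key=lambda i: fused[i], reverse=True)

conv = {"item_c": 2.0, "item_d": 1.0, "item_e": 0.5}   # conventional model scores
llm  = {"item_c": 0.2, "item_d": 1.5, "item_e": 1.0}   # LLM scores
print(aggregate_scores(conv, llm, alpha=0.6))
```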
Empirical studies on three real-world datasets validate the superiority of Llama4Rec: it consistently and significantly outperforms the baselines across multiple performance metrics, demonstrating that the framework effectively addresses the limitations of both conventional recommendation methods and LLMs and offers a comprehensive solution for improving recommendation performance.