Integrating Large Language Models into Recommendation via Mutual Augmentation and Adaptive Aggregation

25 Jan 2024 | Sichun Luo1, Yuxuan Yao1, Bowei He1, Yinya Huang1, Aojun Zhou2, Xinyi Zhang3, Yuanzhang Xiao4, Mingjie Zhan2, Linqi Song1†
The paper introduces Llama4Rec, a framework that integrates large language models (LLMs) with conventional recommendation models to enhance recommendation performance. Llama4Rec addresses the limitations of both approaches by performing mutual augmentation and adaptive aggregation. Data augmentation is used to alleviate data sparsity and long-tail issues in conventional models, while prompt augmentation enriches LLMs with collaborative and sequential information. An adaptive aggregation module combines the predictions from both models to refine the final recommendations. Empirical studies on three real-world datasets validate the effectiveness of Llama4Rec, demonstrating significant improvements over baseline methods in various recommendation tasks.
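The summary above does not spell out how prompt augmentation injects collaborative and sequential signals into the LLM input. A minimal sketch of one plausible construction is shown below; the function name, the template wording, and the idea of listing neighbor-liked items as the collaborative signal are illustrative assumptions, not the paper's exact prompt format.

```python
def build_augmented_prompt(user_history, neighbor_items, candidates):
    """Hypothetical augmented prompt for an LLM recommender.

    user_history   -- items the user interacted with, in order (sequential signal)
    neighbor_items -- items liked by similar users (collaborative signal)
    candidates     -- candidate items for the LLM to rank
    """
    history = ", ".join(user_history)
    collab = ", ".join(neighbor_items)
    cands = "; ".join(f"{i}. {c}" for i, c in enumerate(candidates, 1))
    return (
        f"The user recently interacted with: {history}.\n"
        f"Users with similar tastes also liked: {collab}.\n"
        f"Rank the following candidate items: {cands}\n"
        "Answer with the number of the best candidate."
    )


# Example: the resulting string can be sent to any instruction-tuned LLM.
prompt = build_augmented_prompt(
    ["The Matrix", "Inception"],
    ["Interstellar"],
    ["Tenet", "Titanic"],
)
```

The key design point is that both signal types are verbalized into natural language, so a conventional model's knowledge (neighborhoods, interaction order) becomes usable by an off-the-shelf LLM without retraining it.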
The framework's components, including data augmentation, prompt augmentation, and adaptive aggregation, are evaluated through ablation studies, highlighting their contributions to overall performance. The paper also discusses hyper-parameter analysis and computational efficiency, suggesting future directions for improving LLM-based recommendation systems.
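The adaptive aggregation step combines the two models' predictions, though the summary does not give its exact form. The sketch below illustrates one simple realization, a weighted fusion of normalized score vectors; the `alpha` weight and the min-max normalization are illustrative assumptions (in Llama4Rec the combination weight is learned adaptively rather than fixed).

```python
import numpy as np


def adaptive_aggregate(conv_scores, llm_scores, alpha=0.5):
    """Fuse item scores from a conventional model and an LLM.

    alpha -- weight on the conventional model's scores; in the paper this
             weighting is adaptive, here it is a fixed parameter for clarity.
    Returns item indices ranked best-first by the fused score.
    """
    def _minmax(s):
        # Map scores to [0, 1] so the two models' ranges are comparable.
        s = np.asarray(s, dtype=float)
        rng = s.max() - s.min()
        return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

    fused = alpha * _minmax(conv_scores) + (1 - alpha) * _minmax(llm_scores)
    return np.argsort(-fused)


# Example: with alpha=0.7 the conventional model's ranking dominates.
ranking = adaptive_aggregate([0.9, 0.1, 0.5], [0.2, 0.8, 0.5], alpha=0.7)
```

Normalizing before mixing matters because the two models produce scores on different scales (e.g. logits vs. dot products); without it, one model would silently dominate regardless of the weight.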