October 21–25, 2024, Boise, ID, USA | Zhongxiang Sun*, Zihua Si*, Xiaoxue Zang, Kai Zheng, Yang Song, Xiao Zhang, Jun Xu*
The paper "Large Language Models Enhanced Collaborative Filtering" by Zhongxiang Sun, Zihua Si, Xiaoxue Zang, Yang Song, and Xiao Zhang proposes a novel framework called Large Language Models enhanced Collaborative Filtering (LLM-CF) to integrate the world knowledge and reasoning capabilities of Large Language Models (LLMs) into Recommender Systems (RSs). The main challenge addressed is the limited collaborative filtering information provided by LLMs, which can only handle a limited number of users and items as inputs. The proposed LLM-CF framework leverages in-context learning and chain of thought reasoning from LLMs to distill their knowledge and reasoning into collaborative filtering features. The framework is decoupled into two parts: offline service and online service. In the offline service, LLMs are fine-tuned to enhance their recommendation capabilities and generate chain of thought reasoning with collaborative filtering information. In the online service, retrieved in-context chain of thought examples are used to learn world-knowledge and reasoning-guided collaborative filtering features, which are then used to improve existing recommendation models. Extensive experiments on three real-world datasets demonstrate that LLM-CF significantly enhances the performance of backbone recommendation models in both ranking and retrieval tasks, outperforming competitive baselines. The framework is efficient and can be deployed without real-time LLM generation, making it suitable for practical applications.The paper "Large Language Models Enhanced Collaborative Filtering" by Zhongxiang Sun, Zihua Si, Xiaoxue Zang, Yang Song, and Xiao Zhang proposes a novel framework called Large Language Models enhanced Collaborative Filtering (LLM-CF) to integrate the world knowledge and reasoning capabilities of Large Language Models (LLMs) into Recommender Systems (RSs). The main challenge addressed is the limited collaborative filtering information provided by LLMs, which can only handle a limited number of users and items as inputs. The proposed LLM-CF framework leverages in-context learning and chain of thought reasoning from LLMs to distill their knowledge and reasoning into collaborative filtering features. The framework is decoupled into two parts: offline service and online service. In the offline service, LLMs are fine-tuned to enhance their recommendation capabilities and generate chain of thought reasoning with collaborative filtering information. In the online service, retrieved in-context chain of thought examples are used to learn world-knowledge and reasoning-guided collaborative filtering features, which are then used to improve existing recommendation models. Extensive experiments on three real-world datasets demonstrate that LLM-CF significantly enhances the performance of backbone recommendation models in both ranking and retrieval tasks, outperforming competitive baselines. The framework is efficient and can be deployed without real-time LLM generation, making it suitable for practical applications.