May 13–17, 2024, Singapore, Singapore | Chao Zhang, Shiwei Wu, Haoxin Zhang, Tong Xu, Yan Gao, Yao Hu, Di Wu, Enhong Chen
The paper introduces NoteLLM, a novel framework for item-to-item (I2I) note recommendation that leverages Large Language Models (LLMs). Traditional methods often underutilize important cues like hashtags and categories, which represent key concepts of notes. NoteLLM addresses this by compressing notes into a single special token using a Note Compression Prompt and learning related notes' embeddings through contrastive learning. Additionally, it generates hashtags and categories for each note using Collaborative Supervised Fine-tuning (CSFT). Extensive experiments on real-world datasets, including Xiaohongshu, demonstrate the effectiveness of NoteLLM, showing significant improvements in recommendation performance and user engagement. The framework's ability to enhance note embeddings and handle varying levels of note exposure is highlighted, along with its robustness in handling cold-start notes.
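The contrastive objective over related notes' embeddings can be sketched as an InfoNCE-style loss: the compressed embedding of a note is pulled toward a related (co-occurring) note and pushed away from unrelated ones. This is a minimal, dependency-free illustration under that assumption, not the paper's exact implementation; the function names and the temperature value are hypothetical.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors (hypothetical helper).
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def info_nce_loss(anchor, positive, negatives, temperature=0.07):
    """InfoNCE-style contrastive loss on note embeddings (sketch).

    anchor:    embedding of the query note
    positive:  embedding of a related note
    negatives: embeddings of unrelated notes
    """
    logits = [cosine(anchor, positive) / temperature]
    logits += [cosine(anchor, neg) / temperature for neg in negatives]
    # Numerically stable log-softmax over the positive's logit.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    return -math.log(exps[0] / sum(exps))

# Toy 2-D embeddings: the aligned positive yields a lower loss
# than a misaligned one against the same negative.
anchor = [1.0, 0.0]
good = info_nce_loss(anchor, [0.95, 0.05], [[0.0, 1.0]])
bad = info_nce_loss(anchor, [0.1, 0.9], [[0.0, 1.0]])
print(good < bad)
```

In training, minimizing this loss over many (anchor, positive) pairs drawn from co-occurrence data shapes the special-token embeddings so that related notes land close together in the embedding space.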