IDGenRec: LLM-RecSys Alignment with Textual ID Learning


July 14–18, 2024 | Juntao Tan, Shuyuan Xu, Wenyue Hua, Yingqiang Ge, Zelong Li, Yongfeng Zhang
The paper "IDGenRec: LLM-RecSys Alignment with Textual ID Learning" addresses the challenge of encoding recommendation items into concise, meaningful textual IDs for generative recommendation. Generative recommendation recasts traditional ranking-based methods as text-to-text generation with large language models (LLMs), but current research struggles to encode items effectively within this paradigm, which limits the potential of LLM-based generative recommenders.

To overcome this, the authors propose IDGenRec, a framework that represents each item as a unique, concise, semantically rich, platform-agnostic textual ID composed of human-language tokens. A textual ID generator is trained alongside the LLM-based recommender, so personalized recommendations integrate seamlessly into natural language generation. Because user history is expressed in natural language and decoupled from the original dataset, the approach points toward a foundational generative recommendation model.

Experiments show that IDGenRec consistently outperforms existing models on sequential recommendation under standard experimental settings. Moreover, its zero-shot performance on unseen datasets is comparable to, or even better than, some traditional recommendation models trained with supervision, demonstrating its potential as a foundation model for generative recommendation. The paper also reviews related work on LLM-based discriminative and generative recommendation, highlighting the limitations of current methods and the advantages of the proposed IDGenRec framework.
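To make the core idea concrete, here is a minimal, illustrative sketch (not the authors' code) of the two moving parts the summary describes: mapping an item's metadata to a short, human-readable textual ID, and expressing a user's interaction history as a natural-language prompt that a text-to-text LLM recommender could complete with the next item's ID. The simple keyword-frequency heuristic below is an assumption standing in for the paper's trained ID generator, and the function names are hypothetical.

```python
# Illustrative sketch of textual-ID-based generative recommendation.
# The keyword heuristic is a placeholder for IDGenRec's trained ID generator.
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "for", "with", "in", "on"}

def generate_textual_id(metadata: str, max_tokens: int = 3) -> str:
    """Pick the most frequent non-stopword tokens as a concise textual ID."""
    tokens = [t.lower().strip(".,") for t in metadata.split()]
    tokens = [t for t in tokens if t and t not in STOPWORDS]
    common = [word for word, _ in Counter(tokens).most_common(max_tokens)]
    return " ".join(common)

def build_prompt(history_ids: list[str]) -> str:
    """Express user history in natural language, decoupled from raw item IDs."""
    joined = ", ".join(history_ids)
    return f"The user has interacted with: {joined}. Predict the next item's textual ID:"

item_meta = "Wireless Noise Cancelling Headphones with Bluetooth and Long Battery"
item_id = generate_textual_id(item_meta)          # e.g. "wireless noise cancelling"
prompt = build_prompt(["running shoes mesh", item_id])
```

Because both the IDs and the prompt are ordinary natural-language tokens, the same prompt format can in principle be reused across datasets and platforms, which is what motivates the foundational-model claim above.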
The authors conclude that their approach offers a new perspective on aligning LLMs with recommender systems by bridging the two through meticulously learned textual IDs, which may serve as a solid basis for training foundational recommendation models in the future.