25 Feb 2024 | Mathieu Ravaut, Hao Zhang, Lu Xu, Aixin Sun, Yong Liu
This paper proposes a Parameter-Efficient Conversational Recommender System (PECRS) that formulates conversational recommendation as a natural language processing task. PECRS leverages a pre-trained language model to encode items, capture user intent from the dialogue, recommend items via semantic matching, and generate responses. Unlike prior methods that rely on external knowledge graphs and multiple training phases, PECRS is a unified model optimized in a single stage and requires no non-textual metadata. The backbone language model is kept frozen, and a parameter-efficient plugin module unifies response generation and item recommendation; a shared negative sampling strategy further improves training efficiency and model performance. The design is flexible and can scale to larger language model backbones without significantly increasing the number of trainable parameters. PECRS is presented as the first approach to solve conversational recommendation by optimizing a single model in a single training phase, without knowledge graphs or an additional item encoder.

Experiments on two benchmark datasets, ReDial and INSPIRED, cover both item recommendation and response generation, and show that PECRS achieves competitive performance against existing methods on both tasks. The paper also discusses the method's limitations, including its need for more data, and points to future work with larger language models.
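The core idea of recommendation as semantic matching over a frozen language model, trained with shared in-batch negatives, can be sketched briefly. The snippet below is an illustrative approximation only: the GPT-2 backbone, mean pooling, dot-product scoring, and cross-entropy loss are assumptions for the sketch, not the authors' exact implementation, and the small trainable plugin module of PECRS is omitted.

```python
import torch
import torch.nn.functional as F
from transformers import GPT2Model, GPT2Tokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
plm = GPT2Model.from_pretrained("gpt2").to(device)
for p in plm.parameters():
    p.requires_grad = False  # backbone stays frozen; PECRS trains only a small plugin module (not shown here)

def encode(texts):
    """Mean-pool the frozen PLM's last hidden states into one vector per text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt").to(device)
    hidden = plm(**batch).last_hidden_state          # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)     # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)      # (B, H)

# One training batch: each dialogue context is paired with its gold item description.
# Every other item in the batch serves as a shared negative for all contexts.
contexts = ["User: I loved Inception, any similar movies?",
            "User: Looking for a light romantic comedy."]
items = ["Interstellar: a sci-fi epic about space, time, and family.",
         "Notting Hill: a classic romantic comedy set in London."]

ctx_emb = encode(contexts)                           # (B, H) dialogue-context embeddings
item_emb = encode(items)                             # (B, H) item embeddings
scores = ctx_emb @ item_emb.T                        # (B, B) semantic-matching scores
labels = torch.arange(len(contexts), device=device)  # diagonal entries are the gold items
loss = F.cross_entropy(scores, labels)               # contrastive loss with shared in-batch negatives
print(loss.item())
```

In this sketch the same (B, B) score matrix lets every dialogue context reuse the other contexts' gold items as negatives, which is one way to read the shared negative sampling strategy: negatives are encoded once per batch instead of once per example.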