Justifying Recommendations using Distantly-Labeled Reviews and Fine-Grained Aspects

November 3–7, 2019 | Jianmo Ni, Jiacheng Li, Julian McAuley
This paper introduces a method for generating personalized and diverse justifications for recommendations by leveraging distantly-labeled reviews and fine-grained aspects. The authors propose a pipeline that extracts high-quality justifications from large-scale review corpora and uses them to build large-scale personalized recommendation-justification datasets. They design two models: (1) a reference-based Seq2Seq model with aspect-planning, which generates justifications covering different aspects, and (2) an aspect-conditional masked language model, which generates diverse justifications from templates extracted from users' justification histories.

The models are evaluated on two real-world datasets, Yelp and Amazon Clothing, where the proposed methods generate convincing and diverse justifications. The study highlights the importance of fine-grained aspects in justification generation and shows that incorporating prior knowledge into the generation framework substantially improves diversity. The reference-based models achieve higher BLEU scores, while the aspect-conditional masked language model achieves higher diversity scores than the baselines. Human evaluation further confirms that the reference-based models obtain high relevance scores and that sampling-based decoding yields more diverse and informative outputs. The authors conclude that aspect-planning is a promising way to guide generation toward personalized and relevant justifications.
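The aspect-conditional template idea can be illustrated with a toy sketch. Note that the template format, the aspect vocabulary, and the greedy fill rule below are illustrative assumptions only; the paper's actual model is a learned masked language model conditioned on aspects, not a lookup table.

```python
# Toy sketch of aspect-conditional template infilling (illustrative only).
# A template extracted from a past justification has [MASK] slots; we fill
# each slot with the highest-scoring word for the chosen aspect. The
# vocabulary and scores below are invented for the example -- the paper
# uses a BERT-style masked language model instead of this lookup.

ASPECT_VOCAB = {
    "food":    {"tacos": 0.9, "flavors": 0.8, "menu": 0.6},
    "service": {"staff": 0.9, "service": 0.8, "waiters": 0.5},
}

def fill_template(template, aspect, vocab=ASPECT_VOCAB):
    """Replace each [MASK] token with the best unused word for `aspect`,
    skipping already-used words so repeated slots stay diverse."""
    ranked = sorted(vocab[aspect], key=vocab[aspect].get, reverse=True)
    out, used = [], set()
    for tok in template.split():
        if tok == "[MASK]":
            pick = next(w for w in ranked if w not in used)
            used.add(pick)
            out.append(pick)
        else:
            out.append(tok)
    return " ".join(out)

print(fill_template("great [MASK] and friendly atmosphere", "food"))
# -> great tacos and friendly atmosphere
```

Conditioning the fill step on an aspect is what lets the same template yield different justifications for different users, which is the source of the diversity gains the paper reports.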