Bridging Language and Items for Retrieval and Recommendation


2024-03-06 | Yupeng Hou, Jiacheng Li, Zhankui He, An Yan, Xiusi Chen, Julian McAuley
This paper introduces BLAIR, a series of pre-trained sentence embedding models designed for recommendation scenarios. BLAIR is trained to learn correlations between item metadata and natural language contexts, which is useful for retrieving and recommending items. To support this, the authors collect a new dataset, AMAZON REVIEWS 2023, comprising over 570 million reviews and 48 million items across 33 categories, significantly expanding previous versions: it has 3.18 times more items and 2.4 times more reviews and item metadata, with up-to-date user reviews and timestamps accurate to the millisecond. They evaluate BLAIR across multiple domains and tasks, including a newly proposed task, complex product search, which requires retrieving relevant items from long, complex natural language contexts. Using large language models such as ChatGPT, they construct a semi-synthetic evaluation set for this task, Amazon-C4. Empirical results show that BLAIR exhibits strong text and item representation capabilities. The dataset, code, and checkpoints are available at https://github.com/hyp1231/AmazonReviews2023.

BLAIR is pretrained on the new dataset with a contrastive objective that pairs each user review with the metadata of its corresponding item; the data are split into training and evaluation sets by timestamp. A minimal sketch of such an objective is shown below.
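As an illustration only, here is a minimal PyTorch sketch of a contrastive objective over (review, item metadata) pairs. The InfoNCE form with in-batch negatives, the symmetric two-direction loss, and the temperature value are assumptions chosen for the sketch, not details confirmed by the summary above.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(review_emb: torch.Tensor,
                     metadata_emb: torch.Tensor,
                     temperature: float = 0.05) -> torch.Tensor:
    """InfoNCE-style loss over a batch of (review, item metadata) pairs.

    review_emb, metadata_emb: [batch, dim] embeddings of paired texts.
    Row i of each tensor comes from the same (review, item) pair; all
    other rows in the batch act as in-batch negatives (an assumption,
    standard for this family of objectives).
    """
    # L2-normalise so dot products are cosine similarities.
    review_emb = F.normalize(review_emb, dim=-1)
    metadata_emb = F.normalize(metadata_emb, dim=-1)
    logits = review_emb @ metadata_emb.T / temperature  # [batch, batch]
    # The matching pair for each row sits on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric loss: review -> item and item -> review directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))
```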
The authors evaluate BLAIR on three tasks: sequential recommendation, conventional product search, and the newly introduced complex product search, the last evaluated on the semi-synthetic Amazon-C4 set. BLAIR outperforms existing methods on these tasks, achieving the best performance across all domains on complex product search and demonstrating strong generalization, which highlights its effectiveness at learning correlations between language contexts and items. The authors also study the impact of multi-domain training and data curriculum strategies, finding that multi-domain training improves generalizability while certain data curricula hurt performance. The paper concludes that BLAIR is a powerful model for language-heavy recommendation tasks, and the dataset, code, and checkpoints are released for further research. A retrieval sketch using a released checkpoint appears below.
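For context, here is a minimal sketch of embedding-based product search with a BLAIR model. The checkpoint name hyp1231/blair-roberta-base, the CLS-token pooling, and the toy query and catalog are assumptions for this sketch; consult the linked repository for the actual released checkpoints and recommended usage.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Checkpoint name is an assumption based on the linked repository;
# see https://github.com/hyp1231/AmazonReviews2023 for released models.
checkpoint = "hyp1231/blair-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

# A long, complex query in the spirit of the complex product search task.
query = ("I need a lightweight tent that packs small, handles heavy rain, "
         "and fits two people for a week-long backpacking trip.")
items = [  # hypothetical item metadata strings
    "Ultralight 2-person backpacking tent, waterproof 3000mm rainfly, 1.8kg.",
    "Family camping tent for 8 people with screened porch.",
    "Compact travel hammock with mosquito net.",
]

inputs = tokenizer([query] + items, padding=True, truncation=True,
                   max_length=512, return_tensors="pt")
with torch.no_grad():
    # CLS-token pooling, L2-normalised so dot products are cosine similarities.
    emb = model(**inputs).last_hidden_state[:, 0]
    emb = emb / emb.norm(dim=1, keepdim=True)

scores = emb[0] @ emb[1:].T  # similarity of the query to each item
for rank, idx in enumerate(scores.argsort(descending=True).tolist(), 1):
    print(f"{rank}. ({scores[idx].item():.3f}) {items[idx]}")
```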