Image-based Recommendations on Styles and Substitutes

June 17, 2015 | Julian McAuley, Christopher Targett, Qinfeng ('Javen') Shi, and Anton van den Hengel
The paper "Image-based Recommendations on Styles and Substitutes" by Julian McAuley, Christopher Targett, Qinfeng ('Javen') Shi, and Anton van den Hengel develops a system that recommends clothing and accessories based on visual appearance and human notions of compatibility. Rather than relying on visual similarity alone, the authors model the human notion of which objects complement each other and which can serve as substitutes. They propose a visual and relational recommender system trained on a large dataset of product relationships from Amazon, learning a global notion of which products go together.

The system uses a convolutional neural network to extract features from product images and a Mahalanobis distance transform to model the relationships between objects. Evaluated on datasets spanning clothing, books, and movies, the model accurately predicts co-purchases and generates personalized recommendations. The paper also visualizes the learned "style space" and applies the model to assess how well real-world outfits coordinate. The authors conclude that their method captures complex visual relationships beyond simple visual similarity, and is particularly effective at identifying complementary items.
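The core scoring idea can be sketched in a few lines: project high-dimensional CNN features through a learned low-rank matrix, measure the squared distance in that embedding, and pass it through a shifted sigmoid so that nearby items get a high probability of being related. The sketch below uses illustrative sizes and random placeholder parameters (`F`, `K`, `Y`, and `c` would be learned from the Amazon co-purchase data in the actual system), so it shows only the shape of the computation, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

F, K = 4096, 10  # CNN feature dimension and embedding rank (illustrative sizes)
Y = rng.normal(scale=0.01, size=(K, F))  # low-rank projection; learned in practice
c = 0.5                                  # sigmoid shift; learned in practice

def mahalanobis_sq(x_i, x_j, Y):
    """Low-rank Mahalanobis distance: d(x_i, x_j) = ||Y x_i - Y x_j||^2."""
    diff = Y @ x_i - Y @ x_j
    return float(diff @ diff)

def relation_prob(x_i, x_j, Y, c):
    """Shifted sigmoid of the distance: closer items -> higher probability."""
    return 1.0 / (1.0 + np.exp(mahalanobis_sq(x_i, x_j, Y) - c))

# Two placeholder feature vectors standing in for CNN image features.
x_i = rng.normal(size=F)
x_j = rng.normal(size=F)
p = relation_prob(x_i, x_j, Y, c)
assert 0.0 <= p <= 1.0
```

Because the projection is low-rank (K much smaller than F), the model has far fewer parameters than a full F-by-F metric, which is what makes training on millions of Amazon product pairs tractable.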