**RippleNet: Propagating User Preferences on the Knowledge Graph for Recommender Systems**
This paper addresses the sparsity and cold-start problems of collaborative filtering by incorporating side information, specifically a knowledge graph (KG), into the recommender system. RippleNet is an end-to-end framework that naturally integrates the KG into the recommendation process. It propagates user preferences over the KG by extending a user's potential interests along KG links, analogous to ripples spreading on the surface of water. The multiple "ripples" activated by a user's historically clicked items superpose to form the user's preference distribution with respect to a candidate item, which is then used to predict the final click probability.
**Key Contributions:**
1. **Combination of Embedding-based and Path-based Methods:** RippleNet combines the advantages of embedding-based and path-based methods, naturally incorporating the KG into recommendation.
2. **Automatic Discovery of Hierarchical Interests:** RippleNet automatically discovers users' hierarchical potential interests by iteratively propagating preferences in the KG.
3. **Performance on Real-world Datasets:** Experiments on movie, book, and news recommendation datasets show significant gains over state-of-the-art baselines.
**Methods:**
- **Ripple Set:** A user's historically clicked items are treated as seed entities in the KG, and these seeds are expanded hop by hop along KG links to form a sequence of ripple sets, one per hop (see the construction sketch after this list).
- **Preference Propagation:** The candidate item's embedding is compared against the head entities and relations of the triples in each ripple set to compute relevance probabilities; the corresponding tail embeddings, weighted by these probabilities, are summed hop by hop and accumulated into the user's representation (formulas sketched below).
- **Learning Algorithm:** Training maximizes the posterior probability of the model parameters given the KG and the user-item interaction matrix, which unfolds into an interaction-likelihood term plus KGE and L2 regularization, optimized with stochastic gradient descent (objective sketched below).
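The hop-by-hop construction of ripple sets can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's released code: the function name `build_ripple_sets`, the triple format, and the fixed-size truncation are assumptions made here for clarity.

```python
from collections import defaultdict

def build_ripple_sets(kg_triples, seed_items, n_hops=2, max_size=32):
    """Hop-by-hop expansion of a user's clicked items over the KG.

    kg_triples : iterable of (head, relation, tail) entity/relation ids
    seed_items : the user's historically clicked items (hop-0 seeds)
    Returns a list of ripple sets, one per hop, each a list of triples.
    """
    # Index the KG by head entity for fast expansion.
    by_head = defaultdict(list)
    for h, r, t in kg_triples:
        by_head[h].append((h, r, t))

    ripple_sets = []
    frontier = set(seed_items)
    for _ in range(n_hops):
        hop_triples = [trip for e in frontier for trip in by_head[e]]
        # In practice a fixed-size sample keeps memory bounded.
        hop_triples = hop_triples[:max_size]
        ripple_sets.append(hop_triples)
        # Tails of this hop become the seeds of the next hop.
        frontier = {t for _, _, t in hop_triples}
    return ripple_sets


# Tiny usage example with made-up entity/relation ids.
kg = [(1, 0, 5), (1, 1, 6), (5, 0, 7), (6, 2, 8)]
print(build_ripple_sets(kg, seed_items=[1], n_hops=2))
```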
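The preference-propagation step can be written out as follows, using the usual notation for RippleNet: v is the candidate item embedding, (h_i, r_i, t_i) ranges over the triples in the user's hop-k ripple set S_u^k, and R_i is the embedding matrix of relation r_i. Treat this as a sketch of the formulation rather than a verbatim transcription of the paper's equations.

```latex
% Relevance of triple (h_i, r_i, t_i) in the hop-k ripple set to candidate item v
p_i = \mathrm{softmax}\!\left(\mathbf{v}^{\top}\mathbf{R}_i\mathbf{h}_i\right)
    = \frac{\exp\left(\mathbf{v}^{\top}\mathbf{R}_i\mathbf{h}_i\right)}
           {\sum_{(h,r,t)\in\mathcal{S}_u^k}\exp\left(\mathbf{v}^{\top}\mathbf{R}\mathbf{h}\right)}

% Hop-k response: relevance-weighted sum of tail embeddings
\mathbf{o}_u^k = \sum_{(h_i, r_i, t_i)\in\mathcal{S}_u^k} p_i\,\mathbf{t}_i

% User embedding and predicted click probability after H hops
\mathbf{u} = \mathbf{o}_u^1 + \mathbf{o}_u^2 + \dots + \mathbf{o}_u^H,
\qquad \hat{y}_{uv} = \sigma\!\left(\mathbf{u}^{\top}\mathbf{v}\right)
```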
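The posterior-maximization objective mentioned above takes roughly the following form once expanded: a cross-entropy term over observed interactions in Y, a KGE reconstruction term over the KG's relation indicator matrices, and an L2 penalty. The notation here (lambda_1, lambda_2, I_r, E, V, R) is a reconstruction and may differ in detail from the paper.

```latex
% Negative log-posterior: interaction likelihood + KGE reconstruction + L2 penalty
\min_{\Theta}\; \mathcal{L}
  = -\sum_{(u,v)\in\mathbf{Y}} \Big( y_{uv}\log\sigma(\mathbf{u}^{\top}\mathbf{v})
      + (1 - y_{uv})\log\big(1 - \sigma(\mathbf{u}^{\top}\mathbf{v})\big) \Big)
  + \frac{\lambda_2}{2}\sum_{r}\big\lVert \mathbf{I}_r - \mathbf{E}^{\top}\mathbf{R}\,\mathbf{E} \big\rVert_2^2
  + \frac{\lambda_1}{2}\Big( \lVert\mathbf{V}\rVert_2^2 + \lVert\mathbf{E}\rVert_2^2 + \lVert\mathbf{R}\rVert_2^2 \Big)
```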
**Experiments:**
- **Performance on Datasets:** RippleNet outperforms several state-of-the-art baselines in terms of AUC and precision@K on all three datasets (a short sketch of how these metrics are computed follows this list).
- **Parameter Sensitivity:** Performance is influenced by the embedding dimension and by the weight of the KGE regularization term.
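For reference, the two reported metrics can be computed as in the brief sketch below. The helper `precision_at_k` and the toy arrays are hypothetical examples; `roc_auc_score` is scikit-learn's standard AUC implementation.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def precision_at_k(relevance_of_ranked_items, k):
    """Fraction of the top-k ranked items that are relevant (clicked)."""
    top_k = np.asarray(relevance_of_ranked_items)[:k]
    return top_k.mean()

# Toy example: predicted click probabilities vs. ground-truth labels.
y_true = np.array([1, 0, 1, 1, 0, 0])
y_score = np.array([0.9, 0.8, 0.7, 0.4, 0.3, 0.2])

print("AUC:", roc_auc_score(y_true, y_score))        # CTR-style evaluation
ranked = y_true[np.argsort(-y_score)]                 # labels sorted by score, descending
print("precision@3:", precision_at_k(ranked, k=3))    # top-K recommendation
```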
**Conclusion:**
RippleNet effectively addresses the limitations of existing KG-aware recommendation methods by combining preference propagation with KGE regularization. Extensive experiments validate its superior performance in various recommendation scenarios. Future work includes further investigation into characterizing entity-relation interactions and designing non-uniform samplers for better exploration of user interests.