WWW '19, May 13–17, 2019, San Francisco, CA, USA | Hongwei Wang, Miao Zhao, Xing Xie, Wenjie Li, Minyi Guo
This paper introduces Knowledge Graph Convolutional Networks (KGCN), an end-to-end framework designed to address the sparsity and cold-start problems in collaborative filtering-based recommender systems. KGCN leverages knowledge graphs (KGs) to capture inter-item relationships and semantic information, enhancing the precision, diversity, and explainability of recommendations. The key innovation is the use of graph convolutional operations to aggregate neighborhood information, which helps model high-order structural proximity and capture users' long-distance interests. The model is implemented in a minibatch fashion so that it scales to large datasets and KGs. Experimental results on three datasets (MovieLens-20M, Book-Crossing, and Last.FM) demonstrate that KGCN outperforms state-of-the-art baselines, achieving significant improvements in AUC and Recall@K. The paper also discusses the impact of neighbor sampling size, receptive field depth, and embedding dimension on the model's performance, providing insights for future research directions.
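To make the aggregation idea concrete, below is a minimal NumPy sketch of a single KGCN-style layer: a fixed-size neighborhood is sampled, each sampled relation is scored by its inner product with the user embedding, the scores are softmax-normalized, and the weighted neighbor embeddings are combined with the entity's own embedding through a "sum" aggregator. This is an illustrative reconstruction from the summary above, not the authors' released code; the names (`kg`, `sample_neighbors`, `kgcn_layer`) and the toy dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy KG: entity_id -> list of (relation_id, neighbor_entity_id).
kg = {
    0: [(0, 1), (1, 2), (0, 3)],
    1: [(0, 0)],
    2: [(1, 0)],
    3: [(0, 0)],
}

n_entities, n_relations, dim = 4, 2, 8  # illustrative sizes
ent_embs = rng.normal(size=(n_entities, dim))
rel_embs = rng.normal(size=(n_relations, dim))
user_emb = rng.normal(size=dim)
W = rng.normal(size=(dim, dim))
b = np.zeros(dim)

def sample_neighbors(entity, sample_size):
    """Fixed-size neighbor sampling (with replacement when the entity has
    fewer neighbors than sample_size), mirroring the minibatch scheme."""
    pairs = kg[entity]
    idx = rng.choice(len(pairs), size=sample_size,
                     replace=len(pairs) < sample_size)
    return [pairs[i] for i in idx]

def kgcn_layer(entity, sample_size):
    """One aggregation step: user-personalized softmax over sampled
    relations, then a 'sum' aggregator over self and neighborhood."""
    pairs = sample_neighbors(entity, sample_size)
    # Score each sampled relation by inner product with the user embedding.
    scores = np.array([user_emb @ rel_embs[r] for r, _ in pairs])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()  # normalized, user-specific attention
    # Weighted average of sampled neighbor embeddings.
    neigh = sum(w * ent_embs[e] for w, (_, e) in zip(weights, pairs))
    # Combine self and neighborhood representations, then a nonlinearity.
    return np.tanh(W @ (ent_embs[entity] + neigh) + b)

item_repr = kgcn_layer(entity=0, sample_size=2)
score = user_emb @ item_repr  # predicted user-item preference
```

Stacking this layer H times (each hop sampling its own fixed-size neighborhood) yields the H-hop receptive field mentioned above, which is how the model captures high-order structural proximity; the sampling size and depth H correspond to the neighbor sampling size and receptive field depth hyperparameters whose effects the paper studies.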