2018 | Schlichtkrull, Michael; Kipf, Thomas N.; Bloem, Peter; van den Berg, Rianne; Titov, Ivan; Welling, Max
This paper introduces Relational Graph Convolutional Networks (R-GCNs) for modeling relational data in knowledge bases. R-GCNs are designed to handle the highly multi-relational nature of realistic knowledge bases and are applied to two standard tasks: entity classification and link prediction. An R-GCN encoder produces node representations; for entity classification these feed a softmax classifier, while for link prediction the encoder is combined with a DistMult decoder that scores candidate triples. The encoder-decoder model outperforms a decoder-only DistMult baseline by 29.8% on FB15k-237. To make R-GCNs scale to multigraphs with many relation types, the paper introduces parameter-sharing and sparsity techniques for the per-relation weight matrices. Entity classification is evaluated on the AIFB, MUTAG, BGS, and AM benchmarks, with state-of-the-art results on AIFB and AM. The paper also discusses related work on relational modeling and on neural networks for graphs, and concludes that R-GCNs are a promising approach for relational data: in particular, adding an R-GCN encoder significantly improves factorization models such as DistMult.
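The two components named above are compact enough to sketch. Below is a minimal NumPy illustration (function names and tensor shapes are my own, not the authors' code) of one R-GCN propagation step, the basis-decomposition parameter-sharing scheme the paper proposes, and the DistMult triple score used by the decoder.

```python
import numpy as np

def rgcn_layer(H, adj_per_rel, W_rel, W_self):
    """One R-GCN propagation step (sketch):
    h_i' = ReLU( sum_r sum_{j in N_i^r} (1/c_{i,r}) W_r h_j + W_0 h_i ).
    H: (n, d_in) node features; adj_per_rel: one (n, n) adjacency matrix
    per relation; W_rel: (R, d_in, d_out); W_self: (d_in, d_out)."""
    out = H @ W_self  # self-connection term W_0 h_i
    for A, W in zip(adj_per_rel, W_rel):
        deg = A.sum(axis=1, keepdims=True)  # c_{i,r} = |N_i^r|
        # Row-normalize, leaving rows with no r-neighbors at zero.
        norm = np.divide(A, deg, out=np.zeros_like(A), where=deg > 0)
        out += norm @ (H @ W)  # normalized message passing for relation r
    return np.maximum(out, 0.0)  # ReLU nonlinearity

def basis_weights(coeffs, comps):
    """Basis decomposition, the paper's parameter-sharing scheme:
    W_r = sum_b a_{rb} V_b, so only B << R full matrices are learned.
    coeffs: (R, B) relation coefficients; comps: (B, d_in, d_out) bases."""
    return np.einsum('rb,bio->rio', coeffs, comps)

def distmult_score(e_s, r_diag, e_o):
    """DistMult decoder: f(s, r, o) = e_s^T diag(r) e_o."""
    return float(np.sum(e_s * r_diag * e_o))
```

Note that DistMult's diagonal form makes the score symmetric in subject and object, which is one reason an expressive encoder like the R-GCN helps on datasets with many asymmetric relations.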