Convolutional 2D Knowledge Graph Embeddings


2018 | Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel
ConvE is a multi-layer convolutional network model for link prediction in knowledge graphs. It applies 2D convolutions over reshaped entity and relation embeddings to predict missing links, and it matches or exceeds the performance of existing models such as DistMult and R-GCN with significantly fewer parameters. To speed up training and evaluation, ConvE uses 1-N scoring: each (subject, relation) pair is scored against all candidate entities in a single pass. Batch normalization and dropout make the model robust to overfitting. ConvE is particularly effective at modeling nodes with high indegree, which are common in complex knowledge graphs such as Freebase and YAGO3.
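The architecture described above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' reference implementation: the embedding size, reshape dimensions, filter count, and dropout rate are illustrative hyperparameters, and the final matrix multiply implements the 1-N scoring described in the summary.

```python
import torch
import torch.nn as nn

class ConvE(nn.Module):
    """Minimal ConvE sketch: 2D convolution over stacked embeddings, 1-N scoring."""

    def __init__(self, num_entities, num_relations, emb_dim=200, emb_h=10, emb_w=20):
        super().__init__()
        assert emb_h * emb_w == emb_dim, "embedding must reshape into emb_h x emb_w"
        self.emb_h, self.emb_w = emb_h, emb_w
        self.entity_emb = nn.Embedding(num_entities, emb_dim)
        self.relation_emb = nn.Embedding(num_relations, emb_dim)
        self.conv = nn.Conv2d(1, 32, kernel_size=3)  # 32 filters, 3x3 (illustrative)
        self.bn0 = nn.BatchNorm2d(1)
        self.bn1 = nn.BatchNorm2d(32)
        self.bn2 = nn.BatchNorm1d(emb_dim)
        self.dropout = nn.Dropout(0.2)
        # feature-map size after a valid 3x3 conv on a (2*emb_h) x emb_w input
        conv_out = 32 * (2 * emb_h - 2) * (emb_w - 2)
        self.fc = nn.Linear(conv_out, emb_dim)

    def forward(self, subj, rel):
        # reshape and stack subject/relation embeddings into a 2D "image"
        e_s = self.entity_emb(subj).view(-1, 1, self.emb_h, self.emb_w)
        e_r = self.relation_emb(rel).view(-1, 1, self.emb_h, self.emb_w)
        x = torch.cat([e_s, e_r], dim=2)        # (batch, 1, 2*emb_h, emb_w)
        x = self.bn0(x)
        x = torch.relu(self.bn1(self.conv(x)))
        x = self.dropout(x).flatten(1)
        x = torch.relu(self.bn2(self.fc(x)))
        # 1-N scoring: one matrix multiply scores (subj, rel) against every entity
        return x @ self.entity_emb.weight.t()   # (batch, num_entities)
```

In practice the scores are passed through a sigmoid and trained with binary cross-entropy against all entities at once, which is what makes the 1-N formulation fast.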
However, the standard WN18 and FB15k benchmarks suffer from test set leakage: many test triples are simple inverses of triples in the training set, so a trivial rule that memorizes inverse relations scores well. To address this, robust versions of these datasets (WN18RR and FB15k-237) were created, with such inverse pairs removed. ConvE achieves state-of-the-art Mean Reciprocal Rank across all evaluated datasets. Compared with convolutional architectures in computer vision, the model is shallow; future work may explore deeper variants and a better interpretation of what the 2D convolution learns. Overall, ConvE is a promising, parameter-efficient model for link prediction in knowledge graphs.
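The inverse-relation leakage described above is easy to detect mechanically. The sketch below (function name and triple format are illustrative) flags test triples whose reversed subject/object pair already appears in training, regardless of relation name:

```python
def find_inverse_leaks(train_triples, test_triples):
    """Return test triples (s, r, o) whose reversed pair (o, s) occurs in training.

    Triples are (subject, relation, object) tuples; the relation is ignored
    when matching, since inverse relations carry different names
    (e.g. hypernym vs. hyponym).
    """
    inverted_pairs = {(o, s) for s, _, o in train_triples}
    return [t for t in test_triples if (t[0], t[2]) in inverted_pairs]

# Example: the first test triple is the inverse of a training triple.
train = [("cat", "hypernym", "animal")]
test = [("animal", "hyponym", "cat"), ("dog", "hyponym", "poodle")]
leaks = find_inverse_leaks(train, test)  # -> [("animal", "hyponym", "cat")]
```

A model that simply learns "if (s, r, o) is in training, predict (o, r', s)" exploits exactly these triples, which is why the filtered datasets remove them.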