Knowledge Graph Embedding by Translating on Hyperplanes

2014 | Zhen Wang, Jianwen Zhang, Jianlin Feng, Zheng Chen
This paper introduces TransH, a knowledge graph embedding model that addresses TransE's difficulty with relations that have particular mapping properties: reflexive, one-to-many, many-to-one, and many-to-many. TransE is efficient and achieves state-of-the-art predictive performance, but its single-translation formulation cannot represent these relation types well. TransH instead associates each relation with a hyperplane and performs the translation on that hyperplane, preserving the mapping properties while keeping model complexity close to TransE's. The paper also proposes a strategy that exploits relation mapping properties to reduce false negative labels when sampling corrupted triplets during training.

The model is evaluated on link prediction, triplet classification, and fact extraction using benchmark datasets derived from WordNet and Freebase. TransH significantly outperforms TransE in predictive accuracy while maintaining comparable efficiency: it handles reflexive, one-to-many, many-to-one, and many-to-many relations better and also improves on one-to-one relations. The false-negative reduction strategy proves effective, and training and inference remain efficient. Together with the discussion of related work, the experiments demonstrate that TransH strikes a good balance between model capacity and efficiency, making it a promising approach for knowledge graph embedding.
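To make the hyperplane idea concrete, here is a minimal NumPy sketch (not the authors' implementation; function and variable names are illustrative) contrasting the TransE score with the TransH score, where each relation has a unit hyperplane normal w and a translation vector d lying on that hyperplane:

```python
import numpy as np

def transe_score(h, t, d):
    """TransE: ||h + d - t||; a lower score means a more plausible triplet."""
    return np.linalg.norm(h + d - t)

def transh_score(h, t, w, d):
    """TransH (sketch): project head and tail onto the relation-specific
    hyperplane with normal w, then translate by d within that hyperplane."""
    w = w / np.linalg.norm(w)        # enforce a unit normal, as in the paper
    h_perp = h - np.dot(w, h) * w    # component of h on the hyperplane
    t_perp = t - np.dot(w, t) * w    # component of t on the hyperplane
    return np.linalg.norm(h_perp + d - t_perp)
```

Because entities are projected per relation, two distinct tails of a one-to-many relation can share the same projection and both score well, whereas TransE's constraint h + d ≈ t would push their full embeddings to collapse into one point.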