Embedding Entities and Relations for Learning and Inference in Knowledge Bases

29 Aug 2015 | Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao & Li Deng
This paper presents a general framework for learning representations of entities and relations in knowledge bases (KBs) using neural embeddings. The framework unifies several multi-relational embedding models, including NTN and TransE, under a single learning objective in which entities are represented as low-dimensional vectors and relations as bilinear and/or linear mapping functions.

The authors compare the different embedding models on the link prediction task and show that a simple bilinear formulation achieves state-of-the-art results (73.2% top-10 accuracy on Freebase vs. 54.7% for TransE).

They also introduce a novel approach that uses the learned relation embeddings to mine logical rules such as BornInCity(a,b) ∧ CityInCountry(b,c) → Nationality(a,c). Embeddings learned with the bilinear objective are particularly effective at capturing relational semantics: composition of relations corresponds to multiplication of their embeddings, which makes it possible to extract Horn rules involving compositional reasoning. Evaluated on Freebase, this embedding-based approach outperforms AMIE, a state-of-the-art rule mining system, on such compositional rules, and on the rule extraction task the bilinear (multiplicative) formulation achieves superior performance compared to additive formulations.

The paper also discusses the use of pre-trained vectors for entity initialization and the impact of different embedding types on rule extraction. Overall, the study demonstrates that neural embeddings can learn effective representations of entities and relations in KBs and capture relational semantics useful for both inference and rule extraction.
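The bilinear scoring described above can be sketched in a few lines. This is a minimal illustration, assuming the diagonal (DistMult-style) parameterization highlighted in the paper, where a triple (h, r, t) is scored as e_h^T diag(w_r) e_t; the variable names, dimensions, and random values below are purely illustrative.

```python
import numpy as np

def bilinear_score(e_h, w_r, e_t):
    """Score a triple (h, r, t) with a diagonal bilinear form:
    score = e_h^T diag(w_r) e_t, i.e. an element-wise product summed."""
    return np.sum(e_h * w_r * e_t)

# Toy example: d-dimensional embeddings (values are illustrative).
d = 4
rng = np.random.default_rng(0)
e_head = rng.normal(size=d)   # entity embedding for the head
w_rel = rng.normal(size=d)    # diagonal relation parameters
e_tail = rng.normal(size=d)   # entity embedding for the tail

print(bilinear_score(e_head, w_rel, e_tail))
```

A higher score indicates that the model considers the triple more plausible; for link prediction, candidate tails are ranked by this score.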
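The rule-extraction idea, composing relation embeddings along a path and comparing the result to a target relation, can be sketched as follows. With diagonal relation parameterizations, composition reduces to element-wise multiplication; the Euclidean distance, the synthetic relation vectors, and the relation names below are assumptions made for illustration, not the paper's exact procedure or data.

```python
import numpy as np

def compose(w_r1, w_r2):
    """Compose two diagonal relation embeddings along a path
    r1(a,b) ∧ r2(b,c); for diagonal matrices this is element-wise."""
    return w_r1 * w_r2

def rule_distance(path_rels, target_rel):
    """Distance between the composed path embedding and a target
    relation; a small distance suggests the Horn rule path -> target."""
    composed = path_rels[0]
    for w in path_rels[1:]:
        composed = compose(composed, w)
    return np.linalg.norm(composed - target_rel)

# Toy relations mirroring the example rule (hypothetical values):
rng = np.random.default_rng(1)
born_in_city = rng.normal(size=4)
city_in_country = rng.normal(size=4)
# Construct a target close to the composition so the rule scores well.
nationality = born_in_city * city_in_country + rng.normal(scale=0.01, size=4)

# A small distance supports BornInCity ∧ CityInCountry -> Nationality.
print(rule_distance([born_in_city, city_in_country], nationality))
```

In practice, candidate rules would be ranked by this distance across all relation paths in the KB, keeping the lowest-distance paths for each target relation.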