The paper "Knowledge Graph Embedding by Translating on Hyperplanes" by Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen addresses the challenge of embedding large-scale knowledge graphs into a continuous vector space. The authors start from the TransE model, which is efficient and achieves state-of-the-art predictive performance but struggles with certain mapping properties of relations, namely reflexive, one-to-many, many-to-one, and many-to-many relations. To overcome these limitations, they propose TransH, which models each relation as a translation operation on a relation-specific hyperplane. This approach lets TransH preserve the mapping properties of relations while keeping model complexity close to that of TransE. The paper also introduces a method to reduce false negative labels during training by leveraging the mapping properties of relations when sampling corrupted triples. Extensive experiments on benchmark datasets such as WordNet and Freebase show that TransH significantly improves predictive accuracy over TransE, with comparable scalability.
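To make the geometric idea concrete, below is a minimal sketch of the TransH scoring function as defined in the paper: the head and tail embeddings h and t are first projected onto the hyperplane of relation r (with unit normal w_r), and the score is the squared distance between the projected head translated by d_r and the projected tail. The symbols h, t, w_r, and d_r follow the paper's notation; the NumPy helper itself is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def transh_score(h, t, w_r, d_r):
    """TransH plausibility score f_r(h, t) = ||h_perp + d_r - t_perp||_2^2.

    h, t  : head/tail entity embeddings
    w_r   : normal vector of the relation-specific hyperplane
    d_r   : translation vector lying on that hyperplane
    Lower scores indicate more plausible triples.
    """
    w_r = w_r / np.linalg.norm(w_r)        # enforce the unit-norm constraint on w_r
    h_perp = h - np.dot(w_r, h) * w_r      # project head onto the hyperplane
    t_perp = t - np.dot(w_r, t) * w_r      # project tail onto the hyperplane
    return np.sum((h_perp + d_r - t_perp) ** 2)
```

Because different entities can share the same projection onto a relation's hyperplane, a relation can map many heads to one tail (or vice versa) without forcing their full embeddings to collapse, which is what lets TransH handle the relation types that trouble TransE.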