Dependency-Based Word Embeddings

June 23-25, 2014 | Omer Levy and Yoav Goldberg
The paper "Dependency-Based Word Embeddings" by Omer Levy and Yoav Goldberg explores the generalization of the skip-gram model with negative sampling to include arbitrary contexts, particularly dependency-based contexts. The authors argue that while current word embedding models are based on linear contexts, using dependency-based contexts can produce embeddings that capture more functional similarities rather than topical similarities. They experiment with dependency-based contexts derived from dependency parse-trees and demonstrate that these embeddings yield different word similarities compared to those produced by the original skip-gram model. The paper also discusses the limitations of bag-of-words contexts and highlights the benefits of dependency-based contexts in capturing more focused and functional relationships between words. Additionally, the authors provide a qualitative and quantitative evaluation of the embeddings, showing that dependency-based contexts lead to more accurate and task-relevant representations. Finally, they introduce a method for model introspection, allowing users to query the model for the most discriminative contexts for a given word, which can help in understanding and improving the learned representations.The paper "Dependency-Based Word Embeddings" by Omer Levy and Yoav Goldberg explores the generalization of the skip-gram model with negative sampling to include arbitrary contexts, particularly dependency-based contexts. The authors argue that while current word embedding models are based on linear contexts, using dependency-based contexts can produce embeddings that capture more functional similarities rather than topical similarities. They experiment with dependency-based contexts derived from dependency parse-trees and demonstrate that these embeddings yield different word similarities compared to those produced by the original skip-gram model. The paper also discusses the limitations of bag-of-words contexts and highlights the benefits of dependency-based contexts in capturing more focused and functional relationships between words. Additionally, the authors provide a qualitative and quantitative evaluation of the embeddings, showing that dependency-based contexts lead to more accurate and task-relevant representations. Finally, they introduce a method for model introspection, allowing users to query the model for the most discriminative contexts for a given word, which can help in understanding and improving the learned representations.