Deep contextualized word representations

22 Mar 2018 | Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer
The paper introduces a new type of deep contextualized word representation called ELMo (Embeddings from Language Models), which models both complex characteristics of word use (e.g., syntax and semantics) and how these uses vary across linguistic contexts (i.e., polysemy). ELMo representations are learned functions of the internal states of a deep bidirectional language model (biLM) pre-trained on a large text corpus. The authors demonstrate that these representations can be easily added to existing models and significantly improve performance on six challenging NLP tasks, including question answering, textual entailment, and sentiment analysis. They also show that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals. The paper includes extensive experiments and ablation studies that validate the effectiveness of ELMo and explore the types of contextual information captured by the biLM.
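
The layer mixing mentioned above can be made concrete with a small sketch: for each token, the downstream task learns softmax-normalized scalar weights over the frozen biLM layers plus a global scale, combining them as ELMo_k = γ · Σ_j s_j · h_{k,j}. The module name, tensor shapes, and the 3-layer, 1024-dimensional biLM below are illustrative assumptions, not code from the paper.

```python
# Minimal sketch of ELMo-style scalar layer mixing (assumed names/shapes).
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    def __init__(self, num_layers: int):
        super().__init__()
        # One unnormalized weight per biLM layer, plus a global scale gamma.
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))
        self.gamma = nn.Parameter(torch.ones(1))

    def forward(self, layer_states: torch.Tensor) -> torch.Tensor:
        # layer_states: (num_layers, batch, seq_len, dim) frozen biLM activations.
        s = torch.softmax(self.layer_weights, dim=0)            # s_j, sums to 1
        mixed = (s.view(-1, 1, 1, 1) * layer_states).sum(dim=0)  # Σ_j s_j h_{k,j}
        return self.gamma * mixed                                # γ scales the mix

# Example: a token layer plus two biLSTM layers, each 1024-dimensional.
mix = ScalarMix(num_layers=3)
states = torch.randn(3, 8, 20, 1024)   # placeholder biLM outputs
elmo_vectors = mix(states)             # (8, 20, 1024), fed into the task model
```

In this setup the biLM stays fixed; only the scalar weights and γ are trained with the task model, which is what lets different tasks emphasize different layers of the pre-trained network.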