BERT Rediscovers the Classical NLP Pipeline

9 Aug 2019 | Ian Tenney, Dipanjan Das, Ellie Pavlick
The paper "BERT Rediscovered the Classical NLP Pipeline" by Ian Tenney, Dipanjan Das, and Ellie Pavlick explores the linguistic information captured by the BERT model within its layers. The authors use a suite of probing tasks derived from the traditional NLP pipeline to quantify where specific types of linguistic information are encoded. They find that the model represents the steps of the traditional NLP pipeline in an interpretable and localizable way, with regions responsible for each step appearing in the expected sequence: POS tagging, parsing, NER, semantic roles, and coreference. Qualitative analysis reveals that the model can dynamically adjust this pipeline, revising lower-level decisions based on disambiguating information from higher-level representations. The study uses two complementary metrics—scalar mixing weights and cumulative scoring—to analyze the model's performance across different tasks and layers. The results show that while the pipeline order holds in aggregate, the model can resolve out-of-order decisions, using high-level information to disambiguate low-level decisions. This provides evidence that deep language models can represent the types of syntactic and semantic abstractions traditionally believed necessary for language processing.The paper "BERT Rediscovered the Classical NLP Pipeline" by Ian Tenney, Dipanjan Das, and Ellie Pavlick explores the linguistic information captured by the BERT model within its layers. The authors use a suite of probing tasks derived from the traditional NLP pipeline to quantify where specific types of linguistic information are encoded. They find that the model represents the steps of the traditional NLP pipeline in an interpretable and localizable way, with regions responsible for each step appearing in the expected sequence: POS tagging, parsing, NER, semantic roles, and coreference. Qualitative analysis reveals that the model can dynamically adjust this pipeline, revising lower-level decisions based on disambiguating information from higher-level representations. The study uses two complementary metrics—scalar mixing weights and cumulative scoring—to analyze the model's performance across different tasks and layers. The results show that while the pipeline order holds in aggregate, the model can resolve out-of-order decisions, using high-level information to disambiguate low-level decisions. This provides evidence that deep language models can represent the types of syntactic and semantic abstractions traditionally believed necessary for language processing.