Pre-trained Models for Natural Language Processing: A Survey


March 2020 | Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai & Xuanjing Huang
This survey provides a comprehensive review of Pre-trained Models (PTMs) for Natural Language Processing (NLP). It opens by introducing language representation learning and its research progress, then categorizes existing PTMs along four perspectives: representation type, model architecture, pre-training task, and extensions for specific scenarios. The survey goes on to describe how PTMs are adapted to downstream tasks and outlines open challenges and potential future research directions. Its main contributions are a detailed review, a new taxonomy, a collection of resources, and a discussion of current challenges. The paper is organized into sections covering background concepts, an overview of PTMs, pre-training tasks, extensions, model analysis, and applications; it also discusses the advantages of pre-training, the history of PTMs, and the analysis of non-contextual versus contextual embeddings. Overall, the survey serves as a practical guide for understanding, using, and developing PTMs for a wide range of NLP tasks.
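To make the non-contextual versus contextual distinction concrete, here is a minimal sketch. It assumes the Hugging Face transformers library and the public bert-base-uncased checkpoint, which are illustrative choices and not prescribed by the survey. A contextual embedding of "bank" differs between sentences, while the non-contextual lookup-table vector is identical in any context.

```python
# Minimal sketch: non-contextual vs. contextual embeddings of the word "bank".
# Assumes the Hugging Face `transformers` library and `bert-base-uncased`
# (illustrative choices, not mandated by the survey).
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

sentences = [
    "The bank raised interest rates.",  # financial sense of "bank"
    "We sat on the river bank.",        # geographical sense of "bank"
]

contextual_vectors = []
for text in sentences:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    bank_idx = tokens.index("bank")
    # Contextual embedding: the hidden state for "bank" depends on the sentence.
    contextual_vectors.append(outputs.last_hidden_state[0, bank_idx])

# Non-contextual embedding: a single lookup-table vector, the same in any context.
bank_id = torch.tensor(tokenizer.convert_tokens_to_ids("bank"))
static_vector = model.get_input_embeddings()(bank_id)

sim = F.cosine_similarity(contextual_vectors[0], contextual_vectors[1], dim=0)
print(f"cosine similarity of the two contextual 'bank' vectors: {sim.item():.3f}")
print(f"non-contextual vector shape (identical for both sentences): {tuple(static_vector.shape)}")
```

Adapting a PTM to a downstream task, as the survey describes, typically amounts to attaching a task-specific head on top of such contextual representations and fine-tuning; with the same library, for instance, one would load AutoModelForSequenceClassification instead of AutoModel.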