This primer provides an overview of neural network models in the context of natural language processing (NLP). It covers several neural network architectures, including feed-forward networks, convolutional networks, recurrent networks, and recursive networks. It also discusses input encoding for NLP tasks, the computation-graph abstraction for automatic gradient computation, and common loss functions used in training neural networks. The focus is on practical application and on the principles behind these models rather than on a comprehensive theoretical treatment. The primer aims to bridge the gap between NLP researchers and practitioners by providing a unified notation and framework for understanding and applying neural network models to NLP.