Network In Network

4 Mar 2014 | Min Lin, Qiang Chen, Shuicheng Yan
The paper introduces a novel deep network structure called "Network In Network" (NIN) to enhance the discriminability of local patches within the receptive field. Unlike conventional convolutional layers, which apply linear filters followed by nonlinear activation functions, NIN abstracts the data within each receptive field with a micro neural network, specifically a multilayer perceptron (MLP). These micro networks slide over the input in the same way convolutional filters do, and their outputs are fed into the next layer. The deep NIN structure is formed by stacking multiple such micro-network layers.

For classification, NIN replaces the traditional fully connected layers with a global average pooling layer, which is more interpretable and less prone to overfitting. This layer enforces a correspondence between feature maps and categories, making the model's decisions easier to interpret.

The key contributions of NIN include:

1. **Enhanced Local Modeling**: Micro networks improve the abstraction ability of local models by capturing more complex nonlinear relationships within each receptive field.
2. **Global Average Pooling**: This layer acts as a structural regularizer, preventing overfitting and enforcing a correspondence between feature maps and categories.
3. **State-of-the-Art Performance**: NIN achieves state-of-the-art classification performance on CIFAR-10 and CIFAR-100, and reasonable performance on SVHN and MNIST.

The paper also includes experimental results and visualizations supporting the effectiveness of NIN, showing that the feature maps produced by the last micro-network layer are indeed confidence maps of the categories.
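To make these two ideas concrete, the sketch below shows how they are usually realized in code. An MLP slid over every receptive field is equivalent to a standard convolution followed by 1×1 convolutions, so an mlpconv layer can be written as a small stack of convolutions, and global average pooling reduces each per-class feature map to a single score. This is a minimal PyTorch sketch, not the paper's original implementation; the `mlpconv` and `TinyNIN` names, filter counts, and kernel sizes are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

def mlpconv(in_ch, out_ch, kernel_size, stride=1, padding=0):
    """An mlpconv layer: a linear filter followed by 1x1 convolutions,
    which is equivalent to sliding a small MLP over each receptive field."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding),  # linear filter
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 1),  # 1x1 conv = one per-pixel MLP layer
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 1),
        nn.ReLU(inplace=True),
    )

class TinyNIN(nn.Module):
    """Illustrative NIN-style classifier (layer sizes are assumptions)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            mlpconv(3, 96, kernel_size=5, padding=2),
            nn.MaxPool2d(3, stride=2, padding=1),
            mlpconv(96, 192, kernel_size=5, padding=2),
            nn.MaxPool2d(3, stride=2, padding=1),
            # The last mlpconv emits one feature map per category, so global
            # average pooling can replace the fully connected classifier.
            mlpconv(192, num_classes, kernel_size=3, padding=1),
        )
        self.gap = nn.AdaptiveAvgPool2d(1)  # global average pooling

    def forward(self, x):
        x = self.features(x)        # (N, num_classes, H, W) confidence maps
        x = self.gap(x).flatten(1)  # (N, num_classes): one score per class
        return x                    # feed to softmax / cross-entropy loss

logits = TinyNIN()(torch.randn(2, 3, 32, 32))  # e.g. CIFAR-10-sized input
print(logits.shape)  # torch.Size([2, 10])
```

Because the final mlpconv layer emits exactly one feature map per category, averaging each map yields that category's confidence directly; this is what lets global average pooling stand in for the fully connected classifier and makes the resulting maps directly interpretable.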