SNIP: SINGLE-SHOT NETWORK PRUNING BASED ON CONNECTION SENSITIVITY

23 Feb 2019 | Namhoon Lee, Thalaiyasingam Ajanthan & Philip H. S. Torr
This paper introduces SNIP (Single-shot Network Pruning), a method that identifies and removes redundant connections in a single step at initialization, before training begins. Unlike existing approaches that rely on pretraining, iterative prune-retrain cycles, and carefully tuned pruning schedules, SNIP uses a saliency criterion called connection sensitivity: the effect of each connection on the training loss, measured on a mini-batch of data with the freshly initialized weights. Connections with low sensitivity are removed once, and the remaining sparse network is trained in the standard way. Because the criterion depends only on the loss and the data, the method is simple, versatile, and interpretable, and it applies directly to modern convolutional, residual, and recurrent architectures without modification.
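To make the criterion concrete, here is a minimal sketch of how connection sensitivity could be computed in PyTorch. It follows the description above, where the sensitivity of a weight reduces to the magnitude of the weight times its loss gradient, but the function name `snip_masks`, the `keep_ratio` parameter, and the weight-only filtering are illustrative assumptions, not the authors' reference implementation.

```python
import torch


def snip_masks(model, loss_fn, inputs, targets, keep_ratio=0.05):
    """Return {parameter name: binary mask} keeping the top `keep_ratio` connections."""
    # One forward/backward pass on a single mini-batch with freshly initialized weights.
    model.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    # Connection sensitivity: |w_j * dL/dw_j|, i.e. the gradient of the loss with
    # respect to an auxiliary connectivity indicator c_j evaluated at c_j = 1.
    saliencies = {
        name: (p * p.grad).abs()
        for name, p in model.named_parameters()
        if p.grad is not None and p.dim() > 1  # weight tensors only, skip biases
    }

    # Global ranking across all layers: keep the most sensitive connections.
    all_scores = torch.cat([s.flatten() for s in saliencies.values()])
    k = max(1, int(keep_ratio * all_scores.numel()))
    threshold = torch.topk(all_scores, k, sorted=True).values[-1]

    return {name: (s >= threshold).float() for name, s in saliencies.items()}
```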
On MNIST, CIFAR-10, and Tiny-ImageNet, SNIP obtains extremely sparse networks with virtually the same accuracy as the dense reference models, pruning up to 98% of the weights of LeNet-300-100 and 99% of LeNet-5-Caffe, while being significantly simpler than other state-of-the-art approaches. The criterion is robust to variations in architecture, weight initialization, and the data used to compute it, and it remains effective across deep convolutional, residual, and recurrent networks. Visualizations of the retained connections, together with control experiments in which the training images are inverted or the labels are randomized, indicate that SNIP prunes connections that are genuinely irrelevant to the task rather than pruning blindly. Overall, the results suggest that SNIP is a promising approach to network pruning: simple, effective, and applicable to a wide range of architectures and tasks.
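As a hypothetical follow-up to the sketch above, the single-shot workflow would look roughly like this: compute the masks once on one mini-batch, then train only the surviving connections. The `apply_masks` helper below is an assumption about how the masks might be enforced during training; the paper itself simply trains the pruned network with standard optimizers.

```python
import torch


def apply_masks(model, masks):
    """Zero out pruned weights in place; call after each optimizer step so
    pruned connections stay at zero throughout training."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])


# Hypothetical usage with the snip_masks sketch above:
#   masks = snip_masks(model, loss_fn, inputs, targets, keep_ratio=0.02)  # ~98% sparsity
#   apply_masks(model, masks)
#   then train as usual, calling apply_masks(model, masks) after each optimizer step.
```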