Going Deeper in Spiking Neural Networks: VGG and Residual Architectures

19 Feb 2019 | Abhronil Sengupta, Yuting Ye, Robert Wang, Chiao Liu and Kaushik Roy
This paper presents a novel algorithmic technique for generating Spiking Neural Networks (SNNs) with deep architectures and demonstrates its effectiveness on complex visual recognition tasks such as CIFAR-10 and ImageNet. The technique applies to both VGG and Residual network architectures and achieves significantly better accuracy than prior state-of-the-art methods. The authors propose a weight-normalization technique, SPIKE-NORM, that accounts for the actual spiking dynamics of the SNN during the ANN-SNN conversion process, which proves crucial for minimizing the loss in classification accuracy. They also explore Residual Network (ResNet) architectures as a potential pathway to deeper SNNs, identifying insights and design constraints needed for near-lossless conversion. The results show that deep SNNs can deliver competitive accuracy on complex datasets, with the proposed SPIKE-NORM algorithm achieving the best reported performance on CIFAR-10 and the first reported SNN performance on the entire ImageNet 2012 validation set. Comparisons with previous works show that the proposed approach is more effective at minimizing conversion loss. The authors further demonstrate that spiking sparsity increases with network depth, which motivates converting ANNs to SNNs for event-driven operation: sparse neural events substantially reduce compute overhead and can yield significant energy savings.
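The layer-by-layer threshold balancing described above can be sketched as follows. This is an illustrative reconstruction, not the authors' exact SPIKE-NORM implementation: each layer is driven by actual spike trains (rather than ANN activations), its firing threshold is set to the maximum weighted input it receives, and the resulting output spike rates feed the next layer. The function name, the two-pass structure, and the Bernoulli input encoding are assumptions made for clarity.

```python
import numpy as np

def spike_norm(weights_per_layer, input_rates, timesteps=100, seed=0):
    """Threshold balancing in the spirit of SPIKE-NORM (illustrative sketch):
    normalize each layer using the spiking input it actually receives,
    rather than the ANN's analog activations."""
    rng = np.random.default_rng(seed)
    thresholds = []
    rates = np.asarray(input_rates, dtype=float)
    for W in weights_per_layer:
        n_in, n_out = W.shape
        # Pass 1: record the weighted input drive at every timestep,
        # sampling Bernoulli spike trains from the current firing rates.
        drives = [(rng.random(n_in) < rates).astype(float) @ W
                  for _ in range(timesteps)]
        thr = max(d.max() for d in drives)  # max drive ever received
        thresholds.append(thr)
        # Pass 2: replay the same drives through integrate-and-fire
        # neurons at the chosen threshold (reset by subtraction).
        v = np.zeros(n_out)
        spike_count = np.zeros(n_out)
        for d in drives:
            v += d
            fired = (v >= thr).astype(float)
            v -= fired * thr
            spike_count += fired
        rates = spike_count / timesteps  # output rates feed the next layer
    return thresholds
```

The key point the paper makes is captured in pass 1: because the normalization statistics come from spike-driven inputs, the thresholds reflect the SNN's actual operating regime, which is what keeps conversion loss small for deep networks.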
The results demonstrate that the proposed method is crucial for achieving near-lossless ANN-SNN conversion for deep architectures and complex recognition problems. The paper concludes that SNNs can exhibit computing power similar to their ANN counterparts and hold promise for large-scale visual recognition tasks enabled by low-power neuromorphic hardware. The authors also highlight directions for further research, including exploring other neural functionalities and reducing the residual accuracy loss in ANN-SNN conversion for ResNet architectures.
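The compute-reduction argument from sparsity can be made concrete with a simple operation-count model. The layer sizes and spike rates below are hypothetical, not the paper's measured numbers: an ANN performs one multiply-accumulate (MAC) per synapse per inference, while an event-driven SNN performs one accumulate (AC) per synapse only when the presynaptic neuron spikes.

```python
def op_counts(layers):
    """layers: list of (n_neurons, fan_out, avg_spikes_per_neuron),
    where avg_spikes_per_neuron is the mean spike count per neuron over
    the whole inference window.  Returns (ann_macs, snn_acs): the ANN
    does one MAC per synapse per inference; the SNN does one AC per
    synapse per emitted spike."""
    ann_macs = sum(n * f for n, f, _ in layers)
    snn_acs = sum(n * f * s for n, f, s in layers)
    return ann_macs, snn_acs

# Hypothetical 3-layer network with sparsity increasing with depth
# (spike rates fall from 0.4 to 0.1 spikes per neuron per inference).
layers = [(4096, 1024, 0.4), (1024, 512, 0.2), (512, 10, 0.1)]
ann, snn = op_counts(layers)
print(f"ANN MACs: {ann}, SNN ACs: {snn}, reduction: {ann / snn:.2f}x")
```

Since an accumulate is cheaper in hardware than a multiply-accumulate, the actual energy savings exceed the raw operation-count ratio, which is why sparse, event-driven operation is attractive for neuromorphic platforms.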