Spatio-Temporal Backpropagation for Training High-performance Spiking Neural Networks

12 Sep 2017 | Yujie Wu, Lei Deng, Guoqi Li, Jun Zhu and Luping Shi
This paper proposes a spatio-temporal backpropagation (STBP) training framework for spiking neural networks (SNNs) that enables efficient supervised learning by considering the spatial and temporal domains simultaneously. SNNs are promising for brain-like computing because they encode spatio-temporal information in spike patterns. However, existing training methods exploit the spatial domain only partially, and the non-differentiable nature of spike activity complicates gradient-based training. To address these issues, the authors introduce an iterative leaky integrate-and-fire (LIF) model that is better suited to gradient descent, propose a framework that combines the spatial and temporal domains during training, and use an approximated derivative of the spike activity to handle its non-differentiability. The framework is evaluated on static and dynamic datasets, including MNIST, a custom object detection dataset, and N-MNIST, where it achieves the best accuracy among existing state-of-the-art algorithms. Because STBP accounts for both spatial and temporal dynamics, it captures the rich spatio-temporal features of SNNs while avoiding complex training techniques, making it efficient and well suited to hardware implementation.
The work provides a new perspective for exploring high-performance SNNs in future brain-like computing paradigms with rich spatio-temporal dynamics.
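To make the two core ingredients of the summary concrete, the sketch below shows an iterative LIF neuron unrolled over time and a rectangular approximation of the spike derivative. This is a minimal illustration, not the paper's exact formulation: the decay factor `tau`, threshold `v_th`, window width `a`, and the hard-reset rule are illustrative assumptions.

```python
import numpy as np

def lif_forward(inputs, tau=0.5, v_th=1.0):
    """Iterative LIF dynamics over T time steps.

    inputs: array of shape (T, n) holding the synaptic input current
    to n neurons at each step. Returns a (T, n) binary spike train.
    """
    T, n = inputs.shape
    v = np.zeros(n)                      # membrane potential
    spikes = np.zeros((T, n))
    for t in range(T):
        v = tau * v + inputs[t]          # leaky integration
        spikes[t] = (v >= v_th).astype(float)
        v = v * (1.0 - spikes[t])        # hard reset where a spike fired
    return spikes

def approx_spike_grad(v, v_th=1.0, a=1.0):
    """Rectangular surrogate for d(spike)/d(v): the true derivative is a
    Dirac impulse at v_th, so backprop instead uses 1/a inside a window
    of width `a` around the threshold and 0 elsewhere."""
    return (np.abs(v - v_th) < a / 2).astype(float) / a
```

With a constant sub-threshold input of 0.6 and `tau=0.5`, the potential accumulates over three steps (0.6, 0.9, 1.05) before the first spike and reset, illustrating how the temporal domain carries information that STBP's gradients must flow through.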