Training Deep Spiking Neural Networks Using Backpropagation


08 November 2016 | Jun Haeng Lee, Tobi Delbruck, Michael Pfeiffer
This paper introduces a novel technique for training deep spiking neural networks (SNNs) with backpropagation, addressing the central obstacle that spike events are non-differentiable. The proposed method treats membrane potentials as differentiable signals and the discontinuities at spike times as noise, enabling error backpropagation in the style of conventional deep networks but applied directly to spike signals and membrane potentials. Because training operates on the spikes themselves, this approach captures spike statistics more precisely than indirect training methods. The framework supports fully connected and convolutional SNNs, leaky integrate-and-fire (LIF) neurons, and layers implementing spiking winner-take-all (WTA) circuits.

The method is evaluated on the MNIST handwritten digit benchmark and on N-MNIST, a version of MNIST recorded with an event-based dynamic vision sensor. On N-MNIST, it reduces the error rate by more than a factor of three compared to the best previous SNN and achieves higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. On MNIST, deep SNNs trained with this method match the accuracy of conventional neural networks, while on N-MNIST the same accuracy is reached with about five times fewer computational operations.
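The core idea lends itself to a short illustration. Below is a minimal sketch in PyTorch, not the authors' exact formulation: the hard spike threshold is non-differentiable, so the backward pass routes the error straight through to the continuous membrane potential, treating the discontinuity at spike times as noise. All names here (SpikeFunction, lif_step) are illustrative, and the straight-through gradient is one simple stand-in for this idea.

# Minimal sketch of backpropagation through an LIF spiking layer.
# Assumption: a straight-through gradient approximates the effect of
# treating spike-time discontinuities as noise on the membrane potential.
import torch

class SpikeFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v, threshold=1.0):
        # Forward: emit a binary spike when the membrane potential crosses threshold.
        return (v >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        # Backward: pass the error straight through to the membrane potential,
        # i.e. differentiate the continuous signal and treat the spike
        # discontinuity as noise.
        return grad_output, None

def lif_step(v, x, w, decay=0.9, threshold=1.0):
    """One timestep of a leaky integrate-and-fire layer.

    v: membrane potentials, x: input spikes, w: synaptic weights.
    """
    v = decay * v + x @ w          # leaky integration of weighted input spikes
    s = SpikeFunction.apply(v, threshold)
    v = v - threshold * s          # reset by subtraction after a spike
    return v, s

# Usage: unroll over time and backpropagate through the spike train.
w = torch.randn(10, 5, requires_grad=True)
v = torch.zeros(1, 5)
spikes = []
for _ in range(20):
    x = (torch.rand(1, 10) < 0.3).float()   # Poisson-like input spikes
    v, s = lif_step(v, x, w)
    spikes.append(s)
loss = torch.stack(spikes).sum()            # stand-in for a real task loss
loss.backward()                             # gradients flow via the surrogate

The straight-through backward pass is what makes the unrolled network trainable end to end: the forward pass remains a genuine binary spike train, while gradients flow through the continuous membrane potential as if the threshold nonlinearity were transparent.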