Deep Learning in Spiking Neural Networks


1 Sep 2018 | Amirhossein Tavanaei*, Masoud Ghodrati†, Saeed Reza Kheradpisheh‡, Timothée Masquelier§ and Anthony Maida*
This paper reviews recent advances in deep learning within spiking neural networks (SNNs), highlighting their biological plausibility and energy efficiency compared with traditional artificial neural networks (ANNs). SNNs transmit information through discrete spikes, which is more biologically realistic and more energy-efficient than the continuous-valued activations used in ANNs. However, training deep SNNs remains challenging because spike trains are non-differentiable, which prevents the direct use of backpropagation.
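To make the non-differentiability problem concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the spiking model most commonly used in the work the review surveys. The parameter values (membrane time constant, threshold, input current) are illustrative choices, not taken from the paper; the point is that the threshold crossing is a hard step function, so its derivative is zero almost everywhere and undefined at the spike itself.

```python
import numpy as np

def lif_spikes(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron; returns a binary spike train.

    tau, v_thresh, and the input scale are arbitrary illustrative values.
    """
    v = v_reset
    spikes = np.zeros_like(input_current, dtype=float)
    for t, i_t in enumerate(input_current):
        # Leaky integration: the membrane potential decays toward rest
        # and accumulates the input current.
        v += (dt / tau) * (v_reset - v) + i_t * dt
        if v >= v_thresh:      # hard threshold: this step is non-differentiable,
            spikes[t] = 1.0    # which is why backpropagation cannot be applied
            v = v_reset        # directly; the neuron resets after each spike
    return spikes

train = lif_spikes(np.full(100, 0.08))
print(int(train.sum()), "spikes in 100 time steps")
```

With a constant suprathreshold input like this, the neuron fires periodically; gradient-based training methods must work around the binary output, e.g. via the conversion and approximation schemes the review discusses.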
The authors discuss supervised and unsupervised methods for training deep SNNs, including spike-timing-dependent plasticity (STDP) and probabilistic learning rules, and compare these methods in terms of accuracy, computational cost, and hardware friendliness. Although SNNs still trail ANNs in accuracy, the gap is narrowing, and SNNs require fewer operations, making them promising candidates for hardware implementation. The review also covers specific classes of deep SNNs, including fully connected SNNs, spiking convolutional neural networks, spiking deep belief networks (DBNs), and recurrent SNNs, and discusses their applications and performance. Overall, the review aims to advance the development of efficient, high-performance deep SNNs and to foster cross-fertilization between neuroscience and artificial intelligence research.
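The STDP rule mentioned above can be sketched as the classic pair-based form: a synapse is strengthened when the presynaptic spike precedes the postsynaptic spike (causal pairing) and weakened otherwise, with an exponential dependence on the timing difference. The amplitudes and time constant below are illustrative, not the specific values used in any model the review covers.

```python
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for one pre/post spike pair (times in ms).

    Pre-before-post potentiates; post-before-pre depresses.
    a_plus, a_minus, and tau are illustrative constants.
    """
    dt = t_post - t_pre
    if dt > 0:     # causal pairing -> long-term potentiation (LTP)
        return a_plus * np.exp(-dt / tau)
    elif dt < 0:   # anti-causal pairing -> long-term depression (LTD)
        return -a_minus * np.exp(dt / tau)
    return 0.0     # simultaneous spikes: no change in this simple variant

print(stdp_dw(10.0, 15.0))  # positive: pre fired 5 ms before post
print(stdp_dw(15.0, 10.0))  # negative: post fired 5 ms before pre
```

Because the update depends only on locally available spike times, STDP is unsupervised and maps naturally onto neuromorphic hardware, which is part of why the review highlights it for hardware-friendly deep SNNs.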