Deep Learning in Spiking Neural Networks

1 Sep 2018 | Amirhossein Tavanaei, Masoud Ghodrati, Saeed Reza Kheradpisheh, Timothée Masquelier and Anthony Maida
Deep learning has transformed machine learning, particularly computer vision, through deep artificial neural networks (ANNs) trained with backpropagation. Biological neurons, however, transmit information with discrete spikes, making spiking neural networks (SNNs) more biologically realistic and more energy-efficient. Training deep SNNs remains challenging because the spike transfer function is non-differentiable, but recent supervised and unsupervised methods have improved their accuracy and computational efficiency. SNNs require fewer operations than ANNs and are well suited to hardware implementation, especially in portable devices. Although they still lag ANNs in accuracy, SNNs are increasingly competitive, particularly on tasks where spike timing and rates carry information.

Recent studies have explored a range of SNN architectures, including feedforward, convolutional, and recurrent networks, as well as spiking deep belief networks. These models use spike-timing-dependent plasticity (STDP) and other learning rules to extract features and perform classification, and their power efficiency and biological plausibility make them suitable for neuromorphic hardware. Supervised methods such as SpikeProp and ReSuMe train SNNs from labeled data, while unsupervised methods such as STDP-based learning and probabilistic approaches have succeeded in pattern recognition. Deep SNNs, including spiking CNNs, have reached high accuracy on image and speech recognition tasks, with performance approaching that of traditional deep learning models. Integrating SNNs into hardware platforms is an active area of research aimed at building efficient, brain-like systems for complex problem-solving.
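To make the discrete-spike dynamics and the STDP rule mentioned above concrete, here is a minimal sketch of a textbook leaky integrate-and-fire (LIF) neuron and a pair-based STDP weight update. All function names and parameter values are illustrative choices, not taken from the survey.

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks toward
    rest, integrates input, and emits a binary spike on threshold crossing."""
    v = 0.0
    spikes = np.zeros(len(input_current), dtype=int)
    for t, i_t in enumerate(input_current):
        v += dt * (-v / tau + i_t)   # leaky integration of the input
        if v >= v_thresh:            # threshold crossing -> discrete spike
            spikes[t] = 1
            v = v_reset              # reset membrane potential after spiking
    return spikes

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau_stdp=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes the
    postsynaptic spike (causal pairing), depress otherwise."""
    dt_spike = t_post - t_pre
    if dt_spike > 0:                 # pre before post -> potentiation (LTP)
        w += a_plus * np.exp(-dt_spike / tau_stdp)
    else:                            # post before pre -> depression (LTD)
        w -= a_minus * np.exp(dt_spike / tau_stdp)
    return float(np.clip(w, 0.0, 1.0))

spikes = simulate_lif(np.full(100, 0.08))
print("spike count:", spikes.sum())
print("w after causal pairing:", stdp_update(0.5, t_pre=10, t_post=15))
```

With a constant drive of 0.08 the neuron fires at a regular rate, and a causal pre-then-post spike pair strengthens the synapse, which is the mechanism STDP-based SNNs use to extract repeating input features without labels.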
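The non-differentiable spike transfer function is the core obstacle to backpropagation in SNNs. A widely used workaround in the field (one of several approaches the survey covers, not necessarily its own proposal) is the surrogate gradient: keep the hard threshold in the forward pass but substitute a smooth derivative in the backward pass. The sketch below uses a sigmoid-derivative surrogate; the sharpness parameter `beta` is an illustrative choice.

```python
import numpy as np

def spike_forward(v, v_thresh=1.0):
    """Forward pass: a hard Heaviside step. Its derivative is zero almost
    everywhere and undefined at threshold, so backprop cannot use it."""
    return (v >= v_thresh).astype(float)

def spike_surrogate_grad(v, v_thresh=1.0, beta=5.0):
    """Backward pass: replace the step's derivative with the derivative of a
    sigmoid centered on the threshold, giving a smooth, nonzero gradient."""
    s = 1.0 / (1.0 + np.exp(-beta * (v - v_thresh)))
    return beta * s * (1.0 - s)

v = np.array([0.2, 0.95, 1.0, 1.4])
print(spike_forward(v))         # binary spikes: [0. 0. 1. 1.]
print(spike_surrogate_grad(v))  # smooth gradient, peaked at the threshold
```

Training then proceeds with ordinary gradient descent: the surrogate derivative is largest for membrane potentials near threshold, so learning signals flow mainly through neurons that were close to spiking.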