Efficient Spiking Neural Networks with Sparse Selective Activation for Continual Learning


2024 | Jiangrong Shen, Wenyao Ni, Qi Xu, Huajin Tang
The paper presents a novel spiking neural network (SNN) model, the Selective Activation SNN (SA-SNN), designed to achieve continual learning without forgetting previously acquired knowledge while conserving limited computing resources. Inspired by the selective sparse activation principle of context gating in biological systems, the SA-SNN incorporates trace-based K-Winner-Take-All (K-WTA) and variable-threshold components that induce sparse, selective activation across the spatial and temporal dimensions. This encourages distinct subpopulations of neurons to be activated for different tasks, enabling effective continual learning. The model is evaluated on the MNIST and CIFAR10 datasets under a class-incremental setting, achieving performance that matches, and in some cases surpasses, traditional regularization-based methods used in artificial neural networks (ANNs). The key contributions are the introduction of the SA-SNN model, the development of the trace-based K-WTA and variable-threshold mechanisms, and the experimental validation of their effectiveness on continual learning tasks. The results highlight the potential of SNNs for implementing the next generation of machine intelligence with low power consumption and enhanced learning capabilities.
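To make the two mechanisms concrete, the following is a minimal Python/NumPy sketch of how a trace-based K-WTA gate and a variable (adaptive) firing threshold could be combined at a single time step. The function names, decay constants, and update rules here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def trace_based_kwta(spikes, trace, k, trace_decay=0.9):
    """Sketch of a trace-based K-Winner-Take-All gate (assumed update rule).

    spikes : binary spike vector at the current time step, shape (n,)
    trace  : running activity trace per neuron, shape (n,)
    k      : number of neurons allowed to remain active
    """
    # Update the per-neuron activity trace (decayed accumulation of spikes).
    trace = trace_decay * trace + spikes
    # Keep only the k neurons with the largest traces; silence the rest.
    winners = np.argsort(trace)[-k:]
    gated = np.zeros_like(spikes)
    gated[winners] = spikes[winners]
    return gated, trace

def variable_threshold_step(membrane, threshold, base_thr=1.0,
                            theta_plus=0.05, theta_decay=0.99):
    """Sketch of a variable-threshold neuron update (assumed rule):
    thresholds rise for neurons that just fired and slowly relax back,
    discouraging the same subpopulation from dominating every task."""
    spikes = (membrane >= base_thr + threshold).astype(float)
    threshold = theta_decay * threshold + theta_plus * spikes
    return spikes, threshold

# Hypothetical usage for one layer of n neurons over one time step.
n, k = 100, 10
membrane = np.random.rand(n)          # membrane potentials from upstream input
threshold = np.zeros(n)               # per-neuron adaptive threshold offsets
trace = np.zeros(n)                   # per-neuron activity traces

spikes, threshold = variable_threshold_step(membrane, threshold)
gated_spikes, trace = trace_based_kwta(spikes, trace, k)
```

In this sketch, the variable threshold shapes sparsity in the temporal dimension (recently active neurons become harder to fire), while the trace-based K-WTA enforces sparsity in the spatial dimension (only a small subpopulation propagates spikes), which together reflect the selective-activation idea described above.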