2023 | Changze Lv, Jianhan Xu, and Xiaoqing Zheng*
This paper presents a two-step method for training spiking neural networks (SNNs) for text classification: a conversion step followed by fine-tuning with surrogate gradients. First, a conventional deep neural network (DNN) is made SNN-compatible by replacing operations that have no spiking equivalent and by constraining the word embeddings to positive values; the key innovation is encoding pre-trained word embeddings as spike trains, which lets the SNN inherit their representational power (a minimal sketch of such an encoding follows below). The converted network is then fine-tuned with surrogate gradients, which stand in for the undefined derivative of the spiking function, to recover accuracy lost in conversion. Across several English and Chinese text classification datasets, the resulting SNNs match their DNN counterparts while consuming significantly less energy, and they prove more robust to adversarial attacks than the DNNs. The study also highlights that both the conversion and the fine-tuning steps contribute to training effective SNNs for language tasks.
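The conversion step hinges on turning positive-valued embeddings into spike trains. Below is a minimal sketch of one common way to do this, Bernoulli rate coding over a fixed number of timesteps, assuming embeddings already normalized to [0, 1]; the function name `embed_to_spikes`, the timestep count, and the shapes are illustrative, not taken from the paper's released code.

```python
import torch

def embed_to_spikes(embeddings: torch.Tensor, num_steps: int = 50) -> torch.Tensor:
    """Rate-code positive-valued word embeddings as Bernoulli spike trains.

    embeddings: (batch, seq_len, dim) tensor with values in [0, 1]
                (e.g., pre-trained embeddings shifted/scaled to be non-negative).
    Returns:    (num_steps, batch, seq_len, dim) binary spike tensor; each
                value fires at each timestep with probability equal to the
                embedding magnitude, so firing rates encode the embedding.
    """
    rates = embeddings.clamp(0.0, 1.0)                    # treat values as firing probabilities
    rates = rates.unsqueeze(0).expand(num_steps, *rates.shape)
    return torch.bernoulli(rates)                         # one independent draw per timestep

# Illustrative usage: a batch of 8 sentences, 16 tokens, 300-d embeddings
emb = torch.rand(8, 16, 300)          # stand-in for normalized pre-trained embeddings
spikes = embed_to_spikes(emb, num_steps=50)
print(spikes.shape)                   # torch.Size([50, 8, 16, 300])
```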
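For the fine-tuning step, surrogate gradients replace the undefined derivative of the spiking threshold with a smooth approximation in the backward pass. The sketch below shows the standard PyTorch pattern using a custom `autograd.Function` with a rectangular surrogate; the leak, threshold, soft reset, and window width are assumed values for illustration, not the paper's exact configuration.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, surrogate gradient in the backward pass."""

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        return (membrane_potential > 0).float()      # fire when potential crosses threshold

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Rectangular surrogate: pass gradients only near the threshold, where the
        # true derivative of the step function would be a Dirac delta.
        surrogate = (v.abs() < 0.5).float()
        return grad_output * surrogate

spike = SpikeFn.apply

def lif_step(v, input_current, decay=0.9, threshold=1.0):
    """One leaky integrate-and-fire update; constants are illustrative assumptions."""
    v = decay * v + input_current
    s = spike(v - threshold)       # differentiable through the surrogate
    v = v - s * threshold          # soft reset after firing
    return v, s

# Illustrative usage: gradients flow to the input via the surrogate
v = torch.zeros(8, 128)
current = torch.randn(8, 128, requires_grad=True)
v, s = lif_step(v, current)
s.sum().backward()
print(current.grad.shape)          # torch.Size([8, 128])
```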