This paper proposes SA-SNN, a novel spiking neural network (SNN) model with selective activation for continual learning. By exploiting the spatiotemporal dynamics of SNNs, the model reduces memory interference and mitigates catastrophic forgetting. SA-SNN achieves selective activation in spiking neurons through two mechanisms: a trace-based K-Winner-Take-All (K-WTA) mechanism, which limits interference between tasks by selecting the top-K neurons according to their activity traces, and a variable threshold mechanism, which adjusts each neuron's firing threshold so that otherwise silent neurons can participate in learning. Evaluated on the MNIST and CIFAR10 datasets under the class-incremental setting, SA-SNN matches, and in some cases surpasses, regularization-based methods deployed on traditional artificial neural networks (ANNs), without requiring additional task information or memory replay. Ablation studies further validate the model, demonstrating that both the trace-based K-WTA and variable threshold components are essential for effective continual learning. SA-SNN is also compatible with regularization-based methods such as EWC and MAS, which add penalty terms to the loss function to prevent catastrophic forgetting. The model's effectiveness is attributed to its ability to maintain sparsity of neural activity and synaptic plasticity, which reduces memory interference and mitigates catastrophic forgetting during continual learning.
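The two mechanisms described above can be illustrated with a minimal sketch. The paper's exact formulation is not given here, so the update rules below (exponential trace decay, a fixed threshold increment/decrement, and the function and parameter names) are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def kwta_step(potentials, traces, thresholds, k, decay=0.9,
              theta_step=0.05, theta_floor=0.5):
    """One hypothetical update step combining trace-based K-WTA
    with a variable firing threshold (illustrative sketch only)."""
    n = potentials.shape[0]
    # Trace-based K-WTA: only the top-k neurons by activity trace
    # are allowed to compete for spiking on this step.
    winners = np.argsort(traces)[-k:]
    mask = np.zeros(n, dtype=bool)
    mask[winners] = True
    # A neuron spikes if it is a winner and its membrane potential
    # exceeds its own (variable) threshold.
    spikes = (mask & (potentials >= thresholds)).astype(float)
    # Activity traces decay exponentially and increase on spikes.
    traces = decay * traces + spikes
    # Variable threshold: firing raises a neuron's threshold, while
    # silence gradually lowers it, letting silent neurons re-enter
    # the competition and participate in learning.
    thresholds = np.where(spikes > 0,
                          thresholds + theta_step,
                          np.maximum(theta_floor, thresholds - theta_step))
    return spikes, traces, thresholds

# Small usage example: run a few steps over 10 neurons with k=3.
rng = np.random.default_rng(0)
traces = np.zeros(10)
thresholds = np.full(10, 1.0)
for _ in range(5):
    potentials = rng.uniform(0.0, 2.0, size=10)
    spikes, traces, thresholds = kwta_step(potentials, traces, thresholds, k=3)
```

Because at most k neurons may spike per step, population activity stays sparse, which is the property the abstract credits with reducing memory interference across tasks.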
The SA-SNN model is a promising approach for implementing continual learning in SNNs, which has the potential to be more efficient and biologically plausible compared to traditional ANNs.
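The regularization-based methods with which SA-SNN is reported to be compatible (EWC, MAS) add a quadratic penalty on parameter drift to the loss. A minimal sketch of such a penalty, in the standard EWC form L = L_task + (λ/2) Σᵢ Fᵢ (θᵢ − θᵢ*)², is given below; the function name and argument layout are assumptions for illustration, not an API from the paper:

```python
import numpy as np

def ewc_penalty(params, anchor_params, fisher_diag, lam=1.0):
    """EWC-style quadratic penalty (illustrative sketch):
    (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2,
    where theta* are the parameters saved after the previous task and
    F is a diagonal Fisher information estimate weighting how important
    each parameter was for earlier tasks."""
    penalty = 0.0
    for p, a, f in zip(params, anchor_params, fisher_diag):
        penalty += 0.5 * lam * np.sum(f * (p - a) ** 2)
    return penalty

# Usage example: no penalty when parameters have not moved from the
# anchor; a positive penalty once important parameters drift.
anchor = [np.array([1.0, 2.0])]
fisher = [np.array([0.5, 0.5])]
unchanged = ewc_penalty([np.array([1.0, 2.0])], anchor, fisher)
drifted = ewc_penalty([np.array([2.0, 2.0])], anchor, fisher)
```

MAS follows the same quadratic form but estimates the importance weights from the sensitivity of the network output rather than from the Fisher information, so the same penalty skeleton applies with a different `fisher_diag`.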