Abstract
The event streams generated by dynamic vision sensors (DVS) are sparse and non-uniform in the spatial domain, yet dense and redundant in the temporal domain. Although spiking neural networks (SNNs), as event-driven neuromorphic models, have the potential to extract spatio-temporal features from event streams, they do so neither effectively nor efficiently. Motivated by this, we propose an event-sparsification spiking framework, dubbed Razor SNN, which progressively prunes uninformative event frames. Concretely, at the training stage we extend a dynamic mechanism based on global temporal embeddings to reconstruct features and adaptively emphasize the contribution of informative events. At the inference stage, fruitless frames are eliminated hierarchically according to a binary mask generated by the trained temporal embeddings. Comprehensive experiments demonstrate that our Razor SNN consistently achieves competitive performance on four event-based benchmarks: DVS128 Gesture, N-Caltech 101, CIFAR10-DVS, and SHD.
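To make the mask-based pruning idea concrete, here is a minimal PyTorch sketch of inference-time frame elimination driven by learned temporal embeddings. It is an illustration under assumptions, not the paper's implementation: the class `RazorPruner`, the per-step embedding parameterization, the pooled dot-product scoring, and the `keep_ratio` parameter are all hypothetical stand-ins for the "global temporal embeddings" and "binary mask" described in the abstract.

```python
import torch
import torch.nn as nn


class RazorPruner(nn.Module):
    """Hypothetical sketch: score event frames against learned global
    temporal embeddings and zero out the lowest-scoring frames."""

    def __init__(self, num_steps: int, channels: int, keep_ratio: float = 0.75):
        super().__init__()
        # One learnable embedding per time step (an assumed form of the
        # "global temporal embeddings"; the paper may parameterize differently).
        self.embeddings = nn.Parameter(torch.randn(num_steps, channels))
        self.keep_ratio = keep_ratio

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (T, B, C, H, W) sequence of accumulated event frames.
        T = frames.shape[0]
        # Global-average-pool each frame, then score it against its embedding.
        pooled = frames.mean(dim=(-2, -1))                             # (T, B, C)
        scores = torch.einsum("tbc,tc->tb", pooled, self.embeddings)   # (T, B)
        # Binary mask: keep the top-k frames per sample, drop the rest.
        k = max(1, int(self.keep_ratio * T))
        topk = scores.topk(k, dim=0).indices                           # (k, B)
        mask = torch.zeros_like(scores)
        mask.scatter_(0, topk, 1.0)
        return frames * mask[..., None, None, None]                    # masked frames


# Usage sketch: prune a 16-step stream down to its 12 highest-scoring frames.
pruner = RazorPruner(num_steps=16, channels=2, keep_ratio=0.75)
frames = torch.rand(16, 4, 2, 128, 128)  # (T, B, C, H, W), e.g. DVS128-sized input
print(pruner(frames).shape)              # torch.Size([16, 4, 2, 128, 128])
```

At training time one would replace the hard top-k mask with a differentiable (e.g. soft or straight-through) relaxation so the embeddings receive gradients; the hierarchical, layer-wise application of such masks is what the abstract refers to.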
This work was supported by the National Key Research and Development Program of China (Grant No. 2018YFE0203801).