Abstract
Sparsity in deep neural networks has been extensively studied as a way to compress and accelerate models for environments with limited resources. The general approach of pruning enforces sparsity on the resulting model with minimal accuracy loss, ideally with a sparsity structure that enables acceleration on hardware. Sparsity can be enforced on either the weights or the activations of the network, and existing works tend to focus on one of the two for the entire network. In this paper, we suggest a strategy based on Neural Architecture Search (NAS) to sparsify both activations and weights throughout the network, while utilizing the recent approach of N:M fine-grained structured sparsity, which enables practical acceleration on dedicated GPUs. We show that a combination of weight and activation pruning is superior to each option applied separately. Furthermore, during training, the choice between pruning the weights or the activations can be guided by practical inference costs (e.g., memory bandwidth). We demonstrate the efficiency of the approach on several image classification datasets.
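To make the N:M pattern concrete: in a 2:4 scheme, every group of four consecutive values along a dimension keeps only its two largest-magnitude entries. The PyTorch sketch below builds such a mask by magnitude; the helper name nm_prune_mask and the magnitude criterion are illustrative assumptions, not the paper's NAS-based search procedure.

```python
import torch

def nm_prune_mask(weight: torch.Tensor, n: int = 2, m: int = 4) -> torch.Tensor:
    """Binary N:M mask: in every group of M consecutive entries along the
    last dimension, keep the N entries with the largest magnitude."""
    rows, cols = weight.shape
    assert cols % m == 0, "last dimension must be divisible by M"
    groups = weight.abs().reshape(rows, cols // m, m)
    keep = groups.topk(n, dim=-1).indices          # N largest per group of M
    mask = torch.zeros_like(groups)
    mask.scatter_(-1, keep, 1.0)
    return mask.reshape(rows, cols)

# Example: magnitude-based 2:4 pruning of a random weight matrix.
w = torch.randn(8, 16)
w_sparse = w * nm_prune_mask(w, n=2, m=4)          # 2 nonzeros in each group of 4
print((w_sparse != 0).float().mean().item())       # -> 0.5 density
```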
R. Akiva-Hochman and S. E. Finder contributed equally.
Notes
1. To the best of our knowledge, only sparse-dense matrix-matrix products can be efficiently executed in hardware, and a speedup for sparse-sparse products is much harder to achieve. Hence, we consider the pruning of either the weights or the activations. However, the pruning of both can also be considered in our framework.
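As a hypothetical illustration of this design choice (reusing the nm_prune_mask helper sketched after the abstract, and assuming 2-D activations of shape batch x features), a layer can mask either its weights or its incoming activations so that each matrix product remains a sparse-dense one. The class below is a minimal sketch, not the paper's implementation or training scheme.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NMSparseLinear(nn.Linear):
    """Per layer, mask either the weights or the 2-D input activations
    (never both), so every matmul remains a sparse-dense product."""

    def __init__(self, in_features, out_features, prune_weights=True, n=2, m=4):
        super().__init__(in_features, out_features)
        self.prune_weights, self.n, self.m = prune_weights, n, m

    def forward(self, x):
        # nm_prune_mask is the magnitude-based N:M helper from the earlier sketch.
        if self.prune_weights:
            w = self.weight * nm_prune_mask(self.weight, self.n, self.m)
            return F.linear(x, w, self.bias)
        x = x * nm_prune_mask(x, self.n, self.m)   # sparsify activations instead
        return F.linear(x, self.weight, self.bias)

layer = NMSparseLinear(16, 8, prune_weights=False)  # this layer prunes activations
y = layer(torch.randn(4, 16))
```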
Acknowledgements
This work was supported in part by the Israel Innovation Authority through the Avatar consortium. The authors also acknowledge the support of the Israeli Council for Higher Education (CHE) via the Data Science Research Center, and the Lynn and William Frankel Center for Computer Science at BGU. SF is also supported by the Kreitman High-tech scholarship.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Akiva-Hochman, R., Finder, S.E., Turek, J.S., Treister, E. (2023). Searching for N:M Fine-grained Sparsity of Weights and Activations in Neural Networks. In: Karlinsky, L., Michaeli, T., Nishino, K. (eds) Computer Vision – ECCV 2022 Workshops. ECCV 2022. Lecture Notes in Computer Science, vol 13807. Springer, Cham. https://doi.org/10.1007/978-3-031-25082-8_9
DOI: https://doi.org/10.1007/978-3-031-25082-8_9
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-25081-1
Online ISBN: 978-3-031-25082-8
eBook Packages: Computer Science, Computer Science (R0)