Abstract
Reducing the numerical precision of the weights and activations of deep neural networks has proven to be a highly effective way of deploying deep networks on edge devices with limited resources. With the advent of the Transformer model, several quantization techniques have been proposed to reduce computation and model size. However, these existing techniques use fixed bit-width assignments, which cause a significant degradation in model accuracy. In this work, we present an efficient Transformer based on our novel multi-layer quantization technique, which reduces the precision of data according to the characteristics of the weights and activations in each layer of the Transformer architecture while preserving the model's structure. We evaluate on the WMT2014 DE-EN and WMT2014 FR-EN datasets. The results show that our efficient Transformer achieves 4x compression with improved accuracy and an overall reduction in training-time overhead. Comparison with existing state-of-the-art techniques further shows that, with a minimum of 3-bit and a maximum of 8-bit quantization, comparable state-of-the-art BLEU scores can be obtained.
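The abstract describes assigning a different bit-width (between 3 and 8 bits) to each layer based on that layer's weight and activation statistics. The details of the paper's multi-layer scheme are not reproduced here, so the following is only a minimal sketch of per-layer uniform quantization under assumed helper names (`quantize_tensor`, `dequantize`, and the illustrative `layer_bits` assignments are not from the paper).

```python
import numpy as np

def quantize_tensor(x, num_bits):
    """Uniform symmetric quantization of a tensor to `num_bits` bits.

    The scale is derived from the tensor's own value range, so layers with
    different weight/activation statistics get different quantization grids.
    """
    qmax = 2 ** (num_bits - 1) - 1              # e.g. 127 for 8-bit, 3 for 3-bit
    scale = np.max(np.abs(x)) / qmax + 1e-12    # avoid division by zero
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q.astype(np.int8), scale             # int8 holds any width <= 8 bits

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Hypothetical usage: per-layer bit-widths within the 3- to 8-bit range.
layer_weights = {
    "attention": np.random.randn(512, 512).astype(np.float32),
    "ffn": np.random.randn(512, 2048).astype(np.float32),
}
layer_bits = {"attention": 8, "ffn": 3}         # illustrative choices only

for name, w in layer_weights.items():
    q, s = quantize_tensor(w, layer_bits[name])
    err = np.abs(dequantize(q, s) - w).mean()
    print(f"{name}: {layer_bits[name]}-bit, mean abs error {err:.4f}")
```

Layers that are more sensitive to quantization noise would be kept at higher precision (closer to 8 bits), while more robust layers can be pushed toward 3 bits to maximize compression.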
Acknowledgement
This work was funded by the National Natural Science Foundation of China, Grant Number 61806086, and the Project of National Key R&D Program of China, Grant Numbers 2018YFB0804204 and 2019YFB1600500.
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Mensa-Bonsu, B., Cai, T., Koffi, T.Y., Niu, D. (2021). The Novel Efficient Transformer for NLP. In: Qiu, H., Zhang, C., Fei, Z., Qiu, M., Kung, S.Y. (eds.) Knowledge Science, Engineering and Management. KSEM 2021. Lecture Notes in Computer Science, vol. 12816. Springer, Cham. https://doi.org/10.1007/978-3-030-82147-0_12
DOI: https://doi.org/10.1007/978-3-030-82147-0_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-82146-3
Online ISBN: 978-3-030-82147-0