Widening and Squeezing: Towards Accurate and Efficient QNNs

C Liu, K Han, Y Wang, H Chen, Q Tian, C Xu - arXiv preprint arXiv:2002.00555, 2020 - arxiv.org
Quantized neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters. Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques. However, we find through experiments that the representation capability of quantized features is far weaker than that of full-precision features. We address this problem by projecting the features of the original full-precision network into high-dimensional quantized features. At the same time, redundant quantized features are eliminated to avoid unrestricted growth of the feature dimension on some datasets. The result is a compact quantized neural network with sufficient representation ability. Experimental results on benchmark datasets demonstrate that the proposed method establishes QNNs with far fewer parameters and calculations yet almost the same performance as the full-precision baseline models, e.g., the top-1 error of a binary ResNet-18 on the ImageNet ILSVRC 2012 dataset.
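To make the "widen then squeeze" idea above concrete, the following PyTorch sketch shows one plausible shape of such a block; it is an assumption, not the authors' implementation. The names WidenedBinaryConv and BinarizeSTE, the expansion_ratio parameter, and the use of a learned 1x1 projection to stand in for the paper's redundant-feature elimination are all hypothetical choices made only for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F


class BinarizeSTE(torch.autograd.Function):
    # Sign binarization with a straight-through estimator for the gradient.
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Pass gradients only where |x| <= 1 (standard STE clipping).
        return grad_output * (x.abs() <= 1).float()


class WidenedBinaryConv(nn.Module):
    # Hypothetical widen-then-squeeze block: the binary convolution uses
    # expansion_ratio times more channels than the full-precision baseline
    # (widening), and a 1x1 projection squeezes the widened features back
    # down so the block stays drop-in compatible with the original network.
    def __init__(self, in_ch, out_ch, expansion_ratio=4, kernel_size=3):
        super().__init__()
        wide = out_ch * expansion_ratio
        self.conv = nn.Conv2d(in_ch, wide, kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.bn = nn.BatchNorm2d(wide)
        # Squeeze: remove redundancy in the widened features via a cheap projection.
        self.squeeze = nn.Conv2d(wide, out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        x = BinarizeSTE.apply(x)                 # binarize activations
        w = BinarizeSTE.apply(self.conv.weight)  # binarize weights
        x = F.conv2d(x, w, padding=self.conv.padding)
        return self.squeeze(self.bn(x))


if __name__ == "__main__":
    block = WidenedBinaryConv(64, 64, expansion_ratio=4)
    print(block(torch.randn(2, 64, 32, 32)).shape)  # torch.Size([2, 64, 32, 32])

In this sketch the widened binary convolution supplies the extra representation capacity, while the 1x1 squeeze keeps the output dimension, and hence the downstream parameter count, from growing without bound.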