Abstract
Convolutional neural networks (CNNs) offer state-of-the-art performance in a variety of computer vision tasks, such as activity recognition, face detection, and medical image analysis. Many of these tasks require invariance to image transformations (e.g., rotations, translations, or scaling).
This work proposes a versatile, straightforward, and interpretable measure to quantify the (in)variance of CNN activations with respect to transformations of the input. Intermediate outputs of feature maps and fully connected layers are also analyzed under different input transformations. The technique is applicable to any type of neural network and any transformation. We validate it on rotation transformations and use it to compare the relative (in)variance of several networks. Specifically, ResNet, All Convolutional, and VGG architectures were trained on the CIFAR-10 and MNIST datasets, with and without rotational data augmentation. Experiments reveal that the rotational (in)variance of CNN outputs is class conditional. A distribution analysis also shows that lower layers are the most invariant, which appears to contradict previous guidelines recommending that invariances be placed near the network output and equivariances near the input.
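The measure itself is not spelled out on this page, but the abstract's description (per-activation variability under input transformations, computed for feature maps and fully connected layers alike) suggests an implementation along the following lines. This is a minimal PyTorch sketch, not the paper's actual code: the function name rotation_variance, the forward-hook capture, and the normalization by plain sample variance are all our assumptions.

```python
import numpy as np
import torch
import torchvision.transforms.functional as TF

def rotation_variance(model, layer, images, angles, device="cpu"):
    """Hypothetical normalized transformational variance measure.

    For each activation of `layer`, compute the variance over rotated
    copies of each input (averaged over inputs), divided by the ordinary
    variance of that activation across inputs. Scores near 0 suggest the
    unit is invariant to rotation; larger scores suggest it is not.

    model  -- any torch.nn.Module
    layer  -- a submodule of `model` whose output we want to score
    images -- tensor of shape (n_images, C, H, W)
    angles -- iterable of rotation angles in degrees
    """
    acts = []
    # Capture the layer's output on each forward pass, flattened to
    # (n_images, n_units) so conv maps and FC layers are treated alike.
    handle = layer.register_forward_hook(
        lambda mod, inp, out: acts.append(
            out.detach().flatten(1).cpu().numpy()))
    model.eval().to(device)
    per_angle = []
    with torch.no_grad():
        for angle in angles:
            acts.clear()
            model(TF.rotate(images, float(angle)).to(device))
            per_angle.append(acts[0])  # (n_images, n_units)
    handle.remove()
    stack = np.stack(per_angle)  # (n_angles, n_images, n_units)
    # Variance over rotations, averaged over inputs ...
    var_over_transforms = stack.var(axis=0).mean(axis=0)
    # ... normalized by the variance over inputs of the mean activation.
    var_over_samples = stack.mean(axis=0).var(axis=0) + 1e-8
    return var_over_transforms / var_over_samples  # one score per unit
```

Under these assumptions, calling rotation_variance(net, net.layer1, batch, angles=range(0, 360, 45)) would score every unit in that layer; averaging the scores per layer, or computing them separately for each class in the batch, would yield the layer-wise and class-conditional profiles the abstract describes.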
Acknowledgments
We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal GPU used in this research.