In-Place Activated BatchNorm for Memory-Optimized Training of DNNs

SR Bulo, L Porzi… - Proceedings of the IEEE …, 2018 - openaccess.thecvf.com
Abstract
In this work we present In-Place Activated Batch Normalization (InPlace-ABN), a novel approach to drastically reduce the training memory footprint of modern deep neural networks in a computationally efficient way. Our solution substitutes the conventionally used succession of BatchNorm + Activation layers with a single plugin layer, hence avoiding invasive framework surgery while providing straightforward applicability for existing deep learning frameworks. We obtain memory savings of up to 50% by dropping intermediate results and by recovering the required information during the backward pass through the inversion of stored forward results, with only a minor increase (0.8-2%) in computation time. Also, we demonstrate how frequently used checkpointing approaches can be made computationally as efficient as InPlace-ABN. In our experiments on image classification, we demonstrate on-par results on ImageNet-1k with state-of-the-art approaches. On the memory-demanding task of semantic segmentation, we report competitive results for COCO-Stuff and set new state-of-the-art results for Cityscapes and Mapillary Vistas. Code can be found at https://github.com/mapillary/inplace_abn.
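To illustrate the idea described in the abstract, the following is a minimal PyTorch sketch of a fused BatchNorm + leaky-ReLU layer that stores only the activation output and recovers the normalized input during the backward pass by inverting the activation and the affine transform. This is not the authors' released implementation (see the GitHub link above); it assumes training mode only, a leaky-ReLU activation with nonzero slope, nonzero scale parameters, and NCHW inputs, and the class names are hypothetical.

```python
import torch
import torch.nn as nn


class _InPlaceABNSketch(torch.autograd.Function):
    """Illustrative fused BatchNorm + leaky ReLU: only the activation output
    is saved; the normalized input is reconstructed in backward by inverting
    the stored forward result."""

    @staticmethod
    def forward(ctx, x, weight, bias, eps, slope):
        # Per-channel batch statistics over (N, H, W); NCHW layout assumed.
        dims = (0, 2, 3)
        mean = x.mean(dims, keepdim=True)
        var = x.var(dims, unbiased=False, keepdim=True)
        inv_std = (var + eps).rsqrt()

        w = weight.view(1, -1, 1, 1)
        b = bias.view(1, -1, 1, 1)

        # z = gamma * x_hat + beta, followed by leaky ReLU. Neither x, x_hat
        # nor z is kept for backward; only y and the small statistics are.
        z = (x - mean) * inv_std * w + b
        y = torch.where(z >= 0, z, slope * z)

        ctx.save_for_backward(y, weight, bias, inv_std)
        ctx.slope = slope
        return y

    @staticmethod
    def backward(ctx, grad_y):
        y, weight, bias, inv_std = ctx.saved_tensors
        slope = ctx.slope
        dims = (0, 2, 3)
        w = weight.view(1, -1, 1, 1)
        b = bias.view(1, -1, 1, 1)

        # Invert the leaky ReLU (valid since slope != 0), then undo the
        # affine part to recover x_hat without ever having stored it.
        # Assumes gamma != 0 (the paper constrains gamma for this reason).
        z = torch.where(y >= 0, y, y / slope)
        x_hat = (z - b) / w

        grad_z = torch.where(y >= 0, grad_y, grad_y * slope)

        grad_weight = (grad_z * x_hat).sum(dims)
        grad_bias = grad_z.sum(dims)

        # Standard batch-norm input gradient, expressed through x_hat.
        n = y.numel() / y.size(1)
        grad_x = (w * inv_std / n) * (
            n * grad_z
            - grad_z.sum(dims, keepdim=True)
            - x_hat * (grad_z * x_hat).sum(dims, keepdim=True)
        )
        return grad_x, grad_weight, grad_bias, None, None


class InPlaceABNSketch(nn.Module):
    """Drop-in stand-in for BatchNorm2d + LeakyReLU (training mode only;
    running statistics and inference handling are omitted for brevity)."""

    def __init__(self, num_features, eps=1e-5, slope=0.01):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))
        self.eps = eps
        self.slope = slope

    def forward(self, x):
        return _InPlaceABNSketch.apply(x, self.weight, self.bias, self.eps, self.slope)
```

In this sketch the memory saving comes from saving only `y` (plus per-channel statistics) for the backward pass, instead of both the BatchNorm input and its output as a separate BatchNorm + activation pair would.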