Deep Surface Normal Estimation with Hierarchical RGB-D Fusion

J. Zeng, Y. Tong, Y. Huang, Q. Yan, W. Sun, J. Chen, Y. Wang
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019 - openaccess.thecvf.com
Abstract
The growing availability of commodity RGB-D cameras has boosted applications in the field of scene understanding. However, as a fundamental scene understanding task, surface normal estimation from RGB-D data lacks thorough investigation. In this paper, a hierarchical fusion network with adaptive feature re-weighting is proposed for surface normal estimation from a single RGB-D image. Specifically, features from the color image and the depth map are successively integrated at multiple scales to ensure global surface smoothness while preserving visually salient details. Meanwhile, the depth features are re-weighted with a confidence map estimated from the depth before being merged into the color branch, to avoid artifacts caused by corruption of the input depth. Additionally, a hybrid multi-scale loss function is designed to learn accurate normal estimation from noisy ground-truth data. Extensive experimental results validate the effectiveness of the fusion strategy and the loss design, outperforming state-of-the-art normal estimation schemes.
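The confidence-weighted fusion described in the abstract can be illustrated with a minimal PyTorch-style sketch. This is not the authors' implementation: the module and function names (ConfidenceWeightedFusion, multiscale_cosine_loss), the channel arguments, the sigmoid confidence head, and the cosine form of the multi-scale loss are all assumptions made for illustration; the paper's confidence estimator and hybrid loss are more elaborate. The block would be applied at each decoder scale of the color branch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConfidenceWeightedFusion(nn.Module):
    """Hypothetical fusion block: re-weights depth features with an
    estimated confidence map before merging them into the color branch."""

    def __init__(self, color_channels, depth_channels):
        super().__init__()
        # Confidence head: predicts a per-pixel weight in [0, 1] from the
        # depth features (a stand-in for the paper's confidence estimation).
        self.confidence = nn.Sequential(
            nn.Conv2d(depth_channels, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        # 1x1 convolution merging the concatenated color and re-weighted
        # depth features back to the color branch's channel count.
        self.merge = nn.Conv2d(color_channels + depth_channels,
                               color_channels, kernel_size=1)

    def forward(self, color_feat, depth_feat):
        conf = self.confidence(depth_feat)       # (N, 1, H, W)
        weighted_depth = depth_feat * conf       # suppress unreliable depth
        fused = torch.cat([color_feat, weighted_depth], dim=1)
        return self.merge(fused)


def multiscale_cosine_loss(pred_normals, gt_normals):
    """Sketch of a multi-scale loss: cosine distance between predicted and
    ground-truth normals, summed over decoder scales (a simplified stand-in
    for the paper's hybrid loss)."""
    loss = 0.0
    for pred in pred_normals:  # one prediction per scale
        gt = F.interpolate(gt_normals, size=pred.shape[-2:],
                           mode="bilinear", align_corners=False)
        pred = F.normalize(pred, dim=1)
        gt = F.normalize(gt, dim=1)
        loss = loss + (1.0 - (pred * gt).sum(dim=1)).mean()
    return loss
```

Re-weighting the depth features before concatenation (rather than fusing them unconditionally) is what lets the color branch dominate in regions where the input depth is missing or corrupted.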