ColNeRF: Collaboration for Generalizable Sparse Input Neural Radiance Field

Authors

  • Zhangkai Ni, Tongji University
  • Peiqi Yang, Tongji University
  • Wenhan Yang, Peng Cheng Laboratory
  • Hanli Wang, Tongji University
  • Lin Ma, Meituan
  • Sam Kwong, City University of Hong Kong

DOI:

https://doi.org/10.1609/aaai.v38i5.28229

Keywords:

CV: 3D Computer Vision, CV: Computational Photography, Image & Video Synthesis

Abstract

Neural Radiance Fields (NeRF) have demonstrated impressive potential in synthesizing novel views from dense input; however, their effectiveness is challenged when dealing with sparse input. Existing approaches that incorporate additional depth or semantic supervision can alleviate this issue to an extent, but collecting such supervision is not only costly but also potentially inaccurate. In this work, we introduce a novel model, Collaborative Neural Radiance Fields (ColNeRF), designed to work with sparse input. The collaboration in ColNeRF includes both the cooperation among the sparse input source images and the cooperation among the outputs of the NeRF. Based on this, we construct a novel collaborative module that aligns information from various views and, at the same time, imposes self-supervised constraints to ensure multi-view consistency in both geometry and appearance. A Collaborative Cross-View Volume Integration (CCVI) module is proposed to capture complex occlusions and implicitly infer the spatial locations of objects. Moreover, we introduce self-supervision of target rays projected in multiple directions to ensure geometric and color consistency in adjacent regions. Benefiting from the collaboration at both the input and output ends, ColNeRF captures richer and more generalized scene representations, thereby facilitating higher-quality novel view synthesis. Our extensive experimental results demonstrate that ColNeRF outperforms state-of-the-art sparse-input generalizable NeRF methods. Furthermore, our approach excels at fine-tuning to adapt to new scenes, achieving performance competitive with per-scene optimized NeRF-based methods while significantly reducing computational costs. Our code is available at: https://github.com/eezkni/ColNeRF.
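To make the cross-view collaboration idea concrete, below is a minimal, hypothetical sketch of how features sampled from several sparse source views at the same 3D points could be fused with attention, in the spirit of the CCVI module described in the abstract. This is not the authors' implementation; the module name, dimensions, and structure are illustrative assumptions only (see the official repository linked above for the actual code).

```python
# Minimal sketch (NOT the authors' code): attention-based cross-view feature
# fusion in the spirit of ColNeRF's Collaborative Cross-View Volume Integration.
# All names (CrossViewAggregator, feat_dim, etc.) are hypothetical.
import torch
import torch.nn as nn


class CrossViewAggregator(nn.Module):
    """Fuses per-view features for each sampled 3D point via self-attention,
    so each sparse source view can borrow evidence from the other views."""

    def __init__(self, feat_dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)

    def forward(self, view_feats: torch.Tensor) -> torch.Tensor:
        # view_feats: (num_points, num_views, feat_dim) -- features sampled
        # from each source image at the projections of the same 3D points.
        fused, _ = self.attn(view_feats, view_feats, view_feats)
        # Residual connection keeps each view's own evidence while adding
        # information aggregated from the other views.
        return self.norm(view_feats + fused)


if __name__ == "__main__":
    points, views, dim = 1024, 3, 64          # e.g., 3 sparse input views
    feats = torch.randn(points, views, dim)
    fused = CrossViewAggregator(dim)(feats)
    print(fused.shape)                        # torch.Size([1024, 3, 64])
```

The fused per-point features would then be decoded into density and color as in a standard conditional NeRF pipeline; the self-supervised ray consistency described in the abstract would act as an additional loss on the rendered outputs.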

Published

2024-03-24

How to Cite

Ni, Z., Yang, P., Yang, W., Wang, H., Ma, L., & Kwong, S. (2024). ColNeRF: Collaboration for Generalizable Sparse Input Neural Radiance Field. Proceedings of the AAAI Conference on Artificial Intelligence, 38(5), 4325-4333. https://doi.org/10.1609/aaai.v38i5.28229

Section

AAAI Technical Track on Computer Vision IV