Abstract
We propose a new neural point cloud rendering method that combines point cloud multi-plane projection with NeRF [12]. Existing point-based rendering methods often rely on high-quality point cloud geometry. Meanwhile, NeRF and its extensions usually query the RGB and volume density of each point through neural networks, leading to low inference efficiency. In this paper, we project point features onto multiple random depth planes and feed them into a 3D convolutional neural network to predict RGB and volume density maps. We then synthesize a novel view through volume rendering. Projecting point features onto multiple planes reduces the impact of geometry errors and improves rendering efficiency. Experimental results on the DTU and ScanNet datasets show that our approach achieves state-of-the-art results. Our source code is available at https://github.com/Mayxmu/PCMP-NeRF.
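To make the pipeline in the abstract concrete, below is a minimal PyTorch sketch of its three stages: scattering point features onto a small set of depth planes, predicting per-plane RGB and density with a 3D CNN, and alpha-compositing the planes as in volume rendering. All names (project_to_planes, PlaneNet, composite), shapes, and hyperparameters are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
# A minimal, assumption-laden sketch of the pipeline described in the abstract.
import torch
import torch.nn as nn

def project_to_planes(xyz, feat, K, n_planes, H, W, near, far):
    """Scatter camera-space points xyz (N, 3) with features feat (N, C)
    onto n_planes fronto-parallel depth planes -> (C, n_planes, H, W)."""
    C = feat.shape[1]
    vol = torch.zeros(C, n_planes, H, W)
    uvz = (K @ xyz.T).T                                   # pinhole projection, (N, 3)
    u = (uvz[:, 0] / uvz[:, 2]).long()
    v = (uvz[:, 1] / uvz[:, 2]).long()
    z = uvz[:, 2].clamp(near, far)
    d = ((z - near) / (far - near) * (n_planes - 1)).round().long()  # plane index
    ok = (uvz[:, 2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    # Last write wins on collisions; a real implementation would blend.
    vol[:, d[ok], v[ok], u[ok]] = feat[ok].T
    return vol

class PlaneNet(nn.Module):
    """Tiny 3D CNN mapping the feature volume to per-plane RGB and density."""
    def __init__(self, c_in):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(c_in, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 4, 3, padding=1),               # 3 RGB channels + 1 density
        )

    def forward(self, vol):                               # vol: (1, C, D, H, W)
        out = self.net(vol)
        return torch.sigmoid(out[:, :3]), torch.relu(out[:, 3:])

def composite(rgb, sigma, delta):
    """Volume rendering over the plane dimension D, front to back."""
    alpha = 1 - torch.exp(-sigma * delta)                 # (1, 1, D, H, W)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :, :1]), 1 - alpha + 1e-10], dim=2),
        dim=2)[:, :, :-1]                                 # accumulated transmittance
    return (trans * alpha * rgb).sum(dim=2)               # (1, 3, H, W)

# Toy usage: render a 64x64 view of 1000 random points with 8 planes.
xyz = torch.rand(1000, 3) * torch.tensor([2.0, 2.0, 4.0]) + torch.tensor([-1.0, -1.0, 1.0])
feat = torch.rand(1000, 16)
K = torch.tensor([[60.0, 0.0, 32.0], [0.0, 60.0, 32.0], [0.0, 0.0, 1.0]])
vol = project_to_planes(xyz, feat, K, n_planes=8, H=64, W=64, near=1.0, far=5.0)
rgb, sigma = PlaneNet(16)(vol.unsqueeze(0))
image = composite(rgb, sigma, delta=0.5)                  # (1, 3, 64, 64) RGB image
```

Note that the abstract describes projecting onto random depth planes, which is what reduces sensitivity to point cloud geometry errors; this sketch bins points into uniformly spaced planes for simplicity.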
References
Ali, S.G., et al.: Cost-effective broad learning-based ultrasound biomicroscopy with 3D reconstruction for ocular anterior segmentation. Multimed. Tools Appl. 80, 35105–35122 (2021)
Aliev, K.-A., Sevastopolsky, A., Kolos, M., Ulyanov, D., Lempitsky, V.: Neural point-based graphics. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds.) ECCV 2020, Part XXII. LNCS, vol. 12367, pp. 696–712. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58542-6_42
Bui, G., Le, T., Morago, B., Duan, Y.: Point-based rendering enhancement via deep learning. Vis. Comput. 34, 829–841 (2018)
Chen, A., et al.: MVSNeRF: fast generalizable radiance field reconstruction from multi-view stereo. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 14124–14133 (2021)
Dai, A., Chang, A.X., Savva, M., Halber, M., Funkhouser, T., Nießner, M.: ScanNet: richly-annotated 3D reconstructions of indoor scenes. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5828–5839 (2017)
Dai, P., Zhang, Y., Li, Z., Liu, S., Zeng, B.: Neural point cloud rendering via multi-plane projection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7830–7839 (2020)
Ha, D., Dai, A., Le, Q.V.: Hypernetworks. arXiv preprint arXiv:1609.09106 (2016)
Huang, X., Zhang, Y., Ni, B., Li, T., Chen, K., Zhang, W.: Boosting point clouds rendering via radiance mapping. arXiv preprint arXiv:2210.15107 (2022)
Iizuka, S., Simo-Serra, E., Ishikawa, H.: Globally and locally consistent image completion. ACM Trans. Graph. (TOG) 36(4), 1–14 (2017)
Jensen, R., Dahl, A., Vogiatzis, G., Tola, E., Aanæs, H.: Large scale multi-view stereopsis evaluation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 406–413 (2014)
Kopanas, G., Philip, J., Leimkühler, T., Drettakis, G.: Point-based neural rendering with per-view optimization. In: Computer Graphics Forum, vol. 40, pp. 29–43. Wiley Online Library (2021)
Mildenhall, B., Srinivasan, P.P., Tancik, M., Barron, J.T., Ramamoorthi, R., Ng, R.: NeRF: representing scenes as neural radiance fields for view synthesis. Commun. ACM 65(1), 99–106 (2021)
Müller, T., Evans, A., Schied, C., Keller, A.: Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans. Graph. 41(4), 102:1–102:15 (2022)
Qiu, J., Yin, Z.X., Cheng, M.M., Ren, B.: Rendering real-world unbounded scenes with cars by learning positional bias. Vis. Comput. 1–14 (2023)
Qiu, J., Zhu, Y., Jiang, P.T., Cheng, M.M., Ren, B.: RdNeRF: relative depth guided NeRF for dense free view synthesis. Vis. Comput. 1–13 (2023)
Rakhimov, R., Ardelean, A.T., Lempitsky, V., Burnaev, E.: NPBG++: accelerating neural point-based graphics. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15969–15979 (2022)
Rückert, D., Franke, L., Stamminger, M.: ADOP: approximate differentiable one-pixel point rendering. ACM Trans. Graph. (TOG) 41(4), 1–14 (2022)
Thalmann, N., Kim, J., Papagiannakis, G., Thalmann, D., Sheng, B.: Computer graphics for metaverse. Virtual Reality Intell. Hardw. 4, ii–iv (2022)
Vaswani, A., et al.: Attention is all you need. In: Advances in Neural Information Processing Systems, vol. 30 (2017)
Wang, Y., Serena, F., Wu, S., Öztireli, C., Sorkine-Hornung, O.: Differentiable surface splatting for point-based geometry processing. ACM Trans. Graph. 38(6), 1–14 (2019)
Wiles, O., Gkioxari, G., Szeliski, R., Johnson, J.: SynSin: end-to-end view synthesis from a single image. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7467–7477 (2020)
Xu, Q., et al.: Point-NeRF: point-based neural radiance fields. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5438–5448 (2022)
Yao, Y., Luo, Z., Li, S., Fang, T., Quan, L.: MVSNet: depth inference for unstructured multi-view stereo. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 767–783 (2018)
Yu, A., Ye, V., Tancik, M., Kanazawa, A.: pixelNeRF: neural radiance fields from one or few images. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4578–4587 (2021)
Zhang, Q., Baek, S.H., Rusinkiewicz, S., Heide, F.: Differentiable point-based radiance fields for efficient view synthesis. arXiv preprint arXiv:2205.14330 (2022)
Zimny, D., Trzciński, T., Spurek, P.: Points2NeRF: generating neural radiance fields from 3D point cloud. arXiv preprint arXiv:2206.01290 (2022)
Acknowledgements
The research was supported by the National Natural Science Foundation of China (Nos. 61972327, 62272402, 62372389), the Natural Science Foundation of Fujian Province (No. 2022J01001), and the Fundamental Research Funds for the Central Universities (No. 20720220037).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Ma, D., Cao, J., Chen, Z. (2024). Point Cloud Rendering via Multi-plane NeRF. In: Sheng, B., Bi, L., Kim, J., Magnenat-Thalmann, N., Thalmann, D. (eds) Advances in Computer Graphics. CGI 2023. Lecture Notes in Computer Science, vol 14496. Springer, Cham. https://doi.org/10.1007/978-3-031-50072-5_16
DOI: https://doi.org/10.1007/978-3-031-50072-5_16
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-50071-8
Online ISBN: 978-3-031-50072-5
eBook Packages: Computer Science (R0)