Abstract
State-of-the-art techniques for 3D reconstruction are largely based on volumetric scene representations, which require sampling multiple points to compute the color arriving along a ray. Using these representations for more general inverse rendering—reconstructing geometry, materials, and lighting from observed images—is challenging because recursively path-tracing such volumetric representations is expensive. Recent works alleviate this issue through the use of radiance caches: data structures that store the steady-state, infinite-bounce radiance arriving at any point from any direction. However, these solutions rely on approximations that introduce bias into the renderings and, more importantly, into the gradients used for optimization. We present a method that avoids these approximations while remaining computationally efficient. In particular, we leverage two techniques to reduce variance for unbiased estimators of the rendering equation: (1) an occlusion-aware importance sampler for incoming illumination and (2) a fast cache architecture that can be used as a control variate for the radiance from a high-quality, but more expensive, volumetric cache. We show that by removing these biases our approach improves the generality of radiance-cache-based inverse rendering and increases quality in the presence of challenging light transport effects such as specular interreflections.
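The control-variate idea at the heart of technique (2) can be illustrated with a minimal Monte Carlo sketch. This is not the paper's implementation; `slow_cache` and `fast_cache` are hypothetical stand-ins for the expensive high-quality cache and the cheap correlated cache, and the domain is reduced to a 1D integral. The estimator integrates only the residual between the two caches by sampling and adds back the fast cache's (assumed known) integral, which lowers variance without introducing bias:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: slow_cache is the expensive, high-quality radiance
# estimate; fast_cache is a cheap approximation that is well correlated with it.
def slow_cache(x):
    return np.exp(-x) * (1.0 + 0.1 * np.sin(20.0 * x))

def fast_cache(x):
    return np.exp(-x)

# The fast cache's integral over [0, 1] is known in closed form.
fast_integral = 1.0 - np.exp(-1.0)

n = 10_000
x = rng.uniform(0.0, 1.0, n)

# Naive Monte Carlo estimate of the integral of slow_cache over [0, 1].
naive = slow_cache(x).mean()

# Control-variate estimate: sample only the residual (slow - fast), then add
# back the known integral of the fast cache. Both estimators are unbiased,
# but the residual has much lower variance than slow_cache itself.
cv = (slow_cache(x) - fast_cache(x)).mean() + fast_integral
```

In the paper's setting the same structure applies per shading point: the fast cache plays the role of `fast_cache`, and only the difference between it and the expensive volumetric cache needs to be estimated by sampling.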
Acknowledgements
This work was carried out while Benjamin Attal was an intern at Google Research. The authors thank Rick Szeliski, Aleksander Holynski, and Janne Kontkanen for fruitful discussions. Benjamin Attal is supported by a Meta Research PhD Fellowship. Matthew O'Toole acknowledges support from NSF CAREER 2238485 and a Google gift.
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Attal, B. et al. (2025). Flash Cache: Reducing Bias in Radiance Cache Based Inverse Rendering. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15086. Springer, Cham. https://doi.org/10.1007/978-3-031-73390-1_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-73389-5
Online ISBN: 978-3-031-73390-1