
Free-viewpoint Indoor Neural Relighting from Multi-view Stereo

Published: 24 September 2021
Abstract

    We introduce a neural relighting algorithm for captured indoor scenes that allows interactive free-viewpoint navigation. Our method allows illumination to be changed synthetically, while coherently rendering cast shadows and complex glossy materials. We start with multiple images of the scene and a three-dimensional mesh obtained by multi-view stereo (MVS) reconstruction. We assume that lighting is well explained as the sum of a view-independent diffuse component and a view-dependent glossy term concentrated around the mirror reflection direction. We design a convolutional network around input feature maps that facilitate learning of an implicit representation of scene materials and illumination, enabling both relighting and free-viewpoint navigation. We generate these input maps by exploiting the best elements of both image-based and physically based rendering. We sample the input views to estimate diffuse scene irradiance, and compute the new illumination caused by user-specified light sources using path tracing. To facilitate the network's understanding of materials and synthesize plausible glossy reflections, we reproject the views and compute mirror images. We train the network on a synthetic dataset where each scene is also reconstructed with MVS. We show results of our algorithm relighting real indoor scenes and performing free-viewpoint navigation with complex and realistic glossy reflections, which have so far remained out of reach for view-synthesis techniques.
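
    For intuition, the following is a minimal sketch (Python/NumPy) of the two-term lighting model the abstract assumes: a view-independent diffuse term driven by scene irradiance, plus a view-dependent glossy term concentrated around the mirror reflection direction. This is not the authors' implementation; the function and parameter names (mirror_direction, shade, glossy_weight) are illustrative assumptions.

    import numpy as np

    def mirror_direction(normal, view_dir):
        # Reflect the unit view direction about the unit surface normal:
        # r = 2 (n . v) n - v, evaluated per pixel on H x W x 3 arrays.
        ndotv = np.sum(normal * view_dir, axis=-1, keepdims=True)
        return 2.0 * ndotv * normal - view_dir

    def shade(albedo, irradiance, mirror_radiance, glossy_weight):
        # Diffuse term: view-independent, albedo times irradiance over pi.
        diffuse = albedo * irradiance / np.pi
        # Glossy term: radiance seen along the mirror direction (the "mirror
        # image" fed to the network), scaled by a per-pixel glossiness map.
        glossy = glossy_weight * mirror_radiance
        return diffuse + glossy

    # Tiny usage example on a 2 x 2 "image" with a flat, front-facing surface.
    n = np.broadcast_to(np.array([0.0, 0.0, 1.0]), (2, 2, 3))
    v = np.broadcast_to(np.array([0.0, 0.0, 1.0]), (2, 2, 3))
    r = mirror_direction(n, v)  # points straight back along +z here
    out = shade(albedo=np.full((2, 2, 3), 0.5),
                irradiance=np.full((2, 2, 3), 1.0),
                mirror_radiance=np.full((2, 2, 3), 0.2),
                glossy_weight=np.full((2, 2, 1), 0.3))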



        Information

        Published In

        ACM Transactions on Graphics, Volume 40, Issue 5
        October 2021
        190 pages
        ISSN: 0730-0301
        EISSN: 1557-7368
        DOI: 10.1145/3477320
        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        Published: 24 September 2021
        Accepted: 01 June 2021
        Revised: 01 April 2021
        Received: 01 October 2020
        Published in TOG Volume 40, Issue 5


        Author Tags

        1. Image relighting
        2. image-based rendering
        3. multi-view
        4. deep learning

        Qualifiers

        • Research-article
        • Refereed

        Funding Sources

        • ERC Advanced


        Cited By

        • (2024) Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based View Synthesis. ACM Transactions on Graphics 43(4), 1-14. DOI: 10.1145/3658130. Online publication date: 19-Jul-2024.
        • (2024) A Diffusion Approach to Radiance Field Relighting using Multi-Illumination Synthesis. Computer Graphics Forum 43(4). DOI: 10.1111/cgf.15147. Online publication date: 24-Jul-2024.
        • (2023) Shadow Harmonization for Realistic Compositing. SIGGRAPH Asia 2023 Conference Papers, 1-12. DOI: 10.1145/3610548.3618227. Online publication date: 10-Dec-2023.
        • (2023) Intrinsic Harmonization for Illumination-Aware Image Compositing. SIGGRAPH Asia 2023 Conference Papers, 1-10. DOI: 10.1145/3610548.3618178. Online publication date: 11-Dec-2023.
        • (2023) VR-NeRF: High-Fidelity Virtualized Walkable Spaces. SIGGRAPH Asia 2023 Conference Papers, 1-12. DOI: 10.1145/3610548.3618139. Online publication date: 10-Dec-2023.
        • (2023) NeRO: Neural Geometry and BRDF Reconstruction of Reflective Objects from Multiview Images. ACM Transactions on Graphics 42(4), 1-22. DOI: 10.1145/3592134. Online publication date: 26-Jul-2023.
        • (2023) Single Image Neural Material Relighting. ACM SIGGRAPH 2023 Conference Proceedings, 1-11. DOI: 10.1145/3588432.3591515. Online publication date: 23-Jul-2023.
        • (2023) Relighting Neural Radiance Fields with Shadow and Highlight Hints. ACM SIGGRAPH 2023 Conference Proceedings, 1-11. DOI: 10.1145/3588432.3591482. Online publication date: 23-Jul-2023.
        • (2023) IBL-NeRF: Image-Based Lighting Formulation of Neural Radiance Fields. Computer Graphics Forum 42(7). DOI: 10.1111/cgf.14929. Online publication date: 30-Oct-2023.
        • (2023) MILO: Multi-Bounce Inverse Rendering for Indoor Scene With Light-Emitting Objects. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(8), 10129-10142. DOI: 10.1109/TPAMI.2023.3244658. Online publication date: 1-Aug-2023.
