research-article
Open access

Neural Light Transport for Relighting and View Synthesis

Published: 18 January 2021

Abstract

The light transport (LT) of a scene describes how it appears under different lighting conditions from different viewing directions, and complete knowledge of a scene’s LT enables the synthesis of novel views under arbitrary lighting. In this article, we focus on image-based LT acquisition, primarily for human bodies within a light stage setup. We propose a semi-parametric approach for learning a neural representation of the LT that is embedded in a texture atlas of known but possibly rough geometry. We model all non-diffuse and global LT as residuals added to a physically based diffuse base rendering. In particular, we show how to fuse previously seen observations of illuminants and views to synthesize a new image of the same scene under a desired lighting condition from a chosen viewpoint. This strategy allows the network to learn complex material effects (such as subsurface scattering) and global illumination (such as diffuse interreflection), while guaranteeing the physical correctness of the diffuse LT (such as hard shadows). With this learned LT, one can relight the scene photorealistically with a directional light or an HDRI map, synthesize novel views with view-dependent effects, or do both simultaneously, all in a unified framework using a set of sparse observations. Qualitative and quantitative experiments demonstrate that our Neural Light Transport (NLT) outperforms state-of-the-art solutions for relighting and view synthesis, without the separate treatment of the two problems that prior work requires. The code and data are available at http://nlt.csail.mit.edu.

Supplementary Material

zhang (zhang.zip)
Supplemental movie, appendix, image, and software files for Neural Light Transport for Relighting and View Synthesis.



Published In

ACM Transactions on Graphics, Volume 40, Issue 1
February 2021
139 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/3420236
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 18 January 2021
Accepted: 01 December 2020
Revised: 01 December 2020
Received: 01 August 2020
Published in TOG Volume 40, Issue 1


Author Tags

  1. Neural rendering
  2. relighting
  3. view synthesis

Qualifiers

  • Research-article
  • Research
  • Refereed



Article Metrics

  • Downloads (last 12 months): 674
  • Downloads (last 6 weeks): 96
Reflects downloads up to 10 Nov 2024


Cited By

  • (2024) Person Image Generation Guided by Posture, Expression and Illumination. International Journal of Computational Intelligence and Applications. DOI: 10.1142/S146902682450010X. Online publication date: 25-Apr-2024
  • (2024) Subjective assessment for inverse rendered composite images in 360-deg images. Journal of Electronic Imaging 33(1). DOI: 10.1117/1.JEI.33.1.013037. Online publication date: 1-Jan-2024
  • (2024) NeuPreSS: Compact Neural Precomputed Subsurface Scattering for Distant Lighting of Heterogeneous Translucent Objects. Computer Graphics Forum 43(7). DOI: 10.1111/cgf.15234. Online publication date: 18-Oct-2024
  • (2024) 3D Scene Creation and Rendering via Rough Meshes: A Lighting Transfer Avenue. IEEE Transactions on Pattern Analysis and Machine Intelligence 46(9), 6292-6305. DOI: 10.1109/TPAMI.2024.3381982. Online publication date: Sep-2024
  • (2024) Differentiable Display Photometric Stereo. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 11831-11840. DOI: 10.1109/CVPR52733.2024.01124. Online publication date: 16-Jun-2024
  • (2024) Holo-Relighting: Controllable Volumetric Portrait Relighting from a Single Image. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 4263-4273. DOI: 10.1109/CVPR52733.2024.00408. Online publication date: 16-Jun-2024
  • (2024) Artist-Friendly Relightable and Animatable Neural Heads. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2457-2467. DOI: 10.1109/CVPR52733.2024.00238. Online publication date: 16-Jun-2024
  • (2024) IntrinsicAvatar: Physically Based Inverse Rendering of Dynamic Humans from Monocular Videos via Explicit Ray Tracing. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 1877-1888. DOI: 10.1109/CVPR52733.2024.00184. Online publication date: 16-Jun-2024
  • (2024) Relightable and Animatable Neural Avatar from Sparse-View Video. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 990-1000. DOI: 10.1109/CVPR52733.2024.00100. Online publication date: 16-Jun-2024
  • (2024) Relightable Gaussian Codec Avatars. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 130-141. DOI: 10.1109/CVPR52733.2024.00021. Online publication date: 16-Jun-2024
