DOI: 10.1145/3641519.3657498
Research article, open access

Factorized Motion Fields for Fast Sparse Input Dynamic View Synthesis

Published: 13 July 2024

Abstract

Designing a 3D representation of a dynamic scene for fast optimization and rendering is a challenging task. While recent explicit representations enable fast learning and rendering of dynamic radiance fields, they require a dense set of input viewpoints. In this work, we focus on learning a fast representation for dynamic radiance fields with sparse input viewpoints. However, optimization with sparse input is under-constrained and necessitates the use of motion priors to constrain the learning. Existing fast dynamic scene models do not explicitly model the motion, making them difficult to constrain with motion priors. We design an explicit motion model as a factorized 4D representation that is fast and can exploit the spatio-temporal correlation of the motion field. We then introduce reliable flow priors, including a combination of sparse flow priors across cameras and dense flow priors within cameras, to regularize our motion model. Our model is fast and compact, and achieves very good performance on popular multi-view dynamic scene datasets with sparse input viewpoints. The source code for our model can be found on our project page: https://nagabhushansn95.github.io/publications/2024/RF-DeRF.html.
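To make the idea of a factorized 4D motion field concrete, the toy sketch below represents a function (x, y, z, t) → 3D flow using six small 2D feature planes, one per coordinate pair, whose bilinearly sampled features are combined by a Hadamard product and decoded by a linear head. This is only an illustrative K-Planes-style factorization written for this page; the class and parameter names are hypothetical, and the paper's actual factorization, resolution, and decoder may differ.

```python
import numpy as np

class FactorizedMotionField:
    """Toy factorized 4D motion field: six 2D feature planes over the
    coordinate pairs of (x, y, z, t). Per-point features are the
    element-wise product of bilinearly sampled plane features, mapped
    to a 3D flow vector by a linear head. Illustrative only."""

    PAIRS = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]  # xy, xz, yz, xt, yt, zt

    def __init__(self, res=32, feat=8, seed=0):
        rng = np.random.default_rng(seed)
        # One (res x res x feat) learnable grid per coordinate pair.
        self.planes = [rng.normal(scale=0.1, size=(res, res, feat))
                       for _ in self.PAIRS]
        self.head = rng.normal(scale=0.1, size=(feat, 3))  # feat -> 3D flow
        self.res = res

    def _bilerp(self, plane, u, v):
        # u, v in [0, 1]; bilinear interpolation on a 2D feature grid.
        g = self.res - 1
        x, y = u * g, v * g
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1, y1 = min(x0 + 1, g), min(y0 + 1, g)
        wx, wy = x - x0, y - y0
        return ((1 - wx) * (1 - wy) * plane[x0, y0]
                + wx * (1 - wy) * plane[x1, y0]
                + (1 - wx) * wy * plane[x0, y1]
                + wx * wy * plane[x1, y1])

    def flow(self, x, y, z, t):
        """3D scene flow at normalized coordinates (x, y, z, t) in [0, 1]^4."""
        p = np.array([x, y, z, t])
        feat = np.ones(self.head.shape[0])
        for plane, (a, b) in zip(self.planes, self.PAIRS):
            feat *= self._bilerp(plane, p[a], p[b])  # Hadamard product
        return feat @ self.head

field = FactorizedMotionField()
v = field.flow(0.2, 0.5, 0.7, 0.3)
assert v.shape == (3,)
```

Because each plane is only quadratic (not quartic) in resolution, storage and lookup stay cheap, and because the planes are queried at continuous coordinates, flow priors (e.g., from an off-the-shelf optical-flow estimator) can directly supervise the decoded motion vectors.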

Supplemental Material

  • MP4 file: presentation video
  • ZIP file: video comparisons, detailed performance scores



Published In

SIGGRAPH '24: ACM SIGGRAPH 2024 Conference Papers
July 2024
1106 pages
ISBN:9798400705250
DOI:10.1145/3641519
This work is licensed under a Creative Commons Attribution International 4.0 License.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Fast dynamic view synthesis
  2. dynamic radiance fields
  3. factorized models
  4. motion priors
  5. sparse input views

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • Kotak IISc AI/ML Centre

Conference

SIGGRAPH '24
Acceptance Rates

Overall acceptance rate: 1,822 of 8,601 submissions (21%)

Article Metrics

  • Total citations: 0
  • Total downloads: 393
  • Downloads (last 12 months): 393
  • Downloads (last 6 weeks): 48

Reflects downloads up to 18 Feb 2025.

