
MetaCap: Meta-learning Priors from Multi-view Imagery for Sparse-View Human Performance Capture and Rendering

  • Conference paper

Computer Vision – ECCV 2024 (ECCV 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15104)

Abstract

Faithful human performance capture and free-view rendering from sparse RGB observations is a long-standing problem in Vision and Graphics. The main challenges are the lack of observations and the inherent ambiguities of the setting, e.g. occlusions and depth ambiguity. As a result, radiance fields, which have shown great promise in capturing high-frequency appearance and geometry details in dense setups, perform poorly when naïvely supervised on sparse camera views, as the field simply overfits to the sparse-view inputs. To address this, we propose MetaCap, a method for efficient and high-quality geometry recovery and novel view synthesis given very sparse or even a single view of the human. Our key idea is to meta-learn the radiance field weights solely from potentially sparse multi-view videos, which can serve as a prior when fine-tuning them on sparse imagery depicting the human. This prior provides a good network weight initialization, thereby effectively addressing ambiguities in sparse-view capture. Due to the articulated structure of the human body and motion-induced surface deformations, learning such a prior is non-trivial. Therefore, we propose to meta-learn the field weights in a pose-canonicalized space, which reduces the spatial feature range and makes feature learning more effective. Consequently, one can fine-tune our field parameters to quickly generalize to unseen poses, novel illumination conditions, as well as novel and sparse (even monocular) camera views. To evaluate our method under different scenarios, we collect a new dataset, WildDynaCap, which contains subjects captured in both a dense camera dome and in-the-wild sparse camera rigs, and demonstrate superior results compared to recent state-of-the-art methods on both the public and WildDynaCap datasets.
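
To make the weight-prior idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of meta-learning an initialization for a radiance-field network from multi-view training tasks and then fine-tuning it on sparse views. It uses a Reptile-style first-order update in place of the paper's exact optimization, and RadianceFieldMLP, photometric_loss, and the task batches are simplified stand-ins; the actual method renders rays through a pose-canonicalized field rather than regressing colors at raw 3D points.

# Minimal sketch (assumptions: PyTorch, a toy MLP field, precomputed point/color
# batches per training pose). Shows the general meta-learning-of-initial-weights
# pattern, not MetaCap's exact pipeline.
import copy
import torch
import torch.nn as nn


class RadianceFieldMLP(nn.Module):
    """Toy field: maps a 3D point in canonical space to (density, RGB)."""

    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 1 density + 3 color channels
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def photometric_loss(field: nn.Module, batch: dict) -> torch.Tensor:
    """Placeholder loss: compare predicted colors against ground-truth pixels."""
    pred_rgb = field(batch["points"])[:, 1:]
    return ((pred_rgb - batch["colors"]) ** 2).mean()


def inner_finetune(field: nn.Module, batch: dict, steps: int, lr: float) -> nn.Module:
    """Fine-tune a copy of the field on one training task (one pose / view subset)."""
    adapted = copy.deepcopy(field)
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        photometric_loss(adapted, batch).backward()
        opt.step()
    return adapted


def meta_train(tasks, meta_iters: int = 1000, inner_steps: int = 8,
               inner_lr: float = 1e-2, meta_lr: float = 1e-1) -> RadianceFieldMLP:
    """Reptile-style outer loop: pull the meta-weights toward each task's adapted weights."""
    meta_field = RadianceFieldMLP()
    for it in range(meta_iters):
        batch = tasks[it % len(tasks)]  # one multi-view training sample
        adapted = inner_finetune(meta_field, batch, inner_steps, inner_lr)
        with torch.no_grad():
            for p_meta, p_task in zip(meta_field.parameters(), adapted.parameters()):
                p_meta += meta_lr * (p_task - p_meta)
    return meta_field


# At test time, the meta-learned weights initialize the field, which is then
# fine-tuned for a handful of steps on the sparse (or monocular) observations:
#   field = meta_train(multi_view_tasks)
#   field = inner_finetune(field, sparse_view_batch, steps=50, lr=1e-3)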

Acknowledgements

This research was supported by the ERC Consolidator Grant 4DRepLy (770784).

Author information

Corresponding author

Correspondence to Guoxing Sun.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 2176 KB)

Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Sun, G., Dabral, R., Fua, P., Theobalt, C., Habermann, M. (2025). MetaCap: Meta-learning Priors from Multi-view Imagery for Sparse-View Human Performance Capture and Rendering. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15104. Springer, Cham. https://doi.org/10.1007/978-3-031-72952-2_20

  • DOI: https://doi.org/10.1007/978-3-031-72952-2_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72951-5

  • Online ISBN: 978-3-031-72952-2

  • eBook Packages: Computer Science, Computer Science (R0)
