High-Fidelity 3D Digital Human Head Creation from RGB-D Selfies

Published: 09 November 2021
    Abstract

    We present a fully automatic system that produces high-fidelity, photo-realistic three-dimensional (3D) digital human heads from a consumer RGB-D selfie camera. The user only needs to record a short selfie RGB-D video while rotating his or her head, and the system produces a high-quality head reconstruction in less than 30 seconds. Our main contribution is a new facial geometry modeling and reflectance synthesis procedure that significantly improves the state of the art. Specifically, given the input video, a two-stage frame selection procedure first selects a few high-quality frames for reconstruction. A differentiable-renderer-based 3D Morphable Model (3DMM) fitting algorithm is then applied to recover facial geometry from the multiview RGB-D data, taking advantage of a powerful 3DMM basis constructed with extensive data generation and perturbation. Our 3DMM has a much larger expressive capacity than conventional 3DMMs, allowing us to recover more accurate facial geometry using merely a linear basis. For reflectance synthesis, we present a hybrid approach that combines parametric fitting and convolutional neural networks (CNNs) to synthesize high-resolution albedo/normal maps with realistic hair/pore/wrinkle details. Results show that our system can produce faithful 3D digital human faces with extremely realistic details. The main code and the newly constructed 3DMM basis are publicly available.
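    The geometry fitting described above relies on a purely linear 3DMM. The toy sketch below is illustrative only: the mesh resolution, basis sizes, and least-squares data term are all assumptions, not the paper's actual pipeline (which optimizes through a differentiable renderer against multiview RGB-D data). It shows why a linear basis keeps the shape solve tractable: with a least-squares data term, recovering the identity and expression coefficients reduces to an ordinary linear solve.

    ```python
    import numpy as np

    # Illustrative linear 3DMM; all dimensions here are made up for
    # demonstration and are far smaller than a real basis.
    rng = np.random.default_rng(0)
    n_vertices = 1000                 # hypothetical mesh resolution
    n_id, n_exp = 80, 20              # hypothetical basis sizes

    mean_shape = rng.standard_normal(3 * n_vertices)
    id_basis = rng.standard_normal((3 * n_vertices, n_id))
    exp_basis = rng.standard_normal((3 * n_vertices, n_exp))

    def reconstruct(alpha, beta):
        """Linear 3DMM: shape = mean + identity offsets + expression offsets."""
        return mean_shape + id_basis @ alpha + exp_basis @ beta

    # A synthetic "observed" shape generated from known coefficients.
    target = reconstruct(rng.standard_normal(n_id), rng.standard_normal(n_exp))

    # With a least-squares data term, fitting the linear model is a single
    # linear solve for the stacked (alpha, beta) coefficient vector.
    A = np.hstack([id_basis, exp_basis])
    coeffs, *_ = np.linalg.lstsq(A, target - mean_shape, rcond=None)
    alpha_hat, beta_hat = coeffs[:n_id], coeffs[n_id:]
    fitted = reconstruct(alpha_hat, beta_hat)
    print(np.allclose(fitted, target))  # exact recovery in this noise-free toy
    ```

    In the real system the data term is a rendering/depth loss rather than a direct vertex difference, so the coefficients are found by gradient-based optimization instead of one closed-form solve, but the linearity of the basis is what makes that optimization well behaved.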




    Published In

    ACM Transactions on Graphics  Volume 41, Issue 1
    February 2022
    178 pages
    ISSN:0730-0301
    EISSN:1557-7368
    DOI:10.1145/3484929

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 09 November 2021
    Accepted: 01 June 2021
    Revised: 01 June 2021
    Received: 01 September 2020
    Published in TOG Volume 41, Issue 1


    Author Tags

    1. Digital human
    2. 3D face
    3. avatar
    4. 3DMM

    Qualifiers

    • Research-article
    • Refereed


    Cited By

    • GANtlitz: Ultra High Resolution Generative Model for Multi-Modal Face Textures. Computer Graphics Forum 43, 2 (2024). DOI: 10.1111/cgf.15039
    • MeshWGAN: Mesh-to-Mesh Wasserstein GAN With Multi-Task Gradient Penalty for 3D Facial Geometric Age Transformation. IEEE Transactions on Visualization and Computer Graphics 30, 8 (2024), 4927–4940. DOI: 10.1109/TVCG.2023.3284500
    • High-Fidelity Texture Generation for 3D Avatar Based On the Diffusion Model. In Proc. 16th International Conference on Human System Interaction (HSI 2024), 1–6. DOI: 10.1109/HSI61632.2024.10613538
    • Personalizing human avatars based on realistic 3D facial reconstruction. Multimedia Tools and Applications (2024). DOI: 10.1007/s11042-024-19583-0
    • A survey on generative 3D digital humans based on neural networks: representation, rendering, and learning. SCIENTIA SINICA Informationis 53, 10 (2023), 1858. DOI: 10.1360/SSI-2022-0319
    • EMS: 3D Eyebrow Modeling from Single-View Images. ACM Transactions on Graphics 42, 6 (2023), 1–19. DOI: 10.1145/3618323
    • HACK: Learning a Parametric Head and Neck Model for High-fidelity Animation. ACM Transactions on Graphics 42, 4 (2023), 1–20. DOI: 10.1145/3592093
    • A Perceptual Shape Loss for Monocular 3D Face Reconstruction. Computer Graphics Forum 42, 7 (2023). DOI: 10.1111/cgf.14945
    • Neural Shading Fields for Efficient Facial Inverse Rendering. Computer Graphics Forum 42, 7 (2023). DOI: 10.1111/cgf.14943
    • Makeup Extraction of 3D Representation via Illumination-Aware Image Decomposition. Computer Graphics Forum 42, 2 (2023), 293–307. DOI: 10.1111/cgf.14762
