
Real-time controllable motion transition for characters

Published: 22 July 2022
Abstract

    Real-time in-between motion generation is universally required in games and highly desirable in existing animation pipelines. Its core challenge is the need to satisfy three critical conditions simultaneously: quality, controllability, and speed, which rules out any method that requires offline computation (or post-processing) or cannot incorporate (often unpredictable) user control. To this end, we propose a new real-time transition method that addresses these challenges. Our approach consists of two key components: a motion manifold and conditional transitioning. The former learns important low-level motion features and their dynamics, while the latter synthesizes transitions conditioned on a target frame and a desired transition duration. We first learn a motion manifold that explicitly models the intrinsic transition stochasticity of human motion via a multi-modal mapping mechanism. Then, during generation, we design a transition model that is essentially a strategy for sampling from the learned manifold, conditioned on the target frame and the desired transition duration. We validate our method on different datasets in tasks where no post-processing or offline computation is allowed. Through exhaustive evaluation and comparison, we show that our method generates high-quality motions as measured by multiple metrics, and that it remains robust across a wide variety of target frames, including extreme cases.
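
    The two components described above map naturally onto a decoder plus a conditional sampler. Below is a minimal, self-contained PyTorch sketch of that structure: an autoregressive loop that, at every frame, samples a latent code conditioned on the target frame and the number of frames remaining, then decodes the next pose. All module names, network shapes, and the pose representation here are illustrative assumptions for exposition, not the paper's actual architecture, and the models are untrained stand-ins.

    import torch
    import torch.nn as nn

    class ManifoldDecoder(nn.Module):
        """Stand-in for the learned motion manifold: decodes the next pose from
        the previous pose and a latent sample (the latent carries the
        multi-modal transition stochasticity)."""
        def __init__(self, pose_dim, latent_dim, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(pose_dim + latent_dim, hidden), nn.ELU(),
                nn.Linear(hidden, pose_dim))

        def forward(self, prev_pose, z):
            # Predict a pose offset, so staying still is trivial to represent.
            return prev_pose + self.net(torch.cat([prev_pose, z], dim=-1))

    class TransitionSampler(nn.Module):
        """Stand-in for the conditional transition model: picks a latent code
        given the current pose, the target frame, and the frames remaining."""
        def __init__(self, pose_dim, latent_dim, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(2 * pose_dim + 1, hidden), nn.ELU(),
                nn.Linear(hidden, 2 * latent_dim))  # mean and log-variance

        def forward(self, pose, target, frames_left):
            mu, logvar = self.net(
                torch.cat([pose, target, frames_left], dim=-1)).chunk(2, dim=-1)
            # Reparameterized sample keeps the multi-modality of transitions.
            return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    @torch.no_grad()
    def generate_transition(decoder, sampler, start, target, duration):
        """Autoregressively sample one frame at a time; the duration condition
        shrinks each step so the model can pace itself toward the target."""
        poses, pose = [], start
        for t in range(duration):
            frames_left = torch.full_like(start[..., :1], float(duration - t))
            z = sampler(pose, target, frames_left)  # sample from the manifold
            pose = decoder(pose, z)                 # decode the next frame
            poses.append(pose)
        return torch.stack(poses, dim=-2)

    # Usage: a 64-D pose vector, a 30-frame transition (0.5 s at 60 fps).
    dec, smp = ManifoldDecoder(64, 16), TransitionSampler(64, 16)
    clip = generate_transition(dec, smp, torch.zeros(1, 64), torch.ones(1, 64), 30)
    print(clip.shape)  # torch.Size([1, 30, 64])

    Conditioning every step on the shrinking frame budget is what would let a single model reach the target frame on schedules of different lengths, which is the controllability requirement the abstract emphasizes.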

    Supplementary Material

    VTT File (3528223.3530090.vtt)
    MP4 File (137-284-supp-video.mp4): supplemental material
    MP4 File (3528223.3530090.mp4): presentation





    Published In

    ACM Transactions on Graphics, Volume 41, Issue 4
    July 2022
    1978 pages
    ISSN:0730-0301
    EISSN:1557-7368
    DOI:10.1145/3528223
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 22 July 2022
    Published in TOG Volume 41, Issue 4


    Author Tags

    1. animation
    2. conditional transitioning
    3. deep learning
    4. in-betweening
    5. locomotion
    6. motion manifold
    7. real-time

    Qualifiers

    • Research-article

    Funding Sources

    • the Ningbo Major Special Projects of the “Science and Technology Innovation 2025”
    • the Key Research and Development Program of Zhejiang Province
    • the National Natural Science Foundation of China


    Cited By

    • (2024) Machine Learning-Based Hand Pose Generation Using a Haptic Controller. Electronics 13, 10 (1970). https://doi.org/10.3390/electronics13101970. Online publication date: 17-May-2024.
    • (2024) TEDi: Temporally-Entangled Diffusion for Long-Term Motion Synthesis. ACM SIGGRAPH 2024 Conference Papers, 1-11. https://doi.org/10.1145/3641519.3657515. Online publication date: 13-Jul-2024.
    • (2024) Human Motion Aware Text-to-Video Generation with Explicit Camera Control. 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 5069-5078. https://doi.org/10.1109/WACV57701.2024.00500. Online publication date: 3-Jan-2024.
    • (2024) Motion In-Betweening via Deep Δ-Interpolator. IEEE Transactions on Visualization and Computer Graphics 30, 8, 5693-5704. https://doi.org/10.1109/TVCG.2023.3309107. Online publication date: Aug-2024.
    • (2024) A Two-Part Transformer Network for Controllable Motion Synthesis. IEEE Transactions on Visualization and Computer Graphics 30, 8, 5047-5062. https://doi.org/10.1109/TVCG.2023.3284402. Online publication date: Aug-2024.
    • (2024) Crowd-sourced Evaluation of Combat Animations. 2024 IEEE International Conference on Artificial Intelligence and eXtended and Virtual Reality (AIxVR), 60-65. https://doi.org/10.1109/AIxVR59861.2024.00015. Online publication date: 17-Jan-2024.
    • (2023) Neural Motion Graph. SIGGRAPH Asia 2023 Conference Papers, 1-11. https://doi.org/10.1145/3610548.3618181. Online publication date: 10-Dec-2023.
    • (2023) Motion In-Betweening with Phase Manifolds. Proceedings of the ACM on Computer Graphics and Interactive Techniques 6, 3, 1-17. https://doi.org/10.1145/3606921. Online publication date: 24-Aug-2023.
    • (2023) RSMT: Real-time Stylized Motion Transition for Characters. ACM SIGGRAPH 2023 Conference Proceedings, 1-10. https://doi.org/10.1145/3588432.3591514. Online publication date: 23-Jul-2023.
    • (2023) Synthesizing Diverse Human Motions in 3D Indoor Scenes. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 14692-14703. https://doi.org/10.1109/ICCV51070.2023.01354. Online publication date: 1-Oct-2023.
