Realtime style transfer for unlabeled heterogeneous human motion

Published: 27 July 2015

Abstract

This paper presents a novel solution for realtime generation of stylistic human motion that automatically transforms unlabeled, heterogeneous motion data into new styles. The key idea of our approach is an online learning algorithm that automatically constructs a series of local mixtures of autoregressive models (MAR) to capture the complex relationships between styles of motion. We construct local MAR models on the fly by searching for the closest examples of each input pose in the database. Once the model parameters are estimated from the training data, the model adapts the current pose with simple linear transformations. In addition, we introduce an efficient local regression model to predict the timings of synthesized poses in the output style. We demonstrate the power of our approach by transferring stylistic human motion for a wide variety of actions, including walking, running, punching, kicking, jumping and transitions between those behaviors. Our method achieves superior performance in a comparison against alternative methods. We have also performed experiments to evaluate the generalization ability of our data-driven model as well as the key components of our system.
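
The abstract summarizes the pipeline at a high level: nearest-neighbor retrieval of similar example poses, on-the-fly estimation of a local model, and a simple linear transformation of the current pose. The sketch below (Python/NumPy) illustrates the general shape of such an online local-regression step; the function names, the ridge-regularized affine fit, and the toy data are illustrative assumptions, not the paper's actual MAR formulation or code, and it assumes a database of temporally registered neutral/styled pose pairs.

```python
# Hedged sketch of online local regression for pose style transfer.
# Names and the single affine fit are illustrative assumptions; the paper
# instead fits a local mixture of autoregressive models (MAR) per frame.
import numpy as np

def knn_indices(query, database, k):
    """Indices of the k database poses closest to the query (Euclidean)."""
    dists = np.linalg.norm(database - query, axis=1)
    return np.argsort(dists)[:k]

def fit_local_affine(x_src, y_tgt, ridge=1e-3):
    """Ridge-regularized affine map y ~= [x, 1] @ A, estimated only from
    the retrieved neighborhood (the local model built on the fly)."""
    X = np.hstack([x_src, np.ones((len(x_src), 1))])  # add bias column
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y_tgt)

def transfer_pose(pose, neutral_db, styled_db, k=20):
    """Stylize one input pose: neighbor search, local fit, linear transform."""
    idx = knn_indices(pose, neutral_db, k)
    A = fit_local_affine(neutral_db[idx], styled_db[idx])
    return np.append(pose, 1.0) @ A

# Toy usage with random data standing in for a registered motion database.
rng = np.random.default_rng(0)
neutral_db = rng.standard_normal((500, 42))   # 500 example poses, 42-D vectors
styled_db = 1.1 * neutral_db + 0.05           # synthetic "styled" counterparts
stylized = transfer_pose(neutral_db[0], neutral_db, styled_db)
print(stylized.shape)                         # (42,)
```

In the paper the local model is a mixture of autoregressive models rather than a single affine map, and a separate local regression predicts the timing of each synthesized pose in the output style; a realtime implementation also has to keep the per-frame neighbor search and model fit cheap enough to run online.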

Supplementary Material

• ZIP File (a119-xia.zip): supplemental files
• MP4 File (a119.mp4)

    Published In

    ACM Transactions on Graphics  Volume 34, Issue 4
    August 2015
    1307 pages
    ISSN:0730-0301
    EISSN:1557-7368
    DOI:10.1145/2809654
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 27 July 2015
    Published in TOG Volume 34, Issue 4

    Author Tags

    1. character animation
    2. data-driven motion synthesis
    3. online local regression
    4. realtime style transfer

    Qualifiers

    • Research-article

    Article Metrics

    • Downloads (Last 12 months): 170
    • Downloads (Last 6 weeks): 27
    Reflects downloads up to 23 Dec 2024

    Cited By

    • (2024) ASMNet: Action and Style-Conditioned Motion Generative Network for 3D Human Motion Generation. Cyborg and Bionic Systems, 5. DOI: 10.34133/cbsystems.0090. Online publication date: 6-Feb-2024.
    • (2024) 3D Character Animation and Asset Generation Using Deep Learning. Applied Sciences, 14(16), 7234. DOI: 10.3390/app14167234. Online publication date: 16-Aug-2024.
    • (2024) Generative Motion Stylization of Cross-structure Characters within Canonical Motion Space. Proceedings of the 32nd ACM International Conference on Multimedia, 7018-7026. DOI: 10.1145/3664647.3680864. Online publication date: 28-Oct-2024.
    • (2024) WalkTheDog: Cross-Morphology Motion Alignment via Phase Manifolds. ACM SIGGRAPH 2024 Conference Papers, 1-10. DOI: 10.1145/3641519.3657508. Online publication date: 13-Jul-2024.
    • (2024) Diffusion-based Human Motion Style Transfer with Semantic Guidance. Computer Graphics Forum. DOI: 10.1111/cgf.15169. Online publication date: 9-Oct-2024.
    • (2024) Machine Learning Approaches for 3D Motion Synthesis and Musculoskeletal Dynamics Estimation: A Survey. IEEE Transactions on Visualization and Computer Graphics, 30(8), 5810-5829. DOI: 10.1109/TVCG.2023.3308753. Online publication date: 1-Aug-2024.
    • (2024) A Two-Part Transformer Network for Controllable Motion Synthesis. IEEE Transactions on Visualization and Computer Graphics, 30(8), 5047-5062. DOI: 10.1109/TVCG.2023.3284402. Online publication date: 1-Aug-2024.
    • (2024) Pose-Aware Attention Network for Flexible Motion Retargeting by Body Part. IEEE Transactions on Visualization and Computer Graphics, 30(8), 4792-4808. DOI: 10.1109/TVCG.2023.3277918. Online publication date: 1-Aug-2024.
    • (2024) StyleVR: Stylizing Character Animations With Normalizing Flows. IEEE Transactions on Visualization and Computer Graphics, 30(7), 4183-4196. DOI: 10.1109/TVCG.2023.3259183. Online publication date: 1-Jul-2024.
    • (2024) Motion-STUDiO: Motion Style Transfer Utilized for Dancing Operation by Considering Both Style and Dance Features. 2024 International Conference on Consumer Electronics - Taiwan (ICCE-Taiwan), 127-128. DOI: 10.1109/ICCE-Taiwan62264.2024.10674583. Online publication date: 9-Jul-2024.