DOI: 10.1145/3610548.3618247

GroundLink: A Dataset Unifying Human Body Movement and Ground Reaction Dynamics

Published: 11 December 2023

Abstract

The physical plausibility of human motion is vital to applications in fields including, but not limited to, graphics, animation, robotics, vision, biomechanics, and sports science. While fully simulating human motion with physics is extremely challenging, we hypothesize that this complexity can be treated as a black box in a data-driven manner if we focus on ground contact and have sufficient real-world observations of physics and human activity. To test this hypothesis, we present GroundLink, a unified dataset comprising measured ground reaction forces (GRF) and centers of pressure (CoP) synchronized with standard kinematic motion capture. The GRF and CoP in GroundLink are not simulated but captured at high temporal resolution by force platforms embedded in the ground, for uncompromised measurement accuracy. The dataset contains 368 processed motion trials (∼1.59M recorded frames) spanning 19 movements, including locomotion and weight-shifting actions such as tennis swings, underscoring the importance of capturing physics paired with kinematics. GroundLinkNet, our benchmark neural network trained on GroundLink, supports our hypothesis by predicting GRFs and CoPs accurately and plausibly for unseen motions from various sources. The dataset, code, and benchmark models are publicly available for further research on downstream tasks leveraging this rich physics information at https://csr.bu.edu/groundlink/.
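The hypothesized link between kinematics and ground reaction dynamics has a simple physical anchor: by Newton's second law applied to the whole body, the center-of-mass acceleration determines the net vertical GRF. The sketch below is only an illustration of that relationship (it is not part of the paper's method or released code); the function name and signature are hypothetical.

```python
G = 9.81  # standard gravitational acceleration (m/s^2)

def vertical_grf(mass_kg: float, com_accel_z: float) -> float:
    """Net vertical ground reaction force implied by center-of-mass motion.

    Newton's second law for the whole body gives F_z - m*g = m*a_z,
    so F_z = m * (g + a_z). The value is clamped at zero because the
    ground can push but never pull: F_z vanishes during flight phases.
    """
    return max(0.0, mass_kg * (G + com_accel_z))

# Quiet standing (a_z = 0): the force plate reads body weight, ~686.7 N for 70 kg.
standing = vertical_grf(70.0, 0.0)
# Free fall (a_z = -g): zero GRF, the body is airborne.
airborne = vertical_grf(70.0, -G)
```

Learned models such as GroundLinkNet go beyond this single-rigid-body identity by predicting per-foot GRF and CoP from full-body pose, but the identity shows why kinematics carries enough signal to make that prediction plausible.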

Supplemental Material

MP4 and PDF files: supplementary video and document.


Cited By

  • Motion-Driven Neural Optimizer for Prophylactic Braces Made by Distributed Microstructures. SIGGRAPH Asia 2024 Conference Papers, 1–11. https://doi.org/10.1145/3680528.3687661. Published 3 December 2024.
  • SolePoser: Full Body Pose Estimation using a Single Pair of Insole Sensor. Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 1–9. https://doi.org/10.1145/3654777.3676418. Published 13 October 2024.
  • AddBiomechanics Dataset: Capturing the Physics of Human Motion at Scale. Computer Vision – ECCV 2024, 490–508. https://doi.org/10.1007/978-3-031-73223-2_27. Published 8 November 2024.

    Published In

    SA '23: SIGGRAPH Asia 2023 Conference Papers
    December 2023
    1113 pages
    ISBN:9798400703157
    DOI:10.1145/3610548


Publisher

Association for Computing Machinery, New York, NY, United States



    Author Tags

    1. motion capture datasets
    2. neural network

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Funding Sources

    • Sloan Foundation

    Conference

SA '23: SIGGRAPH Asia 2023
December 12–15, 2023
Sydney, NSW, Australia

    Acceptance Rates

    Overall Acceptance Rate 178 of 869 submissions, 20%

    Article Metrics

• Downloads (last 12 months): 206
    • Downloads (last 6 weeks): 14
    Reflects downloads up to 18 Jan 2025
