DOI: 10.1145/3494885.3494941

Monocular Keypoint based Pull-ups Measurement on Strict Pull-ups Benchmark

Published: 20 December 2021
    Abstract

    The pull-up is a standard item for assessing personal physical fitness. Traditional manual assessment is subjective and inefficient, while existing automatic methods place high demands on venues and equipment. In this paper, we propose a monocular vision-based pull-up measurement method built on human keypoint estimation and evaluate it on our proposed strict pull-up benchmark. Specifically, the deep neural network HRNet estimates human keypoints frame by frame; face keypoints are estimated with Dlib on the facial region, while the position of the horizontal bar is located with Canny edge detection and the Hough transform on the hand region. After the keypoint trajectories are smoothed and denoised with a Savitzky-Golay filter, both valid and invalid actions are recognized by our proposed algorithm. On the strict pull-up video dataset we collected, the method achieves an average counting accuracy of 91.5%. The dataset and code will be released soon.
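    The abstract outlines a pipeline (HRNet keypoints, Dlib face keypoints, Canny + Hough bar localisation, Savitzky-Golay smoothing, rule-based counting) but not its exact implementation. Below is a minimal Python sketch of the bar-localisation and smoothing/counting stages using OpenCV and SciPy; the ROI format, the threshold values, and the dead-hang/chin-over-bar state machine are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch of the post-processing stages described in the abstract.
# Per-frame keypoints are assumed to come from a pose model such as HRNet;
# array layouts, thresholds, and function names here are illustrative.
import numpy as np
import cv2
from scipy.signal import savgol_filter


def detect_bar_y(frame_bgr, hand_roi):
    """Estimate the horizontal bar's vertical position inside a hand region
    using Canny edges and a probabilistic Hough transform.
    hand_roi = (x, y, w, h) in pixels (assumed format)."""
    x, y, w, h = hand_roi
    gray = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=w // 2, maxLineGap=10)
    if lines is None:
        return None
    # Keep near-horizontal segments and return their mean y in frame coordinates.
    ys = [(y1 + y2) / 2 for x1, y1, x2, y2 in lines[:, 0] if abs(y2 - y1) <= 5]
    return y + float(np.mean(ys)) if ys else None


def count_pull_ups(chin_y, elbow_angle, bar_y):
    """Smooth the per-frame chin height with a Savitzky-Golay filter and
    count one repetition each time the chin clears the bar after a dead hang
    (arms roughly straight). Threshold values are illustrative."""
    chin_s = savgol_filter(np.asarray(chin_y, dtype=float),
                           window_length=11, polyorder=3)
    above = chin_s < bar_y                       # image y grows downward
    hang = np.asarray(elbow_angle) > 160.0       # arms approximately straight
    count, armed = 0, False
    for a, h in zip(above, hang):
        if h:                 # returned to a dead hang: arm the next rep
            armed = True
        if a and armed:       # chin clears the bar from an armed state
            count += 1
            armed = False
    return count
```

    The Savitzky-Golay filter is a natural choice for this step because it suppresses per-frame keypoint jitter while preserving the peaks and valleys of the chin trajectory that the counting rule depends on.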

    References

    [1] Talal Alatiah and Chen Chen. 2020. Recognizing exercises and counting repetitions in real time. arXiv preprint arXiv:2005.03194 (2020).
    [2] Xavier P. Burgos-Artizzu, Pietro Perona, and Piotr Dollár. 2013. Robust face landmark estimation under occlusion. In Proceedings of the IEEE International Conference on Computer Vision. 1513–1520.
    [3] John Canny. 1986. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 6 (1986), 679–698.
    [4] Xudong Cao, Yichen Wei, Fang Wen, and Jian Sun. 2014. Face alignment by explicit shape regression. International Journal of Computer Vision 107, 2 (2014), 177–190.
    [5] Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh. 2017. Realtime multi-person 2D pose estimation using part affinity fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 7291–7299.
    [6] Steven Chen and Richard R. Yang. 2020. Pose Trainer: Correcting exercise posture using pose estimation. arXiv preprint arXiv:2006.11718 (2020).
    [7] Yucheng Chen, Yingli Tian, and Mingyi He. 2020. Monocular human pose estimation: A survey of deep learning-based methods. Computer Vision and Image Understanding 192 (2020), 102897.
    [8] Xuelian Cheng, Mingyi He, and Weijun Duan. 2017. Machine vision based physical fitness measurement with human posture recognition and skeletal data smoothing. In 2017 International Conference on Orange Technologies (ICOT). IEEE, 7–10.
    [9] Timothy F. Cootes, Gareth J. Edwards, and Christopher J. Taylor. 2001. Active appearance models. IEEE Transactions on Pattern Analysis and Machine Intelligence 23, 6 (2001), 681–685.
    [10] Jiankang Deng, Jia Guo, Yuxiang Zhou, Jinke Yu, Irene Kotsia, and Stefanos Zafeiriou. 2019. RetinaFace: Single-stage dense face localisation in the wild. arXiv preprint arXiv:1905.00641 (2019).
    [11] Richard O. Duda and Peter E. Hart. 1972. Use of the Hough transformation to detect lines and curves in pictures. Commun. ACM 15, 1 (1972), 11–15.
    [12] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. 2017. The Kinetics human action video dataset. arXiv preprint arXiv:1705.06950 (2017).
    [13] Vahid Kazemi and Josephine Sullivan. 2014. One millisecond face alignment with an ensemble of regression trees. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 1867–1874.
    [14] Rushil Khurana, Karan Ahuja, Zac Yu, Jennifer Mankoff, Chris Harrison, and Mayank Goel. 2018. GymCam: Detecting, recognizing and tracking simultaneous exercises in unconstrained scenes. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 2, 4 (2018), 1–17.
    [15] Xiaoming Liu. 2007. Generic face alignment using boosted appearance model. In CVPR, Vol. 7. Citeseer, 1–8.
    [16] Ghanashyama Prabhu, Noel E. O'Connor, and Kieran Moran. 2020. Recognition and repetition counting for LME exercises in exercise-based CVD rehabilitation: A comparative study using artificial intelligence models. (2020).
    [17] Ronald W. Schafer. 2011. What is a Savitzky-Golay filter? [Lecture notes]. IEEE Signal Processing Magazine 28, 4 (2011), 111–117.
    [18] Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. 2012. A dataset of 101 human action classes from videos in the wild. Center for Research in Computer Vision 2, 11 (2012).
    [19] Andrea Soro, Gino Brunner, Simon Tanner, and Roger Wattenhofer. 2019. Recognition and repetition counting for complex physical exercises with deep learning. Sensors 19, 3 (2019), 714.
    [20] Ke Sun, Bin Xiao, Dong Liu, and Jingdong Wang. 2019. Deep high-resolution representation learning for human pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 5693–5703.
    [21] Jonathan J. Tompson, Arjun Jain, Yann LeCun, and Christoph Bregler. 2014. Joint training of a convolutional network and a graphical model for human pose estimation. Advances in Neural Information Processing Systems 27 (2014), 1799–1807.
    [22] Alexander Toshev and Christian Szegedy. 2014. DeepPose: Human pose estimation via deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 1653–1660.
    [23] Shih-En Wei, Varun Ramakrishna, Takeo Kanade, and Yaser Sheikh. 2016. Convolutional pose machines. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 4724–4732.
    [24] Bin Xiao, Haiping Wu, and Yichen Wei. 2018. Simple baselines for human pose estimation and tracking. In Proceedings of the European Conference on Computer Vision (ECCV). 466–481.
    [25] Yuanyuan Xu, Wan Yan, Genke Yang, Jiliang Luo, Tao Li, and Jianan He. 2020. CenterFace: Joint face detection and alignment using face as point. Scientific Programming 2020 (2020).
    [26] Kaipeng Zhang, Zhanpeng Zhang, Zhifeng Li, and Yu Qiao. 2016. Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters 23, 10 (2016), 1499–1503.
    [27] Weiyu Zhang, Menglong Zhu, and Konstantinos G. Derpanis. 2013. From actemes to action: A strongly-supervised representation for detailed action understanding. In Proceedings of the IEEE International Conference on Computer Vision. 2248–2255.

    Published In

    CSSE '21: Proceedings of the 4th International Conference on Computer Science and Software Engineering
    October 2021
    366 pages
    ISBN:9781450390675
    DOI:10.1145/3494885
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. Deep learning
    2. Human keypoint estimation
    3. Pull-ups
    4. Pull-ups benchmark

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Funding Sources

    • Taicang Keypoint Science and Technology Plan

    Conference

    CSSE 2021

    Acceptance Rates

    Overall Acceptance Rate 33 of 74 submissions, 45%

