Extended Abstract
DOI: 10.1145/3640471.3686644

Situated Instructions and Guidance For Self-training and Self-coaching in Sports

Published: 21 September 2024

Abstract

Recent advances in Virtual Reality (VR) and Augmented Reality (AR) have made remote education immersive and three-dimensional, and these technologies are gaining traction in fields such as medicine, entertainment, education, and engineering. Remote sports training has also attracted attention, leading to applications that analyze movements and improve skills through 3D visualization. However, automatic guidance systems remain under-researched, and the limitations of existing motion-capture setups have been widely noted. In this thesis, we propose a combination of video analysis, AR, and activity recognition for remote sports training. Our approach offers two capture modes (egocentric and exocentric) for indoor and outdoor activities and uses computer-vision-based motion estimation instead of wearable sensors that require expensive capture facilities. We explore different visualization modes and plan to develop a deep-learning model that provides automatic guidance during training sessions.
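The abstract leaves the pipeline unspecified, but the core loop it describes (estimating joint positions from egocentric or exocentric video with computer vision, then comparing the trainee's motion against a reference to drive guidance) can be sketched. The Python snippet below is a minimal, hypothetical illustration: it assumes MediaPipe Pose for per-frame joint estimation and a plain dynamic-time-warping comparison for a discrepancy score; the file names, library choices, and scoring are illustrative and are not taken from the paper.

# Hypothetical sketch: vision-based pose extraction from a training video
# and a simple discrepancy score against a reference motion. Assumes
# MediaPipe Pose and OpenCV; the paper's actual capture modes, models,
# and guidance design are not specified in the abstract.
import cv2
import numpy as np
import mediapipe as mp


def extract_pose_sequence(video_path: str) -> np.ndarray:
    """Return an array of shape (frames, 33, 3) of normalized joint positions."""
    pose = mp.solutions.pose.Pose(static_image_mode=False, model_complexity=1)
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, bgr = cap.read()
        if not ok:
            break
        result = pose.process(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))
        if result.pose_landmarks:
            frames.append([[lm.x, lm.y, lm.z] for lm in result.pose_landmarks.landmark])
    cap.release()
    pose.close()
    return np.asarray(frames)


def dtw_discrepancy(trainee: np.ndarray, reference: np.ndarray) -> float:
    """Average joint distance after dynamic-time-warping alignment of two sequences."""
    n, m = len(trainee), len(reference)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(trainee[i - 1] - reference[j - 1])  # joint-wise L2 distance
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m] / (n + m)  # rough normalization by path length


if __name__ == "__main__":
    trainee_seq = extract_pose_sequence("trainee_swing.mp4")      # hypothetical input
    reference_seq = extract_pose_sequence("coach_reference.mp4")  # hypothetical input
    score = dtw_discrepancy(trainee_seq, reference_seq)
    print(f"Motion discrepancy score: {score:.3f}")  # lower = closer to reference

In the proposed system, the planned deep-learning model would presumably take the place of this hand-crafted score, and feedback would be rendered as situated AR guidance rather than printed to the console.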



Published In

MobileHCI '24 Adjunct: Adjunct Proceedings of the 26th International Conference on Mobile Human-Computer Interaction
September 2024
252 pages
ISBN: 9798400705069
DOI: 10.1145/3640471
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Augmented Reality
  2. Self-Coaching
  3. Sport self-training
  4. Virtual Reality

Qualifiers

  • Extended-abstract
  • Research
  • Refereed limited

Conference

MobileHCI '24: 26th International Conference on Mobile Human-Computer Interaction
September 30 - October 3, 2024
Melbourne, VIC, Australia

Acceptance Rates

Overall Acceptance Rate 202 of 906 submissions, 22%

