DOI: 10.1145/3491102.3502105
CHI Conference Proceedings · Research Article · Open Access

ControllerPose: Inside-Out Body Capture with VR Controller Cameras

Published: 28 April 2022

Abstract

We present a new and practical method for capturing user body pose in virtual reality experiences: integrating cameras into handheld controllers, where batteries, computation, and wireless communication already exist. Because the hands operate in front of the user during many VR interactions, our controller-borne cameras can capture a superior view of the body for digitization. Our pipeline composites multiple camera views together, performs 3D body pose estimation, uses this data to drive a rigged human model with inverse kinematics, and exposes the resulting user avatar to end-user applications. We developed a series of demo applications illustrating the potential of our approach, including more leg-centric interactions such as balancing games and kicking soccer balls. We describe our proof-of-concept hardware and software, as well as results from our user study, which point to imminent feasibility.
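To make the inverse-kinematics stage of such a pipeline concrete, the sketch below shows a classic two-bone analytic IK solver in 2D. This is a standard textbook technique given for illustration only, not the authors' implementation; the function names and bone lengths are hypothetical.

```python
import math

def two_bone_ik(tx, ty, l1=0.30, l2=0.25):
    """Analytic two-bone IK in 2D: return (shoulder, elbow) joint angles
    in radians that place the end of the chain at the target (tx, ty)."""
    d = min(math.hypot(tx, ty), l1 + l2)  # clamp unreachable targets
    # Elbow bend from the law of cosines (0 = fully extended).
    cos_elbow = (d * d - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder: aim at the target, then subtract the interior angle
    # between the upper bone and the shoulder-to-target line.
    cos_inner = (l1 * l1 + d * d - l2 * l2) / (2.0 * l1 * d)
    shoulder = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_inner)))
    return shoulder, elbow

def forward(shoulder, elbow, l1=0.30, l2=0.25):
    """Forward kinematics, used here to verify the IK solution."""
    return (l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow),
            l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow))

# Round trip: solve IK for a reachable target, then confirm with FK.
x, y = forward(*two_bone_ik(0.30, 0.25))  # recovers (0.30, 0.25)
```

A full-body rig generalizes this idea to 3D joint chains per limb, with the estimated 3D body keypoints serving as the IK targets that drive the avatar.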

Supplementary Material

MP4 File (3491102.3502105-video-preview.mp4)
Video Preview
MP4 File (3491102.3502105-video-figure.mp4)
Video Figure



      Published In

      CHI '22: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems
      April 2022
      10459 pages
      ISBN:9781450391573
      DOI:10.1145/3491102
      This work is licensed under a Creative Commons Attribution International 4.0 License.

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. Motion Capture
      2. Pose Tracking
      3. Virtual Reality

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Conference

CHI '22: CHI Conference on Human Factors in Computing Systems
April 29 - May 5, 2022
New Orleans, LA, USA

      Acceptance Rates

      Overall Acceptance Rate 6,199 of 26,314 submissions, 24%


      Cited By

      • (2024) Investigating Creation Perspectives and Icon Placement Preferences for On-Body Menus in Virtual Reality. Proceedings of the ACM on Human-Computer Interaction 8(ISS), 236–254. https://doi.org/10.1145/3698136. Online publication date: 24-Oct-2024.
      • (2024) Above-Screen Fingertip Tracking and Hand Representation for Precise Touch Input with a Phone in Virtual Reality. Proceedings of the 50th Graphics Interface Conference, 1–15. https://doi.org/10.1145/3670947.3670961. Online publication date: 3-Jun-2024.
      • (2024) MobilePoser: Real-Time Full-Body Pose Estimation and 3D Human Translation from IMUs in Mobile Consumer Devices. Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 1–11. https://doi.org/10.1145/3654777.3676461. Online publication date: 13-Oct-2024.
      • (2024) SeamPose: Repurposing Seams as Capacitive Sensors in a Shirt for Upper-Body Pose Tracking. Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 1–13. https://doi.org/10.1145/3654777.3676341. Online publication date: 13-Oct-2024.
      • (2024) WAVE: Anticipatory Movement Visualization for VR Dancing. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–9. https://doi.org/10.1145/3613904.3642145. Online publication date: 11-May-2024.
      • (2024) Ecological Validity and the Evaluation of Avatar Facial Animation Noise. 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), 72–79. https://doi.org/10.1109/VRW62533.2024.00019. Online publication date: 16-Mar-2024.
      • (2024) Head Pose Estimation Using a Chest-Mounted Camera and Its Evaluation Based on CG Images. 2024 Joint 13th International Conference on Soft Computing and Intelligent Systems and 25th International Symposium on Advanced Intelligent Systems (SCIS&ISIS), 1–4. https://doi.org/10.1109/SCISISIS61014.2024.10760076. Online publication date: 9-Nov-2024.
      • (2024) System Architecture for VR Yoga Therapy Platform with 6-DoF Whole-Body Avatar Tracking. 2024 IEEE International Conference on Artificial Intelligence and eXtended and Virtual Reality (AIxVR), 360–366. https://doi.org/10.1109/AIxVR59861.2024.00062. Online publication date: 17-Jan-2024.
      • (2024) VRmonic: A VR Piano Playing Form Trainer. 2024 IEEE International Conference on Artificial Intelligence and eXtended and Virtual Reality (AIxVR), 330–334. https://doi.org/10.1109/AIxVR59861.2024.00056. Online publication date: 17-Jan-2024.
      • (2024) MLE-Loss Driven Robust Hand Pose Estimation. IEEE Access 12, 99794–99805. https://doi.org/10.1109/ACCESS.2024.3429531. Online publication date: 2024.
