research-article
Open access

ViSig: Automatic Interpretation of Visual Body Signals Using On-Body Sensors

Published: 28 March 2023

Abstract

Visual body signals are designated body poses that deliver an application-specific message. Such signals are widely used for fast message communication in sports (signaling by umpires and referees), transportation (naval officers and aircraft marshallers), and construction (signaling by riggers and crane operators), to list a few examples. Automatic interpretation of such signals can help maintain safer operations in these industries, support record-keeping for auditing or accident investigation, and serve as a score-keeper in sports. When automation is desired, interpretation is traditionally performed from a viewer's perspective by running computer vision algorithms on camera feeds. However, computer-vision-based approaches suffer from performance deterioration under lighting variations and occlusions, may face resolution limitations, and can be challenging to install. Our work, ViSig, breaks with tradition by instead deploying on-body sensors for signal interpretation. Our key innovation is the fusion of ultra-wideband (UWB) sensors for capturing on-body distance measurements, inertial sensors (IMUs) for capturing the orientation of a few body segments, and photodiodes for finger signal recognition, enabling robust interpretation of signals. By deploying only a small number of sensors, we show that body signals can be interpreted unambiguously in many different settings, including games of Cricket, Baseball, and Football, and operational safety use cases such as crane operations and flag semaphores for maritime navigation, with > 90% accuracy. Overall, we see substantial promise in this approach and expect a large body of follow-on work to adopt fused UWB and IMU modalities for more general human pose estimation problems.
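
The fusion idea described above can be made concrete with a small sketch. The Python code below combines the three modalities named in the abstract (UWB inter-node distances, IMU segment orientations, and photodiode finger readings) into a single feature vector and matches it against per-signal templates. This is an illustrative assumption, not the authors' pipeline: the feature layout, the nearest-template classifier, and all names and values (build_feature_vector, classify_signal, the crane-signal templates) are hypothetical.

# Minimal sketch (assumption, not the authors' implementation): fuse the three
# on-body modalities described in the abstract -- UWB inter-node distances,
# IMU segment orientations, and photodiode finger readings -- into one feature
# vector and match it against per-signal calibration templates.

import math
from typing import Dict, List, Sequence, Tuple

def build_feature_vector(uwb_distances: Sequence[float],
                         imu_orientations: Sequence[Tuple[float, float, float, float]],
                         photodiode_levels: Sequence[float]) -> List[float]:
    """Concatenate UWB distances (metres), one unit quaternion (w, x, y, z) per
    instrumented body segment, and normalised photodiode readings per finger."""
    features: List[float] = list(uwb_distances)
    for quat in imu_orientations:
        features.extend(quat)
    features.extend(photodiode_levels)
    return features

def classify_signal(features: Sequence[float],
                    templates: Dict[str, Sequence[float]]) -> str:
    """Return the label of the calibration template closest in Euclidean distance."""
    def dist(a: Sequence[float], b: Sequence[float]) -> float:
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda label: dist(features, templates[label]))

if __name__ == "__main__":
    # Hypothetical calibration templates for two crane-signalling poses
    # (3 UWB distances + 1 quaternion + 2 finger readings = 9 features each).
    templates = {
        "hoist": [0.45, 0.80, 1.10, 1.0, 0.0, 0.0, 0.0, 0.2, 0.9],
        "stop":  [0.90, 0.85, 0.40, 0.7, 0.0, 0.7, 0.0, 0.8, 0.1],
    }
    observed = build_feature_vector(
        uwb_distances=[0.47, 0.78, 1.05],
        imu_orientations=[(0.99, 0.05, 0.0, 0.10)],
        photodiode_levels=[0.25, 0.85],
    )
    print(classify_signal(observed, templates))  # -> "hoist"

The actual ViSig system presumably relies on calibrated sensor placements and a trained recognizer; the sketch only illustrates the shape of the fused multimodal input.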

Supplemental Material

ZIP File - cao
Supplemental movie, appendix, image, and software files for ViSig: Automatic Interpretation of Visual Body Signals Using On-Body Sensors



Published In

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 7, Issue 1
March 2023
1243 pages
EISSN: 2474-9567
DOI: 10.1145/3589760
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 28 March 2023
Published in IMWUT Volume 7, Issue 1

Author Tags

  1. IMU
  2. UWB
  3. body signals
  4. fallback communication
  5. gestures
  6. on-body sensors
  7. postures
  8. sports automation
  9. visual signalling

Qualifiers

  • Research-article
  • Research
  • Refereed

Article Metrics

  • Downloads (Last 12 months): 354
  • Downloads (Last 6 weeks): 45
Reflects downloads up to 04 Oct 2024

Cited By

  • (2024) EarSpeech: Exploring In-Ear Occlusion Effect on Earphones for Data-efficient Airborne Speech Enhancement. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8, 3, 1-30. https://doi.org/10.1145/3678594. Online publication date: 9-Sep-2024.
  • (2024) Energy and QoE Optimization for Mobile Video Streaming with Adaptive Brightness Scaling. ACM Transactions on Sensor Networks 20, 4, 1-24. https://doi.org/10.1145/3670999. Online publication date: 8-Jul-2024.
  • (2024) BrailleReader: Braille Character Recognition Using Wearable Motion Sensor. IEEE Transactions on Mobile Computing 23, 11, 10538-10553. https://doi.org/10.1109/TMC.2024.3379569. Online publication date: Nov-2024.
  • (2023) ModBand: Design of a Modular Headband for Multimodal Data Collection and Inference. Adjunct Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, 1-3. https://doi.org/10.1145/3586182.3616682. Online publication date: 29-Oct-2023.
  • (2023) HeadTrack: Real-Time Human–Computer Interaction via Wireless Earphones. IEEE Journal on Selected Areas in Communications 42, 4, 990-1002. https://doi.org/10.1109/JSAC.2023.3345381. Online publication date: 25-Dec-2023.
