
Exploring Uni-manual Around Ear Off-Device Gestures for Earables

Published: 06 March 2024

Abstract

The small form factor of earable (i.e., ear-mounted wearable) devices limits their physical input space. Off-device earable input, using uni-manual gestures in alternate mid-air and on-skin interaction spaces around the ear, can address this limitation. Segmenting these alternate interaction spaces into multiple gesture regions, so that the same off-device gestures can be reused across regions, can expand the earable input vocabulary by a large margin. Although prior earable interaction research has explored off-device gesture preferences and recognition techniques in such interaction spaces, supporting gesture reuse over multiple gesture regions needs further exploration. We collected and analyzed motion data from 7,560 uni-manual gestures performed by 18 participants to explore earable gesture reuse through segmentation of the on-skin and mid-air spaces around the ear. Our results show that gesture performance degrades significantly beyond 3 mid-air and 5 on-skin around-ear gesture regions for different uni-manual gesture classes (e.g., swipe, pinch, tap). We also present qualitative findings on the regions (and associated boundaries) that end-users most and least preferred for different uni-manual gesture shapes across both interaction spaces. Our results complement earlier elicitation studies and interaction technologies for earables, helping to expand the gestural input vocabulary and potentially drive future commercialization of such devices.


Cited By

  • (2024) Exploring User-Defined Gestures as Input for Hearables and Recognizing Ear-Level Gestures with IMUs. Proceedings of the ACM on Human-Computer Interaction 8, MHCI, 1--23. https://doi.org/10.1145/3676503. Online publication date: 24-Sep-2024.


Published In

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 8, Issue 1
March 2024, 1182 pages
EISSN: 2474-9567
DOI: 10.1145/3651875

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Ear-based Interaction
  2. Earables
  3. Embodied Interaction
  4. Input Techniques
  5. Touch Surfaces
  6. Uni-manual Interaction

Qualifiers

  • Research-article
  • Research
  • Refereed
