DOI: 10.1145/3613904.3642613
CHI Conference Proceedings · Research Article

EyeEcho: Continuous and Low-power Facial Expression Tracking on Glasses

Published: 11 May 2024

Abstract

In this paper, we introduce EyeEcho, a minimally obtrusive acoustic sensing system designed to enable glasses to continuously monitor facial expressions. It uses two pairs of speakers and microphones mounted on the glasses to emit encoded inaudible acoustic signals directed toward the face, capturing the subtle skin deformations associated with facial expressions. The reflected signals are processed through a customized machine-learning pipeline to estimate full facial movements. EyeEcho samples at 83.3 Hz with a relatively low power consumption of 167 mW. Our user study involving 12 participants demonstrates that, with just four minutes of training data, EyeEcho achieves highly accurate tracking performance across different real-world scenarios, including sitting, walking, and after remounting the device. Additionally, a semi-in-the-wild study involving 10 participants further validates EyeEcho's performance in naturalistic settings while participants engage in various daily activities. Finally, we showcase EyeEcho's potential to be deployed on a commercial off-the-shelf (COTS) smartphone, offering real-time facial expression tracking.
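The abstract outlines the pipeline at a high level: each speaker repeatedly emits an encoded inaudible signal, and the echoes reflected off the skin are correlated against the transmitted template to form features for a learned model. The exact encoding and parameters are not given in the abstract, so the sketch below is a minimal, hypothetical Python illustration: it assumes an FMCW-style chirp in a near-inaudible band, and its constants (a 50 kHz audio rate, a 16–20 kHz sweep, a 600-sample frame) are illustrative assumptions chosen so the frame rate works out to the 83.3 Hz tracking rate reported above (50,000 / 600 ≈ 83.3). All names and values here are the editor's assumptions, not the authors' implementation.

```python
import numpy as np

FS = 50_000              # assumed audio sample rate (Hz); not stated in the abstract
F0, F1 = 16_000, 20_000  # assumed near-inaudible sweep band (Hz)
FRAME = 600              # samples per frame: 50_000 / 600 ~= 83.3 frames/s,
                         # matching the 83.3 Hz tracking rate reported above

def make_chirp(fs: int = FS, n: int = FRAME, f0: float = F0, f1: float = F1) -> np.ndarray:
    """Linear FMCW chirp that each speaker would repeat every frame (assumed encoding)."""
    t = np.arange(n) / fs
    k = (f1 - f0) / (n / fs)                      # sweep rate in Hz/s
    return np.sin(2 * np.pi * (f0 * t + 0.5 * k * t * t))

def echo_profile(mic_frame: np.ndarray, chirp: np.ndarray) -> np.ndarray:
    """Cross-correlate one received frame with the transmitted template.

    Peaks correspond to reflection paths; frame-to-frame differences of this
    profile would capture the skin deformations the system tracks.
    """
    x = mic_frame - mic_frame.mean()
    corr = np.correlate(x, chirp, mode="full")
    return corr[len(chirp) - 1:]                  # keep non-negative lags only

if __name__ == "__main__":
    chirp = make_chirp()
    # Toy received frame: an attenuated echo delayed by 25 samples plus noise.
    rx = 0.2 * np.roll(chirp, 25) + 0.01 * np.random.randn(FRAME)
    profile = echo_profile(rx, chirp)
    print("strongest echo at lag", int(np.argmax(np.abs(profile))))  # ~= 25
```

A sequence of such echo profiles (one per 83.3 Hz frame, per speaker-microphone pair) would then be fed to the customized machine-learning model that regresses the facial movement parameters; that model is not sketched here.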

Supplemental Material

  • Video Preview (MP4, with transcript)
  • Video Presentation (MP4, with transcript)




Published In

CHI '24: Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems
May 2024, 18961 pages
ISBN: 9798400703300
DOI: 10.1145/3613904

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. Acoustic Sensing
  2. Eye-mounted Wearable
  3. Facial Expression Tracking
  4. Low-power

Qualifiers

  • Research-article
  • Research
  • Refereed limited


Conference

CHI '24

Acceptance Rates

Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%



Cited By

  • SonicID: User Identification on Smart Glasses with Acoustic Sensing. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8(4), 1–27 (2024). https://doi.org/10.1145/3699734
  • MunchSonic: Tracking Fine-grained Dietary Actions through Active Acoustic Sensing on Eyeglasses. Proceedings of the 2024 ACM International Symposium on Wearable Computers, 96–103 (2024). https://doi.org/10.1145/3675095.3676619
  • Unvoiced: Designing an LLM-assisted Unvoiced User Interface using Earables. Proceedings of the 22nd ACM Conference on Embedded Networked Sensor Systems, 784–798 (2024). https://doi.org/10.1145/3666025.3699374
  • SeamPose: Repurposing Seams as Capacitive Sensors in a Shirt for Upper-Body Pose Tracking. Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 1–13 (2024). https://doi.org/10.1145/3654777.3676341
  • GazeTrak: Exploring Acoustic-based Eye Tracking on a Glass Frame. Proceedings of the 30th Annual International Conference on Mobile Computing and Networking, 497–512 (2024). https://doi.org/10.1145/3636534.3649376
  • EchoWrist: Continuous Hand Pose Tracking and Hand-Object Interaction Recognition Using Low-Power Active Acoustic Sensing On a Wristband. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–21 (2024). https://doi.org/10.1145/3613904.3642910
