DOI: 10.1145/3332165.3347881 · Research Article

GesturePod: Enabling On-device Gesture-based Interaction for White Cane Users

Published: 17 October 2019

Abstract

People who use white canes for navigation find it challenging to concurrently access devices such as smartphones. Building on prior research on the abandonment of specialized devices, we explore a new touch-free mode of interaction wherein a person with visual impairment (VI) can perform gestures on their existing white cane to trigger tasks on their smartphone. We present GesturePod, an easy-to-integrate device that clips onto any white cane and detects gestures performed with the cane. With GesturePod, a user can perform common smartphone tasks without touching the phone or even removing it from their pocket or bag. We discuss the challenges in building the device and our design choices. We propose a novel, efficient machine learning pipeline to train and deploy the gesture recognition model. Our in-lab study shows that GesturePod achieves 92% gesture recognition accuracy and can help users perform common smartphone tasks faster. Our in-the-wild study suggests that GesturePod is a promising tool for improving smartphone access for people with VI, especially in constrained outdoor scenarios.
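To make the abstract's "efficient machine learning pipeline" concrete: the published system recognizes gestures from cane-mounted inertial sensors on a microcontroller with only a few kilobytes of memory. The sketch below is a minimal illustration of the general shape of such a pipeline (windowed accelerometer/gyroscope readings reduced to small feature vectors and fed to a compact nearest-neighbour classifier), not the authors' implementation; the sampling rate, window length, feature set, gesture labels, and the use of scikit-learn's KNeighborsClassifier are all illustrative assumptions.

```python
# A minimal sketch, NOT the authors' implementation: classify cane
# gestures from windowed IMU (accelerometer + gyroscope) data using
# simple per-axis statistics and a small kNN classifier. Sampling
# rate, window length, feature set, and labels are assumed values.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

WINDOW = 128  # samples per gesture window (assumed: ~1.3 s at 100 Hz)

def featurize(window):
    """Reduce a (WINDOW, 6) block of accel+gyro samples to a fixed
    24-dimensional vector: per-axis mean, std, min, and max."""
    return np.concatenate([
        window.mean(axis=0),
        window.std(axis=0),
        window.min(axis=0),
        window.max(axis=0),
    ])

def train(windows, labels):
    """Fit a compact classifier on featurized gesture windows."""
    X = np.stack([featurize(w) for w in windows])
    clf = KNeighborsClassifier(n_neighbors=3)
    clf.fit(X, labels)
    return clf

# Usage (hypothetical data):
#   clf = train(train_windows, ["double_tap", "twirl", "swipe"])
#   gesture = clf.predict([featurize(live_window)])[0]
```

A stock kNN model stands in here for readability; to run on the pod itself, the trained model must fit in a few kilobytes, which is why the paper's pipeline emphasizes compressed, resource-constrained models rather than an off-the-shelf classifier.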

Supplementary Material

ZIP File (ufp2672aux.zip)
The supplementary materials are included in PDF format. For any queries, please contact [email protected].
MP4 File (ufp2672pv.mp4)
Preview video
MP4 File (p403-patil.mp4)




Published In

UIST '19: Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology
October 2019
1229 pages
ISBN: 9781450368162
DOI: 10.1145/3332165


Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. gesture recognition
  2. resource constrained machine learning
  3. smartphone access
  4. visual impairment
  5. white cane

Qualifiers

  • Research-article

Conference

UIST '19

Acceptance Rates

Overall Acceptance Rate 842 of 3,967 submissions, 21%


Article Metrics

  • Downloads (last 12 months): 53
  • Downloads (last 6 weeks): 2
Reflects downloads up to 03 Oct 2024


Cited By

  • (2024) SonicVista: Towards Creating Awareness of Distant Scenes through Sonification. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8, 2, 1-32. DOI: 10.1145/3659609. Online publication date: 15-May-2024.
  • (2024) An Embedded AI System for Predicting and Correcting the Sensor-Orientation of an Electronic Travel Aid During Use by a Visually Impaired Person. Computers Helping People with Special Needs, 444-453. DOI: 10.1007/978-3-031-62846-7_53. Online publication date: 8-Jul-2024.
  • (2024) Supporting Parent-Child Interactions in Child Riding: Exploring Design Opportunities for Digital Interaction Strategies. HCI International 2024 Posters, 102-111. DOI: 10.1007/978-3-031-61932-8_13. Online publication date: 1-Jun-2024.
  • (2023) Laser Sensing and Vision Sensing Smart Blind Cane: A Review. Sensors 23, 2, 869. DOI: 10.3390/s23020869. Online publication date: 12-Jan-2023.
  • (2023) On-Sensor Online Learning and Classification Under 8 KB Memory. 2023 26th International Conference on Information Fusion (FUSION), 1-8. DOI: 10.23919/FUSION52260.2023.10224228. Online publication date: 28-Jun-2023.
  • (2023) T-RecX. Proceedings of the 20th ACM International Conference on Computing Frontiers, 123-133. DOI: 10.1145/3587135.3592204. Online publication date: 9-May-2023.
  • (2023) Ocularone: Exploring Drones-based Assistive Technologies for the Visually Impaired. Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, 1-9. DOI: 10.1145/3544549.3585863. Online publication date: 19-Apr-2023.
  • (2023) TinyML: Tools, applications, challenges, and future research directions. Multimedia Tools and Applications 83, 10, 29015-29045. DOI: 10.1007/s11042-023-16740-9. Online publication date: 9-Sep-2023.
  • (2022) Assessing Versatility of a Generic End-to-End Platform for IoT Ecosystem Applications. Sensors 22, 3, 713. DOI: 10.3390/s22030713. Online publication date: 18-Jan-2022.
  • (2022) Machine Learning for Microcontroller-Class Hardware: A Review. IEEE Sensors Journal 22, 22, 21362-21390. DOI: 10.1109/JSEN.2022.3210773. Online publication date: 15-Nov-2022.
