DOI: 10.1145/2702123.2702371

zSense: Enabling Shallow Depth Gesture Recognition for Greater Input Expressivity on Smart Wearables

Published: 18 April 2015

Abstract

In this paper we present zSense, a shallow depth gesture recognition system based on non-focused infrared sensors that provides greater input expressivity for spatially limited devices such as smart wearables. To achieve this, we introduce a novel Non-linear Spatial Sampling (NSS) technique that significantly reduces the number of infrared sensors and emitters required. These can be arranged in many different configurations; for example, a sensor-emitter unit can comprise as few as one sensor and two emitters. We implemented different configurations of zSense on smart wearables such as smartwatches, smartglasses and smart rings. These configurations fit naturally onto the flat or curved surfaces of such devices, enabling a wide range of zSense application scenarios. Our evaluations reported over 94.8% gesture recognition accuracy across all configurations.
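The abstract describes the NSS idea only at a high level: each infrared emitter is pulsed in turn while a non-focused sensor records one reflectance reading per pulse, so a "frame" is a short feature vector rather than a depth image, and a classifier maps frames to gestures. The following is a minimal, hypothetical sketch of that pipeline. The sensor model, gesture labels, readings, and the nearest-centroid classifier are all illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a zSense-style NSS pipeline (not the paper's code).
# One sensor + two emitters -> each frame is a 2-value reflectance vector.
import math

def frame(sensor_readings):
    """One NSS frame: one reflectance reading per (sensor, emitter) pairing."""
    return tuple(sensor_readings)

def centroid(frames):
    """Mean feature vector of a list of equal-length frames."""
    n = len(frames)
    return tuple(sum(f[i] for f in frames) / n for i in range(len(frames[0])))

def train(labelled_frames):
    """Toy nearest-centroid model standing in for a real gesture classifier."""
    by_label = {}
    for label, f in labelled_frames:
        by_label.setdefault(label, []).append(f)
    return {label: centroid(fs) for label, fs in by_label.items()}

def classify(model, f):
    """Assign the label whose centroid is closest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(model, key=lambda label: dist(model[label], f))

# Synthetic training data: strong reflectance = finger near, weak = finger far.
training = [
    ("swipe_near", frame((0.90, 0.80))), ("swipe_near", frame((0.85, 0.90))),
    ("swipe_far",  frame((0.20, 0.30))), ("swipe_far",  frame((0.25, 0.20))),
]
model = train(training)
print(classify(model, frame((0.88, 0.84))))  # prints swipe_near
```

The point of the sketch is the data shape: with NSS, a full gesture observation is only a handful of scalar readings, which is why so few sensor-emitter units suffice.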

Supplementary Material

Supplemental video: suppl.mov (pn1268.mp4)
MP4 file: p3661-withana.mp4




    Published In

    CHI '15: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems
    April 2015
    4290 pages
    ISBN:9781450331456
    DOI:10.1145/2702123

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. compressive sensing
    2. interacting with small devices
    3. shallow depth gesture recognition
    4. smart wearables

    Qualifiers

    • Research-article

    Conference

    CHI '15: CHI Conference on Human Factors in Computing Systems
    April 18-23, 2015
    Seoul, Republic of Korea

    Acceptance Rates

    CHI '15 paper acceptance rate: 486 of 2,120 submissions (23%)
    Overall acceptance rate: 6,199 of 26,314 submissions (24%)



    Article Metrics

    • Downloads (last 12 months): 40
    • Downloads (last 6 weeks): 2
    Reflects downloads up to 25 Dec 2024


    Cited By

    • (2024) One is Enough: Enabling One-shot Device-free Gesture Recognition with COTS WiFi. IEEE INFOCOM 2024 - IEEE Conference on Computer Communications, pp. 1231-1240. DOI: 10.1109/INFOCOM52122.2024.10621091. Online publication date: 20-May-2024.
    • (2023) TongueMendous: IR-Based Tongue-Gesture Interface with Tiny Machine Learning. Proceedings of the 8th International Workshop on Sensor-Based Activity Recognition and Artificial Intelligence, pp. 1-8. DOI: 10.1145/3615834.3615843. Online publication date: 21-Sep-2023.
    • (2023) RetroSphere. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6(4), pp. 1-36. DOI: 10.1145/3569479. Online publication date: 11-Jan-2023.
    • (2023) Ubi Edge: Authoring Edge-Based Opportunistic Tangible User Interfaces in Augmented Reality. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, pp. 1-14. DOI: 10.1145/3544548.3580704. Online publication date: 19-Apr-2023.
    • (2023) Analyzing the Effect of Diverse Gaze and Head Direction on Facial Expression Recognition With Photo-Reflective Sensors Embedded in a Head-Mounted Display. IEEE Transactions on Visualization and Computer Graphics 29(10), pp. 4124-4139. DOI: 10.1109/TVCG.2022.3179766. Online publication date: 1-Oct-2023.
    • (2023) A Corneal Surface Reflections-Based Intelligent System for Lifelogging Applications. International Journal of Human–Computer Interaction 39(9), pp. 1963-1980. DOI: 10.1080/10447318.2022.2163240. Online publication date: 4-Jan-2023.
    • (2022) Metaphoraction: Support Gesture-based Interaction Design with Metaphorical Meanings. ACM Transactions on Computer-Human Interaction 29(5), pp. 1-33. DOI: 10.1145/3511892. Online publication date: 20-Oct-2022.
    • (2022) MyoSpring: 3D Printing Mechanomyographic Sensors for Subtle Finger Gesture Recognition. Proceedings of the Sixteenth International Conference on Tangible, Embedded, and Embodied Interaction, pp. 1-13. DOI: 10.1145/3490149.3501321. Online publication date: 13-Feb-2022.
    • (2022) Leveraging Wearables for Assisting the Elderly With Dementia in Handwashing. IEEE Transactions on Mobile Computing, pp. 1-16. DOI: 10.1109/TMC.2022.3193615. Online publication date: 2022.
    • (2021) GestuRING: A Web-based Tool for Designing Gesture Input with Rings, Ring-Like, and Ring-Ready Devices. The 34th Annual ACM Symposium on User Interface Software and Technology, pp. 710-723. DOI: 10.1145/3472749.3474780. Online publication date: 10-Oct-2021.
