
Studying the Visual Representation of Microgestures

Published: 13 September 2023

Abstract

Representations of microgestures are essential both for researchers presenting their results in academic papers and for system designers creating tutorials for novice users. However, these representations remain disparate and inconsistent. As a first step toward investigating how best to graphically represent microgestures, we created 21 designs, each depicting static and dynamic versions of 4 commonly used microgestures (tap, swipe, flex and hold). We first studied these designs in a quantitative online experiment with 45 participants. We then conducted a qualitative laboratory experiment in Augmented Reality with 16 participants. Based on the results, we provide design guidelines on which elements of a microgesture should be represented and how. In particular, we recommend representing the actuator and the trajectory of a microgesture. Also, although preferred by users, dynamic representations are not rated better than their static counterparts for depicting a microgesture and do not necessarily lead to better user recognition.

Supplementary Material

ZIP File (v7mhci225aux.zip)
This archive contains the supplementary material for the paper "Studying the Visual Representation of Microgestures" submitted to the ACM MobileHCI 2023 conference. It is divided into two parts:
  • The Data (./ExperimentData) folder contains the address of the online experiment and the raw data of both experiments.
  • The Family (./FamilyMaterial) folder contains the family symbols and the LaTeX code to use them in a paper. It also has its own README.md file explaining how to create a new family and use it in a paper.
MP4 File (v7mhci225.mp4)
Supplemental video


Cited By

  • (2024) Clarifying and differentiating discoverability. Human–Computer Interaction, pages 1–26. DOI: 10.1080/07370024.2024.2364606. Online publication date: 13-Jun-2024.
  • (2024) Audio-visual training and feedback to learn touch-based gestures. Journal of Visualization. DOI: 10.1007/s12650-024-01012-x. Online publication date: 17-Jun-2024.
  • (2023) Exploring Visual Signifier Characteristics to Improve the Perception of Affordances of In-Place Touch Inputs. Proceedings of the ACM on Human-Computer Interaction, 7(MHCI), pages 1–32. DOI: 10.1145/3604257. Online publication date: 13-Sep-2023.

Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 7, Issue MHCI
September 2023, 1017 pages
EISSN: 2573-0142
DOI: 10.1145/3624512
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. AR
  2. discoverability
  3. microgesture
  4. microgesture representations

Qualifiers

  • Research-article

