
Audio-visual training and feedback to learn touch-based gestures

  • Regular Paper
  • Published in: Journal of Visualization

Abstract

To help people learn the touch-based gestures needed to perform various tasks, researchers commonly rely on training delivered by an experimenter. However, this creates dependence on another person, and memory problems grow as gestures become more numerous and complex. Several on-demand training and feedback methods have been proposed that provide constant support and help people learn novel gestures without human assistance. One such method, non-speech audio combined with a visual cue, could be extended to interactive visualization tools. However, the literature offers several options for non-speech audio and visual cues but no comparisons between them. We conducted an online study to identify suitable non-speech audio representations paired with the visual cues of 12 touch-based gestures. For each audio-visual combination, we evaluated the thinking, time demand, frustration, understanding, and learnability reported by 45 participants. We found that the visual cue of a gesture, whether iconic or ghost, did not affect the suitability of an audio representation. However, preferences for audio channels and audio patterns differed across gestures and their directions. We implemented the training/feedback method in an InfoVis tool, and the evaluation showed that participants made substantial use of the method to explore the tool.
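The study's core idea, pairing each gesture and direction with a non-speech audio representation shown alongside an iconic or ghost visual cue, can be sketched as a simple lookup table. This is an illustrative sketch only, not the authors' implementation; the gesture names, audio channels, and patterns below are hypothetical placeholders:

```python
# Illustrative sketch (hypothetical values, not the paper's actual mapping):
# each (gesture, direction) pair is assigned a non-speech audio cue,
# described by an audio channel and a temporal pattern.

from dataclasses import dataclass


@dataclass(frozen=True)
class AudioCue:
    channel: str  # hypothetical audio channel, e.g. "pitch" or "tempo"
    pattern: str  # hypothetical pattern, e.g. "rising" or "pulsed"


# Hypothetical mapping: the study found that preferred channels and
# patterns differ per gesture and per direction, so each pair gets
# its own entry rather than one cue per gesture.
CUE_TABLE = {
    ("swipe", "left"):  AudioCue("pitch", "falling"),
    ("swipe", "right"): AudioCue("pitch", "rising"),
    ("pinch", "in"):    AudioCue("tempo", "pulsed"),
    ("pinch", "out"):   AudioCue("tempo", "rising"),
}


def feedback_for(gesture: str, direction: str) -> AudioCue:
    """Return the audio cue to play while the visual cue
    (iconic or ghost) of the gesture is shown on screen."""
    return CUE_TABLE[(gesture, direction)]
```

In a real training tool, the returned cue would drive a sound synthesizer while the gesture's visual cue animates, giving the on-demand audio-visual feedback the paper evaluates.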



[Figures 1–14: available in the full article]




Acknowledgements

The work was supported by Key “Pioneer” R &D Projects of Zhejiang Province (2023C01120), NSFC (U22A2032), and the Collaborative Innovation Center of Artificial Intelligence by MOE and Zhejiang Provincial Government (ZJU).

Author information


Corresponding author

Correspondence to Yingcai Wu.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (mp4 14351 KB)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

Rubab, S., Zaman, M.W.U., Rashid, U. et al. Audio-visual training and feedback to learn touch-based gestures. J Vis (2024). https://doi.org/10.1007/s12650-024-01012-x

