
Understanding Gesture and Speech Multimodal Interactions for Manipulation Tasks in Augmented Reality Using Unconstrained Elicitation

Published: 04 November 2020
Editorial Notes

    A corrigendum was issued for this paper on August 8, 2022. You can download the corrigendum from the Supplemental Material section of this citation page.

    Abstract

    This research establishes a better understanding of the syntax choices in speech interactions and of how speech, gesture, and multimodal gesture-and-speech interactions are produced by users in unconstrained object manipulation environments using augmented reality. The work presents a multimodal elicitation study conducted with 24 participants. The canonical referents for translation, rotation, and scale were used along with some abstract referents (create, destroy, and select). In this study, time windows for gesture and speech multimodal interactions are developed using the start and stop times of gestures and speech, as well as the stroke times of gestures. While gestures commonly precede speech by 81 ms, we find that the stroke of the gesture commonly falls within 10 ms of the start of speech, indicating that the information content of a gesture and its co-occurring speech are well aligned with each other. Lastly, the trends across the most common proposals for each modality are examined, showing that disagreement between proposals is often caused by variation in hand posture or syntax. This allows us to present aliasing recommendations that increase the percentage of users' natural interactions captured by future multimodal interactive systems.
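
    To make the timing analysis concrete, here is a minimal sketch of how gesture-speech alignment windows of the kind the abstract describes could be computed from event timestamps. The event fields and offset definitions follow the abstract; the data structures, function names, and example timestamps are illustrative assumptions, not the authors' actual analysis code.

    ```python
    from dataclasses import dataclass
    from statistics import median

    @dataclass
    class MultimodalEvent:
        """One elicited proposal: a gesture paired with co-occurring speech.
        All timestamps are in milliseconds from the start of the trial.
        (Hypothetical structure for illustration.)"""
        gesture_start: float
        gesture_stroke: float  # peak information-carrying phase of the gesture
        gesture_stop: float
        speech_start: float
        speech_stop: float

    def interaction_window(e: MultimodalEvent) -> tuple[float, float]:
        """Time window spanning both modalities, built from the
        start and stop times of the gesture and the speech."""
        return (min(e.gesture_start, e.speech_start),
                max(e.gesture_stop, e.speech_stop))

    def alignment_offsets(events: list[MultimodalEvent]) -> tuple[float, float]:
        """Median onset offset (speech start minus gesture start) and
        stroke offset (speech start minus gesture stroke)."""
        onset = median(e.speech_start - e.gesture_start for e in events)
        stroke = median(e.speech_start - e.gesture_stroke for e in events)
        return onset, stroke

    # Made-up timestamps matching the reported trend: the gesture leads
    # speech by ~81 ms, while its stroke lands near speech onset.
    events = [MultimodalEvent(0.0, 75.0, 900.0, 81.0, 1200.0)]
    print(interaction_window(events[0]))  # (0.0, 1200.0)
    print(alignment_offsets(events))      # (81.0, 6.0)
    ```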

    Supplementary Material

    3427330-corrigendum (3427330-corrigendum.pdf)
    Corrigendum to "Understanding Gesture and Speech Multimodal Interactions for Manipulation Tasks in Augmented Reality Using Unconstrained Elicitation" by Williams et al., Proceedings of the ACM on Human-Computer Interaction, Volume 4, Issue ISS (PACMHCI 4:ISS).



    Published In

    Proceedings of the ACM on Human-Computer Interaction, Volume 4, Issue ISS
    November 2020
    488 pages
    EISSN: 2573-0142
    DOI: 10.1145/3433930
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 04 November 2020
    Published in PACMHCI Volume 4, Issue ISS


    Author Tags

    1. augmented reality
    2. elicitation
    3. gesture and speech interaction
    4. interaction
    5. multimodal

    Qualifiers

    • Research-article

    Funding Sources

    • National Science Foundation
    • Defense Advanced Research Projects Agency


    Article Metrics

    • Downloads (last 12 months): 234
    • Downloads (last 6 weeks): 38
    Reflects downloads up to 26 Jul 2024.

    Cited By

    • (2024) An Artists' Perspectives on Natural Interactions for Virtual Reality 3D Sketching. Proceedings of the CHI Conference on Human Factors in Computing Systems, 1-20. https://doi.org/10.1145/3613904.3642758. Online publication date: 11-May-2024.
    • (2024) HandyNotes: using the hands to create semantic representations of contextually aware real-world objects. 2024 IEEE Conference Virtual Reality and 3D User Interfaces (VR), 265-275. https://doi.org/10.1109/VR58804.2024.00049. Online publication date: 16-Mar-2024.
    • (2024) Exploring Methods to Optimize Gesture Elicitation Studies: A Systematic Literature Review. IEEE Access, Vol. 12, 64958-64979. https://doi.org/10.1109/ACCESS.2024.3387269. Online publication date: 2024.
    • (2024) Collecting and Analyzing the Mid-Air Gestures Data in Augmented Reality and User Preferences in Closed Elicitation Study. Virtual, Augmented and Mixed Reality, 201-215. https://doi.org/10.1007/978-3-031-61044-8_15. Online publication date: 29-Jun-2024.
    • (2023) Brave New GES World: A Systematic Literature Review of Gestures and Referents in Gesture Elicitation Studies. ACM Computing Surveys, Vol. 56, Issue 5, 1-55. https://doi.org/10.1145/3636458. Online publication date: 7-Dec-2023.
    • (2023) Exploring Unimodal Notification Interaction and Display Methods in Augmented Reality. Proceedings of the 29th ACM Symposium on Virtual Reality Software and Technology, 1-11. https://doi.org/10.1145/3611659.3615683. Online publication date: 9-Oct-2023.
    • (2023) G-DAIC: A Gaze Initialized Framework for Description and Aesthetic-Based Image Cropping. Proceedings of the ACM on Human-Computer Interaction, Vol. 7, Issue ETRA, 1-19. https://doi.org/10.1145/3591132. Online publication date: 18-May-2023.
    • (2023) A Review of Interaction Techniques for Immersive Environments. IEEE Transactions on Visualization and Computer Graphics, Vol. 29, Issue 9, 3900-3921. https://doi.org/10.1109/TVCG.2022.3174805. Online publication date: 1-Sep-2023.
    • (2023) Factors Affecting the Results of Gesture Elicitation: A Review. 2023 11th International Conference in Software Engineering Research and Innovation (CONISOFT), 169-176. https://doi.org/10.1109/CONISOFT58849.2023.00030. Online publication date: 6-Nov-2023.
    • (2023) Real-time multimodal interaction in virtual reality - a case study with a large virtual interface. Multimedia Tools and Applications, Vol. 82, Issue 16, 25427-25448. https://doi.org/10.1007/s11042-023-14381-6. Online publication date: 2-Feb-2023.
