Brave New GES World: A Systematic Literature Review of Gestures and Referents in Gesture Elicitation Studies

Published: 12 January 2024

Abstract

How do we determine highly effective and intuitive gesture sets for interactive systems tailored to end users’ preferences? A substantial body of knowledge is available on this topic, among which gesture elicitation studies stand out distinctively. In these studies, end users are invited to propose gestures for specific referents, which are the functions to control for an interactive system. The vast majority of gesture elicitation studies conclude with a consensus gesture set identified through a process of consensus or agreement analysis. However, the information about specific gesture sets determined for specific applications is scattered across a wide landscape of disconnected scientific publications, making it difficult for researchers and practitioners to harness this body of knowledge effectively. To address this challenge, we conducted a systematic literature review and examined a corpus of N = 267 studies encompassing a total of 187,265 gestures elicited from 6,659 participants for 4,106 referents. To understand similarities in users’ gesture preferences within this extensive dataset, we analyzed a sample of 2,304 gestures extracted from the studies identified in our literature review. Our approach consisted of (i) identifying the context of use represented by end users, devices, platforms, and gesture sensing technology; (ii) categorizing the referents; (iii) classifying the gestures elicited for those referents; and (iv) cataloging the gestures based on their representation and implementation modalities. Drawing from the findings of this review, we propose guidelines for conducting future end-user gesture elicitation studies.
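The consensus analysis mentioned above is commonly quantified with an agreement rate per referent: the probability that two randomly chosen participants proposed the same gesture for that referent (this is the AR formula of Vatavu and Wobbrock, CHI 2015). A minimal sketch in Python, assuming proposals have already been grouped by similarity into string labels:

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate AR(r) for a single referent r.

    `proposals` is a list of gesture labels, one per participant,
    where identical labels mean the gestures were judged similar.
    AR(r) = sum over groups P_i of |P_i|(|P_i|-1) / (|P|(|P|-1)).
    """
    n = len(proposals)
    if n < 2:
        return 0.0  # agreement is undefined for fewer than two proposals
    counts = Counter(proposals)
    return sum(k * (k - 1) for k in counts.values()) / (n * (n - 1))

# Example: three participants, two of whom proposed the same gesture.
print(agreement_rate(["tap", "tap", "swipe"]))  # 2/6 ≈ 0.333
```

The hardest step in practice is not this arithmetic but the similarity grouping that produces the labels, which is exactly where elicitation studies diverge in their methods.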

References

[1]
Roland Aigner, Daniel Wigdor, Hrvoje Benko, Michael Haller, David Lindbauer, Alexandra Ion, Shengdong Zhao, and Jeffrey Tzu Kwan Valino Koh. 2012. Understanding Mid-Air Hand Gestures: A Study of Human Preferences in Usage of Gesture Types for HCI. Technical Report MSR-TR-2012-111. Microsoft Research, Redmond, WA. https://www.microsoft.com/en-us/research/publication/understanding-mid-air-hand-gestures-a-study-ofhuman-preferences-in-usage-of-gesture-types-for-hci/
[2]
Jason Alexander, Teng Han, William Judd, Pourang Irani, and Sriram Subramanian. 2012. Putting your best foot forward: Investigating real-world mappings for foot-based gestures. In Proceedings of the ACM Conference on Human Factors in Computing Systems(CHI’12). ACM, New York, NY, 1229–1238. DOI:
[3]
Abdullah X. Ali, Meredith Ringel Morris, and Jacob O. Wobbrock. 2018. Crowdsourcing similarity judgments for agreement analysis in end-user elicitation studies. In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology (UIST’18). ACM, New York, NY, 177–188. DOI:
[4]
Abdullah X. Ali, Meredith Ringel Morris, and Jacob O. Wobbrock. 2019. Crowdlicit: A system for conducting distributed end-user elicitation and identification studies. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’19). ACM, New York, NY, 1–12. DOI:
[5]
Bashar Altakrouri, Daniel Burmeister, Dennis Boldt, and Andreas Schrader. 2016. Insights on the impact of physical impairments in full-body motion gesture elicitation studies. In Proceedings of the 9th Nordic Conference on Human-Computer Interaction (NordiCHI’16). ACM, New York, NY, 1–10. DOI:
[6]
Marius Altmann. 2017. Designing gestures for window management on large high-resolution displays. Online Publ. Univ. Stuttg. 72, 3 (2017), 1–60. DOI:
[7]
Robin Andersson, Jonas Berglund, Aykut Coşkun, Morten Fjeld, and Mohammad Obaid. 2017. Defining gestural interactions for large vertical touch displays. In Proceedings of IFIP TC13 Conference on Human-Computer Interaction, Lecture Notes in Computer Science (INTERACT’17), Regina Bernhaupt, Girish Dalvi, Anirudha Joshi, Devanuj K. Balkrishan, Jacki O’Neill, and Marco Winckler (Eds.), Vol. 10513. Springer, Cham, 36–55. DOI:
[8]
Leonardo Angelini, Francesco Carrino, Stefano Carrino, Maurizio Caon, Omar Abou Khaled, Jürgen Baumgartner, Andreas Sonderegger, Denis Lalanne, and Elena Mugellini. 2014. Gesturing on the steering wheel: A user-elicited taxonomy. In Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI’14). ACM, New York, NY, 1–8. DOI:
[9]
Leonardo Angelini, Denis Lalanne, Elise van den Hoven, Omar Abou Khaled, and Elena Mugellini. 2015. Move, hold and touch: A framework for tangible gesture interactive systems. Machines 3, 3 (2015), 173–207. DOI:
[10]
Shaikh Shawon Arefin Shimon, Courtney Lutton, Zichun Xu, Sarah Morrison-Smith, Christina Boucher, and Jaime Ruiz. 2016. Exploring non-touchscreen gestures for smartwatches. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’16). ACM, New York, NY, 3822–3833. DOI:
[11]
Rahul Arora, Rubaiat Habib Kazi, Danny M. Kaufman, Wilmot Li, and Karan Singh. 2019. MagicalHands: Mid-air hand gestures for animating in VR. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (UIST’19). ACM, New York, NY, 463–477. DOI:
[12]
Ilhan Aslan, Tabea Schmidt, Jens Woehrle, Lukas Vogel, and Elisabeth André. 2018. Pen+Mid-Air gestures: Eliciting contextual gestures. In Proceedings of the 20th ACM International Conference on Multimodal Interaction (ICMI’18). ACM, New York, NY, 135–144. DOI:
[13]
Christopher R. Austin, Barrett Ens, Kadek Ananta Satriadi, and Bernhard Jenny. 2020. Elicitation study investigating hand and foot gesture interaction for immersive maps in augmented reality. Cartogr. Geogr. Inf. Sci. 47, 3 (2020), 214–228. DOI:
[14]
Patrick Bader, Huy Viet Le, Julian Strotzer, and Niels Henze. 2017. Exploring interactions with smart windows for sunlight control. In Proceedings of the ACM Conference on Human Factors in Computing Systems, Extended Abstracts (CHI EA’17). ACM, New York, NY, 2373–2380. DOI:
[15]
Patrick Bader, Alexandra Voit, Huy Viet Le, Paweł. Woźniak, Niels Henze, and Albrecht Schmidt. 2019. WindowWall: Towards adaptive buildings with interactive windows as ubiquitous displays. ACM Trans. Comput.-Hum. Interact. 26, 2 (2019), 1–42. DOI:
[16]
Gilles Bailly, Thomas Pietrzak, Jonathan Deber, and Daniel J. Wigdor. 2013. MéTamorphe: Augmenting hotkey usage with actuated keys. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI’13). ACM, New York, NY, 563–572. DOI:
[17]
Jaclyn B. Baron and Hope Turner. 2014. Assessing sailor and civilian gestural optimal relationships for multi-touch gestures and functions in computer applications. Proc. Hum. Fact. Ergon. Soc. Annu. Meet. 58, 1 (2014), 1144–1148. DOI:
[18]
Frank Beruscha, Katharina Mueller, and Thorsten Sohnke. 2020. Eliciting tangible and gestural user interactions with and on a cooking pan. In Proceedings of the Conference on Mensch und Computer (MuC’20). ACM, New York, NY, 399–408. DOI:
[19]
Ceylan Beşevli, Oğuz Turan Buruk, Merve Erkaya, and Oğuzhan Özcan. 2018. Investigating the effects of legacy bias: User elicited gestures from the end users perspective. In Proceedings of the ACM Conference on Designing Interactive Systems (DIS’18 Companion). ACM, New York, NY, 277–281. DOI:
[20]
Sabrina S. Billinghurst and Kim-Phuong L. Vu. 2015. Touch screen gestures for web browsing tasks. Comput. Hum. Behav. 53 (2015), 71–81. DOI:
[21]
Patrik Björnfot and Victor Kaptelinin. 2017. Probing the design space of a telepresence robot gesture arm with low fidelity prototypes. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI’17). ACM, New York, NY, 352–360. DOI:
[22]
Frøy Birte Bjørneseth, Mark D. Dunlop, and Eva Hornecker. 2012. Assessing the effectiveness of direct gesture interaction for a safety critical maritime application. Int. J. Hum.-Comput. Stud. 70, 10 (2012), 729–745. DOI:
[23]
Roger Boldu, Alexandru Dancu, Denys J. C. Matthies, Pablo Gallego Cascón, Shanaka Ransir, and Suranga Nanayakkara. 2018. Thumb-In-Motion: Evaluating thumb-to-ring microgestures for athletic activity. In Proceedings of the Symposium on Spatial User Interaction (SUI’18). ACM, New York, NY, 150–157. DOI:
[24]
Pranjal Protim Borah and Keyur Sorathia. 2019. Natural and intuitive deformation gestures for one-handed landscape mode interaction. In Proceedings of the 13th International Conference on Tangible, Embedded, and Embodied Interaction (TEI’19). ACM, New York, NY, 229–236. DOI:
[25]
Idil Bostan, Oğuz Turan Buruk, Mert Canat, Mustafa Ozan Tezcan, Celalettin Yurdakul, Tilbe Göksun, and Oğuzhan Özcan. 2017. Hands as a controller: User preferences for hand specific on-skin gestures. In Proceedings of the ACM Conference on Designing Interactive Systems (DIS’17). ACM, New York, NY, 1123–1134. DOI:
[26]
Icaro Brito, Eduardo Freire, and Elyson Carvalho. 2019. Analysis of cross-cultural effect on gesture-based human-robot interaction. Int. J. Mech. Eng. Robot. Res. 8, 6 (2019), 852–859. DOI:
[27]
Sarah Buchanan, Bourke Floyd, Will Holderness, and Joseph J. LaViola. 2013. Towards user-defined multi-touch gestures for 3D objects. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces (ITS’13). ACM, New York, NY, 231–240. DOI:
[28]
Thisum Buddhika, Haimo Zhang, Samantha W. T. Chan, Vipula Dissanayake, Suranga Nanayakkara, and Roger Zimmermann. 2019. fSense: Unlocking the dimension of force for gestural interactions using smartwatch PPG sensor. In Proceedings of the 10th International Conference on Augmented Human (AH’19). ACM, New York, NY, 1–5. DOI:
[29]
Gary Burnett, Elizabeth Crundall, David Large, Glyn Lawson, and Lee Skrypchuk. 2013. A study of unidirectional swipe gestures on in-vehicle touch screens. In Proceedings of the 5th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI’13). ACM, New York, NY, 22–29. DOI:
[30]
Daniel Buschek, Bianka Roppelt, and Florian Alt. 2018. Extending keyboard shortcuts with arm and wrist rotation gestures. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI’18). ACM, New York, NY, 21:1–21:12. DOI:
[31]
Maria Claudia Buzzi, Marina Buzzi, Barbara Leporini, and Amaury Trujillo. 2017. Analyzing visually impaired people’s touch gestures on smartphones. Multimedia Tools Appl. 76, 4 (2017), 5141–5169. DOI:
[32]
Francesco Cafaro, Leilah Lyons, and Alissa N. Antle. 2018. Framed guessability: Improving the discoverability of gestures and body movements for full-body interaction. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI’18). ACM, New York, NY, 1–12. DOI:
[33]
Maurizio Caon, Rico Süsse, Benoit Grelier, Omar Abou Khaled, and Elena Mugellini. 2019. Gesturing on the handlebar: A user-elicitation study for on-bike gestural interaction. In Proceedings of the 20th Congress of the International Ergonomics Association, Advances in Intelligent Systems and Computing (IEA’18), Sebastiano Bagnara, Riccardo Tartaglia, Sara Albolino, Thomas Alexander, and Yushi Fujita (Eds.). Springer International Publishing, 2019, 429–439.
[34]
Jessica R. Cauchard, Jane L. E, Kevin Y. Zhai, and James A. Landay. 2015. Drone & Me: An exploration into natural human-drone interaction. In Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp’15). ACM, New York, NY, 361–365. DOI:
[35]
Edwin Chan, Teddy Seyed, Wolfgang Stuerzlinger, Xing-Dong Yang, and Frank Maurer. 2016. User elicitation on single-hand microgestures. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI’16). ACM, New York, NY, 3403–3414. DOI:
[36]
Li-Chieh Chen, Po-Ying Chu, and Yun-Maw Cheng. 2016. Exploring the ergonomic issues of user-defined mid-air gestures for interactive product exhibition. In Proceedungs of International Conference on Distributed, Ambient, and Pervasive Interactions (DAPI’16), Vol. 9749. Springer, Cham, 180–190. DOI:
[37]
Yu-Chun Chen, Chia-Ying Liao, Shuo-wen Hsu, Da-Yuan Huang, and Bing-Yu Chen. 2020. Exploring user defined gestures for ear-based interactions. Proc. ACM Hum.-Comput. Interact. 4 (2020), 186:1–186:20. Issue ISS. DOI:
[38]
Zhen Chen, Xiaochi Ma, Zeya Peng, Ying Zhou, Mengge Yao, Zheng Ma, Ci Wang, Zaifeng Gao, and Mowei Shen. 2018. User-defined gestures for gestural interaction: Extending from hands to other body parts. Int. J. Hum.–Comput. Interact. 34, 3 (2018), 238–250. DOI:
[39]
D. Clark, Gradeigh, Janne Lindqvist, and Antti Oulasvirta. 2017. Composition policies for gesture passwords: User choice, security, usability and memorability. In Proceedings of the IEEE Conference on Communications and Network Security (CNS’17). IEEE, Los Alamitos, CA, 1–9. DOI:
[40]
Sabrina Connell, Pei-Yi Kuo, Liu Liu, and Anne Marie Piper. 2013. A wizard-of-oz elicitation study examining child-defined gestures with a whole-body interface. In Proceedings of the ACM Conference on Interaction Design and Children (IDC’13). ACM, New York, NY, 277–280. DOI:
[41]
Jian Cui, Arjan Kuijper, Dieter W. Fellner, and Alexei Sourin. 2016. Understanding people’s mental models of mid-air interaction for virtual assembly and shape modeling. In Proceedings of the 29th International Conference on Computer Animation and Social Agents (CASA’16). ACM, New York, NY, 139–146. DOI:
[42]
David Céspedes-Hernández and Juan Manuel González-Calleros. 2019. A methodology for gestural interaction relying on user-defined gestures sets following a one-shot learning approach. J. Intell. Fuzzy Syst. 36, 5 (2019), 5001–5010. DOI:
[43]
Lavinia Andreea Danielescu, Erin A. Walker, Winslow Burleson, Kurt VanLehn, Anastasia Kuznetsov, and Mary Lou Maher. 2019. Discoverable free space gesture sets for walk-up-and-use interactions. In ASU Electronic Theses and Dissertations. Arizona State University, Arizona, USA.
[44]
Suranjith De Silva, Michael Barlow, and Adam Easton. 2013. Harnessing multi-user design and computation to devise archetypal whole-of-body gestures: A novel framework. In Proceedings of the 25th Australian Computer-Human Interaction Conference (OzCHI’13). ACM, New York, NY, 85–94. DOI:
[45]
Giuseppe Desolda, Carmelo Ardito, Hans-Christian Jetter, and Rosa Lanzilotti. 2019. Exploring spatially-aware cross-device interaction techniques for mobile collaborative sensemaking. Int. J. Hum.-Comput. Stud. 122 (2019), 1–20. DOI:
[46]
Bastian Dewitz, Frank Steinicke, and Christian Geiger. 2019. Functional workspace for one-handed tap and swipe microgestures. In Mensch und Computer—Workshopband. Gesellschaft für Informatik e.V., Bonn. DOI:
[47]
Linda Di Geronimo, Marica Bertarini, Julia Badertscher, Maria Husmann, and Moira C. Norrie. 2017. Exploiting mid-air gestures to share data among devices. In Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI’17). ACM, New York, NY, 1–11. DOI:
[48]
Christine Dierk, Scott Carter, Patrick Chiu, Tony Dunnigan, and Don Kimber. 2019. Use your head! Exploring interaction modalities for hat technologies. In Proceedings of the Designing Interactive Systems Conference (DIS’19). ACM, New York, NY, 1033–1045. DOI:
[49]
Nem Khan Dim and Xiangshi Ren. 2014. Designing motion gesture interfaces in mobile phones for blind people. J. Comput. Sci. Technol. 29, 5 (2014), 812–824. DOI:
[50]
Nem Khan Dim, Chaklam Silpasuwanchai, Sayan Sarcar, and Xiangshi Ren. 2016. Designing mid-air TV gestures for blind people using user- and choice-based elicitation approaches. In Proceedings of the ACM Conference on Designing Interactive Systems (DIS’16). ACM, New York, NY, 204–214. DOI:
[51]
Tilman Dingler, Rufat Rzayev, Alireza Sahami Shirazi, and Niels Henze. 2018. Designing consistent gestures across device types: Eliciting RSVP controls for phone, watch, and glasses. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI’18). ACM, New York, NY, 1–12. DOI:
[52]
Haiwei Dong, Ali Danesh, Nadia Figueroa, and Abdulmotaleb El Saddik. 2015a. An elicitation study on gesture preferences and memorability toward a practical hand-gesture vocabulary for smart televisions. IEEE Access 3 (2015), 543–555. DOI:
[53]
Haiwei Dong, Nadia Figueroa, and Abdulmotaleb El Saddik. 2015b. An elicitation study on gesture attitudes and preferences towards an interactive hand-gesture vocabulary. In Proceedings of the 23rd ACM International Conference on Multimedia (MM’15). ACM, New York, NY, 999–1002. DOI:
[54]
Guiying Du, Auriol Degbelo, and Christian Kray. 2019. User-generated gestures for voting and commenting on immersive displays in urban planning. Multimod. Technol. Interact. 3, 2 (2019), 31. DOI:
[55]
Guiying Du, Auriol Degbelo, Christian Kray, and Marco Painho. Autumn 2018. Gestural interaction with 3D objects shown on public displays: An elicitation study. Interact. Des. Arch. 38 (Autumn 2018), 184–202.
[56]
Jane L. E, Ilene L. E, James A. Landay, and Jessica R. Cauchard. 2017. Drone & Wo: Cultural influences on human-drone interaction techniques. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’17). Association for Computing Machinery, New York, NY, 6794–6799. DOI:
[57]
Orlando Erazo, Yosra Rekik, Laurent Grisoni, and José A. Pino. 2017. Understanding gesture articulations variability. In Proceedings of the IFIP TC13 Conference on Human-Computer Interaction, Lecture Notes in Computer Science (INTERACT’17), Vol. 10514. Springer, Cham, 293–314. DOI:
[58]
Shariff A. M. Faleel, Michael Gammon, Yumiko Sakamoto, Carlo Menon, and Pourang Irani. 2020. User gesture elicitation of common smartphone tasks for hand proximate user interfaces. In Proceedings of the 11th International Conference on Augmented Human (AH’20). ACM, New York, NY, 1–8. DOI:
[59]
Hessam Jahani Fariman, Hasan J. Alyamani, Manolya Kavakli, and Len Hamey. 2016. Designing a user-defined gesture vocabulary for an in-vehicle climate control system. In Proceedings of the 28th Australian Conference on Computer-Human Interaction (OzCHI’16). ACM, New York, NY, 391–395. DOI:
[60]
Yasmin Felberbaum and Joel Lanir. 2016. Step by step: Investigating foot gesture interaction. In Proceedings of the ACM International Working Conference on Advanced Visual Interfaces (AVI’16). ACM, New York, NY, 306–307. DOI:
[61]
Yasmin Felberbaum and Joel Lanir. 2018. Better understanding of foot gestures: An elicitation study. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI’18). ACM, New York, NY, 1–12. DOI:
[62]
Michela Ferron, Nadia Mana, and Ornella Mich. 2019. Designing mid-air gesture interaction with mobile devices for older adults. In Perspectives on Human-Computer Interaction Research with Older People, Sergio Sayago (Ed.). Springer International Publishing, Berlin, 81–100. DOI:
[63]
Leah Findlater, Ben Lee, and Jacob Wobbrock. 2012. Beyond QWERTY: Augmenting touch screen keyboards with multi-touch gestures for non-alphanumeric input. In Proceedings of ACM Conference on Human Factors in Computing Systems (CHI’12). ACM, New York, NY, 2679–2682. DOI:
[64]
Justin W. Firestone, Rubi Quiñones, and Brittany A. Duncan. 2019. Learning from users: An elicitation study and taxonomy for communicating small unmanned aerial system states through gestures. In Proceedings of the 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI’19). IEEE Computer Society Press, Piscataway, NJ, 163–171. DOI:
[65]
Euan Freeman, Gareth Griffiths, and Stephen A. Brewster. 2017. Rhythmic micro-gestures: Discreet interaction on-the-go. In Proceedings of the ACM Conference on Multimodal Interaction (ICMI’17). ACM, New York, NY, 115–119. DOI:
[66]
Mathias Frisch, Jens Heydekorn, and Raimund Dachselt. 2009. Investigating multi-touch and pen gestures for diagram editing on interactive surfaces. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces (ITS’09). ACM, New York, NY, 149–156. DOI:
[67]
Priya Ganapathi and Keyur Sorathia. 2019. Elicitation study of body gestures for locomotion in HMD-VR interfaces in a sitting-position. In Proceedings of the ACM Conference on Motion, Interaction, and Games (MIG’19). ACM, New York, NY, 1–10. DOI:
[68]
Franca Garzotto, Mirko Gelsomini, Roberto Mangano, Luigi Oliveto, and Matteo Valoriani. 2014. From desktop to touchless interfaces: A model based approach. In Proceedings of the ACM International Working Conference on Advanced Visual Interfaces (AVI’14). ACM, New York, NY, 261–264. DOI:
[69]
Vito Gentile, Daniele Fundarò, and Salvatore Sorce. 2019. Elicitation and evaluation of zoom gestures for touchless interaction with desktop displays. In Proceedings of the 8th ACM International Symposium on Pervasive Displays (PerDis’19). ACM, New York, NY, 1–7. DOI:
[70]
Vito Gentile, Salvatore Sorce, Alessio Malizia, Fabrizio Milazzo, and Antonio Gentile. 2017. Investigating how user avatar in touchless interfaces affects perceived cognitive load and two-handed interactions. In Proceedings of the 6th ACM International Symposium on Pervasive Displays (PerDis’17). ACM, New York, NY, 1–7. DOI:
[71]
Bogdan-Florin Gheran, Jean Vanderdonckt, and Radu-Daniel Vatavu. 2018. Gestures for smart rings: Empirical results, insights, and design implications. In Proceedings of the 2018 Designing Interactive Systems Conference (DIS’18). ACM, New York, NY, 623–635. DOI:
[72]
Antonio Gomes, Lahiru Lakmal Priyadarshana, Aaron Visser, Juan Pablo Carrascal, and Roel Vertegaal. 2018. Magicscroll: A rollable display device with flexible screen real estate and gestural input. In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI’18). ACM, New York, NY, 1–11. DOI:
[73]
Glebys Gonzalez, Naveen Madapana, Rahul Taneja, Lingsong Zhang, Richard Rodgers, and Juan P. Wachs. 2018. Looking beyond the gesture: Vocabulary acceptability criteria for gesture elicitation studies. Proc. Hum. Fact. Ergon. Soc. Annu. Meet. 62, 1 (2018), 997–1001. DOI:
[74]
Daniela Grijincu, Miguel A. Nacenta, and Per Ola Kristensson. 2014. User-defined interface gestures: Dataset and analysis. In Proceedings of the 9th ACM International Conference on Interactive Tabletops and Surfaces (ITS’14). ACM, New York, NY, 25–34. DOI:
[75]
Josefina Guerrero-García, Claudia González, and David Pinto. 2017. Studying user-defined body gestures for navigating interactive maps. In Proceedings of the XVIII International Conference on Human Computer Interaction (Interacción’17). ACM, New York, NY, 49:1–49:4. DOI:
[76]
Saikat Gupta, Sujin Jang, and Karthik Ramani. 2014. PuppetX: A framework for gestural interactions with user constructed playthings. In Proceedings of the ACM International Working Conference on Advanced Visual Interfaces (AVI’14). ACM, New York, NY, 73–80. DOI:
[77]
Robin Guérit, Alessandro Cierro, Jean Vanderdonckt, and Jorge Luis Pérez-Medina. 2019. Gesture elicitation and usability testing for an armband interacting with netflix and spotify. In Proceedings of the International Conference on Information Technology & Systems, Advances in Intelligent Systems and Computing (ICITS’19), Álvaro Rocha, Carlos Ferrás, and Manolo Paredes (Eds.). Springer International Publishing, Berlin, 625–637. DOI:
[78]
Teng Han, Khalad Hasan, Keisuke Nakamura, Randy Gomez, and Pourang Irani. 2017. SoundCraft: Enabling spatial interactions on smartwatches using hand generated acoustics. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology (UIST’17). ACM, New York, NY, 579–591. DOI:
[79]
Hayati Havlucu, Mehmet Yarkın Ergin, İdil Bostan, Oğuz Turan Buruk, Tilbe Göksun, and Oğuzhan Özcan. 2017. It made more sense: Comparison of user-elicited on-skin touch and freehand gesture sets. In Distributed, Ambient and Pervasive Interactions, Norbert Streitz and Panos Markopoulos (Eds.). Springer International Publishing, Cham, 159–171.
[80]
Niels Henze, Andreas Löcken, Susanne Boll, Tobias Hesselmann, and Martin Pielot. 2010. Free-hand gestures for music playback: Deriving gestures with a user-centred process. In Proceedings of the 9th International Conference on Mobile and Ubiquitous Multimedia (MUM’10). ACM, New York, NY, 1–10. DOI:
[81]
Nico Herbig, Santanu Pal, Josef van Genabith, and Antonio Krüger. 2019. Multi-modal approaches for post-editing machine translation. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’19). ACM, New York, NY, 231:1–231:11. DOI:
[82]
Jahani F. Hessam, Massimo Zancanaro, Manolya Kavakli, and Mark Billinghurst. 2017. Towards optimization of mid-air gestures for in-vehicle interactions. In Proceedings of the 29th Australian Conference on Computer-Human Interaction (OZCHI’17). ACM, New York, NY, 126–134. DOI:
[83]
Jens Heydekorn, Mathias Frisch, and Raimund Dachselt. 2011. Evaluating a user-elicited gesture set for interactive displays. In Mensch & Computer 2011: überMEDIEN|ÜBERmorgen, Maximilian Eibl (Ed.). Oldenbourg Verlag, München, 191–200.
[84]
Martin Hitz, Ekaterina Konigstorfer, and Ekaterina Peshkova. 2019. Exploring cognitive load of single and mixed mental models gesture sets for UAV navigation. In 1st International Workshop on Human-Drone Interaction. Ecole Nationale de l’Aviation Civile ENAC, Glasgow, United Kingdom.
[85]
Lynn Hoff, Eva Hornecker, and Sven Bertel. 2016. Modifying gesture elicitation: Do kinaesthetic priming and increased production reduce legacy bias?. In Proceedings of the TEI’16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI’16). Association for Computing Machinery, New York, NY, 86–91. DOI:
[86]
Wenjun Hou, Guangyu Feng, and Yiting Cheng. 2019. A fuzzy interaction scheme of mid-air gesture elicitation. J. Vis. Commun. Image Represent. 64 (2019), 102637. DOI:
[87]
Jochen Huber, Mohamed Sheik-Nainar, and Nada Matic. 2016. Towards an interaction language for force-enabled touchpads in cars. In Adjunct Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI’16 Adjunct). Association for Computing Machinery, New York, NY, 197–202. DOI:
[88]
Jochen Huber, Mohamed Sheik-Nainar, and Nada Matic. 2017. Force-enabled touch input on the steering wheel: An elicitation study. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications Adjunct (AutomotiveUI’17). Association for Computing Machinery, New York, NY, 168–172. DOI:
[89]
Hessam Jahani, Hasan J. Alyamani, Manolya Kavakli, Arindam Dey, and Mark Billinghurst. 2017. User evaluation of hand gestures for designing an intelligent in-vehicle interface. In Designing the Digital Transformation, Alexander Maedche, Jan vom Brocke, and Alan Hevner (Eds.). Springer International Publishing, Cham, 104–121.
[90]
Hessam Jahani and Manolya Kavakli. 2018. Exploring a user-defined gesture vocabulary for descriptive mid-air interactions. Cogn. Technol. Work 20, 1 (2018), 11–22. DOI:
[91]
Hessam Jahani-Fariman. 2017. Developing a user-defined interface for in-vehicle mid-air gestural interactions. In Proceedings of the 22nd International Conference on Intelligent User Interfaces Companion (IUI’17 Companion). Association for Computing Machinery, New York, NY, 165–168. DOI:
[92]
Xu Jia, Kun-Pyo Lee, and Hyeon-Jeong Suk. 2011. Considerations of applying surface-based phone gestures to natural context. In Proceedings of the 13th International Conference on Ubiquitous Computing (UbiComp’11). Association for Computing Machinery, New York, NY, 545–546. DOI:
[93]
Tero Jokela, Parisa Pour Rezaei, and Kaisa Väänänen. 2016. Using elicitation studies to generate collocated interaction methods. In Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct (MobileHCI’16). ACM, New York, NY, 1129–1133. DOI:
[94]
Katherina A. Jurewicz, David M. Neyens, Ken Catchpole, and Scott T. Reeves. 2018. Developing a 3D gestural interface for anesthesia-related human-computer interaction tasks using both experts and novices. Hum. Fact. 60, 7 (2018), 992–1007. DOI:. PMID: 29906400.
[95]
Jean-François Jégo, Alexis Paljic, and Philippe Fuchs. 2013. User-defined gestural interaction: A study on gesture memorization. In Proceedings of the IEEE Symposium on 3D User Interfaces (3DUI’13). IEEE, Los Alamitos, CA, 7–10. DOI:
[96]
Shaun K. Kane, Jacob O. Wobbrock, and Richard E. Ladner. 2011. Usable gestures for blind people: Understanding preference and performance. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’11). Association for Computing Machinery, New York, NY, 413–422. DOI:
[97]
Frederic Kerber, Markus Löchtefeld, Antonio Krüger, Jess McIntosh, Charlie McNeill, and Mike Fraser. 2016. Understanding same-side interactions with wrist-worn devices. In Proceedings of the 9th Nordic Conference on Human-Computer Interaction (NordiCHI’16). ACM, New York, NY, 28:1–28:10. DOI:
[98]
Sumbul Khan, Hasitha Rajapakse, Haimo Zhang, Suranga Nanayakkara, Bige Tuncer, and Lucienne Blessing. 2017. GesCAD: An intuitive interface for conceptual architectural design. In Proceedings of the 29th Australian Conference on Computer-Human Interaction (OZCHI’17). Association for Computing Machinery, New York, NY, 402–406. DOI:
[99]
Sumbul Khan and Bige Tunçer. 2017. Intuitive and effective gestures for conceptual architectural design: An analysis of user elicited hand gestures for 3D CAD modeling. In Proceedings of the 37th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA’17). CUMINCAD, Cambridge, 318–323.
[100]
Sumbul Khan and Bige Tunçer. 2019. Gesture and speech elicitation for 3D CAD modeling in conceptual design. Autom. Construct. 106 (2019), 102847. DOI:
[101]
Sumbul Khan, Bige Tunçer, Ramanathan Subramanian, and Lucienne Blessing. 2019. 3D CAD modeling using gestures and speech: Investigating CAD legacy and non-legacy procedures. In “Hello, Culture!” 18th International Conference, CAAD Futures, 18, 18 (2019), 20.
[102]
Hyoyoung Kim, Heesun Kim, Dongeon Lee, and Ji-hyung Park. 2017. User-defined hand gestures for small cylindrical displays. J. Kor. Contents Assoc. 17, 3 (2017), 74–87. DOI:
[103]
Ju-Whan Kim, Han-Jong Kim, and Tek-Jin Nam. 2016. M. Gesture: An acceleration-based gesture authoring system on multiple handheld and wearable devices. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’16). ACM, New York, NY, 2307–2318. DOI:
[104]
KwanMyung Kim, Dongwoo Joo, and Kun-Pyo Lee. 2010. Wearable-object-based interaction for a mobile audio device. In CHI’10 Extended Abstracts on Human Factors in Computing Systems (CHI EA’10). ACM, New York, NY, 3865–3870. DOI:
[105]
Lawrence H. Kim, Daniel S. Drew, Veronika Domova, and Sean Follmer. 2020. User-defined swarm robot control. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’20). Association for Computing Machinery, New York, NY, 1–13. DOI:
[106]
Sangyeon Kim and Sangwon Lee. 2020. Touch digitality: Affordance effects of visual properties on gesture selection. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA’20). Association for Computing Machinery, New York, NY, 1–8. DOI:
[107]
Felix Kistler and Elisabeth André. 2013. User-defined body gestures for an interactive storytelling scenario. In Human-Computer Interaction (INTERACT’13), Paula Kotzé, Gary Marsden, Gitte Lindgaard, Janet Wesson, and Marco Winckler (Eds.). Springer, Berlin, 264–281.
[108]
Marion Koelle, Swamy Ananthanarayan, Simon Czupalla, Wilko Heuten, and Susanne Boll. 2018. Your smart glasses’ camera bothers me!: Exploring opt-in and opt-out gestures for privacy mediation. In Proceedings of the 10th Nordic Conference on Human-Computer Interaction (NordiCHI’18). ACM, New York, NY, 473–481. DOI:
[109]
Jung In Koh, Josh Cherian, Paul Taele, and Tracy Hammond. 2019. Developing a hand gesture recognition system for mapping symbolic hand gestures to analogous emojis in computer-mediated communication. ACM Trans. Interact. Intell. Syst. 9, 1 (2019), 1–35. DOI:
[110]
Barry Kollee, Sven Kratz, and Anthony Dunnigan. 2014. Exploring gestural interaction in smart spaces using head mounted devices with ego-centric sensing. In Proceedings of the 2nd ACM Symposium on Spatial User Interaction (SUI’14). Association for Computing Machinery, New York, NY, 40–49. DOI:
[111]
Panayiotis Koutsabasis and Chris K. Domouzis. 2016. Mid-air browsing and selection in image collections. In Proceedings of the International Working Conference on Advanced Visual Interfaces (AVI’16). Association for Computing Machinery, New York, NY, 21–27. DOI:
[112]
Christine Kühnel, Tilo Westermann, Fabian Hemmert, Sven Kratz, Alexander Müller, and Sebastian Möller. 2011. I’m home: Defining and evaluating a gesture set for smart-home control. Int. J. Hum.-Comput. Stud. 69, 11 (2011), 693–704. DOI:
[113]
Daniel Künkel, Birgit Bomsdorf, Rainer Röhrig, Janko Ahlbrandt, and Markus Weigand. 2015. Participative development of touchless user interfaces: Elicitation and evaluation of contactless hand gestures for anesthesia. In Computer Graphics, Visualization, Computer Vision and Image Processing 2015 (2015). IADIS Press, Las Palmas de Gran Canaria, Spain, 43–50. http://www.iadisportal.org/digital-library/participativedevelopment-of-touchless-user-interfaces-elicitation-and-evaluation-of-contactless-hand-gestures-for-anesthesia
[114]
Huy Viet Le, Sven Mayer, Maximilian Weiß, Jonas Vogelsang, Henrike Weingärtner, and Niels Henze. 2020. Shortcut gestures for mobile text editing on fully touch sensitive smartphones. ACM Trans. Comput.-Hum. Interact. 27, 5 (2020), 33:1–33:38. DOI:
[115]
Bokyung Lee, Minjoo Cho, Joonhee Min, and Daniel Saakes. 2016. Posing and acting as input for personalizing furniture. In Proceedings of the 9th Nordic Conference on Human-Computer Interaction (NordiCHI’16). ACM, New York, NY, 44:1–44:10. DOI:
[116]
DoYoung Lee, Youryang Lee, Yonghwan Shin, and Ian Oakley. 2018. Designing socially acceptable hand-to-face input. In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology (UIST’18). ACM, New York, NY, 711–723. DOI:
[117]
DoYoung Lee, Ian Roland Oakley, and YuRyang Lee. 2016. Bodily input for wearables: An elicitation study. Hum. Comput. Interact. 1, 1 (2016), 283–285.
[118]
Lina Lee, Yousra Javed, Steven Danilowicz, and Mary Lou Maher. 2014. Information at the wave of your hand. In Proceedings of HCI Korea (HCIK’15). Hanbit Media, Inc., Seoul, KOR, 63–70.
[119]
Sang-Su Lee, Jeonghun Chae, Hyunjeong Kim, Youn-kyung Lim, and Kun-pyo Lee. 2013. Towards more natural digital content manipulation via user freehand gestural interaction in a living room. In Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp’13). Association for Computing Machinery, New York, NY, 617–626. DOI:
[120]
Sang-Su Lee, Sohyun Kim, Bopil Jin, Eunji Choi, Boa Kim, Xu Jia, Daeeop Kim, and Kun-pyo Lee. 2010. How users manipulate deformable displays as input devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’10). ACM, New York, NY, 1647–1656. DOI:
[121]
Sang-su Lee, Youn-kyung Lim, and Kun-Pyo Lee. 2012. Exploring the effects of size on deformable user interfaces. In Proceedings of the 14th International Conference on Human-computer Interaction with Mobile Devices and Services Companion (MobileHCI’12). ACM, New York, NY, 89–94. DOI:
[122]
Hoo Yong Leng, Noris Mohd Norowi, and Azrul Hazri Jantan. 2017. A user-defined gesture set for music interaction in immersive virtual environment. In Proceedings of the 3rd International Conference on Human-Computer Interaction and User Experience in Indonesia (CHIuXiD’17). Association for Computing Machinery, New York, NY, 44–51. DOI:
[123]
Wing Ho Andy Li, Kening Zhu, and Hongbo Fu. 2017. Exploring the design space of bezel-initiated gestures for mobile interaction. Int. J. Mob. Hum. Comput. Interact. 9, 1 (2017), 16–29. DOI:
[124]
Xuan Li, Daisong Guan, Jingya Zhang, Xingtong Liu, Siqi Li, and Hui Tong. 2019. Exploration of ideal interaction scheme on smart TV: Based on user experience research of far-field speech and mid-air gesture interaction. In Design, User Experience, and Usability. User Experience in Advanced Technological Environments, Aaron Marcus and Wentao Wang (Eds.). Springer International Publishing, Cham, 144–162.
[125]
Hai-Ning Liang, Cary Williams, Myron Semegen, Wolfgang Stuerzlinger, and Pourang Irani. 2012. User-defined surface+motion gestures for 3d manipulation of objects at a distance through a mobile device. In Proceedings of the 10th Asia Pacific Conference on Computer Human Interaction (APCHI’12). Association for Computing Machinery, New York, NY, 299–308. DOI:
[126]
Hongnan Lin. 2019. Using passenger elicitation for developing gesture design guidelines for adjusting highly automated vehicle dynamics. In Companion Publication of the Designing Interactive Systems Conference Companion (DIS’19 Companion). ACM, New York, NY, 97–100. DOI:
[127]
Qi Feng Liu, Keiko Katsuragawa, and Edward Lank. 2019. Eliciting wrist and finger gestures to guide recognizer design. In Proceedings of Graphics Interface 2019 (GI’19). Canadian Information Processing Society, Kingston, Ontario, 9. DOI:
[128]
Jessica Lo and Audrey Girouard. 2017. Bendy: Exploring mobile gaming with flexible devices. In Proceedings of the Eleventh International Conference on Tangible, Embedded, and Embodied Interaction (TEI’17). Association for Computing Machinery, New York, NY, 163–172. DOI:
[129]
Yihua Lou, Wenjun Wu, Radu-Daniel Vatavu, and Wei-Tek Tsai. 2017. Personalized gesture interactions for cyber-physical smart-home environments. Sci. Chin. Inf. Sci. 60, 7 (2017), 072104. DOI:
[130]
Byron M. Lowens. 2018. Toward privacy enhanced solutions for granular control over health data collected by wearable devices. In Proceedings of the Workshop on MobiSys Ph.D. Forum (MobiSys PhD Forum’18). ACM, New York, NY, 5–6. DOI:
[131]
Vikas Luthra and Sanjay Ghosh. 2015. Understanding, evaluating and analyzing touch screen gestures for visually impaired users in mobile environment. In Universal Access in Human-Computer Interaction. Access to Interaction, Margherita Antona and Constantine Stephanidis (Eds.). Springer International Publishing, Cham, 25–36.
[132]
Andreas Löcken, Tobias Hesselmann, Martin Pielot, Niels Henze, and Susanne Boll. 2012. User-centred process for the definition of free-hand gestures applied to controlling music playback. Multimedia Syst. 18, 1 (2012), 15–31. DOI:
[133]
Naveen Madapana, Glebys Gonzalez, Richard Rodgers, Lingsong Zhang, and Juan P. Wachs. 2018. Gestures for picture archiving and communication systems (PACS) operation in the operating room: Is there any standard? PLoS One 13, 6 (2018), e0198092. DOI:
[134]
Naveen Madapana, Glebys Gonzalez, Rahul Taneja, Richard Rodgers, Lingsong Zhang, and Juan Wachs. 2019. Preference elicitation: Obtaining gestural guidelines for PACS in neurosurgery. Int. J. Med. Inf. 130 (2019), 103934. DOI:
[135]
Nathan Magrofuoco, Jorge-Luis Pérez-Medina, Paolo Roselli, Jean Vanderdonckt, and Santiago Villarreal. 2019. Eliciting contact-based and contactless gestures with radar-based sensors. IEEE Access 7 (2019), 176982–176997. DOI:
[136]
Nathan Magrofuoco and Jean Vanderdonckt. 2019. Gelicit: A cloud platform for distributed gesture elicitation studies. Proc. ACM Hum.-Comput. Interact. 3, EICS (2019), 1–41. DOI:
[137]
Meethu Malu, Pramod Chundury, and Leah Findlater. 2018. Exploring accessible smartwatch interactions for people with upper body motor impairments. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’18). ACM, New York, NY, 488:1–488:12. DOI:
[138]
Vito M. Manghisi, Antonio E. Uva, Michele Fiorentino, Michele Gattullo, Antonio Boccaccio, and Giuseppe Monno. 2018. Enhancing user engagement through the user centric design of a mid-air gesture-based interface for the navigation of virtual-tours in cultural heritage expositions. J. Cult. Herit. 32 (2018), 186–197. DOI:
[139]
Francisco J. Martínez-Ruiz, Sebastian F. Rauh, and Gerrit Meixner. 2020. Understanding peripheral audiences: From subtle to full body gestures. In Human Interaction and Emerging Technologies, Tareq Ahram, Redha Taiar, Serge Colson, and Arnaud Choplin (Eds.). Springer International Publishing, Cham, 489–495.
[140]
Kohei Matsumura. 2015. Studying user-defined gestures toward off the screen interactions. In Proceedings of the International Conference on Interactive Tabletops & Surfaces (ITS’15). ACM, New York, NY, 295–300. DOI:
[141]
Dan Mauney, Jonathan Howarth, Andrew Wirtanen, and Miranda Capra. 2010. Cultural similarities and differences in user-defined gestures for touchscreen user interfaces. In CHI’10 Extended Abstracts on Human Factors in Computing Systems (CHI EA’10). Association for Computing Machinery, New York, NY, 4015–4020. DOI:
[142]
Keenan R. May, Thomas M. Gable, and Bruce N. Walker. 2017. Designing an in-vehicle air gesture set using elicitation methods. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI’17). Association for Computing Machinery, New York, NY, 74–83. DOI:
[143]
John C. McClelland, Robert J. Teather, and Audrey Girouard. 2017. Haptobend: Shape-changing passive haptic feedback in virtual reality. In Proceedings of the 5th Symposium on Spatial User Interaction (SUI’17). Association for Computing Machinery, New York, NY, 82–90. DOI:
[144]
Fabrizio Milazzo, Vito Gentile, Antonio Gentile, and Salvatore Sorce. 2017. KIND-DAMA: A modular middleware for Kinect-like device data management. Softw.: Pract. Exp. 48, 1 (2017), 141–160. DOI:
[145]
Gourav Modanwal and Kishor Sarawadekar. 2017. A new dactylology and interactive system development for blind-computer interaction. IEEE Trans. Hum.-Mach. Syst. PP, 99 (2017), 1–6. DOI:
[146]
Gourav Modanwal and Kishor Sarawadekar. 2018. A gesture elicitation study with visually impaired users. In HCI International 2018—Posters’ Extended Abstracts (Communications in Computer and Information Science). Springer, Cham, 54–61. DOI:
[147]
Meredith Ringel Morris. 2012. Web on the wall: Insights from a multimodal interaction elicitation study. In Proceedings of the International Conference on Interactive Tabletops and Surfaces (ITS’12). ACM, New York, NY, 95–104. DOI:
[148]
Meredith Ringel Morris, Jacob O. Wobbrock, and Andrew D. Wilson. 2010. Understanding users’ preferences for surface gestures. In Proceedings of Graphics Interface (GI’10). Canadian Information Processing Society, 261–268.
[149]
Miguel A. Nacenta, Yemliha Kamber, Yizhou Qiang, and Per Ola Kristensson. 2013. Memorability of pre-designed and user-defined gesture sets. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’13). Association for Computing Machinery, New York, NY, 1099–1108. DOI:
[150]
Vijayakumar Nanjappan, Hai-Ning Liang, Feiyu Lu, Konstantinos Papangelis, Yong Yue, and Ka Lok Man. 2018. User-elicited dual-hand interactions for manipulating 3D objects in virtual reality environments. Hum.-centr. Comput. Inf. Sci. 8, 1 (2018), 31. DOI:
[151]
Vijayakumar Nanjappan, Rongkai Shi, Hai-Ning Liang, Kim King-Tong Lau, Yong Yue, and Katie Atkinson. 2019a. Towards a taxonomy for in-vehicle interactions using wearable smart textiles: Insights from a user-elicitation study. Multimod. Technol. Interact. 3, 2 (2019), 33. DOI:
[152]
Vijayakumar Nanjappan, Rongkai Shi, Hai-Ning Liang, Haoru Xiao, Kim King-Tong Lau, and Khalad Hasan. 2019b. Design of interactions for handheld augmented reality devices using wearable smart textiles: Findings from a user elicitation study. Appl. Sci. 9, 15 (2019), 3177. DOI:
[153]
Andrés Adolfo Navarro-Newball, Isidro Moreno, Edmond Prakash, Ali Arya, Victoria E. Contreras, Victor A. Quiceno, Santiago Lozano, Juan David Mejía, and Diego Fernando Loaiza. 2016. Gesture based human motion and game principles to aid understanding of science and cultural practices. Multimed. Tools Appl. 75, 19 (2016), 11699–11722. DOI:
[154]
Samuel Navas Medrano, Max Pfeiffer, and Christian Kray. 2017. Enabling remote deictic communication with mobile devices: An elicitation study. In Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI’17). Association for Computing Machinery, New York, NY, Article 19, 13 pages. DOI:
[155]
Michael Nebeling, Alexander Huber, David Ott, and Moira C. Norrie. 2014. Web on the wall reloaded: Implementation, replication and refinement of user-defined interaction sets. In Proceedings of the 9th ACM International Conference on Interactive Tabletops and Surfaces (ITS’14). ACM, New York, NY, 15–24. DOI:
[156]
Jaye Nias. 2015. Guessability as an ethnographic study of mobile technology usage in Kenya. In Proceedings of the Seventh International Conference on Information and Communication Technologies and Development (ICTD’15). ACM, New York, NY, 1–4. DOI:
[157]
Juliet Norton, Chadwick A. Wingrave, and Joseph J. LaViola, Jr. 2010. Exploring strategies and guidelines for developing full body video game interfaces. In Proceedings of the 5th International Conference on the Foundations of Digital Games (FDG’10). ACM, New York, NY, 155–162. DOI:
[158]
Mohammad Obaid, Markus Häring, Felix Kistler, René Bühling, and Elisabeth André. 2012. User-defined body gestures for navigational control of a humanoid robot. In Social Robotics, Shuzhi Sam Ge, Oussama Khatib, John-John Cabibihan, Reid Simmons, and Mary-Anne Williams (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 367–377.
[159]
Mohammad Obaid, Felix Kistler, Gabrielė Kasparavičiūtė, Asim Evren Yantaç, and Morten Fjeld. 2016. How would you gesture navigate a drone? A user-centered approach to control a drone. In Proceedings of the 20th International Academic Mindtrek Conference (AcademicMindtrek’16). Association for Computing Machinery, New York, NY, 113–121. DOI:
[160]
Alex Olwal, Thad Starner, and Gowa Mainini. 2020. E-textile microinteractions: Augmenting twist with flick, slide and grasp gestures for soft electronics. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’20). Association for Computing Machinery, New York, NY, 1–13. DOI:
[161]
Francisco R. Ortega, Alain Galvan, Katherine Tarre, Armando Barreto, Naphtali Rishe, Jonathan Bernal, Ruben Balcazar, and Jason-Lee Thomas. 2017. Gesture elicitation for 3D travel via multi-touch and mid-Air systems for procedurally generated pseudo-universe. In Proceedings of the IEEE Symposium on 3D User Interfaces (3DUI’17). IEEE, 144–153. DOI:
[162]
Francisco R. Ortega, Katherine Tarre, Mathew Kress, Adam S. Williams, Armando B. Barreto, and Naphtali D. Rishe. 2019. Selection and manipulation whole-body gesture elicitation study in virtual reality. In Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces (VR’19). IEEE, 1723–1728. DOI:
[163]
Mehdi Ousmer, Jean Vanderdonckt, and Sabin Buraga. 2019. An ontology for reasoning on body-based gestures. In Proceedings of the ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS’19). ACM, New York, NY, 1–6. DOI:
[164]
Maulishree Pandey, Hariharan Subramonyam, Brooke Sasia, Steve Oney, and Sile O’Modhrain. 2020. Explore, create, annotate: Designing digital drawing tools with visually impaired people. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’20). Association for Computing Machinery, New York, NY, 1–12. DOI:
[165]
Donggun Park, Yu Shin Lee, Sejin Song, Ilsun Rhiu, Sanghyun Kwon, Yongdae An, and Myung Hwan Yun. 2016. User centered gesture development for smart lighting. In Proceedings of HCI Korea (HCIK’16). Hanbit Media, Inc., Jeongseon, Republic of Korea, 146–150. DOI:
[166]
Ekaterina Peshkova and Martin Hitz. 2017a. Coherence evaluation of input vocabularies to enhance usability and user experience. In Proceedings of the ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS’17). ACM, New York, NY, 15–20. DOI:
[167]
Ekaterina Peshkova and Martin Hitz. 2017b. Exploring user-defined gestures to control a group of four UAVs. In Proceedings of the 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN’17). IEEE, Lisbon, Portugal, 169–174. DOI:
[168]
Ekaterina Peshkova, Martin Hitz, and David Ahlström. 2017. Exploring user-defined gestures and voice commands to control an unmanned aerial vehicle. In Intelligent Technologies for Interactive Entertainment, Ronald Poppe, John-Jules Meyer, Remco Veltkamp, and Mehdi Dastani (Eds.). Springer International Publishing, Cham, 47–62.
[169]
Tran Pham, Jo Vermeulen, Anthony Tang, and Lindsay MacDonald Vermeulen. 2018. Scale impacts elicited gestures for manipulating holograms: Implications for AR gesture design. In Proceedings of the Designing Interactive Systems Conference (DIS’18). ACM, New York, NY, 227–240. DOI:
[170]
Thammathip Piumsomboon, Adrian Clark, Mark Billinghurst, and Andy Cockburn. 2013a. User-defined gestures for augmented reality. In CHI’13 Extended Abstracts on Human Factors in Computing Systems (CHI EA’13). Association for Computing Machinery, New York, NY, 955–960. DOI:
[171]
Thammathip Piumsomboon, Adrian Clark, Mark Billinghurst, and Andy Cockburn. 2013b. User-defined gestures for augmented reality. In Human-Computer Interaction–INTERACT 2013, Paula Kotzé, Gary Marsden, Gitte Lindgaard, Janet Wesson, and Marco Winckler (Eds.). Springer, Berlin, 282–299.
[172]
Henning Pohl and Michael Rohs. 2014. Around-device devices: My coffee mug is a volume dial. In Proceedings of the 16th International Conference on Human-computer Interaction with Mobile Devices & Services (MobileHCI’14). ACM, New York, NY, 81–90. DOI:
[173]
Patryk Pomykalski, Mikołaj P. Woźniak, Paweł W. Woźniak, Krzysztof Grudzień, Shengdong Zhao, and Andrzej Romanowski. 2020. Considering wake gestures for smart assistant use. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA’20). Association for Computing Machinery, New York, NY, 1–8. DOI:
[174]
Patricia Pons and Javier Jaen. 2019. Interactive spaces for children: Gesture elicitation for controlling ground mini-robots. J. Amb. Intell. Hum. Comput. 11 (2019), 2467–2488. DOI:
[175]
Benjamin Poppinga, Alireza Sahami Shirazi, Niels Henze, Wilko Heuten, and Susanne Boll. 2014. Understanding shortcut gestures on mobile touch devices. In Proceedings of the 16th International Conference on Human-Computer Interaction with Mobile Devices & Services (MobileHCI’14). Association for Computing Machinery, New York, NY, 173–182. DOI:
[176]
Dmitry Pyryeskin, Mark Hancock, and Jesse Hoey. 2012. Comparing elicited gestures to designer-created gestures for selection above a multitouch surface. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces (ITS’12). ACM, New York, NY, 1–10. DOI:
[177]
Francis Quek, David McNeill, Rashid Ansari, Xin-Feng Ma, Robert Bryll, Susan Duncan, and Karl E. McCullough. 1999. Gesture cues for conversational interaction in monocular video. In Proceedings of the International Workshop on Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems in Conjunction with ICCV’99. IEEE, 119–126. DOI:
[178]
Francis Quek, David McNeill, Robert Bryll, Susan Duncan, Xin-Feng Ma, Cemil Kirbas, Karl E. McCullough, and Rashid Ansari. 2002. Multimodal human discourse: Gesture and speech. ACM Trans. Comput.-Hum. Interact. 9, 3 (2002), 171–193. DOI:
[179]
Tsele Rakubutu, Helene Gelderblom, and Jason Cohen. 2014. Participatory design of touch gestures for informational search on a tablet device. In Proceedings of the Southern African Institute for Computer Scientist and Information Technologists Annual Conference on SAICSIT Empowered by Technology (SAICSIT’14). Association for Computing Machinery, New York, NY, 276–285. DOI:
[180]
Silvia Ramis, Francisco J. Perales, Cristina Manresa-Yee, and Antoni Bibiloni. 2015. Usability study of gestures to control a smart-TV. In Applications and Usability of Interactive TV (Communications in Computer and Information Science), María José Abásolo and Raoni Kulesza (Eds.). Springer International Publishing, Cham, 135–146. DOI:
[181]
Hanae Rateau, Laurent Grisoni, and Bruno De Araujo. 2014. Mimetic interaction spaces: Controlling distant displays in pervasive environments. In Proceedings of the 19th International Conference on Intelligent User Interfaces (IUI’14). ACM, New York, NY, 89–94. DOI:
[182]
Isabel Benavente Rodriguez and Nicolai Marquardt. 2017. Gesture elicitation study on how to opt-in & opt-out from interactions with public displays. In Proceedings of the ACM International Conference on Interactive Surfaces and Spaces (ISS’17). ACM, New York, NY, 32–41. DOI:
[183]
Marco Romano, Andrea Bellucci, and Ignacio Aedo. 2015. Understanding touch and motion gestures for blind people on mobile devices. In Human-Computer Interaction–INTERACT 2015, Lecture Notes in Computer Science. Springer, Cham, 38–46. DOI:
[184]
Gustavo Alberto Rovelo Ruiz, Davy Vanacken, Kris Luyten, Francisco Abad, and Emilio Camahort. 2014. Multi-viewer gesture-based interaction for omni-directional video. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’14). ACM, New York, NY, 4077–4086. DOI:
[185]
David M. Roy, Marilyn Panayi, Richard Foulds, Roman Erenshteyn, William S. Harwin, and Robert Fawcus. 1994. The enhancement of interaction for people with severe speech and physical impairment through the computer recognition of gesture and manipulation. Presence: Teleoperat. Virt. Environ. 3, 3 (1994), 227–235. DOI:
[186]
Jaime Ruiz, Yang Li, and Edward Lank. 2011. User-defined motion gestures for mobile interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’11). ACM, New York, NY, 197–206. DOI:
[187]
Jaime Ruiz and Daniel Vogel. 2015. Soft-constraints to reduce legacy and performance bias to elicit whole-body gestures with low arm fatigue. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI’15). ACM, New York, NY, 3347–3350. DOI:
[188]
Natalie Ruiz, Fang Chen, and Eric Choi. 2006. Exploratory study of lexical patterns in multimodal cues. In Proceedings of the NICTA-HCSNet Multimodal User Interaction Workshop, Volume 57 (MMUI’05). Australian Computer Society, Inc., Darlinghurst, Australia, 47–50. http://dl.acm.org/citation.cfm?id=1151804.1151812
[189]
Dominik Rupprecht, Rainer Blum, and Birgit Bomsdorf. 2013. Towards a gesture set for a virtual try-on. In Proceedings of the IADIS International Conference on Game and Entertainment Technologies (part of MCCSIS 2013). IADIS Press, 273–277.
[190]
Vít Rusnák, Caroline Appert, Olivier Chapuis, and Emmanuel Pietriga. 2018. Designing coherent gesture sets for multi-scale navigation on tabletops. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’18). ACM, New York, NY, 1–12. DOI:
[191]
Karen Rust, Meethu Malu, Lisa Anthony, and Leah Findlater. 2014. Understanding child-defined gestures and children’s mental models for touchscreen tabletop interaction. In Proceedings of the ACM Conference on Interaction Design and Children (IDC’14). ACM, New York, NY, 201–204. DOI:
[192]
Roman Rädle, Hans-Christian Jetter, Mario Schreiner, Zhihao Lu, Harald Reiterer, and Yvonne Rogers. 2015. Spatially-aware or spatially-agnostic?: Elicitation and evaluation of user-defined cross-device interactions. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI’15). ACM, New York, NY, 3913–3922. DOI:
[193]
Atilim Sahin. 2013. Hacking the gestures of past for future interactions. In Proceedings of 11th International Conference on Advances in Mobile Computing & Multimedia (MoMM’13). ACM, New York, NY, 484–489. DOI:
[194]
Gazelle Saniee-Monfared, Kevin Fan, Qiang Xu, Sachi Mizobuchi, Lewis Zhou, Pourang Polad Irani, and Wei Li. 2020. Tent mode interactions: Exploring collocated multi-user interaction on a foldable device. In Proceedings of the 22nd International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI’20). Association for Computing Machinery, New York, NY, 1–12. DOI:
[195]
Ovidiu-Andrei Schipor and Radu-Daniel Vatavu. 2018. Invisible, inaudible, and impalpable: Users’ preferences and memory performance for digital content in thin air. IEEE Perv. Comput. 17, 4 (2018), 76–85. DOI:
[196]
Robin Schweigert, Jan Leusmann, Simon Hagenmayer, Maximilian Weiß, Huy Viet Le, Sven Mayer, and Andreas Bulling. 2019. KnuckleTouch: Enabling knuckle gestures on capacitive touchscreens using deep learning. In Proceedings of Mensch und Computer (MuC’19). Association for Computing Machinery, Hamburg, Germany, 387–397. DOI:
[197]
Marcos Serrano, Barrett M. Ens, and Pourang P. Irani. 2014. Exploring the use of hand-to-face input for interacting with head-worn displays. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI’14). ACM, New York, NY, 3181–3190. DOI:
[198]
Matthias Seuter, Eduardo Rodriguez Macrillante, Gernot Bauer, and Christian Kray. 2018. Running with drones: Desired services and control gestures. In Proceedings of the 30th Australian Conference on Computer-Human Interaction (OzCHI’18). ACM, New York, NY, 384–395. DOI:
[199]
Teddy Seyed, Chris Burns, Mario Costa Sousa, Frank Maurer, and Anthony Tang. 2012. Eliciting usable gestures for multi-display environments. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces (ITS’12). ACM, New York, NY, 41–50. DOI:
[200]
David A. Shamma, Jennifer Marlow, and Laurent Denoue. 2019. Interacting with smart consumer cameras: Exploring gesture, voice, and AI control in video streaming. In Proceedings of the ACM International Conference on Interactive Experiences for TV and Online Video. ACM, New York, NY, 137–144. DOI:
[201]
Adwait Sharma, Joan Sol Roo, and Jürgen Steimle. 2019. Grasping microgestures: Eliciting single-hand microgestures for handheld objects. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’19). ACM, New York, NY, 402:1–402:13. DOI:
[202]
Alex Shaw and Lisa Anthony. 2016a. Analyzing the articulation features of children’s touchscreen gestures. In Proceedings of the 18th ACM International Conference on Multimodal Interaction (ICMI’16). ACM, New York, NY, 333–340. DOI:
[203]
Alex Shaw and Lisa Anthony. 2016b. Toward a systematic understanding of children’s touchscreen gestures. In Proceedings of the CHI Conference Extended Abstracts on Human Factors in Computing Systems. ACM, New York, NY, 1752–1759. DOI:
[204]
Lei Shi, Yuhang Zhao, and Shiri Azenkot. 2017. Designing interactions for 3D printed models with blind people. In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS’17). ACM, New York, NY, 200–209. DOI:
[205]
Shaikh Shawon Arefin Shimon, Sarah Morrison-Smith, Noah John, Ghazal Fahimi, and Jaime Ruiz. 2015. Exploring user-defined back-of-device gestures for mobile devices. In Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI’15). ACM, New York, NY, 227–232. DOI:
[206]
Shaishav Siddhpuria, Keiko Katsuragawa, James R. Wallace, and Edward Lank. 2017. Exploring at-your-side gestural interaction for ubiquitous environments. In Proceedings of the Conference on Designing Interactive Systems (DIS’17). ACM, New York, NY, 1111–1122. DOI:
[207]
Chaklam Silpasuwanchai and Xiangshi Ren. 2014. Jump and shoot!: Prioritizing primary and alternative body gestures for intense gameplay. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’14). ACM, New York, NY, 951–954. DOI:
[208]
Chaklam Silpasuwanchai and Xiangshi Ren. 2015. Designing concurrent full-body gestures for intense gameplay. Int. J. Hum.-Comput. Stud. 80 (2015), 1–13. DOI:
[209]
Tiffanie R. Smith and Juan E. Gilbert. 2018. Dancing to design: A gesture elicitation study. In Proceedings of the 17th ACM Conference on Interaction Design and Children (IDC’18). ACM, New York, NY, 638–643. DOI:
[210]
Nikita Soni, Schuyler Gleaves, Hannah Neff, Sarah Morrison-Smith, Shaghayegh Esmaeili, Ian Mayne, Sayli Bapat, Carrie Schuman, Kathryn A. Stofer, and Lisa Anthony. 2019. Do user-defined gestures for flatscreens generalize to interactive spherical displays for adults and children? In Proceedings of the 8th ACM International Symposium on Pervasive Displays (PerDis’19). ACM, New York, NY, 24:1–24:7. DOI:
[211]
Nikita Soni, Schuyler Gleaves, Hannah Neff, Sarah Morrison-Smith, Shaghayegh Esmaeili, Ian Mayne, Sayli Bapat, Carrie Schuman, Kathryn A. Stofer, and Lisa Anthony. 2020. Adults’ and children’s mental models for gestural interactions with interactive spherical displays. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’20). Association for Computing Machinery, New York, NY, 1–12. DOI:
[212]
Keyur Sorathia, Minal Jain, Mannu Amrit, Ravi Mokashi Punekar, Saurabh Srivastava, and Nitendra Rajput. 2015. Gesture selection study for a maternal healthcare information system in rural Assam, India. J. Usabil. Stud. 11, 1 (2015), 7–20. http://dl.acm.org/citation.cfm?id=2870660.2870662
[213]
Kashmiri Stec and Lars Bo Larsen. 2018. Gestures for controlling a moveable TV. In Proceedings of the ACM International Conference on Interactive Experiences for TV and Online Video (TVX’18). ACM, New York, NY, 5–14. DOI:
[214]
Hariharan Subramonyam and Eytan Adar. 2019. SmartCues: A multitouch query approach for details-on-demand through dynamically computed overlays. IEEE Trans. Vis. Comput. Graph. 25, 1 (2019), 597–607. DOI:
[215]
Ke Sun, Yuntao Wang, Chun Yu, Yukang Yan, Hongyi Wen, and Yuanchun Shi. 2017. Float: One-handed and touch-free target selection on smartwatches. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’17). ACM, New York, NY, 692–704. DOI:
[216]
Saiganesh Swaminathan, Michael Rivera, Runchang Kang, Zheng Luo, Kadri Bugra Ozutemiz, and Scott E. Hudson. 2019. Input, output and construction methods for custom fabrication of room-scale deployable pneumatic structures. Proc. ACM Interact. Mob. Wear. Ubiq. Technol. 3, 2 (2019), 1–17. DOI:
[217]
Poorna Talkad Sukumar, Anqing Liu, and Ronald Metoyer. 2018. Replicating user-defined gestures for text editing. In Proceedings of the ACM International Conference on Interactive Surfaces and Spaces (ISS’18). ACM, New York, NY, 97–106. DOI:
[218]
Yanke Tan, Sang Ho Yoon, and Karthik Ramani. 2017. BikeGesture: User elicitation and performance of micro hand gesture as input for cycling. In Proceedings of the CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA’17). ACM, New York, NY, 2147–2154. DOI:
[219]
Florent Taralle, Alexis Paljic, Sotiris Manitsaris, Jordane Grenier, and Christophe Guettier. 2015. A consensual and non-ambiguous set of gestures to interact with UAV in infantrymen. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA’15). ACM, New York, NY, 797–803. DOI:
[220]
Giovanni Maria Troiano, Esben Warming Pedersen, and Kasper Hornbæk. 2014. User-defined gestures for elastic, deformable displays. In Proceedings of the ACM International Working Conference on Advanced Visual Interfaces (AVI’14). ACM, New York, NY, 1–8. DOI:
[221]
Huawei Tu, Qihan Huang, Yanchao Zhao, and Boyu Gao. 2020. Effects of holding postures on user-defined touch gestures for tablet interaction. Int. J. Hum.-Comput. Stud. 141 (2020), 102451. DOI:
[222]
Ying-Chao Tung, Chun-Yen Hsu, Han-Yu Wang, Silvia Chyou, Jhe-Wei Lin, Pei-Jung Wu, Andries Valstar, and Mike Y. Chen. 2015. User-defined game input for smart glasses in public space. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI’15). Association for Computing Machinery, Seoul, Republic of Korea, 3327–3336. DOI:
[223]
Antonio Emmanuele Uva, Michele Fiorentino, Vito Modesto Manghisi, Antonio Boccaccio, Saverio Debernardis, Michele Gattullo, and Giuseppe Monno. 2019. A user-centered framework for designing midair gesture interfaces. IEEE Trans. Hum.-Mach. Syst. 49, 5 (2019), 421–429. DOI:
[224]
Consuelo Valdes, Diana Eastman, Casey Grote, Shantanu Thatte, Orit Shaer, Ali Mazalek, Brygg Ullmer, and Miriam K. Konkel. 2014. Exploring the design space of gestural interaction with active tokens through user-defined gestures. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’14). Association for Computing Machinery, New York, NY, 4107–4116. DOI:
[225]
J. Vanattenhoven, D. Geerts, J. Vanderdonckt, and J. Perez-Medina. 2019. The impact of comfortable viewing positions on smart TV gestures. In Proceedings of the International Conference on Information Systems and Computer Science (INCISCOS’19). IEEE Computer Society, Los Alamitos, CA, 296–303. DOI:
[226]
Jean Vanderdonckt, Nathan Magrofuoco, Suzanne Kieffer, Jorge Pérez, Ysabelle Rase, Paolo Roselli, and Santiago Villarreal. 2019. Head and shoulders gestures: Exploring user-defined gestures with upper body. In Design, User Experience, and Usability. User Experience in Advanced Technological Environments (Lecture Notes in Computer Science), Aaron Marcus and Wentao Wang (Eds.). Springer International Publishing, Cyprus, 192–213.
[227]
Radu-Daniel Vatavu. 2012. User-defined gestures for free-hand TV control. In Proceedings of the 10th European Conference on Interactive TV and Video (EuroITV’12). Association for Computing Machinery, New York, NY, 45–48. DOI:
[228]
Radu-Daniel Vatavu. 2013. A comparative study of user-defined handheld vs. freehand gestures for home entertainment environments. J. Amb. Intell. Smart Environ. 5, 2 (2013), 187–211. DOI:
[229]
Radu-Daniel Vatavu. 2017a. Characterizing gesture knowledge transfer across multiple contexts of use. J. Multimodal User Interfaces 11, 4 (2017), 301–314. DOI:
[230]
Radu-Daniel Vatavu. 2017b. Smart-pockets: Body-deictic gestures for fast access to personal data during ambient interactions. Int. J. Hum.-Comput. Stud. 103 (2017), 1–21. DOI:
[231]
Radu-Daniel Vatavu. 2019. The dissimilarity-consensus approach to agreement analysis in gesture elicitation studies. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’19). ACM, New York, NY, 1–13. DOI:
[232]
Radu-Daniel Vatavu, Annette Mossel, and Christian Schönauer. 2016. Digital vibrons: Understanding users’ perceptions of interacting with invisible, zero-weight matter. In Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI’16). Association for Computing Machinery, New York, NY, 217–226. DOI:
[233]
Radu-Daniel Vatavu and Jacob O. Wobbrock. 2016. Between-subjects elicitation studies: Formalization and tool support. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’16). Association for Computing Machinery, New York, NY, 3390–3402. DOI:
[234]
Radu-Daniel Vatavu and Ionut-Alexandru Zaiti. 2014. Leap gestures for TV: Insights from an elicitation study. In Proceedings of the ACM International Conference on Interactive Experiences for TV and Online Video (TVX’14). Association for Computing Machinery, New York, NY, 131–138. DOI:
[235]
Panagiotis Vogiatzidakis and Panayiotis Koutsabasis. 2019. Frame-based elicitation of mid-air gestures for a smart home device ecosystem. Informatics 6, 2 (2019), 23. DOI:
[236]
Panagiotis Vogiatzidakis and Panayiotis Koutsabasis. 2020. Mid-air gesture control of multiple home devices in spatial augmented reality prototype. Multimod. Technol. Interact. 4, 3 (2020), 61. DOI:
[237]
Spyros Vosinakis and Anna Gardeli. 2019. On the use of mobile devices as controllers for first-person navigation in public installations. Information 10, 7 (2019), 238. DOI:
[238]
Tijana Vuletic, Alex Duffy, Laura Hay, Chris McTeague, Gerard Campbell, Pei Ling Choo, and Madeleine Grealy. 2018. Natural and intuitive gesture interaction for 3D object manipulation in conceptual design. In Proceedings of the 15th International Design Conference (Design’18).
[239]
Benjamin Walther-Franks, Tanja Döring, Meltem Yilmaz, and Rainer Malaka. 2019. Embodiment or manipulation? Understanding users’ strategies for free-hand character control. In Proceedings of Mensch und Computer (MuC’19). Association for Computing Machinery, 661–665. DOI:
[240]
Ruolin Wang, Chun Yu, Xing-Dong Yang, Weijie He, and Yuanchun Shi. 2019. EarTouch: Facilitating smartphone use for visually impaired people in mobile and public scenarios. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’19). Association for Computing Machinery, New York, NY, 1–13. DOI:
[241]
Yuntao Wang, Chun Yu, Yuhang Zhao, Jin Huang, and Yuanchun Shi. 2014. Defining and analyzing a gesture set for interactive TV remote on touchscreen phones. In Proceedings of the IEEE 11th International Conference on Ubiquitous Intelligence and Computing and IEEE 11th International Conference on Autonomic and Trusted Computing and IEEE 14th International Conference on Scalable Computing and Communications and Its Associated Workshops. IEEE, 362–365. DOI:
[242]
Florian Weidner and Wolfgang Broll. 2019. Interact with your car: A user-elicited gesture set to inform future in-car user interfaces. In Proceedings of the 18th International Conference on Mobile and Ubiquitous Multimedia (MUM’19). Association for Computing Machinery, New York, NY, Article 11, 12 pages. DOI:
[243]
Martin Weigel, Vikram Mehta, and Jürgen Steimle. 2014. More than touch: Understanding how people use skin as an input surface for mobile computing. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’14). Association for Computing Machinery, New York, NY, 179–188. DOI:
[244]
Wesley Willett, Qi Lan, and Petra Isenberg. 2014. Eliciting multi-touch selection gestures for interactive data graphics. In Short-Paper Proceedings of the European Conference on Visualization (EuroVis’14). Eurographics, Aire-la-Ville, Switzerland. https://hal.inria.fr/hal-00990928
[245]
Adam S. Williams and Francisco R. Ortega. 2020. Understanding gesture and speech multimodal interactions for manipulation tasks in augmented reality using unconstrained elicitation. Proc. ACM Hum.-Comput. Interact. 4, ISS, Article 202 (Nov. 2020), 21 pages. DOI:
[246]
Markus L. Wittorf and Mikkel R. Jakobsen. 2016. Eliciting mid-air gestures for wall-display interaction. In Proceedings of the 9th Nordic Conference on Human-Computer Interaction (NordiCHI’16). Association for Computing Machinery, New York, NY, Article 3, 4 pages. DOI:
[247]
Jacob O. Wobbrock, Htet Htet Aung, Brandon Rothrock, and Brad A. Myers. 2005. Maximizing the guessability of symbolic input. In CHI’05 Extended Abstracts on Human Factors in Computing Systems (CHI EA’05). Association for Computing Machinery, New York, NY, 1869–1872. DOI:
[248]
Jacob O. Wobbrock, Meredith Ringel Morris, and Andrew D. Wilson. 2009. User-defined gestures for surface computing. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’09). Association for Computing Machinery, New York, NY, 1083–1092. DOI:
[249]
Huiyue Wu, Jinxuan Gai, Yu Wang, Jiayi Liu, Jiali Qiu, Jianmin Wang, and Xiaolong (Luke) Zhang. 2020. Influence of cultural factors on freehand gesture design. Int. J. Hum.-Comput. Stud. 143 (2020), 102502. DOI:
[250]
Huiyue Wu, Jiayi Liu, Jiali Qiu, and Xiaolong (Luke) Zhang. 2018. Seeking common ground while reserving differences in gesture elicitation studies. Multimedia Tools Appl. 78 (2018), 14989–15010. DOI:
[251]
Huiyue Wu, Weizhou Luo, Neng Pan, Shenghuan Nan, Yanyi Deng, Shengqian Fu, and Liuqingqing Yang. 2019. Understanding freehand gestures: A study of freehand gestural interaction for immersive VR shopping applications. Hum. Cent. Comput. Inf. Sci. 9, 1 (2019), 43. DOI:
[252]
Huiyue Wu, Jianmin Wang, and Xiaolong (Luke) Zhang. 2016. User-centered gesture development in TV viewing environment. Multimedia Tools Appl. 75, 2 (2016), 733–760. DOI:
[253]
Huiyue Wu, Yu Wang, Jiayi Liu, Jiali Qiu, and Xiaolong (Luke) Zhang. 2019a. User-defined gesture interaction for in-vehicle information systems. Multimedia Tools Appl. 79 (2019), 263–288. DOI:
[254]
Huiyue Wu, Yu Wang, Jiali Qiu, Jiayi Liu, and Xiaolong (Luke) Zhang. 2019b. User-defined gesture interaction for immersive VR shopping applications. Behav. Inf. Technol. 38, 7 (2019), 726–741. DOI:
[255]
Huiyue Wu and Liuqingqing Yang. 2019. User-defined gestures for dual-screen mobile interaction. Int. J. Hum.–Comput. Interact. 0, 0 (2019), 1–15. DOI:
[256]
Huiyue Wu, Liuqingqing Yang, Shengqian Fu, and Xiaolong (Luke) Zhang. 2019c. Beyond remote control: Exploring natural gesture inputs for smart TV systems. J. Amb. Intell. Smart Environ. 11, 4 (2019), 335–354. DOI:
[257]
Huiyue Wu, Shaoke Zhang, Jiayi Liu, Jiali Qiu, and Xiaolong (Luke) Zhang. 2019d. The gesture disagreement problem in free-hand gesture interaction. Int. J. Hum.–Comput. Interact. 35, 12 (2019), 1102–1114. DOI:
[258]
Yiqi Xiao and Renke He. 2019a. The handlebar as an input field: Evaluating finger gestures designed for bicycle riders. In Advances in Human Aspects of Transportation, Neville Stanton (Ed.). Springer International Publishing, Cham, 648–659.
[259]
Yiqi Xiao and Renke He. 2019b. The intuitive grasp interface: Design and evaluation of micro-gestures on the steering wheel for driving scenario. Univ. Access Inf. Soc. 19 (2019), 433–450. DOI:
[260]
Li Xuan, Guan Daisong, Zhou Moli, Zhang Jingya, Liu Xingtong, and Li Siqi. 2019. Comparison on user experience of mid-air gesture interaction and traditional remotes control. In Proceedings of the 7th International Symposium of Chinese CHI (Chinese CHI’19). Association for Computing Machinery, New York, NY, 16–22. DOI:
[261]
Yukang Yan, Xin Yi, Chun Yu, and Yuanchun Shi. 2019. Gesture-based target acquisition in virtual and augmented reality. Virt. Real. Intell. Hardw. 1, 3 (2019), 276–289. DOI:
[262]
Yukang Yan, Chun Yu, Xiaojuan Ma, Xin Yi, Ke Sun, and Yuanchun Shi. 2018a. VirtualGrasp: Leveraging experience of interacting with physical objects to facilitate digital object retrieval. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’18). ACM, New York, NY, 1–13. DOI:
[263]
Yukang Yan, Chun Yu, Xin Yi, and Yuanchun Shi. 2018b. HeadGesture: Hands-free input approach leveraging head movements for HMD devices. Proc. ACM Interact. Mob. Wear. Ubiq. Technol. 2, 4 (2018), 198:1–198:23. DOI:
[264]
Zhican Yang, Chun Yu, Fengshi Zheng, and Yuanchun Shi. 2019. ProxiTalk: Activate speech input by bringing smartphone to the mouth. Proc. ACM Interact. Mob. Wear. Ubiq. Technol. 3, 3 (2019), 118:1–118:25. DOI:
[265]
Ionuţ-Alexandru Zaiţi, Ştefan-Gheorghe Pentiuc, and Radu-Daniel Vatavu. 2015. On free-hand TV control: Experimental results on user-elicited gestures with Leap Motion. Pers. Ubiquit. Comput. 19, 5 (2015), 821–838. DOI:
[266]
Xiaojie Zha and Marie-Luce Bourguet. 2016. Experimental study to elicit effective multimodal behaviour in pedagogical agents. In Proceedings of the International Workshop on Social Learning and Multimodal Interaction for Designing Artificial Agents (DAA’16). Association for Computing Machinery, New York, NY, Article 1, 6 pages. DOI:
[267]
Oren Zuckerman, Dina Walker, Andrey Grishko, Tal Moran, Chen Levy, Barak Lisak, Iddo Yehoshua Wald, and Hadas Erel. 2020. Companionship is not a function: The effect of a novel robotic object on healthy older adults’ feelings of “Being-Seen”. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’20). Association for Computing Machinery, New York, NY, 1–14. DOI:

References

[1]
Roland Aigner, Daniel Wigdor, Hrvoje Benko, Michael Haller, David Lindbauer, Alexandra Ion, Shengdong Zhao, and Jeffrey Tzu Kwan Valino Koh. 2012. Understanding Mid-Air Hand Gestures: A Study of Human Preferences in Usage of Gesture Types for HCI. Technical Report MSR-TR-2012-111. Microsoft. https://www.microsoft.com/en-us/research/publication/understanding-mid-air-hand-gestures-a-study-of-human-preferences-in-usage-of-gesture-types-for-hci/
[2]
Ahmad Sami Al-Shamayleh, Rodina Ahmad, Mohammad A. Abushariah, Khubaib Amjad Alam, and Nazean Jomhari. 2018. A systematic literature review on vision based gesture recognition techniques. Multimedia Tools Appl. 77, 21 (Nov. 2018), 28121–28184. DOI:
[3]
Jason Alexander, Teng Han, William Judd, Pourang Irani, and Sriram Subramanian. 2012. Putting your best foot forward: Investigating real-world mappings for foot-based gestures. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI’12). ACM, New York, NY, 1229–1238. DOI:
[4]
Abdullah X. Ali, Meredith Ringel Morris, and Jacob O. Wobbrock. 2018. Crowdsourcing similarity judgments for agreement analysis in end-user elicitation studies. In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology (UIST’18). ACM, New York, NY, 177–188. DOI:
[5]
Abdullah X. Ali, Meredith Ringel Morris, and Jacob O. Wobbrock. 2019. Crowdlicit: A system for conducting distributed end-user elicitation and identification studies. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’19). ACM, New York, NY, 1–12. DOI:
[6]
Leonardo Angelini, Francesco Carrino, Stefano Carrino, Maurizio Caon, Omar Abou Khaled, Jürgen Baumgartner, Andreas Sonderegger, Denis Lalanne, and Elena Mugellini. 2014. Gesturing on the steering wheel: A user-elicited taxonomy. In Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI’14). ACM, New York, NY, 1–8. DOI:
[7]
Leonardo Angelini, Denis Lalanne, Elise van den Hoven, Omar Abou Khaled, and Elena Mugellini. 2015. Move, hold and touch: A framework for tangible gesture interactive systems. Machines 3, 3 (2015), 173–207. DOI:
[8]
Ilhan Aslan, Tabea Schmidt, Jens Woehrle, Lukas Vogel, and Elisabeth André. 2018. Pen+Mid-Air gestures: Eliciting contextual gestures. In Proceedings of the 20th ACM International Conference on Multimodal Interaction (ICMI’18). ACM, New York, NY, 135–144. DOI:
[9]
Gilles Bailly, Thomas Pietrzak, Jonathan Deber, and Daniel J. Wigdor. 2013. MéTamorphe: Augmenting hotkey usage with actuated keys. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI’13). ACM, New York, NY, 563–572. DOI:
[10]
Niels Ole Bernsen. 1994. Foundations of multimodal representations: A taxonomy of representational modalities. Interact. Comput. 6, 4 (1994), 347–371. DOI:
[11]
Sabrina S. Billinghurst and Kim-Phuong L. Vu. 2015. Touch screen gestures for web browsing tasks. Comput. Hum. Behav. 53 (2015), 71–81. DOI:
[12]
Roger Boldu, Alexandru Dancu, Denys J. C. Matthies, Pablo Gallego Cascón, Shanaka Ransir, and Suranga Nanayakkara. 2018. Thumb-In-Motion: Evaluating thumb-to-ring microgestures for athletic activity. In Proceedings of the Symposium on Spatial User Interaction (SUI’18). ACM, New York, NY, 150–157. DOI:
[13]
Pearl Brereton, Barbara A. Kitchenham, David Budgen, Mark Turner, and Mohamed Khalil. 2007. Lessons from applying the systematic literature review process within the software engineering domain. J. Syst. Softw. 80, 4 (2007), 571–583. DOI: Software Performance.
[14]
Icaro Brito, Eduardo Freire, and Elyson Carvalho. 2019. Analysis of cross-cultural effect on gesture-based human-robot interaction. Int. J. Mech. Eng. Robot. Res. 8, 6 (2019), 852–859. DOI:
[15]
Thisum Buddhika, Haimo Zhang, Samantha W. T. Chan, Vipula Dissanayake, Suranga Nanayakkara, and Roger Zimmermann. 2019. fSense: Unlocking the dimension of force for gestural interactions using smartwatch PPG sensor. In Proceedings of the 10th International Conference on Augmented Human (AH’19). ACM, New York, NY, 1–5. DOI:
[16]
Francesco Cafaro, Leilah Lyons, and Alissa N. Antle. 2018. Framed guessability: Improving the discoverability of gestures and body movements for full-body interaction. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI’18). ACM, New York, NY, 1–12. DOI:
[17]
Gaëlle Calvary, Joëlle Coutaz, David Thevenin, Quentin Limbourg, Laurent Bouillon, and Jean Vanderdonckt. 2003. A unifying reference framework for multi-target user interfaces. Interact. Comput. 15, 3 (2003), 289–308. DOI:
[18]
Jessica R. Cauchard, Jane L. E, Kevin Y. Zhai, and James A. Landay. 2015. Drone & Me: An exploration into natural human-drone interaction. In Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp’15). ACM, New York, NY, 361–365. DOI:
[19]
Edwin Chan, Teddy Seyed, Wolfgang Stuerzlinger, Xing-Dong Yang, and Frank Maurer. 2016. User elicitation on single-hand microgestures. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI’16). ACM, New York, NY, 3403–3414. DOI:
[20]
Victor Chen, Xuhai Xu, Richard Li, Yuanchun Shi, Shwetak Patel, and Yuntao Wang. 2021. Understanding the design space of mouth microgestures. In Proceedings of the ACM Designing Interactive Systems Conference (DIS’21). Association for Computing Machinery, New York, NY, 1068–1081. DOI:
[21]
Zhen Chen, Xiaochi Ma, Zeya Peng, Ying Zhou, Mengge Yao, Zheng Ma, Ci Wang, Zaifeng Gao, and Mowei Shen. 2018. User-defined gestures for gestural interaction: Extending from hands to other body parts. Int. J. Hum.–Comput. Interact. 34, 3 (2018), 238–250. DOI:
[22]
Sabrina Connell, Pei-Yi Kuo, Liu Liu, and Anne Marie Piper. 2013. A wizard-of-oz elicitation study examining child-defined gestures with a whole-body interface. In Proceedings of the ACM International Conference on Interaction Design and Children (IDC’13). ACM, New York, NY, 277–280. DOI:
[23]
Jian Cui, Arjan Kuijper, Dieter W. Fellner, and Alexei Sourin. 2016. Understanding people’s mental models of mid-air interaction for virtual assembly and shape modeling. In Proceedings of the 29th International Conference on Computer Animation and Social Agents (CASA’16). ACM, New York, NY, 139–146. DOI:
[24]
Suranjith De Silva, Michael Barlow, and Adam Easton. 2013. Harnessing multi-user design and computation to devise archetypal whole-of-body gestures: A novel framework. In Proceedings of the 25th Australian Computer-Human Interaction Conference (OzCHI’13). ACM, New York, NY, 85–94. DOI:
[25]
Linda Di Geronimo, Marica Bertarini, Julia Badertscher, Maria Husmann, and Moira C. Norrie. 2017. Exploiting mid-air gestures to share data among devices. In Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI’17). ACM, New York, NY, 1–11. DOI:
[26]
Nem Khan Dim, Chaklam Silpasuwanchai, Sayan Sarcar, and Xiangshi Ren. 2016. Designing mid-air TV gestures for blind people using user- and choice-based elicitation approaches. In Proceedings of the ACM Conference on Designing Interactive Systems (DIS’16). ACM, New York, NY, 204–214. DOI:
[27]
Haiwei Dong, Nadia Figueroa, and Abdulmotaleb El Saddik. 2015. An elicitation study on gesture attitudes and preferences towards an interactive hand-gesture vocabulary. In Proceedings of the 23rd ACM International Conference on Multimedia (MM’15). Association for Computing Machinery, New York, NY, 999–1002. DOI:
[28]
Jane L. E, Ilene L. E, James A. Landay, and Jessica R. Cauchard. 2017. Drone & Wo: Cultural influences on human-drone interaction techniques. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’17). Association for Computing Machinery, New York, NY, 6794–6799. DOI:
[29]
Hessam Jahani Fariman, Hasan J. Alyamani, Manolya Kavakli, and Len Hamey. 2016. Designing a user-defined gesture vocabulary for an in-vehicle climate control system. In Proceedings of the 28th Australian Conference on Computer-Human Interaction (OzCHI’16). ACM, New York, NY, 391–395. DOI:
[30]
Yasmin Felberbaum and Joel Lanir. 2016. Step by step: Investigating foot gesture interaction. In Proceedings of the ACM International Working Conference on Advanced Visual Interfaces (AVI’16). ACM, New York, NY, 306–307. DOI:
[31]
Yasmin Felberbaum and Joel Lanir. 2018. Better understanding of foot gestures: An elicitation study. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI’18). ACM, New York, NY, 1–12. DOI:
[32]
Leah Findlater, Ben Lee, and Jacob Wobbrock. 2012. Beyond QWERTY: Augmenting touch screen keyboards with multi-touch gestures for non-alphanumeric input. In Proceedings of ACM Conference on Human Factors in Computing Systems (CHI’12). ACM, New York, NY, 2679–2682. DOI:
[33]
Simon Fothergill, Helena Mentis, Pushmeet Kohli, and Sebastian Nowozin. 2012. Instructing people for training gestural interactive systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’12). Association for Computing Machinery, New York, NY, 1737–1746. DOI:
[34]
Euan Freeman, Gareth Griffiths, and Stephen A. Brewster. 2017. Rhythmic micro-gestures: Discreet interaction on-the-go. In Proceedings of the ACM Conference on Multimodal Interaction (ICMI’17). ACM, New York, NY, 115–119. DOI:
[35]
Mathias Frisch, Jens Heydekorn, and Raimund Dachselt. 2009. Investigating multi-touch and pen gestures for diagram editing on interactive surfaces. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces (ITS’09). ACM, New York, NY, 149–156. DOI:
[36]
Bogdan-Florin Gheran, Radu-Daniel Vatavu, and Jean Vanderdonckt. 2023. New insights into user-defined smart ring gestures with implications for gesture elicitation studies. In Proceedings of the ACM International Conference on Human Factors in Computing Systems, Extended Abstracts (CHI EA’23), Albrecht Schmidt, Kaisa Väänänen, Tesh Goyal, Per Ola Kristensson, and Anicia Peters (Eds.). ACM, 216:1–216:8. DOI:
[37]
Bogdan-Florin Gheran, Jean Vanderdonckt, and Radu-Daniel Vatavu. 2018a. Gestures for smart rings: Empirical results, insights, and design implications. In Proceedings of the Designing Interactive Systems Conference (DIS’18). ACM, New York, NY, 623–635. DOI:
[38]
Bogdan-Florin Gheran, Radu-Daniel Vatavu, and Jean Vanderdonckt. 2018b. Ring x2: Designing gestures for smart rings using temporal calculus. In Proceedings of the ACM Conference Companion Publication on Designing Interactive Systems (DIS’18 Companion). ACM, New York, NY, 117–122. DOI:
[39]
Bogdan-Florin Gheran, Santiago Villarreal-Narvaez, Radu-Daniel Vatavu, and Jean Vanderdonckt. 2022. RepliGES and GEStory: Visual tools for systematizing and consolidating knowledge on user-defined gestures. In Proceedings of the 2022 International Conference on Advanced Visual Interfaces (AVI’22). ACM, New York, NY, 9. DOI:
[40]
Celeste Groenewald, Craig Anslow, Junayed Islam, Chris Rooney, Peter Passmore, and William Wong. 2016. Understanding 3D mid-air hand gestures with interactive surfaces and displays: A systematic literature review. In Proceedings of the 30th International BCS Human Computer Interaction Conference: Fusion! (HCI’16). BCS Learning & Development Ltd., Swindon, GBR, Article 43, 13 pages. DOI:
[41]
Robin Guérit, Alessandro Cierro, Jean Vanderdonckt, and Jorge Luis Pérez-Medina. 2019. Gesture elicitation and usability testing for an armband interacting with netflix and spotify. In Proceedings of the International Conference on Information Technology & Systems,Advances in Intelligent Systems and Computing (ICITS’19), Álvaro Rocha, Carlos Ferrás, and Manolo Paredes (Eds.). Springer International Publishing, Berlin, 625–637. DOI:
[42]
Hayati Havlucu, Mehmet Yarkın Ergin, İdil Bostan, Oğuz Turan Buruk, Tilbe Göksun, and Oğuzhan Özcan. 2017. It made more sense: Comparison of user-elicited on-skin touch and freehand gesture sets. In Distributed, Ambient and Pervasive Interactions, Norbert Streitz and Panos Markopoulos (Eds.). Springer International Publishing, Cham, 159–171.
[43]
Lynn Hoff, Eva Hornecker, and Sven Bertel. 2016. Modifying gesture elicitation: Do kinaesthetic priming and increased production reduce legacy bias? In Proceedings of the TEI’16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI’16). Association for Computing Machinery, New York, NY, 86–91. DOI:
[44]
Masoumehsadat Hosseini, Tjado Ihmels, Ziqian Chen, Marion Koelle, Heiko Müller, and Susanne Boll. 2023. Towards a consensus gesture set: A survey of mid-air gestures in HCI for maximized agreement across domains. In Proceedings of the ACM International Conference on Human Factors in Computing Systems (CHI’23). Association for Computing Machinery, New York, NY, Article 311, 24 pages. DOI:
[45]
Hessam Jahani-Fariman. 2017. Developing a user-defined interface for in-vehicle mid-air gestural interactions. In Proceedings of the 22nd International Conference on Intelligent User Interfaces Companion (IUI’17 Companion). Association for Computing Machinery, New York, NY, 165–168. DOI:
[46]
Shaun K. Kane, Jacob O. Wobbrock, and Richard E. Ladner. 2011. Usable gestures for blind people: Understanding preference and performance. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’11). Association for Computing Machinery, New York, NY, 413–422. DOI:
[47]
Frederic Kerber, Markus Löchtefeld, Antonio Krüger, Jess McIntosh, Charlie McNeill, and Mike Fraser. 2016. Understanding same-side interactions with wrist-worn devices. In Proceedings of the 9th Nordic Conference on Human-Computer Interaction (NordiCHI’16). ACM, New York, NY, 28:1–28:10. DOI:
[48]
Barbara Kitchenham, Rialette Pretorius, David Budgen, O. Pearl Brereton, Mark Turner, Mahmood Niazi, and Stephen Linkman. 2010. Systematic literature reviews in software engineering—A tertiary study. Inf. Softw. Technol. 52, 8 (2010), 792–805. DOI:
[49]
Barry Kollee, Sven Kratz, and Anthony Dunnigan. 2014. Exploring gestural interaction in smart spaces using head mounted devices with ego-centric sensing. In Proceedings of the 2nd ACM Symposium on Spatial User Interaction (SUI’14). Association for Computing Machinery, New York, NY, 40–49. DOI:
[50]
Panayiotis Koutsabasis and Panagiotis Vogiatzidakis. 2019. Empirical research in mid-air interaction: A systematic review. Int. J. Hum.–Comput. Interact. 35, 18 (2019), 1747–1768. DOI:
[51]
Christine Kühnel, Tilo Westermann, Fabian Hemmert, Sven Kratz, Alexander Müller, and Sebastian Möller. 2011. I’m home: Defining and evaluating a gesture set for smart-home control. Int. J. Hum.-Comput. Stud. 69, 11 (2011), 693–704. DOI:
[52]
DoYoung Lee, Youryang Lee, Yonghwan Shin, and Ian Oakley. 2018. Designing socially acceptable hand-to-face input. In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology (UIST’18). ACM, New York, NY, 711–723. DOI:
[53]
DoYoung Lee, Ian Roland Oakley, and YuRyang Lee. 2016. Bodily input for wearables: An elicitation study. Hum. Comput. Interact. 1, 1 (2016), 283–285.
[54]
Sang-Su Lee, Sohyun Kim, Bopil Jin, Eunji Choi, Boa Kim, Xu Jia, Daeeop Kim, and Kun-pyo Lee. 2010. How users manipulate deformable displays as input devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’10). ACM, New York, NY, 1647–1656. DOI:
[55]
Hoo Yong Leng, Noris Mohd Norowi, and Azrul Hazri Jantan. 2017. A user-defined gesture set for music interaction in immersive virtual environment. In Proceedings of the 3rd International Conference on Human-Computer Interaction and User Experience in Indonesia (CHIuXiD’17). Association for Computing Machinery, New York, NY, 44–51. DOI:
[56]
David R. Lenorovitz, Mark D. Phillips, R. S. Ardrey, and Gregory V. Kloster. 1984. A taxonomic approach to characterizing human-computer interaction. In Human-Computer Interaction, G. Salvendy (Ed.). Elsevier Science Publishers, Amsterdam, Netherlands, 111–116.
[57]
Alessandro Liberati, Douglas G. Altman, Jennifer Tetzlaff, Cynthia Mulrow, Peter C. Gõtzsche, John P. A. Ioannidis, Mike Clarke, P. J. Devereaux, Jos Kleijnen, and David Moher. July 2009. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration. PLoS Med. 6, 7 (July 2009), 1–22. DOI:
[58]
Mingyu Liu, Mathieu Nancel, and Daniel Vogel. 2015. Gunslinger: Subtle arms-down mid-air interaction. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (UIST’15). ACM, New York, NY, 63–71. DOI:
[59]
Yihua Lou, Wenjun Wu, Radu-Daniel Vatavu, and Wei-Tek Tsai. 2017. Personalized gesture interactions for cyber-physical smart-home environments. Sci. Chin. Inf. Sci. 60, 7 (2017), 072104. DOI:
[60]
Naveen Madapana, Glebys Gonzalez, Richard Rodgers, Lingsong Zhang, and Juan P. Wachs. 2018. Gestures for picture archiving and communication systems (PACS) operation in the operating room: Is there any standard? PLoS One 13, 6 (2018), e0198092. DOI:
[61]
Nathan Magrofuoco, Jorge-Luis Pérez-Medina, Paolo Roselli, Jean Vanderdonckt, and Santiago Villarreal. 2019. Eliciting contact-based and contactless gestures with radar-based sensors. IEEE Access 7 (2019), 176982–176997. DOI:
[62]
Nathan Magrofuoco and Jean Vanderdonckt. 2019. Gelicit: A cloud platform for distributed gesture elicitation studies. Proc. ACM Hum.-Comput. Interact. 3 (2019), 1–41. Issue EICS. DOI:
[63]
Diako Mardanbegi, Dan Witzner Hansen, and Thomas Pederson. 2012. Eye-based head gestures. In Proceedings of the ACM International Symposium on Eye Tracking Research and Applications (ETRA’12). ACM, New York, NY, 139–146. DOI:
[64]
Francisco Javier Martínez-Ruiz and Santiago Villarreal-Narvaez. 2021. Eliciting user-defined zenithal gestures for privacy preferences. In Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP’21), Alexis Paljic, Tabitha C. Peck, José Braz, and Kadi Bouatouch (Eds.). SCITEPRESS, 205–213. DOI:
[65]
Kohei Matsumura. 2015. Studying user-defined gestures toward off the screen interactions. In Proceedings of the ACM International Conference on Interactive Tabletops & Surfaces (ITS’15). ACM, New York, NY, 295–300. DOI:
[66]
Dan Mauney, Jonathan Howarth, Andrew Wirtanen, and Miranda Capra. 2010. Cultural similarities and differences in user-defined gestures for touchscreen user interfaces. In CHI’10 Extended Abstracts on Human Factors in Computing Systems (CHI EA’10). Association for Computing Machinery, New York, NY, 4015–4020. DOI:
[67]
Keenan R. May, Thomas M. Gable, and Bruce N. Walker. 2017. Designing an in-vehicle air gesture set using elicitation methods. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI’17). Association for Computing Machinery, New York, NY, 74–83. DOI:
[68]
Erin McAweeney, Haihua Zhang, and Michael Nebeling. 2018. User-driven design principles for gesture representations. In Proceedings of the ACM International Conference on Human Factors in Computing Systems (CHI’18). ACM, New York, NY, Article 547, 13 pages. DOI:
[69]
G. Modanwal and K. Sarawadekar. 2017. A new dactylology and interactive system development for blind-computer interaction. IEEE Trans. Hum.-Mach. Syst. PP, 99 (2017), 1–6. DOI:
[70]
Gourav Modanwal and Kishor Sarawadekar. 2018. A gesture elicitation study with visually impaired users. In HCI International—Posters’ Extended Abstracts (Communications in Computer and Information Science). Springer, Cham, Cham, 54–61. DOI:
[71]
Meredith Ringel Morris. 2012. Web on the wall: Insights from a multimodal interaction elicitation study. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces (ITS’12). ACM, New York, NY, 95–104. DOI:
[72]
Meredith Ringel Morris, Andreea Danielescu, Steven Drucker, Danyel Fisher, Bongshin Lee, m. c. schraefel, and Jacob O. Wobbrock. 2014. Reducing legacy bias in gesture elicitation studies. Interactions 21, 3 (May 2014), 40–45. DOI:
[73]
Meredith Ringel Morris, Jacob O. Wobbrock, and Andrew D. Wilson. 2010. Understanding users’ preferences for surface gestures. In Proceedings of Graphics Interface (GI’10). Canadian Information Processing Society, CAN, 261–268. http://dl.acm.org/ft_gateway.cfm?id=1839260
[74]
Miguel A. Nacenta, Yemliha Kamber, Yizhou Qiang, and Per Ola Kristensson. 2013. Memorability of pre-designed and user-defined gesture sets. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’13). Association for Computing Machinery, New York, NY, 1099–1108. DOI:
[75]
Vijayakumar Nanjappan, Rongkai Shi, Hai-Ning Liang, Haoru Xiao, Kim King-Tong Lau, and Khalad Hasan. 2019. Design of interactions for handheld augmented reality devices using wearable smart textiles: Findings from a user elicitation study. Appl. Sci. 9, 15 (2019), 3177. DOI:
[76]
Andrés Adolfo Navarro-Newball, Isidro Moreno, Edmond Prakash, Ali Arya, Victoria E. Contreras, Victor A. Quiceno, Santiago Lozano, Juan David Mejía, and Diego Fernando Loaiza. 2016. Gesture based human motion and game principles to aid understanding of science and cultural practices. Multimedia Tools Appl. 75, 19 (2016), 11699–11722. DOI:
[77]
Samuel Navas Medrano, Max Pfeiffer, and Christian Kray. 2017. Enabling remote deictic communication with mobile devices: An elicitation study. In Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI’17). Association for Computing Machinery, New York, NY, Article 19, 13 pages. DOI:
[78]
Michael Nebeling, David Ott, and Moira C. Norrie. 2015. Kinect analysis: A system for recording, analysing and sharing multimodal interaction elicitation studies. In Proceedings of the 7th ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS’15). ACM, New York, NY, 142–151. DOI:
[79]
Michael Nielsen, Moritz Störring, Thomas B. Moeslund, and Erik Granum. 2004. A procedure for developing intuitive and ergonomic gesture interfaces for HCI. In Gesture-Based Communication in Human-Computer Interaction, Antonio Camurri and Gualtiero Volpe (Eds.). Springer, Berlin, 409–420.
[80]
Mohammad Obaid, Markus Häring, Felix Kistler, René Bühling, and Elisabeth André. 2012. User-defined body gestures for navigational control of a humanoid robot. In Social Robotics, Shuzhi Sam Ge, Oussama Khatib, John-John Cabibihan, Reid Simmons, and Mary-Anne Williams (Eds.). Springer, Berlin, 367–377.
[81]
Francisco R. Ortega, Alain Galvan, Katherine Tarre, Armando Barreto, Naphtali Rishe, Jonathan Bernal, Ruben Balcazar, and Jason-Lee Thomas. 2017. Gesture elicitation for 3D travel via multi-touch and mid-Air systems for procedurally generated pseudo-universe. In Proceedings of the IEEE Symposium on 3D User Interfaces (3DUI’17). IEEE, 144–153. DOI:
[82]
Mehdi Ousmer, Jean Vanderdonckt, and Sabin Buraga. 2019. An ontology for reasoning on body-based gestures. In Proceedings of the ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS’19). ACM, New York, NY, 1–6. DOI:
[83]
Jorge Luis Pérez-Medina, Santiago Villarreal, and Jean Vanderdonckt. 2020. A gesture elicitation study of nose-based gestures. Sensors 20, 24 (2020), 7118. DOI:
[84]
Ekaterina Peshkova, Martin Hitz, and David Ahlström. 2016. Exploring user-defined gestures and voice commands to control an unmanned aerial vehicle. In Proceedings of the 8th International Conference on Intelligent Technologies for Interactive Entertainment (INTETAIN’16), Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, Vol. 178, Ronald Poppe, John-Jules Meyer, Remco Veltkamp, and Mehdi Dastani (Eds.). Springer International Publishing, Cham, 47–62.
[85]
Tran Pham, Jo Vermeulen, Anthony Tang, and Lindsay MacDonald Vermeulen. 2018. Scale impacts elicited gestures for manipulating holograms: Implications for AR gesture design. In Proceedings of the Designing Interactive Systems Conference (DIS’18). ACM, New York, NY, 227–240. DOI:
[86]
Thammathip Piumsomboon, Adrian Clark, Mark Billinghurst, and Andy Cockburn. 2013a. User-defined gestures for augmented reality. In Human-Computer Interaction–INTERACT 2013, Lecture Notes in Computer Science. Springer, Berlin, 282–299. DOI:
[87]
Thammathip Piumsomboon, Adrian Clark, Mark Billinghurst, and Andy Cockburn. 2013b. User-defined gestures for augmented reality. In CHI’13 Extended Abstracts on Human Factors in Computing Systems (CHI EA’13). ACM, New York, NY, 955–960. DOI:
[88]
Francis Quek, David McNeill, Robert Bryll, Susan Duncan, Xin-Feng Ma, Cemil Kirbas, Karl E. McCullough, and Rashid Ansari. 2002. Multimodal human discourse: Gesture and speech. ACM Trans. Comput.-Hum. Interact. 9, 3 (2002), 171–193. DOI:
[89]
Siddharth S. Rautaray and Anupam Agrawal. 2015. Vision based hand gesture recognition for human computer interaction: A survey. Artif. Intell. Rev. 43, 1 (Jan. 2015), 1–54. DOI:
[90]
Julie Rico and Stephen Brewster. 2010. Usable gestures for mobile interfaces: Evaluating social acceptability. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’10). ACM, New York, NY, 887–896. DOI:
[91]
Isabel Benavente Rodriguez and Nicolai Marquardt. 2017. Gesture elicitation study on how to opt-in & opt-out from interactions with public displays. In Proceedings of the ACM International Conference on Interactive Surfaces and Spaces (ISS’17). ACM, New York, NY, 32–41. DOI:
[92]
Jaime Ruiz, Yang Li, and Edward Lank. 2011. User-defined motion gestures for mobile interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’11). ACM, New York, NY, 197–206. DOI:
[93]
Jaime Ruiz and Daniel Vogel. 2015. Soft-constraints to reduce legacy and performance bias to elicit whole-body gestures with low arm fatigue. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI’15). ACM, New York, NY, 3347–3350. DOI:
[94]
Karen Rust, Meethu Malu, Lisa Anthony, and Leah Findlater. 2014. Understanding child-defined gestures and children’s mental models for touchscreen tabletop interaction. In Proceedings of the ACM Conference on Interaction Design and Children (IDC’14). ACM, New York, NY, 201–204. DOI:
[95]
Ovidiu-Andrei Schipor and Radu-Daniel Vatavu. 2018. Invisible, inaudible, and impalpable: Users’ preferences and memory performance for digital content in Thin Air. IEEE Perv. Comput. 17, 4 (2018), 76–85. DOI:
[96]
Marcos Serrano, Barrett M. Ens, and Pourang P. Irani. 2014. Exploring the use of hand-to-face input for interacting with head-worn displays. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI’14). ACM, New York, NY, 3181–3190. DOI:
[97]
Teddy Seyed, Chris Burns, Mario Costa Sousa, Frank Maurer, and Anthony Tang. 2012. Eliciting usable gestures for multi-display environments. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces (ITS’12). ACM, New York, NY, 41–50. DOI:
[98]
Adwait Sharma, Joan Sol Roo, and Jürgen Steimle. 2019. Grasping microgestures: Eliciting single-hand microgestures for handheld objects. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’19). ACM, New York, NY, 402:1–402:13. DOI:
[99]
Alex Shaw and Lisa Anthony. 2016. Analyzing the articulation features of children’s touchscreen gestures. In Proceedings of the 18th ACM International Conference on Multimodal Interaction (ICMI 2016). ACM, New York, NY, 333–340. DOI:
[100]
Alex Shaw and Lisa Anthony. 2016. Toward a systematic understanding of children’s touchscreen gestures. In Proceedings of the CHI Conference Extended Abstracts on Human Factors in Computing Systems. ACM, New York, NY, 1752–1759. DOI:
[101]
Alexandru-Ionut Siean, Cristian Pamparau, Arthur Sluÿters, Radu-Daniel Vatavu, and Jean Vanderdonckt. 2023. Flexible gesture input with radars: Systematic literature review and taxonomy of radar sensing integration in ambient intelligence environments. J. Amb. Intell. Humaniz. Comput. 14, 6 (2023), 7967–7981. DOI:
[102]
Chaklam Silpasuwanchai and Xiangshi Ren. 2014. Jump and shoot!: Prioritizing primary and alternative body gestures for intense gameplay. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’14). ACM, New York, NY, 951–954. DOI:
[103]
Chaklam Silpasuwanchai and Xiangshi Ren. 2015. Designing concurrent full-body gestures for intense gameplay. Int. J. Hum.-Comput. Stud. 80 (2015), 1–13. DOI:
[104]
Arthur Sluÿters, Sébastien Lambot, and Jean Vanderdonckt. 2022. Hand gesture recognition for an off-the-shelf radar by electromagnetic modeling and inversion. In Proceedings of the 27th ACM International Conference on Intelligent User Interfaces (IUI’22). Association for Computing Machinery, New York, NY, 506–522. DOI:
[105]
Arthur Sluÿters, Sébastien Lambot, Jean Vanderdonckt, and Radu-Daniel Vatavu. 2023. RadarSense: Accurate recognition of mid-air hand gestures with radar sensing and few training examples. ACM Trans. Interact. Intell. Syst. 13, 3, Article 16 (Sep 2023), 45 pages. DOI:
[106]
Arthur Sluÿters, Quentin Sellier, Jean Vanderdonckt, Vik Parthiban, and Pattie Maes. 2023. Consistent, continuous, and customizable mid-air gesture interaction for browsing multimedia objects on large displays. Int. J. Hum.-Comput. Interact. 39, 12 (2023), 2492–2523. DOI:
[107]
Nikita Soni, Schuyler Gleaves, Hannah Neff, Sarah Morrison-Smith, Shaghayegh Esmaeili, Ian Mayne, Sayli Bapat, Carrie Schuman, Kathryn A. Stofer, and Lisa Anthony. 2019. Do user-defined gestures for flatscreens generalize to interactive spherical displays for adults and children? In Proceedings of the 8th ACM International Symposium on Pervasive Displays (PerDis’19). ACM, New York, NY, 24:1–24:7. DOI:
[108]
Nikita Soni, Schuyler Gleaves, Hannah Neff, Sarah Morrison-Smith, Shaghayegh Esmaeili, Ian Mayne, Sayli Bapat, Carrie Schuman, Kathryn A. Stofer, and Lisa Anthony. 2020. Adults’ and children’s mental models for gestural interactions with interactive spherical displays. In Proceedings of the Conference on Human Factors in Computing Systems (CHI’20). ACM, New York, NY, 1–12.
[109]
Maximilian Speicher and Michael Nebeling. 2018. GestureWiz: A human-powered gesture design environment for user interface prototypes. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’18). ACM, New York, NY, Article 107, 11 pages. DOI:
[110]
Kashmiri Stec and Lars Bo Larsen. 2018. Gestures for controlling a moveable TV. In Proceedings of the ACM International Conference on Interactive Experiences for TV and Online Video (TVX’18). ACM, New York, NY, 5–14. DOI:
[111]
Yanke Tan, Sang Ho Yoon, and Karthik Ramani. 2017. BikeGesture: User elicitation and performance of micro hand gesture as input for cycling. In Proceedings of the CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA’17). ACM, New York, NY, 2147–2154. DOI:
[112]
Florent Taralle, Alexis Paljic, Sotiris Manitsaris, Jordane Grenier, and Christophe Guettier. 2015. A consensual and non-ambiguous set of gestures to interact with UAV in infantrymen. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA’15). ACM, New York, NY, 797–803. DOI:
[113]
Tijana Vuletic, Alex Duffy, Laura Hay, Chris McTeague, Gerard Campbell, and Madeleine Grealy. 2019. Systematic literature review of hand gestures used in human computer interaction interfaces. Int. J. Hum.-Comput. Stud. 129 (2019), 74–94. DOI:
[114]
Theophanis Tsandilas. 2018. Fallacies of agreement: A critical review of consensus assessment methods for gesture elicitation. ACM Trans. Comput.-Hum. Interact. 25, 3, Article 18 (Jun. 2018), 49 pages. DOI:
[115]
Theophanis Tsandilas and Pierre Dragicevic. 2022. Gesture elicitation as a computational optimization problem. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’22). Association for Computing Machinery, New York, NY, Article 498, 13 pages. DOI:
[116]
Uma Sekaran and Roger Bougie. 2010. Research Methods for Business: A Skill-Building Approach. John Wiley & Sons, Haddington.
[117]
Antonio Emmanuele Uva, Michele Fiorentino, Vito Modesto Manghisi, Antonio Boccaccio, Saverio Debernardis, Michele Gattullo, and Giuseppe Monno. 2019. A user-centered framework for designing midair gesture interfaces. IEEE Trans. Hum.-Mach. Syst. 49, 5 (2019), 421–429. DOI:
[118]
Jean Vanderdonckt, Nathan Magrofuoco, Suzanne Kieffer, Jorge Pérez, Ysabelle Rase, Paolo Roselli, and Santiago Villarreal. 2019. Head and shoulders gestures: Exploring user-defined gestures with upper body. In Design, User Experience, and Usability. User Experience in Advanced Technological Environments, Lecture Notes in Computer Science, Aaron Marcus and Wentao Wang (Eds.). Springer International Publishing, Cyprus, 192–213.
[119]
Radu-Daniel Vatavu. 2012. User-defined gestures for free-hand TV control. In Proceedings of the 10th European Conference on Interactive TV and Video (EuroITV’12). Association for Computing Machinery, New York, NY, 45–48. DOI:
[120]
Radu-Daniel Vatavu. 2019. The dissimilarity-consensus approach to agreement analysis in gesture elicitation studies. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’19). ACM, New York, NY, 1–13. DOI:
[121]
Radu-Daniel Vatavu and Jacob O. Wobbrock. 2015. Formalizing agreement analysis for elicitation studies: New measures, significance test, and toolkit. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI’15). ACM, New York, NY, 1325–1334. DOI:
[122]
Radu-Daniel Vatavu and Jacob O. Wobbrock. 2016. Between-subjects elicitation studies: Formalization and tool support. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’16). Association for Computing Machinery, New York, NY, 3390–3402. DOI:
[123]
Radu-Daniel Vatavu and Jacob O. Wobbrock. 2022. Clarifying agreement calculations and analysis for end-user elicitation studies. ACM Trans. Comput.-Hum. Interact. 29, 1, Article 5 (Jan. 2022), 70 pages. DOI:
[124]
Radu-Daniel Vatavu and Ionut-Alexandru Zaiti. 2014. Leap gestures for TV: Insights from an elicitation study. In Proceedings of the ACM International Conference on Interactive Experiences for TV and Online Video (TVX’14). Association for Computing Machinery, New York, NY, 131–138. DOI:
[125]
Santiago Villarreal-Narvaez, Alexandru-Ionuţ Şiean, Arthur Sluÿters, Radu-Daniel Vatavu, and Jean Vanderdonckt. 2022. Informing future gesture elicitation studies for interactive applications that use radar sensing. In Proceedings of the International Conference on Advanced Visual Interfaces (AVI’22). Association for Computing Machinery, New York, NY, Article 50, 3 pages. DOI:
[126]
Santiago Villarreal-Narvaez, Jorge Luis Perez-Medina, and Jean Vanderdonckt. 2023. Exploring user-defined gestures for lingual and palatal interaction. J. Multimodal User Interfaces 17, 3 (2023), 19. DOI:
[127]
Santiago Villarreal-Narvaez, Arthur Sluÿters, Jean Vanderdonckt, and Efrem Mbaki Luzayisu. 2022. Theoretically-defined vs. user-defined squeeze gestures. Proc. ACM Hum.-Comput. Interact. 6, ISS, Article 559 (Nov. 2022), 30 pages. DOI:
[128]
Santiago Villarreal-Narvaez, Jean Vanderdonckt, Radu-Daniel Vatavu, and Jacob O. Wobbrock. 2020. A systematic review of gesture elicitation studies: What can we learn from 216 studies?. In Proceedings of the ACM Designing Interactive Systems Conference (DIS’20). Association for Computing Machinery, New York, NY, 855–872. DOI:
[129]
Dong-Bach Vo, Eric Lecolinet, and Yves Guiard. 2014. Belly gestures: Body centric gestures on the Abdomen. In Proceedings of the 8th Nordic Conference on Human-Computer Interaction: Fun, Fast, Foundational (NordiCHI’14). ACM, New York, NY, 687–696. DOI:
[130]
Panagiotis Vogiatzidakis and Panayiotis Koutsabasis. 2018. Gesture elicitation studies for mid-air interaction: A review. Multimodal Technol. Interact. 2, 4 (2018), Article 65. DOI:
[131]
Panagiotis Vogiatzidakis and Panayiotis Koutsabasis. 2019. Frame-based elicitation of mid-air gestures for a smart home device ecosystem. Informatics 6, 2 (2019), 23. DOI:
[132]
Oleg Špakov and Päivi Majaranta. 2012. Enhanced gaze interaction using simple head gestures. In Proceedings of the ACM Conference on Ubiquitous Computing (UbiComp’12). ACM, New York, NY, 705–710. DOI:
[133]
Tijana Vuletic, Alex Duffy, Laura Hay, Chris McTeague, Gerard Campbell, and Madeleine Grealy. 2019. Systematic literature review of hand gestures used in human computer interaction interfaces. Int. J. Hum.-Comput. Stud. 129 (2019), 74–94. DOI:
[134]
Daniel Wigdor and Dennis Wixon. 2011. Brave NUI World: Designing Natural User Interfaces for Touch and Gesture (1st ed.). Morgan Kaufmann Publishers Inc., San Francisco, CA.
[135]
Markus L. Wittorf and Mikkel R. Jakobsen. 2016. Eliciting mid-air gestures for wall-display interaction. In Proceedings of the 9th Nordic Conference on Human-Computer Interaction (NordiCHI’16). Association for Computing Machinery, New York, NY, Article 3, 4 pages. DOI:
[136]
Jacob O. Wobbrock, Htet Htet Aung, Brandon Rothrock, and Brad A. Myers. 2005. Maximizing the guessability of symbolic input. In CHI’05 Extended Abstracts on Human Factors in Computing Systems (CHI EA’05). Association for Computing Machinery, New York, NY, 1869–1872. DOI:
[137]
Jacob O. Wobbrock, Meredith Ringel Morris, and Andrew D. Wilson. 2009. User-defined gestures for surface computing. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’09). Association for Computing Machinery, New York, NY, 1083–1092. DOI:
[138]
Jacob O. Wobbrock, Brad A. Myers, and John A. Kembel. 2003. EdgeWrite: A stylus-based text entry method designed for high accuracy and stability of motion. In Proceedings of the 16th Annual ACM Symposium on User Interface Software and Technology (UIST’03). ACM, New York, NY, 61–70. DOI:
[139]
Huiyue Wu, Liuqingqing Yang, Shengqian Fu, and Xiaolong (Luke) Zhang. 2019. Beyond remote control: Exploring natural gesture inputs for smart TV systems. J. Amb. Intell. Smart Environ. 11, 4 (2019), 335–354. DOI:
[140]
Ho Chung Wu, Robert Wing Pong Luk, Kam Fai Wong, and Kui Lam Kwok. 2008. Interpreting TF-IDF term weights as making relevance decisions. ACM Trans. Inf. Syst. 26, 3, Article 13 (Jun. 2008), 37 pages. DOI:
[141]
Haijun Xia, Michael Glueck, Michelle Annett, Michael Wang, and Daniel Wigdor. 2022. Iteratively designing gesture vocabularies: A survey and analysis of best practices in the HCI literature. ACM Trans. Comput.-Hum. Interact. 29, 4, Article 37 (May 2022), 54 pages. DOI:
[142]
Yukang Yan, Xin Yi, Chun Yu, and Yuanchun Shi. 2019. Gesture-based target acquisition in virtual and augmented reality. Virt. Real. Intell. Hardw. 1, 3 (2019), 276–289. DOI:
[143]
Ying Yin and Randall Davis. 2013. Gesture spotting and recognition using salience detection and concatenated hidden Markov models. In Proceedings of the 15th ACM on International Conference on Multimodal Interaction (ICMI’13). ACM, New York, NY, 489–494. DOI:
[144]
Ionuţ-Alexandru Zaiţi, Ştefan-Gheorghe Pentiuc, and Radu-Daniel Vatavu. 2015. On free-hand TV control: Experimental results on user-elicited gestures with Leap Motion. Pers. Ubiquit. Comput. 19, 5 (2015), 821–838. DOI:
[145]
Xiaojie Zha and Marie-Luce Bourguet. 2016. Experimental study to elicit effective multimodal behaviour in pedagogical agents. In Proceedings of the International Workshop on Social Learning and Multimodal Interaction for Designing Artificial Agents (DAA’16). Association for Computing Machinery, New York, NY, Article 1, 6 pages. DOI:
[146]
Oren Zuckerman, Dina Walker, Andrey Grishko, Tal Moran, Chen Levy, Barak Lisak, Iddo Yehoshua Wald, and Hadas Erel. 2020. Companionship is not a function: The effect of a novel robotic object on healthy older adults’ feelings of “Being-Seen.” In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’20). Association for Computing Machinery, New York, NY, 1–14. DOI:

Published In

ACM Computing Surveys, Volume 56, Issue 5
May 2024, 1019 pages
EISSN: 1557-7341
DOI: 10.1145/3613598

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 12 January 2024
Online AM: 07 December 2023
Accepted: 01 December 2023
Revised: 25 August 2023
Received: 04 November 2022
Published in CSUR Volume 56, Issue 5


Author Tags

  1. Gesture elicitation studies
  2. systematic literature review
  3. gesture interaction
  4. gesture input
  5. gesture vocabulary
  6. gesture set
  7. referents
  8. gesture identification studies

Qualifiers

  • Survey

Funding Sources

  • Wallonie-Bruxelles-International (WBI), Belgium
  • UEFISCDI, Romania


Cited By

  • Beyond Radar Waves: The First Workshop on Radar-Based Human-Computer Interaction. Companion Proceedings of the 16th ACM SIGCHI Symposium on Engineering Interactive Computing Systems (2024), 97–102. DOI: 10.1145/3660515.3662836
  • Engineering Touchscreen Input for 3-Way Displays: Taxonomy, Datasets, and Classification. Companion Proceedings of the 16th ACM SIGCHI Symposium on Engineering Interactive Computing Systems (2024), 57–65. DOI: 10.1145/3660515.3661331
  • Expanding V2X with V2DUIs: Distributed User Interfaces for Media Consumption in the Vehicle-to-Everything Era. Proceedings of the 2024 ACM International Conference on Interactive Media Experiences (2024), 394–401. DOI: 10.1145/3639701.3663643
  • Take a Seat, Make a Gesture: Charting User Preferences for On-Chair and From-Chair Gesture Input. Proceedings of the CHI Conference on Human Factors in Computing Systems (2024), 1–17. DOI: 10.1145/3613904.3642028
  • Exploring Methods to Optimize Gesture Elicitation Studies: A Systematic Literature Review. IEEE Access 12 (2024), 64958–64979. DOI: 10.1109/ACCESS.2024.3387269
  • Evaluating gesture user interfaces. International Journal of Human-Computer Studies 185 (2024). DOI: 10.1016/j.ijhcs.2024.103242