
Recognizing Unintentional Touch on Interactive Tabletop

Published: 18 March 2020

Abstract

A multi-touch interactive tabletop is designed to embody the benefits of a digital computer within the familiar surface of a physical tabletop. However, because current multi-touch tabletops detect and react to all forms of touch, including unintentional touches, users cannot act naturally on them. In our research, we leverage gaze direction, head orientation, and screen contact data to identify and filter out unintentional touches, so that users can take full advantage of the physical properties of an interactive tabletop, e.g., resting their hands or leaning on the tabletop during interaction. To achieve this, we first conducted a user study to identify differences in behavioral patterns (gaze, head, and touch) between completing common tasks on digital versus physical tabletops. We then distilled our findings into five types of spatiotemporal features and trained a machine learning model that recognizes unintentional touches with an F1 score of 91.3%, outperforming the state-of-the-art model by 4.3%. Finally, we evaluated our algorithm in a real-time filtering system. A user study shows that our algorithm is stable, that the improved tabletop effectively screens out unintentional touches, and that it provides a more relaxed and natural user experience. By linking users' gaze and head behavior to their touch behavior, our work sheds light on how future tabletop technology can better understand users' input intention.
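The abstract describes the pipeline at a high level: per-touch spatiotemporal features computed from gaze, head, and screen-contact data feed a binary classifier whose quality is reported as an F1 score. The sketch below illustrates that shape only, not the authors' published implementation: the feature names, the synthetic data, and the choice of scikit-learn's GradientBoostingClassifier are assumptions made for illustration.

```python
# Hedged sketch: classifying touches as intentional vs. unintentional from
# spatiotemporal gaze/head/touch features. Feature definitions, synthetic
# data, and the gradient-boosting model are illustrative assumptions, not
# the paper's actual pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# One row per touch event. Hypothetical features in the spirit of the five
# spatiotemporal feature types described in the abstract:
#   gaze_dist    - distance (cm) from gaze point to touch point at contact
#   head_angle   - angular offset (deg) between head orientation and touch
#   contact_area - size of the contact blob reported by the touchscreen
#   duration     - how long the contact persisted (s)
#   n_contacts   - simultaneous contacts (palms and forearms produce many)
X = np.column_stack([
    rng.gamma(2.0, 8.0, n),    # gaze_dist
    rng.gamma(2.0, 12.0, n),   # head_angle
    rng.gamma(2.0, 1.5, n),    # contact_area
    rng.exponential(0.5, n),   # duration
    rng.integers(1, 6, n),     # n_contacts
])

# Synthetic labels: touches far from gaze, with large contact areas or many
# simultaneous contacts, tend to be unintentional (1). This stands in for
# the real annotated data collected in the paper's user study.
score = 0.03 * X[:, 0] + 0.2 * X[:, 2] + 0.3 * (X[:, 4] > 2)
y = (score + rng.normal(0, 0.4, n) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)

# F1 = 2 * precision * recall / (precision + recall). The paper reports
# 91.3% on its real dataset; the number below reflects only this toy data.
print(f"F1 on held-out touches: {f1_score(y_te, clf.predict(X_te)):.3f}")
```

In the real system the features would be computed from eye-tracker, head-tracking, and touchscreen logs rather than sampled from distributions, and the trained model would run in the real-time filtering loop the abstract describes.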

Supplementary Material

xu (xu.zip)
Supplemental movie, appendix, image, and software files for "Recognizing Unintentional Touch on Interactive Tabletop"



Published In

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 4, Issue 1
March 2020
1006 pages
EISSN: 2474-9567
DOI: 10.1145/3388993
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 18 March 2020
Published in IMWUT Volume 4, Issue 1


Author Tags

  1. Behavior pattern
  2. Gaze and head
  3. Interactive tabletop
  4. Unintentional input

Qualifiers

  • Research-article
  • Research
  • Refereed

Funding Sources

  • Beijing Key Lab of Networked Multimedia
  • the National Key Research and Development Plan
  • the Natural Science Foundation of China

Article Metrics

  • Downloads (Last 12 months): 73
  • Downloads (Last 6 weeks): 7
Reflects downloads up to 30 Aug 2024


Cited By

  • (2024) G-VOILA: Gaze-Facilitated Information Querying in Daily Scenarios. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8(2), 1-33. https://doi.org/10.1145/3659623. Online publication date: 15-May-2024.
  • (2024) Vision- and Tactile-Based Continuous Multimodal Intention and Attention Recognition for Safer Physical Human–Robot Interaction. IEEE Transactions on Automation Science and Engineering 21(3), 3205-3215. https://doi.org/10.1109/TASE.2023.3276856. Online publication date: Jul-2024.
  • (2024) HCI Research and Innovation in China: A 10-Year Perspective. International Journal of Human–Computer Interaction, 1-33. https://doi.org/10.1080/10447318.2024.2323858. Online publication date: 22-Mar-2024.
  • (2023) GLOBEM. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6(4), 1-34. https://doi.org/10.1145/3569485. Online publication date: 11-Jan-2023.
  • (2023) XAIR: A Framework of Explainable AI in Augmented Reality. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1-30. https://doi.org/10.1145/3544548.3581500. Online publication date: 19-Apr-2023.
  • (2022) Gaze as an Indicator of Input Recognition Errors. Proceedings of the ACM on Human-Computer Interaction 6(ETRA), 1-18. https://doi.org/10.1145/3530883. Online publication date: 13-May-2022.
  • (2022) Towards Future Health and Well-being: Bridging Behavior Modeling and Intervention. Adjunct Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, 1-5. https://doi.org/10.1145/3526114.3558524. Online publication date: 29-Oct-2022.
  • (2022) FaceOri: Tracking Head Position and Orientation Using Ultrasonic Ranging on Earphones. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, 1-12. https://doi.org/10.1145/3491102.3517698. Online publication date: 29-Apr-2022.
  • (2022) Ubilung: Multi-Modal Passive-Based Lung Health Assessment. ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 551-555. https://doi.org/10.1109/ICASSP43922.2022.9746614. Online publication date: 23-May-2022.
  • (2022) Investigating user-defined flipping gestures for dual-display phones. International Journal of Human-Computer Studies 163, 102800. https://doi.org/10.1016/j.ijhcs.2022.102800. Online publication date: Jul-2022.
