
Gaze-based predictive user interfaces: Visualizing user intentions in the presence of uncertainty

Published: 01 March 2018

Highlights

We propose two novel gaze-based predictive user interfaces.
Our interfaces are able to dynamically provide adaptive interventions.
Interventions reflect the user's task-related intentions and goals.
The presence of uncertainty in prediction model outputs is handled.
Usability and perceived task load are not adversely affected.

Abstract

Human eyes exhibit different characteristic patterns during different virtual interaction tasks such as moving a window, scrolling a piece of text, or maximizing an image. The human-computer studies literature contains examples of intelligent systems that predict users' task-related intentions and goals from eye gaze behavior. However, these systems are generally evaluated in terms of prediction accuracy on previously collected, offline interaction data. Little attention has been paid to building real-time interactive systems driven by eye gaze and evaluating them in online use. We make five main contributions that address this gap from a variety of angles. First, we present the first line of work that uses real-time feedback from a gaze-based probabilistic task prediction model to build an adaptive real-time visualization system. Our system dynamically provides adaptive interventions informed by real-time user behavior data. Second, we propose two novel adaptive visualization approaches that take into account the uncertainty in the outputs of prediction models. Third, we offer a personalization method that suggests which approach will yield better system performance (measured by prediction accuracy) for each user. Personalization boosts system performance and provides each user with the more suitable visualization approach (measured by usability and perceived task load). Fourth, by means of a thorough usability study, we quantify the effects of the proposed visualization approaches and of prediction errors on natural user behavior and on the performance of the underlying prediction systems. Finally, we demonstrate that our previously published gaze-based task prediction system, which was assessed as successful in an offline test scenario, can also be used successfully in realistic online usage scenarios.
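The abstract outlines an architecture in which the uncertain outputs of a probabilistic task prediction model gate adaptive visual interventions. The minimal Python sketch below illustrates one plausible form of such confidence-gated intervention logic; the task labels, threshold values, and the Prediction interface are illustrative assumptions for this sketch, not the paper's actual implementation.

    from dataclasses import dataclass

    # Hypothetical task labels; the paper studies virtual interaction tasks
    # such as moving a window, scrolling text, or maximizing an image.
    TASKS = ["move", "scroll", "maximize"]

    @dataclass
    class Prediction:
        """Posterior probabilities over candidate tasks, e.g. from a gaze-based model."""
        probabilities: dict  # task name -> probability, summing to ~1.0

        def top(self):
            """Return the most likely task and its probability."""
            task = max(self.probabilities, key=self.probabilities.get)
            return task, self.probabilities[task]

    def choose_intervention(pred, commit_threshold=0.8, hint_threshold=0.5):
        """Map an uncertain prediction to an adaptive intervention.

        High confidence   -> commit to the predicted task's visualization.
        Medium confidence -> show only a tentative visual hint.
        Low confidence    -> do nothing, avoiding disruptive wrong guesses.
        """
        task, p = pred.top()
        if p >= commit_threshold:
            return f"apply visualization for '{task}'"
        if p >= hint_threshold:
            return f"show tentative hint for '{task}'"
        return "no intervention"

    # An ambiguous gaze pattern yields only a hint, not a full adaptation.
    print(choose_intervention(Prediction({"move": 0.55, "scroll": 0.30, "maximize": 0.15})))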


Cited By

  • (2024) NeighboAR: Efficient Object Retrieval using Proximity- and Gaze-based Object Grouping with an AR System. Proceedings of the ACM on Human-Computer Interaction 8(ETRA), 1–19. https://doi.org/10.1145/3655599. Published online: 28 May 2024.
  • (2024) What is supposed to help, actually hinders. Procedia Computer Science 225(C), 2292–2301. https://doi.org/10.1016/j.procs.2023.10.220. Published online: 4 March 2024.
  • (2024) AdaptLIL: A Real-Time Adaptive Linked Indented List Visualization for Ontology Mapping. The Semantic Web – ISWC 2024, 3–22. https://doi.org/10.1007/978-3-031-77850-6_1. Published online: 11 November 2024.
  • (2023) BIGaze. Advanced Engineering Informatics 58(C). https://doi.org/10.1016/j.aei.2023.102159. Published online: 1 October 2023.
  • (2022) Dwell Selection with ML-based Intent Prediction Using Only Gaze Data. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6(3), 1–21. https://doi.org/10.1145/3550301. Published online: 7 September 2022.
  • (2020) Bridging the Virtual and Real Worlds: A Preliminary Study of Messaging Notifications in Virtual Reality. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–14. https://doi.org/10.1145/3313831.3376228. Published online: 21 April 2020.


      Published In

      International Journal of Human-Computer Studies, Volume 111, Issue C
      March 2018, 101 pages

      Publisher

      Academic Press, Inc., United States

      Author Tags

      1. Implicit interaction
      2. Activity prediction
      3. Task prediction
      4. Uncertainty visualization
      5. Gaze-based interfaces
      6. Predictive interfaces
      7. Proactive interfaces
      8. Gaze-contingent interfaces
      9. Usability study

      Qualifiers

      • Research-article
