Inferring Visualization Task Properties, User Performance, and User Cognitive Abilities from Eye Gaze Data

Published: 18 July 2014

Abstract

Information visualization systems have traditionally followed a one-size-fits-all model, typically ignoring an individual user's needs, abilities, and preferences. However, recent research has indicated that visualization performance could be improved by adapting aspects of the visualization to each individual user. To this end, this article presents research aimed at supporting the design of novel user-adaptive visualization systems. In particular, we discuss results on using a user's eye gaze patterns while interacting with a given visualization to predict properties of the user's visualization task; the user's performance (in terms of predicted task completion time); and the user's individual cognitive abilities, such as perceptual speed, visual working memory, and verbal working memory. We provide a detailed analysis of different eye gaze feature sets, as well as of prediction accuracy over time. We show that these predictions are significantly better than a baseline classifier even during the early stages of visualization usage. These findings are then discussed with a view to designing visualization systems that can adapt to the individual user in real time.
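To make the kind of input concrete, the sketch below shows how a single trial's raw fixation sequence might be aggregated into the summary gaze statistics named above (number of fixations, fixation rate, fixation duration, saccade length, saccade angle). This is a minimal Python sketch under assumed data structures; the Fixation fields and the exact feature definitions are illustrative placeholders, not the toolkit actually used in the article.

```python
import math
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Fixation:
    x: float            # screen x-coordinate in pixels (assumed format)
    y: float            # screen y-coordinate in pixels (assumed format)
    duration_ms: float  # fixation duration in milliseconds

def gaze_features(fixations: List[Fixation], trial_ms: float) -> Dict[str, float]:
    """Aggregate one trial's fixation sequence into summary statistics
    of the kind the article describes (fixation rate and duration,
    saccade length and angle). Saccades are approximated here as the
    straight segments between consecutive fixations."""
    durations = [f.duration_ms for f in fixations]
    lengths, angles = [], []
    for a, b in zip(fixations, fixations[1:]):
        dx, dy = b.x - a.x, b.y - a.y
        lengths.append(math.hypot(dx, dy))      # saccade length in pixels
        angles.append(math.atan2(dy, dx))       # saccade angle vs. x-axis

    def mean(xs: List[float]) -> float:
        return sum(xs) / len(xs) if xs else 0.0

    return {
        "num_fixations": float(len(fixations)),
        "fixation_rate": len(fixations) / (trial_ms / 1000.0),  # per second
        "mean_fixation_duration": mean(durations),
        "mean_saccade_length": mean(lengths),
        "mean_saccade_angle": mean(angles),
    }
```

Per-AOI variants of the same statistics, computed over only the fixations falling inside each area of interest, would extend this feature vector in the way the article describes.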

References

[1]
R. Amar, J. Eagan, and J. Stasko. 2005. Low-level components of analytic activity in information visualization. In Proceedings of the 2005 IEEE Symposium on Information Visualization. 15--21.
[2]
D. Bondareva, C. Conati, R. Feyzi-Behnagh, J. M. Harley, R. Azevedo, and F. Bouchet. 2013. Inferring learning from gaze data during interaction with an environment to support self-regulated learning. In Artificial Intelligence in Education. Lecture Notes in Computer Science, Vol. 7926, 229--238.
[3]
G. Carenini, C. Conati, E. Hoque, B. Steichen, D. Toker, and J. T. Enns. 2014. Highlighting interventions and user differences: Informing adaptive information visualization support. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'14). 1835--1844.
[4]
S. M. Casner. 1991. Task-analytic approach to the automated design of graphic presentations. ACM Transactions on Graphics 10, 111--151.
[5]
C. Chen and M. Czerwinski. 1997. Spatial ability and visual navigation: An empirical study. New Review of Hypermedia and Multimedia 3, 67--89.
[6]
C. Conati and H. Maclaren. 2008. Exploring the role of individual differences in information visualization. In Proceedings of the Working Conference on Advanced Visual Interfaces (AVI'08). 199--206.
[7]
C. Conati and C. Merten. 2007. Eye-tracking for user modeling in exploratory learning environments: An empirical evaluation. Knowledge-Based Systems 20, 557--574.
[8]
F. Courtemanche, E. Aïmeur, A. Dufresne, M. Najjar, and F. Mpondo. 2011. Activity recognition using eye-gaze movements and traditional interactions. Interacting with Computers 23, 202--213.
[9]
J. P. Egan. 1975. Signal Detection Theory and ROC-Analysis. Academic Press.
[10]
S. Eivazi and R. Bednarik. 2011. Predicting problem-solving behavior and performance levels from visual attention data. In Proceedings of the 2nd Workshop on Eye Gaze in Intelligent Human Machine Interaction (IUI'11). 9--16.
[11]
R. B. Ekstrom and U.S. Office of Naval Research. 1996. Manual for Kit of Factor-Referenced Cognitive Tests. Educational Testing Service.
[12]
S. Elzer, S. Carberry, and I. Zukerman. 2011. The automated understanding of simple bar charts. Artificial Intelligence 175, 526--555.
[13]
S. Few. 2005. Keep Radar Graphs Below the Radar—Far Below. Perceptual Edge.
[14]
K. Fukuda and E. K. Vogel. 2009. Human variation in overriding attentional capture. Journal of Neuroscience 29, 8726--8733.
[15]
J. Goldberg and J. Helfman. 2011. Eye tracking for visualization evaluation: Reading values on linear versus radial graphs. Information Visualization 10, 182--195.
[16]
J. H. Goldberg and J. I. Helfman. 2010. Comparing information graphics: A critical look at eye tracking. In Proceedings of the 3rd BELIV'10 Workshop: BEyond Time and Errors: Novel evaLuation Methods for Information Visualization (BELIV'10). 71--78.
[17]
D. Gotz and Z. Wen. 2009. Behavior-driven visualization recommendation. In Proceedings of the 14th International Conference on Intelligent User Interfaces (IUI'09). 315--324.
[18]
B. Grawemeyer. 2006. Evaluation of ERST: An external representation selection tutor. In Proceedings of the 4th International Conference on Diagrammatic Representation and Inference (Diagrams'06). 154--167.
[19]
T. M. Green and B. Fisher. 2010. Towards the personal equation of interaction: The impact of personality factors on visual analytics interface interaction. In Proceedings of the 2010 IEEE Symposium on Visual Analytics Science and Technology (VAST'10). 203--210.
[20]
M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I. H. Witten. 2009. The WEKA data mining software: An update. SIGKDD Explorations Newsletter 11, 10--18.
[21]
S. T. Iqbal and B. P. Bailey. 2004. Using eye gaze patterns to identify user tasks. Presented at the the Grace Hopper Celebration of Women in Computing.
[22]
A. Jameson. 2008. Adaptive interfaces and agents. In A. Sears and J. A. Jacko (Eds.), The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications (2nd ed.). CRC Press, Boca Raton, FL, 433--458.
[23]
S. Kardan and C. Conati. 2012. Exploring gaze data for determining user learning with an interactive simulation. In Proceedings of the 20th International Conference on User Modeling, Adaptation, and Personalization (UMAP'12). 126--138.
[24]
J. Mackinlay. 1986. Automating the design of graphical presentations of relational information. ACM Transactions on Graphics 5, 110--141.
[25]
M. D. Plumlee and C. Ware. 2006. Zooming versus multiple window interfaces: Cognitive costs of visual comparisons. ACM Transactions on Computer-Human Interaction 13, 179--209.
[26]
F. J. Provost, T. Fawcett, and R. Kohavi. 1998. The case against accuracy estimation for comparing induction algorithms. In Proceedings of the 15th International Conference on Machine Learning (ICML'98). 445--453.
[27]
K. Rayner. 1995. Eye movements and cognitive processes in reading, visual search, and scene perception. Studies in Visual Information Processing, 3--22.
[28]
K. Rayner. 1998. Eye movements in reading and information processing: 20 years of research. Psychological Bulletin 124, 372--422.
[29]
L. Sesma, A. Villanueva, and R. Cabeza. 2012. Evaluation of pupil center-eye corner vector for gaze estimation using a Web cam. In Proceedings of the Symposium on Eye Tracking Research and Applications (ETRA'12). 217--220.
[30]
J. Simola, J. Salojärvi, and I. Kojo. 2008. Using hidden Markov model to uncover processing states from eye movements in information search tasks. Cognitive Systems Research 9, 237--251.
[31]
B. Steichen, H. Ashman, and V. Wade. 2012. A comparative survey of personalised information retrieval and adaptive hypermedia techniques. Information Processing and Management 48, 4, 698--724.
[32]
B. Steichen, G. Carenini, and C. Conati. 2013. User-adaptive information visualization: Using eye gaze data to infer visualization tasks and user cognitive abilities. In Proceedings of the 2013 International Conference on Intelligent User Interfaces (IUI'13). 317--328.
[33]
D. Toker, C. Conati, G. Carenini, and M. Haraty. 2012. Towards adaptive information visualization: On the influence of user characteristics. In Proceedings of the 20th International Conference on User Modeling, Adaptation, and Personalization (UMAP'12). 274--285.
[34]
D. Toker, C. Conati, B. Steichen, and G. Carenini. 2013. Individual user characteristics and information visualization: Connecting the dots through eye tracking. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'13). 295--304.
[35]
D. Toker, B. Steichen, M. Gingerich, C. Conati, and G. Carenini. 2014. Towards facilitating user skill acquisition: Identifying untrained visualization users through eye tracking. In Proceedings of the 19th International Conference on Intelligent User Interfaces (IUI'14). 105--114.
[36]
M. L. Turner and R. W. Engle. 1989. Is working memory capacity task dependent? Journal of Memory and Language 28, 127--154.
[37]
M. C. Velez, D. Silver, and M. Tremaine. 2005. Understanding visualization through spatial ability differences. in: IEEE Visualization, 2005. VIS 05. In Proceedings of IEEE Visualization (VIS'05). 511--518.
[38]
S. Westerman and T. Cribbin. 2000. Mapping semantic information in virtual space: Dimensions, variance and individual differences. Journal of Human-Computer Studies 53, 765--787.
[39]
C. Ziemkiewicz, R. J. Crouser, A. R. Yauilla, S. L. Su, W. Ribarsky, and R. Chang. 2011. How locus of control influences compatibility with visualization style. In Proceedings of the 2011 IEEE Conference on Visual Analytics Science and Technology (VAST'11). 81--90.

    Reviews

    Ying Zhu

This research project tries to answer two questions: (1) Can a user's current visualization task properties, performance, and long-term cognitive abilities be inferred solely from eye gaze data? (2) Which eye gaze features are the most informative? The long-term goal of this research is to develop data visualizations that adapt to different users and tasks.

    The paper describes a user study in which 35 subjects performed five types of tasks on two different data visualizations: a bar graph and a radar graph. Each visualization is divided into five areas of interest (AOIs): high area, low area, labels, question text, and legend. The authors collected eye gaze data such as number of fixations, fixation rate, fixation duration, saccade length, and saccade angles. They developed a toolkit to convert the raw eye gaze data into more useful statistics, particularly with respect to specific AOIs, and applied various classifiers to these statistics to predict task type, task complexity, task difficulty, user performance, and cognitive abilities.

    Regarding the first research question, the authors found that a user's eye gaze data alone can predict visualization task type, task complexity, difficulty, user performance, and user cognitive abilities with accuracy ranging from 40 to 80 percent, although it is less effective in predicting user expertise. Some experiments also show that prediction accuracy is highest using the eye gaze data collected at the beginning of each task. In addition, logistic regression consistently outperforms the other machine learning models in this study. Regarding the second research question, the authors found that AOI-related features were crucial for more accurate predictions.

    This research provides some evidence that eye gaze data can be used to help develop adaptive data visualization. However, eye gaze data may need to be integrated with other information to improve the accuracy of predictions.

    Online Computing Reviews Service
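    As a rough sketch of the evaluation the review describes, the following Python snippet cross-validates a logistic regression classifier (the model the review reports as consistently strongest) on per-trial gaze feature vectors and compares it against a majority-class baseline. The data below is synthetic stand-in data, and the feature count, labels, and fold count are assumptions rather than the authors' actual experimental setup.

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def compare_to_baseline(X: np.ndarray, y: np.ndarray, folds: int = 10) -> None:
    """Cross-validate a logistic regression classifier on per-trial gaze
    features and compare it against a majority-class baseline, mirroring
    the article's 'better than a baseline classifier' style of evaluation."""
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    baseline = DummyClassifier(strategy="most_frequent")
    acc = cross_val_score(model, X, y, cv=folds, scoring="accuracy")
    base = cross_val_score(baseline, X, y, cv=folds, scoring="accuracy")
    print(f"logistic regression: {acc.mean():.2f} +/- {acc.std():.2f}")
    print(f"majority baseline:   {base.mean():.2f} +/- {base.std():.2f}")

# Synthetic stand-in data: real inputs would be the per-trial gaze
# feature vectors and a label such as task type.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))      # 200 trials x 5 gaze features (assumed)
y = rng.integers(0, 2, size=200)   # hypothetical binary label per trial
compare_to_baseline(X, y)
```

    Comparisons of this kind against a trivial baseline are what ground claims such as the abstract's "significantly better than a baseline classifier," since raw accuracy alone is hard to interpret across differently balanced label sets.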


    Published In

    ACM Transactions on Interactive Intelligent Systems, Volume 4, Issue 2
    July 2014, 101 pages
    ISSN: 2160-6455
    EISSN: 2160-6463
    DOI: 10.1145/2638542
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 18 July 2014
    Accepted: 01 March 2014
    Revised: 01 February 2014
    Received: 01 August 2013
    Published in TIIS Volume 4, Issue 2

    Author Tags

    1. Adaptive information visualization
    2. adaptation
    3. eye tracking
    4. machine learning

    Qualifiers

    • Research-article
    • Research
    • Refereed

    Article Metrics

    • Downloads (last 12 months): 61
    • Downloads (last 6 weeks): 12
    Reflects downloads up to 20 Jan 2025

    Cited By

    • (2024) Leveraging Machine Learning to Analyze Semantic User Interactions in Visual Analytics. Information 15(6), 351. DOI: 10.3390/info15060351. Published 13 June 2024.
    • (2024) The State of the Art in User-Adaptive Visualizations. Computer Graphics Forum. DOI: 10.1111/cgf.15271. Published 4 December 2024.
    • (2024) A Comprehensive Analysis of Cognitive CAPTCHAs through Eye Tracking. IEEE Access. DOI: 10.1109/ACCESS.2024.3373542. Published 2024.
    • (2024) Computational Methods to Infer Human Factors for Adaptation and Personalization Using Eye Tracking. In A Human-Centered Perspective of Intelligent Personalized Environments and Systems, 183-204. DOI: 10.1007/978-3-031-55109-3_7. Published 1 May 2024.
    • (2023) Integrating Gaze and Mouse Via Joint Cross-Attention Fusion Net for Students' Activity Recognition in E-learning. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7(3), 1-35. DOI: 10.1145/3610876. Published 27 September 2023.
    • (2023) Classification of Alzheimer's Disease with Deep Learning on Eye-tracking Data. Proceedings of the 25th International Conference on Multimodal Interaction, 104-113. DOI: 10.1145/3577190.3614149. Published 9 October 2023.
    • (2023) User Evaluation of Conversational Agents for Aerospace Domain. International Journal of Human-Computer Interaction 40(19), 5549-5568. DOI: 10.1080/10447318.2023.2239544. Published 2 August 2023.
    • (2022) Towards Supporting Adaptive Training of Injection Procedures: Detecting Differences in the Visual Attention of Nursing Students and Experts. Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization, 286-294. DOI: 10.1145/3503252.3531302. Published 4 July 2022.
    • (2021) Automatic Visual Attention Detection for Mobile Eye Tracking Using Pre-Trained Computer Vision Models and Human Gaze. Sensors 21(12), 4143. DOI: 10.3390/s21124143. Published 16 June 2021.
    • (2021) Predicting Visual Search Task Success from Eye Gaze Data as a Basis for User-Adaptive Information Visualization Systems. ACM Transactions on Interactive Intelligent Systems 11(2), 1-25. DOI: 10.1145/3446638. Published 20 May 2021.
