
Interactive Fixation-to-AOI Mapping for Mobile Eye Tracking Data based on Few-Shot Image Classification

Published: 27 March 2023

Abstract

Mobile eye tracking is an important tool in psychology and human-centred interaction design for understanding how people process visual scenes and user interfaces. However, analysing recordings from mobile eye trackers, which typically include an egocentric video of the scene and a gaze signal, is a time-consuming and largely manual process. To address this challenge, we propose a web-based annotation tool that leverages few-shot image classification and interactive machine learning (IML) to accelerate the annotation process. The tool allows users to efficiently map fixations to areas of interest (AOIs) in a video-editing-style interface. It includes an IML component that generates suggestions and learns from user feedback using a few-shot image classification model initialised with a small number of images per AOI. Our goal is to improve the efficiency and accuracy of fixation-to-AOI mapping in mobile eye tracking.
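The paper does not publish its model code, but the idea described in the abstract (a few-shot classifier initialised with a handful of example images per AOI, refined by user feedback) can be illustrated with a minimal nearest-class-mean ("prototype") sketch. Everything below is an assumption for illustration only: the class name `FewShotAOIMapper`, the toy three-dimensional embeddings, and the AOI labels are all hypothetical; in practice the embeddings would come from a pretrained image encoder applied to fixation-centred crops.

```python
import math


class FewShotAOIMapper:
    """Toy nearest-class-mean classifier for fixation-to-AOI suggestions.

    Each AOI keeps a running mean (prototype) of the embedding vectors of
    its example crops; a fixation crop is suggested the AOI whose prototype
    has the highest cosine similarity to the crop's embedding.
    """

    def __init__(self):
        self.sums = {}    # AOI label -> componentwise sum of embeddings
        self.counts = {}  # AOI label -> number of support examples

    def add_example(self, label, embedding):
        if label not in self.sums:
            self.sums[label] = [0.0] * len(embedding)
            self.counts[label] = 0
        self.sums[label] = [s + e for s, e in zip(self.sums[label], embedding)]
        self.counts[label] += 1

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    def suggest(self, embedding):
        """Return (best_label, similarity) for a fixation crop embedding."""
        best, best_sim = None, -2.0
        for label, s in self.sums.items():
            proto = [x / self.counts[label] for x in s]
            sim = self._cosine(proto, embedding)
            if sim > best_sim:
                best, best_sim = label, sim
        return best, best_sim

    def feedback(self, label, embedding):
        # The IML loop: a confirmed or corrected suggestion becomes a new
        # support example, refining that AOI's prototype.
        self.add_example(label, embedding)
```

A confirmed suggestion is simply fed back as a new support example, so the prototypes drift toward the annotator's decisions over the course of a session; this mirrors the learn-from-feedback loop the abstract describes, though the actual system may use a different few-shot model.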

Supplementary Material

MOV File (iui23b-sub2375-cam-i45.mov)
Demo video


Cited By

  • (2025) The fundamentals of eye tracking part 4: Tools for conducting an eye tracking study. Behavior Research Methods 57:1. https://doi.org/10.3758/s13428-024-02529-7 (6 Jan 2025)
  • (2024) Towards Automatic Object Detection and Activity Recognition in Indoor Climbing. Sensors 24:19 (6479). https://doi.org/10.3390/s24196479 (8 Oct 2024)
  • (2024) The MASTER XR Platform for Robotics Training in Manufacturing. Proceedings of the 30th ACM Symposium on Virtual Reality Software and Technology, 1-2. https://doi.org/10.1145/3641825.3689514 (9 Oct 2024)
  • (2024) MASTER-XR: Mixed Reality Ecosystem for Teaching Robotics in Manufacturing. Integrated Systems: Data Driven Engineering, 167-182. https://doi.org/10.1007/978-3-031-53652-6_10 (17 Sep 2024)



        Published In

        IUI '23 Companion: Companion Proceedings of the 28th International Conference on Intelligent User Interfaces
        March 2023
        266 pages
        ISBN:9798400701078
        DOI:10.1145/3581754
        Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

        Publisher

Association for Computing Machinery, New York, NY, United States


        Author Tags

        1. area of interest
        2. eye tracking
        3. eye tracking data analysis
        4. fixation to AOI mapping
        5. interactive machine learning
        6. mobile eye tracking
        7. visual attention

        Qualifiers

        • Poster
        • Research
        • Refereed limited

        Conference

        IUI '23

        Acceptance Rates

Overall acceptance rate: 746 of 2,811 submissions (27%)



        Article Metrics

• Downloads (last 12 months): 78
• Downloads (last 6 weeks): 6
        Reflects downloads up to 14 Jan 2025
