InvisibleEye: Mobile Eye Tracking Using Multiple Low-Resolution Cameras and Learning-Based Gaze Estimation

Published: 11 September 2017

Abstract

    Analysis of everyday human gaze behaviour has significant potential for ubiquitous computing, as evidenced by a large body of work in gaze-based human-computer interaction, attentive user interfaces, and eye-based user modelling. However, current mobile eye trackers are still obtrusive, which not only makes them uncomfortable to wear and socially unacceptable in daily life, but also prevents them from being widely adopted in the social and behavioural sciences. To address these challenges, we present InvisibleEye, a novel approach for mobile eye tracking that uses millimetre-size RGB cameras that can be fully embedded into normal glasses frames. To compensate for the cameras’ low image resolution of only a few pixels, our approach uses multiple cameras to capture different views of the eye, as well as learning-based gaze estimation to directly regress from eye images to gaze directions. We prototypically implement our system and characterise its performance on three large-scale, increasingly realistic, and thus challenging datasets: 1) eye images synthesised using a recent computer graphics eye region model, 2) real eye images recorded from 17 participants under controlled lighting, and 3) eye images recorded from four participants over the course of four recording sessions in a mobile setting. We show that InvisibleEye achieves a top person-specific gaze estimation accuracy of 1.79° using four cameras with a resolution of only 5 × 5 pixels. Our evaluations not only demonstrate the feasibility of this novel approach but, more importantly, underline its significant potential for finally realising the vision of invisible mobile eye tracking and pervasive attentive user interfaces.
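    The core idea in the abstract, regressing gaze direction directly from several tiny eye images, can be illustrated with a small multi-branch neural network: each low-resolution camera view is embedded separately, and the embeddings are fused to predict a 2D gaze direction. The sketch below is a minimal, hypothetical Keras implementation of that idea; the branch sizes, concatenation-based fusion, and training setup are illustrative assumptions, not the architecture reported in the paper.

    ```python
    # Minimal sketch: learning-based gaze regression from multiple
    # low-resolution eye cameras, as described in the abstract.
    # All layer sizes and the fusion scheme are illustrative assumptions.
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    NUM_CAMERAS = 4          # four views of the eye, per the abstract
    RESOLUTION = (5, 5, 3)   # 5 x 5-pixel RGB images, per the abstract

    # One input branch per camera; each tiny image is flattened and embedded.
    inputs, embeddings = [], []
    for i in range(NUM_CAMERAS):
        inp = keras.Input(shape=RESOLUTION, name=f"camera_{i}")
        x = layers.Flatten()(inp)
        x = layers.Dense(64, activation="relu")(x)
        inputs.append(inp)
        embeddings.append(x)

    # Fuse the per-camera embeddings and regress a 2D gaze direction
    # (e.g. yaw and pitch angles).
    merged = layers.Concatenate()(embeddings)
    merged = layers.Dense(128, activation="relu")(merged)
    gaze = layers.Dense(2, name="gaze_angles")(merged)

    model = keras.Model(inputs=inputs, outputs=gaze)
    model.compile(optimizer="adam", loss="mse")

    # Toy usage: random arrays stand in for recorded eye images and
    # ground-truth gaze targets.
    images = [np.random.rand(8, *RESOLUTION).astype("float32")
              for _ in range(NUM_CAMERAS)]
    targets = np.random.rand(8, 2).astype("float32")
    model.fit(images, targets, epochs=1, verbose=0)
    ```

    In practice the random arrays would be replaced by person-specific training data collected during calibration, matching the person-specific evaluation setting the abstract describes.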

          Published In

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 1, Issue 3
September 2017, 2023 pages
EISSN: 2474-9567
DOI: 10.1145/3139486

          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          Published: 11 September 2017
          Accepted: 01 July 2017
          Received: 01 May 2017
          Published in IMWUT Volume 1, Issue 3

          Author Tags

          1. Appearance-Based Gaze Estimation
          2. Mobile Eye Tracking

          Qualifiers

          • Research-article
          • Research
          • Refereed

          Funding Sources

          • Cluster of Excellence on Multimodal Computing and Interaction, Saarland University
• Alexander von Humboldt Foundation

          Cited By

          • (2024) PrivatEyes: Appearance-based Gaze Estimation Using Federated Secure Multi-Party Computation. Proceedings of the ACM on Human-Computer Interaction 8, ETRA (May 2024), 1-23. DOI: 10.1145/3655606
          • (2024) Automatic Gaze Analysis: A Survey of Deep Learning Based Approaches. IEEE Transactions on Pattern Analysis and Machine Intelligence 46, 1 (Jan 2024), 61-84. DOI: 10.1109/TPAMI.2023.3321337
          • (2024) Differentiable Deflectometric Eye Tracking. IEEE Transactions on Computational Imaging 10 (2024), 888-898. DOI: 10.1109/TCI.2024.3382494
          • (2024) Design and FPGA Implementation of a Light-Weight Calibration-Friendly Eye Gaze Tracking Algorithm. 2024 IEEE International Symposium on Circuits and Systems (ISCAS), 1-5. DOI: 10.1109/ISCAS58744.2024.10558094
          • (2024) New Perspectives in e-Learning: EEG-Based Modelling of Human Cognition Individual Differences. Artificial Intelligence Applications and Innovations. AIAI 2024 IFIP WG 12.5 International Workshops, 290-299. DOI: 10.1007/978-3-031-63227-3_20
          • (2024) Using Eye Movement Features for Secure Authentication. Advances in Emerging Information and Communication Technology, 351-371. DOI: 10.1007/978-3-031-53237-5_22
          • (2023) Multi-Rate Sensor Fusion for Unconstrained Near-Eye Gaze Estimation. Proceedings of the 2023 Symposium on Eye Tracking Research and Applications, 1-8. DOI: 10.1145/3588015.3588407
          • (2023) SUPREYES: SUPer Resolution for EYES Using Implicit Neural Representation Learning. Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, 1-13. DOI: 10.1145/3586183.3606780
          • (2023) Gaze Direction Classification Using Vision Transformer. 2023 15th International Congress on Advanced Applied Informatics Winter (IIAI-AAI-Winter), 193-198. DOI: 10.1109/IIAI-AAI-Winter61682.2023.00044
          • (2023) DVGaze: Dual-View Gaze Estimation. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 20575-20584. DOI: 10.1109/ICCV51070.2023.01886
