Search Results (89)

Search Parameters:
Keywords = pupil tracking

24 pages, 5430 KiB  
Article
Evaluation Method for Virtual Museum Interface Integrating Layout Aesthetics and Visual Cognitive Characteristics Based on Improved Gray H-Convex Correlation Model
by Weiwei Wang, Zhiqiang Wen, Jian Chen, Yanhui Gu and Qizhao Peng
Appl. Sci. 2024, 14(16), 7006; https://doi.org/10.3390/app14167006 - 9 Aug 2024
Viewed by 677
Abstract
A scientific method for evaluating the design of interfaces is proposed to address the unique characteristics and user needs of infrequent-contact public service interfaces. This method is significant for enhancing service efficiency and promoting the sustainable development of public services. Current interface evaluation methods are limited in scope and often fail to meet actual user needs. To address this, this study focuses on virtual museums, examining users’ aesthetic psychology and cognitive behavior in terms of layout aesthetics and visual cognitive characteristics, aiming to explore the relationship between the two. Interface layout aesthetic values and user visual cognitive measurements were obtained by using computational aesthetics methods and eye-tracking experiments. These served as input data for a new model. An improved gray H-convex correlation model utilizing the ICRITIC method is proposed to examine the mapping relationship between interface layout aesthetics and visual cognitive features. The results demonstrate that our new model achieves over 90% accuracy, outperforming existing models. For virtual museum interfaces, symmetry and dominance significantly influence user visual cognition, with the most notable correlations found between density and gaze shift frequency, simplicity and mean pupil diameter, and order and gaze shift frequency. Additionally, fixation duration, fixation count, and mean pupil diameter were inversely correlated with interface layout aesthetics, whereas gaze shift frequency and gaze time percentage were positively correlated. Full article
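For readers unfamiliar with grey relational modelling, the sketch below shows classical grey relational analysis (Deng's formulation) between an aesthetics sequence and eye-movement metric sequences. It is offered only as a generic baseline: the authors' improved gray H-convex correlation model and its ICRITIC weighting are not specified in this abstract, and all variable names and data here are illustrative.

```python
import numpy as np

def grey_relational_grades(reference, comparisons, rho=0.5):
    """reference: (n,) aesthetic scores per interface; comparisons: (m, n) eye-movement
    metrics for the same n interfaces; returns one grey relational grade per metric."""
    ref = (reference - reference.min()) / (np.ptp(reference) + 1e-12)
    cmp_ = (comparisons - comparisons.min(axis=1, keepdims=True)) / (
        np.ptp(comparisons, axis=1, keepdims=True) + 1e-12)
    delta = np.abs(cmp_ - ref)                        # absolute difference sequences
    grc = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return grc.mean(axis=1)                           # grade = mean coefficient per metric

aesthetics = np.array([0.62, 0.71, 0.55, 0.80, 0.68])        # e.g., layout order scores
metrics = np.array([[3.1, 2.9, 3.4, 2.7, 3.0],               # e.g., mean pupil diameter (mm)
                    [12, 14, 10, 16, 13]], dtype=float)      # e.g., gaze-shift frequency
print(grey_relational_grades(aesthetics, metrics))           # higher grade = stronger relation
```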

21 pages, 2765 KiB  
Article
Combined Effects of Moderate Hypoxia and Sleep Restriction on Mental Workload
by Anaïs Pontiggia, Pierre Fabries, Vincent Beauchamps, Michael Quiquempoix, Olivier Nespoulous, Clémentine Jacques, Mathias Guillard, Pascal Van Beers, Haïk Ayounts, Nathalie Koulmann, Danielle Gomez-Merino, Mounir Chennaoui and Fabien Sauvet
Clocks & Sleep 2024, 6(3), 338-358; https://doi.org/10.3390/clockssleep6030024 - 23 Jul 2024
Viewed by 552
Abstract
Aircraft pilots face a high mental workload (MW) under environmental constraints induced by high altitude and sometimes sleep restriction (SR). Our aim was to assess the combined effects of hypoxia and sleep restriction on cognitive and physiological responses to different MW levels using the Multi-Attribute Task Battery (MATB)-II with an additional auditory Oddball-like task. Seventeen healthy subjects underwent, in random order, three 12-min periods of increasing MW level (low, medium, and high) under sleep restriction (SR, <3 h of total sleep time (TST)) vs. habitual sleep (HS, >6 h TST) and under hypoxia (HY, 2 h, FIO2 = 13.6%, ~3500 m) vs. normoxia (NO, FIO2 = 21%). Following each MW level, participants completed the NASA-TLX subjective MW scale. Increasing MW decreased performance on the MATB-II Tracking task (p = 0.001, MW difficulty main effect) and increased NASA-TLX scores (p = 0.001). In the combined HY/SR condition, MATB-II performance was lower and the NASA-TLX score was higher compared with the NO/HS condition, while no effect of hypoxia alone was observed. For accuracy on the auditory task, there was a significant interaction between hypoxia and MW difficulty (F(2,176) = 3.14, p = 0.04), with lower values at high MW under hypoxic conditions. Breathing rate, pupil size, and the amplitude of the pupil dilation response (PDR) to auditory stimuli were associated with increased MW. These parameters were the best predictors of increased MW, independently of physiological constraints. Adding ECG, SpO2, or electrodermal conductance did not improve model performance. In conclusion, hypoxia and sleep restriction have an additive effect on MW, and physiological and electrophysiological responses must be taken into account when designing and cross-validating a MW predictive model. Full article
(This article belongs to the Section Human Basic Research & Neuroimaging)

20 pages, 11544 KiB  
Article
Predicting Emotional Experiences through Eye-Tracking: A Study of Tourists’ Responses to Traditional Village Landscapes
by Feng Ye, Min Yin, Leilei Cao, Shouqian Sun and Xuanzheng Wang
Sensors 2024, 24(14), 4459; https://doi.org/10.3390/s24144459 - 10 Jul 2024
Viewed by 656
Abstract
This study investigates the relationship between eye-tracking metrics and emotional experiences in the context of cultural landscapes and tourism-related visual stimuli. Fifty-three participants were involved in two experiments: forty-three in the data collection phase and ten in the model validation phase. Eye movements were recorded and the data were analyzed to identify correlations between four eye-tracking metrics—average number of saccades (ANS), total dwell fixation (TDF), fixation count (FC), and average pupil dilation (APD)—and 19 distinct emotional experiences, which were subsequently grouped into three categories: positive, neutral, and negative. The study examined the variations in eye-tracking metrics across architectural, historic, economic, and life landscapes, as well as the three primary phases of a tour: entry, core, and departure. Findings revealed that architectural and historic landscapes demanded higher levels of visual and cognitive engagement, especially during the core phase. Stepwise regression analysis identified four key eye-tracking predictors for emotional experiences, enabling the development of a prediction model. This research underscores the effectiveness of eye-tracking technology in capturing and predicting emotional responses to different landscape types, offering valuable insights for optimizing rural tourism environments and enhancing visitors’ emotional experiences. Full article
(This article belongs to the Section Intelligent Sensors)
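As a concrete illustration of the stepwise-regression step described above, the sketch below runs forward selection over the four named eye-tracking predictors (ANS, TDF, FC, APD) with a p-value entry criterion. The synthetic data, threshold, and column names are assumptions for illustration, not the study's dataset or exact procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "ANS": rng.normal(8, 2, 40),       # average number of saccades
    "TDF": rng.normal(2.5, 0.5, 40),   # total dwell fixation (s)
    "FC":  rng.normal(55, 10, 40),     # fixation count
    "APD": rng.normal(3.2, 0.3, 40),   # average pupil dilation (mm)
})
df["emotion"] = 0.4 * df["APD"] + 0.02 * df["FC"] + rng.normal(0, 0.2, 40)  # toy target

def forward_stepwise(data, target, alpha=0.05):
    remaining = [c for c in data.columns if c != target]
    selected = []
    while remaining:
        # p-value of each candidate when added to the current model
        pvals = {}
        for cand in remaining:
            X = sm.add_constant(data[selected + [cand]])
            pvals[cand] = sm.OLS(data[target], X).fit().pvalues[cand]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:          # stop once no candidate is significant
            break
        selected.append(best)
        remaining.remove(best)
    return selected

print(forward_stepwise(df, "emotion"))    # e.g., ['APD', 'FC'] on this toy data
```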

15 pages, 15194 KiB  
Article
Artificial Vision System on Digital Devices for Real-Time Head Tilt Control
by Miguel Ángel Tomé de la Torre, Antonio Álvarez Fernández-Balbuena, Ricardo Bernárdez-Vilaboa and Daniel Vázquez Molini
Sensors 2024, 24(12), 3756; https://doi.org/10.3390/s24123756 - 9 Jun 2024
Viewed by 631
Abstract
It is common, especially in young people, for the posture or position of the head to be inadequate when performing near-vision tasks in front of a digital screen; correct head posture is essential to avoid visual, muscular, or joint problems. Most current systems for controlling head inclination require an external part attached to the subject’s head. The aim of this study is to validate a procedure that, through a detection algorithm and eye tracking, can control the correct position of the head in real time while subjects are in front of a digital device. The system only needs a digital device with a CCD receiver and downloadable software through which the inclination of the head can be detected, indicating whether a bad posture is adopted due to a visual problem or simply inadequate visual–postural habits, and alerting the user to the postural anomaly so it can be corrected. The system was evaluated in subjects with disparate interpupillary distances, at different working distances in front of the digital device, and, at each distance, at different tilt angles. The system performed favorably in different lighting environments, correctly detecting the subjects’ pupils. The results showed particularly good absolute and relative reliability values for most variables when measuring head tilt, albeit with lower accuracy than most existing systems. Overall, the evaluation was positive, and the system is considerably inexpensive and easily affordable for all users. It is the first application capable of measuring the head tilt of a subject at their working or reading distance in real time by tracking their eyes. Full article
(This article belongs to the Special Issue Sensors for Human Posture and Movement)
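The geometric core of such a head-tilt estimate is simple once the two pupils have been located: the tilt is the angle of the interpupillary line relative to the image horizontal. The sketch below shows only this step; the pupil-detection algorithm and calibration described in the paper are assumed to exist upstream.

```python
import math

def head_tilt_deg(left_pupil, right_pupil):
    """left_pupil, right_pupil: (x, y) pixel coordinates from the camera frame.
    Returns head tilt in degrees; 0 means the interpupillary line is horizontal."""
    dx = right_pupil[0] - left_pupil[0]
    dy = right_pupil[1] - left_pupil[1]
    return math.degrees(math.atan2(dy, dx))

# Toy example: the right pupil sits 12 px higher than the left in the image.
print(round(head_tilt_deg((410, 382), (530, 370)), 1))   # -5.7, a slight head tilt
```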

11 pages, 3201 KiB  
Article
Deep Learning-Based Nystagmus Detection for BPPV Diagnosis
by Sae Byeol Mun, Young Jae Kim, Ju Hyoung Lee, Gyu Cheol Han, Sung Ho Cho, Seok Jin and Kwang Gi Kim
Sensors 2024, 24(11), 3417; https://doi.org/10.3390/s24113417 - 26 May 2024
Viewed by 824
Abstract
In this study, we propose a deep learning-based nystagmus detection algorithm using video oculography (VOG) data to diagnose benign paroxysmal positional vertigo (BPPV). Various deep learning architectures were utilized to develop and evaluate nystagmus detection models. Among the four deep learning architectures used in this study, the CNN1D model proposed as a nystagmus detection model demonstrated the best performance, exhibiting a sensitivity of 94.06 ± 0.78%, specificity of 86.39 ± 1.31%, precision of 91.34 ± 0.84%, accuracy of 91.02 ± 0.66%, and an F1-score of 92.68 ± 0.55%. These results indicate the high accuracy and generalizability of the proposed nystagmus diagnosis algorithm. In conclusion, this study validates the practicality of deep learning in diagnosing BPPV and offers avenues for numerous potential applications of deep learning in the medical diagnostic sector. The findings of this research underscore its importance in enhancing diagnostic accuracy and efficiency in healthcare. Full article
(This article belongs to the Special Issue Deep Learning for Computer Vision and Image Processing Sensors)
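For orientation, the sketch below shows what a 1D-CNN classifier over fixed-length VOG windows can look like. The channel count, window length, and layer sizes are illustrative choices, not the architecture or hyperparameters reported in the paper.

```python
import torch
import torch.nn as nn

class Nystagmus1DCNN(nn.Module):
    def __init__(self, in_channels=2, window=256):   # e.g., horizontal + vertical eye position
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * (window // 4), 64), nn.ReLU(), nn.Linear(64, 2),
        )

    def forward(self, x):                 # x: (batch, channels, samples)
        return self.classifier(self.features(x))

logits = Nystagmus1DCNN()(torch.randn(8, 2, 256))   # 8 windows -> nystagmus vs. no nystagmus
print(logits.shape)                                 # torch.Size([8, 2])
```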

21 pages, 6048 KiB  
Article
Morphological and Position Factors of Vertical Surface Light Source Affecting Discomfort Glare Perception
by Guangyan Kong, Lixiong Wang, Peng Chen, Shuo Wang and Fengrui Ma
Buildings 2024, 14(5), 1227; https://doi.org/10.3390/buildings14051227 - 25 Apr 2024
Viewed by 719
Abstract
Unlike conventional lighting, the LED vertical surface light source (VSLS) is directly exposed to human view, and the effects of its form on visual perception are non-negligible. In the current discomfort glare evaluation system, the solid angle and the position index, which represent the relative relation between the glaring light source and the human visual field, are not fully applicable to large-area VSLSs and therefore require supplementation and modification. In this study, a physical experimental setup was established to conduct a discomfort glare evaluation experiment, employing an LED display and white translucent frosted film to simulate VSLSs. The experiments covered 21 VSLS shapes (comprising 3 areas and 7 length-to-width ratios) and 11 mounting positions. Subjective ratings and four eye-movement parameters—namely, the change rate of pupil diameter (CRPD), mean saccadic amplitude (SA), blinking frequency (BF), and saccadic speed (SS)—were collected from 24 participants under each working condition using the Boyce evaluation scale and eye-tracking techniques. The main results of this study are the following: (a) CRPD is the most appropriate eye-movement index for characterizing VSLS glare perception; (b) the area of the VSLS is the primary shape element influencing discomfort glare, and, for the same surface area, the lateral view angle (LaVA) and the longitudinal view angle (LoVA) perceived by the human eye also affect glare perception; (c) a functional equation relating the VSLS area, LaVA, and LoVA to the borderline luminance between comfort and discomfort (BCD luminance) is fitted; (d) based on the eccentric angle and the azimuthal angle, a modified position index P’ is proposed to represent the relative position of the VSLS in the visual field, and the ratio function between the BCD luminance of the VSLS at non-central positions and that at the central position is fitted. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)
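Of the four eye-movement parameters, CRPD is the one singled out as most informative. A minimal sketch of one plausible operationalisation is given below, as the relative change of mean pupil diameter during glare exposure versus a pre-exposure baseline; the paper's exact definition is not stated in this abstract.

```python
import numpy as np

def crpd(baseline_diameters, exposure_diameters):
    """Both arguments: 1D arrays of pupil diameter samples (mm). Negative values
    indicate constriction relative to baseline."""
    baseline = np.mean(baseline_diameters)
    return (np.mean(exposure_diameters) - baseline) / baseline

rng = np.random.default_rng(1)
pre   = rng.normal(4.8, 0.1, 300)   # baseline: 5 s at 60 Hz
glare = rng.normal(3.9, 0.1, 300)   # constriction while viewing the luminous surface
print(f"CRPD = {crpd(pre, glare):+.2%}")   # roughly -19%
```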

19 pages, 10662 KiB  
Article
SVD-Based Mind-Wandering Prediction from Facial Videos in Online Learning
by Nguy Thi Lan Anh, Nguyen Gia Bach, Nguyen Thi Thanh Tu, Eiji Kamioka and Phan Xuan Tan
J. Imaging 2024, 10(5), 97; https://doi.org/10.3390/jimaging10050097 - 24 Apr 2024
Viewed by 996
Abstract
This paper presents a novel approach to mind-wandering prediction in the context of webcam-based online learning. We implemented a Singular Value Decomposition (SVD)-based 1D temporal eye-signal extraction method, which relies solely on eye landmark detection and eliminates the need for gaze tracking or specialized hardware, and then extracted suitable features from the signals to train the prediction model. Our thorough experimental framework facilitates the evaluation of our approach alongside baseline models, particularly in the analysis of temporal eye signals and the prediction of attentional states. Notably, our SVD-based signal captures both subtle and major eye movements, including changes in the eye boundary and pupil, surpassing the limited capabilities of eye aspect ratio (EAR)-based signals. Our proposed model exhibits a 2% improvement in the overall Area Under the Receiver Operating Characteristic curve (AUROC) metric and a 7% improvement in the F1-score for ‘not-focus’ prediction, compared with the combination of EAR-based and computationally intensive gaze-based models used in the baseline study. These contributions have potential implications for enhancing the field of attentional state prediction in online learning, offering a practical and effective solution to benefit educational experiences. Full article
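A minimal sketch of the generic idea — collapsing multiple eye-landmark trajectories into one dominant temporal signal with SVD — is shown below. The landmark count, frame rate, and preprocessing are placeholders; the authors' feature extraction and classifier are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
frames, coords = 300, 12                      # 10 s at 30 fps; 6 eye landmarks x (x, y)
X = rng.normal(size=(frames, coords))         # stand-in for tracked landmark coordinates

X -= X.mean(axis=0)                           # centre each coordinate over time
U, S, Vt = np.linalg.svd(X, full_matrices=False)
signal_1d = U[:, 0] * S[0]                    # projection onto the dominant temporal mode

print(signal_1d.shape)                        # (300,) — one value per video frame
```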

15 pages, 1333 KiB  
Article
Improving Eye-Tracking Data Quality: A Framework for Reproducible Evaluation of Detection Algorithms
by Christopher Gundler, Matthias Temmen, Alessandro Gulberti, Monika Pötter-Nerger and Frank Ückert
Sensors 2024, 24(9), 2688; https://doi.org/10.3390/s24092688 - 24 Apr 2024
Viewed by 1224
Abstract
High-quality eye-tracking data are crucial in behavioral sciences and medicine. Even with a solid understanding of the literature, selecting the most suitable algorithm for a specific research project poses a challenge. Empowering applied researchers to choose the best-fitting detector for their research needs is the primary contribution of this paper. We developed a framework to systematically assess and compare the effectiveness of 13 state-of-the-art algorithms through a unified application interface. Hence, we more than double the number of algorithms that are currently usable within a single software package and allow researchers to identify the best-suited algorithm for a given scientific setup. Our framework validation on retrospective data underscores its suitability for algorithm selection. Through a detailed and reproducible step-by-step workflow, we hope to contribute towards significantly improved data quality in scientific experiments. Full article
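The abstract describes a unified application interface shared by all 13 detectors. As a rough picture of what such an interface can look like, the sketch below defines a hypothetical detector protocol and a toy velocity-threshold implementation; it is not the framework's actual API.

```python
from typing import Protocol, Sequence, Tuple
import numpy as np

Event = Tuple[str, int, int]                  # (label, start_sample, end_sample)

class EventDetector(Protocol):
    name: str
    def detect(self, t: np.ndarray, x: np.ndarray, y: np.ndarray) -> Sequence[Event]:
        """Classify one gaze recording (time in s, gaze position in degrees)."""
        ...

class IVTDetector:
    """Toy velocity-threshold detector, included only to exercise the common interface."""
    name = "I-VT"
    def __init__(self, threshold: float = 30.0):          # deg/s
        self.threshold = threshold
    def detect(self, t, x, y):
        speed = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)
        labels = np.concatenate([[False], speed > self.threshold])
        events, start = [], 0
        for i in range(1, len(labels)):
            if labels[i] != labels[i - 1]:                 # a label change closes an event
                events.append(("saccade" if labels[i - 1] else "fixation", start, i - 1))
                start = i
        events.append(("saccade" if labels[-1] else "fixation", start, len(labels) - 1))
        return events

def compare(detectors: Sequence[EventDetector], t, x, y):
    """Run every registered detector on the same recording for side-by-side evaluation."""
    return {d.name: d.detect(t, x, y) for d in detectors}
```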

11 pages, 2226 KiB  
Article
Pupil Response in Visual Tracking Tasks: The Impacts of Task Load, Familiarity, and Gaze Position
by Yun Wu, Zhongshi Zhang, Yao Zhang, Bin Zheng and Farzad Aghazadeh
Sensors 2024, 24(8), 2545; https://doi.org/10.3390/s24082545 - 16 Apr 2024
Cited by 1 | Viewed by 881
Abstract
Pupil size is a significant biosignal for human behavior monitoring and can reveal much underlying information. This study explored the effects of task load, task familiarity, and gaze position on pupil response during learning a visual tracking task. We hypothesized that pupil size would increase with task load, up to a certain level before decreasing, decrease with task familiarity, and increase more when focusing on areas preceding the target than other areas. Fifteen participants were recruited for an arrow tracking learning task with incremental task load. Pupil size data were collected using a Tobii Pro Nano eye tracker. A 2 × 3 × 5 three-way factorial repeated measures ANOVA was conducted using R (version 4.2.1) to evaluate the main and interactive effects of key variables on adjusted pupil size. The association between individuals’ cognitive load, assessed by NASA-TLX, and pupil size was further analyzed using a linear mixed-effect model. We found that task repetition resulted in a reduction in pupil size; however, this effect was found to diminish as the task load increased. The main effect of task load approached statistical significance, but different trends were observed in trial 1 and trial 2. No significant difference in pupil size was detected among the three gaze positions. The relationship between pupil size and cognitive load overall followed an inverted U curve. Our study showed how pupil size changes as a function of task load, task familiarity, and gaze scanning. This finding provides sensory evidence that could improve educational outcomes. Full article
(This article belongs to the Special Issue Vision and Sensor-Based Sensing in Human Action Recognition)
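To make the linear mixed-effects step concrete, here is a minimal sketch of a random-intercept model relating pupil size to NASA-TLX score per participant, using statsmodels. The data frame is synthetic and the column names are illustrative, not the study's dataset or full model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_subj, n_obs = 15, 30
df = pd.DataFrame({
    "subject":  np.repeat(np.arange(n_subj), n_obs),
    "nasa_tlx": rng.uniform(20, 80, n_subj * n_obs),
})
subj_offset = rng.normal(0, 0.15, n_subj)[df["subject"]]         # per-participant intercept
df["pupil_mm"] = 3.0 + 0.006 * df["nasa_tlx"] + subj_offset + rng.normal(0, 0.1, len(df))

# Fixed effect of workload, random intercept per participant
model = smf.mixedlm("pupil_mm ~ nasa_tlx", df, groups=df["subject"]).fit()
print(model.summary())
```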

23 pages, 3571 KiB  
Article
Studying the Role of Visuospatial Attention in the Multi-Attribute Task Battery II
by Daniel Gugerell, Benedikt Gollan, Moritz Stolte and Ulrich Ansorge
Appl. Sci. 2024, 14(8), 3158; https://doi.org/10.3390/app14083158 - 9 Apr 2024
Viewed by 778
Abstract
Task batteries mimicking user tasks are of high heuristic value. Supposedly, they measure individual human aptitude regarding the task in question. However, less is often known about the underlying mechanisms or functions that account for performance in such complex batteries. This is also true of the Multi-Attribute Task Battery (MATB-II), a computer display task that aims to measure human control operations on a flight console. Using the MATB-II and a visual-search measure of spatial attention, we tested whether capture of spatial attention in a bottom-up or top-down way predicted performance in the MATB-II. This is important to understand for questions such as how to implement warning signals on visual displays in human–computer interaction, and what to practice when training to operate such displays. To measure visuospatial attention, we used classical task-performance measures (i.e., reaction times and accuracy) as well as novel unobtrusive real-time pupillometry, the latter because pupil size covaries with task demands. A large number of analyses showed that: (1) top-down attention measured before and after the MATB-II was positively correlated; (2) test-retest reliability was also found for bottom-up attention, though to a smaller degree, and, as expected, the two spatial attention measures were negatively correlated with one another; however, (3) neither of the visuospatial attention measures was significantly correlated with overall MATB-II performance, nor (4) with any of the MATB-II subtask performance measures, even when the subtask required visuospatial attention (as in the system monitoring task of the MATB-II); (5) pupillometry predicted neither MATB-II performance nor performance in any of the MATB-II’s subtasks; yet, (6) pupil size discriminated between different stages of subtask performance in system monitoring. This last finding indicates that temporal segregation of pupil size measures is necessary for their correct interpretation, and that caution is advised regarding average pupil-size measures of task demands across tasks and time points within tasks. Finally, we observed surprising effects of workload (or cognitive load) manipulation on MATB-II performance itself, namely better performance under high- rather than low-workload conditions. These findings imply that the MATB-II itself poses a number of questions about its underlying rationale, besides allowing occasional usage in more applied research. Full article
(This article belongs to the Special Issue Eye-Tracking Technologies: Theory, Methods and Applications)

11 pages, 1720 KiB  
Article
Retinal Microcirculation Measurements in Response to Endurance Exercises Analysed by Adaptive Optics Retinal Camera
by Maria Anna Żmijewska, Zbigniew M. Wawrzyniak, Maciej Janiszewski and Anna Zaleska-Żmijewska
Diagnostics 2024, 14(7), 710; https://doi.org/10.3390/diagnostics14070710 - 28 Mar 2024
Viewed by 858
Abstract
This study aimed to precisely investigate the effects of intensive physical exercise on retinal microvascular regulation in healthy volunteers using adaptive optics retinal camera (AO) measurements. We included healthy volunteers (11 men and 14 women) aged 20.6 ± 0.9 years. Heart rate (HR) and systolic and diastolic blood pressures (SBP, DBP) were recorded before and after a submaximal physical exertion of continuously riding a training ergometer. Superior temporal retinal artery measurements were captured using the rtx1™ AO camera (Imagine Eyes, Orsay, France) without pupil dilation. We compared measures of vessel diameter (VD), lumen diameter (LD), the two walls (Wall 1, Wall 2), wall-to-lumen ratio (WLR), and wall cross-sectional area (WCSA) before and immediately after the cessation of exercise. Cardiovascular parameters: after exercise, SBP, DBP, and HR changed significantly from 130.2 ± 13.2 to 159.7 ± 15.6 mm Hg, from 81.2 ± 6.3 to 77.1 ± 8.2 mm Hg, and from 80.8 ± 16.1 to 175.0 ± 6.2 bpm, respectively (p < 0.002). Retinal microcirculation analysis showed no significant decrease in LD or Wall 1 after exercise (from 96.0 ± 6.8 to 94.9 ± 6.7, p = 0.258, and from 11.0 ± 1.5 to 10.4 ± 1.5, p = 0.107, respectively), and a significant reduction in VD (from 118.5 ± 8.3 to 115.9 ± 8.3, p = 0.047), Wall 2 (from 11.5 ± 1.0 to 10.7 ± 1.3, p = 0.017), WLR (from 0.234 ± 0.02 to 0.222 ± 0.010, p = 0.046), and WCSA (from 3802.8 ± 577.6 to 3512.3 ± 535.3, p = 0.016). AO is a promising technique for investigating the effects of exercise on microcirculation, allowing changes to be tracked throughout the observation. Intensive dynamic physical exertion increases blood pressure and heart rate and causes vasoconstriction of small retinal arterioles via the autoregulation mechanism. Full article
(This article belongs to the Section Biomedical Optics)
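The two derived indices follow directly from the measured vessel and lumen diameters under their usual definitions (wall-to-lumen ratio and annular wall cross-sectional area). The sketch below reproduces the pre-exercise magnitudes from the abstract; small differences from the reported means arise because the paper averages per subject, and the formulas are the standard definitions rather than quoted from the paper's code.

```python
import math

def wall_to_lumen_ratio(vd, ld):
    """vd: outer vessel diameter, ld: lumen diameter (same units)."""
    return (vd - ld) / ld

def wall_cross_sectional_area(vd, ld):
    """Annular area between the outer vessel boundary and the lumen."""
    return math.pi / 4.0 * (vd**2 - ld**2)

vd, ld = 118.5, 96.0                                   # pre-exercise means from the abstract
print(round(wall_to_lumen_ratio(vd, ld), 3))           # 0.234, matching the reported WLR
print(round(wall_cross_sectional_area(vd, ld), 1))     # about 3790.5, close to the reported WCSA
```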

20 pages, 4100 KiB  
Protocol
Automated Analysis Pipeline for Extracting Saccade, Pupil, and Blink Parameters Using Video-Based Eye Tracking
by Brian C. Coe, Jeff Huang, Donald C. Brien, Brian J. White, Rachel Yep and Douglas P. Munoz
Vision 2024, 8(1), 14; https://doi.org/10.3390/vision8010014 - 18 Mar 2024
Cited by 4 | Viewed by 1745
Abstract
The tremendous increase in the use of video-based eye tracking has made it possible to collect eye tracking data from thousands of participants. The traditional procedures for the manual detection and classification of saccades and for trial categorization (e.g., correct vs. incorrect) are not viable for the large datasets being collected. Additionally, video-based eye trackers allow for the analysis of pupil responses and blink behaviors. Here, we present a detailed description of our pipeline for collecting, storing, and cleaning data, as well as for organizing participant codes; these steps are fairly lab-specific but are nonetheless important precursors to establishing standardized pipelines. More importantly, we also describe the automated detection and classification of saccades, blinks, “blincades” (blinks occurring during saccades), and boomerang saccades (two nearly simultaneous saccades in opposite directions that speed-based algorithms fail to split). This part of the pipeline is almost entirely task-agnostic and can be used on a wide variety of data. We additionally describe novel findings regarding post-saccadic oscillations and provide a method to achieve more accurate estimates of saccade end points. Lastly, we describe the automated behavior classification for the interleaved pro/anti-saccade task (IPAST), a task that probes voluntary and inhibitory control. The pipeline was evaluated using data collected from 592 human participants between 5 and 93 years of age, making it robust enough to handle large clinical patient datasets. In summary, this pipeline has been optimized to consistently handle large datasets obtained from diverse study cohorts (i.e., developmental, aging, clinical) and collected across multiple laboratory sites. Full article
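As a pointer to the kind of speed-based classification the pipeline automates (and whose failure cases, such as boomerang saccades, it handles specially), here is a minimal velocity-threshold saccade detector. The 30°/s threshold and 10 ms minimum duration are common defaults assumed for illustration, not the authors' parameters.

```python
import numpy as np

def detect_saccades(t, x_deg, y_deg, vel_thresh=30.0, min_dur=0.01):
    """t in seconds, gaze in degrees; returns a list of (onset_s, offset_s) saccades."""
    speed = np.hypot(np.gradient(x_deg, t), np.gradient(y_deg, t))   # angular speed, deg/s
    fast = speed > vel_thresh
    saccades, start = [], None
    for i, is_fast in enumerate(fast):
        if is_fast and start is None:
            start = i                                  # saccade onset
        elif not is_fast and start is not None:
            if t[i - 1] - t[start] >= min_dur:         # discard very short velocity bursts
                saccades.append((t[start], t[i - 1]))
            start = None
    if start is not None and t[-1] - t[start] >= min_dur:
        saccades.append((t[start], t[-1]))
    return saccades
```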

17 pages, 3383 KiB  
Article
Measuring Efficiency and Accuracy in Locating Symbols on Mobile Maps Using Eye Tracking
by Wojciech Rymarkiewicz, Paweł Cybulski and Tymoteusz Horbiński
ISPRS Int. J. Geo-Inf. 2024, 13(2), 42; https://doi.org/10.3390/ijgi13020042 - 30 Jan 2024
Viewed by 1773
Abstract
This study investigated the impact of smartphone usage frequency on the effectiveness and accuracy of symbol location in a variety of spatial contexts on mobile maps using eye-tracking technology, with Mapy.cz as the example application. Scanning speed and symbol detection were also considered. The use of mobile applications for navigation is discussed, emphasizing their popularity and convenience. The importance of eye tracking as a valuable tool for testing the usability of cartographic products is highlighted, as it enables the assessment of users’ visual strategies and their ability to memorize information. The frequency of smartphone use was shown to be an important factor in users’ ability to locate symbols in different spatial contexts. Everyday smartphone users showed higher accuracy and efficiency in image processing, suggesting a potential link between habitual smartphone use and increased efficiency in mapping tasks. Participants who were dissatisfied with the legibility of a map looked longer at the symbols, suggesting that they put extra cognitive effort into decoding them. Gender differences in pupil size were also observed: women consistently showed a larger pupil diameter, potentially indicating greater cognitive load. Full article

0 pages, 1767 KiB  
Communication
Cognitive Vergence Recorded with a Webcam-Based Eye-Tracker during an Oddball Task in an Elderly Population
by August Romeo, Oleksii Leonovych, Maria Solé Puig and Hans Supèr
Sensors 2024, 24(3), 888; https://doi.org/10.3390/s24030888 - 30 Jan 2024
Viewed by 1016
Abstract
(1) Background: Our previous research provides evidence that vergence eye movements may significantly influence cognitive processing and could serve as a reliable measure of cognitive issues. The rise of consumer-grade eye tracking technology, which uses sophisticated imaging techniques in the visible light spectrum to determine gaze position, is noteworthy. In our study, we explored the feasibility of using webcam-based eye tracking to monitor the vergence eye movements of patients with Mild Cognitive Impairment (MCI) during a visual oddball paradigm. (2) Methods: We simultaneously recorded eye positions using a remote infrared-based pupil eye tracker. (3) Results: Both tracking methods effectively captured vergence eye movements and demonstrated robust cognitive vergence responses, where participants exhibited larger vergence eye movement amplitudes in response to targets versus distractors. (4) Conclusions: In summary, the use of a consumer-grade webcam to record cognitive vergence shows potential. This method could lay the groundwork for future research aimed at creating an affordable screening tool for mental health care. Full article
(This article belongs to the Section Biomedical Sensors)
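Vergence itself is a simple binocular quantity: the difference between the two eyes' horizontal gaze angles, so that converging eyes increase the value. The sketch below shows just that computation; how a webcam-based or infrared tracker reports per-eye azimuths, and its sign convention, is tracker-specific and assumed here.

```python
import numpy as np

def vergence_deg(left_azimuth_deg, right_azimuth_deg):
    """Per-sample vergence angle; larger values = greater convergence under the
    sign convention assumed here."""
    return np.asarray(left_azimuth_deg) - np.asarray(right_azimuth_deg)

# Toy example: the eyes converge slightly more when a target (vs. a distractor) appears.
left  = np.array([ 1.0,  1.0,  1.4,  1.6,  1.1])
right = np.array([-1.0, -1.0, -1.3, -1.5, -1.0])
print(vergence_deg(left, right))   # [2.  2.  2.7 3.1 2.1]
```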

20 pages, 3524 KiB  
Article
Proximal Policy Optimization-Based Reinforcement Learning and Hybrid Approaches to Explore the Cross Array Task Optimal Solution
by Samuel Corecco, Giorgia Adorni and Luca Maria Gambardella
Mach. Learn. Knowl. Extr. 2023, 5(4), 1660-1679; https://doi.org/10.3390/make5040082 - 20 Nov 2023
Viewed by 2873
Abstract
In an era characterised by rapid technological advancement, the application of algorithmic approaches to address complex problems has become crucial across various disciplines. Within the realm of education, there is growing recognition of the pivotal role played by computational thinking (CT). This skill set has emerged as indispensable in our ever-evolving digital landscape, accompanied by an equal need for effective methods to assess and measure these skills. This research focuses on the Cross Array Task (CAT), an educational activity designed within the Swiss educational system to assess students’ algorithmic skills. Its primary objective is to evaluate pupils’ ability to deconstruct complex problems into manageable steps and systematically formulate sequential strategies. The CAT has proven its effectiveness as an educational tool for tracking and monitoring the development of CT skills throughout compulsory education. Additionally, the task presents an enthralling avenue for algorithmic research, owing to its inherent complexity and the need to scrutinise the intricate interplay between different strategies and the structural aspects of the activity. Deeply rooted in logical reasoning and intricate problem solving, it often poses a substantial challenge for human solvers striving for optimal solutions. Consequently, exploring computational power to unearth optimal solutions or uncover less intuitive strategies is a captivating and promising endeavour. This paper explores two distinct algorithmic approaches to the CAT problem. The first approach combines clustering, random search, and move selection to find optimal solutions. The second approach employs reinforcement learning techniques, focusing on the Proximal Policy Optimization (PPO) model. The findings of this research not only hold the potential to deepen our understanding of how machines can effectively tackle complex challenges like the CAT problem but also have broad implications, particularly in educational contexts, where these approaches can be seamlessly integrated into existing tools as a tutoring mechanism, offering assistance to students encountering difficulties. This can ultimately enhance students’ CT and problem-solving abilities, leading to an enriched educational experience. Full article
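To give a feel for the reinforcement-learning route, the sketch below trains PPO (via stable-baselines3) on a toy stand-in environment: colour every cell of a small array while penalising redundant moves. The CrossArrayEnv class, its state/action encoding, and its reward are hypothetical simplifications; the paper's actual task encoding, reward design, and hybrid clustering/random-search approach are not reproduced here.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

class CrossArrayEnv(gym.Env):
    """Toy colouring task: mark all cells using as few moves as possible."""
    def __init__(self, n_cells: int = 16):
        super().__init__()
        self.n_cells = n_cells
        self.action_space = spaces.Discrete(n_cells)           # colour one cell per step
        self.observation_space = spaces.MultiBinary(n_cells)   # 1 = cell already coloured

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.state = np.zeros(self.n_cells, dtype=np.int8)
        return self.state.copy(), {}

    def step(self, action):
        reward = 1.0 if self.state[action] == 0 else -0.1      # penalise redundant moves
        self.state[action] = 1
        terminated = bool(self.state.all())
        return self.state.copy(), reward, terminated, False, {}

model = PPO("MlpPolicy", CrossArrayEnv(), verbose=0)
model.learn(total_timesteps=10_000)    # the learned policy colours each cell exactly once
```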
