Abstract
Unilateral Spatial Neglect is a cognitive impairment commonly observed in patients after right hemispheric lesions. A patient with this condition shows a lack of attention or response to visual stimuli presented in the left side of space.
In order to assess visual neglect, several "paper and pencil" tests are traditionally used, such as the Albert's Test or the Bells Test. Computer-supported tests have also been proposed, such as the Visual Spatial Search Task (VISSTA) or the Starry Night Test (SNT). In this kind of test, the patient is asked to detect all occurrences of a target among distractors ("cancellation task"). However, it has been noticed that these tests are not able to identify subtle but relevant USN, especially in the chronic stage. In addition, patients can develop compensatory attentional strategies in order to pass a test in which they have unlimited time. It is therefore important to measure reaction time and to adapt tests to increase diagnostic accuracy.
This abstract presents READAPT, an application supporting the Bells Test, with the aim of overcoming the limits of existing static paper-and-pencil diagnostic tools and facilitating the recording and analysis of the patient's visual scanning.
1 Introduction
Unilateral spatial neglect (USN) is a syndrome commonly observed after right brain damage [1]. Spatial neglect has been defined by Heilman [2] as «a failure to report, respond, or orient to contralateral stimuli that is not caused by an elemental sensorimotor deficit». Patients with USN do not orient or respond to visual stimuli on their left side [3, 4]. Although USN is often reported alongside elementary sensory or motor neurological disorders, most researchers emphasize the role of impaired mechanisms of spatial attention [5] and non-spatially lateralized deficits of attention [6]. USN is a complex and heterogeneous syndrome, which can affect personal (e.g. patients may omit to shave or make up the left side of their face), peripersonal (e.g. patients may not eat the left part of their dish) and/or extrapersonal space (e.g. patients may bump their wheelchair into obstacles on the left). This syndrome is one of the leading causes of handicap and long-term disability [7].
In order to assess visual neglect, several "paper and pencil" tests have been proposed, such as the Albert's Test [8] and the Bells Test [9]. Computer-supported tests such as VISSTA [10] or the SNT [11] are also available. In these tests, the patient is asked to detect all occurrences of a target among distractors. In the Bells Test, which is included in the French USN battery [12], a 21.5 × 28 cm sheet is presented to the patient; three columns lie on the left side of the sheet, one in the middle and three on the right. The image contains different objects, including a total of 35 bells (the target) distributed equally across the seven columns. The examiner notes, by successive numbering, the order in which the bells are circled. At the end of the task, the examiner can assess the spatial distribution of the omitted targets and evaluate the severity of the visual neglect. Indeed, the completion time, the number of omissions, and the analysis of the scanning strategy allow a quantitative and qualitative assessment of unilateral spatial neglect. In addition, the test has a relatively weak learning effect.
However, it has been noticed that these tests are not able to identify subtle but relevant USN, especially in the chronic stage [13]. In addition, Pedroli [14] states that patients can develop compensatory attentional strategies in order to pass a test in which they have unlimited time to identify static targets. The possibility of changing the stimulus, the background and the presentation time is useful for increasing the complexity and sensitivity of the test, even when only mild USN signs are present, and allows for a better assessment of the deficit in spatial attention. On the other hand, one of the difficulties of detecting USN is that the patient's eyes are unimpaired: they can orient towards signals and the pupil can also react to them.
In this study, we have created a computerized adaptation of the Bells Test called READAPT, which makes it possible to manipulate different factors that could affect USN signs and to perform eye-tracking measurements. READAPT might therefore overcome the limits of existing diagnostic tools and facilitate the recording and analysis of the patient's visual scanning.
The remainder of this paper is organized as follows. Section 2 discusses related work. Section 3 details the READAPT application and how it implements the Bells Test. Section 4 presents the eye-tracking functionality. The experimental protocol and results are presented in Sect. 5. Finally, Sect. 6 provides conclusions and directions for future work.
2 Related Work
In clinical settings, numerous paper and pencil tests are used to assess USN. Neglect signs can occur in bisection tasks (rightward deviation), copy tasks (omission of the left part of drawings) or visual search tasks (omission of targets on the left side). More than 60 standardized and non-standardized tools have been identified to evaluate USN [15]. The variability of these assessments affects the reported rate of occurrence of USN. Visual cancellation tests are often used, but such tasks can fail to detect mild USN [16]. Sensitivity results differ according to the batteries or tests used to assess USN [17]. Moreover, patterns of visual exploration are often lacking in such tests.
This has led to implementations of cancellation tests with touchscreen interfaces [18] or eye-tracking devices [19, 20]. The importance of using dynamic tasks in the assessment of USN has been highlighted previously by Peskine et al. [21] and Deouell et al. [11], who showed that a computerized dynamic Starry Night Test (SNT) was more sensitive than paper and pencil tests. Computerized visual search tasks like the Visual Spatial Search Task (VISSTA) have also been developed, allowing manipulation of the number of distractors, the colour of the target, the exposure time of the stimuli and the repetition of presentations over different periods of time [10].
Developing a digital diagnostic tool makes it possible to:
-
Ensure reproducibility under controlled experimental conditions (one of the limitations of current diagnostic tools is that conditions vary depending on the therapist and facility equipment).
-
Simplify data entry and saving, and therefore monitor patient outcomes and compare these results with those of other clinical cases.
-
Capture additional information, impossible to collect without a digital tool, during the execution of the task, such as the movements of the patient's eyes.
-
Couple this diagnosis with rehabilitation exercises adapted to the results.
3 READAPT Web Application
Our application, READAPT, is a web application for USN assessment in the peripersonal space. It extends the traditional Bells Test by providing customizable scenes. The Bells Test consists in finding specific graphical objects called targets, for instance bells, among many other objects called distractors (apples, fishes, etc.). Graphical objects differ from each other in shape and position, but are similar in other aspects, such as size and color (Fig. 1). READAPT has been designed to provide a wider range of graphical aspects.
3.1 Design Study
In the following, "marks" denote graphical objects and "visual channels" are the variables controlling the appearance of these marks [22]. Since the rise of graphic semiology [23], and more recently data visualization [24, 25], a scene is considered to be composed of three kinds of marks: points, lines and areas. The visual channels controlling the appearance of these marks are position, shape, size, tilt, color (hue, luminance, saturation), and motion. In the early development of semiology, in particular when cartographic maps were made manually, texture was also considered a visual channel, but it is no longer employed since the advent of computers. Another channel, curvature, is also often mentioned in the literature. We do not include it in our list because it interferes strongly with shape and is thus not relevant in our context.
In semiology and visualization, marks and channels are used to represent data: marks denote data elements and channels represent quantitative or qualitative attributes associated with these elements. This use is not directly relevant to our work here. However, developments in these fields have pointed out interesting characteristics of visual channels (see [22, 24]). In particular, the perception of visual channels differs in terms of effectiveness and expressiveness. Our assumption here is that, while variations on the different channels are not perceived equally, it is worth exploring further channels instead of just varying position and shape. That is why the application we propose includes many parameters for customizing the different aspects of the marks, as described in the next section. The purpose is to provide a more flexible tool, in which the practitioner can adapt the level of difficulty to her patient by using various visual channels. For instance, instead of shape, or in addition to it, targets can be rendered with a specific color to make them more salient in a rehabilitation context.
3.2 Tool Description
Performing a Test.
READAPT is a Web application for testing USN. The patient performing a test is asked to find a particular object, called the target, in successive scenes. Each attempt is called a trial. Figure 1 shows such a scene, composed of 49 items: one target and 48 distractors. In catch-trials (trials where the target is not present), the scene is composed of 49 distractors. In this example, the patient has to click on the red bell to succeed in the trial. When she clicks on the screen, even if she does not succeed, a new trial is proposed. To launch the new trial, the patient is asked to click on a cross positioned at the center of the screen. The scene is divided into the seven columns of the original test; the column in which the target appears, as well as the order of the trials with and without a target, are randomized in order to limit the learning effect. The result of each trial is recorded in a database.
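As an illustration of this trial logic, the following sketch (variable and function names are ours, not READAPT's actual identifiers) builds a randomized trial sequence in which the target appears equally often in each of the seven columns and catch-trials are mixed in:

```javascript
// Illustrative sketch of the randomized trial sequence (names are assumed,
// not READAPT's actual identifiers): each of the 7 columns receives the same
// number of target trials, catch-trials are mixed in, and the sequence is
// shuffled to limit learning effects.
function buildTrialSequence(targetsPerColumn, catchTrials) {
  const trials = [];
  for (let column = 1; column <= 7; column++) {
    for (let i = 0; i < targetsPerColumn; i++) {
      trials.push({ targetColumn: column, hasTarget: true });
    }
  }
  for (let i = 0; i < catchTrials; i++) {
    trials.push({ targetColumn: null, hasTarget: false });
  }
  // Fisher-Yates shuffle of the whole sequence
  for (let i = trials.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [trials[i], trials[j]] = [trials[j], trials[i]];
  }
  return trials;
}
```

With `buildTrialSequence(10, 10)` this yields the 80-trial "Standard" block described in Sect. 5 (70 target trials, 10 catch-trials).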
Customizing a Test.
In line with the purpose defined above, before launching a test session the practitioner has to set up the parameters listed in Table 1.
Figure 2 shows the interface presented to the practitioner. When several values are available for a visual channel, a random value is computed for each mark when the trials are rendered. For instance, if "Tilt" is checked, a random angle is computed and applied to the mark. In the same way, when the practitioner selects a range for the hue, a random hue within this range is computed.
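This per-mark randomization can be sketched as follows (the parameter object is a hypothetical simplification of the interface in Fig. 2, not READAPT's actual API):

```javascript
// Sketch of the per-mark channel randomization (parameter names assumed):
// when a channel is disabled a fixed default is used; otherwise a value is
// drawn uniformly within the range chosen by the practitioner.
function randomChannelValue({ enabled = true, range, defaultValue = null }) {
  if (!enabled) return defaultValue;
  const [min, max] = range;
  return min + Math.random() * (max - min);
}

// Example: a random tilt in degrees, and a hue restricted to reds/oranges.
const tilt = randomChannelValue({ range: [0, 360] });
const hue  = randomChannelValue({ range: [0, 40] });
```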
Additional Functionalities.
When a practitioner accesses READAPT, she can select a patient or create a new one in the database. Each test session is associated with this patient, and the practitioner has access to an interface showing the patient's evolution over time. First, a table shows the parameters of each session. Furthermore, the patient's response time, the distance from the click to the target (if present), and the relevance of the selection are recorded. A stacked bar chart is also provided: each bar represents a session, and the portions of the bar represent the numbers of successes (blue), nearby clicks, failures (red) and timeouts (orange).
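The recorded metrics and the four outcome categories of the bar chart could be computed along these lines (the nearby-click radius and field names are assumptions for illustration, not READAPT's actual values):

```javascript
// Sketch of the per-trial scoring (thresholds and field names are assumed):
// response distance from click to target, plus the categorical outcome used
// in the stacked bar chart (success, nearby click, failure, timeout).
const NEARBY_RADIUS_PX = 50; // assumed tolerance radius for "nearby" clicks

function scoreTrial(trial) {
  if (trial.timedOut) return { outcome: "timeout", distance: null };
  const dx = trial.click.x - trial.target.x;
  const dy = trial.click.y - trial.target.y;
  const distance = Math.hypot(dx, dy);
  if (distance <= trial.targetRadius) return { outcome: "success", distance };
  if (distance <= NEARBY_RADIUS_PX)  return { outcome: "nearby", distance };
  return { outcome: "failure", distance };
}
```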
3.3 Technical Aspects
This application consists of a set of programs written in PHP and JavaScript. The PHP programs allow the different elements of the scenes composing a test session to be configured via forms. The context of the sessions (non-nominal identifiers of medical staff and patients, dates, etc.) is recorded in a database through PHP. The test results (identification of the presence or absence of a target by a left or right mouse click) are saved in tabular format.
JavaScript, which makes it possible to dynamically modify the structure of the web pages displayed by the browser and to record the user's activity, was used to:
-
create dual sliders for selecting the different color components (hue, saturation, brightness and opacity) when configuring the tests;
-
display the different graphic elements of the scenes in the browser;
-
record the mouse use and/or eye movements of patients.
Note that the d3.js JavaScript library is used to create the sliders and display the scene elements. This library creates graphical elements using SVG (Scalable Vector Graphics), which produces vector graphics (all browsers host an SVG interpreter).
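To make the rendering concrete, here is a self-contained sketch that emits equivalent SVG markup for a 7 × 7 scene as plain strings (READAPT itself builds these elements through d3.js in the browser DOM; the mark shape and sizes are placeholders):

```javascript
// Standalone sketch of the SVG output for a scene (in READAPT the elements
// are created through d3.js; circle marks and dimensions are placeholders).
function sceneToSvg(items, width = 700, height = 700) {
  const cells = items.map((item, i) => {
    const col = i % 7, row = Math.floor(i / 7);
    const cx = (col + 0.5) * (width / 7);   // centre of the cell, x
    const cy = (row + 0.5) * (height / 7);  // centre of the cell, y
    return `<circle cx="${cx}" cy="${cy}" r="10" fill="${item.color}" ` +
           `data-role="${item.isTarget ? "target" : "distractor"}"/>`;
  });
  return `<svg xmlns="http://www.w3.org/2000/svg" width="${width}" ` +
         `height="${height}">${cells.join("")}</svg>`;
}
```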
In addition, using the Node.js platform (a modular JavaScript runtime running on the server), we have also implemented JavaScript programs to record test settings and results, as well as to reconstruct the scenes as PNG images (again using SVG), on which the heat maps produced by the eye tracker are subsequently superimposed.
4 Eye-Tracking
READAPT is coupled with an eye-tracking recorder. We use the Eye Tribe eye tracker [26] in our experimental set-up. The patient's eye movements are collected and then processed to build heat maps and a gaze-trajectory animation (Fig. 3). These data are superimposed on the trial's image in order to analyse the patient's visual search strategy. It has been shown that subjects presenting an attentional deficit demonstrate disorganized and chaotic scanning [9]. Therefore, following the gaze trajectory step by step helps examiners detect a disturbed visual search.
The eye-tracker is calibrated for each user. During calibration, the user is asked to fixate several points on the screen. The device then builds a mapping between the point on the screen that the user looked at (the gaze point) and the eye-tracker output.
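Conceptually, a calibration of this kind estimates a mapping from raw tracker output to screen coordinates. The Eye Tribe SDK performs this internally; as a purely illustrative model, a per-axis linear fit by least squares looks like this:

```javascript
// Conceptual model of calibration (the Eye Tribe SDK does this internally):
// from pairs of raw tracker values and known screen positions on one axis,
// estimate a gain and offset by ordinary least squares.
function fitAxis(raw, screen) {
  const n = raw.length;
  const mr = raw.reduce((s, v) => s + v, 0) / n;     // mean of raw values
  const ms = screen.reduce((s, v) => s + v, 0) / n;  // mean of screen values
  let cov = 0, variance = 0;
  for (let i = 0; i < n; i++) {
    cov += (raw[i] - mr) * (screen[i] - ms);
    variance += (raw[i] - mr) ** 2;
  }
  const gain = cov / variance;
  return { gain, offset: ms - gain * mr };
}
```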
To use the eye-tracker within the web browser, we used EyeTribe-WebSocket [27], a JavaScript-based WebSocket application that wraps the EyeTribe SDK and transmits its data. The d3.js library is used to display the graphic elements. An application has been developed to access the eye-tracker data and information from READAPT in order to superimpose the image generated for a trial and the gaze measures [28].
The heat map uses different shades of colour, here from yellow to red, to indicate the number of gaze positions in each zone (gaze positions are grouped into hexagons). Darker colours close to red indicate zones of high activity. An animation allows the trajectory of the gaze to be followed. Once the animation is over, the positions are connected by lines representing the gaze trajectory [29].
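The hexagonal grouping behind the heat map can be sketched as follows (a standalone version assuming a pointy-top hexagonal grid; the cell size is arbitrary): each gaze sample is assigned to a hexagonal cell via axial coordinates, and the per-cell counts drive the yellow-to-red colour scale.

```javascript
// Sketch of hexagonal binning for the heat map (cell size is arbitrary):
// each (x, y) gaze sample is mapped to a pointy-top hexagon cell.
function hexCell(x, y, size) {
  // fractional axial coordinates
  const q = (Math.sqrt(3) / 3 * x - y / 3) / size;
  const r = (2 / 3 * y) / size;
  // cube rounding to the nearest hexagon centre
  let rx = Math.round(q), rz = Math.round(r);
  const ry = Math.round(-q - r);
  const dx = Math.abs(rx - q), dy = Math.abs(ry - (-q - r)), dz = Math.abs(rz - r);
  if (dx > dy && dx > dz) rx = -ry - rz;
  else if (dy <= dz) rz = -rx - ry;
  return `${rx},${rz}`;
}

function binGaze(points, size) {
  const counts = new Map();
  for (const [x, y] of points) {
    const key = hexCell(x, y, size);
    counts.set(key, (counts.get(key) || 0) + 1);
  }
  return counts; // cell key -> number of gaze samples in that hexagon
}
```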
5 Experiments
5.1 Participants
Four adults (1 man and 3 women) aged between 34 and 75 years (m = 55.5 ± 17.02), suffering from spatial neglect after a right hemispheric stroke, were recruited for the study. Patients with a neurological or degenerative disease prior to the stroke, with a major comprehension deficit, and/or refusing to sign the consent form were excluded. Patients came from the medicine and rehabilitation department of the Saint-Maurice hospitals (Paris, France; Dr R. Péquignot) and the medicine and rehabilitation department of the Pitié-Salpêtrière hospital (Paris, France; Dr P. Pradat-Diehl).
On the same basis, 8 control subjects (5 men/4 women) aged between 30 and 71 (m = 49.37 ± 12.97) were included. The two groups do not differ in age (t-test; t = −0.69, p = .500).
5.2 Task
The Readapt software is based on the Bells Test, evaluating spatial neglect in a peripersonal reference frame. This computerized version displays a visual scene made of 49 items: 1 target and 48 distractors. During the catch-trials (trials without a target, which test the attentional involvement of the subject), the display is made of 49 distractors. On each trial, the subject has to indicate the presence or absence of the target using, respectively, the left or right button of the mouse (thumb or index finger of the right hand). The visual scene is divided into 7 columns and 7 rows. To reduce the learning effect, the target is presented randomly in every column and the trials/catch-trials occur in random order. The time in each trial is limited to 8 s, while the time between trials depends on the action of the subject (a click with the left button on the gray cross). Before each trial, a gray fixation cross is presented with a random letter in the middle of it. The subject has to read it out loud, ensuring the experimenter that visual attention is fixed on the letter. This letter has an angular size of 0.5° so that it is read in foveal vision [30].
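In signal-detection terms, the responses collected by this set-up can be coded as follows (a simple sketch; the field names are illustrative):

```javascript
// Sketch of response coding for the task (signal-detection terminology):
// left click = "target present", right click = "target absent",
// no click within 8 s = timeout.
function codeResponse(targetPresent, button) {
  if (button === null) return "timeout"; // no response within 8 s
  const saidPresent = button === "left";
  if (targetPresent) return saidPresent ? "hit" : "omission";
  return saidPresent ? "false alarm" : "correct rejection";
}
```

Omissions and false alarms coded this way are the variables analysed in Sect. 5.4.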
From this basic architecture, three versions were derived:
1. Standard: one block of 80 trials, including 10 catch-trials, during which the target appears 10 times in each column. No variation from the basic set-up is made here, allowing a strict comparison of sensitivity between the original test and ours.
2. Items: five blocks of 14 trials and 2 catch-trials, corresponding to 5 conditions varying the distribution of the distractors along the horizontal axis. This distribution follows a gradient from left to right (conditions 1 and 2) or from right to left (conditions 4 and 5); condition 3 is the control condition with no variation.
3. Interference: five blocks of 14 trials and 2 catch-trials, corresponding to 5 conditions varying the percentage of degradation of the scene along the horizontal axis. This degradation follows a descending gradient from left to right (conditions 1 and 2) or from right to left (conditions 4 and 5); condition 3 is the control condition with no degradation.
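As an illustration of the "Items" manipulation, the sketch below spreads a fixed number of distractors over the seven columns according to per-column weights. The actual gradient weights are not specified here; a linear gradient is assumed purely for illustration, and largest-remainder rounding keeps the counts summing exactly to the total.

```javascript
// Hypothetical sketch of the horizontal distractor gradient (the real
// per-column weights are assumptions): distractors are allocated to the
// 7 columns proportionally to weights, with largest-remainder rounding.
function distributeDistractors(total, weights) {
  const sum = weights.reduce((s, w) => s + w, 0);
  const exact = weights.map(w => total * w / sum);
  const counts = exact.map(Math.floor);
  const remainder = total - counts.reduce((s, c) => s + c, 0);
  // hand out the remaining items to the largest fractional parts
  exact
    .map((e, i) => ({ frac: e - counts[i], i }))
    .sort((a, b) => b.frac - a.frac)
    .slice(0, remainder)
    .forEach(({ i }) => counts[i]++);
  return counts;
}

// Example: 48 distractors, left-to-right linear gradient vs. flat control.
const gradient = distributeDistractors(48, [1, 2, 3, 4, 5, 6, 7]);
const control  = distributeDistractors(48, [1, 1, 1, 1, 1, 1, 1]);
```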
5.3 Procedure
All participants were tested in a quiet, isolated place, seated in front of the experimenter. Once the task had been explained and understood, and any questions answered, the subject was tested with both the original Bells Test and the three versions of our task.
To evaluate the validity of our task, the subject is tested with the original Bells Test before the Readapt version, allowing the results to be compared.
Following the set-up of [26], the subject is installed in front of the computer, in as straight a position as possible, at 60 cm from the screen, with the eyes aligned with the center of the screen. The eye-tracker is placed just under the screen, 30 to 45 cm from the participant.
The computerized version itself is split into two stages. A first training stage ensures comprehension of the task, validates the posture and comfort, and calibrates the device. With the same instructions as in the experimental stage, the training is composed of one block of 16 trials including 2 catch-trials. Once the training is completed, the subject can start the second stage.
This second stage is composed of the 3 variations ("Standard", "Items", and "Interference") in a randomized order. Each trial starts with a fixation cross at the center of the screen, in which a random letter appears that has to be named out loud. Then, the subject clicks the left button of the mouse to trigger the appearance of the scene. In this phase, the task is to indicate the presence or absence of a target (a black bell) using, respectively, the left or right button of the mouse. This controller is placed under the right hand of the subject with a 90° counterclockwise rotation to avoid the stimulus-response compatibility effect, or "Simon effect" [31]. Depending on the subject's fatigability, breaks can be taken at any time during the appearance of the fixation cross, whose duration is determined by the action of the subject only.
5.4 Results
All analyses for the study were carried out with the RStudio software (open source) and Microsoft Excel 2016. One control subject was removed from the analysis because of abnormal performance in the computerized test, possibly resulting from a slight vision anomaly due to retinal surgery.
Original Bell Test.
The analysis of performance is based on the following quantitative variables: total duration of the task (seconds), number of omissions (total, and per column), and the position of the first target marked (starting column of the visual search). The visual search strategy was recorded as a qualitative variable. The average performances are presented below (Table 2).
-
Total duration. The data follow a normal distribution (Shapiro-Wilk; patients: W = 0.93, p = .63; controls: W = 0.95, p = .72) and homoscedasticity is not refuted (Fisher; F = 1.91, p = .43). The t-test confirms a significant difference between the 2 groups (Student; t = −5.0115, p < .001).
-
Omissions. The comparison of omissions in the left half of the scene (columns 1, 2, 3) between groups reveals no significant difference (Mann-Whitney; U = 20, p = .216). However, the total number of omissions (over the 7 columns) is significantly different between patients and controls (U = 31, p = .005): patients actually omit more targets than controls.
-
Starting point of the visual search. Investigated via the first bell crossed out, this variable does not reveal an inter-group difference (U = 17, p = .911).
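The Student t statistic reported for the total duration can be reproduced with a short function (test statistic only, with equal variances assumed as in the homoscedasticity check above; the p-values themselves were obtained in R):

```javascript
// Independent-samples Student t statistic (pooled variance, equal variances
// assumed). Only the statistic is computed here; p-values come from R.
function studentT(a, b) {
  const mean = xs => xs.reduce((s, v) => s + v, 0) / xs.length;
  const ssq = (xs, m) => xs.reduce((s, v) => s + (v - m) ** 2, 0);
  const ma = mean(a), mb = mean(b);
  const pooledVar = (ssq(a, ma) + ssq(b, mb)) / (a.length + b.length - 2);
  return (ma - mb) / Math.sqrt(pooledVar * (1 / a.length + 1 / b.length));
}
```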
Readapt.
The quantitative variables used for the performance analysis of the 3 versions of the computerized test are: reaction time, number and position (column number) of omissions, and false alarms. The reaction time used corresponds to the trials with a target during which it was correctly detected.
Standard Version.
Reaction time (RT) – The comparison of RTs within and between groups shows significant differences. The comparison was done using non-parametric tests because of the distribution of the data (Shapiro-Wilk).
Within-group: A significant difference is observed in the patient group with regard to the position of the target in the visual scene (Kruskal-Wallis; H(6) = 37.594, p < .001). Wilcoxon tests for paired samples show differences between columns 1 and 5 (p = .001), 1 and 6 (p < .001), and 1 and 7 (p < .001). In the control group (H(6) = 30.712, p < .001), the differences concern columns 1–3, 1–4, 1–5, 2–4, 2–5, and 4–7.
Between-groups (see Fig. 4): The Mann-Whitney test shows evidence of significant differences for columns 1 (p < .001), 2 (p < .001), 3 (p < .001), 4 (p < .001) and 5 (p = .001).
Omissions (see Fig. 5) – Mann-Whitney tests show significant between-groups differences for the number of omissions in columns 1 (U = 31.5, p = .008), 2 (U = 28.5, p = .028), 3 (U = 24, p = .049) and 4 (U = 24, p = .049).
False alarms – The false alarm rates for the patient and control groups are respectively 12.5% and 17.5%.
Items Version.
Reaction time – We used non-parametric tests because of the distribution of the data. Mann-Whitney tests revealed evidence of differences in reaction time per column and condition between the two groups (Table 3).
Omissions – Significant differences in the number of omissions per column emerged from the between-groups comparison for column 1 in conditions 1 (p = .049), 3 (p = .009) and 4 (p = .01); for column 2 in conditions 4 (p = .049) and 5 (p = .049); and for column 3 in condition 5 (p = .049) (cf. Annex 7).
False alarms – The false alarm rates for the patient and control groups are respectively 0% and 2.5% in condition 1, 0% and 5% in condition 2, 2.5% and 8.75% in condition 3, 2.5% and 0% in condition 4, and 5% and 1.25% in condition 5.
Interference Version.
Reaction time – We used non-parametric tests because of the distribution of the data. Mann-Whitney tests revealed evidence of differences in reaction time per column and condition between the two groups (Table 4).
Omissions – Significant differences in the number of omissions per column emerged from the between-groups comparison for column 1 in conditions 1 (p = .01), 2 (p = .001), 4 (p = .041) and 5 (p = .009); for column 2 in conditions 3 (p = .049) and 5 (p = .049); for column 4 in condition 1 (p = .049); and for column 7 in condition 1 (p = .041) (cf. Annex 8).
False alarms – The false alarm rates for the patient and control groups are respectively 0% and 2.5% in condition 1, 0% and 5% in condition 2, 2% and 1.25% in condition 3, 2.5% and 3.75% in condition 4, and 2.5% and 5% in condition 5.
6 Conclusion
Spatial neglect is the difficulty in detecting, responding or orienting to significant stimuli located on the side opposite a brain injury, which cannot be attributed to a sensory or motor deficit. Patients often have mild spatial neglect and capture only a portion of the signals from the left side of the image. The traditional technique for assessing USN is the Bells Test, which consists in finding specific graphical objects (bells), called targets, among many other objects called distractors. However, this kind of test fails to identify subtle impairments and compensatory attentional strategies. It is therefore important to measure reaction time accurately and to help examiners easily adapt tests to increase diagnostic accuracy.
Our application, READAPT, is a web application for USN assessment. It extends the traditional Bells Test by providing customizable scenes. The purpose is to provide a more flexible tool, in which the practitioner can adapt the level of difficulty to her patient by using various visual channels. For instance, instead of shape, or in addition to it, targets can be rendered with a specific color to make them more salient in a rehabilitation context. The application includes many parameters for customizing the different aspects of the marks: the type (distractor or target), number, shape, size and colour of objects, as well as the background colour and the distribution of distractors and targets. A scene can also be created without targets. A session, regrouping several trials and a timeout, can also be defined.
The performances of several patients and control subjects were used to demonstrate the feasibility of the system and its ability to distinguish between normal and neglect subjects, based on reaction times and target omissions.
These results represent a proof of concept for our application and encourage us to continue its development. We hope that the new data acquired and the follow-up of patients will make it possible to better understand USN. In particular, we believe that there are many degrees of impairment and we hope that appropriate responses are possible depending on the different cases.
In addition to being usable with patients in a clinical evaluation context, the application allows for the construction of several personalized tests and therefore could be used for experiments concerning the evaluation of the relevance of visual variables in the HCI domain.
References
Azouvi, P., et al.: Sensitivity of clinical and behavioural tests of spatial neglect after right hemisphere stroke. J. Neurol. Neurosurg. Psychiatry 73, 160–166 (2002)
Heilman, K.M.: Neglect and Related Disorders. Oxford University Press, New York (1985)
Heilman, K.M., Valenstein, E.: Mechanisms underlying hemispatial neglect. Ann. Neurol. 5, 166–170 (1979)
Heilman, K.M., Watson, R.T., Valenstein, E.: Neglect and related disorders. In: Heilman, K.M., Valenstein, E. (eds.) Clinical Neuropsychology, 2nd edn, pp. 279–336. Oxford University Press, New York (1985)
Urbanski, M., et al.: Unilateral spatial neglect: a dramatic but often neglected consequence of right brain damage. Revue Neurologique 162, 1–18 (2007). (in French)
Husain, M., Rorden, C.: Non-spatially lateralized mechanisms in hemispatial neglect. Nat. Rev. Neurosci. 4(1), 26–36 (2003)
Bartolomeo, P.: Attention Disorders After Right Brain Damage - Living in Halved Worlds. Springer, London (2004)
Albert, M.L.: A simple test of visual neglect. Neurology 23, 658–664 (1973)
Gauthier, L., Dehaut, F., Joanette, Y.: The bells test: a quantitative and qualitative test for visual neglect. Int. J. Clin. Neuropsychol. 11(2), 49–53 (1989)
Erez, A.B., Katz, N., Ring, H., Soroker, N.: Assessment of spatial neglect using computerised feature and conjunction visual search tasks. Neuropsychol. Rehabil. 19(5), 677–695 (2009). https://doi.org/10.1080/09602010802711160
Deouell, L.Y., Sacher, Y., Soroker, N.: Assessment of spatial attention after brain damage with a dynamic reaction time test. J. Int. Neuropsychol. Soc. 11(6), 697–707 (2005). https://doi.org/10.1017/S1355617705050824
Azouvi, P., Bartolomeo, P., Beis, J.-M., Perennou, D., Pradat-Diehl, P., Rousseaux, M.: A battery of tests for the quantitative assessment of unilateral neglect. Restor. Neurol. Neurosci. 24, 273–285 (2006)
Rengachary, J., d’Avossa, G., Sapir, A., Shulman, G.L., Corbetta, M.: Is the posner reaction time test more accurate than clinical tests in detecting left neglect in acute and chronic stroke? Arch. Phys. Med. Rehabil. 90, 2081–2088 (2009). https://doi.org/10.1016/j.apmr.2009.07.014
Pedroli, E., Serino, S., Cipresso, P., Pallavicini, F., Riva, G.: Assessment and rehabilitation of neglect using virtual reality: a systematic review. Front. Behav. Neurosci. 9 (2015). https://doi.org/10.3389/fnbeh.2015.00226
Menon, A., Korner-Bitensky, N.: Evaluating unilateral spatial neglect post stroke: working your way through the maze of assessment choices. Top. Stroke Rehabil. 11(3), 41–66 (2004). https://doi.org/10.1310/KQWL-3HQL-4KNM-5F4U
Bowen, A., McKenna, K., Tallis, R.C.: Reasons for variability in the reported rate of occurrence of unilateral spatial neglect after stroke. Stroke 30(6), 1196–1202 (1999)
Mizuno, K., Kato, K., Tsuji, T., Shindo, K., Kobayashi, Y., Liu, M.: Spatial and temporal dynamics of visual search tasks distinguish subtypes of unilateral spatial neglect: comparison of two cases with viewer-centered and stimulus-centered neglect. Neuropsychol. Rehabil. 26(4), 610–634 (2016). https://doi.org/10.1080/09602011.2015.1051547
Rabuffetti, M., et al.: Spatio-temporal features of visual exploration in unilaterally brain-damaged subjects with or without neglect: results from a touchscreen test. PloS One 7(2), e31511 (2012). https://doi.org/10.1371/journal.pone.0031511
Behrmann, M., Watt, S., Black, S.E., Barton, J.J.: Impaired visual search in patients with unilateral neglect: an oculographic analysis. Neuropsychologia 35(11), 1445–1458 (1997)
Müri, R.M., Cazzoli, D., Nyffeler, T., Pflugshaupt, T.: Visual exploration pattern in hemineglect. Psychol. Res. 73(2), 147–157 (2009). https://doi.org/10.1007/s00426-008-0204-0
Peskine, A., et al.: Virtual reality assessment for visuospatial neglect: importance of a dynamic task. J. Neurol. Neurosurg. Psychiatry 82(12), 1407–1409 (2011). https://doi.org/10.1136/jnnp.2010.217513
Munzner, T.: Visualization Analysis & Design. AK Peters Visualization Series. CRC Press, Boca Raton (2014)
Bertin, J.: Semiology of Graphics: Diagrams, Networks, Maps. University of Wisconsin Press, Madison (1983)
Ware, C.: Information Visualization: Perception for Design. Morgan Kaufmann, Burlington (2000)
Ward, M., Grinstein, G., Keim, D.: Interactive Data Visualization: Foundations, Techniques, and Applications. A K Peters Ltd., Natick (2010)
Ooms, K., Dupont, L., Lapon L., Popelka S.: Evaluation of the accuracy and precision of a low-cost eye tracking device. J. Eye Mov. Res. 8(1), 5, 1–24 (2015)
The Eye Tribe Web Socket. https://github.com/kzokm/eyetribe-websocket
Govin, C.: Development of the READAPT application – Eye-tracking & Web. Internship report. University of Montpellier (2018). (in French)
Beuret, M.: Analysis of visual characteristics using an eye-tracker. Internship report. University of Montpellier (2017). (in French)
Kristjánsson, A., Vuilleumier, P., Malhotra, P., Husain, M., Driver, J.: Priming of color and position during visual search in unilateral spatial neglect. J. Cogn. Neurosci. 17(6), 859–873 (2005)
Simon, J.R., Rudell, A.P.: Auditory S-R compatibility: the effect of an irrelevant cue on information processing. J. Appl. Psychol. 51, 300–304 (1967)
© 2019 Springer Nature Switzerland AG
Rossa, T. et al. (2019). Experimental Web Service and Eye-Tracking Setup for Unilateral Spatial Neglect Assessment. In: Duffy, V. (eds) Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management. Healthcare Applications. HCII 2019. Lecture Notes in Computer Science(), vol 11582. Springer, Cham. https://doi.org/10.1007/978-3-030-22219-2_11