
The Effect of Task on Visual Attention in Interactive Virtual Environments

Published: 06 September 2019

Abstract

Virtual environments for gaming and simulation provide dynamic and adaptive experiences, but, despite advances in multisensory interfaces, these are still primarily visual experiences. To support real-time dynamic adaptation, interactive virtual environments could implement techniques to predict and manipulate human visual attention. One promising way of developing such techniques is to base them on psychophysical observations, an approach that requires a sound understanding of visual attention allocation. Understanding how this allocation of visual attention changes depending on a user’s task offers clear benefits in developing these techniques and improving virtual environment design. With this aim, we investigated the effect of task on visual attention in interactive virtual environments. We recorded fixation data from participants completing freeview, search, and navigation tasks in three different virtual environments. We quantified visual attention differences between conditions by identifying the predictiveness of a low-level saliency model and its corresponding color, intensity, and orientation feature-conspicuity maps, as well as measuring fixation center bias, depth, duration, and saccade amplitude. Our results show that task does affect visual attention in virtual environments. Navigation relies more than search or freeview on intensity conspicuity to allocate visual attention. Navigation also produces fixations that are more central, longer, and deeper into the scenes. Further, our results suggest that it is difficult to distinguish between freeview and search tasks. These results provide important guidance for designing virtual environments for human interaction, as well as identifying future avenues of research for developing “attention-aware” virtual worlds.
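
The measures named in the abstract (saliency-model predictiveness, saccade amplitude) can be made concrete with a short sketch. The example below is illustrative only and is not the authors' implementation: it assumes Normalized Scanpath Saliency (NSS) as the predictiveness score and a simple linear pixel-to-degree conversion for saccade amplitude; neither choice, nor the function names, is specified in the abstract itself.

```python
# Minimal sketch (not the authors' code): "predictiveness" of a saliency map is
# scored here with Normalized Scanpath Saliency (NSS), i.e. the mean of the
# z-scored map sampled at fixated pixels. The choice of NSS and the linear
# pixel-to-degree conversion are illustrative assumptions.
import numpy as np

def nss(saliency_map: np.ndarray, fixations_xy: np.ndarray) -> float:
    """saliency_map: H x W array; fixations_xy: N x 2 array of (x, y) pixel coordinates."""
    z = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-8)
    xs = fixations_xy[:, 0].astype(int)
    ys = fixations_xy[:, 1].astype(int)
    return float(z[ys, xs].mean())  # average z-scored saliency at fixated locations

def saccade_amplitudes(fixations_xy: np.ndarray, deg_per_px: float) -> np.ndarray:
    """Approximate amplitude (in degrees) of each saccade between consecutive fixations."""
    deltas = np.diff(fixations_xy.astype(float), axis=0)
    return np.hypot(deltas[:, 0], deltas[:, 1]) * deg_per_px

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    smap = rng.random((1080, 1920))                        # stand-in saliency map
    fix = np.array([[960, 540], [400, 300], [1500, 800]])  # stand-in fixation coordinates
    print("NSS:", nss(smap, fix))
    print("Saccade amplitudes (deg):", saccade_amplitudes(fix, deg_per_px=0.02))
```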



Published In

ACM Transactions on Applied Perception, Volume 16, Issue 3
Special Issue on SAP 2019 and Regular Paper
July 2019, 91 pages
ISSN: 1544-3558
EISSN: 1544-3965
DOI: 10.1145/3360014
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 06 September 2019
Accepted: 01 July 2019
Received: 01 July 2019
Published in TAP Volume 16, Issue 3

Author Tags

  1. Saliency
  2. virtual environments
  3. visual attention

Qualifiers

  • Research-article
  • Research
  • Refereed

Funding Sources

  • RCUK Centre for the Analysis of Motion, Entertainment Research and Applications (CAMERA)


Article Metrics

  • Downloads (last 12 months): 115
  • Downloads (last 6 weeks): 13
Reflects downloads up to 01 Nov 2024


Cited By

  • (2024) AR-in-VR simulator: A toolbox for rapid augmented reality simulation and user research. ACM Symposium on Applied Perception 2024, 1-11. DOI: 10.1145/3675231.3675240. Online publication date: 30-Aug-2024.
  • (2024) Real-World Scanpaths Exhibit Long-Term Temporal Dependencies: Considerations for Contextual AI for AR Applications. Proceedings of the 2024 Symposium on Eye Tracking Research and Applications, 1-7. DOI: 10.1145/3649902.3656352. Online publication date: 4-Jun-2024.
  • (2024) Tasks Reflected in the Eyes: Egocentric Gaze-Aware Visual Task Type Recognition in Virtual Reality. IEEE Transactions on Visualization and Computer Graphics 30(11), 7277-7287. DOI: 10.1109/TVCG.2024.3456164. Online publication date: 1-Nov-2024.
  • (2024) HOIMotion: Forecasting Human Motion During Human-Object Interactions Using Egocentric 3D Object Bounding Boxes. IEEE Transactions on Visualization and Computer Graphics 30(11), 7375-7385. DOI: 10.1109/TVCG.2024.3456161. Online publication date: 1-Nov-2024.
  • (2024) Effects of Gameplay Dynamics on Visual Attention. IEEE Access 12, 126961-126969. DOI: 10.1109/ACCESS.2024.3454756. Online publication date: 2024.
  • (2024) Interacting with virtual characters, objects and environments: investigating immersive virtual reality in rehabilitation. Disability and Rehabilitation: Assistive Technology, 1-11. DOI: 10.1080/17483107.2024.2353284. Online publication date: 23-May-2024.
  • (2024) Motor “laziness” constrains fixation selection in real-world tasks. Proceedings of the National Academy of Sciences 121(12). DOI: 10.1073/pnas.2302239121. Online publication date: 12-Mar-2024.
  • (2024) DPGazeSynth: Enhancing eye-tracking virtual reality privacy with differentially private data synthesis. Information Sciences 675, 120720. DOI: 10.1016/j.ins.2024.120720. Online publication date: Jul-2024.
  • (2023) The Shortest Route is Not Always the Fastest: Probability-Modeled Stereoscopic Eye Movement Completion Time in VR. ACM Transactions on Graphics 42(6), 1-14. DOI: 10.1145/3618334. Online publication date: 5-Dec-2023.
  • (2023) Virtual Reality Solutions Employing Artificial Intelligence Methods: A Systematic Literature Review. ACM Computing Surveys 55(10), 1-29. DOI: 10.1145/3565020. Online publication date: 2-Feb-2023.
