
Assessing the contribution of color in visual attention

Published: 01 October 2005

Abstract

Visual attention is the ability of a vision system, be it biological or artificial, to rapidly detect potentially relevant parts of a visual scene, on which higher-level vision tasks, such as object recognition, can focus. The saliency-based model of visual attention represents one of the main attempts to simulate this visual mechanism on computers. Though biologically inspired, this model has only been partially assessed in comparison with human behavior. Our methodology consists of comparing the computational saliency map with human eye movement patterns. This paper presents an in-depth analysis of the model by assessing the contribution of different cues to visual attention. It reports the results of a quantitative comparison of human visual attention, derived from fixation patterns, with visual attention as modeled by different versions of the computer model. More specifically, a one-cue gray-level model is compared to a two-cue color model. The experiments, conducted with more than 40 images of various types and involving 20 human subjects, quantify the contribution of chromatic features to visual attention.
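
The comparison methodology can be illustrated with a short sketch. The code below is a minimal illustration, not the authors' implementation: it builds a crude one-cue (intensity) saliency map and a two-cue (intensity plus color-opponency) map from simple center-surround differences, then scores each map by the normalized saliency observed at fixation locations. The conspicuity operators, the scale parameters, and the fixation_score function are simplifying assumptions, and real eye-tracking data would replace the random toy fixations.

```python
# Sketch: one-cue (gray-level) vs. two-cue (color) saliency compared at fixations.
import numpy as np
from scipy.ndimage import gaussian_filter

def intensity_conspicuity(rgb):
    """Center-surround-like conspicuity computed from the gray-level image."""
    gray = rgb.mean(axis=2)
    fine = gaussian_filter(gray, sigma=2)
    coarse = gaussian_filter(gray, sigma=8)
    return np.abs(fine - coarse)

def color_conspicuity(rgb):
    """Chromatic conspicuity from red-green and blue-yellow opponent channels."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    opponents = (r - g, b - (r + g) / 2)
    conspicuity = np.zeros(rgb.shape[:2])
    for chan in opponents:
        fine = gaussian_filter(chan, sigma=2)
        coarse = gaussian_filter(chan, sigma=8)
        conspicuity += np.abs(fine - coarse)
    return conspicuity

def saliency_map(rgb, use_color):
    """One-cue (intensity only) or two-cue (intensity + color) saliency map."""
    def norm(m):
        return (m - m.min()) / (m.max() - m.min() + 1e-9)
    s = norm(intensity_conspicuity(rgb))
    if use_color:
        s = s + norm(color_conspicuity(rgb))
    return norm(s)

def fixation_score(saliency, fixations):
    """Mean z-scored saliency at fixation locations (an NSS-style score).
    `fixations` is an array of (row, col) pixel coordinates."""
    z = (saliency - saliency.mean()) / (saliency.std() + 1e-9)
    rows, cols = fixations[:, 0], fixations[:, 1]
    return z[rows, cols].mean()

if __name__ == "__main__":
    # Toy example with a random image and random "fixations".
    rng = np.random.default_rng(0)
    image = rng.random((240, 320, 3))
    fix = rng.integers(0, [240, 320], size=(20, 2))
    for use_color in (False, True):
        s = saliency_map(image, use_color)
        label = "two-cue (color)" if use_color else "one-cue (gray)"
        print(f"{label}: score = {fixation_score(s, fix):.3f}")
```

Under a score of this kind, a consistently higher value for the two-cue map on the same set of human fixations would indicate a measurable contribution of the chromatic channels.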

Published In

Computer Vision and Image Understanding  Volume 100, Issue 1-2
Special issue: Attention and performance in computer vision
October 2005
250 pages

Publisher

Elsevier Science Inc.

United States

Author Tags

  1. Color vision
  2. Eye movements
  3. Human perception
  4. Saliency map
  5. Visual attention

Cited By

  • (2024) Saliency3D: A 3D Saliency Dataset Collected on Screen. Proceedings of the 2024 Symposium on Eye Tracking Research and Applications, pp. 1-6. DOI: 10.1145/3649902.3653350
  • (2022) Toward Visual Behavior and Attention Understanding for Augmented 360 Degree Videos. ACM Transactions on Multimedia Computing, Communications, and Applications, 19(2s), pp. 1-24. DOI: 10.1145/3565024
  • (2021) Saliency prediction based on object recognition and gaze analysis. Electronics and Communications in Japan, 104(2). DOI: 10.1002/ecj.12303
  • (2019) Saliency-based framework for facial expression recognition. Frontiers of Computer Science, 13(1), pp. 183-198. DOI: 10.1007/s11704-017-6114-9
  • (2018) Robustness of metrics used for scanpath comparison. Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications, pp. 1-5. DOI: 10.1145/3204493.3204580
  • (2017) Survey of recent advances in 3D visual attention for robotics. International Journal of Robotics Research, 36(11), pp. 1159-1176. DOI: 10.1177/0278364917726587
  • (2015) Predicting Eye Fixations With Higher-Level Visual Features. IEEE Transactions on Image Processing, 24(3), pp. 1178-1189. DOI: 10.1109/TIP.2015.2395713
  • (2013) Learning saliency-based visual attention. Signal Processing, 93(6), pp. 1401-1407. DOI: 10.1016/j.sigpro.2012.06.014
  • (2012) Modulating Shape Features by Color Attention for Object Recognition. International Journal of Computer Vision, 98(1), pp. 49-64. DOI: 10.1007/s11263-011-0495-2
  • (2010) Relevance of a feed-forward model of visual attention for goal-oriented and free-viewing tasks. IEEE Transactions on Image Processing, 19(11), pp. 2801-2813. DOI: 10.1109/TIP.2010.2052262
