DOI: 10.1145/2401836.2401837

Brain-enhanced synergistic attention (BESA)

Published: 26 October 2012

Abstract

In this paper, we describe a hybrid human-machine system for searching and detecting Objects of Interest (OI) in imagery. Automated methods for OI detection based on models of human visual attention have received much interest, but are inherently bottom-up and driven by features. Humans fixate on regions of imagery based on a much stronger top-down component. While it may be possible to incorporate some aspects of top-down cognition into these methods, it is difficult to fully capture all aspects of human cognition in an automated algorithm. Our hypothesis is that combining automated methods with human fixations will provide a better solution than either alone. In this work, we describe a Brain-Enhanced Synergistic Attention (BESA) system that combines models of visual attention with real-time eye fixations from a human for accurate search and detection of OI. We describe two different BESA schemes and provide implementation details. Preliminary studies were conducted to determine the efficacy of the system, and initial results are promising. Typical applications of this technology are in surveillance, reconnaissance, and intelligence analysis.

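The paper itself does not publish code. As a rough illustration of the kind of fusion the abstract describes, the sketch below combines a precomputed bottom-up saliency map with a map derived from human eye fixations and thresholds the result to propose candidate OI locations. The Gaussian fixation map, the weighting scheme, the threshold, and all function names are assumptions made for illustration; this is not the authors' BESA implementation.

# Minimal sketch of saliency/fixation fusion (illustrative only; not the
# authors' BESA system). Assumes a precomputed bottom-up saliency map and a
# list of (row, col) eye fixations from an eye tracker.
import numpy as np


def fixation_map(fixations, shape, sigma=15.0):
    """Convert (row, col) eye fixations into a smooth top-down attention map."""
    h, w = shape
    rows, cols = np.mgrid[0:h, 0:w]
    fmap = np.zeros(shape, dtype=float)
    for r, c in fixations:
        fmap += np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2.0 * sigma ** 2))
    if fmap.max() > 0:
        fmap /= fmap.max()
    return fmap


def fuse_attention(saliency, fixations, alpha=0.5):
    """Weighted fusion of a bottom-up saliency map and a human fixation map."""
    if saliency.max() > 0:
        saliency = saliency / saliency.max()
    fmap = fixation_map(fixations, saliency.shape)
    return alpha * saliency + (1.0 - alpha) * fmap


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    saliency = rng.random((240, 320))          # stand-in for a model-generated saliency map
    fixations = [(60, 80), (120, 200)]         # stand-in for eye-tracker output (row, col)
    combined = fuse_attention(saliency, fixations, alpha=0.4)
    candidates = np.argwhere(combined > 0.8)   # candidate Object-of-Interest locations
    print(f"{len(candidates)} candidate OI pixels above threshold")
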

Cited By

  • (2015) Quality control of geological voxel models using experts' gaze. Computers & Geosciences, 76:C, 50-58. DOI: 10.1016/j.cageo.2014.11.011. Online publication date: 1-Mar-2015.

Published In

Gaze-In '12: Proceedings of the 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction
October 2012
88 pages
ISBN:9781450315166
DOI:10.1145/2401836
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 26 October 2012

Author Tags

  1. attention
  2. cognitive processing
  3. eye-tracking
  4. fixation
  5. human-in-the-loop
  6. search and detection
  7. surveillance
  8. user interface

Qualifiers

  • Research-article

Conference

ICMI '12
Sponsor: ICMI '12: International Conference on Multimodal Interaction
October 26, 2012
Santa Monica, California

Acceptance Rates

Overall Acceptance Rate 19 of 21 submissions, 90%


Article Metrics

  • Downloads (last 12 months): 3
  • Downloads (last 6 weeks): 0
Reflects downloads up to 15 Oct 2024
