DOI: 10.1145/2632856.2632866
Research article

Depth Enhanced Saliency Detection Method

Published: 10 July 2014

    Abstract

    The human visual system understands the environment through 3D perception, yet most existing saliency detection algorithms extract the salient foreground from 2D image information alone. In this paper, we propose a saliency detection method that uses additional depth information. In our method, saliency cues are designed to follow the laws of visually salient stimuli in both the color and depth spaces. At the same time, the 'center bias' is extended to a 'spatial bias' that captures the natural advantage of 3D images. We also build a dataset to test our method, and the experiments demonstrate that depth information is useful for extracting salient objects from complex scenes.
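    The pipeline the abstract outlines (saliency cues in both the color and depth spaces, fused with a spatial bias) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' method: the cue definitions, the Gaussian center bias, the multiplicative fusion, and the name `depth_enhanced_saliency` are all assumptions made for the example.

```python
import numpy as np

def depth_enhanced_saliency(rgb, depth, sigma=0.25):
    """Toy RGB-D saliency sketch: fuse a color-contrast cue, a depth cue,
    and a spatial (center) bias into one map. Illustrative only."""
    h, w, _ = rgb.shape
    # Color cue: per-pixel distance from the mean image color,
    # a simple global-contrast measure.
    mean_color = rgb.reshape(-1, 3).mean(axis=0)
    color_cue = np.linalg.norm(rgb - mean_color, axis=2)
    # Depth cue: nearer pixels score higher (depth in [0, 1], 0 = near).
    depth_cue = 1.0 - depth
    # Spatial bias: a Gaussian centered on the image (the usual 2D
    # 'center bias'); combining it with the depth cue above plays the
    # role of the 3D 'spatial bias' the abstract mentions.
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    d2 = ((ys - cy) / h) ** 2 + ((xs - cx) / w) ** 2
    center_bias = np.exp(-d2 / (2 * sigma ** 2))
    # Multiplicative fusion, normalized to [0, 1].
    sal = color_cue * depth_cue * center_bias
    return sal / (sal.max() + 1e-12)

# Usage with synthetic data in place of a real RGB-D pair.
rgb = np.random.rand(48, 64, 3)
depth = np.random.rand(48, 64)
sal = depth_enhanced_saliency(rgb, depth)
print(sal.shape)
```

    Multiplicative fusion is one plausible choice here because a pixel must be conspicuous in every cue to stay salient; a weighted sum would be an equally reasonable alternative.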



    Published In

    ICIMCS '14: Proceedings of International Conference on Internet Multimedia Computing and Service
    July 2014
    430 pages
    ISBN:9781450328104
    DOI:10.1145/2632856
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    In-Cooperation

    • NSF of China: National Natural Science Foundation of China
    • Beijing ACM SIGMM Chapter

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. RGB-D image
    2. depth map
    3. saliency detection

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    ICIMCS '14

    Acceptance Rates

    Overall Acceptance Rate 163 of 456 submissions, 36%



    Cited By

    • (2024) SLMSF-Net: A Semantic Localization and Multi-Scale Fusion Network for RGB-D Salient Object Detection. Sensors 24(4):1117. DOI: 10.3390/s24041117. Online publication date: 8-Feb-2024
    • (2024) MLBSNet: Mutual Learning and Boosting Segmentation Network for RGB-D Salient Object Detection. Electronics 13(14):2690. DOI: 10.3390/electronics13142690. Online publication date: 10-Jul-2024
    • (2024) Feature interaction and two-stage cross-modal fusion for RGB-D salient object detection. Journal of Intelligent & Fuzzy Systems: Applications in Engineering and Technology 46(2):4543-4556. DOI: 10.3233/JIFS-233225. Online publication date: 14-Feb-2024
    • (2024) Heterogeneous Fusion and Integrity Learning Network for RGB-D Salient Object Detection. ACM Transactions on Multimedia Computing, Communications, and Applications 20(7):1-24. DOI: 10.1145/3656476. Online publication date: 5-Apr-2024
    • (2024) Robust Perception and Precise Segmentation for Scribble-Supervised RGB-D Saliency Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 46(1):479-496. DOI: 10.1109/TPAMI.2023.3324807. Online publication date: Jan-2024
    • (2024) 3-D Convolutional Neural Networks for RGB-D Salient Object Detection and Beyond. IEEE Transactions on Neural Networks and Learning Systems 35(3):4309-4323. DOI: 10.1109/TNNLS.2022.3202241. Online publication date: Mar-2024
    • (2024) Joint Correcting and Refinement for Balanced Low-Light Image Enhancement. IEEE Transactions on Multimedia 26:6310-6324. DOI: 10.1109/TMM.2023.3348333. Online publication date: 2024
    • (2024) DGFNet: Depth-Guided Cross-Modality Fusion Network for RGB-D Salient Object Detection. IEEE Transactions on Multimedia 26:2648-2658. DOI: 10.1109/TMM.2023.3301280. Online publication date: 1-Jan-2024
    • (2024) CATNet: A Cascaded and Aggregated Transformer Network for RGB-D Salient Object Detection. IEEE Transactions on Multimedia 26:2249-2262. DOI: 10.1109/TMM.2023.3294003. Online publication date: 1-Jan-2024
    • (2024) SCFANet: Semantics and Context Feature Aggregation Network for 360° Salient Object Detection. IEEE Transactions on Multimedia 26:2276-2288. DOI: 10.1109/TMM.2023.3293994. Online publication date: 2024
