
Texture edge-guided depth recovery for structured light-based depth sensor

Multimedia Tools and Applications

Abstract

The emergence of depth sensors facilitates real-time, low-cost depth capture. However, the quality of the captured depth map is still inadequate for many applications because of the holes, noise, and artifacts in its depth information. In this paper, we propose an iterative depth boundary refinement framework to recover the Kinect depth map. We extract depth edges, detect incorrect regions, and then re-fill those regions until the depth edges are consistent with the color edges. In the incorrect-region detection procedure, we propose an RGB-D edge detection method inspired by recently developed deep learning techniques. In the depth in-painting procedure, we propose a priority-determined fill order in which high-confidence pixels and strong edges are assigned high priority. The actual depth values are computed with a weighted cost filter that accounts for color similarity, spatial similarity, and a Gaussian error model. Experimental results demonstrate that the proposed method produces sharp and clear edges in the Kinect depth map, with depth edges aligned to the color edges.
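
As a concrete illustration of the in-painting step, the short Python/NumPy sketch below shows how a weighted cost filter of the kind described above could fill a single hole pixel by combining color similarity, spatial proximity, and a Gaussian depth-error term. The function name, parameter values, and the use of a local median as the depth reference are illustrative assumptions, not the authors' implementation.

import numpy as np

def fill_hole_pixel(depth, color, y, x, radius=5,
                    sigma_color=10.0, sigma_spatial=3.0, sigma_depth=20.0):
    """Estimate a depth value at hole pixel (y, x) as a weighted average of
    valid neighbors. Weights combine color similarity, spatial proximity and
    a Gaussian error model on depth; parameter values are assumptions."""
    h, w = depth.shape
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)

    d_patch = depth[y0:y1, x0:x1].astype(np.float64)
    c_patch = color[y0:y1, x0:x1].astype(np.float64)
    valid = d_patch > 0                              # treat 0 as a hole
    if not valid.any():
        return 0.0                                   # nothing to propagate yet

    # Spatial term: Gaussian on the distance to the target pixel.
    ys, xs = np.mgrid[y0:y1, x0:x1]
    w_spatial = np.exp(-((ys - y) ** 2 + (xs - x) ** 2) / (2 * sigma_spatial ** 2))

    # Color term: Gaussian on the RGB difference to the target pixel,
    # so the fill does not leak across texture (color) edges.
    c_ref = color[y, x].astype(np.float64)
    w_color = np.exp(-np.sum((c_patch - c_ref) ** 2, axis=2) / (2 * sigma_color ** 2))

    # Gaussian error model on depth: down-weight neighbors that deviate
    # from the local median of valid depths (a simple stand-in here).
    d_ref = np.median(d_patch[valid])
    w_depth = np.exp(-((d_patch - d_ref) ** 2) / (2 * sigma_depth ** 2))

    weights = w_spatial * w_color * w_depth * valid
    total = weights.sum()
    return float((weights * d_patch).sum() / total) if total > 0 else 0.0

In the full framework, such a fill would be applied iteratively in the priority-determined order (high-confidence pixels near strong texture edges first) until the recovered depth edges agree with the color edges.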





Author information

Corresponding author

Correspondence to Huiping Deng.


About this article


Cite this article

Deng, H., Wu, J., Zhu, L. et al. Texture edge-guided depth recovery for structured light-based depth sensor. Multimed Tools Appl 76, 4211–4226 (2017). https://doi.org/10.1007/s11042-016-3340-3

