DOI: 10.1145/3490035.3490297

Disparity based depth estimation using light field camera

Published: 19 December 2021

Abstract

Light field cameras have the unique ability to capture the direction of incoming light rays along with their intensity. This additional angular information can be used to estimate the depth of a 3-D scene. Recently, a considerable amount of research has addressed depth estimation from light field data; however, existing methods rely heavily on iterative optimization and do not fully exploit the inherent structure of the light field. In this paper, we present a novel three-step disparity-based algorithm for estimating accurate depth maps from a light field image. First, an initial depth map is estimated from the principal views by computing the disparity of segments in the central view; this initial depth resolves ambiguities when propagating refined depth in epipolar plane images (EPIs). Second, refined depth is estimated along lines in the EPIs using the disparity vector. Finally, the refined depth at these lines is propagated to the remaining locations using the initial depth map. We also provide a synthetic dataset with the inherent characteristics of a light field. We have tested our approach on a variety of real-world scenes captured with a Lytro Illum camera, as well as on synthetic images, and the proposed method outperforms several state-of-the-art algorithms.
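The paper's full three-step method is not reproduced here, but the primitive it builds on, recovering disparity from the slope of the line a scene point traces in an EPI and converting that disparity to depth via triangulation, can be sketched as follows. All function names, the brute-force integer correlation search, and the numeric values are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def epi_disparity(epi):
    """Estimate the per-angular-step horizontal shift (disparity) of an EPI
    by integer-shift cross-correlation of adjacent rows.

    Rows of `epi` are angular samples (views); columns are spatial pixels.
    A scene point traces a line in the EPI whose slope is its disparity."""
    shifts = []
    for r in range(epi.shape[0] - 1):
        a, b = epi[r], epi[r + 1]
        best, best_score = 0, -np.inf
        for d in range(-5, 6):                # candidate integer disparities
            score = float(np.dot(a, np.roll(b, d)))
            if score > best_score:
                best, best_score = d, score
        shifts.append(best)
    return float(np.mean(shifts))

def disparity_to_depth(disparity, focal_px, baseline):
    """Standard triangulation relation: depth = f * B / |d|."""
    return focal_px * baseline / abs(disparity)

# Synthetic EPI: a single bright point that shifts 2 px per angular step,
# i.e. its EPI line has slope (disparity magnitude) 2.
epi = np.zeros((7, 64))
for u in range(7):
    epi[u, 20 + 2 * u] = 1.0

d = epi_disparity(epi)                                        # -2.0 (sign encodes shift direction)
depth = disparity_to_depth(d, focal_px=500.0, baseline=0.01)  # 2.5 (in units of the baseline)
```

In practice, sub-pixel disparity is needed; methods in this area typically replace the integer search with structure-tensor slope estimation or phase correlation on the EPI, which is where approaches like the paper's line-based refinement come in.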


Cited By

  • (2024) Adapting the Learning Models of Single Image Super-Resolution Into Light-Field Imaging. IEEE Transactions on Computational Imaging 10, 496-509. DOI: 10.1109/TCI.2024.3380348


Published In

ICVGIP '21: Proceedings of the Twelfth Indian Conference on Computer Vision, Graphics and Image Processing
December 2021
428 pages
ISBN:9781450375962
DOI:10.1145/3490035
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. depth estimation
  2. light field imaging
  3. lytro illum

Qualifiers

  • Research-article

Conference

ICVGIP '21

Acceptance Rates

Overall Acceptance Rate 95 of 286 submissions, 33%


Article Metrics

  • Downloads (last 12 months): 19
  • Downloads (last 6 weeks): 4

Reflects downloads up to 10 Oct 2024.
