DOI: 10.1145/3571600.3571624

Depth estimation using Stereo Light Field Camera

Published: 12 May 2023

Abstract

Light field imaging has emerged as a new modality that captures both the angular and spatial information of a scene. This additional angular information can be used to estimate the depth of a 3-D scene. The continuum of virtual viewpoints in light field data handles occlusion efficiently and provides a robust depth estimate at short distances. However, the narrow baseline of a light field camera limits depth estimation at larger distances. To handle occlusion efficiently while extending the operating range, we propose a novel disparity-based stereo light field depth estimation method. First, segments are obtained in the central sub-aperture image of the left view, and the disparity vector of each segment is estimated using the left camera's sub-aperture images; this handles occlusion efficiently. Next, the stereo disparity at the boundaries of these segments is estimated by exploiting the epipolar geometry inherent in light field data. Finally, this boundary disparity is propagated to the remaining pixels and normalized. We also provide a synthetic stereo light field dataset with the inherent characteristics of a light field. We have tested our approach on a variety of real-world scenes captured with a Lytro Illum camera as well as on synthetic images. The proposed method outperforms several state-of-the-art algorithms.
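The disparity estimation step in the abstract rests on a standard light-field primitive: a pixel's disparity determines how far it shifts between sub-aperture views, so a disparity hypothesis can be scored by how well it re-aligns those views with the centre view. Below is a minimal sketch of that idea, not the paper's actual segment-wise algorithm; the function name, toy data, and brute-force integer search are all illustrative assumptions.

```python
import numpy as np

def sub_aperture_disparity(views, offsets, candidates):
    """Toy disparity search over a horizontal row of sub-aperture images.

    Each disparity hypothesis d is scored by shifting every view back
    towards the centre view by d * (angular offset) and summing the mean
    absolute photometric error; the best-scoring d is returned.
    """
    centre = views[offsets.index(0)]
    best_d, best_cost = None, np.inf
    for d in candidates:
        cost = 0.0
        for view, u in zip(views, offsets):
            # Undo the parallax this hypothesis predicts (a circular shift
            # keeps the toy example simple; real code would pad or crop).
            aligned = np.roll(view, -int(round(d * u)), axis=1)
            cost += np.abs(aligned - centre).mean()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# Build a synthetic horizontal light field: each view is the centre
# texture translated by (true disparity) x (angular offset).
rng = np.random.default_rng(0)
texture = rng.random((32, 64))
offsets = [-2, -1, 0, 1, 2]   # angular positions of the sub-aperture views
true_d = 2                    # pixels of parallax per unit angular offset
views = [np.roll(texture, true_d * u, axis=1) for u in offsets]

print(sub_aperture_disparity(views, offsets, candidates=[0, 1, 2, 3]))  # 2
```

The paper applies such photometric-consistency reasoning per segment rather than per pixel, and then combines it with a stereo disparity computed at segment boundaries; this sketch only illustrates the shared core.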


Published In

ICVGIP '22: Proceedings of the Thirteenth Indian Conference on Computer Vision, Graphics and Image Processing
December 2022
506 pages
ISBN:9781450398220
DOI:10.1145/3571600
© 2022 Association for Computing Machinery. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of a national government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Light Field Imaging
  2. Lytro Illum
  3. stereo depth estimation
  4. stereo light field

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

ICVGIP'22

Acceptance Rates

Overall Acceptance Rate 95 of 286 submissions, 33%
