New Method of Microimages Generation for 3D Display
Abstract
1. Introduction
2. Previous Work
3. Proposed Technique
3.1. Microimages Generation Process
- Point cloud creation. The scene is captured and its depth information is extracted. A point cloud representing the scene is generated by merging the RGB and depth information.
- EI capturing. The EIs are generated using a VPA. The number of virtual pinhole cameras in the vertical (horizontal) direction is set equal to the number of pixels behind each microlens of the InI monitor in the vertical (horizontal) direction. To capture the EIs, the VPA is placed far from the scene. The position of the VPA involves a trade-off between the resolution of the EIs and the absence of black pixels. If the VPA is set too close to the point cloud, some information of the scene is lost and black pixels appear in the EIs (for the same reason as in [12,13]). Conversely, as the VPA is moved further from the point cloud, the black-pixel issue disappears, but the scene is captured in the EIs at lower resolution. Therefore, the position of the VPA is set empirically at the minimum distance from the point cloud that ensures the absence of black pixels. This value is then refined to set the position of the reference plane, as explained in Section 3.2.
- Shifted cropping. A portion of L × V pixels is cropped from every EI, as in Figure 2. Shifting the cropped region by a constant step between adjacent EIs sets the reference plane of the final image. This determines which portion of the 3D scene is reconstructed inside and outside the InI display, changing the depth sensation. More details on the parameters used in this step are given in Section 3.2.
- Resize. The cropped EIs (sub-EIs) are resized to the spatial resolution of the InI monitor, i.e., the number of microlenses of the MLA in the horizontal and vertical directions.
- Transposition. The pixels are resampled as in Figure 3 to convert the sub-EIs into the final microimages, which are projected on the InI monitor.
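The shifted cropping, resize, and transposition steps above can be sketched with NumPy. This is a minimal illustration under stated assumptions, not the authors' implementation: the array layout, the nearest-neighbour resize, and the direction of the crop-window shift (here, increasing with the camera index) are illustrative choices, and the function name `eis_to_microimages` is hypothetical.

```python
import numpy as np

def eis_to_microimages(eis, crop_size, shift_step, mla_res):
    """Shifted cropping + resize + transposition of elemental images (EIs).

    eis:        (Nv, Nh, H, W, 3) array, one EI per virtual pinhole camera
    crop_size:  (V, L) size in pixels of the region cropped from each EI
    shift_step: constant shift of the crop window between adjacent EIs
    mla_res:    (Mv, Mh) number of microlenses of the InI monitor's MLA
    """
    Nv, Nh, H, W, _ = eis.shape
    V, L = crop_size
    Mv, Mh = mla_res

    # Shifted cropping + nearest-neighbour resize to the MLA resolution.
    yi = np.arange(Mv) * V // Mv          # row indices sampled in the crop
    xi = np.arange(Mh) * L // Mh          # column indices sampled in the crop
    sub = np.empty((Nv, Nh, Mv, Mh, 3), dtype=eis.dtype)
    for v in range(Nv):
        for h in range(Nh):
            y0, x0 = v * shift_step, h * shift_step   # window shifts per EI
            patch = eis[v, h, y0:y0 + V, x0:x0 + L]
            sub[v, h] = patch[yi][:, xi]

    # Transposition: pixel (v, h) of the microimage under microlens (i, j)
    # is pixel (i, j) of the resized sub-EI captured by camera (v, h).
    micro = sub.transpose(2, 0, 3, 1, 4)              # (Mv, Nv, Mh, Nh, 3)
    return micro.reshape(Mv * Nv, Mh * Nh, 3)
```

The returned image has (Mv·Nv) × (Mh·Nh) pixels, i.e., one microimage of Nv × Nh pixels under each of the Mv × Mh microlenses, matching the pixel count of the InI monitor.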
3.2. Geometrical Model
4. Experimental Results
5. Conclusions and Future Work
Author Contributions
Funding
Conflicts of Interest
References
- Son, J.Y.; Javidi, B. Three-dimensional imaging methods based on multiview images. J. Disp. Technol. 2005, 1, 125–140.
- Dodgson, N.A.; Moore, J.; Lang, S. Multi-view autostereoscopic 3D display. In International Broadcasting Convention; Citeseer: State College, PA, USA, 1999; Volume 2, pp. 497–502.
- Lippmann, G. Epreuves reversibles donnant la sensation du relief. J. Phys. Theor. Appl. 1908, 7, 821–825.
- Raytrix. 3D Lightfield Camera. Available online: https://raytrix.de/ (accessed on 24 August 2018).
- Martínez-Corral, M.; Dorado, A.; Barreiro, J.C.; Saavedra, G.; Javidi, B. Recent advances in the capture and display of macroscopic and microscopic 3-D scenes by integral imaging. Proc. IEEE 2017, 105, 825–836.
- Kwon, K.C.; Park, C.; Erdenebat, M.U.; Jeong, J.S.; Choi, J.H.; Kim, N.; Park, J.H.; Lim, Y.T.; Yoo, K.H. High speed image space parallel processing for computer-generated integral imaging system. Opt. Express 2012, 20, 732–740.
- Jiao, S.; Wang, X.; Zhou, M.; Li, W.; Hong, T.; Nam, D.; Lee, J.H.; Wu, E.; Wang, H.; Kim, J.Y. Multiple ray cluster rendering for interactive integral imaging system. Opt. Express 2013, 21, 10070–10086.
- Li, S.L.; Wang, Q.H.; Xiong, Z.L.; Deng, H.; Ji, C.C. Multiple orthographic frustum combing for real-time computer-generated integral imaging system. J. Disp. Technol. 2014, 10, 704–709.
- Chen, G.; Ma, C.; Fan, Z.; Cui, X.; Liao, H. Real-time lens based rendering algorithm for super-multiview integral photography without image resampling. IEEE Trans. Vis. Comput. Graph. 2018, 24, 2600–2609.
- Navarro, H.; Martínez-Cuenca, R.; Saavedra, G.; Martínez-Corral, M.; Javidi, B. 3D integral imaging display by smart pseudoscopic-to-orthoscopic conversion (SPOC). Opt. Express 2010, 18, 25573–25583.
- Martínez-Corral, M.; Dorado, A.; Navarro, H.; Saavedra, G.; Javidi, B. Three-dimensional display by smart pseudoscopic-to-orthoscopic conversion with tunable focus. Appl. Opt. 2014, 53, E19–E25.
- Hong, S.; Dorado, A.; Saavedra, G.; Martínez-Corral, M.; Shin, D.; Lee, B.G. Full-parallax 3D display from single-shot Kinect capture. In Three-Dimensional Imaging, Visualization, and Display 2015; International Society for Optics and Photonics: Bellingham, WA, USA, 2015; Volume 9495.
- Hong, S.; Ansari, A.; Saavedra, G.; Martinez-Corral, M. Full-parallax 3D display from stereo-hybrid 3D camera system. Opt. Lasers Eng. 2018, 103, 46–54.
- Piao, Y.; Qu, H.; Zhang, M.; Cho, M. Three-dimensional integral imaging display system via off-axially distributed image sensing. Opt. Lasers Eng. 2016, 85, 18–23.
- Cho, M.; Shin, D. 3D integral imaging display using axially recorded multiple images. J. Opt. Soc. Korea 2013, 17, 410–414.
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Incardona, N.; Hong, S.; Martínez-Corral, M.; Saavedra, G. New Method of Microimages Generation for 3D Display. Sensors 2018, 18, 2805. https://doi.org/10.3390/s18092805