Post-processing of light fields enables us to extract more information from a scene compared to a traditional camera. Plenoptic cameras and camera arrays are two common methods for light-field capture. In fact, it has long been recognized that the two devices are in some ways equivalent. Practically, though, light-field capture via camera arrays results in poor angular sampling. Similarly, the plenoptic camera often suffers from relatively poor spatial sampling. In simulation, we can easily explore both constraints by simulating two-dimensional viewpoint images and combining them into a four-dimensional light field. In this work, we present a formalism for converting between equivalent plenoptic configurations and camera arrays. We use this approach to simulate a simple scene and explore the trade-offs in angular and spatial sampling in light-field capture.
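The step of combining two-dimensional viewpoint images into a four-dimensional light field can be sketched as below. This is a minimal illustration under the common L(u, v, s, t) parameterization, where (u, v) indexes the viewpoint and (s, t) the pixel within a view; the function name and toy dimensions are illustrative, not from the paper.

```python
import numpy as np

def assemble_light_field(views, grid_shape):
    """Stack 2D viewpoint images (each H x W) into a 4D light field
    L(u, v, s, t): (u, v) indexes the viewpoint grid, (s, t) the pixels
    within each view. `views` is listed in row-major viewpoint order."""
    U, V = grid_shape
    H, W = views[0].shape
    return np.stack(views).reshape(U, V, H, W)

# Toy example: a 3x3 grid of 4x4 grayscale viewpoint images.
views = [np.full((4, 4), k, dtype=float) for k in range(9)]
lf = assemble_light_field(views, (3, 3))
print(lf.shape)  # (3, 3, 4, 4)
```

Once the views are in this 4D array, sub-aperture images, epipolar slices, and refocused renderings are all simple indexing or summation operations over the (u, v) axes.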
A fully convolutional autoencoder is developed for the detection of anomalies in multi-sensor vehicle drive-cycle data from the powertrain domain. Preliminary results on real-world powertrain data show that the reconstruction error of faulty drive cycles deviates significantly from the reconstruction error of healthy drive cycles under the trained autoencoder. The results demonstrate applicability for identifying faulty drive cycles and for improving the accuracy of system prognosis and predictive maintenance in connected vehicles.
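The decision rule implied by the abstract, flagging a drive cycle as anomalous when its autoencoder reconstruction error sits far outside the band observed on healthy cycles, can be sketched as follows. The threshold form (mean plus k standard deviations) and all numbers are assumptions for illustration; the paper does not specify its thresholding.

```python
import numpy as np

def anomaly_flags(recon_errors, healthy_errors, k=3.0):
    """Flag cycles whose autoencoder reconstruction error exceeds
    mean + k*std of the errors observed on healthy training cycles."""
    thresh = healthy_errors.mean() + k * healthy_errors.std()
    return recon_errors > thresh

# Synthetic illustration: healthy errors cluster near 0.1; the third
# candidate cycle reconstructs far worse and should be flagged.
rng = np.random.default_rng(0)
healthy = rng.normal(0.1, 0.01, 500)
cycles = np.array([0.11, 0.09, 0.45])
flags = anomaly_flags(cycles, healthy)
print(flags.tolist())  # [False, False, True]
```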
Postprocessing of light fields enables us to extract more information from a scene compared to traditional cameras. Plenoptic cameras and camera arrays are two common methods for light field capture. It has long been recognized that the two devices are in some ways equivalent. Practically, both techniques have important constraints: camera arrays are unable to provide high angular sampling, and the plenoptic camera can have limited spatial sampling. In simulation, we can easily explore both constraints by simulating two-dimensional viewpoint images and combining them into a four-dimensional light field. We present a transformation for converting between equivalent plenoptic configurations and camera arrays when they capture pristine light fields produced in simulation. We use this approach to simulate light fields of simple scenes and validate our transformation by comparing the focus distance of a standard plenoptic camera with that of the equivalent camera array's light field. We also show how some simple practical effects can be added to the pristine, synthetic light field via postprocessing, and we examine their effect on refocusing distance.
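One concrete form of the plenoptic-to-camera-array correspondence is the standard pixel reindexing of an ideal (unvignetted) plenoptic sensor image into sub-aperture views: taking pixel (u, v) under every microlens yields one viewpoint image. The sketch below shows only this textbook reindexing; the paper's actual transformation between equivalent configurations may be more involved.

```python
import numpy as np

def plenoptic_to_views(raw, mu, mv):
    """Reindex an ideal plenoptic sensor image into camera-array-style
    sub-aperture views. Each microlens covers a mu x mv pixel block;
    views[u, v] collects pixel (u, v) from under every microlens."""
    H, W = raw.shape
    return raw.reshape(H // mu, mu, W // mv, mv).transpose(1, 3, 0, 2)

# Toy 4x4 sensor with 2x2 pixels under each microlens; pixel (u, v)
# under every microlens stores u*2 + v, so each recovered view is constant.
ys, xs = np.indices((4, 4))
raw = (ys % 2) * 2 + (xs % 2)
views = plenoptic_to_views(raw, 2, 2)
print(views.shape)  # (2, 2, 2, 2)
```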
Light field imaging introduced the capability to refocus an image after capture. Currently, there are two popular refocusing methods: shift-and-sum and Fourier slice. Neither of these two methods can refocus the light field in real time without any pre-processing. In this paper, we introduce a machine-learning-based refocusing technique that is capable of extracting 16 refocused images with refocusing parameters of α = 0.125, 0.250, 0.375, ..., 2.0 in real time. We trained our network, called RefNet, in two experiments: once using the Fourier slice method to produce the training (i.e., "ground truth") data, and once using the shift-and-sum method. We showed that in both cases, not only is the RefNet method at least 134× faster than previous approaches, but the color prediction of RefNet is also superior to both the Fourier slice and shift-and-sum methods, while having similar depth-of-field and focus-distance performance.
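For reference, the shift-and-sum baseline mentioned above can be sketched as follows: each viewpoint image is shifted in proportion to its offset from the central view (scaled by the refocusing parameter α) and the shifted views are averaged. This is a minimal integer-shift sketch; practical implementations use sub-pixel interpolation.

```python
import numpy as np

def shift_and_sum(lf, alpha):
    """Refocus a 4D light field lf[u, v, s, t] by shifting each view by
    (1 - 1/alpha) times its offset from the central viewpoint, then
    averaging. alpha = 1 reproduces the original focal plane.
    Integer-pixel shifts only, for brevity."""
    U, V, H, W = lf.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round((u - cu) * (1 - 1 / alpha)))
            dx = int(round((v - cv) * (1 - 1 / alpha)))
            out += np.roll(lf[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

With alpha = 1 every shift is zero and the result is simply the mean over all views, i.e. an image focused at the original capture plane.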
Light field (LF) imaging has gained significant attention due to its recent success in 3-dimensional (3D) display and rendering as well as augmented- and virtual-reality applications. Nonetheless, because of the two extra dimensions, LFs are much larger than conventional images. We develop a JPEG-assisted learning-based technique to reconstruct an LF from a JPEG bitstream with a bit-per-pixel ratio of 0.0047 on average. For compression, we keep the LF's center view and use JPEG compression at 50% quality. Our reconstruction pipeline consists of a small JPEG enhancement network (JPEG-Hance) and a depth estimation network (Depth-Net), followed by view synthesis via warping of the enhanced center view. Our pipeline is significantly faster than using video compression on pseudo-sequences extracted from an LF, both in compression and decompression, while maintaining effective performance. We show that with a 1% compression time cost and an 18× speedup for decompression, our methods reconstru...
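The bit-per-pixel figure quoted above is the size of the compressed bitstream divided by the total number of light-field pixels across all views, which this small sketch makes explicit. The byte count and LF dimensions in the example are hypothetical stand-ins, not numbers from the paper.

```python
def bits_per_pixel(compressed_bytes, lf_shape):
    """Bit-per-pixel ratio of a compressed light field: bits in the
    bitstream divided by the total pixel count over all U*V views."""
    U, V, H, W = lf_shape
    return compressed_bytes * 8 / (U * V * H * W)

# Hypothetical example: a 35,870-byte JPEG of the center view standing in
# for a 15x15 grid of 434x625 views.
print(round(bits_per_pixel(35_870, (15, 15, 434, 625)), 4))  # 0.0047
```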
Papers by Eisa Hedayati