We have developed a robust method for image segmentation based on a local multiscale texture description. We first apply a set of 4 × 4 complex Gabor filters (four scales by four orientations), plus a low-pass residual (LPR), producing a log-polar sampling of the frequency domain. Unlike other analysis methods, our Gabor scheme produces a visually complete, multipurpose representation of the image, so that …
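As a rough illustration of such a filter bank, the sketch below builds 4 scales × 4 orientations of complex band-pass filters plus a Gaussian low-pass residual directly in the frequency domain. It uses a log-Gabor-style radial profile and made-up bandwidth constants for readability; it does not reproduce the paper's exact Gabor parameterization.

```python
import numpy as np

def gabor_bank_freq(size, n_scales=4, n_orient=4):
    """Frequency-domain Gabor-like bank on a log-polar grid (illustrative).

    Returns n_scales * n_orient band-pass transfer functions plus a Gaussian
    low-pass residual (LPR), all of shape (size, size).
    """
    fy, fx = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size), indexing="ij")
    rho = np.hypot(fx, fy) + 1e-12          # radial frequency (avoid log(0) at DC)
    theta = np.arctan2(fy, fx)              # orientation

    filters = []
    for s in range(n_scales):
        f0 = 0.25 / 2.0 ** s                # octave-spaced center frequencies (assumed)
        for o in range(n_orient):
            th0 = o * np.pi / n_orient      # orientations 45 degrees apart
            radial = np.exp(-np.log(rho / f0) ** 2 / (2 * 0.55 ** 2))
            dth = np.angle(np.exp(1j * (theta - th0)))   # wrapped angular distance
            angular = np.exp(-dth ** 2 / (2 * (np.pi / (2 * n_orient)) ** 2))
            filters.append(radial * angular)
    lpr = np.exp(-(rho / (0.25 / 2.0 ** n_scales)) ** 2 / 2)  # low-pass residual
    return filters, lpr

def analyze(image):
    """Apply the bank to a square grayscale image via FFT; returns complex responses."""
    F = np.fft.fft2(image)
    filters, lpr = gabor_bank_freq(image.shape[0])
    responses = [np.fft.ifft2(F * H) for H in filters]
    residual = np.real(np.fft.ifft2(F * lpr))
    return responses, residual
```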
Storage and Retrieval for Image and Video Databases, 1999
Automatic object segmentation in highly noisy image sequences, composed of a translating object over a background with a different motion, is achieved through joint motion-texture analysis. Local motion and/or texture is characterized by the energy of the local spatio-temporal spectrum, as different textures undergoing different translational motions display distinctive features in their 3D (x, y, t) spectra. Measurements of local spectrum energy …
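A minimal sketch of the kind of measurement described: windowed 3D FFT energy over local spatio-temporal blocks. The block size, step, and windowing below are illustrative choices, not the paper's.

```python
import numpy as np

def local_spectrum_energy(seq, block=16, step=8):
    """Local 3D (x, y, t) spectral energy of an image sequence (illustrative).

    seq: array of shape (T, H, W) with T >= block. For brevity only the first
    temporal block is analyzed; a full version would also slide over time.
    """
    T, H, W = seq.shape
    win = np.hanning(block)
    w3 = win[:, None, None] * win[None, :, None] * win[None, None, :]  # 3D window
    energies = {}
    for y in range(0, H - block + 1, step):
        for x in range(0, W - block + 1, step):
            cube = seq[:block, y:y + block, x:x + block] * w3
            energies[(y, x)] = np.abs(np.fft.fftn(cube)) ** 2  # |spectrum|^2
    return energies
```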
Storage and Retrieval for Image and Video Databases, 1999
Directional filters are not normally used as pre-filters for optical flow estimation because orientation selectivity tends to increase the aperture problem. Despite this, here we apply a subband decomposition using directional spatio-temporal filters at different resolutions to discriminate multiple motions at the same location. We first obtain multiple estimates of the velocity by applying the classic gradient constraint to …
This chapter addresses an open problem in visual motion analysis: the estimation of image motion in the vicinity of occlusion boundaries. With a Bayesian formulation, local image motion is explained in terms of multiple, competing, nonlinear models, including models for smooth (translational) motion and for motion boundaries. The generative model for motion boundaries explicitly encodes the orientation of the boundary, the velocities on either side, …
Abstract. We present an augmented reality tourist guide on mobile devices. Many of the latest mobile devices contain cameras and location, orientation, and motion sensors. We demonstrate how these devices can be used to bring tourism information to users in a much more immersive ...
Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, 2001
We describe a probabilistic framework for detecting and tracking motion boundaries. It builds on previous work [4] that used a particle filter to compute a posterior distribution over multiple, local motion models, one of which was specific for motion boundaries. We extend that framework in two ways: 1) with an enhanced likelihood that combines motion and edge support, 2) with a spatiotemporal model that propagates beliefs between adjoining image neighborhoods to encourage boundary continuity and provide better temporal predictions for motion boundaries. Approximate inference is achieved with a combination of tools: Sampled representations allow us to represent multimodal non-Gaussian distributions and to apply nonlinear dynamics. Mixture models are used to simplify the computation of joint prediction distributions.
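For intuition, here is a generic sampled-representation predict/update cycle of the kind the framework relies on. The `dynamics` and `likelihood` callables are placeholders standing in for the paper's stochastic temporal model and combined motion + edge likelihood; this is a plain SIR particle filter, not the paper's inference procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_step(particles, weights, dynamics, likelihood):
    """One predict/update/resample cycle over sampled motion-model states.

    particles: (N, D) array of state samples; weights must sum to one.
    """
    N = len(particles)
    idx = rng.choice(N, size=N, p=weights)   # multinomial resampling
    particles = dynamics(particles[idx])     # stochastic temporal prediction
    w = likelihood(particles)                # reweight by the new evidence
    return particles, w / w.sum()
```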
Proceedings IEEE Conference on Computer Vision and Pattern Recognition. CVPR 2000 (Cat. No.PR00662), 2000
This paper addresses the derivation of likelihood functions and confidence bounds for problems involving overdetermined linear systems with noise in all measurements, often referred to as total-least-squares (TLS). It has been shown previously that TLS provides maximum likelihood estimates. But rather than being a function solely of the variables of interest, the associated likelihood functions increase in dimensionality with the number of equations. This has made it difficult to derive suitable confidence bounds, and impractical to use these probability functions with Bayesian belief propagation or Bayesian tracking. This paper derives likelihood functions that are defined only on the parameters of interest. This has two main advantages: first, the likelihood functions are much easier to use within a Bayesian framework; and second, it is straightforward to obtain a reliable confidence bound on the estimates. We demonstrate the accuracy of our confidence bound in relation to others that have been proposed. Also, we use our theoretical results to obtain likelihood functions for estimating the direction of 3D camera translation.
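For reference, the classic TLS point estimate itself is a small SVD computation. The sketch below shows that textbook estimator, assuming i.i.d. noise in all entries of [A | b]; it does not reproduce the paper's low-dimensional likelihoods or confidence bounds.

```python
import numpy as np

def tls_solve(A, b):
    """Classic total-least-squares solution of A x ~ b via SVD."""
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                    # right singular vector of the smallest singular value
    if abs(v[-1]) < 1e-12:
        raise np.linalg.LinAlgError("degenerate TLS problem")
    return -v[:-1] / v[-1]        # normalize so the last component is -1
```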
Three-Dimensional Image Processing (3DIP) and Applications 2013, 2013
The widespread success of Kinect enables users to acquire both image and depth information with satisfying accuracy at relatively low cost. We leverage the Kinect output to efficiently and accurately estimate the camera pose in the presence of rotation, translation, or both. The applications of our algorithm are vast, ranging from camera tracking to 3D point cloud registration and video stabilization. The state-of-the-art approach uses point correspondences for estimating the pose. More explicitly, it extracts point features from images, e.g., SURF or SIFT, builds their descriptors, and matches features from different images to obtain point correspondences. However, while feature-based approaches are widely used, they perform poorly in scenes lacking texture, due to scarcity of features, or in scenes with repetitive structure, due to false correspondences. Our algorithm is intensity-based and requires neither point feature extraction nor descriptor generation/matching. In the absence of depth, the intensity-based approach alone cannot handle camera translation. With Kinect capturing both image and depth frames, we extend the intensity-based algorithm to estimate the camera pose under both 3D rotation and translation. The results are quite promising.
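A hedged sketch of the core quantity in such a direct (intensity-based) method: the photometric residual obtained by back-projecting pixels with the depth map, applying a candidate rigid pose, and comparing intensities. The pinhole model and nearest-neighbor sampling are simplifications, and `K`, `Rmat`, and `t` are assumed intrinsics and pose hypothesis; the paper's actual solver may differ.

```python
import numpy as np

def photometric_residual(I1, I2, depth1, K, Rmat, t):
    """Intensity residual of warping frame 1 into frame 2 under pose (Rmat, t)."""
    H, W = I1.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(H * W)])   # homogeneous pixels
    P = (np.linalg.inv(K) @ pix) * depth1.ravel()            # back-project using depth
    Q = Rmat @ P + t[:, None]                                # rigid motion
    q = K @ Q                                                # reproject into frame 2
    z = np.maximum(q[2], 1e-9)                               # guard against division by zero
    x = np.round(q[0] / z).astype(np.int64)
    y = np.round(q[1] / z).astype(np.int64)
    ok = (q[2] > 0) & (x >= 0) & (x < W) & (y >= 0) & (y < H)
    return I2[y[ok], x[ok]] - I1.ravel()[ok]                 # intensity differences
```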
Directional filters are not normally used as pre-filters for optical flow estimation because orientation selectivity tends to increase the aperture problem. Despite this, here we apply a subband decomposition using directional spatio-temporal filters at different resolutions to discriminate multiple motions at the same location. We first obtain multiple estimates of the velocity by applying the classic gradient constraint to the output of each filter (a bank of 6 directional second-order Gaussian derivatives, GD2, at 3 spatial scales). Spatio-temporal gradients of the GD2 channel responses are easily obtained as linear combinations of the set of 10 separable GD3 channel responses, which constitutes a multipurpose scheme for the visual representation of image sequences. Then we obtain an overdetermined linear system by imposing locally constant velocity. This system is solved by least squares, yielding an estimate of the velocity and its covariance matrix (a 2D confidence measure). After segmenting the resulting 6 × 3 velocity estimates (grouping together those estimates whose Mahalanobis distance is below a given threshold), we combine them using Bayesian probability rules. Segmentation maintains the ability to represent multiple motions, while the combination of estimates reduces the aperture problem. Results for synthetic and real sequences are highly satisfactory. Mean errors on complex standard sequences are below those reported by most published methods.
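The least-squares step and the Mahalanobis grouping lend themselves to a compact sketch. Below, `channel_velocity` solves the gradient-constraint system for one channel and returns the velocity with its 2×2 covariance, and `group_and_fuse` groups estimates by Mahalanobis distance and fuses each group by precision weighting, a plausible stand-in for the Bayesian combination rule; the threshold value is an arbitrary placeholder.

```python
import numpy as np

def channel_velocity(Ix, Iy, It):
    """Least-squares velocity from gradient constraints Ix*u + Iy*v + It = 0
    over a local window, with a 2x2 covariance as confidence measure."""
    A = np.column_stack([Ix.ravel(), Iy.ravel()])
    b = -It.ravel()
    AtA = A.T @ A
    v = np.linalg.solve(AtA, A.T @ b)
    r = b - A @ v
    s2 = (r @ r) / max(len(b) - 2, 1)        # residual variance estimate
    return v, s2 * np.linalg.inv(AtA)

def mahalanobis2(v1, c1, v2, c2):
    """Squared Mahalanobis distance between two velocity estimates."""
    d = v1 - v2
    return float(d @ np.linalg.solve(c1 + c2, d))

def group_and_fuse(estimates, thresh=4.0):
    """Group (velocity, covariance) estimates whose Mahalanobis distance to a
    group is below `thresh`, then fuse each group by precision weighting."""
    groups = []
    for v, c in estimates:
        for g in groups:
            if mahalanobis2(v, c, g[0][0], g[0][1]) < thresh:
                g.append((v, c))
                break
        else:
            groups.append([(v, c)])
    fused = []
    for g in groups:
        P = sum(np.linalg.inv(c) for _, c in g)                        # total precision
        m = np.linalg.solve(P, sum(np.linalg.inv(c) @ v for v, c in g))
        fused.append((m, np.linalg.inv(P)))
    return fused
```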
2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, 2010
We discuss the problem of fusing the information in a video stream with synchronized streams of location and orientation data obtained from sensors attached to the video camera. We are interested in using this information to reconstruct the camera trajectory and observed scenery from consumer videos, with the objective of visualizing them in a 3D virtual environment. We review existing literature and applications and suggest application scenarios for personal video. We discuss the issues that distinguish the personal video application from similar applications and present outlines of algorithms for several consumer video application scenarios.
2010 IEEE International Symposium on Mixed and Augmented Reality, 2010
Estimating the 3D orientation of the camera in a video sequence within a global frame of reference is useful for video stabilization when displaying the video in a virtual 3D environment, as well as for accurate navigation and other applications. This task requires the input of orientation sensors attached to the camera to provide absolute 3D orientation in a geographical frame of reference. However, high-frequency noise in the sensor readings makes it impossible to achieve the accurate orientation estimates required for visually stable presentation of video sequences acquired with a camera subject to jitter, such as a handheld or vehicle-mounted camera. On the other hand, image alignment has proven successful for image stabilization, providing accurate frame-to-frame orientation estimates, but it drifts over time due to error and bias accumulation and lacks absolute orientation. In this paper we propose a practical method for generating high-accuracy estimates of the 3D orientation of the camera within a global frame of reference by fusing orientation estimates from an efficient image-based alignment method with the estimates from an orientation sensor, overcoming the limitations of the component methods.
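The fusion idea can be sketched as a complementary filter: propagate with the drift-prone image-based increments, then continuously pull toward the noisy but drift-free sensor orientation. This uses SciPy's Rotation class; the blend factor `alpha` and the world-to-camera composition convention are assumptions of this sketch, not the paper's filter.

```python
from scipy.spatial.transform import Rotation as R

def fuse_orientation(R_est, R_frame, R_sensor, alpha=0.02):
    """One complementary-filter step fusing image alignment with a sensor.

    R_est: previous fused absolute orientation; R_frame: frame-to-frame
    rotation from image alignment; R_sensor: noisy absolute sensor reading.
    """
    R_pred = R_frame * R_est                 # accurate short-term, drifts long-term
    err = R_sensor * R_pred.inv()            # residual rotation toward the sensor
    correction = R.from_rotvec(alpha * err.as_rotvec())
    return correction * R_pred               # small pull toward the absolute reference
```

A small `alpha` suppresses sensor jitter while still bounding drift; larger values track the absolute sensor more tightly at the cost of visible noise.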
Proceedings 2003 International Conference on Image Processing (Cat. No.03CH37429), 2003
Overdetermined linear systems with noise in all measurements are common in computer vision, and particularly in motion estimation. Maximum likelihood estimators have been proposed to solve such problems, but except for simple cases, the corresponding likelihood functions are extremely complex, and accurate confidence measures do not exist. This paper derives the form of simple likelihood functions for such linear systems in the general case of heteroscedastic noise. We also derive a new algorithm for computing maximum likelihood solutions based on a modified Newton method. The new algorithm is more accurate and exhibits more reliable convergence behavior than existing methods. We present an application to affine motion estimation, a simple heteroscedastic estimation problem.
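As a baseline for intuition: when the heteroscedastic noise is confined to the right-hand side, maximum likelihood reduces to weighted least squares, sketched below. The paper's modified Newton method addresses the harder case with noise in all measurements.

```python
import numpy as np

def heteroscedastic_wls(A, b, variances):
    """Weighted least squares for A x ~ b with a known variance per equation.

    Solves (A^T W A) x = A^T W b with W = diag(1 / variances). This is the
    simplest heteroscedastic estimator, not the paper's full ML solver.
    """
    w = 1.0 / np.asarray(variances)
    Aw = A * w[:, None]                      # row-scaled system: W A
    return np.linalg.solve(A.T @ Aw, Aw.T @ b)
```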
Simple cells in cat striate cortex are selective for spatial frequency. It is widely believed that this selectivity arises simply because of the way in which the neurons sum inputs from the lateral geniculate nucleus. Alternate models, however, advocate the need for frequency-specific inhibitory mechanisms to refine the spatial frequency selectivity. Indeed, simple cell responses are often suppressed by superimposing stimuli with spatial frequencies that flank the neuron's preferred spatial frequency. In this article, we compare two models of simple cell responses head-to-head. One of these models, the flanking-suppression model, includes an inhibitory mechanism that is specific to frequencies that flank the neuron's preferred spatial frequency. The other model, the nonspecific-suppression model, includes a suppressive mechanism that is very broadly tuned for spatial frequency. Both models also include a rectification nonlinearity, and both may include an additional accelerating (e.g., squaring) output nonlinearity. We demonstrate that both models can be consistent with the apparent flanking suppression. However, based on other experimental results, we argue that the nonspecific-suppression model is more plausible. We conclude that the suppression is probably broadly tuned for spatial frequency and that the apparent flanking suppression is actually due to distortions introduced by an accelerating output nonlinearity. © 1997
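A toy version of the shared response machinery makes the comparison concrete: both models rectify (and optionally square) the linear drive and differ only in whether the suppressive pool is flank-specific or broadband. The divisive form and constants below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def simple_cell_response(drive, pool_energy, sigma=0.1, squaring=True):
    """Suppressed simple-cell response (illustrative).

    drive: linear receptive-field output; pool_energy: suppressive pool,
    either flank-specific or broadband depending on the model variant.
    """
    r = np.maximum(drive, 0.0)                # half-wave rectification
    if squaring:
        r = r ** 2                            # accelerating output nonlinearity
    return r / (sigma ** 2 + pool_energy)     # divisive suppression
```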
19th Congress of the International Commission for Optics: Optics for the Quality of Life, 2003
A method to predict visual acuity in individual eyes has been developed, which combines realistic optical and neural models of early visual processing. Visual acuity is usually obtained as the outcome of a pattern recognition task. However, since the human eye is highly aberrated, standard pattern recognition methods cannot be used here, because they fail severely in the presence …
2012 19th IEEE International Conference on Image Processing, 2012
Computational photography techniques overcome limitations of traditional image sensors such as dynamic range and noise. Many computational imaging techniques have been proposed that process image stacks acquired using different exposure, aperture, or gain settings, but far less attention has been paid to determining the parameters of the stack automatically. In this paper, we propose a novel computational imaging system that automatically and efficiently computes the optimal number of shots and the corresponding exposure times and gains, taking into account characteristics of the scene and sensor. Our technique seamlessly integrates the use of multiple captures for both High Dynamic Range (HDR) imaging and denoising. The acquired images are then aligned, warped, and merged in the raw Bayer domain according to a statistical noise model of the sensor to produce an optimal, potentially HDR and denoised, image. The result is a fully automatic camera that constantly monitors the scene in front of it and decides how many images are required to capture it, without requiring the user to explicitly switch between different capture modalities.
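The merging stage can be sketched as inverse-variance weighting in the raw domain: each aligned frame contributes a scene-referred estimate weighted by its propagated noise variance, with saturated pixels excluded. The shot/read noise parameters below are placeholders, not a calibrated sensor model like the one used in the paper.

```python
import numpy as np

def merge_stack(raw_frames, exposures, gains, read_var=4.0, shot_gain=0.4, sat=4095):
    """Inverse-variance merge of an aligned raw exposure/gain stack (illustrative).

    Uses a simple noise model: var(DN) ~ shot_gain * DN + read_var.
    """
    num = np.zeros(raw_frames[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for raw, t, g in zip(raw_frames, exposures, gains):
        x = raw.astype(np.float64)
        radiance = x / (t * g)                         # scene-referred estimate
        var = (shot_gain * x + read_var) / (t * g) ** 2  # propagated variance
        w = np.where(x < 0.95 * sat, 1.0 / np.maximum(var, 1e-9), 0.0)
        num += w * radiance
        den += w
    return num / np.maximum(den, 1e-12)                # weighted mean radiance
```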
A Bayesian model of Snellen visual acuity (VA) has been developed that, as far as we know, is the first to include the three main stages of VA: (1) optical degradations, (2) neural image representation and contrast thresholding, and (3) character recognition. The retinal image of a Snellen test chart is obtained from experimental wave-aberration data. Then a subband image decomposition with a set of visual channels tuned to different spatial frequencies and orientations is applied to the retinal image, as in standard computational models of early cortical image representation. A neural threshold is applied to the contrast responses to include the effect of the neural contrast sensitivity. The resulting image representation is the basis of a Bayesian pattern-recognition method robust to the presence of optical aberrations. The model is applied to images containing sets of letter optotypes at different scales, and the number of correct answers is obtained at each scale; the final output is the decimal Snellen VA. The model has no free parameters to adjust. The main input data are the eye's optical aberrations; standard values are used for all other parameters, including the Stiles-Crawford effect, visual channels, and neural contrast threshold, when no subject-specific values are available. When aberrations are large, Snellen VA involving pattern recognition differs from grating acuity, which is based on a simpler detection (or orientation-discrimination) task and hence is basically unaffected by phase distortions introduced by the optical transfer function. A preliminary test of the model in one subject produced close agreement between actual measurements and predicted VA values. Two examples are also included: (1) application of the method to the prediction of VA in refractive-surgery patients and (2) simulation of the VA attainable by correcting ocular aberrations.
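The recognition stage can be reduced, for illustration, to template matching in the channel representation: with a uniform prior over letters and Gaussian response noise, the maximum a posteriori letter minimizes squared distance to the templates. The `templates` dictionary and noise level are assumptions of this sketch.

```python
import numpy as np

def recognize_letter(response, templates, noise_sigma=1.0):
    """Pick the most probable letter given channel responses (illustrative).

    response: thresholded channel representation of the degraded optotype.
    templates: dict mapping letter -> expected representation at the same scale.
    """
    log_post = {
        letter: -np.sum((response - tpl) ** 2) / (2 * noise_sigma ** 2)
        for letter, tpl in templates.items()
    }
    return max(log_post, key=log_post.get)   # MAP letter under a uniform prior
```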