Article

Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

1 School of Information Technology, Jilin Agricultural University, Changchun 130118, China
2 School of Opto-Electronic Engineering, Changchun University of Science and Technology, Changchun 130022, China
3 CSIRO Data61, PO Box 76, Epping, NSW 1710, Australia
4 College of Computer Science and Engineering, Changchun University of Technology, Changchun 130012, China
* Author to whom correspondence should be addressed.
Sensors 2017, 17(4), 785; https://doi.org/10.3390/s17040785
Submission received: 23 February 2017 / Revised: 31 March 2017 / Accepted: 4 April 2017 / Published: 6 April 2017
(This article belongs to the Section Physical Sensors)

Abstract:
An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.

1. Introduction

Because of the turbulent flow field, an imaging system with adaptive optics (AO) suffers interference from heat, thermal radiation, and image transmission. The observed image contains pointing jitter and is blurred, an effect known as the aero-optical effect [1,2]. However, the AO correction within the system is often only partial, and long-exposure images must be restored by postprocessing, where deconvolution is required to reach the diffraction limit. The AO images in a real system are usually represented by the convolution of an ideal image with a point spread function (PSF) [3]. The AO image is also contaminated by other noise sources, such as read-out, photon-counting, multiplicative, and compression noise [4,5]. In most practical applications, however, finding the real PSF is impossible and an estimation must be carried out. Because the observed AO image is corrupted by various sources of noise, the object image and the PSF should be estimated at the same time, a process called blind deconvolution [6].
Blind deconvolution is also very sensitive to the noise present in the observed images [7,8]. To handle this problem, researchers have developed AO image restoration algorithms for a variety of conditions. Using a total variation (TV) prior within a variational Bayesian framework, Babacan et al. [9] developed a blind deconvolution and parameter estimation algorithm (the VBBD-TV algorithm) that shows good restoration performance. Katsaggelos and Lay proposed a maximum likelihood (ML) blur identification and image restoration method using the Expectation-Maximization algorithm (the ML-EM algorithm) [10], which uses no knowledge about the type of distortion or its support region during iteration.
Zhu et al. presented an adaptive algorithm for image restoration using combined penalty functions (the CPF-adaptive algorithm) [11]. The algorithm adaptively estimates the smoothing and roughness parameters; its advantage is that it removes noise while restoring the image information around edges.
Zhang et al. proposed a multi-frame iterative blind deconvolution algorithm based on an improved expectation-maximization algorithm for AO image restoration (the RT-IEM algorithm) [12], which involves a more complex function to minimize. The algorithm formulates a cost function for the joint deconvolution of multi-frame AO images and estimates the regularization terms; its major limitation is computational speed. Robust blind deconvolution methods optimize a data fidelity term, which can be stabilized by additional regularization terms. Yap and Guan [13] proposed an approach for adaptive image regularization based on a neural network and a hierarchical cluster model. Its main advantage is that the empirical relationship between the optimized regularization vector and the local perception measure can be reused in the restoration of other degraded images; its major limitation is convergence speed.
Although a number of blind deconvolution methods have been proposed, a good knowledge of the PSF is always desirable in order to achieve an accurate restoration. For example, Deforest et al. [14] modeled the scattering portion of the transition region and coronal explorer (TRACE) PSF as the sum of a measured diffraction pattern and a circularly symmetric scattering profile. In addition to constructing a parametric model of the PSF using the specification instrument, Poduval et al. modeled the PSF with a diffraction kernel and an isotropic scattering term representing stray light [15]. In these works, the parametric methods have the limitation that they depend on a particular instrument and the PSF models need to be available [16].
Our goal in this paper is to propose a new deconvolution scheme based on our frame selection method and an initial PSF estimation approach. The frame selection method chooses the better quality images from the multi-frame short-exposure image sequence that is initially corrected by the AO system, using the variance of the AO image as the evaluation criterion. The turbulence effect of an AO imaging system refers to the fact that, when a reference star is taken as the center for wavefront detection, the wavefront fluctuation is no longer consistent with the detected wavefront in regions beyond an isoplanatic angle. If this effect is present in an imaging process, it affects the restoration of AO images. We therefore develop a method for initial PSF estimation, a parametric method based on the physics of the instrument, and provide a more precise PSF model that incorporates the instrument's properties. The initial estimate of the PSF model takes into account a priori information on the object and the AO system parameters, and a theoretical expression of the PSF is derived. The proposed initial PSF estimate is also evaluated by sensitivity analysis techniques, and an error analysis is carried out based on simulation results (with and without noise).
The outline of our paper is as follows. Section 2 describes the frame selection method and the PSF model. Section 3 presents our new algorithm for joint blind deconvolution based on the Poisson distribution and gives several methods for improved ML estimation and regularization. The restoration algorithm is applied to real AO images and the results are presented in Section 4. Finally, Section 5 concludes the paper.

2. Frame Selection Method and PSF Model

2.1. AO Image Degradation Model

Affected by atmospheric turbulence, observed images from astronomical imaging and remote sensing often suffer from image quality degradation. The readout noise satisfies an additive Gaussian model [17], and the degradation process can be represented as
$$i(x,y) = o(x,y) \otimes h(x,y) + c(x,y), \qquad (x,y) \in \Omega, \tag{1}$$
where $(x,y)$ is the spatial coordinate in the image, $o(x,y)$ is the original image, $h(x,y)$ refers to the PSF, $i(x,y)$ represents the degraded image acquired by the image sensor, $c(x,y)$ is the noise, $\Omega$ is the region of the object image, and $\otimes$ stands for the convolution operation.
A multi-frame AO image degradation model can be represented as
$$i_k(x,y) = o(x,y) \otimes h_k(x,y) + c(x,y), \qquad 1 \le k \le M, \tag{2}$$
where $i_k(x,y)$ denotes the $k$th frame of the observed AO images, $h_k(x,y)$ represents its corresponding PSF, and $M$ is the total number of frames. In this paper, we use the sequence of degraded multi-frame images $i_k$ ($k = 1, 2, \ldots, M$) of the same object to restore the original image $o$.
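The multi-frame degradation model of Equation (2) can be sketched numerically. The following is a minimal illustration, assuming periodic (FFT-based) convolution and purely additive Gaussian read-out noise; all function and parameter names here are our own, not from the paper:

```python
import numpy as np

def degrade(o, psfs, noise_sigma=0.0, seed=0):
    """Simulate Eq. (2): i_k = o (*) h_k + c, one frame per PSF.

    o: 2-D object image; psfs: list of 2-D PSFs (same shape as o,
    centred at index [0, 0] for circular FFT convolution);
    noise_sigma: std of the additive Gaussian read-out noise c.
    """
    rng = np.random.default_rng(seed)
    O = np.fft.fft2(o)
    frames = []
    for h in psfs:
        blurred = np.real(np.fft.ifft2(O * np.fft.fft2(h)))
        frames.append(blurred + noise_sigma * rng.standard_normal(o.shape))
    return frames

# Tiny usage example: a point object blurred by a normalized 2x2 box PSF.
o = np.zeros((16, 16)); o[8, 8] = 1.0
h = np.zeros((16, 16)); h[:2, :2] = 0.25       # PSF sums to 1 (energy conservation)
frames = degrade(o, [h], noise_sigma=0.0)
```

Because the PSF sums to 1, the total flux of the noiseless frame equals that of the object, which is a quick sanity check on any implementation of this model.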

2.2. Frame Selection Technique Based on Variance

In an experiment, the system can collect thousands of images of the same object. Under atmospheric turbulence of varying strength, the AO system compensates for the turbulence effects, and the image restoration results are directly affected by each frame. We therefore select the better quality frames in a sequence of images for image restoration [18].
A frame selection technique is used to select the high quality images from the multi-frame images that are initially corrected by an AO system. A number of evaluation criteria can be used to assess the quality of degraded images; in this paper, we use the variance of an image as the frame evaluation method. The variance of the $k$th frame of an observed degraded image $i_k(x,y)$ can be calculated by
$$S_k^2 = \frac{1}{N_1 N_2}\sum_{x=1}^{N_1}\sum_{y=1}^{N_2}\left(i_k(x,y) - \mu_k\right)^2, \tag{3}$$
where $N_1$ and $N_2$ represent the width and height of the observed image, respectively, and $\mu_k$ refers to the mean of an observed image $i_k(x,y)$:
$$\mu_k = \frac{1}{N_1 N_2}\sum_{x=1}^{N_1}\sum_{y=1}^{N_2} i_k(x,y). \tag{4}$$
The mean of the variances, $S_{\mathrm{mean}}^2$, of the multi-frame observed images is defined as
$$S_{\mathrm{mean}}^2 = \frac{1}{M}\sum_{k=1}^{M} S_k^2. \tag{5}$$
The constraint for frame selection is defined as
$$\left|S_k^2 - S_{\mathrm{mean}}^2\right| < T, \tag{6}$$
where $T$ is the threshold for frame selection. In this method, the variance $S_k^2$ of a degraded image should be close to the mean of the variances $S_{\mathrm{mean}}^2$; since $S_k^2$ may lie above or below $S_{\mathrm{mean}}^2$, we keep only the images whose variance falls in this middle band and remove those whose variance is too low or too high.
The variances of the observed multi-frame images form a sequence of data samples $\{S_1^2, S_2^2, \ldots, S_M^2\}$, and the variance $\sigma_s^2$ of these data samples is
$$\sigma_s^2 = \frac{1}{M}\sum_{k=1}^{M}\left(S_k^2 - S_{\mathrm{mean}}^2\right)^2. \tag{7}$$
In the frame selection method, we choose the threshold $T$ according to the variance $\sigma_s^2$. We carried out a large number of experiments with different values of $T$ on the thousands of degraded AO images collected for the same object, and found that a threshold of approximately 2–3 times $\sigma_s^2$ selects the better frames in a sequence of degraded AO images.
The frame selection process is based on an iterative algorithm. That is, when a better frame has been selected, the threshold T is recalculated. Then, the remaining frames will be checked by this method until the constraint is satisfied. The frame selection algorithm is shown in Algorithm 1.
Algorithm 1 Frame selection
  • Step 1: Initialize and calculate the variance set $\{S_1^2, S_2^2, \ldots, S_M^2\}$ for $\{i_k(x,y)\}$, $k = 1, 2, \ldots, M$;
  • Step 2: Calculate the following for iteration number $j = 1, 2, \ldots, MaxIterations$:
    • According to Equation (5), calculate the mean value $S_{\mathrm{mean}}^2$ of the sequence $\{S_1^2, S_2^2, \ldots, S_M^2\}$;
    • According to Equation (7), calculate the variance $\sigma_s^2$ of the sequence $\{S_1^2, S_2^2, \ldots, S_M^2\}$;
    • Set the threshold $T = 2.6\,\sigma_s^2$;
    • Check the constraint as in Equation (6);
    • Update the sequence $\{i_k(x,y)\}$ after frame selection. If $\{i_k(x,y)\}$ no longer changes, go to Step 3;
    • Increase $j$;
  • Step 3: If the number of iterations reaches $MaxIterations$ or $\{i_k(x,y)\}$ does not change, output the selected images $\{i_k(x,y)\}$ and finish; otherwise, go to Step 2.
In Algorithm 1, the threshold $T = 2.6\,\sigma_s^2$ was determined experimentally by running the frame selection method with different values of $T$ and selecting the one that gave the most visually acceptable results.
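Algorithm 1 can be sketched as follows. This is a minimal illustration assuming the frames are NumPy arrays, with the scale factor `c` exposed as a parameter (the paper fixes c = 2.6); the guard against a degenerate (empty) threshold pass is our own addition:

```python
import numpy as np

def select_frames(frames, c=2.6, max_iterations=50):
    """Variance-based frame selection (a sketch of Algorithm 1).

    frames: list of 2-D numpy arrays. A frame k is kept while its
    variance S_k^2 satisfies |S_k^2 - S_mean^2| < T with T = c * sigma_s^2
    (Eqs. (5)-(7)); the threshold is recomputed after every pass.
    """
    kept = list(frames)
    for _ in range(max_iterations):
        s2 = np.array([f.var() for f in kept])    # S_k^2, Eq. (3)
        s2_mean = s2.mean()                       # S_mean^2, Eq. (5)
        sigma_s2 = np.mean((s2 - s2_mean) ** 2)   # sigma_s^2, Eq. (7)
        t = c * sigma_s2                          # threshold, Step 2
        mask = np.abs(s2 - s2_mean) < t           # constraint, Eq. (6)
        if mask.all() or not mask.any():          # unchanged or degenerate: stop
            break
        kept = [f for f, m in zip(kept, mask) if m]
    return kept

# Toy example: nine frames of unit variance plus one high-variance outlier.
base = np.tile([[1.0, -1.0], [-1.0, 1.0]], (2, 2))
frames = [base.copy() for _ in range(9)] + [np.sqrt(3.0) * base]
selected = select_frames(frames)
```

In the toy example the variances are nine values of 1 and one of 3; the first pass gives $S_{\mathrm{mean}}^2 = 1.2$, $\sigma_s^2 = 0.36$, $T \approx 0.94$, so only the outlier violates Equation (6) and is removed.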

2.3. PSF Model

According to Veran et al. [19], the wavefront phase that is corrected by the AO system is still a quasi-stationary random process. The optical transfer function (OTF) of an AO system is defined as
$$H_a(u) = \exp\left(-\frac{1}{2}\,C_a(\lambda f u)\right), \tag{8}$$
where $f$ is the focal length of the imaging system and $C_a(\lambda f u)$ is the phase structure function of the wavefront, which can be expressed as
$$C_a(\lambda f u) = C_a(\Delta z) = \left\langle\left(\phi_{\mathrm{res}}(z) - \phi_{\mathrm{res}}(z+\Delta z)\right)^2\right\rangle, \tag{9}$$
where $\langle\cdot\rangle$ denotes the expectation over turbulence realizations, $z$ is a space vector within the telescope pupil, $\Delta z$ is a space interval vector within the telescope pupil, and $\phi_{\mathrm{res}}(z)$ denotes the residual wavefront phase after AO system correction.
According to the analysis in [20,21], the residual wavefront phase structure function $C_a(\Delta z)$ is the sum of an isoplanatic and an anisoplanatic structure function:
$$C_a(\Delta z) = C_0(\Delta z) + \tilde{C}(\Delta z, \theta), \tag{10}$$
where $\theta$ is the off-axis angle interval, $C_0(\Delta z)$ represents the on-axis isoplanatic structure function, and $\tilde{C}(\Delta z, \theta)$ represents the anisoplanatic structure function, which is defined as
$$\tilde{C}(\Delta z, \theta) = \left\langle\left(\phi_{\mathrm{res},\theta}(z) - \phi_{\mathrm{res},\theta}(z+\Delta z)\right)^2\right\rangle - \left\langle\left(\phi_{\mathrm{res},0}(z) - \phi_{\mathrm{res},0}(z+\Delta z)\right)^2\right\rangle. \tag{11}$$
According to Zernike polynomial theory, and assuming that the AO system fully compensates the first $n$ orders of the Zernike polynomials, the isoplanatic residual wavefront phase $\phi_{\mathrm{res},0}(z)$ and the anisoplanatic residual wavefront phase $\phi_{\mathrm{res},\theta}(z)$ are expressed as [22]
$$\phi_{\mathrm{res},\theta}(z) = 2\pi\left[\sum_{j=1}^{n}\left(a_j(\theta) - a_j(0)\right)Z_j(z) + \sum_{j=n+1}^{\infty} a_j(\theta)\,Z_j(z)\right], \qquad \phi_{\mathrm{res},0}(z) = 2\pi\sum_{j=n+1}^{\infty} a_j(0)\,Z_j(z), \tag{12}$$
where $a_j(0)$ and $a_j(\theta)$ represent the coefficients of the on-axis and off-axis Zernike polynomials, respectively, and $Z_j(z)$ is the $j$th order Zernike polynomial.
Under the anisoplanatic situation, the OTF of the imaging system is defined as
$$\tilde{H}(u,\theta) = \exp\left(-\frac{1}{2}\,\tilde{C}(\lambda f u, \theta)\right). \tag{13}$$
Similarly, modifying Equation (8) to account for the atmospheric environment, the OTF of an AO system is defined as
$$H_a(u) = H_0(u)\,\tilde{H}(u,\theta), \tag{14}$$
where $H_0(u)$ is the OTF of the on-axis isoplanatic component.
Therefore, an OTF of the whole AO system is
$$H(u,\theta) = H_a(u)\,H_{\mathrm{tel}}(u) = H_0(u)\,\tilde{H}(u,\theta)\,H_{\mathrm{tel}}(u), \tag{15}$$
where $H_{\mathrm{tel}}(u)$ is the telescope diffraction-limited OTF, given by [20]
$$H_{\mathrm{tel}}(u) = \frac{2}{\pi}\,\mathrm{circ}\!\left(\frac{u}{D}\right)\left[\arccos\left(\frac{u}{D}\right) - \frac{u}{D}\sqrt{1 - \frac{u^2}{D^2}}\right], \tag{16}$$
where $D$ is the telescope diameter and $\mathrm{circ}(u/D)$ returns a circular mask array with a circle of radius $u/D$; the default array size is $128 \times 128$, centered at the middle of the array.
Modifying Equation (11) according to [11], we obtain the expression for the anisoplanatic structure function:
$$\tilde{C}(\Delta z, \theta) = \sum_{k=1}^{n}\sum_{j=1}^{n} 2\left(\Gamma_a(0) - \Gamma_a(\theta)\right) Z_{k,j}(\Delta z), \tag{17}$$
where $\Gamma_a(\theta_1 - \theta_2) = \left\langle a_k(\theta_1)\,a_j(\theta_2)\right\rangle$ is the angular correlation function between the coefficients $a_k$ and $a_j$ of the Zernike polynomials, and the $Z_{k,j}(\Delta z)$ are functions defined as [21]
$$Z_{k,j}(\Delta z) = \frac{\int\left(Z_k(z) - Z_k(z+\Delta z)\right)\left(Z_j(z) - Z_j(z+\Delta z)\right)P(z)\,P(z+\Delta z)\,dz}{\int P(z)\,P(z+\Delta z)\,dz}, \tag{18}$$
where $P(z)$ is the telescope pupil function.
Based on Equations (13), (15) and (17), the OTF of an AO imaging system can be calculated. Therefore, the OTF can be transformed to PSF by the inverse Fourier transform, which is
$$\mathrm{PSF} = \mathcal{F}^{-1}\left\{H(u,\theta)\right\}. \tag{19}$$
In this paper, we adopt the above method to rebuild the estimated AO-PSF model as an initial estimate for AO image restoration. Considering that the isoplanatic angle is 2″, it is equivalent to a square isoplanatic area of 2″ × 2″. Assuming that the PSF is circularly symmetric, we build the PSF model for a field-of-view of 10″ with a 5 × 5 grid. Based on Equation (1) for the image deconvolution reconstruction, we describe our AO image restoration method in Section 3.
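As an illustration of Equations (16) and (19), the following sketch builds the diffraction-limited telescope OTF on a 128 × 128 grid and transforms it to a PSF by the inverse Fourier transform. The cutoff parameter is illustrative, not an instrument value, and the isoplanatic/anisoplanatic factors of Equation (15) are omitted:

```python
import numpy as np

def telescope_otf(n=128, d_cutoff=0.5):
    """Diffraction-limited telescope OTF of Eq. (16) on an n x n grid.

    d_cutoff plays the role of the cutoff frequency D, expressed as a
    fraction of the normalized frequency grid (an assumed value).
    """
    fx = np.fft.fftfreq(n)                    # normalized spatial frequencies
    u = np.sqrt(fx[:, None] ** 2 + fx[None, :] ** 2)
    r = np.clip(u / d_cutoff, 0.0, 1.0)       # u/D, clipped to the circ() support
    otf = (2.0 / np.pi) * (np.arccos(r) - r * np.sqrt(1.0 - r ** 2))
    otf[u > d_cutoff] = 0.0                   # circ(u/D): zero beyond the cutoff
    return otf

def otf_to_psf(otf):
    """PSF = F^{-1}{H(u, theta)}, Eq. (19); real part, shifted to the centre."""
    psf = np.fft.fftshift(np.real(np.fft.ifft2(otf)))
    return psf / psf.sum()                    # normalize the PSF energy to 1

otf = telescope_otf()
psf = otf_to_psf(otf)
```

At zero frequency the OTF of Equation (16) equals 1, so the resulting PSF integrates to unit energy, as required by the energy-conservation assumption used later in Section 3.2.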

3. Joint Blind Deconvolution Algorithm Based on Poisson Distribution

3.1. Estimators with Poisson Statistics

A blind deconvolution algorithm estimates both the object and the PSF at the same time. The noise in an AO image is mainly photon noise, which has a Poisson distribution; moreover, an AO astronomy image is dominated by a quite homogeneous background [23,24]. The intensity convolution based on the Shell theorem describes the non-coherent imaging system, with the object image $\{o(a,b), (a,b) \in \Omega\}$ a nonnegative function and the PSF $\{h_k((x,y)|(a,b)), (x,y) \in D\}$ ($D$ is the region of the observed image) affected by atmospheric turbulence. We then define the object intensity $e_k(x,y)$ at point $(x,y)$ in the $k$th frame as [25]
$$e_k(x,y) = \sum_{(a,b)\in\Omega} h_k((x,y)|(a,b))\,o(a,b) = \sum_{(a,b)\in\Omega} h_k(x-a,\,y-b)\,o(a,b), \qquad 1 \le k \le M, \tag{20}$$
where $(x,y) \in D$ and $\Omega$ is the region of the object image. In practice, the value of $e_k$ cannot be detected perfectly because it is always contaminated by some noise.
Considering the cases where the degraded image with atmosphere turbulence has noise, the intensity e k ( x , y ) at each point ( x , y ) in the observed image is a random variable that follows an independent Poisson distribution. The ML estimator based on Poisson distribution can be expressed as [23,25]
$$p\left(i_k(x,y)\,\middle|\,o, h_k\right) = \prod_{(x,y)\in\Omega}\frac{\left(o(x,y)\otimes h_k(x,y)\right)^{i_k(x,y)}}{i_k(x,y)!}\,\exp\left(-\sum_{(x,y)\in\Omega} o(x,y)\otimes h_k(x,y)\right). \tag{21}$$
Substituting the intensity $e_k(x,y)$ of Equation (20) into the ML estimator of Equation (21) gives
$$p\left(i_k(x,y)\,\middle|\,o, h_k\right) = \frac{\left(e_k(x,y)\right)^{i_k(x,y)}\exp\left(-e_k(x,y)\right)}{i_k(x,y)!}. \tag{22}$$
Assuming that the pixels of the observed image are independent of each other [15], the joint likelihood function can be defined as
$$p\left(i_k(x_1 \ldots x_{N_1},\, y_1 \ldots y_{N_2})\,\middle|\,o, h_k\right) = \prod_{(x,y)\in\Omega}\frac{\left(e_k(x,y)\right)^{i_k(x,y)}\exp\left(-e_k(x,y)\right)}{i_k(x,y)!}. \tag{23}$$
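The joint likelihood is most conveniently evaluated in log form. The following sketch computes the Poisson log-likelihood summed over frames as in Equation (26) below, dropping the factorial term exactly as the paper folds it into a constant; the function name and FFT-based convolution are our own choices:

```python
import numpy as np

def joint_log_likelihood(frames, o, psfs, eps=1e-12):
    """Poisson joint log-likelihood, up to an additive constant.

    frames: observed images i_k; o: object estimate; psfs: PSFs h_k,
    FFT-convolved with o to form the intensities e_k of Eq. (20).
    The log(i_k!) term is omitted (it is constant in o and h_k).
    """
    O = np.fft.fft2(o)
    ll = 0.0
    for i_k, h_k in zip(frames, psfs):
        e_k = np.real(np.fft.ifft2(O * np.fft.fft2(h_k)))
        e_k = np.maximum(e_k, eps)             # keep log() well defined
        ll += np.sum(-e_k + i_k * np.log(e_k))
    return ll

# With a delta-function PSF, e_k = o, so the likelihood peaks at o = i_k.
delta = np.zeros((8, 8)); delta[0, 0] = 1.0
i_obs = np.full((8, 8), 4.0)
ll_true = joint_log_likelihood([i_obs], i_obs, [delta])
ll_off = joint_log_likelihood([i_obs], 2.0 * i_obs, [delta])
```

For a Poisson model the per-pixel term $-e + i\log e$ is maximized at $e = i$, so `ll_true` exceeds `ll_off`, which is a useful unit test for any implementation of this estimator.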

3.2. Algorithm Implementation for AO Image Restoration

To estimate $\hat{o}$ and $\hat{h}$, an iterative deconvolution algorithm can be used. The estimates $\hat{o}$ and $\hat{h}$ follow the maximum likelihood estimation criterion, and the cost function can be written as
$$J_1(o, h_k) = \arg\min\left(-\ln p\left(i_k(x_1 \ldots x_{N_1},\, y_1 \ldots y_{N_2})\,\middle|\,o, h_k\right)\right). \tag{24}$$
Taking the logarithm of Equation (23), the log-likelihood function is
$$\ln p\left(i_k(x_1 \ldots x_{N_1},\, y_1 \ldots y_{N_2})\,\middle|\,o, h_k\right) = -\sum_{(x,y)\in\Omega} e_k + \sum_{(x,y)\in\Omega} i_k \log e_k - \sum_{(x,y)\in\Omega}\log\left(i_k(x,y)!\right). \tag{25}$$
It is assumed that the $M$ observed images $i_1, i_2, \ldots, i_M$ are statistically independent [26], so the log-likelihood function of the multi-frame joint estimator is
$$\ln p\left(\{i_k\}\,\middle|\,o, \{h_k\}\right) = -\sum_{k=1}^{M}\sum_{(x,y)\in\Omega} e_k + \sum_{k=1}^{M}\sum_{(x,y)\in\Omega} i_k\log e_k + b. \tag{26}$$
The last term of Equation (25) is a constant, and we represent it as b in Equation (26).
Based on Equations (25) and (26), we modify the cost function $J_1(o, h_k)$ as
$$J_1(o, \{h_k\}) = \arg\min\left(-\ln p\left(\{i_k\}\,\middle|\,o, \{h_k\}\right)\right). \tag{27}$$
However, it has been shown that the restoration algorithm does not converge to the solution because the noise is amplified during iterations [25]. To avoid excessive noise amplification, regularization on the cost function is required. We combine Equation (27) with regularization parameters to establish the cost function for joint deconvolution for multi-frame AO images, which is
$$J_{\mathrm{multi}}(o, \{h_k\}) = J_1(o, \{h_k\}) + \left|\nabla\hat{o}(x,y)\right|\gamma(\nabla v), \tag{28}$$
where the second term is a regularization term that can be defined as
$$\left|\nabla\hat{o}(x,y)\right| = \sqrt{\left(o_x(x,y)\right)^2 + \left(o_y(x,y)\right)^2} \tag{29}$$
and
$$\gamma(\nabla v) = \exp\left(-\frac{|\nabla v|^2}{2\xi^2}\right), \tag{30}$$
where $\gamma(\nabla v)$ is a variable regularization coefficient associated with the gradient at each point, with $0 < \gamma(\nabla v) \le 1$ [12,27]; the parameter $\xi$ is the smoothing coefficient; and $\nabla v$ is the gradient at point $v(x,y)$, where $v$ is a gray pixel value.
To minimize the cost function, Equation (28) is differentiated with respect to $o(x,y)$ and $h_k(x,y)$, and the derivatives are set equal to zero:
$$\frac{\partial J_{\mathrm{multi}}(o, \{h_k\})}{\partial o(x,y)} = 0, \tag{31}$$
$$\frac{\partial J_{\mathrm{multi}}(o, \{h_k\})}{\partial h_k(x,y)} = 0. \tag{32}$$
To take the derivative with respect to $o(x,y)$ conveniently, $e_k(x,y)$ is expressed through Equation (20) and substituted into Equation (26). According to Equations (26), (28), and (31), the derivation is
$$\frac{\partial J_{\mathrm{multi}}(o, \{h_k\})}{\partial o(x,y)} = \frac{\partial J_1(o, \{h_k\})}{\partial o(x,y)} + \frac{\partial\left(\left|\nabla\hat{o}(x,y)\right|\gamma(\nabla v)\right)}{\partial o(x,y)} = \sum_{k=1}^{M}\sum_{(x,y)\in\Omega} h_k(x,y) - \sum_{k=1}^{M}\sum_{(x,y)\in\Omega}\frac{i_k(x,y)\,h_k(x,y)}{\sum_{(x,y)\in\Omega} h_k(x,y)\,o(x,y)} - \frac{1}{\beta}\,\gamma(\nabla v)\,\mathrm{div}\left(\frac{\nabla\hat{o}(x,y)}{\left|\nabla\hat{o}(x,y)\right|}\right) = 0. \tag{33}$$
By conservation of image energy, the PSF of each frame of the turbulence-degraded images sums to 1, namely $\sum_{(x,y)\in\Omega} h_k(x,y) = 1$. Since the image energy before and after degradation remains unchanged, we obtain
$$\frac{1}{1 - \frac{1}{\beta}\,\gamma(\nabla v)\,\mathrm{div}\left(\frac{\nabla\hat{o}(x,y)}{\left|\nabla\hat{o}(x,y)\right|}\right)}\sum_{k=1}^{M}\sum_{(x,y)\in\Omega}\frac{i_k(x,y)\,h_k(x,y)}{\sum_{(x,y)\in\Omega} h_k(x,y)\,o(x,y)} = 1. \tag{34}$$
Therefore, an iterative relationship can be established as follows:
$$\hat{o}^{(n+1)}(x,y) = \hat{o}^{\,n}(x,y)\sum_{k=1}^{M}\sum_{(x,y)\in\Omega}\frac{i_k(x,y)\,\hat{h}_k^{\,n}(x,y)}{\sum_{(x,y)\in\Omega}\hat{h}_k^{\,n}(x,y)\,\hat{o}^{\,n}(x,y)} \times \frac{1}{1 - \frac{1}{\beta}\,\gamma(\nabla v)\,\mathrm{div}\left(\frac{\nabla\hat{o}^{\,n}(x,y)}{\left|\nabla\hat{o}^{\,n}(x,y)\right|}\right)}, \tag{35}$$
where $\hat{o}^{\,n}(x,y)$ and $\hat{h}_k^{\,n}(x,y)$ are the results of the $n$th iteration.
To take the derivative with respect to $h_k(x,y)$ conveniently, $e_k(x,y)$ is again expressed through Equation (20). According to Equations (28) and (32), and noting that the regularization term does not depend on $h_k$, the solution is
$$\frac{\partial J_{\mathrm{multi}}(o, \{h_k\})}{\partial h_k(x,y)} = \frac{\partial J_1(o, \{h_k\})}{\partial h_k(x,y)} + 0 = \sum_{(x,y)\in\Omega} o(x,y) - \sum_{(x,y)\in\Omega}\frac{i_k(x,y)\,o(x,y)}{\sum_{(x,y)\in\Omega} h_k(x,y)\,o(x,y)} = 0. \tag{36}$$
Because the object image lies within the support region of the observed image, the original object image $o(x,y)$ is normalized in advance, namely $\sum_{(x,y)\in\Omega} o(x,y) = 1$. Therefore, within the support region of the observed image, the energy sum of the original object image is 1, giving
$$\sum_{(x,y)\in\Omega}\frac{i_k(x,y)\,o(x,y)}{\sum_{(x,y)\in\Omega} h_k(x,y)\,o(x,y)} = 1. \tag{37}$$
When the object $\hat{o}^{(n+1)}(x,y)$ is estimated by Equation (35), we can build the iterative relationship for the new PSF, $\hat{h}_k^{(n+1)}(x,y)$:
$$\hat{h}_k^{(n+1)}(x,y) = \hat{h}_k^{\,n}(x,y)\sum_{(x,y)\in\Omega}\frac{\hat{o}^{\,n}(x,y)\,i_k(x,y)}{\sum_{(x,y)\in\Omega}\hat{h}_k^{\,n}(x,y)\,\hat{o}^{\,n}(x,y)}, \qquad \hat{h}_k^{(n+1)}(x,y) \leftarrow \frac{\hat{h}_k^{(n+1)}(x,y)}{\sum_{(x,y)\in\Omega}\hat{h}_k^{(n+1)}(x,y)}, \tag{38}$$
where the second relation normalizes the PSF energy to 1, and $\beta$ is a constant.
In summary, the object image $\hat{o}$ and the estimate of the PSF model, $\hat{h}$, are obtained by iterating Equations (35) and (38). The specific implementation steps of our algorithm are summarized in Algorithm 2.
Algorithm 2 Steps for our proposed restoration algorithm
  • Step 1: Initialization. According to the method described in Section 2.2, the $M$ frames of images $(i_1, i_2, \ldots, i_M)$ are obtained with our frame selection technique. The initial object image is $\hat{o}^0 = (i_1 + i_2 + \cdots + i_M)/M$;
  • Step 2: Obtain the initial estimate of the PSF model, $\hat{h}^0$, according to Equation (19), with the algorithm described in Section 2.3;
  • Step 3: Calculate parameter values. $\beta$ is a constant. According to the scheme and formulas described in Section 3.2, estimate the regularization parameter $\gamma(\nabla v)$;
  • Step 4: Iterate through $j = 1, 2, \ldots, MaxIteration$ ($MaxIteration = 200$ or $300$):
    • Set the inner loop counter of the PSF estimation: $h\_count = 0$;
    • Iterate the PSF estimation, $p = 0, 1, \ldots, Max\_count$:
      (a) Complete the PSF estimation $\hat{h}^{(p)}$ using Equation (38);
      (b) Increase $h\_count$; increase $p$;
      (c) Check the loop variable $p$: if $p < Max\_count$, continue; otherwise, go to Step 4 (3).
    • Set the inner loop counter of the object estimation: $o\_count = 0$;
    • Iterate the object estimation, $q = 0, 1, \ldots, Max\_count$:
      (a) Use the conjugate gradient method to optimize Equation (35) and obtain the object image estimate $\hat{o}^{(q)}$;
      (b) Increase $o\_count$; increase $q$;
      (c) Check the loop variable $q$: if $q < Max\_count$, continue; otherwise, go to Step 4 (5);
    • Check whether the outer loop is finished: if $j > MaxIteration$, go to Step 5;
    • Increase $j$ and return to Step 4 (1).
  • Step 5: If $j > MaxIteration$, output the object estimate $\hat{o}$ and the PSF estimate $\hat{h}$, and end the algorithm; otherwise go to Step 4.

4. Experimental Results

In this section, we use simulated images and a set of real AO images to verify our proposed restoration algorithm. We present experimental results on AO images taken on 3 December 2006 by the 1.2 m AO telescope of the Chinese Academy of Sciences at Yunnan Observatory (Yunnan, China), which uses a Hartmann–Shack wavefront sensor. The second set of AO images is from observations of celestial bodies on 13 January 2007. The values of the main parameters of the AO imaging system are shown in Table 1.
To verify the restoration effect and the reliability of our algorithm, we implemented it in MATLAB 6.5 (The MathWorks, Natick, MA, USA) and tested it on a 2.5 GHz Intel i5-2525M CPU (Intel Corporation, Santa Clara, CA, USA) with 4.0 GB of RAM running the 32-bit Windows 7 operating system (Microsoft Corporation, Redmond, WA, USA).
In order to evaluate the image restoration result, we adopt the objective evaluation criteria of Normalized Mean Square Error ($E_{\mathrm{NMSE}}$) [28] and Full Width at Half Maximum ($E_{\mathrm{FWHM}}$) [29] in this paper. The $E_{\mathrm{NMSE}}$ is defined as
$$E_{\mathrm{NMSE}} = \frac{\sum_{x=1}^{N_1}\sum_{y=1}^{N_2}\left(\hat{o}(x,y) - i(x,y)\right)^2}{\sum_{x=1}^{N_1}\sum_{y=1}^{N_2}\left(i(x,y)\right)^2}. \tag{39}$$
$E_{\mathrm{NMSE}}$ is an estimator of the overall deviation between the original image and the object image. A lower $E_{\mathrm{NMSE}}$ value indicates that the deviation between the original image and the object image is small and a better restored image quality is obtained.
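Equation (39) is a one-line computation; a sketch with illustrative inputs:

```python
import numpy as np

def nmse(o_hat, i):
    """Normalized mean square error E_NMSE of Eq. (39)."""
    return np.sum((o_hat - i) ** 2) / np.sum(i ** 2)

# Illustrative checks: identical images give 0; an all-zero estimate gives 1.
i = np.array([[2.0, 0.0], [0.0, 1.0]])
assert nmse(i, i) == 0.0
assert np.isclose(nmse(np.zeros_like(i), i), 1.0)
```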
$E_{\mathrm{FWHM}}$ is a parameter commonly used to describe the width of a “bump” on a curve or function. It is given by the distance between the points on the curve at which the function reaches half of its maximum value. For a point-like object in an astronomical image, $E_{\mathrm{FWHM}}$ is twice the pixel distance between the image peak value and half of its peak value [29]. It is used to measure the quality of an image in astronomical observations. The $E_{\mathrm{FWHM}}$ components along the x- and y-directions of an image are combined for evaluation with the following formula:
$$E_{\mathrm{FWHM}} = \sqrt{E_{\mathrm{FWHM}x}^2 + E_{\mathrm{FWHM}y}^2}, \tag{40}$$
where $E_{\mathrm{FWHM}x}$ and $E_{\mathrm{FWHM}y}$ are the FWHM values measured along the x- and y-directions, respectively. The closer the value of $E_{\mathrm{FWHM}}$ is to the optical diffraction limit of the AO imaging system, the better the image restoration quality.
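A sub-pixel estimate of $E_{\mathrm{FWHM}}$ can be obtained by interpolating the half-maximum crossings of the image profiles through the peak and combining them as in Equation (40). The linear interpolation refinement here is our own choice; the paper's pixel-distance definition corresponds to the uninterpolated version:

```python
import numpy as np

def fwhm_1d(profile):
    """FWHM of a 1-D intensity profile via linear interpolation at half max."""
    profile = np.asarray(profile, dtype=float)
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]       # samples at or above half max
    left, right = above[0], above[-1]

    def cross(i0, i1):
        # interpolate the half-max crossing between samples i0 and i1
        y0, y1 = profile[i0], profile[i1]
        return float(i0) if y1 == y0 else i0 + (half - y0) / (y1 - y0) * (i1 - i0)

    lx = cross(left - 1, left) if left > 0 else float(left)
    rx = cross(right + 1, right) if right < len(profile) - 1 else float(right)
    return rx - lx

def fwhm_2d(img):
    """Combine the x- and y-direction widths through the peak, Eq. (40)."""
    peak = np.unravel_index(np.argmax(img), img.shape)
    return np.hypot(fwhm_1d(img[peak[0], :]), fwhm_1d(img[:, peak[1]]))

# Illustrative check: a symmetric triangular bump of height 2 has FWHM 2.
p = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
assert fwhm_1d(p) == 2.0
```

For a sampled Gaussian of standard deviation $\sigma$, this estimate is close to the analytic value $2\sigma\sqrt{2\ln 2} \approx 2.355\,\sigma$.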

4.1. The Restoration Experiment on Simulated Images

In the simulation experiments in this paper, the original images (“House”, “Chemical Plant”, and “The Little Girl”, each 256 × 256 pixels) are selected from [30]. Ten frames of simulated degraded images are generated by the image degradation software of the Key Optical Laboratory of the Chinese Academy of Sciences [25]; Gaussian white noise is then added, and the signal-to-noise ratio (SNR) of the images is set to 20 dB, 25 dB, and 30 dB under real AO imaging conditions including atmospheric turbulence. The equivalent parameters of the four-layer turbulence model are the same as those in Reference [22], and we set the parameters to match the 1.2 m AO telescope at the observatory in Yunnan, China. The main parameters of the telescope imaging system, including the atmospheric coherence length, are shown in Table 1. The simulated degraded images are assumed to be at distances such that an imaging angle of one arcsecond corresponds to degradation of a certain linear scale. Experiments on the ML-EM algorithm [10], the CPF-adaptive algorithm [11], the RT-IEM algorithm [12], the VBBD-TV algorithm [9], and our algorithm are compared.
Figure 1 shows the original images and the simulated multi-frame degraded images. Figure 1a shows the original images, and example frames with turbulence and noise are shown in Figure 1b–d (to save space, only three frames are shown). The comparison results of the five algorithms are shown in Figure 2. In our algorithm, the parameters β = 1.31 and ξ = 1.35 were selected experimentally for visually acceptable results. Table 2 gives the results of our algorithm and those of the ML-EM [10], CPF-adaptive [11], RT-IEM [12], and VBBD-TV [9] methods; the number of iterations for all algorithms is 300. Our algorithm ranks at the top of the list, and the results demonstrate the superiority of our method.

4.2. Restoration Experiments on Binary-Star AO Images

For adaptive-optics-compensated images, the degradation is dominated by the residual fine-scale wavefront error, which requires several parameters to characterize and varies from frame to frame. We now show the restoration results of our proposed algorithm on binary-star AO images taken by the 1.2 m AO imaging system on 3 December 2006. The adaptive optics bench is a system with 88 actuators and a Shack–Hartmann wavefront sensor (64 sub-apertures). The values of the main parameters of the AO imaging system are shown in Table 1. The estimated PSF model for an AO image is obtained under the following conditions: the full field-of-view of the system is 20″; the Zernike model for the fully corrected turbulence effect uses the first 35 orders; the field-of-view is 10″; the size of the space-variant PSF is 5 × 5 pixels; and the isoplanatic angle θ is 2″. The AO image restoration experiments based on the ML-EM algorithm [10], the CPF-adaptive algorithm [11], the RT-IEM algorithm [12], the VBBD-TV algorithm [9], and our algorithm are compared.
We apply our frame selection technique, introduced in Section 2.2, and select 50 frames from the 200 observed AO images as the input images for blind deconvolution. Figure 3 shows the observed multi-frame degraded binary-star AO images (only nine frames are shown). To obtain better restoration results, our frame selection technique is used to select a group of high-quality degraded AO images and improve the stability of the algorithm.
In Figure 3, a frame selection experiment based on image variance is carried out for the binary-star AO images, each of size 132 × 121 pixels. The variance of each degraded image is calculated using Equation (5) and is given in the caption of Figure 3. Most of the variance values are close to each other, lying roughly between 495 and 552. Affected by Gaussian noise, the 3rd frame has a variance of 566.23, which is higher than that of all the others. The variance of the 7th frame, 336.34, is much lower than that of the other images because of the PSF blurring effect. Therefore, the 3rd and 7th frames are eliminated for estimation purposes.
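The selection step can be sketched as below. The paper's Equation (5) and its threshold are not reproduced here; this sketch substitutes a median-absolute-deviation outlier test (the factor k = 5 is an assumption), which likewise rejects frames whose variance is inflated by noise or depressed by blur.

```python
import numpy as np

def select_frames(frames, k=5.0):
    """Keep frames whose variance passes a median-absolute-deviation test.

    Stand-in for the paper's variance criterion (Equation (5)); the MAD
    rule and the factor k are illustrative assumptions."""
    v = np.array([np.var(f) for f in frames])
    med = np.median(v)
    mad = np.median(np.abs(v - med)) + 1e-12
    keep = np.abs(v - med) <= k * 1.4826 * mad  # 1.4826: MAD-to-sigma factor
    return keep, v

rng = np.random.default_rng(0)
base = rng.random((32, 32))
frames = [base + rng.normal(0.0, 0.02, base.shape) for _ in range(7)]
frames.append(base + rng.normal(0.0, 0.6, base.shape))  # noise-inflated variance
frames.append(np.full_like(base, base.mean()))          # blur-depressed variance
keep, v = select_frames(frames)
```

On this synthetic set, the heavily noisy frame and the flat (over-blurred) frame are rejected while the ordinary frames are retained, mirroring the elimination of the 3rd and 7th frames above.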
To test the performance of our proposed algorithm, image restoration experiments were carried out on the selected AO images shown in Figure 3 (Frames 1, 2, 4, 5, 6, 8, and 9 were selected). We choose β = 1.23, and the parameter for the regularization function is ξ = 1.35 with 300 iterations. Figure 4 shows the restoration results of the five algorithms on the degraded AO images. Figure 4a is the restored image from the ML-EM algorithm with E_NMSE = 0.0413 and E_FWHM = 5.73; Figure 4b from the CPF-adaptive algorithm with E_NMSE = 0.0386 and E_FWHM = 5.97; Figure 4c from the RT-IEM algorithm with E_NMSE = 0.0363 and E_FWHM = 5.99; Figure 4d from the VBBD-TV algorithm with E_NMSE = 0.0358 and E_FWHM = 5.87; and Figure 4e from our algorithm with E_NMSE = 0.0301 and E_FWHM = 6.13. The restoration results are very close to the diffraction limit of the 1.2 m AO telescope. Figure 4f shows the PSF estimated by our algorithm.
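The two figures of merit used above can be computed as follows. These are the standard formulations (NMSE relative to a reference image, and FWHM of a 1-D star profile with linear interpolation at the half-maximum crossings); the paper's exact normalization for E_NMSE is not reproduced, so treat this as a common-form sketch.

```python
import numpy as np

def nmse(restored, reference):
    """Normalized mean squared error; lower is better."""
    restored, reference = np.asarray(restored, float), np.asarray(reference, float)
    return np.sum((restored - reference) ** 2) / np.sum(reference ** 2)

def fwhm(profile):
    """Full width at half maximum (pixels) of a 1-D star profile,
    with linear interpolation at the two half-maximum crossings."""
    p = np.asarray(profile, float)
    half = p.max() / 2
    above = np.where(p >= half)[0]
    left, right = above[0], above[-1]

    def crossing(i_out, i_in):
        # interpolate between a sample below half (i_out) and one above (i_in)
        return i_out + (half - p[i_out]) / (p[i_in] - p[i_out]) * (i_in - i_out)

    lo = crossing(left - 1, left) if left > 0 else float(left)
    hi = crossing(right + 1, right) if right < p.size - 1 else float(right)
    return hi - lo

# A Gaussian profile with sigma = 2 has an analytic FWHM of 2.355 * sigma.
x = np.arange(21)
profile = np.exp(-((x - 10.0) ** 2) / (2 * 2.0 ** 2))
```

A restored star whose measured FWHM approaches the telescope's diffraction-limited width indicates that the deconvolution has recovered most of the lost resolution, which is how E_FWHM is read in Table 3.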
To further verify our algorithm, we compare it with four other algorithms (the ML-EM, CPF-adaptive, RT-IEM, and VBBD-TV algorithms) on the binary-star images. The objective evaluation criteria for the experimental results, E_NMSE, E_FWHM, and the computation time, are shown in Table 3; the number of iterations for all five algorithms is 300.
Compared with the ML-EM, CPF-adaptive, RT-IEM, and VBBD-TV algorithms, the E_NMSE measure of our algorithm is decreased by 19%, 4.2%, 2.8%, and 7.7%, respectively. Table 3 shows that our algorithm obtains an E_FWHM that is closer to the diffraction limit of the AO system, as well as lower E_NMSE measures. Therefore, our proposed method can effectively restore binary-star images degraded by the effect of atmospheric turbulence on the optical telescope. The computational load of our method is slightly higher than that of the other four restoration algorithms, and we plan to further improve its efficiency in the future.
Moreover, the E_NMSE and E_FWHM measures of the best restored images from the five algorithms are compared in Figure 5. Figure 5a plots the E_NMSE measure versus the iteration number for the five algorithms on the binary-star AO images, and Figure 5b plots the corresponding E_FWHM results. Figure 5b shows that our algorithm yields E_FWHM values close to the diffraction limit of the AO system when applied to binary-star images, confirming that our restoration algorithm can effectively restore degraded AO images.
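All five methods compared above iterate Poisson maximum-likelihood updates of one form or another. As a point of reference, a minimal multi-frame Richardson–Lucy iteration (the unregularized Poisson ML update averaged over frames, with the PSFs taken as known) can be sketched as follows; the paper's blind algorithm additionally regularizes the object and updates the PSF of each frame, which this sketch omits.

```python
import numpy as np

def multiframe_rl(frames, psfs, iters=100, eps=1e-12):
    """Multi-frame Richardson-Lucy: the unregularized Poisson maximum-
    likelihood update, averaged over the frames. PSFs are assumed known."""
    otfs = [np.fft.fft2(np.fft.ifftshift(h)) for h in psfs]
    blur = lambda a, otf: np.real(np.fft.ifft2(np.fft.fft2(a) * otf))
    x = np.clip(np.mean(frames, axis=0), eps, None)
    for _ in range(iters):
        ratio = np.zeros_like(x)
        for y, otf in zip(frames, otfs):
            # correction factor: correlate (data / model) with the PSF
            ratio += blur(y / np.clip(blur(x, otf), eps, None), np.conj(otf))
        x = x * ratio / len(frames)
    return x

# Demo on a synthetic binary star blurred by a centred Gaussian PSF.
yy, xx = np.mgrid[0:32, 0:32]
psf = np.exp(-((yy - 16.0) ** 2 + (xx - 16.0) ** 2) / (2 * 1.5 ** 2))
psf /= psf.sum()
truth = np.zeros((32, 32))
truth[12, 12], truth[20, 21] = 1.0, 0.6
otf = np.fft.fft2(np.fft.ifftshift(psf))
frame = np.real(np.fft.ifft2(np.fft.fft2(truth) * otf))  # noise-free frame
restored = multiframe_rl([frame, frame], [psf, psf])
```

The multiplicative update keeps the estimate non-negative, which is the property that makes Poisson-likelihood iterations attractive for photon-limited star images.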

4.3. Sensitivity Analysis

Sensitivity analysis is the study of how the uncertainty in the output of a model can be apportioned to different sources of variation in the model inputs [31]. In order to evaluate the benefits of using the proposed initial PSF estimation, we computed the sensitivity indices of the PSF model. In an AO image restoration process, we initialize the restoration algorithm with a perfect PSF. In this work, extensive simulations are performed to test the behaviour of the proposed PSF estimation. In a realistic case, the degraded AO image is affected by the turbulence effect, photon noise, and detector noise; we assume that the dominant noise is photon noise. The simulation conditions are as follows: the simulation is performed at three photon noise levels of 10^3, 10^5, and 10^7 photons, and the angle θ varies from a minimum of 1 arcsec to a maximum of 21 arcsec. Figure 6 shows the error in the estimation of the PSF model for a noise-free image and for photon noise levels of 10^3, 10^5, and 10^7 photons in the whole image, respectively. The errors plotted in Figure 6 are given as percentages of the magnitude estimation of star images using an isoplanatic deconvolution process; in an isoplanatic deconvolution, the PSF is constant over the whole field of view [21]. For a given noise level, a limit angle θ can be defined below which the estimation error is lower than the noise error. The lower the noise level, the greater the limit angle (see Figure 6). For instance, in the case of 10^3 photons, the error becomes greater than the noise error for θ ≥ 7 arcsec. This limit angle can be used to define an isoplanatic angle for our AO image restoration method.
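Reading the limit angle off Figure 6 amounts to finding where the PSF-model error curve first rises above the noise-error floor. A sketch, with a hypothetical quadratic error curve standing in for the measured ones:

```python
import numpy as np

def limit_angle(theta, model_err, noise_err):
    """Smallest angle at which the model error exceeds the noise error,
    or None if it never does."""
    over = np.flatnonzero(np.asarray(model_err) > noise_err)
    return theta[over[0]] if over.size else None

theta = np.arange(1, 22)                 # 1..21 arcsec, as in the simulation
model_err = 20.0 * (theta / 21.0) ** 2   # hypothetical error curve (%)
angle = limit_angle(theta, model_err, noise_err=5.0)
```

With this made-up curve, a higher noise floor pushes the crossing point outward, which is the qualitative behaviour described for Figure 6.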
Furthermore, we perform a sensitivity analysis on the PSF starting values for our proposed AO image restoration algorithm. To validate the sensitivity behaviour of the PSF model, the experiment is performed under the following conditions:
  • The noise root mean square (RMS) varies from a minimum of 1% to a maximum of 20% of the image value;
  • Fifty noise realizations are calculated for each RMS noise value;
  • The simulation is performed on three sub-images of varying size: a 64 × 64 pixel central region of the image, a 128 × 128 pixel central region of the image, and the whole 256 × 256 pixel image.
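The sweep above can be organized as follows. The PSF-parameter estimator here (an intensity-weighted RMS radius on a synthetic 64 × 64 Gaussian spot) is a placeholder for the paper's initial PSF estimation; only the loop structure — noise levels from 1% to 20% of the image maximum, 50 realizations each — follows the stated conditions.

```python
import numpy as np

def spot_width(img):
    """Placeholder PSF-parameter estimator: intensity-weighted RMS radius.
    The paper's initial PSF estimation model is more elaborate."""
    img = np.clip(img, 0.0, None)
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    total = img.sum()
    cy, cx = (yy * img).sum() / total, (xx * img).sum() / total
    return np.sqrt((((yy - cy) ** 2 + (xx - cx) ** 2) * img).sum() / total)

rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:64, 0:64]
truth = np.exp(-((yy - 32.0) ** 2 + (xx - 32.0) ** 2) / (2 * 3.0 ** 2))
ref = spot_width(truth)

levels = np.linspace(0.01, 0.20, 5)      # noise RMS: 1% .. 20% of the maximum
rms_err = []
for lvl in levels:
    est = [spot_width(truth + rng.normal(0.0, lvl * truth.max(), truth.shape))
           for _ in range(50)]            # 50 noise realizations per level
    rms_err.append(np.sqrt(np.mean((np.asarray(est) - ref) ** 2)))
```

Plotting `rms_err` against `levels` for each sub-image size yields a curve of the same shape as Figure 7: the estimation error grows with the noise level and shrinks as more data are used.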
Figure 7 shows the RMS error in the estimation of the initial PSF values for different noise levels and data sizes. It indicates that the initial value of our PSF estimation helps provide a less sensitive final PSF estimate in our proposed restoration algorithm. This simulation shows that the PSF estimator exhibits low sensitivity to noise, which opens the way to its use on real AO images.

5. Conclusions

This paper presents a novel adaptive optics image restoration algorithm based on our frame selection method and a PSF model for the turbulence effect. There are two main contributions. First, the proposed frame selection method provides a reasonable criterion, based on the image variance, for evaluating the quality of a degraded AO image. Second, we propose a PSF reconstruction method for AO images that can accurately estimate the PSF model in a blind deconvolution process from an initial estimate, and we derive a theoretical expression for the PSF. Based on a Poisson distribution model, a joint log-likelihood function is established for the multi-frame AO images, and a maximum likelihood estimator is used for the object image and the PSF of each frame. A series of experiments verified the effectiveness of the proposed algorithm and showed its superiority over current state-of-the-art blind deconvolution methods. The comparisons with the ML-EM, CPF-adaptive, RT-IEM, and VBBD-TV methods also indicate that our proposed algorithm is competitive for AO image restoration. Future work will focus on reducing the computation time by employing parallel computing. Our proposed algorithm can be used for restoring observed AO images, and it has the potential for real applications.

Acknowledgments

This research is supported by the National Science Foundation of China (Grant Nos. 61402193 and 61401347), the National Spark Program of China (Grant No. 2015GA66004), the National University Students Innovation Training Project of China (Grant Nos. 201610193048 and 201610193053), and the Scientific and Technological Research Project of the Department of Education in Jilin Province, China. We thank Wei Su of Changchun University of Science and Technology for his comments on this paper. The authors are grateful to the reviewers for their valuable suggestions. The input images used in Figure 1 and Figure 2 were obtained from https://pixabay.com/ and can be used freely for any purpose.

Author Contributions

Dongming Li conceived and designed the research; Dongming Li and Lijuan Zhang designed and performed the experiments; Jinhua Yang contributed to the theory studies; Huan Liu and Jiaqi Peng analyzed the data; Dongming Li and Changming Sun wrote the manuscript; and all the authors reviewed and revised the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, L.; Yang, J.; Su, W.; Wang, X.; Jiang, Y.; Jiang, C.; Liu, Z. Research on blind deconvolution algorithm of multiframe turbulence-degraded images. J. Inf. Comput. Sci. 2013, 10, 3625–3634.
  2. Cheong, H.; Chae, E.; Lee, E.; Jo, G.; Paik, J. Fast image restoration for spatially varying defocus blur of imaging sensor. Sensors 2015, 15, 880–898.
  3. El-Sallam, A.; Boussaid, F. Spectral-based blind image restoration method for thin TOMBO imagers. Sensors 2008, 8, 6108–6124.
  4. Christou, J.; Hege, E.; Jefferies, S.; Keller, C. Application of multiframe iterative blind deconvolution for diverse astronomical imaging. In Proceedings of SPIE 2200, Amplitude and Intensity Spatial Interferometry II, Kailua Kona, HI, USA, 9 June 1994; pp. 433–444.
  5. Conan, J.; Mugnier, L.; Fusco, T.; Michau, V.; Rousset, G. Myopic deconvolution of adaptive optics images by use of object and point-spread function power spectra. Appl. Opt. 1998, 37, 4614–4622.
  6. Kundur, D.; Hatzinakos, D. A novel blind deconvolution scheme for image restoration using recursive filtering. IEEE Trans. Signal Process. 1998, 46, 375–390.
  7. Gerwe, D.; Lee, D.; Barchers, J. Supersampling multiframe blind deconvolution resolution enhancement of adaptive optics compensated imagery of low earth orbit satellites. Opt. Eng. 2002, 41, 2238–2251.
  8. Gallé, R.; Núñez, J.; Gladysz, S. Extended-object reconstruction in adaptive-optics imaging: The multiresolution approach. Astron. Astrophys. 2013, 555, A69.
  9. Babacan, S.; Molina, R.; Katsaggelos, A. Variational Bayesian blind deconvolution using a total variation prior. IEEE Trans. Image Process. 2009, 18, 12–26.
  10. Katsaggelos, A.; Lay, K. Maximum likelihood blur identification and image restoration using the EM algorithm. IEEE Trans. Signal Process. 1991, 39, 729–733.
  11. Zhu, D.; Razaz, M.; Fisher, M. An adaptive algorithm for image restoration using combined penalty functions. Pattern Recognit. Lett. 2006, 27, 1336–1341.
  12. Zhang, L.; Li, D.; Su, W.; Yang, J.; Jiang, Y. Research on adaptive optics image restoration algorithm by improved expectation maximization method. Abstr. Appl. Anal. 2014, 2014, 781607.
  13. Yap, K.; Guan, L. Adaptive image restoration based on hierarchical neural networks. Opt. Eng. 2000, 39, 1877–1890.
  14. DeForest, C.; Martens, P.; Wills-Davey, M. Solar coronal structure and stray light in TRACE. Astrophys. J. 2008, 690, 1264.
  15. Poduval, B.; DeForest, C.; Schmelz, J.; Pathak, S. Point-spread functions for the extreme-ultraviolet channels of SDO/AIA telescopes. Astrophys. J. 2013, 765, 144.
  16. González, A.; Delouille, V.; Jacques, L. Non-parametric PSF estimation from celestial transit solar images using blind deconvolution. J. Space Weather Space Clim. 2016, 6, 492–504.
  17. Li, D.; Zhang, L.; Yang, J.; Su, W. Research on wavelet-based contourlet transform algorithm for adaptive optics image denoising. Opt. Int. J. Light Electron Opt. 2016, 127, 5029–5034.
  18. Tian, Y.; Rao, C.; Wei, K. Adaptive optics image restoration based on frame selection and multi-frame blind deconvolution. Chin. Astron. Astrophys. 2009, 33, 223–230.
  19. Veran, J.; Rigaut, F.; Maitre, H.; Rouan, D. Estimation of the adaptive optics long-exposure point spread function using control loop data: Recent developments. In Proceedings of SPIE 3126, Adaptive Optics and Applications, San Diego, CA, USA, 17 October 1997; pp. 3057–3069.
  20. Voitsekhovich, V.; Bara, S. Effect of anisotropic imaging in off-axis adaptive astronomical systems. Astron. Astrophys. Suppl. Ser. 1999, 137, 385–389.
  21. Fusco, T.; Conan, J.; Mugnier, L.; Michau, V.; Rousset, G. Characterization of adaptive optics point spread function for anisoplanatic imaging: Application to stellar field deconvolution. Astron. Astrophys. Suppl. Ser. 2000, 142, 149–156.
  22. Chang, X.; Li, R.; Xiong, Y. Predicted space varying point spread function model for anisoplanatic adaptive optics imaging. Acta Opt. Sin. 2011, 12, 003.
  23. Blanco, L.; Mugnier, L. Marginal blind deconvolution of adaptive optics retinal images. Opt. Express 2011, 19, 23227–23239.
  24. Prato, M.; Camera, A.; Bonettini, S.; Bertero, M. A convergent blind deconvolution method for post-adaptive-optics astronomical imaging. Inverse Probl. 2013, 29, 065017.
  25. Hong, H.; Zhang, T.; Yu, G. Iterative multi-frame restoration algorithm of turbulence-degraded images based on Poisson model. J. Astronaut. (Chin.) 2004, 25, 649–654.
  26. Sheppard, D.; Hunt, B.; Marcellin, M. Iterative multiframe superresolution algorithms for atmospheric-turbulence-degraded imagery. J. Opt. Soc. Am. A 1998, 15, 978–992.
  27. Yang, H.; Li, X.; Gong, C.; Jiang, W. Restoration of turbulence-degraded extended object using the stochastic parallel gradient descent algorithm: Numerical simulation. Opt. Express 2009, 17, 3052–3062.
  28. Gabarda, S.; Cristóbal, G. Blind image quality assessment through anisotropy. J. Opt. Soc. Am. A 2007, 24, B42–B51.
  29. Chen, B.; Cheng, C.; Guo, S.; Pu, G.; Geng, Z. Unsymmetrical multi-limit iterative blind deconvolution algorithm for adaptive optics image restoration. High Power Laser Part. Beams (Chin.) 2011, 2, 313–318.
  30. Pixabay. Available online: https://pixabay.com/en/ (accessed on 3 November 2016).
  31. Figueiro, T.; Saib, M.; Tortai, J.; Schiavone, P. PSF calibration patterns selection based on sensitivity analysis. Microelectron. Eng. 2013, 112, 282–286.
Figure 1. The original images and the simulated multi-frame degraded images. (a) Input images of the datasets; (b) images degraded by the 5 × 5 PSF at an SNR of 20 dB under ideal conditions; (c) images degraded by the 5 × 5 PSF at an SNR of 25 dB under ideal conditions; (d) images degraded by the 5 × 5 PSF at an SNR of 30 dB under ideal conditions.
Figure 2. The comparison results of the restored images based on the five algorithms (with 300 iterations). (a) By the ML-EM algorithm; (b) by the CPF-adaptive algorithm; (c) by the RT-IEM algorithm; (d) by the VBBD-TV algorithm; (e) by our algorithm with β = 1.31 and ξ = 1.35.
Figure 3. Nine frames of binary-star images by an AO system and their variances. (a) Frame 1 with variance S 1 2 = 536.34 ; (b) Frame 2 with variance S 2 2 = 518.06 ; (c) Frame 3 with variance S 3 2 = 566.23 ; (d) Frame 4 with variance S 4 2 = 495.07 ; (e) Frame 5 with variance S 5 2 = 552.10 ; (f) Frame 6 with variance S 6 2 = 545.26 ; (g) Frame 7 with variance S 7 2 = 336.34 ; (h) Frame 8 with variance S 8 2 = 522.95 ; (i) Frame 9 with variance S 9 2 = 530.72 .
Figure 4. The restoration results comparison on multi-frame binary-star AO images. (a) Restored image by the ML-EM algorithm; (b) restored image by the CPF-adaptive algorithm; (c) restored image by the RT-IEM algorithm; (d) restored image by the VBBD-TV algorithm; (e) restored image by our algorithm; (f) estimated PSF by our algorithm.
Figure 5. The restoration comparisons on E_NMSE and E_FWHM measures versus the iteration number for the five methods. (a) E_NMSE versus the iteration number for the five algorithms; (b) E_FWHM versus the iteration number for the five algorithms.
Figure 6. Error on the estimation of the PSF model in the case of a noise-free image, and in the cases of photon noise with 10^3, 10^5, and 10^7 photons in the whole image, respectively.
Figure 7. RMS error on estimation of the PSF initial value for different noise levels (in percentage).
Table 1. The values of the main parameters for an AO imaging system.
Parameter Name | Parameter Value | Remarks
r0 | 13 cm | Atmospheric coherence length
λ | 0.72 μm | Central wavelength
f | 20 m | Imaging focal length
D | 1.03 m | Telescope aperture
Imaging CCD size | 320 × 240 pixels | —
CCD pixel size | 6.7 μm | —
Imaging observation range | 0.7–0.9 μm | —
Imaging system field of view | 24 × 18 | —
Table 2. Comparison results on E_NMSE and running time of different restoration algorithms. Each cell gives E_NMSE / running time (s).
Image Name | ML-EM | CPF-Adaptive | RT-IEM | VBBD-TV | Our Algorithm
House | 0.0034 / 12.18 | 0.0025 / 13.27 | 0.0030 / 13.96 | 0.0023 / 13.87 | 0.0015 / 13.90
Chemical Plant | 0.0069 / 11.97 | 0.0072 / 12.89 | 0.0054 / 13.04 | 0.0051 / 13.12 | 0.0045 / 13.21
The Little Girl | 0.0046 / 9.24 | 0.0039 / 10.54 | 0.0028 / 10.91 | 0.0021 / 10.87 | 0.0017 / 11.08
Table 3. The E_NMSE, E_FWHM, and computation time comparison for the five algorithms for binary-star image restoration (iteration number is 300).
Algorithm | E_NMSE | E_FWHM (pixels) | Computation Time (s)
ML-EM | 0.0252 | 6.27 | 9.872
CPF-adaptive | 0.0213 | 6.46 | 12.196
RT-IEM | 0.0210 | 6.51 | 10.983
VBBD-TV | 0.0221 | 6.48 | 8.624
Our algorithm | 0.0204 | 6.69 | 12.257

Share and Cite

MDPI and ACS Style

Li, D.; Sun, C.; Yang, J.; Liu, H.; Peng, J.; Zhang, L. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics. Sensors 2017, 17, 785. https://doi.org/10.3390/s17040785
