Regularized Compression of A Noisy Blurred Image
ABSTRACT
Both regularization and compression are important issues in image processing and have been widely
studied in the literature. The usual procedure for compressing an image that is available only through a
noisy blurred version requires two steps: first a deblurring step, then a factorization step which
approximates the regularized image by a product of low rank nonnegative factors. We examine here the
possibility of swapping the two steps, by deblurring directly the noisy factors or partially denoised factors.
The experimentation shows that in this way images with comparable regularized compression can be
obtained at a lower computational cost.
KEYWORDS
Image Regularization, Image Compression, Nonnegative Matrix Factorization
1. INTRODUCTION
Let X* be an n×n matrix whose entries represent an image and let B* represent the corresponding
blurred image generated by an imaging system. Denoting by x* = vec(X*) and b* = vec(B*) the
N-vectors (with N = n^2) which store columnwise the pixels of X* and B*, the imaging system is
represented by an N×N matrix A such that

Ax* = b*.   (1)
In general, the vector b* is not exactly known because it is affected by noise due to the
fluctuations in the counting process of the acquisition of the image, which obey Poisson
statistics, and to the readout imperfections of the recording device, which are Gaussian
distributed. Hence we assume that the available recorded image is B with vec(B) = b = b* + e.
The noise e is obviously unknown, even if in general its level δ = ||e||_2 / ||b||_2 can be
roughly estimated. Model (1) thus becomes

Ax = b.   (2)
A least squares solution of (2) minimizes the function (1/2) ||b − Ax||_2^2; we denote it by x_ls.
Because of the ill-conditioning of A and of the presence of the noise, when system (2) has a large
dimension the solution x_ls may be quite different from x*, and a regularization method must be
employed.

Iterative methods, coupled with suitable strategies for enforcing nonnegativity, may be used as
regularization methods if they enjoy the semiconvergence property. According to this property,
the vectors computed in the first iterations are minimally affected by the noise and move toward
the solution x*, but, as the iterations proceed, the computed vectors are progressively
contaminated by the noise and move away from x* toward x_ls. A vector which is close to x* is
assumed as the regularized solution x_reg of (2). A good termination procedure is hence needed to
detect the correct index at which to stop the iteration.
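To make the semiconvergence behavior concrete, here is a small self-contained Python sketch (our own illustration, not the paper's experimental setup): a projected Landweber iteration, started from A^T b, applied to a synthetic 1-D deblurring problem with an assumed Gaussian kernel and noise level. The reconstruction error first decreases and then increases, which is exactly why a stopping rule is needed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Illustrative 1-D Gaussian blurring matrix (an assumption for the demo).
t = np.arange(n)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 3.0) ** 2)
A /= A.sum(axis=1, keepdims=True)

x_true = np.zeros(n)
x_true[60:90] = 1.0
x_true[120:140] = 0.5
b = A @ x_true + 0.01 * rng.standard_normal(n)   # noisy blurred data

tau = 1.0 / np.linalg.norm(A, 2) ** 2            # step size <= 1/||A||_2^2
x = A.T @ b                                      # starting point A^T b
errs = []
for _ in range(500):
    x = np.maximum(x + tau * (A.T @ (b - A @ x)), 0)  # projected Landweber
    errs.append(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

# Semiconvergence: the error decreases first, then the noise takes over.
best = int(np.argmin(errs))
print(f"best iteration: {best}, error: {errs[best]:.3f}, final: {errs[-1]:.3f}")
```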
Once a regularized solution x_reg has been found, one can apply a compression technique to the
n×n matrix X_reg obtained by partitioning x_reg columnwise, in order to reduce the storage required
for saving it. This second step can be accomplished by using a dimensional reduction technique,
for example by computing a low rank nonnegative approximation of X_reg. Since nonnegativity is
an essential issue in our problem, we refer here to the Nonnegative Matrix Factorization (NMF),
which computes an approximation of a nonnegative matrix in the form of the product of two low
rank nonnegative factors W and H, i.e. X_reg ≈ WH.
Besides this two-step procedure, other approaches can be followed to get low rank factorized
approximations of X*. A simple one that we consider here proceeds directly with the data A and b,
without first computing X_reg, and combines the regularization and factorization steps. Another one,
which we also take into consideration following [1], factorizes B with penalty constraints in order to
obtain partially denoised factors, which are subsequently refined by involving the action of the
blurring matrix.
In this paper we formalize these three procedures with the aim of testing and comparing their
ability to produce acceptable regularized compressed approximations of X*. The outline of the
paper is the following: some preliminaries on NMF applied to a general matrix are given in
Section 2. Then in Section 3 the procedures are described, with details on the use of FFT to
perform the matrix by vector products and to implement the generalized cross validation (GCV)
for the stopping rules when the blurring matrix has circulant structure. Finally, Section 4 presents
and analyzes the results of the numerical experiments.
2. NOTATION
For a given n1×n2 matrix M, we denote by m = vec(M) the vector of n1·n2 components which stores
columnwise the elements of M, by m^(j), j = 1,…,n2, the j-th column of M, and by

m_{i,j} = (M)_{i,j} = (m^(j))_i = (m)_r

the (i,j)-th entry of M, with r = (j−1)·n1 + i. Moreover we denote m' = vec(M^T). In the following,
the pointwise multiplication and division between vectors will be denoted by .* and ./ respectively.
With e_i we denote the i-th canonical vector of suitable length.
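For concreteness, this columnwise storage convention corresponds to NumPy's Fortran ordering; a minimal sketch (the variable names are ours):

```python
import numpy as np

M = np.arange(6).reshape(2, 3)     # a 2x3 matrix, n1 = 2, n2 = 3
m = M.flatten(order="F")           # m = vec(M): stack the columns
i, j = 1, 2                        # 0-based row i and column j
r = j * M.shape[0] + i             # r = (j-1)*n1 + i in 1-based notation
assert m[r] == M[i, j]             # (m)_r equals (M)_{i,j}
m_prime = M.T.flatten(order="F")   # m' = vec(M^T)
```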
Given a nonnegative n×n matrix X and an integer k << n, the NMF problem consists in finding two
nonnegative factors W ∈ R_+^{n×k} and H ∈ R_+^{k×n} which minimize the function

φ(W,H) = (1/2) ||X − WH||_F^2,   (3)
where ||·||_F denotes the Frobenius norm. In this way, each column x^(i) is approximated by the linear
combination with nonnegative coefficients h_{r,i} of the vectors w^(r), for r = 1,…,k. Problem (3) is
nonconvex and finding its global minimum is NP-hard, so one only looks for local minima. Most
nonconvex optimization algorithms, like for example standard gradient methods, guarantee the
stationarity of the limit points but suffer from slow convergence. Newton-like algorithms have a
better rate of convergence but are more expensive.
The following alternating nonnegative least squares (ANLS) approach, which belongs to the
block coordinate descent framework [3] of nonlinear optimization, has shown to be very efficient.
Its scheme is very simple: one of the factors, say W, is initialized to an initial matrix W_0 with
nonnegative entries, and the matrix H ∈ R_+^{k×n} which realizes the minimum of
(1/2) ||X − W_0 H||_F^2 is computed. The matrix so obtained is assumed as H_1, and a new matrix
W_1 ∈ R_+^{n×k} which attains the minimum of (1/2) ||X − W H_1||_F^2 is computed. In general,
the two steps

H_{ν+1} = argmin_{H ≥ 0} (1/2) ||X − W_ν H||_F^2,   W_{ν+1} = argmin_{W ≥ 0} (1/2) ||X − W H_{ν+1}||_F^2   (4)

are alternated for ν = 0,1,…, until a suitable condition verifies that a local minimum of the
objective function φ(W,H) of (3) has been sufficiently well approximated. The iteration stops when

ε_ν − ε_{ν+1} < tol,   where ε_ν = ||X − W_ν H_ν||_F / ||X||_F.   (5)
In practice, the whole procedure is run more than once with different initial random matrices, and
the run with the lowest error is chosen.
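The ANLS scheme can be sketched in a few lines of Python; this illustrative version solves each subproblem column-by-column with SciPy's active-set NNLS solver (the paper's GCD inner solver, discussed below, is a faster inexact alternative):

```python
import numpy as np
from scipy.optimize import nnls

def anls_nmf(X, k, tol=1e-3, max_outer=50, seed=0):
    """ANLS scheme for X ~ W H with W, H >= 0 (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n1, n2 = X.shape
    W = rng.random((n1, k))
    err_prev = np.inf
    for _ in range(max_outer):
        # H-step: min_{H>=0} ||X - W H||_F, one NNLS problem per column of X
        H = np.column_stack([nnls(W, X[:, j])[0] for j in range(n2)])
        # W-step: min_{W>=0} ||X^T - H^T W^T||_F, one NNLS per row of X
        W = np.column_stack([nnls(H.T, X[i, :])[0] for i in range(n1)]).T
        err = np.linalg.norm(X - W @ H, "fro") / np.linalg.norm(X, "fro")
        if err_prev - err < tol:   # stopping rule (5)
            break
        err_prev = err
    return W, H
```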
Both problems (4) can be cast in the following form:

min ψ(Z), under Z ≥ 0,   where ψ(Z) = (1/2) ||U − VZ||_F^2,   (6)

where U ∈ R_+^{n×n} and V ∈ R_+^{n×k} are given and Z ∈ R_+^{k×n} has to be computed. Specifically, U = X,
V = W_ν, Z = H for the first problem of (4), and U = X^T, V = H_{ν+1}^T, Z = W^T for the second problem of (4).
The gradient and the Hessian of ψ are

G(Z) = V^T V Z − V^T U   and   Q = V^T V.
Although the original problem (3) is nonconvex, problem (6) is convex and can be easily dealt
with by applying any procedure for constrained quadratic optimization, for example an
Active-Set-like method [4, 5]. According to [6], the requirement that the nonnegativity constraints are
exactly satisfied at each step is necessary for convergence, but makes the algorithm rather slow at
large dimensions. Moreover, in our context, where the presence of the noise must be accounted
for, computing the exact solution of problems (4) is not worthwhile.
Faster algorithms have been suggested to compute iteratively approximate solutions with inexact
methods like gradient descent, Newton-type methods modified to satisfy the nonnegativity
constraints, and coordinate descent methods. In the experimentation we have used the Greedy
Coordinate Descent method (GCD), specifically designed in [7] for solving problems of the form
(6), which proceeds as follows. Starting with an initial matrix Z_0 (a possible choice is Z_0 = O,
which ensures that G(Z_0) ≤ O), the matrices Z_ν, ν = 0,1,…, are computed iteratively by

Z_{ν+1} = Z_ν + Σ_{j=1}^{n} Σ_{i ∈ I(j)} s_{i,j} e_i e_j^T,

where the indices i in the set I(j) and their order are not defined a priori, but are determined by
running conditions. The coefficients s_{i,j} are determined by minimizing the function ψ on the dyad
e_i e_j^T. For any j the indices i are chosen according to a maximal decrease criterion applied to ψ.
The code for GCD can be found as Algorithm 1 in [7].
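The core of the method can be sketched as follows (a bare-bones variant of the greedy update, written by us; it is not the authors' tuned implementation):

```python
import numpy as np

def gcd_nnls(U, V, max_updates=2000, tol=1e-10):
    """Greedy coordinate descent for min_{Z >= 0} 0.5*||U - V Z||_F^2."""
    k, m = V.shape[1], U.shape[1]
    Q = V.T @ V                    # Hessian of every column subproblem
    qd = np.diag(Q)                # its diagonal (assumed positive)
    Z = np.zeros((k, m))
    for j in range(m):
        g = -V.T @ U[:, j]         # gradient of column j at Z[:, j] = 0
        for _ in range(max_updates):
            # optimal nonnegative step per coordinate and its decrease
            s = np.maximum(Z[:, j] - g / qd, 0.0) - Z[:, j]
            dec = -(g * s + 0.5 * qd * s ** 2)
            i = int(np.argmax(dec))     # maximal decrease criterion
            if dec[i] <= tol:
                break
            Z[i, j] += s[i]
            g += Q[:, i] * s[i]         # rank-one gradient update
    return Z
```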
The dominant part of the computational cost of solving each problem (6) using GCD is given by
the cost of updating the gradient, in particular by the cost γ_kn = k n^2 of computing the product VZ.
Hence the cost of computing the NMF factorization of the matrix X is γ_fac = 2 γ_kn it_fac, where it_fac
is the number of iterations required to satisfy (5).
3. THE PROBLEM

The problem we consider is the following: find a compressed approximation of the original n×n
image X* by solving in a regularizing way the system

Ax = b.   (7)

By the term compression we mean an approximation given by the product of two rank-k
matrices, with k an integer such that k << n. Write the N×N given matrix A, with N = n^2, in block
form
A = (A_{i,j})_{i,j=1,…,n},   with A_{i,j} ∈ R_+^{n×n},

so that system (7) can be written in the block form

Σ_{j=1}^{n} A_{i,j} x^(j) = b^(i),   i = 1,…,n.   (8)
Due to the generally large size of the problem, some structure must be assumed for A. When the
problem is modeled by a Fredholm integral equation defined by a point spread function (PSF)
invariant with respect to translations on a bounded support, A is a 2-level Toeplitz matrix, i.e.
A_{i,j} = A_{i−1,j−1},   (A_{i,j})_{r,s} = (A_{i,j})_{r−1,s−1}.

Moreover, if periodic boundary conditions can be safely imposed, A becomes a 2-level circulant
matrix. Hence A_{i,1} = A_{i−1,n} for any i, and the same structure holds internally for each block.
We assume as regularized solution x_reg an approximation of the least squares solution x_ls obtained
by minimizing the function

f(x) = (1/2) ||b − Ax||_2^2   (9)

under nonnegativity constraints. The gradient and the Hessian of f(x) are

grad f(x) = A^T A x − A^T b,   Hess f(x) = A^T A.   (10)
Since the Hessian is positive semidefinite, f(x) is convex. Its minimum points solve the system
grad f(x) = 0, i.e. the so-called normal equations A^T A x = A^T b. The minimizer of f(x) under
nonnegativity constraints can be approximated by any method for constrained quadratic
optimization. We suggest using an iterative semiconvergent method, to ensure the regularization
of the solution, with starting point A^T b. The number of performed iterations is denoted by it_sol.
If we look for an approximation of X* directly in the factorized form WH, so that x^(j) = W h^(j),
and assume that the factor W is assigned, system (8) becomes

Σ_{j=1}^{n} A_{i,j} W h^(j) = b^(i),   i = 1,…,n,   (11)

i.e., elementwise,

Σ_{j=1}^{n} Σ_{t=1}^{k} Σ_{s=1}^{n} (A_{i,j})_{r,s} w_{s,t} h_{t,j} = b_{r,i},   r,i = 1,…,n,   (12)

and equivalently

A (I_n ⊗ W) h = b.   (13)
Analogously, when the factor H is assigned and W is unknown, system (8) becomes, elementwise,

Σ_{s=1}^{n} Σ_{t=1}^{k} Σ_{j=1}^{n} (A_{i,j})_{r,s} h_{t,j} w_{s,t} = b_{r,i},   r,i = 1,…,n,   (14)

and equivalently

A' (I_n ⊗ H^T) w' = b',   (15)

where w' = vec(W^T), b' = vec(B^T) and A' is the correspondingly permuted version of A.
System (15) allows computing the rows of W once H is assigned. The constrained least squares
solutions of the two systems (13) and (15) are obtained by solving

min_{h ≥ 0} (1/2) ||b − A (I_n ⊗ W) h||_2^2   (16)

and

min_{w' ≥ 0} (1/2) ||b' − A' (I_n ⊗ H^T) w'||_2^2.   (17)
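In practice the Kronecker factors in (16) and (17) are never formed explicitly: since (I_n ⊗ W) h = vec(WH), a product like A (I_n ⊗ W) h reduces to one small matrix product followed by one application of the blurring operator. A minimal sketch, where A_mv is an assumed callable applying A to an N-vector:

```python
import numpy as np

def apply_A_kron_W(A_mv, W, h):
    """Compute A (I_n kron W) h using the identity (I_n kron W) h = vec(W H)."""
    n, k = W.shape
    H = h.reshape((k, n), order="F")         # columns h^(j)
    return A_mv((W @ H).flatten(order="F"))  # A vec(W H)
```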
Problems (16) and (17) can be inserted into an ANLS scheme. In the first step of this alternating
inner-outer method we assume that W has been fixed and solve with respect to H; in the second
step H is fixed and W is obtained. Thus, starting with an initial pair (W_0, H_0), a sequence (W_ν, H_ν),
ν = 0,1,…, is computed iteratively, with w_ν = vec(W_ν), h_ν = vec(H_ν) and

h_{ν+1} = argmin_{h ≥ 0} (1/2) ||b − A (I_n ⊗ W_ν) h||_2^2,
w'_{ν+1} = argmin_{w' ≥ 0} (1/2) ||b' − A' (I_n ⊗ H_{ν+1}^T) w'||_2^2.   (18)

Both problems (18) have the form

min_{z ≥ 0} (1/2) ||M z − u||_2^2.   (19)
Procedure P1: solve system (8) in a regularizing way, obtaining the regularized image X_reg, then apply
NMF to obtain the factorization W^(1) H^(1) ≈ X_reg, using (5) with tolerance tol as a stopping condition
(in the experiments we set tol = 10^-3, since lower factorization errors are not noticeable in the
compressed images at a visual inspection). For solving (8) we use, for comparison purposes, two
semiconvergent methods which preserve nonnegativity: Expectation-Maximization (EM) [8]
and a scaled gradient projection method especially designed for nonnegative regularization (SGP)
[9].
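For reference, the EM update for problems of the form (19) has the classical multiplicative (Richardson-Lucy) form, which automatically preserves nonnegativity. A hedged one-step sketch, where M_mv and MT_mv are assumed callables for the products with M and M^T, and the columns of M are assumed normalized:

```python
import numpy as np

def em_step(z, u, M_mv, MT_mv, eps=1e-12):
    """One EM (Richardson-Lucy) iteration: z <- z .* M^T (u ./ (M z))."""
    Mz = M_mv(z)
    return z * MT_mv(u / np.maximum(Mz, eps))
```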
Procedure P2: apply the ANLS scheme to (18). Two initial matrices W_0 and H_0 are computed
through an incomplete NMF of the n×n matrix obtained by partitioning A^T b columnwise, i.e. an
NMF run for few iterations (in the experiments, for only 5 iterations). Both problems (18) are
solved using the semiconvergent methods already used for procedure P1. The matrices computed
at the end of procedure P2 are denoted W^(2) and H^(2).
Procedure P3: include a penalty term in the function φ(W,H) of (3), with the aim of balancing
the trade-off between the approximation error and the influence of the noise by enhancing the
smoothness of the factors. Since in general the noise acts by producing solutions of (8) of very
large magnitude, the function to minimize becomes

φ_λ(W,H) = (1/2) [ ||B − WH||_F^2 + λ ( ||W||_F^2 + ||H||_F^2 ) ],

where λ is a scalar regularization parameter. Then apply NMF factorization to B. The mean of the
elements of B is considered a good choice for λ. The penalized factors W_p and H_p so computed
must be further treated to cope with the blurring. So the vector w'_0 = vec(W_p^T) is taken as the
starting point for solving problem (17) with matrix H_p, i.e.
min_{w' ≥ 0} (1/2) ||b' − A' (I_n ⊗ H_p^T) w'||_2^2.   (20)
The matrix computed by solving (20), denoted W^(3), is assumed as the first final factor, the second
factor being H^(3) = H_p. The semiconvergent methods already used for procedure P1 are used also
here.
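A hedged sketch of the penalized factorization, using standard multiplicative updates for the Tikhonov-penalized objective (one common scheme for this penalty, not necessarily the exact algorithm of [1]):

```python
import numpy as np

def penalized_nmf(B, k, lam=None, n_iter=200, seed=0, eps=1e-12):
    """Approximately minimize 0.5*(||B - W H||_F^2
    + lam*(||W||_F^2 + ||H||_F^2)) over W, H >= 0."""
    rng = np.random.default_rng(seed)
    n1, n2 = B.shape
    if lam is None:
        lam = B.mean()             # the choice suggested in the text
    W = rng.random((n1, k))
    H = rng.random((k, n2))
    for _ in range(n_iter):
        # multiplicative updates derived from the penalized gradients
        H *= (W.T @ B) / (W.T @ W @ H + lam * H + eps)
        W *= (B @ H.T) / (W @ (H @ H.T) + lam * W + eps)
    return W, H
```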
The stopping condition used for these procedures is the generalized cross validation (GCV) (see
Section 3.5), with a tolerance θ which can be tuned according to the aims of the various
procedures.
Each procedure consists of two phases. For procedure P1, the regularizing solution of (8), requiring
it_sol iterations, is followed by the factorization, requiring it_fac iterations. Procedure P2 also
consists of two phases, but in this case the factorization phase precedes the direct solution of the
systems of the ANLS scheme (18); the corresponding iteration numbers are it_fac = 5 and it_sol.
Analogously, for procedure P3 the two phases are a factorization and the direct solution of (20).
The corresponding iteration numbers are it_fac and it_sol.
The total cost results to be of order

it_sol (2 γ_A + γ_sto) + 2 it_fac γ_kn           for P1,
it_sol (2 γ_A + γ_sto) + (10 + 2 it_sol) γ_kn    for P2,
it_sol (2 γ_A + γ_sto) + 2 (it_fac + it_sol) γ_kn for P3,   (21)

where γ_A denotes the cost of a product by A or A^T and γ_sto the cost of applying the stopping rule.
When A is a 2-level circulant matrix, the products by A can be performed by means of the Fourier
matrix F of order N = n^2, whose elements are

f_{r,s} = (1/n) ω^{rs},   r,s = 0,…,n^2 − 1,   with ω = exp(2πi/n^2).

Denoting by a^T the first row of A, it holds A = F diag(Fa) F*, where F* is the inverse (i.e.
transpose conjugate) of F. Hence the product z = Av, where v and z are n^2-vectors, can be
computed as

a'' = Fa,   v'' = F* v,   z = F (a'' .* v'').   (22)

To obtain the product z = A^T v it is sufficient to take the first column of A as a, or to replace a''
by its conjugate, and z = A^T A v is computed by z = F (conj(a'') .* a'' .* v''). The multiplications
by F or F* can be efficiently performed by calling an FFT routine, which has a computational cost
γ_ft of order n^2 log n. Then the cost of performing a matrix-by-vector product when A has a
circulant structure is γ_mxv = 2 γ_ft.
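The identity (22) corresponds, for a circulant matrix defined by its first column, to the classical circular convolution theorem; a minimal NumPy check (1-D for brevity; the 2-level case uses np.fft.fft2 on the n×n reshaped arrays):

```python
import numpy as np

def circulant_matvec(a_col, v):
    """z = A v for the circulant A with first column a_col,
    via A v = ifft(fft(a_col) * fft(v))."""
    return np.real(np.fft.ifft(np.fft.fft(a_col) * np.fft.fft(v)))

n = 8
a = np.random.default_rng(1).random(n)
A = np.column_stack([np.roll(a, i) for i in range(n)])  # explicit circulant
v = np.arange(n, dtype=float)
assert np.allclose(A @ v, circulant_matvec(a, v))
```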
For the stopping rule we use the generalized cross validation (GCV) [10, 11, 12, 13]. At the ν-th
iteration the GCV function is

V_ν = N ||b − A x_ν||_2^2 / ( N − trace(Ã_ν) )^2,   (23)

where the influence matrix Ã_ν is such that Ã_ν b = A x_ν. The minimizer of V_ν gives a good
indication of the index where the iteration should be stopped, but in some cases this indication
can be misleading because of the almost flat behavior of V_ν near the minimum, so it is suggested
to stop the iteration when

(V_ν − V_{ν+1}) / V_ν < θ,
where θ is a pre-assigned tolerance. The larger the tolerance θ, the earlier the stop.
To apply (23), an estimate of the trace of Ã_ν must be provided. When the method is applied to a
matrix A which has a 2-level circulant structure, it is reasonable to look for an estimate computed
through a 2-level circulant approximation T_ν of Ã_ν such that A x_ν ≈ T_ν b. Denoting by t_ν^T the first
row of T_ν, we have T_ν b = F diag(F t_ν) F* b, hence

trace(Ã_ν) ≈ trace(T_ν) = Σ_r (F* A x_ν)_r / (F* b)_r.   (24)
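A hedged sketch of the resulting test, with NumPy's unnormalized FFT (the eigenvalue ratios in (24) do not depend on the normalization of F; for 2-level circulant matrices fft would be replaced by fft2 on the reshaped images):

```python
import numpy as np

def gcv_value(b, Ax):
    """GCV function (23) with the circulant trace estimate (24)."""
    N = b.size
    tr = np.sum(np.fft.fft(Ax) / np.fft.fft(b)).real  # estimate (24)
    return N * np.sum((b - Ax) ** 2) / (N - tr) ** 2

def should_stop(V_prev, V_curr, theta):
    """Stopping test: the relative decrease of V has flattened out."""
    return (V_prev - V_curr) / V_prev < theta
```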
Different ways to estimate the trace of Ã_ν can be found in [14]. The estimate given in (24) is very
simple, but leaves open the question of which methods can exploit it. A preliminary ad-hoc
experimentation on the problems considered in the next section, whose exact solution x* is
known, showed that the index found using (24) was acceptable for both EM and SGP, as long as
matrix A was circulant. The indices so obtained were only slightly smaller than the ones
corresponding to the minimum error and led to an anticipated stop. Since this outcome is
considered favorable in a regularizing context, where a too long computation may produce bad
solutions, (24) has been used, together with an upper limitation on the number of allowed
iterations.
Using (24), the cost of applying GCV is γ_sto = 2 γ_ft. Replacing γ_sto and γ_A = 2 γ_ft in (21), we have
that the computational cost of the three procedures is of order

c_1 γ_ft + c_2 γ_kn,   (25)

with c_1 = 6 it_sol, and c_2 = 2 it_fac for P1, c_2 = 10 + 2 it_sol for P2, c_2 = 2 (it_fac + it_sol) for P3.
4. NUMERICAL EXPERIMENTS
The numerical experimentation has been conducted with double precision arithmetic. We
consider two reference objects X*: the satellite image [15] and a Hoffman phantom [16], widely
used in the literature for testing image deconvolution algorithms. For both images n = 256 and
k = 10. The compressed rank-10 approximations W* H* of X* are generated by means of NMF.
The compression errors of the images have been preliminarily measured by

ε* = ||X* − W* H*||_F / ||X*||_F

and are ε* = 33% for the satellite and ε* = 40% for the phantom. The quantity ε* can be considered
a lower bound for the compression errors obtainable by applying any procedure to the noisy
blurred image, with any noise level.
The matrix A which performs the blur is a 2-level circulant matrix generated by a positive
space-invariant band-limited PSF with bandwidth β = 8, normalized in such a way that the sum of its
elements is equal to 1. For the astronomical image we consider a motion-type PSF, which
simulates the one corresponding to a ground-based telescope, represented by the following mask:

m_{i,j} = exp(−α (i+j)^2 − γ (i−j)^2),   −β ≤ i,j ≤ β,   α = 0.19, γ = 0.17.

For the image of medical interest we consider a Gaussian PSF represented by the following
mask:

m_{i,j} = exp(−α i^2 − γ j^2),   −β ≤ i,j ≤ β,   α = γ = 0.1.
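A small sketch constructing these masks (parameter names follow the symbols above):

```python
import numpy as np

def psf_mask(beta, exponent):
    """Build the (2*beta+1)^2 mask m_{i,j} = exp(exponent(i, j)),
    normalized to unit sum, for -beta <= i, j <= beta."""
    idx = np.arange(-beta, beta + 1)
    I, J = np.meshgrid(idx, idx, indexing="ij")
    m = np.exp(exponent(I, J))
    return m / m.sum()

# motion-type PSF for the satellite image
motion = psf_mask(8, lambda I, J: -0.19 * (I + J) ** 2 - 0.17 * (I - J) ** 2)
# Gaussian PSF for the phantom image
gauss = psf_mask(8, lambda I, J: -0.1 * I ** 2 - 0.1 * J ** 2)
```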
For each problem, two noisy blurred images b (one with a small noise level and one with a large
noise level) are generated from the vector b* = A x* by adding white Gaussian noise of different
variance. The obtained noise levels are δ = 5.6% and δ = 28% for the satellite, δ = 4% and δ = 21%
for the phantom. Figure 1 shows the original images X*, the compressed rank-10 approximations
W* H* and the noisy blurred images with the large noise level.
Figure 1. The original image X* (left), the compressed rank-10 approximation W*H* (middle) and the
noisy blurred image B with large noise level (right) for the satellite (upper row) and the phantom (lower
row).
The regularized compressed images are computed using the procedures P1, P2 and P3 described
in Section 3.2. In the case of procedure P1, the denoising is performed in the first phase, and this
suggests using a small value of θ in this phase, so we set θ = 10^-4. In the case of procedure P2,
two possibly different tolerances could be considered for the inner and the outer steps. However,
due to the alternating nature of the procedure, a tight convergence is not required and large
tolerance values can be set; the same value θ = 10^-2 for both steps has been found effective.
Also for the second phase of procedure P3 there is no need to fix a too small tolerance, so
we set θ = 10^-2.
Procedure P1 computes the regularized image X_reg and the factorization W^(1) H^(1) ≈ X_reg, with the
errors

ε_sol = ||X* − X_reg||_F / ||X*||_F,   ε^(1) = ||X* − W^(1) H^(1)||_F / ||X*||_F.
Procedure P2 computes the initial factorization W_0 H_0 by an incomplete NMF and the
factorization W^(2) H^(2) by solving directly problems (18), with the errors

ε_fac = ||X* − W_0 H_0||_F / ||X*||_F,   ε^(2) = ||X* − W^(2) H^(2)||_F / ||X*||_F.
Procedure P3 computes the initial factorization W_p H_p using penalized NMF and the factorization
W^(3) H^(3) by solving problem (20), with the errors

ε_fac = ||X* − W_p H_p||_F / ||X*||_F,   ε^(3) = ||X* − W^(3) H^(3)||_F / ||X*||_F.
For procedures P2 and P3 the error ε_fac is obviously the same for both methods EM and SGP.
Table 1 for the satellite image and Table 2 for the phantom image list
the errors,
the total iteration number, given in the form it = it_sol + it_fac,
the coefficients c_1 of γ_ft and c_2 of γ_kn in (25),
for both small and large noise levels.
Actually, in a practical context, when k and log2 n are not too different (as in our experiments), γ_ft
and γ_kn can be assimilated, and the sum C = c_1 + c_2 gives a simpler idea of the cost.
Table 1. Results of procedures P1, P2 and P3 for the satellite.

Small noise level (δ = 5.6%):

proc.  met.  ε_sol/ε_fac  ε^(i)  it       c_1   c_2
P1     EM    27%          35%    131+25   786   50
P1     SGP   27%          34%    34+36    204   72
P2     EM    44%          41%    21+5     126   52
P2     SGP   44%          38%    30+5     180   70
P3     EM    40%          38%    8+20     48    56
P3     SGP   40%          38%    7+20     42    54

Large noise level (δ = 28%):

proc.  met.  ε_sol/ε_fac  ε^(i)  it       c_1   c_2
P1     EM    32%          36%    37+30    222   60
P1     SGP   32%          36%    11+29    66    58
P2     EM    45%          41%    16+5     96    42
P2     SGP   45%          40%    14+5     84    38
P3     EM    41%          40%    5+18     30    46
P3     SGP   41%          39%    5+18     30    46
Table 2. Results of procedures P1, P2 and P3 for the phantom.

Small noise level (δ = 4%):

proc.  met.  ε_sol/ε_fac  ε^(i)  it       c_1    c_2
P1     EM    33%          42%    185+15   1110   30
P1     SGP   32%          42%    36+20    216    40
P2     EM    50%          48%    14+5     84     38
P2     SGP   50%          47%    15+5     90     40
P3     EM    46%          45%    6+20     36     52
P3     SGP   46%          44%    6+20     36     52

Large noise level (δ = 21%):

proc.  met.  ε_sol/ε_fac  ε^(i)  it       c_1    c_2
P1     EM    35%          43%    29+23    174    46
P1     SGP   35%          43%    11+23    66     46
P2     EM    51%          49%    8+5      48     26
P2     SGP   51%          48%    11+5     66     32
P3     EM    47%          46%    3+18     18     42
P3     SGP   47%          45%    5+18     30     46
By comparing the performances of EM and SGP, we see that, as far as the final errors are
concerned, they behave comparably in nearly all the cases, but their computational costs vary.
SGP, which often has a better convergence rate, appears to be preferable.

More important are the differences among the procedures. Naturally, one expects that the earlier
the noise is removed, the better the final result. In fact, procedure P1, which acts on an already
regularized image, outperforms from the error point of view (but with a higher cost) the other two
procedures, which act on only partially denoised images. Procedure P3, which acts on a better
denoised image than P2, outperforms it. As far as the computational cost is concerned, procedure P3
is clearly preferable.
However, in spite of the numerical differences, the compressed images obtained from the final
products WH are hardly distinguishable visually, as can be seen in Figure 2 (to be compared
with the middle image in the bottom row of Figure 1, which refers to the compressed
approximation of the original image).

Figure 2. Compressed images from the final products WH obtained by procedures P1 (left), P2 (middle) and
P3 (right) for the phantom.
5. CONCLUSIONS
In this paper three different procedures have been examined and tested to perform the regularized
compression of an image in the form of a product of two low rank nonnegative factors. Such an
approximation can be obtained by applying first a regularizing step to the given noisy blurred
image and then a factorization step (procedure P1), or by combining regularization and
factorization steps in an alternating scheme (procedure P2), or by first factorizing the given image
with a denoising penalty condition and then applying a deblurring step (procedure P3).
In Figure 3 the trade-off between the final error and the computational cost is shown for the
satellite and phantom problems in the case of the large noise level. It is evident that procedure P1
computes a better approximation at a higher cost, while acceptable approximations can be
obtained at a significantly reduced cost with procedure P3.

Figure 3. Trade-off between the final errors ε^(i) and the quantity C = c_1 + c_2 for the large noise level;
method EM (solid line), method SGP (dashed line).
REFERENCES

[1] Pauca, V. P., Piper, J., & Plemmons, R. J. (2006). Nonnegative Matrix Factorization for Spectral Data Analysis. Linear Algebra Appl., vol. 416, pp. 29-47.
[2] Paatero, P., & Tapper, U. (1994). Positive Matrix Factorization: a non-negative factor model with optimal utilization of error estimates of data values. Environmetrics, vol. 5, pp. 111-126.
[3] Kim, J., He, Y., & Park, H. (2014). Algorithms for nonnegative matrix and tensor factorization: a unified view based on block coordinate descent framework. J. Glob. Optim., vol. 58, pp. 285-319.
[4] Kim, H., & Park, H. (2011). Fast nonnegative matrix factorization: an active-set-like method and comparisons. SIAM J. on Scientific Computing, vol. 33, pp. 3261-3281.
[5] Lawson, C. L., & Hanson, R. J. (1974). Solving Least Squares Problems. Prentice-Hall, Englewood Cliffs, N.J.
[6] Grippo, L., & Sciandrone, M. (2000). On the convergence of the block nonlinear Gauss-Seidel method under convex constraints. Oper. Res. Lett., vol. 26, pp. 127-136.
[7] Hsieh, C. J., & Dhillon, I. S. (2011). Fast coordinate descent methods with variable selection for non-negative matrix factorization. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1064-1072.
[8] Shepp, L. A., & Vardi, Y. (1982). Maximum likelihood reconstruction for emission tomography. IEEE Trans. Med. Imag., MI-1, pp. 113-122.
[9] Bonettini, S., Zanella, R., & Zanni, L. (2009). A scaled gradient projection method for constrained image deblurring. Inverse Problems, 25, 015002.
[10] Golub, G. H., Heath, M., & Wahba, G. (1979). Generalized cross-validation as a method for choosing a good ridge parameter. Technometrics, vol. 21, pp. 215-223.
[11] Wahba, G. (1977). Practical approximate solutions to linear operator equations when the data are noisy. SIAM J. Numer. Anal., vol. 14, pp. 651-667.
[12] Favati, P., Lotti, G., Menchi, O., & Romani, F. (2014). Stopping rules for iterative methods in nonnegatively constrained deconvolution. Applied Numerical Mathematics, vol. 75, pp. 154-166.
[13] Favati, P., Lotti, G., Menchi, O., & Romani, F. (2014). Generalized Cross-Validation applied to Conjugate Gradient for discrete ill-posed problems. Applied Mathematics and Computation, vol. 243, pp. 258-268.
[14] Hansen, P. C. (1998). Rank-Deficient and Discrete Ill-Posed Problems. SIAM Monographs on Mathematical Modeling and Computation, Philadelphia.
[15] Lee, K. P., Nagy, J. G., & Perrone, L. (2002). Iterative methods for image restoration: a Matlab object oriented approach. http://www.mathcs.emory.edu/~nagy/RestoreTools.
[16] Hoffman, E. J., Cutler, P. D., Digby, W. M., & Mazziotta, J. C. (1990). 3-D phantom to simulate cerebral blood flow and metabolic images for PET. IEEE Trans. Nucl. Sci., vol. 37, pp. 616-620.