Generalization of The Image Space Reconstruction Algorithm
model to be used, but use a least-squares objective rather than a maximum likelihood objective. This work looks into the link

If for a given iteration k the following scaling (step size for each voxel) is chosen:

\tau_j^{(k)} = \frac{x_j^{(k)}}{\sum_{i=1}^{I} a_{ij}\, q_i^{(k)} / w_i} \qquad (6)
Manuscript received November 15, 2011.
A. J. Reader, E. Letourneau and J. Verhaeghe are with the Montreal
Neurological Institute, McGill University, Montreal, Quebec, H3A 2B4,
Canada (e-mail: andrew.reader@mcgill.ca). Part of this work was supported by NSERC grant number 387067-10.

the following iterative update is obtained:

x_j^{(k+1)} = x_j^{(k)}\, \frac{\sum_{i=1}^{I} a_{ij}\, m_i / w_i}{\sum_{i=1}^{I} a_{ij}\, q_i^{(k)} / w_i} \qquad (7)
The case of using a (possibly smoothed) copy of the expected data q as the weights is also considered here.

First, if w_i = m_i is used in (7) then what might be viewed as a "reciprocal EM" algorithm is obtained, which uses the noisy measured data as an approximation for the variance of the Poisson-distributed measured data:

x_j^{(k+1)} = x_j^{(k)}\, \frac{\sum_{i=1}^{I} a_{ij}}{\sum_{i=1}^{I} a_{ij}\, q_i^{(k)} / m_i} \qquad (10)
More generally, the weights can be formed from smoothed combinations of the expected data q^{(k)} and the measured data m:

x_j^{(k+1)} = x_j^{(k)}\, \frac{\sum_{i=1}^{I} a_{ij}\, m_i \Big/ \left( \sum_{d=1}^{I} \alpha_{id}\, q_d^{(k)} + \sum_{d=1}^{I} \beta_{id}\, m_d \right)}{\sum_{i=1}^{I} a_{ij}\, q_i^{(k)} \Big/ \left( \sum_{d=1}^{I} \alpha_{id}\, q_d^{(k)} + \sum_{d=1}^{I} \beta_{id}\, m_d \right)} \qquad (11)
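To make the role of the weights concrete, here is a minimal numerical sketch of the generalized weighted update (7), including MQ-ISRA-style weights built from q^{(k)} and m; the toy system matrix, count scale, and function names are illustrative assumptions, not the implementation used in this work:

```python
import numpy as np

rng = np.random.default_rng(0)

I_bins, J_vox = 40, 20                 # sinogram bins, image voxels
A = rng.random((I_bins, J_vox))        # toy system matrix a_ij (hypothetical)
x_true = rng.random(J_vox) + 0.1
m = rng.poisson(A @ x_true * 50).astype(float)   # noisy measured data

def generalized_update(x, alpha=0.5, beta=0.5, smooth=None):
    """One iteration of (7) with weights w_i = alpha*q_i + beta*m_i
    (alpha=0 gives M-ISRA-type weights, beta=0 gives Q-ISRA-type weights)."""
    q = A @ x                          # expected data q^(k)
    w = alpha * q + beta * m           # per-bin weights
    if smooth is not None:
        w = smooth(w)                  # optional sinogram-space smoothing
    w = np.maximum(w, 1e-12)           # guard against division by zero
    num = A.T @ (m / w)                # weighted backprojection of m
    den = A.T @ (q / w)                # weighted backprojection of q
    return x * num / den

x = np.ones(J_vox)
for _ in range(100):
    x = generalized_update(x, alpha=0.0, beta=1.0)   # M-ISRA-style iterations
```

With alpha=1, beta=0 and no smoothing the weights equal q^{(k)}, and the update reduces to the usual ML-EM ratio of backprojections, consistent with the iteration-dependent choice discussed in the text.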
Allowing the weights to depend on the iteration number gives the general updating formula:

x_j^{(k+1)} = x_j^{(k)}\, \frac{\sum_{i=1}^{I} a_{ij}\, m_i / w_i^{(k)}}{\sum_{i=1}^{I} a_{ij}\, q_i^{(k)} / w_i^{(k)}} \qquad (8)

which can be regarded as a least-squares update with iteration-dependent weights. Note the perspective this gives: the ML-EM update is just a multiplication by a ratio of weighted backprojected images. It is well established that ML-EM converges to a maximum of a Poisson likelihood objective, and therefore equation (7) also converges for this particular choice of iteration-dependent w^{(k)}.

B. Case 2: the Image Space Reconstruction Algorithm

If the weights are chosen to be w_i^{(k)} = 1 then ISRA [4] is obtained:

x_j^{(k+1)} = x_j^{(k)}\, \frac{\sum_{i=1}^{I} a_{ij}\, m_i}{\sum_{i=1}^{I} a_{ij}\, q_i^{(k)}} \qquad (9)

It has already been established in the literature that this algorithm (9) converges to a non-negative least-squares estimate [5; 6; 7; 8], and so again equation (7) also converges for this particular choice of w.

C. New Cases

Hence two special cases have been considered, merely through the choice of weighting factors (at a given iteration k) used during backprojection. In addition to this, this work considers the case of using the measured sinogram data (with optional spatial smoothing) as weights during backprojection (required in the general updating formula (8)).

So when α=0, M-ISRA (with or without smoothed weights) is obtained, and when β=0, Q-ISRA (with or without smoothed weights) is obtained. Equation (11) allows various combinations of m and q, to give MQ-ISRA.

Just as an aside, w could also include a vector of offsets γ, to provide another new algorithm, very similar to ML-EM, if w = q^{(k)} + γ is chosen. The rationale for such a choice is that the resulting algorithm is guaranteed to avoid divisions by zero in sinogram space, which is a problem that has to be artificially rectified in conventional implementations of ML-EM.

Finally, it is worth noting the work of Anderson et al. [9] and also that of Teng and Zhang [10], who have developed similar methods. The key distinction in the present work is the consideration of different data weighting schemes and their impact.

III. METHODS

2D simulations of a 256x256 pixel slice from a realistic 3D brain phantom [11] (shown later in figure 7) were used to assess the different reconstruction methods. A system matrix based on a line-integral model was calculated and stored, and used for all forward and backprojection operations. The ground truth brain phantom (which was first scaled according to the desired mean count level) was forward projected to obtain a noise-free 2D sinogram for the given scaling (mean count) level. The various scaling levels of the ground truth image were used to control the noise level: the noise-free sinogram for a given scaling level was used as the mean from which 100 different Poisson noise realizations of the sinogram were generated. In this work ten different scaling levels were considered, to allow consideration of ten different noise levels (mean count levels ranging from 1.75k mean counts in a sinogram through to 350k mean counts).
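The simulation procedure just described can be sketched as follows; the tiny phantom and system matrix stand in for the 256x256 slice and the stored line-integral matrix, and all names and sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

A = rng.random((60, 30))          # stand-in for the stored line-integral system matrix
phantom = rng.random(30) + 0.05   # stand-in for the ground-truth brain phantom

def make_realizations(target_mean_counts, n_realizations=100):
    """Scale the phantom so its noise-free sinogram carries the target mean
    count level, then draw Poisson realizations with that sinogram as mean."""
    noise_free = A @ phantom
    scale = target_mean_counts / noise_free.sum()
    mean_sino = noise_free * scale                 # noise-free sinogram at this level
    sinos = rng.poisson(mean_sino, size=(n_realizations, mean_sino.size))
    return mean_sino, sinos

# ten count levels spanning the paper's range of 1.75k to 350k mean counts
levels = np.linspace(1.75e3, 3.5e5, 10)
mean_sino, sinos = make_realizations(levels[0])
```

Each row of `sinos` is one independent noise realization at the chosen count level, ready to be fed to each reconstruction method.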
[...]
3) ISRA with PSF (w^{(k)} = 1)
4) M-ISRA with PSF (w^{(k)} = m)
5) Q-ISRA with PSF (w^{(k)} = q^{(k)}) (should match method 2)
6) (0.5M+0.5Q)-ISRA with PSF (w^{(k)} = 0.5m + 0.5q^{(k)})
7) (0.5M+0.5Q, 2S)-ISRA with PSF, using the same sinogram weights as method 6, but with the weights smoothed by a Gaussian convolution kernel (σ = 2 sinogram bins)
8) (M, 2S)-ISRA with PSF, using the same sinogram weights as method 4, but with the weights smoothed by a Gaussian convolution kernel (σ = 2 bins) (i.e. w_i^{(k)} = Σ_h β_{ih} m_h, where the matrix elements {β_{ih}} correspond to applying a Gaussian convolution kernel with σ = 2 bins)
9) (Q, 2S)-ISRA with PSF, using the same sinogram weights as method 5, but with the weights smoothed by a Gaussian convolution kernel (σ = 2 bins) (i.e. w_i^{(k)} = Σ_h α_{ih} q_h^{(k)}, where the matrix elements {α_{ih}} correspond to applying a Gaussian convolution kernel with σ = 2 bins)

Also of interest, but not directly related to the task of interest, are bias and variance:

\mathrm{PixelBias}_{\mathrm{ROI}}^{(k)} = \frac{1}{P} \sum_{p \in \mathrm{ROI}} \frac{\frac{1}{N}\sum_{n=1}^{N} x_{np}^{(k)} - t_p}{t_p} \qquad (13)

where PixelBias_ROI is the average pixel-level bias in the ROI, x_{np}^{(k)} is pixel p of the iteration-k reconstruction from noise realization n, t_p is the true value, N is the number of realizations and P is the number of pixels in the ROI. Finally the average pixel-level variance in the ROI is given by:

\mathrm{PixelVar}_{\mathrm{ROI}}^{(k)} = \frac{1}{P} \sum_{p \in \mathrm{ROI}} \frac{1}{N-1} \sum_{n=1}^{N} \left( x_{np}^{(k)} - \frac{1}{N}\sum_{n'=1}^{N} x_{n'p}^{(k)} \right)^2 \qquad (14)

A best (i.e. lowest achieved) MAE analysis as a function of count level was carried out for each reconstruction algorithm
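A hedged sketch of these ROI figures of merit (MAE as a percentage, plus pixel-level bias and variance over the noise realizations, in the spirit of (13) and (14)); the function and variable names are illustrative, not the authors' code:

```python
import numpy as np

def roi_metrics(recons, truth, roi):
    """recons: (N, n_pixels) array of reconstructions from N noise realizations;
    truth: (n_pixels,) ground-truth image; roi: boolean mask selecting ROI pixels."""
    x = recons[:, roi]                               # (N, P) ROI pixel values
    t = truth[roi]                                   # (P,) true ROI values
    mae = 100.0 * np.mean(np.abs(x - t) / t)         # mean absolute error, %
    bias = np.mean((x.mean(axis=0) - t) / t)         # average pixel-level bias, cf. (13)
    var = np.mean(x.var(axis=0, ddof=1))             # average pixel-level variance, cf. (14)
    return mae, bias, var
```

Tracking `mae` per iteration and per count level, then taking the minimum over iterations, yields the lowest-achieved MAE curves of the kind reported in the figures.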
[Figure 1: three graphs of MAE (%) vs iterations; legend includes Q-ISRA PSF, (0.5M + 0.5Q)-ISRA PSF, (0.5M + 0.5Q, 2.0S)-ISRA PSF, (M, 2.0S)-ISRA PSF, (Q, 2.0S)-ISRA PSF, FBP ramp smooth]

Fig 1. Example graphs of mean absolute error based on 100 realisations (given as a percentage %) in three different ROIs. FBP is shown for the case of iteratively re-applying a post-smooth with a Gaussian convolution kernel of σ = 0.6 pixels. Note the graphs are zoomed into the main region of the graph of interest, so not all data are shown.

[Figure 2: MAE (%) vs iterations, same legend as figure 1]

Fig 2. Similar to the first sub-figure in figure 1, but now for the case of the ROI being reduced in size (clear of the region edges through use of an erosion).

[Figure 3: MAE (%) vs iterations; panel title: Temporal lobe + Thalamus (Central pixel ROI), mean counts: 113908; legend: MLEM no PSF, MLEM PSF, ISRA PSF, M-ISRA PSF, Q-ISRA PSF, (0.5M + 0.5Q)-ISRA PSF, (0.5M + 0.5Q, 2.0S)-ISRA PSF, (M, 2.0S)-ISRA PSF, (Q, 2.0S)-ISRA PSF]

Fig 3. Similar to the third sub-figure in figure 1, but for a single central pixel in the ROI.

Figures 2 and 3 show similar information to figure 1, but for when the ROI is reduced in size and for the case of a central single pixel within each ROI. The trends are similar to figure 1. The good performance of FBP can be attributed to post-reconstruction smoothing, and the fact that the regions considered were uniform. The results strongly suggest the need to consider post-reconstruction smoothing (as well as iteration number) for the iterative methods.
[Figure 4: two graphs of lowest-achieved MAE (%) vs mean count level (0.5 to 3.5 ×10^5 counts)]

Fig 4. The lowest-achieved MAE (%) as a function of count level, for two different ROIs. Full ROI size was used in both cases. For each reconstruction method at each count level, the iteration at which the method delivers the lowest MAE (from 100 data realisations) was selected to form these graphs.
ACKNOWLEDGMENT
REFERENCES
[1] L.A. Shepp and Y. Vardi, Maximum likelihood reconstruction for emission tomography. IEEE Trans Med Imaging 1 (1982) 113-122.
[2] K. Lange and R. Carson, EM reconstruction algorithms for emission and transmission tomography. J Comput Assist Tomogr 8 (1984) 306-316.
[3] G.I. Angelis, A.J. Reader, F.A. Kotasidis, W.R. Lionheart, and J.C. Matthews, The performance of monotonic and new non-monotonic gradient ascent reconstruction algorithms for high-resolution neuroreceptor PET imaging. Physics in Medicine and Biology 56 (2011) 3895-3917.
[4] M.E. Daube-Witherspoon and G. Muehllehner, An iterative image space reconstruction algorithm suitable for volume ECT. IEEE Transactions on Medical Imaging 5 (1986) 61-66.
[5] A.R. De Pierro, On the convergence of the iterative image space reconstruction algorithm for volume ECT. IEEE Transactions on Medical Imaging 6 (1987) 174-175.
[6] D.M. Titterington, On the iterative image space reconstruction algorithm for ECT. IEEE Transactions on Medical Imaging 6 (1987) 52-56.
[7] G.E.B. Archer and D.M. Titterington, The iterative image space reconstruction algorithm (ISRA) as an alternative to the EM algorithm for solving positive linear inverse problems. Statistica Sinica 5 (1995) 77-96.
[8] J. Han, L.X. Han, M. Neumann, and U. Prasad, On the rate of convergence of the image space reconstruction algorithm. Operators and Matrices 3 (2009) 41-58.
[9] J.M.M. Anderson, B.A. Mair, M. Rao, and C.H. Wu, Weighted least-squares reconstruction methods for positron emission tomography. IEEE Transactions on Medical Imaging 16 (1997) 159-165.
[10] Y.Y. Teng and T. Zhang, Iterative reconstruction algorithms with alpha-divergence for PET imaging. Computerized Medical Imaging and Graphics 35 (2011) 294-301.
[11] A. Rahmim, K. Dinelle, J.C. Cheng, M.A. Shilov, W.P. Segars, S.C. Lidstone, S. Blinder, O.G. Rousset, H. Vajihollahi, B.M.W. Tsui, D.F. Wong, and V. Sossi, Accurate event-driven motion compensation in high-resolution PET incorporating scattered and random events. IEEE Transactions on Medical Imaging 27 (2008) 1018-1033.
[12] A.J. Reader, P.J. Julyan, H. Williams, D.L. Hastings, and J. Zweit, EM algorithm system modeling by image-space techniques for PET reconstruction. IEEE Transactions on Nuclear Science 50 (2003) 1392-1397.