Analysis of Fusion Techniques With Application To Biomedical Images: A Review
1. INTRODUCTION
Image fusion is the process of combining relevant information from a set of images of the same scene into a single image, such that the resultant fused image contains more complete information than any of the individual inputs. The input images can be multi-modal, multi-focal, multi-sensor or multi-temporal. Image fusion finds application in navigation, medical diagnosis, object detection and recognition, satellite imaging, etc. Image fusion algorithms can be categorized into different levels: low, middle and high, or equivalently pixel, feature and decision levels. Pixel-level methods work either in the spatial domain or in the transform domain; they operate directly on the pixels obtained at the imaging sensor outputs, while feature-level algorithms operate on features extracted from the source images. Several fusion algorithms are available, ranging from simple pixel-based schemes to sophisticated wavelet- and PCA-based ones. An image fusion system has several advantages over a single image source: the resultant fused image should have a higher signal-to-noise ratio, increased robustness and reliability in the event of sensor failure, extended parameter coverage, and should render a more complete picture of the scene. Pixel-based fusion methods average the intensity values of the source images pixel by pixel, which can lead to undesired side effects in the resultant image. Recently, researchers have recognized that it is more meaningful to combine objects or regions rather than pixels. Region-based algorithms have several advantages over pixel-based ones: they are less sensitive to noise, give better contrast and are less affected by mis-registration, but at the cost of higher complexity. Section 2 describes the different image fusion techniques, Section 3 presents the various performance measures, and Section 4 summarises the issues observed during the study and concludes with a comparative study of the techniques.
2. IMAGE FUSION TECHNIQUES
Image fusion techniques can enhance a digital image without degrading it. The enhancement methods are of two types, namely spatial domain methods and frequency domain methods. Spatial domain methods deal directly with the pixels of the input images; fusion methods such as simple maximum, simple minimum, averaging, principal component analysis (PCA) and IHS-based methods fall under the spatial domain approaches. In transform domain methods, the image is first transformed into the frequency domain; fusion methods such as the DWT fall under this category. Figure 2.1 shows the classification of the different image fusion techniques.
©IJEERT www.ijeert.org 70
Hamsalekha.R & Dr. Rehna V. J
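As an illustration, the simple spatial-domain fusion rules named above (simple maximum, simple minimum, averaging) can be sketched for two co-registered greyscale images held as NumPy arrays. This is a minimal sketch under that assumption, not a production implementation:

```python
import numpy as np

def fuse_average(a, b):
    """Pixel-by-pixel average of the two source images."""
    return (a.astype(float) + b.astype(float)) / 2.0

def fuse_maximum(a, b):
    """Select the brighter pixel from either source at each location."""
    return np.maximum(a, b)

def fuse_minimum(a, b):
    """Select the darker pixel from either source at each location."""
    return np.minimum(a, b)

# Two tiny 2x2 source "images" (hypothetical values, for illustration)
A = np.array([[10, 200], [30, 40]], dtype=float)
B = np.array([[50, 100], [20, 80]], dtype=float)
F = fuse_average(A, B)   # [[30, 150], [25, 60]]
```

The averaging rule is the one noted later in this review as prone to undesired side effects such as reduced contrast, since salient detail in one source is diluted by the other.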
maximum variance. The second principal component is constrained to lie in the subspace perpendicular to the first; within this subspace, it points in the direction of maximum variance. The third principal component is taken in the direction of maximum variance in the subspace perpendicular to the first two, and so on. PCA is also known as the Karhunen-Loève transform or the Hotelling transform. Unlike the FFT or DCT, PCA does not have a fixed set of basis vectors; its basis depends on the data set.
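A common formulation of PCA-based fusion, sketched here under the assumption that the fusion weights are taken from the first principal component of the two flattened source images (the exact weighting scheme varies between papers):

```python
import numpy as np

def pca_fusion_weights(a, b):
    """Weights from the first principal component of the two source images."""
    data = np.vstack([a.ravel(), b.ravel()])   # 2 x N matrix: one row per image
    cov = np.cov(data)                         # 2 x 2 covariance of the two rows
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    pc = vecs[:, -1]                           # direction of maximum variance
    return pc / pc.sum()                       # normalize weights to sum to 1

def pca_fuse(a, b):
    """Weighted sum of the sources using the PCA-derived weights."""
    w = pca_fusion_weights(a, b)
    return w[0] * a + w[1] * b

# Hypothetical example: B is a rescaled copy of A, so it carries more variance
A = np.arange(16.0).reshape(4, 4)
B = 2.0 * A + 1.0
F = pca_fuse(A, B)   # B receives the larger weight (2/3 here)
```

The source with greater variance (taken as a proxy for information content) receives the larger weight, which is the usual rationale for PCA fusion.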
2.6. Discrete Wavelet Transform Method (DWT)
Wavelets are defined as finite-duration oscillatory functions with zero average value and finite energy. They are well suited to transient signal analysis, and their irregularity and good localization properties make them a better basis for analysing signals with discontinuities. Wavelets are described by two functions: the scaling function φ(t), also known as the "father wavelet", and the wavelet function ψ(t), the "mother wavelet". The mother wavelet ψ(t) undergoes translation and scaling operations to give a self-similar wavelet family, as given by the equation:

ψ_(a,b)(t) = (1/√a) ψ((t − b)/a),  a > 0

where a is the scaling parameter and b the translation parameter. The wavelet transform decomposes the image into low-high, high-low and high-high spatial frequency bands at different scales, and a low-low band at the coarsest scale, as shown in Fig. 2.3. The LL band contains the average image information, whereas the other bands contain directional information due to spatial orientation. Higher absolute values of the wavelet coefficients in the high-frequency bands correspond to salient features such as edges or lines. The basic steps performed in wavelet-based image fusion are given in Figure 2.2.
The wavelet-based approach is appropriate for performing fusion tasks for the following reasons:
It is a multi-resolution approach well suited to managing different image resolutions, and is useful in a number of image processing applications including image fusion.
The discrete wavelet transform allows the image to be decomposed into different kinds of coefficients while preserving the image information. Coefficients coming from different images can be appropriately combined to obtain new coefficients, so that the information in the original images is collected appropriately.
Once the coefficients are merged, the final fused image is obtained through the inverse discrete wavelet transform (IDWT), in which the information carried by the merged coefficients is also preserved.
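The decompose-combine-reconstruct procedure can be sketched with a single-level Haar transform written directly in NumPy (a library such as PyWavelets would normally be used; this self-contained version assumes even image dimensions). The low-low band is averaged and the detail bands are combined by the maximum-absolute-value rule mentioned above:

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar decomposition into LL, HL, LH, HH sub-bands."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    LL = (a + b + c + d) / 2.0
    HL = (a - b + c - d) / 2.0
    LH = (a + b - c - d) / 2.0
    HH = (a - b - c + d) / 2.0
    return LL, HL, LH, HH

def haar_idwt2(LL, HL, LH, HH):
    """Exact inverse of haar_dwt2."""
    h, w = LL.shape
    x = np.empty((2 * h, 2 * w))
    x[0::2, 0::2] = (LL + HL + LH + HH) / 2.0
    x[0::2, 1::2] = (LL - HL + LH - HH) / 2.0
    x[1::2, 0::2] = (LL + HL - LH - HH) / 2.0
    x[1::2, 1::2] = (LL - HL - LH + HH) / 2.0
    return x

def max_abs(p, q):
    """Keep the coefficient with larger magnitude (salient edges win)."""
    return np.where(np.abs(p) >= np.abs(q), p, q)

def dwt_fuse(img1, img2):
    """Fuse: average the approximation band, max-abs the detail bands."""
    LL1, HL1, LH1, HH1 = haar_dwt2(img1)
    LL2, HL2, LH2, HH2 = haar_dwt2(img2)
    return haar_idwt2((LL1 + LL2) / 2.0,
                      max_abs(HL1, HL2),
                      max_abs(LH1, LH2),
                      max_abs(HH1, HH2))
```

Because the Haar pair is exactly invertible, fusing an image with itself returns the image unchanged, which is a useful sanity check for any fusion rule.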
Researchers have made a few attempts at the fusion of MR and CT images, and most of these attempts apply the wavelet transform for this purpose. Due to the limited ability of the wavelet transform to deal with images having curved shapes, the application of the curvelet transform to MR and CT image fusion is presented in this work. The curvelet transform of an image P can be summarized in the following steps:
The image P is split into three sub-bands Δ1, Δ2 and P3 using the additive wavelet transform.
Tiling is performed on the sub-bands Δ1 and Δ2.
The discrete Ridgelet transform is performed on each tile of the sub-bands Δ1 and Δ2.
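The additive wavelet split in the first step can be sketched as follows, with a simple 3x3 box filter standing in for the B3-spline smoothing kernel usually used in the à trous scheme (an assumption made here for self-containment). The detail planes are differences of successively smoothed images, so the residual plus the detail planes reconstructs the image exactly:

```python
import numpy as np

def smooth(x):
    """3x3 box smoothing with edge replication (stand-in for a B3-spline kernel)."""
    p = np.pad(x, 1, mode='edge')
    return sum(p[i:i + x.shape[0], j:j + x.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def additive_wavelet_split(img):
    """Split img into detail planes (Δ1, Δ2) and a coarse residual (P3)."""
    c0 = img.astype(float)
    c1 = smooth(c0)        # first smoothed approximation
    c2 = smooth(c1)        # second, coarser approximation
    d1 = c0 - c1           # finest detail plane, Δ1
    d2 = c1 - c2           # coarser detail plane, Δ2
    p3 = c2                # smooth residual, P3
    return d1, d2, p3
```

The additive property (Δ1 + Δ2 + P3 = P) is what distinguishes this decomposition from the critically sampled DWT: no down-sampling occurs, so sub-bands keep the full image size and tiling can be applied to them directly.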
2.8.1. Sub Band Filtering
It is used to decompose the image into additive components, each of which is a sub-band of that image. This step isolates the different frequency components of the image into different planes without down-sampling, as is done in the traditional wavelet transform.
2.8.2. Tiling
Tiling is the process by which the image is divided into overlapping tiles. These tiles are small in dimension, so that curved lines are transformed into approximately straight lines within the sub-bands Δ1 and Δ2. Tiling improves the ability of the curvelet transform to handle curved edges.
2.8.3. Ridgelet Transform
The Ridgelet transform belongs to the family of discrete transforms employing basis functions. To facilitate its mathematical representation, it can be viewed as a wavelet analysis in the Radon domain. The Radon transform itself is a tool for shape detection, so the Ridgelet transform is primarily a tool for detecting the shapes of objects in an image.
2.9. Graph Cut Optimization Technique
Exactly one label is given to each pixel in the image, with associated data and smoothness costs assigned to the links in the graph. To formulate this optimization, let G = (V, E) be a weighted graph, with V a set of nodes and E a set of weighted edges. V contains a node for each pixel in Ω and for each label in Lα. There is an edge e{p,q} between every pair of nodes p, q. A cut C ⊂ E is a set of edges that separates all the label nodes from each other, thereby creating a sub-graph for each label. The minimum-cut problem consists of finding a cut C with the lowest cost; the cost of this minimum cut, denoted |C|, equals the sum of the edge weights in C. By properly setting the weights of the graph, one can use a series of swap moves from combinatorial optimization to efficiently compute the minimum-cost cuts corresponding to a minimum of the energy functional E.
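The energy that the cut minimizes (per-pixel data costs plus pairwise smoothness costs) can be illustrated with a hypothetical toy example: a 1-D row of four pixels, minimized here by brute force. Real graph-cut implementations compute a minimum cut with swap or expansion moves rather than enumerating labelings:

```python
import itertools

def labeling_energy(labels, data_cost, smooth_cost):
    """E(labels) = sum of per-pixel data costs + pairwise smoothness costs."""
    e = sum(data_cost[p][l] for p, l in enumerate(labels))
    e += sum(smooth_cost(labels[p], labels[p + 1])
             for p in range(len(labels) - 1))
    return e

def best_labeling(n_pixels, n_labels, data_cost, smooth_cost):
    """Exhaustive minimization (tractable only for tiny problems; real
    implementations use min-cut / swap moves instead)."""
    return min(itertools.product(range(n_labels), repeat=n_pixels),
               key=lambda ls: labeling_energy(ls, data_cost, smooth_cost))

# Potts smoothness model: constant penalty when neighbour labels differ
potts = lambda a, b: 0 if a == b else 2

# Four pixels, two labels; the data term strongly prefers labels 0,0,1,1
data = [[0, 5], [1, 4], [4, 1], [5, 0]]
print(best_labeling(4, 2, data, potts))   # (0, 0, 1, 1)
```

The smoothness term is what keeps the labeling piecewise-constant: raising the Potts penalty far enough would force all four pixels onto a single label despite the data costs.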
Fig 2.6. An illustration of the graph-cut problem: (a) a binary graph showing the data cost of assigning a label to the sink/source and the smoothness cost of assigning labels to adjacent pixel locations; (b) the final labelling of the graph.
3. IMAGE QUALITY METRICS
Because of the limited focus depth of the optical lens, it is not possible to get a single image in which all relevant objects are in focus; the quality of the fused result therefore needs to be assessed objectively.
Image quality is a characteristic of an image that measures the perceived image degradation. Imaging systems such as fusion algorithms may introduce some amount of distortion or artefacts into the signal, so quality assessment is an important problem. Image quality assessment methods can be broadly classified into two categories: full-reference (FR) methods and no-reference (NR) methods. In FR methods, the quality of an image is measured by comparison with a reference image that is assumed to be perfect in quality; NR methods do not employ a reference image. The image quality metrics considered and implemented here fall into the FR category. In the following sections, the SSIM and some other image quality metrics implemented to assess the quality of the fused images are analysed along with their performance measures.
3.1. Structural Similarity Index Measure (SSIM )
It is defined as a measure of structural information change, which can provide a good approximation to perceived image distortion. The SSIM compares local patterns of pixel intensities that have been normalized for luminance and contrast. It is an improvement over traditional measures such as PSNR and MSE. The SSIM index is a decimal value between 0 and 1: a value of 0 means zero correlation with the original image, and 1 means exactly the same image. The index satisfies:
1. Symmetry: S(x, y) = S(y, x)
2. Boundedness: S(x, y) ≤ 1
3. Unique maximum: S(x, y) = 1 if and only if x = y (in discrete representations, xi = yi for all i = 1, 2, ..., N)
SSIM can be calculated as

SSIM(A, B) = [(2·μA·μB + C1)(2·σAB + C2)] / [(μA² + μB² + C1)(σA² + σB² + C2)]

where the local statistics are computed under a Gaussian filter window G:
μA = G ⊗ A, μB = G ⊗ B (local means),
σA² = G ⊗ A² − μA², σB² = G ⊗ B² − μB²,
σAB = G ⊗ (A·B) − μA·μB,
and the stabilizing constants are C1 = (K1·L)² and C2 = (K2·L)², with L = 255 and K1, K2 in the range [0.01, 0.03].
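As a sketch, the SSIM index can be computed from global image statistics (the full windowed form applies the same formula under a sliding Gaussian window and averages the result; K1 = 0.01 and K2 = 0.03 are assumed here):

```python
import numpy as np

def ssim_global(a, b, L=255, K1=0.01, K2=0.03):
    """SSIM computed from global image statistics (no sliding window)."""
    a, b = a.astype(float), b.astype(float)
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov_ab = ((a - mu_a) * (b - mu_b)).mean()
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2   # stabilizing constants
    return ((2 * mu_a * mu_b + C1) * (2 * cov_ab + C2)) / \
           ((mu_a ** 2 + mu_b ** 2 + C1) * (var_a + var_b + C2))
```

For identical images the numerator and denominator coincide, giving the unique maximum of 1; an inverted image has negative covariance with the original, pulling the index well below 1.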
3.2. Laplacian Mean Squared Error (LMSE)
In the Laplacian mean squared error, the error is calculated from the Laplacian values of the expected (reference) and obtained (fused) images:

LMSE = Σᵢ Σⱼ [∇²A(i, j) − ∇²B(i, j)]² / Σᵢ Σⱼ [∇²A(i, j)]²

where ∇² denotes the Laplacian operator. In the ideal situation, with the fused and perfect images identical, the LMSE value is 0; otherwise the error value ranges from 0 to 1.
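A minimal sketch of the measure, assuming the standard 4-neighbour finite-difference stencil for the Laplacian (evaluated on interior pixels only):

```python
import numpy as np

def laplacian(x):
    """Discrete Laplacian via the 4-neighbour stencil, interior pixels only."""
    return (x[:-2, 1:-1] + x[2:, 1:-1] + x[1:-1, :-2] + x[1:-1, 2:]
            - 4.0 * x[1:-1, 1:-1])

def lmse(ref, fused):
    """Laplacian mean squared error between reference and fused images."""
    lr = laplacian(ref.astype(float))
    lf = laplacian(fused.astype(float))
    return ((lr - lf) ** 2).sum() / (lr ** 2).sum()
```

Because the Laplacian responds to edges, LMSE penalizes differences in edge structure rather than raw intensity differences, which is its advantage over plain MSE for fusion assessment.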
3.3. Mean Squared Error (MSE)
The mean squared error is a measure of image quality; a large MSE value means that the image is of poor quality. The mean squared error between the reference image and the fused image is

MSE = (1/(M·N)) Σᵢ₌₁ᴹ Σⱼ₌₁ᴺ [A(i, j) − B(i, j)]²

where A(i, j) and B(i, j) are the pixel values of the reference and fused images respectively, and M and N are the numbers of rows and columns.
3.4. Peak signal to Noise Ratio (PSNR)
PSNR is the ratio between the maximum possible power of the signal and the power of the corrupting noise that distorts the image. The peak signal-to-noise ratio can be represented as

PSNR (dB) = 20·log₁₀(255 / √MSE)

where A is the fused image, B the perfect (reference) image, i the pixel row index, j the pixel column index, and M and N the numbers of rows and columns respectively.
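The two measures can be sketched together (a peak value of 255 is assumed, i.e. 8-bit images):

```python
import numpy as np

def mse(ref, fused):
    """Mean squared error over all M x N pixels."""
    return ((ref.astype(float) - fused.astype(float)) ** 2).mean()

def psnr(ref, fused, peak=255.0):
    """Peak signal-to-noise ratio in dB (infinite for identical images)."""
    m = mse(ref, fused)
    return float('inf') if m == 0 else 20.0 * np.log10(peak / np.sqrt(m))
```

Note the zero-MSE guard: identical images would otherwise cause a division by zero inside the logarithm.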
3.5. Entropy (EN)
Entropy is used to evaluate the quantity of information contained in an image; a higher entropy value implies that the fused image carries more information than the reference image. Entropy is defined as

E = − Σᵢ₌₀ᴸ⁻¹ pᵢ log₂ pᵢ

where L is the total number of grey levels and P = {p₀, p₁, ..., p_{L−1}} is the probability distribution of the levels.
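A minimal sketch, assuming 256 grey levels and a normalized histogram as the probability distribution:

```python
import numpy as np

def entropy(img, levels=256):
    """Shannon entropy (in bits) of the grey-level distribution of img."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()          # empirical probability of each grey level
    p = p[p > 0]                   # drop empty bins (0 * log 0 is taken as 0)
    return -(p * np.log2(p)).sum()
```

A constant image has entropy 0, while an image using two grey levels equally often has entropy 1 bit.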
3.6. Structural Content (SC)
The structural content measure compares two images through the amount of common content in small image patches; the patches to be compared are chosen using a 2-D continuous wavelet that acts as a low-level corner detector. A large structural content (SC) value means that the image is of poor quality. Together with the normalized cross correlation (NCC) and the normalized absolute error (NAE), it is defined as

SC = Σᵢ Σⱼ A(i, j)² / Σᵢ Σⱼ B(i, j)²
NCC = Σᵢ Σⱼ A(i, j)·B(i, j) / Σᵢ Σⱼ A(i, j)²
NAE = Σᵢ Σⱼ |A(i, j) − B(i, j)| / Σᵢ Σⱼ |A(i, j)|
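A sketch of the three global measures, assuming the standard full-reference definitions (A the reference image, B the fused image):

```python
import numpy as np

def structural_content(ref, fused):
    """SC: ratio of total squared energy; 1 for identical images."""
    return (ref.astype(float) ** 2).sum() / (fused.astype(float) ** 2).sum()

def ncc(ref, fused):
    """Normalized cross correlation; 1 for identical images."""
    r, f = ref.astype(float), fused.astype(float)
    return (r * f).sum() / (r ** 2).sum()

def nae(ref, fused):
    """Normalized absolute error; 0 for identical images."""
    r, f = ref.astype(float), fused.astype(float)
    return np.abs(r - f).sum() / np.abs(r).sum()
```

For a perfect fusion these evaluate to SC = 1, NCC = 1 and NAE = 0, which gives a quick consistency check alongside MSE and PSNR.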
4. CONCLUSION
This paper provides a review of different image fusion techniques. The spatial domain provides high spatial resolution, but spectral distortion is its main drawback; transform domain image fusion is therefore used. Based on the analysis of various transform domain techniques, such as the wavelet transform, discrete wavelet transform, curvelet transform and graph-cut techniques, it is concluded that each technique is meant for a specific application, and one technique has an edge over another only with respect to a particular application. Finally, the image quality assessment parameters have been reviewed to determine the role of each individual parameter in judging the quality of the fused image.
4.1. Comparative Study of Various Image Fusion Techniques
Based on the study, comparisons between the different existing fusion techniques have been made and analysed theoretically along with their performance measures, as shown in Table 1 below.