
International Journal of Computer Applications Technology and Research Volume 2 Issue 6, 676 - 679, 2013

Implementation of Medical Image Fusion Using DWT Process on FPGA


D. Khasim Hussain, P.R.R.M. Engineering College, Shabad, Ranga Reddy, India
C. Laxmikanth Reddy, P.R.R.M. Engineering College, Shabad, Ranga Reddy, India
V. Ashok Kumar, P.R.R.M. Engineering College, Shabad, Ranga Reddy, India

Abstract: Image fusion is a data fusion technology whose main research content is images. It refers to techniques that integrate multiple images of the same scene captured by several image sensors, or multiple images of the same scene captured at different times by one sensor. The wavelet transform has good time-frequency characteristics and has been applied successfully in the image processing field. Nevertheless, its excellent one-dimensional characteristics cannot simply be extended to two or more dimensions: separable wavelets spanned by one-dimensional wavelets have limited directivity. Experiments show that the method can carry useful information from the source images into the fused image so that a clear image is obtained. Selection principles for the low- and high-frequency coefficients are given for the different frequency subbands produced by the wavelet transform: for the low-frequency coefficients, local-area variance is chosen as the measuring criterion; for the high-frequency coefficients, the window property and local characteristics of the pixels are analyzed.

Keywords: fusion-oriented images, FPGA, applications of computer vision, DWT, wavelet coefficient maps

1. INTRODUCTION
The actual fusion process can take place at different levels of information representation [1]; a generic categorization sorts these levels in ascending order of abstraction: signal, pixel, feature, and symbolic level. This work focuses on the so-called pixel-level fusion process, where a composite image has to be built from several input images. To date, the result of pixel-level image fusion is considered primarily as something to be presented to the human observer, especially in image sequence fusion (where the input data consists of image sequences). A possible application is the fusion of forward-looking infrared (FLIR) and low-light visible (LLTV) images obtained by an airborne sensor platform to help a pilot navigate in poor weather conditions or darkness.

In pixel-level image fusion [2], some generic requirements can be imposed on the fusion result. The fusion process should preserve all relevant information of the input imagery in the composite image (pattern conservation). The fusion scheme should not introduce any artifacts or inconsistencies which would distract the human observer or subsequent processing stages. The fusion process should be shift and rotation invariant, i.e. the fusion result should not depend on the location or orientation of an object in the input imagery. The main target of these techniques is to produce an effective representation of the combined multispectral image data, i.e., an application-oriented visualization in a reduced data set [3][8].

In the case of image sequence fusion, the additional problem of temporal stability and consistency of the fused image sequence arises. The human visual system is primarily sensitive to moving light stimuli, so moving artifacts or time-dependent contrast changes introduced by the fusion process are highly distracting to the human observer. Two additional requirements therefore apply. Temporal stability: the fused image sequence should be temporally stable, i.e. gray-level changes in the fused sequence must only be caused by gray-level changes in the input sequences and must not be introduced by the fusion scheme itself. Temporal consistency: gray-level changes occurring in the input sequences must be present in the fused sequence without any delay or contrast change.

2. IMAGE FUSION [1]


2.1 Image Fusion Process
Figure 1: Block diagram of the image fusion process. The CT and MRI input images are each decomposed by the DWT into wavelet coefficient maps; the coefficient maps are fused, and the inverse transform produces the fused image.

When constructing each wavelet coefficient of the fused image, we have to determine which source image describes that coefficient better. This information is kept in the fusion decision map, which has the same size as the original image. Each value is the index of the source image that is more informative for the corresponding wavelet coefficient; thus, a decision is actually made for each coefficient. There are two frequently used methods in previous research. One way to decide on a coefficient of the fused image is to consider only the corresponding coefficients in the source images; this is called a pixel-based fusion rule. The other way is to consider not only the corresponding coefficients but also their close neighbors, say a 3x3 or 5x5 window; this is called a window-based fusion rule.
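
To make the two rules concrete, the following C++ sketch implements one plausible window-based variant; the 3x3 activity measure used here (the sum of absolute coefficient values) is our assumption for illustration, while the paper itself names local-area variance as the criterion for the low-frequency coefficients. For each position it compares the activity of the two source coefficient maps and records the index of the winning source in the decision map.

#include <cmath>
#include <cstddef>
#include <vector>

using Map = std::vector<std::vector<double>>;

// Local "activity": sum of absolute coefficient values in the 3x3
// window centred on (r, c), clamped at the image borders.
static double activity3x3(const Map& m, std::size_t r, std::size_t c) {
    const std::ptrdiff_t rows = static_cast<std::ptrdiff_t>(m.size());
    const std::ptrdiff_t cols = static_cast<std::ptrdiff_t>(m[0].size());
    double a = 0.0;
    for (std::ptrdiff_t dr = -1; dr <= 1; ++dr)
        for (std::ptrdiff_t dc = -1; dc <= 1; ++dc) {
            const std::ptrdiff_t rr = static_cast<std::ptrdiff_t>(r) + dr;
            const std::ptrdiff_t cc = static_cast<std::ptrdiff_t>(c) + dc;
            if (rr >= 0 && rr < rows && cc >= 0 && cc < cols)
                a += std::fabs(m[rr][cc]);
        }
    return a;
}

// Window-based fusion rule: keep the coefficient from the source whose
// neighbourhood is more active; 'decision' is the fusion decision map,
// holding the index (0 or 1) of the source chosen at each position.
Map fuseWindowBased(const Map& a, const Map& b,
                    std::vector<std::vector<int>>& decision) {
    Map fused = a;
    decision.assign(a.size(), std::vector<int>(a[0].size(), 0));
    for (std::size_t r = 0; r < a.size(); ++r)
        for (std::size_t c = 0; c < a[0].size(); ++c)
            if (activity3x3(b, r, c) > activity3x3(a, r, c)) {
                fused[r][c] = b[r][c];
                decision[r][c] = 1;
            }
    return fused;
}

A pixel-based rule is simply the special case in which the window shrinks to the single corresponding coefficient.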


This method takes into account the fact that there is usually high correlation among neighboring pixels. Calculating wavelet coefficients at every possible scale is a fair amount of work and generates a large amount of data. If the scales and positions are chosen based on powers of two, the so-called dyadic scales and positions, then calculating the wavelet coefficients is efficient and just as accurate. This is what the discrete wavelet transform (DWT) provides. The 2-D subband decomposition is simply an extension of 1-D subband decomposition: the entire process is carried out by executing the 1-D subband decomposition twice, first in one direction (horizontal), then in the orthogonal (vertical) direction. For example, the low-pass subband (Li) resulting from the horizontal direction is further decomposed in the vertical direction, leading to the LLi and LHi subbands. Similarly, the high-pass subband (Hi) is further decomposed into HLi and HHi. After one level of transform, the image can be decomposed further by applying the 2-D subband decomposition to the existing LLi subband. This iterative process results in multiple transform levels. In Fig. 4, the first level of transform results in LH1, HL1, and HH1, in addition to LL1, which is further decomposed into LH2, HL2, HH2, and LL2 at the second level; the information of LL2 is then used for the third-level transform. The subband LLi is a low-resolution subband, and the high-pass subbands LHi, HLi, and HHi are the horizontal, vertical, and diagonal subbands, respectively, since they represent the horizontal, vertical, and diagonal residual information of the original image.
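
As a concrete sketch of the row-then-column decomposition just described, the C++ fragment below computes one level of the 2-D DWT using the Haar averaging/differencing pair; the paper does not name its wavelet filters, so the Haar kernel and the even image dimensions are our assumptions.

#include <cstddef>
#include <vector>

using Image = std::vector<std::vector<double>>;

// One level of 1-D Haar analysis: each even/odd pair becomes an average
// (low-pass half, Li) followed by a difference (high-pass half, Hi).
static std::vector<double> haar1d(const std::vector<double>& x) {
    const std::size_t half = x.size() / 2;
    std::vector<double> y(x.size());
    for (std::size_t i = 0; i < half; ++i) {
        y[i]        = (x[2 * i] + x[2 * i + 1]) / 2.0;  // Li
        y[half + i] = (x[2 * i] - x[2 * i + 1]) / 2.0;  // Hi
    }
    return y;
}

// One level of the 2-D transform: filter all rows (horizontal pass),
// then all columns (vertical pass). Afterwards the quadrants hold
// LL (top-left), HL (top-right), LH (bottom-left) and HH (bottom-right).
void dwt2dOneLevel(Image& img) {
    const std::size_t rows = img.size(), cols = img[0].size();
    for (std::size_t r = 0; r < rows; ++r)
        img[r] = haar1d(img[r]);                 // horizontal direction
    std::vector<double> column(rows);
    for (std::size_t c = 0; c < cols; ++c) {     // vertical direction
        for (std::size_t r = 0; r < rows; ++r) column[r] = img[r][c];
        column = haar1d(column);
        for (std::size_t r = 0; r < rows; ++r) img[r][c] = column[r];
    }
}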

Figure 2: Fusion process on the FPGA kit. The registered source images (CT scan and MRI scan) are decomposed by the DWT into wavelet coefficient maps; the fusion rules, executed on the MicroBlaze processor, produce the fused wavelet coefficient map, and the IDWT yields the fused image.

In our research, we consider objects to carry the information of interest; each pixel, or small neighborhood of pixels, is just one part of an object. Thus, we propose a region-based fusion scheme. When making the decision on each coefficient, we consider not only the corresponding coefficients and their close neighborhoods, but also the regions the coefficients belong to, which we take to represent the objects of interest. More details of the scheme are provided in the following.

2.2 Wavelet Transform


Wavelets are mathematical functions, defined over a finite interval and having an average value of zero, that transform data into different frequency components and represent each component with a resolution matched to its scale.
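
In symbols (a standard textbook formulation, stated here for completeness rather than quoted from the paper), the zero-average property and the dyadic wavelet family generated from a mother wavelet psi read:

\int_{-\infty}^{\infty} \psi(t)\,dt = 0, \qquad \psi_{j,k}(t) = 2^{j/2}\,\psi(2^{j}t - k), \quad j, k \in \mathbb{Z}.

Restricting the scales and positions to this integer grid is exactly the dyadic choice on which the discrete wavelet transform of Section 2.4 is built.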


Figure 4: Subband labeling scheme for a three-level 2-D wavelet transform

To obtain a two-dimensional wavelet transform, the one-dimensional transform is applied first along the rows and then along the columns to produce four subbands: low-resolution, horizontal, vertical, and diagonal. (The vertical subband is created by applying a horizontal high-pass filter, which yields vertical edges.) At each level, the wavelet transform can be reapplied to the low-resolution subband to further decorrelate the image. Fig. 5 illustrates the image decomposition, defining the level and subband conventions. The final configuration contains a small low-resolution subband. In addition to the various transform levels, the phrase "level 0" is used to refer to the original image data. When the user requests zero levels of transform, the original image data (level 0) is treated as a low-pass band and processing follows its natural flow.
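
Building on the one-level routine sketched in Section 2 (and reusing its hypothetical Image type and dwt2dOneLevel helper, which are our assumptions rather than the paper's code), the iterative multi-level decomposition described above can be sketched as follows; passing levels = 0 leaves the original level-0 data untouched, matching the zero-levels case.

// Multi-level 2-D DWT: at each level only the current low-resolution
// (LL) quadrant, the top-left activeRows x activeCols block, is
// decomposed again, producing the configuration of Fig. 4.
void dwt2dMultiLevel(Image& img, int levels) {
    std::size_t activeRows = img.size(), activeCols = img[0].size();
    for (int l = 0; l < levels; ++l) {
        Image ll(activeRows, std::vector<double>(activeCols));
        for (std::size_t r = 0; r < activeRows; ++r)      // copy LL out
            for (std::size_t c = 0; c < activeCols; ++c)
                ll[r][c] = img[r][c];
        dwt2dOneLevel(ll);                                // transform it
        for (std::size_t r = 0; r < activeRows; ++r)      // write it back
            for (std::size_t c = 0; c < activeCols; ++c)
                img[r][c] = ll[r][c];
        activeRows /= 2;                                  // next level sees
        activeCols /= 2;                                  // only the new LL
    }
}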

Figure 3: Wavelet Coefficient Representation

2.3 2-D Transform Hierarchy


The 1-D wavelet transform can be extended to a two-dimensional (2-D) wavelet transform using separable wavelet filters. With separable filters, the 2-D transform can be computed by applying a 1-D transform to all the rows of the input and then repeating it on all of the columns.

2.4 Discrete Wavelet Transform


Figure 5: Image decomposition using wavelets, showing the low-resolution subband and the horizontal (LH), vertical (HL), and diagonal (HH) subbands at each transform level
The wavelet transform is first performed on each source image, and a fusion decision map is then generated based on a set of fusion rules. The fused wavelet coefficient map can be constructed from the wavelet coefficients of the source images according to the fusion decision map. Finally, the fused image is obtained by performing the inverse wavelet transform. From the diagrams above, we can see that the fusion rules play a very important role in the fusion process.

3. RESULTS
The image fusion process was implemented on an FPGA by writing SystemC code for the DWT-based fusion method.

Figure 6: Fused images of the CT and MRI scans

Figure 6 shows the DWT levels and the fused result for the CT and MRI scan images, captured from the Xilinx Platform Studio tool. The registered source images, a CT scan (Fig. 6a) and an MRI scan (Fig. 6c), are fused by considering the pixel coefficients of both images using the DWT; the DWT results for the CT and MRI scans are shown in Figs. 6b and 6d. Fusion is performed on the low-resolution subbands of the decomposed CT and MRI images, and the inverse DWT is then applied to obtain the final fused image (Fig. 6e). The design is implemented on a Spartan-3 kit using the Xilinx ISE tool, shown in Figure 7.

Figure 7: Spartan-3 kit

A new approach to 3-D image fusion using wavelet transforms has also been proposed, in which several known 2-D DWT fusion schemes are extended to handle 3-D images. Wavelet transform fusion diagrams have been introduced as a convenient tool to visually describe different image fusion schemes. A very important advantage of 3-D DWT image fusion over alternative image fusion algorithms is that it can be combined with other 3-D image processing algorithms working in the wavelet domain.
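
A behavioural software model of the flow in Figure 2 might look as follows in C++. This is only a sketch under our earlier assumptions (Haar filters, the Image type, and the haar1d and dwt2dOneLevel helpers from Section 2), not the authors' SystemC/MicroBlaze implementation, and the choose-max rule for the detail coefficients is likewise an assumed stand-in for the paper's fusion rules.

#include <cmath>

// Inverse of haar1d: rebuild each even/odd pair from (average, difference).
static std::vector<double> ihaar1d(const std::vector<double>& y) {
    const std::size_t half = y.size() / 2;
    std::vector<double> x(y.size());
    for (std::size_t i = 0; i < half; ++i) {
        x[2 * i]     = y[i] + y[half + i];
        x[2 * i + 1] = y[i] - y[half + i];
    }
    return x;
}

// Inverse 2-D transform: undo the vertical pass, then the horizontal pass.
void idwt2dOneLevel(Image& img) {
    const std::size_t rows = img.size(), cols = img[0].size();
    std::vector<double> column(rows);
    for (std::size_t c = 0; c < cols; ++c) {
        for (std::size_t r = 0; r < rows; ++r) column[r] = img[r][c];
        column = ihaar1d(column);
        for (std::size_t r = 0; r < rows; ++r) img[r][c] = column[r];
    }
    for (std::size_t r = 0; r < rows; ++r)
        img[r] = ihaar1d(img[r]);
}

// One-level fusion of two registered images: decompose both, average the
// low-resolution (LL) quadrant, keep the larger-magnitude coefficient in
// the detail subbands, then invert the transform.
Image fuseOneLevel(Image ct, Image mri) {
    dwt2dOneLevel(ct);
    dwt2dOneLevel(mri);
    const std::size_t rows = ct.size(), cols = ct[0].size();
    for (std::size_t r = 0; r < rows; ++r)
        for (std::size_t c = 0; c < cols; ++c) {
            const bool inLL = r < rows / 2 && c < cols / 2;
            if (inLL)
                ct[r][c] = 0.5 * (ct[r][c] + mri[r][c]);   // smooth base
            else if (std::fabs(mri[r][c]) > std::fabs(ct[r][c]))
                ct[r][c] = mri[r][c];                      // salient detail
        }
    idwt2dOneLevel(ct);          // fused coefficient map -> fused image
    return ct;
}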

4. CONCLUSION
In order to evaluate the results and compare these methods, two quantitative assessment criteria, information entropy and root mean square error (RMSE), were employed. Experimental results indicated no considerable difference in performance between the two methods. The fusion has been implemented for medical images and remote-sensing images. It is hoped that the techniques can be extended to color images and to the fusion of multiple sensor images under memory constraints.
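
For reference, the two assessment criteria named above can be computed as in the following C++ sketch; the 8-bit grey-level representation and the 256-bin histogram are our assumptions, and the RMSE here is taken against a reference image of the same size.

#include <cmath>
#include <cstdint>
#include <vector>

// Information entropy of an 8-bit image: -sum p(g) log2 p(g) over the
// 256-bin grey-level histogram; higher entropy means more information.
double entropy(const std::vector<std::uint8_t>& img) {
    double hist[256] = {0.0};
    for (std::uint8_t g : img) hist[g] += 1.0;
    double h = 0.0;
    for (double count : hist)
        if (count > 0.0) {
            const double p = count / static_cast<double>(img.size());
            h -= p * std::log2(p);
        }
    return h;
}

// Root mean square error between the fused image and a reference image;
// lower RMSE means a closer match.
double rmse(const std::vector<std::uint8_t>& fused,
            const std::vector<std::uint8_t>& ref) {
    double sum = 0.0;
    for (std::size_t i = 0; i < fused.size(); ++i) {
        const double d = static_cast<double>(fused[i]) - static_cast<double>(ref[i]);
        sum += d * d;
    }
    return std::sqrt(sum / static_cast<double>(fused.size()));
}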




5. REFERENCES
[1] A. Goshtaby and S. Nikolov, "Image fusion: Advances in the state of the art," Inf. Fusion, vol. 8, no. 2, pp. 114-118, Apr. 2007.

[2] V. Tsagaris and V. Anastassopoulos, "Multispectral image fusion for improved RGB representation based on perceptual attributes," Int. J. Remote Sens., vol. 26, no. 15, pp. 3241-3254, Aug. 2005.

[3] J. Tyo, A. Konsolakis, D. Diersen, and R. C. Olsen, "Principal-components-based display strategy for spectral imagery," IEEE Trans. Geosci. Remote Sens., vol. 41, no. 3, pp. 708-718, Mar. 2003.

[4] W. Zhang and J. Kang, "QuickBird panchromatic and multi-spectral image fusion using wavelet packet transform," in Lecture Notes in Control and Information Sciences, vol. 344. Berlin, Germany: Springer-Verlag, 2006, pp. 976-981.

[5] V. Shah, N. Younan, and R. King, "An efficient pan-sharpening method via a combined adaptive PCA approach and contourlets," IEEE Trans. Geosci. Remote Sens., vol. 46, no. 5, pp. 1323-1335, May 2008.

[6] K. Kotwal and S. Chaudhuri, "Visualization of hyperspectral images using bilateral filtering," IEEE Trans. Geosci. Remote Sens., vol. 48, no. 5, pp. 2308-2316, May 2010.

[7] Q. Du, N. Raksuntorn, S. Cai, and R. J. Moorhead, "Color display for hyperspectral imagery," IEEE Trans. Geosci. Remote Sens., vol. 46, no. 6, pp. 1858-1866, Jun. 2008.

[8] S. Cai, Q. Du, and R. J. Moorhead, "Feature-driven multilayer visualization for remotely sensed hyperspectral imagery," IEEE Trans. Geosci. Remote Sens., vol. 48, no. 9, pp. 3471-3481, Sep. 2010.

