
Evaluation of Pan-Sharpening Methods

Melissa Strait, Sheida Rahmani, Daria Markurjev


Faculty Advisor: Todd Wittman
UCLA Department of Mathematics
August 2008

Abstract

Pan-sharpening combines a low-resolution color multispectral image with a high-resolution grayscale panchromatic image to create a high-resolution fused color image. In this paper we examine five different pan-sharpening methods: IHS, PCA, Wavelet fusion, P+XS, and VWP, and evaluate their effectiveness. Additionally, we propose an extension to the IHS pan-sharpening method to improve the resulting spectral quality. In order to compare the methods we evaluate spatial and spectral quality by relying on both visual inspection and metric performance data. Our results indicate that VWP is most effective in preserving spectral data, while the IHS methods produce images with the best spatial quality.

Introduction

The launch of high-resolution satellites used for remote sensing has created a need for the development of efficient and accurate image fusion methods. These satellites are commonly capable of producing two different types of images: a low-resolution multispectral image and a high-resolution panchromatic image. The multispectral sensor provides multi-band images with accurate color data but low spatial resolution. Conversely, the panchromatic sensor yields grayscale images with high spatial resolution but imprecise color data. A number of applications in remote sensing require images with both high spatial and spectral resolution. The fusion of the multispectral and panchromatic images, or pan-sharpening, provides a solution by combining the clear geometric features of the panchromatic image with the color information of the multispectral image.

In this project we will examine different pan-sharpening techniques and explore various metrics that can be used to judge the quality of the fused image. We will be working with images from QuickBird, a commercial satellite launched in 2001 that offers high-resolution imagery of Earth. The QuickBird satellite has four bands: red, green, blue, and infrared. The most common pan-sharpening techniques are the Intensity-Hue-Saturation technique (IHS), the wavelet method, and principal component analysis (PCA). We also compare these three methods to more advanced pan-sharpening methods such as P+XS and Variational Wavelet Pan-sharpening (VWP).

IHS Pan-Sharpening Technique

IHS (Intensity-Hue-Saturation) is the most common image fusion technique for remote sensing applications and is used in commercial pan-sharpening software. This technique converts a color image from RGB space to the IHS color space, where the I (intensity) band is replaced by the panchromatic image. Before fusing the images, the multispectral and panchromatic images are histogram matched. The image is converted to IHS color space using the following linear transformation, where $v_1$ and $v_2$ are intermediate spectral components:

\[
\begin{pmatrix} I \\ v_1 \\ v_2 \end{pmatrix}
=
\begin{pmatrix}
\tfrac{1}{3} & \tfrac{1}{3} & \tfrac{1}{3} \\
-\tfrac{\sqrt{2}}{6} & -\tfrac{\sqrt{2}}{6} & \tfrac{2\sqrt{2}}{6} \\
\tfrac{1}{\sqrt{2}} & -\tfrac{1}{\sqrt{2}} & 0
\end{pmatrix}
\begin{pmatrix} R \\ G \\ B \end{pmatrix}
\]

Therefore the entire fusion process can be expressed mathematically as

\[
\begin{pmatrix} F(R) \\ F(G) \\ F(B) \end{pmatrix}
=
\begin{pmatrix}
1 & -\tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \\
1 & -\tfrac{1}{\sqrt{2}} & -\tfrac{1}{\sqrt{2}} \\
1 & \sqrt{2} & 0
\end{pmatrix}
\begin{pmatrix} \mathrm{Pan} \\ v_1 \\ v_2 \end{pmatrix}.
\]

This process is equivalent to

\[
\begin{pmatrix} F(R) \\ F(G) \\ F(B) \end{pmatrix}
=
\begin{pmatrix} R + (\mathrm{Pan} - I) \\ G + (\mathrm{Pan} - I) \\ B + (\mathrm{Pan} - I) \end{pmatrix}.
\]

Implementing the IHS fusion method in this manner is very efficient and is called the fast IHS technique (FIHS) (Tu et al., 2001), making IHS ideal for the large volumes of data produced by satellites.
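To make the fast IHS step concrete, here is a minimal NumPy sketch of the fusion; it assumes the multispectral image has already been upsampled to the panchromatic grid and histogram matched (the function and variable names are our own conventions, not from any particular implementation):

```python
import numpy as np

def fast_ihs(ms, pan):
    """Fast IHS (FIHS) fusion: F(band) = band + (Pan - I).

    ms  : (H, W, 3) float array, RGB multispectral image upsampled
          to the panchromatic grid and histogram matched to Pan
    pan : (H, W) float array, panchromatic image
    """
    intensity = ms.mean(axis=2)        # I = (R + G + B) / 3
    detail = pan - intensity           # spatial detail to inject
    return ms + detail[:, :, None]     # add the same detail to every band
```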

Ideally the fused image would have a higher resolution and sharper edges than the original color image without additional changes to the spectral data. However, because the panchromatic image was not created from the same wavelengths of light as the RGB image, this technique produces a fused image with some color distortion relative to the original multispectral image (Choi, 2008). This problem becomes worse the more the intensity band and the panchromatic image differ. There have been various modifications to the FIHS method in an attempt to fix this problem.

One of the first modifications of the FIHS method extends the IHS method from three bands to four by incorporating an infrared component (Tu et al., 2004). Because the panchromatic image sensors pick up infrared light (IR) in addition to visible wavelengths, this modification allows the calculated intensity of the multispectral image to better match the panchromatic image, thus causing less color distortion in the final fused image.

A similar modification called the FIHS-SA method uses four bands but incorporates weighting coefficients on the green and blue bands in an attempt to minimize the difference between I and the panchromatic image. These weighting coefficients were calculated experimentally from fused IKONOS images (Tu et al., 2004). In 2008 Choi expanded on this work and experimentally determined coefficients for the red and infrared bands as well for IKONOS images; the green and blue band coefficients were taken from the 2004 paper by Tu. Since these coefficients were calculated using IKONOS data, these parameters are not ideal for fusing QuickBird images.

In order to minimize spectral distortion in the IHS pan-sharpened image, we propose a new modification of IHS that varies the manner in which the intensity band is calculated depending on the initial multispectral and panchromatic images. To minimize spectral distortion, the intensity band should approximate the panchromatic image as closely as possible. Therefore, in this Adaptive IHS method we want to determine the non-negative coefficients $\alpha_i$ that best approximate

\[
I = \alpha_1 M_1 + \alpha_2 M_2 + \alpha_3 M_3 + \alpha_4 M_4 \approx \mathrm{Pan}.
\]

In order to calculate these coefficients we create the following function $F$ to minimize with respect to the $\alpha_i$:

\[
F(\alpha) = \sum_{x} \Big( \sum_{i} \alpha_i M_i(x) - P(x) \Big)^2 + \lambda \sum_{i} \big( \max(0, -\alpha_i) \big)^2 .
\]


The first term ensures that the coefficients yield a linear combination that approximates the panchromatic image. The second term enforces the non-negativity constraint on the $\alpha_i$ using the Lagrange multiplier $\lambda$. In order to solve this minimization problem we use a semi-implicit gradient descent method, to which Michael Moeller contributed.
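As an illustration of the minimization, the sketch below fits the coefficients with plain explicit gradient descent rather than the semi-implicit scheme used here; the step size, penalty weight, initial guess, and normalization by the pixel count are illustrative assumptions:

```python
import numpy as np

def adaptive_ihs_weights(ms, pan, lam=1e3, step=0.1, iters=2000):
    """Fit non-negative weights a so that sum_i a_i * M_i approximates Pan.

    ms  : (H, W, B) multispectral bands, assumed scaled to [0, 1]
    pan : (H, W) panchromatic image, assumed scaled to [0, 1]
    Minimizes F(a) = sum_x (sum_i a_i M_i(x) - P(x))^2
                   + lam * sum_i max(0, -a_i)^2
    with explicit gradient descent, normalized by the pixel count.
    """
    B = ms.shape[2]
    M = ms.reshape(-1, B).astype(float)   # one row per pixel
    p = pan.ravel().astype(float)
    a = np.full(B, 1.0 / B)               # start from the plain band average
    for _ in range(iters):
        r = M @ a - p                     # per-pixel residual of the fit term
        grad = 2.0 * (M.T @ r) / p.size   # gradient of the data-fit term
        grad -= 2.0 * lam * np.maximum(0.0, -a) / p.size  # penalty gradient
        a -= step * grad
    return a
```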

PCA (Principal Component Analysis)

In the PCA-based method, the PCA transform converts the intercorrelated multispectral bands into a new set of uncorrelated components. The first principal component, which has the highest variance, is assumed to contain the most information from the original image and is therefore the ideal component to be replaced by the high spatial resolution panchromatic image (Shah, 2008). All the other components are left unaltered. An inverse PCA transform is then performed on the modified set of components to obtain a high-resolution pan-sharpened image.
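A minimal sketch of the PCA substitution is given below; it matches the panchromatic image to the first principal component by mean and standard deviation, a simplification of full histogram matching:

```python
import numpy as np

def pca_pansharpen(ms, pan):
    """PCA fusion: substitute the panchromatic image for the first PC.

    ms  : (H, W, B) multispectral image upsampled to the Pan grid
    pan : (H, W) panchromatic image
    """
    H, W, B = ms.shape
    X = ms.reshape(-1, B).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = np.cov(Xc, rowvar=False)            # B x B band covariance
    vals, vecs = np.linalg.eigh(cov)
    vecs = vecs[:, np.argsort(vals)[::-1]]    # sort PCs by decreasing variance
    pcs = Xc @ vecs                           # forward PCA transform
    p = pan.ravel().astype(float)
    pc1 = pcs[:, 0]
    # Match Pan to the first PC's mean and spread (simplified histogram
    # match), then substitute it for the first component.
    pcs[:, 0] = (p - p.mean()) / p.std() * pc1.std() + pc1.mean()
    fused = pcs @ vecs.T + mean               # inverse PCA transform
    return fused.reshape(H, W, B)
```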

Wavelet

The wavelet fusion method is based on the wavelet decomposition of images into different components according to their local frequency content. We perform the Discrete Wavelet Transform (DWT) on the multispectral and panchromatic images to extract the low-frequency data from the multispectral image and the high-frequency data from the panchromatic image. These components are combined to create the fused wavelet coefficient map, and the inverse wavelet transform is performed on the fused map to create the final pan-sharpened image. Below is a visual representation of the wavelet method.

[Figure: wavelet fusion pipeline. The Pan and MS images are decomposed by the DWT, their coefficients are merged into the fused wavelet coefficient map, and the inverse DWT produces the pan-sharpened image.]
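The coefficient substitution can be sketched as follows, assuming the PyWavelets package (pywt) and image dimensions divisible by 2**level; the choice of wavelet and decomposition level is illustrative:

```python
import numpy as np
import pywt

def wavelet_pansharpen(ms, pan, wavelet="db4", level=2):
    """Wavelet fusion: MS approximation coefficients + Pan detail coefficients.

    ms  : (H, W, B) multispectral image upsampled to the Pan grid
    pan : (H, W) histogram-matched panchromatic image
    Assumes H and W are divisible by 2**level.
    """
    pan_coeffs = pywt.wavedec2(pan.astype(float), wavelet, level=level)
    fused = np.empty(ms.shape, dtype=float)
    for b in range(ms.shape[2]):
        band_coeffs = pywt.wavedec2(ms[:, :, b].astype(float),
                                    wavelet, level=level)
        # Low frequencies (approximation) from MS, high frequencies from Pan.
        merged = [band_coeffs[0]] + list(pan_coeffs[1:])
        fused[:, :, b] = pywt.waverec2(merged, wavelet)
    return fused
```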
P+XS

P+XS is a variational method that calculates the pan-sharpened image by minimizing an energy functional (Ballester, 2006). It incorporates the edge information of the panchromatic image through its gradient, and the spectral information is obtained by approximating the panchromatic image as a linear combination of the multispectral bands.

VWP (Variational Wavelet Pan-sharpening)

The pan-sharpening method VWP combines the Wavelet and P+XS methods. It uses the wavelet coefficients in order to obtain higher spectral quality and uses the energy functional of P+XS to produce clear edges, so it explicitly preserves spectral quality better. Also, unlike P+XS, VWP does not approximate the panchromatic image as a linear combination of the multispectral bands. This is beneficial because it does not limit the method to four-band images (Moeller, 2008).

Metric Performance Evaluation

There are many different ways to analyze pan-sharpened images and compare different methods. When comparing methods, we are interested in both spatial and spectral quality. Spatial quality is relatively easy to judge from the sharpness of the edges, but spectral quality is much harder to assess, since it is difficult to match the colors of the final result to the original multispectral image by visual inspection. There are many metrics that analyze spectral quality. The relative dimensionless global error in synthesis (ERGAS) measures the amount of spectral distortion in the image (Wald, 2000). The formula for ERGAS is given by

\[
\mathrm{ERGAS} = 100 \, \frac{h}{l} \sqrt{ \frac{1}{N} \sum_{n=1}^{N} \left( \frac{\mathrm{RMSE}(n)}{\mu(n)} \right)^2 }
\]

where $h/l$ is the ratio between the pixel sizes of the Pan and MS images, $\mu(n)$ is the mean of the $n$th band, and $N$ is the number of bands.
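A direct NumPy transcription of the ERGAS formula might look as follows (the ratio argument is the pixel-size ratio $h/l$, e.g. 1/4 for QuickBird):

```python
import numpy as np

def ergas(ms, fused, ratio):
    """ERGAS as defined above.

    ms, fused : (H, W, N) reference MS and fused images
    ratio     : h / l, the Pan-to-MS pixel size ratio (e.g. 1/4 for QuickBird)
    """
    N = ms.shape[2]
    terms = []
    for n in range(N):
        rmse_n = np.sqrt(np.mean((ms[:, :, n].astype(float)
                                  - fused[:, :, n].astype(float)) ** 2))
        terms.append((rmse_n / ms[:, :, n].mean()) ** 2)
    return 100.0 * ratio * np.sqrt(np.mean(terms))
```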
The Spectral Angle Mapper (SAM) compares each pixel in the image with every endmember for each class and assigns a value between 0 (low resemblance) and 1 (high resemblance) (Goetz, 1992). The formula for SAM at a specific pixel is given by

\[
\cos \theta = \frac{\sum_{i=1}^{N} A_i B_i}{\sqrt{\sum_{i=1}^{N} A_i^2} \; \sqrt{\sum_{i=1}^{N} B_i^2}}
\]

Here, $N$ is the number of bands, and $A = (A_1, A_2, A_3, \ldots, A_N)$ and $B = (B_1, B_2, B_3, \ldots, B_N)$ are two spectral vectors taken at the same location in the multispectral image and fused image, respectively. $\theta$ is the spectral angle at a specific pixel; to compute the SAM for the entire image, we take the average of all $\theta$ values.
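The per-pixel angle can be computed in vectorized form; the sketch below returns the mean angle in radians (the reporting unit is our assumption):

```python
import numpy as np

def sam(ms, fused, eps=1e-12):
    """Mean spectral angle between corresponding pixel spectra, in radians."""
    A = ms.reshape(-1, ms.shape[2]).astype(float)
    B = fused.reshape(-1, fused.shape[2]).astype(float)
    cos = np.sum(A * B, axis=1) / (
        np.linalg.norm(A, axis=1) * np.linalg.norm(B, axis=1) + eps)
    theta = np.arccos(np.clip(cos, -1.0, 1.0))  # clip guards rounding error
    return theta.mean()
```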

Spectral Information Divergence (SID) is derived from the concept of divergence in information theory and can be used to describe the statistics of a spectrum. It views each pixel spectrum as a random variable and measures the discrepancy of probabilistic behaviors between spectra (Chang, 1999). To compute SID, we have the vector $x = (x_1, \ldots, x_N)^T$, which is taken from the multispectral image, and $y = (y_1, \ldots, y_N)^T$, which is a vector from the final fused image. The range of the $x_i$ and $y_i$ needs to be within $[0,1]$, and we define this by

\[
p_j = \frac{x_j}{\sum_{i=1}^{N} x_i}, \qquad q_j = \frac{y_j}{\sum_{i=1}^{N} y_i},
\]

with $N$ being the number of bands. We define SID by

\[
\mathrm{SID}(x, y) = D(x \,\|\, y) + D(y \,\|\, x)
\]

where $D(x \,\|\, y) = \sum_{i=1}^{N} p_i \log(p_i / q_i)$, and similarly for $D(y \,\|\, x)$; $D$ is called the relative entropy.
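A sketch of SID under the definitions above; the small epsilon guarding against zero spectra is our own numerical safeguard:

```python
import numpy as np

def sid(ms, fused, eps=1e-12):
    """Mean Spectral Information Divergence over all pixels."""
    A = ms.reshape(-1, ms.shape[2]).astype(float) + eps
    B = fused.reshape(-1, fused.shape[2]).astype(float) + eps
    p = A / A.sum(axis=1, keepdims=True)   # per-pixel spectrum as distribution
    q = B / B.sum(axis=1, keepdims=True)
    d_pq = np.sum(p * np.log(p / q), axis=1)  # D(x || y)
    d_qp = np.sum(q * np.log(q / p), axis=1)  # D(y || x)
    return (d_pq + d_qp).mean()
```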

A Universal Image Quality Index (Q-average) models any distortion as a combination of three different factors: loss of correlation, luminance distortion, and contrast distortion (Bovik, 2002).

\[
Q = \frac{4 \sigma_{xy} \, \bar{x} \, \bar{y}}{(\sigma_x^2 + \sigma_y^2)\big[ (\bar{x})^2 + (\bar{y})^2 \big]}
\]

For the above formula, let $x = \{x_i \mid i = 1, 2, \ldots, N\}$ and $y = \{y_i \mid i = 1, 2, \ldots, N\}$ be the original MS and fused image vectors, respectively. Each component of the formula is defined as follows:

\[
\bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i, \qquad
\bar{y} = \frac{1}{N} \sum_{i=1}^{N} y_i,
\]

\[
\sigma_x^2 = \frac{1}{N-1} \sum_{i=1}^{N} (x_i - \bar{x})^2, \qquad
\sigma_y^2 = \frac{1}{N-1} \sum_{i=1}^{N} (y_i - \bar{y})^2, \qquad
\sigma_{xy} = \frac{1}{N-1} \sum_{i=1}^{N} (x_i - \bar{x})(y_i - \bar{y}).
\]
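The global Q index for a single band can be transcribed directly; how the per-band values are aggregated into Q-average is not spelled out above, so averaging them over the bands is an assumption here:

```python
import numpy as np

def q_index(x, y):
    """Universal Image Quality Index for two single-band images."""
    x = x.ravel().astype(float)
    y = y.ravel().astype(float)
    xm, ym = x.mean(), y.mean()
    sx2 = x.var(ddof=1)                  # 1/(N-1) normalization, as above
    sy2 = y.var(ddof=1)
    sxy = np.sum((x - xm) * (y - ym)) / (x.size - 1)
    return 4 * sxy * xm * ym / ((sx2 + sy2) * (xm ** 2 + ym ** 2))
```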

The relative average spectral error (RASE) characterizes the average performance of an image fusion method in the spectral bands (Choi, 2006).

\[
\mathrm{RASE} = \frac{100}{M} \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \mathrm{RMSE}^2(B_i) }
\]

In the formula for RASE, $M$ is the mean radiance of the $N$ spectral bands $B_i$ of the original MS image.
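A direct transcription of RASE, taking $M$ as the mean over all MS bands:

```python
import numpy as np

def rase(ms, fused):
    """Relative average spectral error, per the formula above."""
    N = ms.shape[2]
    M = ms.astype(float).mean()              # mean radiance of the N MS bands
    mse = [np.mean((ms[:, :, i].astype(float)
                    - fused[:, :, i].astype(float)) ** 2) for i in range(N)]
    return 100.0 / M * np.sqrt(np.mean(mse))  # sqrt((1/N) sum RMSE^2(B_i))
```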

We also used the root mean squared error (RMSE) and the correlation coefficient (CC) to analyze and compare the spectral quality. The CC between the original MS image and the final fused image is defined as

\[
CC(A, B) = \frac{\sum_{mn} (A_{mn} - \bar{A})(B_{mn} - \bar{B})}{\sqrt{\big( \sum_{mn} (A_{mn} - \bar{A})^2 \big) \big( \sum_{mn} (B_{mn} - \bar{B})^2 \big)}}
\]

where $\bar{A}$ and $\bar{B}$ stand for the mean values of the corresponding data sets, and CC is calculated globally for the entire image. The formula for RMSE is

\[
\mathrm{RMSE} = \sqrt{ \frac{\sum_{x} \sum_{i} \big( A_i(x) - F_i(x) \big)^2}{n \, m \, d} }
\]

In this formula, $x$ is the pixel and $i$ is the band number; $n$ is the number of rows, $m$ is the number of columns, and $d$ is the number of bands. We used all the metrics stated above to determine which pan-sharpening method performs best spectrally.
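Both remaining spectral metrics are short; the sketch below computes RMSE over all pixels and bands and the global CC between two images:

```python
import numpy as np

def rmse(ms, fused):
    """RMSE over all n*m*d values of the two images."""
    diff = ms.astype(float) - fused.astype(float)
    return np.sqrt(np.mean(diff ** 2))

def cc(a, b):
    """Global correlation coefficient between two images."""
    a = a.astype(float)
    b = b.astype(float)
    a -= a.mean()
    b -= b.mean()
    return np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))
```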

To judge the spatial quality of the pan-sharpened images, we compare the high-frequency data from the panchromatic image to the high-frequency data from each band of the fused image, using a method proposed by Zhou in 2004. To extract the high-frequency data we apply the following convolution mask to the images:

\[
\mathrm{mask} = \begin{pmatrix} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{pmatrix}.
\]

We compare the resulting filtered images by considering the correlation coefficients between each band and the panchromatic image. The closer the average correlation coefficient is to one, the more closely the edge data of the fused image matches the edge data of the panchromatic image, indicating better spatial quality.
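The spatial metric can be sketched as follows, using the Laplacian mask above and SciPy's convolve2d; the border handling (mode="same" with zero padding) is an assumption:

```python
import numpy as np
from scipy.signal import convolve2d

LAPLACIAN = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=float)

def _corr(a, b):
    """Global correlation coefficient of two filtered images."""
    a = a - a.mean()
    b = b - b.mean()
    return np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))

def spatial_quality(fused, pan):
    """Average high-frequency correlation between each fused band and Pan."""
    pan_hp = convolve2d(pan.astype(float), LAPLACIAN, mode="same")
    scores = [_corr(convolve2d(fused[:, :, b].astype(float),
                               LAPLACIAN, mode="same"), pan_hp)
              for b in range(fused.shape[2])]
    return float(np.mean(scores))
```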

Analysis of the Results

As mentioned before, spatial quality is relatively easy to judge just by looking at the images. For example, in Image 4, IHS and PCA demonstrate clear edges, whereas Wavelet exhibits what is called a stair-casing effect; the same pattern holds for the other images. To be more precise, we also used the spatial metric described above to evaluate the images. The results of the spatial metric confirm our prediction that IHS and PCA have the highest spatial quality, but they are misleading when evaluating P+XS and Wavelet: in Table 4, P+XS has a lower spatial value than Wavelet, yet P+XS visually looks better, which points to a discrepancy in this metric. Overall, Table 7, which averages the results over the six images, makes it clear that IHS and PCA perform best spatially.

The spectral quality was more difficult to judge visually; therefore, we used many metrics to evaluate the results. In all the fused images, IHS and PCA have the highest color distortion, which is due to overusing the panchromatic image; the colors look very different from the original MS. It is difficult to say which of the other images match the MS better. For example, in Image 4 one can see that the color of the swimming pool is very different in IHS and PCA, but it is hard to conclude which of the other fused images has the best spectral quality. P+XS and VWP seem to have the least spectral distortion, but it is difficult to confirm this visually. We therefore ran the metrics on all the images and took the average of the results. Looking at Table 7, we can conclude from the metrics that VWP performs best spectrally. Comparing the three IHS methods only, the metrics indicate that the original IHS performs best spatially, whereas the Adaptive IHS performs best spectrally.

In conclusion, the VWP performs best spectrally and the IHS performs best spatially overall. There is always a tradeoff between spectral and spatial quality; because of this, the choice of method can depend on how the fused image will be used. Also, given our metric results, we concluded that among the three different IHS methods, the Adaptive IHS performs best spectrally.
[Images 1-6: fusion results. Each figure shows the original Multispectral and Panchromatic images together with the IHS, IKONOS IHS, Adaptive IHS, Wavelet, PCA, P+XS, and VWP fused results.]
Table 1. Image 1

CC ERGAS Qave RASE RMSE SAM SID Spatial


Reference Value 0 0 1 0 0 0 0 1
IHS 0.1050 3.9794 0.7401 14.4064 59.3136 1.9123 0.0070 0.9814
IKONOS IHS 0.0209 3.2221 0.9926 12.2960 50.6248 1.8393 0.0240 0.9885
Adaptive IHS 0.0397 2.7031 0.9904 10.8759 44.7780 1.0605 0.0122 0.9901
Wavelet 0.0378 2.6478 0.9878 10.0361 41.3202 2.1397 0.0246 0.8015
PCA 0.0245 2.9323 0.9800 11.7332 48.3075 2.2854 0.0146 0.9797
P+XS 0.0567 2.5124 0.9746 10.0434 41.3503 2.4978 0.0161 0.7768
VWP 0.0231 1.7669 0.9892 7.2208 29.7291 1.5062 0.0046 0.8366

Table 2. Image 2

CC ERGAS Qave RASE RMSE SAM SID Spatial


Reference Value 0 0 1 0 0 0 0 1
IHS 0.0888 5.5938 0.7401 21.8393 24.5873 1.8430 0.0236 0.9951
IKONOS IHS 0.1991 3.9677 0.9929 15.1493 17.0556 1.7772 0.0223 0.9871
Adaptive IHS 0.0344 3.9046 0.9913 15.0835 16.9815 1.3599 0.0064 0.9931
Wavelet 0.1144 9.2674 0.9799 35.4807 39.9453 8.1552 0.0568 0.7211
PCA 0.0273 3.2483 0.9936 11.7661 13.2467 2.6966 0.0207 0.9294
P+XS 0.0636 4.3002 0.9437 16.7682 18.8781 4.6519 0.0069 0.5119
VWP 0.0417 3.3402 0.9763 13.1019 14.7505 2.7893 0.0039 0.7307

Table 3. Image 3

CC ERGAS Qave RASE RMSE SAM SID Spatial


Reference Value 0 0 1 0 0 0 0 1
IHS 0.0072 2.7768 0.7482 10.8973 13.3846 0.8368 0.0117 0.9824
IKONOS IHS 0.0150 2.7813 0.9978 9.7105 11.9269 0.9712 0.0310 0.9852
Adaptive IHS 0.0023 2.4743 0.9975 8.7866 10.7911 0.7735 0.0023 0.9808
Wavelet 0.0177 6.1855 0.9900 21.7328 26.6933 4.5524 0.2235 0.6926
PCA 0.0042 2.7739 0.9966 9.3048 11.4286 1.1435 0.0372 0.9733
P+XS 0.0214 2.9835 0.9798 10.5126 12.9121 3.4178 0.1946 0.4490
VWP 0.0063 2.2055 0.9918 7.9164 9.7233 2.0949 0.0063 0.6110
Table 4. Image 4

CC ERGAS Qave RASE RMSE SAM SID Spatial


Reference Value 0 0 1 0 0 0 0 1
IHS 0.0352 6.8427 0.7144 23.8951 94.8065 5.1496 0.0252 0.9920
IKONOS IHS 0.0449 3.3841 0.9917 12.6752 50.2902 2.0914 0.0132 0.9815
Adaptive IHS 0.0503 2.5706 0.9905 10.1690 40.3468 1.1522 0.0051 0.9858
Wavelet 0.0529 2.9924 0.9811 11.0957 44.0234 3.1209 0.0457 0.7561
PCA 0.1254 3.4316 0.9759 13.6625 54.2074 2.7107 0.0165 0.9874
P+XS 0.0790 2.4319 0.9764 9.6073 38.1181 2.6137 0.0080 0.8251
VWP 0.0280 1.7295 0.9912 7.0458 27.9552 1.3646 0.0015 0.8705

Table 5. Image 5

CC ERGAS Qave RASE RMSE SAM SID Spatial


Reference Value 0 0 1 0 0 0 0 1
IHS 0.0418 7.4936 0.7079 26.3325 103.380 6.8840 0.0211 0.9897
IKONOS IHS 0.0196 3.9190 0.9900 14.7463 57.8934 2.3543 0.0098 0.9862
Adaptive IHS 0.0259 2.7180 0.9910 10.6826 41.9396 1.2363 0.0021 0.9809
Wavelet 0.0294 3.0976 0.9826 11.6112 45.5852 2.9029 0.0314 0.7836
PCA 0.1086 4.7487 0.9635 18.6189 73.0973 3.6021 0.0082 0.9917
P+XS 0.0537 2.8104 0.9652 11.0205 43.2660 3.1751 0.0066 0.7576
VWP 0.0196 1.8833 0.9869 7.6152 29.8969 1.7611 0.0015 0.8377

Table 6. Image 6

CC ERGAS Qave RASE RMSE SAM SID Spatial


Reference Value 0 0 1 0 0 0 0 1
IHS 0.0558 5.6599 0.7079 19.9838 73.1578 2.7761 0.0224 0.9962
IKONOS IHS 0.1172 4.5449 0.9900 17.1388 62.7427 2.4835 0.0191 0.9918
Adaptive IHS 0.1024 3.4978 0.9768 14.0740 51.5229 1.7482 0.0060 0.9887
Wavelet 0.1125 4.0494 0.9826 15.1672 55.5250 3.2389 0.0404 0.7766
PCA 0.1283 4.4673 0.9635 16.8739 61.7729 3.7328 0.0184 0.9789
P+XS 0.1257 3.4060 0.9613 13.5324 49.5401 3.1916 0.0144 0.8531
VWP 0.0538 2.3775 0.9833 9.7017 35.5164 1.9511 0.0031 0.8654
Table 7. Average Values

CC ERGAS Qave RASE RMSE SAM SID Spatial


Reference Value 0 0 1 0 0 0 0 1
IHS 0.0556 5.3911 0.7302 19.5590 61.4384 3.2336 0.0175 0.9895
IKONOS IHS 0.0694 3.6365 0.9919 13.6194 41.7556 1.9195 0.0162 0.9867
Adaptive IHS 0.0425 3.0646 0.9896 11.6119 34.3933 1.2218 0.0057 0.9866
Wavelet 0.0608 4.7067 0.9805 17.5206 42.1821 4.0183 0.0666 0.7553
PCA 0.0697 3.6004 0.9753 13.6599 43.6767 2.6952 0.0172 0.9734
P+XS 0.0667 3.0741 0.9668 11.9141 34.0108 3.2580 0.0411 0.6956
VWP 0.0287 2.2171 0.9865 8.7670 24.5952 1.9112 0.0035 0.7920
Works Cited

Alparone, Luciano, Lucien Wald, Jocelyn Chanussot, Lori M. Bruce, and Claire Thomas. "Comparison of Pansharpening Algorithms: Outcome of the 2006 GRS-S Data-Fusion Contest." IEEE Transactions on Geoscience and Remote Sensing (2007).

Ballester, Coloma, Vicent Caselles, Laura Igual, and Joan Verdera. "A Variational Model for P+XS Image Fusion." International Journal of Computer Vision (2006): 43-58.

Bovik, Alan C., and Zhou Wang. "A Universal Image Quality Index." IEEE Signal Processing Letters (2002).

Chang, C. "Spectral Information Divergence for Hyperspectral Image Analysis." Proc. Geosci. Remote Sens. Symp. 1 (1999): 509-511.

Choi, M.-J., H.-C. Kim, N. I. Cho, and H. O. Kim. "An Improved Intensity-Hue-Saturation Method for IKONOS Image Fusion." International Journal of Remote Sensing (2008).

Choi, Myungjin. "A New Intensity-Hue-Saturation Fusion Approach to Image Fusion with a Tradeoff Parameter." IEEE Transactions on Geoscience and Remote Sensing 44 (2006): 1672-1682.

Du, Qian, Oguz Gungor, and Jie Shan. Performance Evaluation for Pan-Sharpening Techniques. Department of Electrical and Computer Engineering, Mississippi State University. 24 June 2008.

Eshtehardi, A., H. Ebadi, M. J. Valadan Zoej, and A. Mohammadzadeh. Image Fusion of Landsat ETM+ and SPOT Satellite Images Using IHS, Brovey and PCA. Toosi University of Technology, 2007. 24 June 2008 <http://www.isprs2007ist.itu.edu.tr/2.pdf>.

Goetz, A. F. H., J. W. Boardman, and R. H. Yuhas. "Discrimination Among Semi-Arid Landscape Endmembers Using the Spectral Angle Mapper (SAM) Algorithm." Proc. Summaries 3rd Annu. JPL Airborne Geosci. Workshop (1992): 147-149.

González-Audícana, María, Xavier Otazu, Octavi Fors, and Jesús Álvarez-Mozos. "A Low Computational-Cost Method to Fuse IKONOS Images Using the Spectral Response of Its Sensors." IEEE Transactions on Geoscience and Remote Sensing 44 (2006): 1683-1691.

Hyder, A. K., E. Shahbazian, and E. Waltz. "Assessment of Image Fusion Procedures Using Entropy, Image Quality, and Multispectral Classification." Journal of Electronics and Photonics (2007).

Moeller, Michael. UCLA Technical Report, 2008.

Nencini, Filippo, Andrea Garzelli, Stefano Baronti, and Luciano Alparone. "Remote Sensing Image Fusion Using the Curvelet Transform." Information Fusion (2006).

Otazu, Xavier, María González-Audícana, Octavi Fors, and Jorge Núñez. "Introduction of Sensor Spectral Response Into Image Fusion Methods: Application to Wavelet-Based Methods." IEEE Transactions on Geoscience and Remote Sensing 43 (2005): 2376-2385.

Shah, Vijay P., Nicolas H. Younan, and Roger L. King. "An Efficient Pan-Sharpening Method Via a Combined Adaptive PCA Approach and Contourlets." IEEE Transactions on Geoscience and Remote Sensing 46 (2008).

Smith, Lindsay I. "A Tutorial on Principal Component Analysis." Department of Computer Science, University of Otago, 26 Feb. 2002. 24 June 2008.

Tu, T. M., S. C. Su, H. C. Shyu, and P. S. Huang. "A New Look at IHS-Like Image Fusion Methods." Information Fusion 2 (2001): 177-186.

Tu, T. M., P. S. Huang, C. L. Hung, and C. P. Chang. "A Fast Intensity-Hue-Saturation Fusion Technique with Spectral Adjustment for IKONOS Imagery." IEEE Geoscience and Remote Sensing Letters 1 (2004): 309-312.

Vijayaraj, Veeraraghavan, Charles G. O'Hara, and Nicolas H. Younan. "Pansharpening and Image Quality Interface." International Geoscience and Remote Sensing Symposium (2004).

Wald, L. "Quality of High Resolution Synthesized Images: Is There a Simple Criterion?" Proc. Int. Conf. Fusion Earth Data (2000): 99.

Zhang, Y., and G. Hong. "An IHS and Wavelet Integrated Approach to Improve Pan-Sharpening Visual Quality of Natural Colour IKONOS and QuickBird Images." Information Fusion (2005): 225-234.

Zhang, Yun. "Understanding Image Fusion." PCI Geomatics, June 2004. 24 June 2008.
