Evaluation of Pan-Sharpening Methods
Abstract
The launch of high-resolution satellites used for remote sensing has created a need for the
development of efficient and accurate image fusion methods. These satellites are commonly capable of
producing two different types of images: a low-resolution multispectral image and a high-resolution
panchromatic image. The multispectral sensor provides multi-band images with accurate color data with
low spatial resolution. Conversely, the panchromatic sensor yields grayscale images with high spatial
resolution but imprecise color data. There are a number of applications in remote sensing that require
images with both high spatial and spectral resolutions. The fusion of the multispectral and panchromatic
images, or pan-sharpening, provides a solution to this by combining the clear geometric features of the panchromatic image with the color data of the multispectral image.
In this project we will examine different pan-sharpening techniques and explore various metrics
that can be used to judge the image quality of the fused image. We will be working with images from
QuickBird, a commercial satellite launched in 2001 that offers high-resolution imagery of Earth. The
QuickBird satellite has four bands: red, green, blue and infrared. The most common pan-sharpening
techniques are the Intensity-Hue-Saturation Technique (IHS), the wavelet method, and principal
component analysis (PCA). We also compare these three methods to other, more advanced pan-sharpening techniques.
IHS
IHS (Intensity-Hue-Saturation) is the most common image fusion technique for remote sensing
applications and is used in commercial pan-sharpening software. This technique converts a color image
from RGB space to the IHS color space. Here the I (intensity) band is replaced by the panchromatic
image. Before fusing the images, the multispectral and the panchromatic image are histogram matched.
The image is converted to IHS color space using the following linear transformation:
$$
\begin{pmatrix} I \\ v_1 \\ v_2 \end{pmatrix}
=
\begin{pmatrix}
\frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\
-\frac{\sqrt{2}}{6} & -\frac{\sqrt{2}}{6} & \frac{2\sqrt{2}}{6} \\
\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0
\end{pmatrix}
\begin{pmatrix} R \\ G \\ B \end{pmatrix}.
$$

Substituting the panchromatic image for $I$ and applying the inverse transform gives the fused bands

$$
\begin{pmatrix} F(R) \\ F(G) \\ F(B) \end{pmatrix}
=
\begin{pmatrix}
1 & -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\
1 & -\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\
1 & \sqrt{2} & 0
\end{pmatrix}
\begin{pmatrix} Pan \\ v_1 \\ v_2 \end{pmatrix},
$$

which simplifies to

$$
\begin{pmatrix} F(R) \\ F(G) \\ F(B) \end{pmatrix}
=
\begin{pmatrix} R + (Pan - I) \\ G + (Pan - I) \\ B + (Pan - I) \end{pmatrix}.
$$
Implementing the IHS fusion method in this manner is very efficient and is called the fast IHS
technique (FIHS) (Tu et al., 2001), making IHS ideal for the large volumes of data produced by
satellites.
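A minimal NumPy sketch of the FIHS update above (the function name and the use of a plain band average for the intensity are our own choices; a real pipeline would histogram-match Pan to $I$ first):

```python
import numpy as np

def fihs_fuse(ms, pan):
    """Fast IHS (FIHS) fusion sketch.

    ms  : (H, W, B) multispectral array, B >= 3
    pan : (H, W) panchromatic array, assumed histogram-matched to I

    Each fused band is the original band plus the difference between
    the Pan image and the intensity: F(b) = M_b + (Pan - I).
    """
    intensity = ms.mean(axis=2)        # I = average of the bands
    delta = pan - intensity            # spatial detail to inject
    return ms + delta[..., None]       # broadcast the detail over bands
```

Because the update is a single addition per band, the cost is linear in the number of pixels, which is why FIHS scales to full satellite scenes.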
Ideally the fused image would have a higher resolution and sharper edges than the original color
image without additional changes to the spectral data. However, because the panchromatic image was
not created from the same wavelengths of light as the RGB image, this technique produces a fused
image with some color distortion from the original multispectral (Choi, 2008). This problem becomes
worse the more the intensity band and the panchromatic image differ. Various modifications of the method have been proposed to reduce this distortion.
One of the first modifications of the FIHS method extends the IHS method from three bands to
four by incorporating an infrared component (Tu et al., 2005). Because the panchromatic image sensors
pick up infrared light (IR) in addition to visible wavelengths, this modification allowed the calculated
intensity of the multispectral image to better match the panchromatic image, thus causing less color distortion.
A similar modification called the FIHS-SA method uses four bands but incorporates weighting
coefficients on the green and blue bands in an attempt to minimize the difference between I and the
panchromatic image. These weighting coefficients were calculated experimentally from fused IKONOS
images (Tu et al., 2005). In 2008 Choi expanded on this work and experimentally determined
coefficients for the red and infrared bands as well for IKONOS images. The green and blue band
coefficients were taken from the 2005 paper by Tu. Since these coefficients were calculated using
IKONOS data, these parameters are not ideal for fusing QuickBird images.
In order to minimize spectral distortion in the IHS pan-sharpened image, we propose a new
modification of IHS that varies the manner the intensity band is calculated depending on the initial
multispectral and panchromatic images. To minimize spectral distortion the intensity band should
approximate the panchromatic image as closely as possible. Therefore, in this Adaptive IHS method we compute the intensity as a weighted sum of the multispectral bands $M_1, \dots, M_4$:

$$
I = \alpha_1 M_1 + \alpha_2 M_2 + \alpha_3 M_3 + \alpha_4 M_4 \approx Pan .
$$

In order to calculate these coefficients we construct a functional $F$ to minimize, whose first term penalizes the difference between the intensity band and the panchromatic image. The second term enforces the non-negativity constraint on the $\alpha_i$'s using the Lagrange multiplier $\lambda$. In order to solve this minimization problem we use a semi-implicit gradient descent scheme.
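The fit can be sketched with ordinary projected gradient descent in place of the semi-implicit Lagrange-multiplier scheme described above (a deliberate simplification; all names are our own):

```python
import numpy as np

def fit_intensity_weights(ms, pan, steps=500, lr=None):
    """Sketch of the Adaptive IHS idea: find non-negative weights a_i
    so that I = sum_i a_i * M_i approximates Pan in the least-squares
    sense. Uses projected gradient descent, not the paper's scheme.
    """
    A = ms.reshape(-1, ms.shape[-1]).astype(float)   # pixels x bands
    b = pan.ravel().astype(float)
    a = np.full(A.shape[-1], 1.0 / A.shape[-1])      # start from the mean
    if lr is None:
        lr = 1.0 / np.linalg.norm(A.T @ A, 2)        # safe step size
    for _ in range(steps):
        grad = A.T @ (A @ a - b)                     # least-squares gradient
        a = np.maximum(a - lr * grad, 0.0)           # project onto a_i >= 0
    return a
```

With the weights in hand, the fusion step itself is the same FIHS update, using the adapted intensity in place of the band average.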
PCA
In the PCA-based method, the PCA transform converts the intercorrelated multispectral bands into a new set of uncorrelated components. It is assumed that the first principal component, which has the highest variance, contains the most information from the original image and is therefore the component to be replaced by the high-spatial-resolution panchromatic image (Shah, 2008). All the other components are left unaltered. An inverse PCA transform is then performed on the modified set of components to produce the fused image.
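The substitution can be sketched as follows (a simplified version in which Pan is mean/std-matched to the first component before substitution; all names are our own):

```python
import numpy as np

def pca_fuse(ms, pan):
    """PCA fusion sketch: project the MS bands onto their principal
    components, swap the first component for the Pan image, invert."""
    H, W, B = ms.shape
    X = ms.reshape(-1, B).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Eigenvectors of the band covariance matrix, largest variance first.
    cov = np.cov(Xc, rowvar=False)
    _, vecs = np.linalg.eigh(cov)
    vecs = vecs[:, ::-1]
    pcs = Xc @ vecs                           # principal component scores
    p = pan.ravel().astype(float)
    # Match Pan's mean/std to the first PC before substituting it.
    pc1 = pcs[:, 0]
    p = (p - p.mean()) / (p.std() + 1e-12) * pc1.std() + pc1.mean()
    pcs[:, 0] = p
    fused = pcs @ vecs.T + mean               # inverse PCA transform
    return fused.reshape(H, W, B)
```

Note that because only the first component changes, the per-band means of the fused image stay close to those of the original multispectral image.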
Wavelet
The wavelet fusion method is based on the wavelet decomposition of images into different
components based on their local frequency content. We perform the Discrete Wavelet Transforms
(DWT) on the multispectral and panchromatic images to extract the low frequency data from the
multispectral image and the high frequency data from the panchromatic image. These components are
combined to create the Fused Wavelet Coefficient Map. The inverse wavelet transformation is
performed on the fused map to create the final pan-sharpened image. [Figure: schematic of the wavelet fusion pipeline, showing the Pan and MS images decomposed, combined into the Fused Wavelet Coefficient Map, and inverse-transformed.]
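A single-level Haar transform is enough to illustrate the substitution scheme (a simplification of the method described above, which may use deeper decompositions and other wavelets; all names are our own):

```python
import numpy as np

def haar2(x):
    """One level of a 2-D Haar DWT: returns (LL, (LH, HL, HH))."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def ihaar2(ll, details):
    """Invert haar2 (exact reconstruction)."""
    lh, hl, hh = details
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x

def wavelet_fuse_band(band, pan):
    """Keep the MS band's low-frequency LL, take the high-frequency
    detail coefficients from Pan, and invert the transform."""
    ll_ms, _ = haar2(band)
    _, det_pan = haar2(pan)
    return ihaar2(ll_ms, det_pan)
```

Fusing with Pan's own details reconstructs Pan's edges exactly while leaving the band's coarse color content untouched, which is the intended behavior of the scheme.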
P+XS
P+XS is a variational method that calculates the pan-sharpened image by minimizing an energy functional (Ballester, 2006). It obtains the edge information of the panchromatic image using its gradient. The spectral information is obtained by approximating the panchromatic image as a linear combination of the multispectral bands.
VWP
The pan-sharpening method VWP combines the Wavelet and P+XS methods. It uses the wavelet coefficients to obtain higher spectral quality and the energy functional of P+XS to produce clear edges, so it explicitly preserves spectral quality better. Also, unlike P+XS, VWP does not approximate the panchromatic image as a linear combination of the multispectral bands, which is beneficial because it does not limit the method to four-band images (Moeller, 2008).
Quality Metrics
There are many different ways to analyze the results of pan-sharpened images and compare different methods. When comparing methods, we are interested in both spatial and spectral quality. Spatial quality is relatively easy to judge visually from the sharpness of the edges, but spectral quality is much harder to assess by visual inspection, since it requires matching the colors of the final result to the original multispectral image. There are many metrics that analyze the spectral quality. Relative dimensionless
global error in synthesis (ERGAS) calculates the amount of spectral distortion in the image (Wald, 2000):

$$
\mathrm{ERGAS} = 100\,\frac{h}{l}\sqrt{\frac{1}{N}\sum_{n=1}^{N}\frac{\mathrm{RMSE}(n)^2}{\mu(n)^2}}
$$

where $h/l$ is the ratio between the pixel sizes of the Pan and MS images, $\mu(n)$ is the mean of the $n$th band, and $N$ is the number of bands. Spectral Angle Mapper (SAM) compares each pixel in the image with every
endmember for each class and assigns a value between 0 (low resemblance) and 1 (high resemblance)
$$
\cos\theta = \frac{\sum_{i=1}^{N} A_i B_i}{\sqrt{\sum_{i=1}^{N} A_i^2}\;\sqrt{\sum_{i=1}^{N} B_i^2}}
$$
Here, $N$ is the number of bands, and $A = (A_1, A_2, \dots, A_N)$ and $B = (B_1, B_2, \dots, B_N)$ are the spectral vectors at the same location in the multispectral image and fused image, respectively. $\theta$ is the spectral angle at a specific point; to compute the SAM for the entire image, we take the average of all values. Spectral Information Divergence (SID) is derived from the concept of divergence arising
in information theory and can be used to describe the statistic of a spectrum. It also views each pixel
spectrum as a random variable and then measures the discrepancy of probabilistic behaviors between
spectra (Chang, 1999). To compute SID, let $x = (x_1, \dots, x_N)^T$ be a spectral vector taken from the multispectral image and $y = (y_1, \dots, y_N)^T$ the corresponding vector from the final fused image. The components are normalized into the probability vectors

$$
p_j = \frac{x_j}{\sum_{i=1}^{N} x_i}, \qquad q_j = \frac{y_j}{\sum_{i=1}^{N} y_i},
$$

and SID is defined as

$$
\mathrm{SID}(x, y) = D(x \,\|\, y) + D(y \,\|\, x),
$$

where $D(x \,\|\, y) = \sum_{i=1}^{N} p_i \log(p_i / q_i)$, the relative entropy, and similarly for $D(y \,\|\, x)$. A
Universal Image Quality Index (Q-average) models any distortion as a combination of three different
factors: loss of correlation, luminance distortion, and contrast distortion (Bovik, 2002).
$$
Q = \frac{4\,\sigma_{xy}\,\bar{x}\,\bar{y}}{(\sigma_x^2 + \sigma_y^2)\left[(\bar{x})^2 + (\bar{y})^2\right]}
$$
For the above formula, let $x = \{x_i \mid i = 1, 2, \dots, N\}$ and $y = \{y_i \mid i = 1, 2, \dots, N\}$ be the original MS and fused image vectors, respectively. Each component of the formula is defined as follows:

$$
\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad \bar{y} = \frac{1}{N}\sum_{i=1}^{N} y_i,
$$

$$
\sigma_x^2 = \frac{1}{N-1}\sum_{i=1}^{N}(x_i - \bar{x})^2, \qquad
\sigma_y^2 = \frac{1}{N-1}\sum_{i=1}^{N}(y_i - \bar{y})^2, \qquad
\sigma_{xy} = \frac{1}{N-1}\sum_{i=1}^{N}(x_i - \bar{x})(y_i - \bar{y}).
$$
The relative average spectral error (RASE) characterizes the average performance of the method:

$$
\mathrm{RASE} = \frac{100}{M}\sqrt{\frac{1}{N}\sum_{i=1}^{N}\mathrm{RMSE}^2(B_i)}
$$

In the formula for RASE, $M$ is the mean radiance of the $N$ spectral bands $B_i$ of the original MS image.
We also used root mean squared error (RMSE) and correlation coefficient (CC) to analyze and compare
the spectral quality. The CC between the original MS image and the final fused image is defined as
$$
CC(A, B) = \frac{\sum_{m,n}(A_{mn} - \bar{A})(B_{mn} - \bar{B})}{\sqrt{\left(\sum_{m,n}(A_{mn} - \bar{A})^2\right)\left(\sum_{m,n}(B_{mn} - \bar{B})^2\right)}}
$$

where $\bar{A}$ and $\bar{B}$ stand for the mean values of the corresponding data sets, and CC is calculated globally for the entire image.

$$
\mathrm{RMSE} = \sqrt{\frac{\sum_{x}\sum_{i}\left(A_i(x) - F_i(x)\right)^2}{n\,m\,d}}
$$
In this formula $x$ is the pixel and $i$ is the band number; $n$ is the number of rows, $m$ is the number of columns, and $d$ is the number of bands. We used all the metrics stated above to determine which pan-sharpening method performs best.
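Two of the spectral metrics above, ERGAS and SAM, can be sketched in a few lines of NumPy (our own simplified implementations; note that this `sam` returns the mean angle $\theta$ in radians rather than its cosine):

```python
import numpy as np

def ergas(ms, fused, ratio):
    """ERGAS sketch; ratio = h/l, the Pan/MS pixel-size ratio."""
    n_bands = ms.shape[-1]
    acc = 0.0
    for n in range(n_bands):
        rmse_n = np.sqrt(np.mean((ms[..., n] - fused[..., n]) ** 2))
        acc += (rmse_n / ms[..., n].mean()) ** 2   # relative to band mean
    return 100.0 * ratio * np.sqrt(acc / n_bands)

def sam(ms, fused):
    """Mean spectral angle (radians) between per-pixel spectra."""
    a = ms.reshape(-1, ms.shape[-1]).astype(float)
    b = fused.reshape(-1, fused.shape[-1]).astype(float)
    num = (a * b).sum(axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    return np.arccos(np.clip(num / den, -1.0, 1.0)).mean()
```

Both metrics are zero for a perfect fusion (fused image identical to the reference), and grow with spectral distortion.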
To judge the spatial quality of the pan-sharpened images we compare the high frequency data
from the panchromatic image to the high frequency data from each band of the fused image using a
method proposed by Zhou in 2004. To extract the high-frequency data we apply the following Laplacian mask:

$$
\text{mask} = \begin{pmatrix} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{pmatrix}.
$$
We compare the resulting filtered images by considering the correlation coefficients between
each band and the panchromatic image. The closer the average correlation coefficient is to one, the more
closely the edge data of the fused image matches the edge data of the panchromatic, indicating better
spatial quality.
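This spatial index can be sketched as follows (zero padding at the image border and all function names are our own assumptions):

```python
import numpy as np

# Laplacian high-pass mask from Zhou's spatial-quality protocol.
LAPLACIAN = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=float)

def high_pass(img):
    """Filter a 2-D image with the Laplacian mask (zero padding)."""
    padded = np.pad(img.astype(float), 1)
    out = np.zeros(img.shape, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += LAPLACIAN[di, dj] * padded[di:di + img.shape[0],
                                              dj:dj + img.shape[1]]
    return out

def spatial_quality(fused, pan):
    """Average correlation between the high-pass Pan image and the
    high-pass version of each fused band."""
    hp_pan = high_pass(pan).ravel()
    ccs = [np.corrcoef(high_pass(fused[..., b]).ravel(), hp_pan)[0, 1]
           for b in range(fused.shape[-1])]
    return float(np.mean(ccs))
```

A value near one means the edge content of every fused band tracks the Pan image closely.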
Results
As mentioned before, spatial quality is relatively easy to judge just by looking at the image. For example, in Image 4, IHS and PCA demonstrate clear edges, whereas Wavelet exhibits what is called a stair-casing effect; the same pattern holds for the other images. To be more precise, we also used the spatial metric described above to evaluate the images. The results of the spatial metric confirm our prediction that IHS and PCA have the highest spatial quality, but they are misleading when evaluating P+XS and Wavelet: in Table 4, P+XS has a lower spatial value than Wavelet, yet P+XS visually looks better, so there is a discrepancy in this metric. Overall, in Table 7, which averages the results over the six images, it is clear that IHS performs best spatially.
The spectral quality was more difficult to judge visually; therefore we used many metrics to evaluate the results. In all the fused images, IHS and PCA have the highest color distortion, due to overuse of the panchromatic image: the colors look very different from the original MS. It is difficult to say which of the other images match the MS better. For example, in Image 4 one can see that the color of the swimming pool is very different in IHS and PCA, but it is hard to conclude which of the other fused images has the best spectral quality. P+XS and VWP seem to have the least spectral distortion, but this is difficult to confirm visually. We ran the metrics on all the images and took an average of the results. Looking at Table 7, we can conclude from the metrics that VWP performs best spectrally. Comparing the three IHS methods only, the metrics show that the original IHS performs best spatially, whereas the Adaptive IHS performs best spectrally.
In conclusion, overall VWP performs best spectrally and IHS performs best spatially. There is always a tradeoff between spectral and spatial quality; because of this, the choice of method can depend on how the fused image will be used. Given our metric results, we also concluded that among the three IHS methods, the Adaptive IHS performs best spectrally.
[Images 1–6: original and pan-sharpened results for the six QuickBird test scenes.]
References
Alparone, Luciano, Lucien Wald, Jocelyn Chanussot, Lori M. Bruce, and Claire Thomas.
Ballester, Coloma, Vicent Caselles, Laura Igual, and Joan Verdera. "A Variational Model for
Bovik, Alan C., and Zhou Wang. "A Universal Image Quality Index." IEEE Signal Processing
Letters (2002).
Chang, C. "Spectral Information Divergence for Hyperspectral Image Analysis." Proc. Geosci.
Choi, M.-J, H.-C. Kim, N.I. Cho, and H.O. Kim. "An Improved Intensity-Hue-Saturation
Method for IKONOS Image Fusion." International Journal of Remote Sensing (2008).
Choi, Myungjin. "A New Intensity-Hue-Saturation Fusion Approach to Image Fusion with a
1672-1682.
Du, Qian, Oguz Gungor, and Jie Shan, comps. Performance Evaluation for Pan-Sharpening
Landsat ETM+ and Spot Satellite Images Using IHS, Brovey and PCA. 2007. Toosi
González-Audícana, María, Xavier Otazu, Octavi Fors, and Jesús Álvarez-Mozos. "A Low
Computational-Cost Method to Fuse IKONOS Images Using the Spectral Response of Its
Hyder, A. K., E. Shahbazian, and E. Waltz. "Assessment of Image Fusion Procedures Using
Nencini, Filippo, Andrea Garzelli, Stefano Baronti, and Luciano Alparone. "Remote Sensing
Image Fusion Using the Curvelet Transform." Information Fusion (2006). Amsterdam.
Otazu, Xavier, María González-Audícana, Octavi Fors, and Jorge Núñez. "Introduction of
23385.
Shah, Vijay P., Nicolas H. Younan, and Roger L. King. "An Efficient Pan-Sharpening Method
Smith, Lindsay I. "A Tutorial on Principal Component Analysis." 26 Feb. 2002. Department of
Tu, T.M., S.C. Su, H.C. Shyn, and P.S. Huang. "A new look at IHS-like image fusion methods."
Technique with Spectral Adjustment for IKONOS Imagery." IEEE Geoscience and
(2004). Starkville.
Wald, L. "Quality of High Resolution Synthesized Images: Is There a Simple Criterion?" Proc.
Zhang, Y, and G Hong. "An IHS and Wavelet Integrated Approach to Improve Pan-Sharpening
Visual Quality of Natural Colour IKONOS and QuickBird Images." Information Fusion
(2005): 225-234.
Zhang, Yun. "Understanding Image Fusion." PCI Geomatics. June 2004. PCI Geomatics. 24
June 2008.