
2666 IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, VOL. 52, NO. 5, MAY 2014

Spectral–Spatial Hyperspectral Image Classification With Edge-Preserving Filtering

Xudong Kang, Student Member, IEEE, Shutao Li, Member, IEEE, and Jón Atli Benediktsson, Fellow, IEEE

Abstract—The integration of spatial context in the classification of hyperspectral images is known to be an effective way of improving classification accuracy. In this paper, a novel spectral–spatial classification framework based on edge-preserving filtering is proposed. The proposed framework consists of the following three steps. First, the hyperspectral image is classified using a pixelwise classifier, e.g., the support vector machine classifier. Then, the resulting classification map is represented as multiple probability maps, and edge-preserving filtering is conducted on each probability map, with the first principal component or the first three principal components of the hyperspectral image serving as the gray or color guidance image. Finally, according to the filtered probability maps, the class of each pixel is selected based on the maximum probability. Experimental results demonstrate that the proposed edge-preserving filtering based classification method can improve the classification accuracy significantly in a very short time. Thus, it can be easily applied in real applications.

Index Terms—Classification, edge-preserving filters (EPFs), hyperspectral data, spatial context.

I. INTRODUCTION

SATELLITE images captured by hyperspectral sensors often have more than 100 spectral bands for each pixel. Therefore, a hyperspectral image can provide very informative spectral information regarding the physical nature of the different materials, which can be utilized to distinguish objects in the image scene. However, the high dimensionality of hyperspectral data may produce the Hughes phenomenon [1] and thus may influence the performance of supervised classification methods.

During the last decade, a large number of feature extraction, feature reduction, and combination techniques [2]–[5] have been proposed to address the high-dimensionality problem. Furthermore, intensive work has been devoted to building accurate pixelwise classifiers for the analysis of hyperspectral images [6], e.g., random forests [7], [8], neural networks [9], [10], AdaBoost [11], support vector machines (SVMs) [12], sparse representation [13], [14], and active learning [15]–[17] methods. Among these methods, the SVM classifier has, in particular, shown a good performance in terms of classification accuracy [12] since it has two major advantages: First, it requires relatively few training samples to obtain good classification accuracies; second, it is robust to the spectral dimension of hyperspectral images [12].

To improve the classification performance further, many researchers have worked on spectral–spatial classification, which can incorporate the spatial contextual information into the pixelwise classifiers [18]. For example, extended morphological profiles (EMPs) have been proposed for constructing spectral–spatial features [19] which are adaptive definitions of the neighborhood of pixels. Furthermore, spectral–spatial kernels, e.g., composite [20], morphological [21], and graphic [22] kernels, have also been proposed for the improvement of the SVM classifier. The morphological and kernel-based methods have given good results in terms of accuracies for classifying hyperspectral images [20]–[25].

Another family of spectral–spatial classification methods is based on image segmentation [18], [26]. A hyperspectral image is first segmented into different regions based on the homogeneity of either intensity or texture [27] so that all the pixels within the same region can be considered as a spatial neighborhood. Different hyperspectral segmentation techniques such as partitional clustering [28], watershed [29], hierarchical segmentation [30], and minimum spanning forest [31] have been proposed for this objective. Then, majority voting is applied for combining the pixelwise classification results obtained by a pixelwise classifier with the segmentation map obtained by image segmentation. Specifically, all the pixels in the same region are assigned to the most frequent class within this region. Furthermore, multiple spectral–spatial classification methods can be combined together to improve the classification accuracy further [30]. However, segmentation-based methods rely on the performance of the segmentation techniques. In order to get accurate segmentation results, advanced segmentation methods are usually required, but those methods may be time-consuming.

Different from the traditional segmentation-based spectral–spatial classification framework described previously, Li et al. proposed that the spectral–spatial classification problem can be solved in a Bayesian framework [17], [32]. The final classification result is obtained from a posterior distribution built on the class distributions, which are learned from multinomial logistic regression and a multilevel logistic (MLL) prior.

Manuscript received March 14, 2013; revised April 24, 2013; accepted May 11, 2013. Date of publication July 9, 2013; date of current version February 28, 2014. This paper was supported by the National Natural Science Foundation of China under Grant 61172161, by the Hunan Provincial Innovation Foundation for Postgraduate, and by the Chinese Scholarship Award for Excellent Doctoral Student.

X. Kang is with the College of Electrical and Information Engineering, Hunan University, Changsha 410082, China, and also with the Faculty of Electrical and Computer Engineering, University of Iceland, 101 Reykjavík, Iceland (e-mail: xudong_kang@163.com).

S. Li is with the College of Electrical and Information Engineering, Hunan University, Changsha 410082, China (e-mail: shutao_li@hnu.edu.cn).

J. A. Benediktsson is with the Faculty of Electrical and Computer Engineering, University of Iceland, 101 Reykjavík, Iceland (e-mail: benedikt@hi.is).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TGRS.2013.2264508

0196-2892 © 2013 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

Li et al.'s method has given better performances in terms of classification accuracies when compared with several widely used spectral–spatial classification methods [17], [32].

Recently, edge-preserving filtering [33]–[37] has become a very active research topic in image processing and has been applied in many applications such as high dynamic range imaging [36], stereo matching [38], image fusion [39], [40], enhancement [41], dehazing [42], and denoising [43]. In hyperspectral image analysis, edge-preserving filtering has also been successfully applied for hyperspectral image visualization [44].

In this paper, an edge-preserving filtering based spectral–spatial classification framework is proposed. The pixelwise classification map obtained by the SVM classifier is first represented as multiple probability maps (the probability that a pixel belongs to a specified class) and then processed with edge-preserving filtering. Finally, the class of each pixel is selected based on the maximum probability. Taking neighborhood information into account, edge-preserving filtering can smooth the probabilities while ensuring that the smoothed probabilities are aligned with real object boundaries. Experiments show that the proposed method can improve the classification accuracy of SVM significantly while requiring far fewer computing resources. Thus, the proposed method will be quite useful in real applications.

The rest of this paper is organized as follows. Section II introduces two widely used edge-preserving filters (EPFs). Section III describes the proposed spectral–spatial classification method. Section IV gives the results and provides a discussion. Finally, the conclusion is given in Section V.

II. EDGE-PRESERVING FILTERING

During the last decade, many different EPFs, e.g., the joint bilateral filter [33], the weighted least-squares (WLS) filter [36], the guided filter [34], the domain transform filter [45], the local linear Stein's unbiased risk estimate filter [46], and the L0 gradient filter [47], have been proposed. Most of these EPFs have a similar property, i.e., they can be used for joint filtering, where the content of one image is smoothed based on the edge information of a guidance image. This means that the spatial information of the guidance image can be taken into account in the filtering process. In this section, the two most widely used EPFs, i.e., the joint bilateral filter and the guided filter, are described.

A. Joint Bilateral Filter

The joint bilateral filter is based on the widely used Gaussian filter, considering both the distance in the image plane (the spatial domain) and the distance along the intensity axis (the range domain). The spatial and range distances are defined using two Gaussian decreasing functions, i.e., G_δs(‖i − j‖) = exp(−‖i − j‖²/δs²) and G_δr(|I_i − I_j|) = exp(−|I_i − I_j|²/δr²). Specifically, the filtering output O_i of the input pixel P_i can be represented as a weighted average of its neighborhood pixels P_j as follows:

    O_i = (1/K_i^b) Σ_{j∈ω_i} G_δs(‖i − j‖) G_δr(|I_i − I_j|) P_j

with

    K_i^b = Σ_{j∈ω_i} G_δs(‖i − j‖) G_δr(|I_i − I_j|)    (1)

where i and j represent the ith and jth pixels, ω_i is a local window of size (2δs + 1) × (2δs + 1) around pixel i, K_i^b is a normalizing term of the joint bilateral filter, P and I are the input image and the guidance image, respectively, δs controls the size of the local window used to filter a pixel, and δr defines how much the weight of a pixel decreases because of the intensity difference between the reference pixels, i.e., I_i and I_j. Based on (1), it is easy to see that, if the neighborhood pixels of pixel i in the guidance image have similar intensities or colors, i.e., I_i ≈ I_j, the weight of the neighboring pixel j will be quite large, especially when it is very close to i, i.e., ‖i − j‖ is very small. In contrast, if the neighboring pixels have quite different intensities in the guidance image, the situation will be the opposite. This means that those adjacent input pixels which have similar intensities or colors in the guidance image tend to have similar outputs.

B. Guided Filter

The guided filter is based on a local linear model which assumes that the filtering output O can be represented as a linear transform of the guidance image I in a local window ω_j of size (2r + 1) × (2r + 1) as follows:

    O_i = a_j I_i + b_j,  ∀i ∈ ω_j.    (2)

This model ensures ∇O ≈ a∇I, which means that the filtering output O will have an edge only if the guidance image I has an edge. To determine the coefficients a_j and b_j, an energy function [see (3)] is constructed as follows:

    E(a_j, b_j) = Σ_{i∈ω_j} ((a_j I_i + b_j − P_i)² + ε a_j²)    (3)

where ε is a regularization parameter deciding the degree of blurring for the guided filter. The energy function is based on two goals. First, the filtering output, i.e., (a_j I_i + b_j), should be as close as possible to the input image P. Second, the local linear model should be maintained in the energy function. By solving the energy function, abrupt intensity changes in the guidance image I can be mostly preserved in the filtering output O.

C. Comparison of the Two EPFs

Given the aforementioned brief description of the two EPFs, their properties are studied in Fig. 1. Fig. 1(a) and (c) shows the input image P and the guidance image I, respectively. Fig. 1(b) and (d) shows the filtering outputs obtained by the different EPFs with different parameter settings. Regarding the joint bilateral filter [see Fig. 1(b), from left to right], the two parameters, i.e., δs and δr, are set as follows: δs = 2, δr = 0.01; δs = 2, δr = 0.1; δs = 4, δr = 0.1;
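To make the operation of the joint bilateral filter in (1) concrete, the following sketch filters a small gray-scale image in pure Python. This is an illustrative implementation under our own naming, not the authors' code; as in (1), δs serves both as the Gaussian scale and the window radius, and images are assumed to be lists of lists of floats in [0, 1].

```python
import math

def joint_bilateral_filter(P, I, delta_s=2, delta_r=0.1):
    """Joint bilateral filtering of input image P guided by image I, eq. (1).

    Edges of the guidance image I drive the range weights, so they are
    preserved in the output even when P itself is noisy.
    """
    h, w = len(P), len(P[0])
    O = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc, K = 0.0, 0.0  # weighted sum and normalizing term K_i^b
            for m in range(max(0, i - delta_s), min(h, i + delta_s + 1)):
                for n in range(max(0, j - delta_s), min(w, j + delta_s + 1)):
                    # spatial weight G_{delta_s} and range weight G_{delta_r}
                    g_s = math.exp(-((i - m) ** 2 + (j - n) ** 2) / delta_s ** 2)
                    g_r = math.exp(-((I[i][j] - I[m][n]) ** 2) / delta_r ** 2)
                    acc += g_s * g_r * P[m][n]
                    K += g_s * g_r
            O[i][j] = acc / K
    return O
```

With a constant guidance image, every range weight is 1 and the filter degenerates to a plain Gaussian blur; with a step-edge guidance image and a small δr, pixels on opposite sides of the edge barely mix, which is the edge-preserving behavior exploited later for probability maps.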

Fig. 1. (a) Input image. (b) and (d) Filtered images obtained by different EPFs with different parameter settings. (c) Guidance image.
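The guided filter's local linear model of (2) and the closed-form least-squares solution of (3) can be sketched in one dimension as follows. The function name and the 1-D simplification are ours; practical implementations work in 2-D using box filters, but the coefficients a_j = cov(I, P)/(var(I) + ε) and b_j = mean(P) − a_j·mean(I), and the final averaging of the per-window estimates, are the same.

```python
def guided_filter(P, I, r=2, eps=0.01):
    """1-D guided filter sketch following eqs. (2)-(3).

    For each window of radius r centered at j, solve (3) in closed form;
    each output O[i] then averages a_j*I[i] + b_j over all windows
    containing i, which yields the weighted-average form used in (5)-(6).
    """
    n = len(P)
    a, b = [0.0] * n, [0.0] * n
    for j in range(n):
        lo, hi = max(0, j - r), min(n, j + r + 1)
        m = hi - lo
        mu_I = sum(I[lo:hi]) / m
        mu_P = sum(P[lo:hi]) / m
        var_I = sum((v - mu_I) ** 2 for v in I[lo:hi]) / m
        cov_IP = sum((I[k] - mu_I) * (P[k] - mu_P) for k in range(lo, hi)) / m
        a[j] = cov_IP / (var_I + eps)          # least-squares slope of (3)
        b[j] = mu_P - a[j] * mu_I              # least-squares offset of (3)
    O = [0.0] * n
    for i in range(n):
        lo, hi = max(0, i - r), min(n, i + r + 1)
        O[i] = sum(a[j] * I[i] + b[j] for j in range(lo, hi)) / (hi - lo)
    return O
```

Filtering a step signal with itself as guidance leaves the flat regions untouched and keeps the transition sharp, illustrating the ∇O ≈ a∇I property.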

Fig. 2. Schematic of the proposed edge-preserving spectral–spatial classification method.

δs = 4, δr = 0.2; and δs = 8, δr = 0.4, respectively. Regarding the guided filter [see Fig. 1(d), from left to right], the two parameters, i.e., r and ε, are set as follows: r = 1, ε = 10^−6; r = 2, ε = 10^−4; r = 4, ε = 0.01; r = 4, ε = 0.1; and r = 8, ε = 0.1, respectively. The figure shows that, as the filter size (δs and r) and the degree of blurring (δr and ε) increase, a more obvious smoothing is produced in the filtering outputs. More importantly, compared with the input image, the pixels in the filtering outputs tend to be aligned with edges in the guidance image. In other words, adjacent input pixels with similar intensities in the guidance image tend to get similar filtering outputs. This property is closely related to image segmentation, which aims at dividing an image into different regions based on the homogeneity of intensity or color.

III. SPECTRAL–SPATIAL CLASSIFICATION WITH EDGE-PRESERVING FILTERING

Here, the spectral–spatial classification problem is considered as a probability optimization process, as illustrated in Fig. 2. The initial probability that a pixel belongs to a specified class is estimated based on a widely used pixelwise classifier, i.e., the SVM classifier. Taking spatial contextual information into account, the final probabilities are obtained by performing edge-preserving filtering on the initial probability maps, with the principal components of the hyperspectral image serving as the guidance image. Specifically, if the nth probability map has a large intensity at location i, which refers to a higher probability, then the ith pixel of the hyperspectral image tends to belong to the nth class.
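The gray-scale guidance image mentioned above is the projection of each pixel's spectrum onto the first principal component. A minimal sketch of that projection is given below; it finds the leading eigenvector of the band covariance matrix by power iteration (our choice of method, assuming the cube is small enough for pure Python).

```python
def first_principal_component(cube):
    """Project each pixel of a hyperspectral cube onto the first principal
    component, producing a gray-scale guidance image.

    `cube` is a list of pixels, each a list of d band values. The leading
    eigenvector of the d x d band covariance matrix is approximated by
    power iteration; the sign of the projection is arbitrary.
    """
    n, d = len(cube), len(cube[0])
    mean = [sum(p[k] for p in cube) / n for k in range(d)]
    centered = [[p[k] - mean[k] for k in range(d)] for p in cube]
    # band covariance matrix C = X^T X / n
    C = [[sum(row[a] * row[b] for row in centered) / n for b in range(d)]
         for a in range(d)]
    v = [1.0] * d  # power iteration for the leading eigenvector
    for _ in range(200):
        w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        if norm == 0:
            break
        v = [x / norm for x in w]
    return [sum(row[k] * v[k] for k in range(d)) for row in centered]
```

For the color guidance option, the same idea is applied to the first three eigenvectors, and the three projections are stacked as an RGB image.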

A. Problem Formulation

The general supervised hyperspectral classification problem can be formulated as follows: Let S ≡ {1, . . . , i} denote the set of pixels of the hyperspectral image, x = (x_1, . . . , x_i) ∈ R^{d×i} denote an image of d-dimensional feature vectors, L ≡ {1, . . . , n} be a set of labels, and c = (c_1, . . . , c_i) be the classification map of labels. Given a training set T_τ ≡ {(x_1, c_1), . . . , (x_τ, c_τ)} ∈ (R^d × L)^τ, where τ is the total number of training samples, the goal of classification is to obtain a classification map, i.e., c, which assigns a label c_i ∈ L to each pixel i ∈ S.

B. Proposed Approach

The proposed approach consists of three steps: 1) construction of the initial probability maps; 2) filtering of the probability maps; and 3) classification based on the maximum probability.

1) Construction of the Initial Probability Maps: It is known that an initial classification map c can be obtained by a pixelwise classifier. In this paper, the pixelwise classification map c is represented using probability maps, i.e., p = (p_1, . . . , p_n), in which p_{i,n} ∈ [0, 1] is the initial probability that pixel i belongs to the nth class. Specifically, the probability p_{i,n} is defined as follows:

    p_{i,n} = 1 if c_i = n, and p_{i,n} = 0 otherwise.    (4)

The SVM classifier is adopted for pixelwise classification since it is one of the most widely used pixelwise classifiers and has been successfully applied in many other spectral–spatial classification methods [6], [18], [48].

2) Filtering of the Probability Maps: Initially, spatial information is not considered, and all probabilities are valued at either 0 or 1. Therefore, the probability maps appear noisy and are not aligned with real object boundaries. To solve this problem, the probability maps are optimized by edge-preserving filtering. Specifically, each optimized probability is modeled as a weighted average of its neighborhood probabilities

    ṕ_{i,n} = Σ_j W_{i,j}(I) p_{j,n}    (5)

where i and j represent the ith and jth pixels and the filtering weight W is chosen such that the filter preserves the edges of a specified guidance image I. Therefore, this step involves two major questions: 1) how to choose an EPF and 2) how to choose a guidance image. Different filters and guidance images produce different filtering weights W_{i,j} for (5).

For the choice of EPF, two widely used EPFs are adopted in this paper. The weights W_{i,j} obtained by the two filters are reviewed as follows.

1) Filtering weight for the joint bilateral filter: the weight of the joint bilateral filter is already given in (1), i.e., W_{i,j}(I) = (1/K_i^b) G_δs(‖i − j‖) G_δr(|I_i − I_j|). Based on the corresponding description in Section II, it follows that adjacent input probabilities whose pixels have similar intensities or colors in the guidance image will have similar outputs after filtering.

Fig. 3. Example of a 1-D step edge. Here, μ and σ are shown for a filtering kernel centered exactly at an edge.

2) Filtering weight for the guided filter: as described in [37], (2) can be represented in the weighted average form of (5), and the filtering weight W_{i,j}(I) of the guided filter can be expressed as follows:

    W_{i,j}(I) = (1/|ω|²) Σ_{k: i∈ω_k, j∈ω_k} (1 + (I_i − μ_k)(I_j − μ_k)/(σ_k² + ε))    (6)

where ω_k is a local window around pixel k containing both i and j, μ_k and σ_k² are the mean and variance of I in ω_k, and |ω| is the number of pixels in ω_k. A 1-D step edge example is presented in Fig. 3 to demonstrate the edge-preserving property of the filtering weight of the guided filter. As shown in the figure, if I_i and I_j are on the same side of an edge, the term (I_i − μ_k)(I_j − μ_k) in (6) has a positive sign. However, if I_j is located on the other side of the edge, the term has a negative sign. Thus, the filtering weight becomes large for pixel pairs on the same side of an edge and small otherwise. Hence, probabilities on the same side of an edge in the guidance image I tend to have similar filtering outputs.

Regarding the choice of the guidance image I, principal component analysis (PCA) is adopted because it gives an optimal representation of the image in the mean squared sense. Here, two options are given as follows.

1) Gray-scale guidance image: the PCA decomposition is conducted on the original hyperspectral image, and the first principal component, which contains most of the edge information, is adopted as the guidance image (see Fig. 2).

2) Color guidance image: instead of guiding the filtering with a gray-scale image, the first three principal components are used as the color guidance image of the EPFs (see Fig. 2).

Fig. 4 shows an example of probability filtering. It shows that the initial probabilities look noisy and are not aligned with real object boundaries. Probability optimization with edge-preserving filtering has two major advantages in this example [see Fig. 4(b)]. First, noise probabilities that appear as scattered points or lines can be effectively smoothed. Second, the refined probabilities are aligned with real object boundaries. These two advantages demonstrate that the spatial contextual information of the guidance image is well utilized in the edge-preserving filtering process.

3) Classification Based on the Maximum Probability: Once the probability maps are filtered, the label at pixel i is simply chosen in a maximization manner as follows:

    ć_i = arg max_n ṕ_{i,n}.    (7)

Fig. 4. (a) Initial probability maps for the University of Pavia data set. (b) Final probability maps obtained by edge-preserving filtering. The lower figures in (a) and (b) show the close-up views of the probability maps denoted by boxes in the upper figures.

This step aims at transforming the probability maps ṕn into the
final classification result ć.
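The three steps above, i.e., (4), (5), and (7), can be sketched end to end as follows. Here `smooth` stands for whichever edge-preserving filter is chosen; for self-containment, the example uses a plain 1-D moving average (`box_smooth`, a hypothetical stand-in of ours that is *not* edge-preserving), but any weighting of the form (1) or (6) can be substituted.

```python
def epf_classify(initial_labels, smooth, n_classes):
    """Steps (4)-(7): one-hot probability maps from a pixelwise
    classification, smoothing of each map, and per-pixel arg max."""
    n = len(initial_labels)
    # eq. (4): p[c][i] = 1 if pixel i is initially labeled c, else 0
    maps = [[1.0 if initial_labels[i] == c else 0.0 for i in range(n)]
            for c in range(n_classes)]
    # eq. (5): filter every probability map with the chosen weights
    filtered = [smooth(m) for m in maps]
    # eq. (7): pick the class with maximum filtered probability
    return [max(range(n_classes), key=lambda c: filtered[c][i])
            for i in range(n)]

def box_smooth(m, r=1):
    """Stand-in smoother (plain moving average), for illustration only."""
    n = len(m)
    return [sum(m[max(0, i - r):min(n, i + r + 1)])
            / (min(n, i + r + 1) - max(0, i - r)) for i in range(n)]
```

On the label sequence [0, 0, 1, 0, 0, 1, 1, 1], the isolated "1" at position 2 is outvoted by its smoothed neighborhood and flips to 0, mimicking how scattered "noisy pixels" are removed in Fig. 4.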

IV. EXPERIMENTAL RESULTS AND DISCUSSIONS

A. Experimental Setup
1) Data Sets: The proposed method is evaluated on three hyperspectral data sets, i.e., the Indian Pines image, the University of Pavia image, and the Salinas image. The Indian Pines image, capturing the agricultural Indian Pines test site in northwestern Indiana, was acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor. The image has 220 bands of size 145 × 145 with a spatial resolution of 20 m per pixel and a spectral coverage ranging from 0.4 to 2.5 μm (20 water absorption bands, no. 104–108, 150–163, and 220, were removed before the experiments). Fig. 5 shows the color composite of the Indian Pines image and the corresponding ground truth data.

Fig. 5. (a) Three-band color composite of the Indian Pines image. (b) and (c) Ground truth data of the Indian Pines image.

The University of Pavia image, capturing an urban area surrounding the University of Pavia, was recorded by the ROSIS-03 sensor. The image has 115 bands of size 610 × 340 with a spatial resolution of 1.3 m per pixel and a spectral coverage ranging from 0.43 to 0.86 μm (the 12 most noisy channels were removed before the experiments). Nine classes of interest are considered for this image. Fig. 6 shows the color composite of the University of Pavia image and the corresponding ground truth data.

Fig. 6. (a) Three-band color composite of the University of Pavia image. (b) and (c) Ground truth data of the University of Pavia image.

The Salinas image was captured by the AVIRIS sensor over Salinas Valley, CA, USA, with a spatial resolution of 3.7 m per pixel. The image has 224 bands of size 512 × 217. As with the Indian Pines scene, 20 water absorption bands, no. 108–112, 154–167, and 224, were discarded. Fig. 7 shows the color composite of the Salinas image and the corresponding ground truth data. For the three images,

Fig. 7. (a) Three-band color composite of the Salinas image. (b) and (c) Ground truth data of the Salinas image.

the number of training and test samples for each class is detailed in Tables II–IV, respectively.

2) Quality Indexes: Three widely used quality indexes, i.e., the overall accuracy (OA), the average accuracy (AA), and the kappa coefficient, are adopted to evaluate the performance of the proposed method. OA is the percentage of correctly classified pixels. AA is the mean of the percentages of correctly classified pixels for each class. The kappa coefficient gives the percentage of correctly classified pixels corrected by the number of agreements that would be expected purely by chance.

Fig. 8. Indian Pines image: analysis of the influence of the parameters δs, δr, r, and ε when the first principal component is selected as the guidance image.
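The three indexes can be computed from a confusion matrix in a few lines; the sketch below (function name ours, not the authors' code) assumes integer class labels and follows the standard definitions of OA, AA, and Cohen's kappa.

```python
def accuracy_indexes(truth, pred, n_classes):
    """Compute OA, AA, and the kappa coefficient from two label lists."""
    n = len(truth)
    conf = [[0] * n_classes for _ in range(n_classes)]  # confusion matrix
    for t, p in zip(truth, pred):
        conf[t][p] += 1
    # OA: fraction of correctly classified pixels (diagonal of conf)
    oa = sum(conf[c][c] for c in range(n_classes)) / n
    # AA: mean per-class accuracy, skipping classes absent from the truth
    recalls = [conf[c][c] / sum(conf[c])
               for c in range(n_classes) if sum(conf[c])]
    aa = sum(recalls) / len(recalls)
    # kappa: OA corrected by the chance agreement p_e from the marginals
    pe = sum(sum(conf[c]) * sum(conf[r][c] for r in range(n_classes))
             for c in range(n_classes)) / (n * n)
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa
```

For example, with truth [0, 0, 0, 0, 1, 1] and prediction [0, 0, 0, 1, 1, 1], one correct-class-0 pixel is lost, giving OA = 5/6, AA = 0.875, and kappa = 2/3.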

B. Classification Results

1) Analysis of the Influence of Parameters: For the proposed joint bilateral filtering based technique, the parameters δs and δr determine the filtering size and blur degree, respectively. Similarly, the parameters r and ε denote the filtering size and blur degree of the guided filter, respectively. The influence of these parameters on the classification performance is analyzed in Figs. 8 and 9 (the experiment is performed on the Indian Pines image). In the experiment, the training set, which accounts for 10% of the ground truth, was chosen randomly (see Table II). The OA, AA, and kappa of the proposed method are measured with different parameter settings. When the influence of δs is analyzed, δr is fixed at 0.2. Similarly, for the guided filter, when the influence of r is analyzed, ε is fixed at 0.01. Furthermore, δr and ε can be analyzed in the same way, with δs and r fixed at 3.

When the first principal component is used as the guidance image for the EPFs, it can be seen from Fig. 8 that, if the filtering size and blur degree, i.e., δs, δr, r, and ε, are too large, the average classification accuracy may decrease dramatically. The reason is that a large filtering size and blur degree may over-smooth the probability maps, and thus, small-scale objects may be misclassified. For example, although the OA obtained with δs = 4 is similar to the accuracy obtained with δs = 5 [see Fig. 8(a)], the AA decreases dramatically when δs = 5 because one small-scale class, which contains only 20 pixels, is totally misclassified when the filtering size is too large. Similarly, a very small filtering size or blur degree is also not good for the proposed method because it means that only very limited local spatial information is considered in the filtering process.

Fig. 9. Indian Pines image: analysis of the influence of the parameters δs, δr, r, and ε when the color composite of the first three principal components is selected as the guidance image.

Furthermore, Fig. 9 shows the influence of the parameters when the color composite of the first three principal components serves as the guidance image of the EPFs. From this figure, a similar conclusion can be drawn: the filtering size and blur degree can be neither too small nor too large. In this paper, the default parameter setting of the proposed method is given as follows: When the guidance image of the EPF is a gray-scale image, δs = 3, δr = 0.2, r = 3, and ε = 0.01 are set as the default parameters; when the guidance image of the EPF is a color image, δs = 4, δr = 0.2, r = 4, and ε = 0.01 are set as the default parameters. In the following experiments, it is

TABLE I
CLASSIFICATION ACCURACY (IN PERCENT) OF THE PROPOSED METHOD WITH DIFFERENT POSTPROCESSING TECHNIQUES. THE STATISTICS-BASED METHOD IS APPLIED WITHIN A 7 × 7 WINDOW. THE PARAMETERS OF THE WLS FILTER ARE SET TO α = 1.4 AND λ = 0.3. THE PARAMETERS OF THE NC FILTER ARE SET TO δs = 3 AND δr = 0.2

shown that, with the provided parameter settings, good classification accuracies can be obtained for different images.

2) Analysis of the Influence of Different Postprocessing Techniques: In the proposed spectral–spatial classification framework, different postprocessing techniques, e.g., other types of edge-preserving smoothing methods or a statistics-based method, can be used to refine the binary probability maps. For example, the statistics-based method [50] assigns the label of a pixel to the most frequent class within its neighborhood. The WLS method [36] solves a WLS optimization problem which ensures that the output is as smooth as possible, except across edges in the guidance image. The normalized convolution (NC) filter [45] is a recently proposed joint EPF which has a similar property as the joint bilateral filter. Table I compares the performance of the proposed method with different postprocessing techniques. It can be seen that other types of joint edge-preserving smoothing methods such as the WLS method and the NC filter are also able to obtain satisfactory overall classification accuracies, i.e., 94.93% and 95.20%, respectively. The reason is that the different joint edge-preserving smoothing methods are all able to produce spatially smooth and edge-aligned probability maps. By contrast, since the edge information of the guidance image is not considered in the statistics-based method, that method can only ensure the spatial smoothness of the resulting classification map, thus leading to an unsatisfactory OA of 89.33%.

Fig. 10. Classification results (Indian Pines image) obtained by the (a) SVM method, (b) EMP method, (c) AEAP method, (d) L-MLL method, (e) EPF-B-g method, (f) EPF-B-c method, (g) EPF-G-g method, and (h) EPF-G-c method. The value of OA is given in percent.

3) Comparison of Different Classification Methods: In this section, the proposed edge-preserving filtering based methods (EPF-B-g, EPF-B-c, EPF-G-g, and EPF-G-c) are compared with several widely used classification methods, including the SVM-based method [12], EMPs [25], automatic extended attribute profiles (AEAPs) [49], and logistic regression with a multilevel logistic prior (L-MLL) [17]. The SVM algorithm is implemented in the LIBSVM library [51]. Furthermore, the Gaussian kernel with fivefold cross validation is adopted for the classifier. For the EMP method, the morphological profiles are constructed with the first three principal components, a circular structural element, a step size increment of two, and four openings and closings for each principal component. In order to construct the AEAPs, we use the Profattran software, which is kindly provided by the author of [49]. For the L-MLL method, the code is available on Dr. Li's homepage.¹ For the proposed EPF-B-g, EPF-B-c, EPF-G-g, and EPF-G-c methods, the abbreviations B and G indicate that the joint bilateral filter or the guided filter, respectively, is adopted for edge-preserving filtering. The abbreviations g and c denote that either the first principal component or the color composite of the first three components, respectively, is used as the guidance image. The default parameters given in the previous section are adopted for all images, although an adjustment of the parameters can improve the classification accuracy further for individual images. In order to make the proposed method reproducible, the code and data will be made available on Mr. Kang's homepage.²

The first experiment is performed on the Indian Pines data set. Fig. 10 shows the classification maps obtained by the different methods together with the corresponding OA scores. From this figure, it can be seen that the classification accuracy obtained by the EMP and AEAP methods is not very satisfactory since some noisy estimations are still visible. By contrast, the L-MLL method and the proposed method perform much better in removing "noisy pixels." Specifically, the proposed method increases the OA compared to the SVM method by about 15%. Compared with the recently proposed Bayesian-based classification method (L-MLL), the proposed EPF method also

¹ http://www.lx.it.pt/~jun/
² http://xudongkang.weebly.com

TABLE II
NUMBER OF TRAINING (TRAIN) AND TEST (TEST) SAMPLES OF THE INDIAN PINES IMAGE AND CLASSIFICATION ACCURACIES (IN PERCENT) FOR THE SVM [12], EMP [25], AEAP [49], L-MLL [17], EPF-B-g, EPF-B-c, EPF-G-g, AND EPF-G-c METHODS

Fig. 11. Classification results (University of Pavia image) obtained by the (a) SVM method, (b) EMP method, (c) AEAP method, (d) L-MLL method, (e) EPF-B-g method, (f) EPF-B-c method, (g) EPF-G-g method, and (h) EPF-G-c method. The value of OA is given in percent.

gives a higher classification accuracy. Table II presents the number of training and test samples (the training set, which accounts for 10% of the ground truth, was chosen randomly) and the classification accuracies for the different methods. From this table, it can be observed that, by using the proposed EPF method, the AA of SVM is increased from 75% to 95%, and the kappa coefficient is also increased significantly. The proposed EPF-B-g method gives the best performance in terms of OA and kappa but not in terms of AA.

The second and third experiments were performed on the University of Pavia and Salinas images, respectively. Figs. 11 and 12 show the classification maps obtained by the different methods together with the corresponding OAs. Tables III and IV present the number of training and test samples (for the University of Pavia image and the Salinas image, the training sets, which account for 6% and 2% of the ground truth, respectively, were chosen randomly) and the classification accuracies for the different methods. From the two examples, it can be seen that the proposed method always outperforms the EMP, AEAP, and L-MLL methods in terms of OA, AA, and kappa.

Fig. 12. Classification results (Salinas image) obtained by the (a) SVM method, (b) EMP method, (c) AEAP method, (d) L-MLL method, (e) EPF-B-g method, (f) EPF-B-c method, (g) EPF-G-g method, and (h) EPF-G-c method. The value of OA is given in percent.

TABLE III
NUMBER OF TRAINING (TRAIN) AND TEST (TEST) SAMPLES OF THE UNIVERSITY OF PAVIA IMAGE AND CLASSIFICATION ACCURACIES (IN PERCENT) FOR THE SVM [12], EMP [25], AEAP [49], L-MLL [17], EPF-B-g, EPF-B-c, EPF-G-g, AND EPF-G-c METHODS

TABLE IV
NUMBER OF TRAINING (TRAIN) AND TEST (TEST) SAMPLES OF THE SALINAS IMAGE AND CLASSIFICATION ACCURACIES (IN PERCENT) FOR THE SVM [12], EMP [25], AEAP [49], L-MLL [17], EPF-B-g, EPF-B-c, EPF-G-g, AND EPF-G-c METHODS
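As a reminder of how the accuracy measures reported in Tables II–IV are defined, they can all be computed from a confusion matrix (a generic sketch, not code from the paper):

```python
import numpy as np

def accuracy_measures(conf):
    """OA, AA, and kappa from a confusion matrix.

    conf[i, j] = number of test samples of true class i predicted as class j.
    """
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    oa = np.trace(conf) / total                     # overall accuracy
    aa = np.mean(np.diag(conf) / conf.sum(axis=1))  # mean of per-class accuracies
    # chance agreement estimated from row/column marginals
    pe = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / total ** 2
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa
```

For instance, a two-class confusion matrix [[40, 10], [5, 45]] gives OA = 0.85, AA = 0.85, and kappa = 0.7, which illustrates why kappa is the most conservative of the three measures.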

Compared with the SVM method, the proposed method can improve the classification accuracies significantly. For example, in Table III, the classification accuracy of the Bitumen class increases from 67.5% to 100%. Similar improvements can be found in the experimental results of the Salinas example. The two sets of results further demonstrate the accuracy of the proposed method.
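The training sets in the experiments above are drawn randomly as a fixed percentage of the ground truth. One plausible way to implement such a split is sketched below (a hypothetical helper written for this description; the paper does not publish its sampling code, and whether sampling is done per class or over the whole ground truth is not specified here — the sketch samples per class):

```python
import numpy as np

def split_ground_truth(labels, fraction, rng=None):
    """Randomly pick `fraction` of each class's labeled pixels for training.

    labels: 1-D array of class ids (0 = unlabeled, excluded from both sets).
    Returns (train_idx, test_idx) index arrays into `labels`.
    """
    rng = np.random.default_rng(rng)
    train, test = [], []
    for c in np.unique(labels):
        if c == 0:                       # skip unlabeled pixels
            continue
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        n_train = max(1, int(round(fraction * idx.size)))  # at least one sample
        train.append(idx[:n_train])
        test.append(idx[n_train:])
    return np.concatenate(train), np.concatenate(test)
```

With fraction = 0.10, each class contributes roughly 10% of its labeled pixels to the training set and the remainder to the test set, mirroring the splits used in Tables II–IV.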
4) Classification Results With Different Training and Test Sets: In this section, the influence of different training and test sets on the performance of the proposed method is analyzed. Experiments are performed on three images, i.e., the Indian Pines image, the University of Pavia image, and the Salinas image. Only the classification results obtained by the EPF-B-g method are presented because the different edge-preserving filtering methods tend to obtain similar classification results (see Tables II–IV). Fig. 13 shows the classification results of the EPF-B-g method with the number of training samples (in percent) increased from 1% to 10% for the Indian Pines image, 1% to 6% for the University of Pavia image, and 1% to 2% for the Salinas image. From this figure, it can be seen that the proposed method can always improve the classification accuracy significantly with different numbers of training samples. For example, regarding the Indian Pines image, when the OA of SVM is about 72% (4% of the ground truth samples are used as training samples), the EPF-B-g method can obtain a classification accuracy near 92%.

Fig. 13. OA of the proposed EPF-B-g method with different numbers of training samples on different images. (a) Indian Pines. (b) University of Pavia. (c) Salinas.

For the University of Pavia image, it can be seen that, with relatively limited training samples (1% of the ground truth), the proposed method can obtain an OA near
96.5%. A similar conclusion can be obtained when analyzing the experimental results of the Salinas image.

TABLE V
COMPUTING TIME (IN SECONDS) OF THE PROPOSED ALGORITHMS. THE NUMBERS OUTSIDE AND INSIDE THE PARENTHESES SHOW THE COMPUTING TIMES OF THE MATLAB AND C++ IMPLEMENTATIONS, RESPECTIVELY

5) Computational Complexity: Here, experiments are performed using MATLAB and C++ on a laptop with a 2.5-GHz CPU and 4-GB memory. Table V shows the computing times of the proposed methods, i.e., EPF-G-g, EPF-G-c, EPF-B-g, and EPF-B-c. From this table, it can be seen that the C++ implementations of the proposed methods are all very fast (the EPF-G-g method takes only 0.17 s for the Indian Pines image). The reason is that the proposed method only requires edge-preserving filtering to be performed several times and PCA decomposition to be performed once. The complexity of the two EPFs is O(N), and thus, the probability optimization step of the proposed method has a computational complexity of O(KN), where N is the number of pixels and K is the number of classes. Considering that the PCA decomposition step also has fast implementations, the proposed spectral–spatial method is indeed computationally efficient. For the spectral part of the proposed method, the MATLAB implementation of the SVM classifier is the most time-consuming part. Specifically, it takes about 8.68 s to classify the Indian Pines image. Therefore, the algorithm can be accelerated further by using more efficient pixelwise classifiers or more efficient implementations of the SVM classifier.

V. CONCLUSION

A simple yet powerful filtering approach has been proposed for spectral–spatial hyperspectral image classification. The proposed method aims at optimizing the pixelwise classification maps in a local filtering framework. The filtering operation achieves a local optimization of the probabilities, which is in contrast to some segmentation methods that aim at optimizing the classification map globally. This paper has shown that “local smoothing” is also able to achieve a high classification accuracy, i.e., 99.43% for the University of Pavia image. One advantage of the proposed method is the dominance of the pixelwise spectral information with respect to the spatial contextual information in spectral–spatial classification. An observation on the proposed method is that the optimization of the probability maps does not cause a big difference in the overall appearance of the initial pixelwise probability maps, which means that the pixelwise information is also well considered in the filtering process. Furthermore, a major advantage of the proposed method is that it is computationally efficient, and thus, it will be quite useful in real applications. In the future, a further improvement may be achieved by adaptively adjusting the filtering size and the blur degree of the EPF.

ACKNOWLEDGMENT

The authors would like to thank the Editor-in-Chief, the anonymous Associate Editor, and the reviewers for their insightful comments and suggestions, which have greatly improved this paper; P. Ghamisi and N. Falco for their contributions; and M. Pedergnana and Dr. J. Li for providing the software of the AEAP and L-MLL methods.

REFERENCES

[1] G. Hughes, “On the mean accuracy of statistical pattern recognizers,” IEEE Trans. Inf. Theory, vol. IT-14, no. 1, pp. 55–63, Jan. 1968.
[2] L. Zhang, L. Zhang, D. Tao, and X. Huang, “Tensor discriminative locality alignment for hyperspectral image spectral–spatial feature extraction,” IEEE Trans. Geosci. Remote Sens., vol. 51, no. 1, pp. 242–256, Jan. 2013.
[3] W. Li, S. Prasad, J. E. Fowler, and L. M. Bruce, “Locality-preserving dimensionality reduction and classification for hyperspectral image analysis,” IEEE Trans. Geosci. Remote Sens., vol. 50, no. 4, pp. 1185–1198, Apr. 2012.
[4] A. Villa, J. A. Benediktsson, J. Chanussot, and C. Jutten, “Hyperspectral image classification with independent component discriminant analysis,” IEEE Trans. Geosci. Remote Sens., vol. 49, no. 12, pp. 4865–4876, Dec. 2011.
[5] L. Zhang, L. Zhang, D. Tao, and X. Huang, “On combining multiple features for hyperspectral remote sensing image classification,” IEEE Trans. Geosci. Remote Sens., vol. 50, no. 3, pp. 879–893, Mar. 2012.
[6] A. Plaza, J. A. Benediktsson, J. W. Boardman, J. Brazile, L. Bruzzone, G. Camps-Valls, J. Chanussot, M. Fauvel, P. Gamba, A. Gualtieri, M. Marconcini, J. C. Tilton, and G. Trianni, “Recent advances in techniques for hyperspectral image processing,” Remote Sens. Environ., vol. 113, no. Suppl. 1, pp. S110–S122, Sep. 2009.
[7] J. Ham, Y. Chen, M. Crawford, and J. Ghosh, “Investigation of the random forest framework for classification of hyperspectral data,” IEEE Trans. Geosci. Remote Sens., vol. 43, no. 3, pp. 492–501, Mar. 2005.
[8] M. Dalponte, H. O. Orka, T. Gobakken, D. Gianelle, and E. Nasset, “Tree species classification in boreal forests with hyperspectral data,” IEEE Trans. Geosci. Remote Sens., vol. 51, no. 5, pp. 2632–2645, May 2013.
[9] F. Ratle, G. Camps-Valls, and J. Weston, “Semisupervised neural networks for efficient hyperspectral image classification,” IEEE Trans. Geosci. Remote Sens., vol. 48, no. 5, pp. 2271–2282, May 2010.
[10] Y. Zhong and L. Zhang, “An adaptive artificial immune network for supervised classification of multi-/hyperspectral remote sensing imagery,” IEEE Trans. Geosci. Remote Sens., vol. 50, no. 3, pp. 894–909, Mar. 2012.
[11] S. Kawaguchi and R. Nishii, “Hyperspectral image classification by bootstrap AdaBoost with random decision stumps,” IEEE Trans. Geosci. Remote Sens., vol. 45, no. 11, pp. 3845–3851, Nov. 2007.
[12] F. Melgani and L. Bruzzone, “Classification of hyperspectral remote sensing images with support vector machines,” IEEE Trans. Geosci. Remote Sens., vol. 42, no. 8, pp. 1778–1790, Aug. 2004.
[13] Y. Chen, N. M. Nasrabadi, and T. Tran, “Hyperspectral image classification via kernel sparse representation,” IEEE Trans. Geosci. Remote Sens., vol. 51, no. 1, pp. 217–231, Jan. 2013.
[14] A. Castrodad, Z. Xing, J. B. Greer, E. Bosch, L. Carin, and G. Sapiro, “Learning discriminative sparse representations for modeling, source separation, and mapping of hyperspectral imagery,” IEEE Trans. Geosci. Remote Sens., vol. 49, no. 11, pp. 4263–4281, Nov. 2011.
[15] J. Li, J. M. Bioucas-Dias, and A. Plaza, “Spectral–spatial classification of hyperspectral data using loopy belief propagation and active learning,” IEEE Trans. Geosci. Remote Sens., vol. 51, no. 2, pp. 844–856, Feb. 2013.
[16] W. Di and M. M. Crawford, “View generation for multiview maximum disagreement based active learning for hyperspectral image classification,” IEEE Trans. Geosci. Remote Sens., vol. 50, no. 5, pp. 1942–1954, May 2012.
[17] J. Li, J. M. Bioucas-Dias, and A. Plaza, “Hyperspectral image segmentation using a new Bayesian approach with active learning,” IEEE Trans. Geosci. Remote Sens., vol. 49, no. 10, pp. 3947–3960, Oct. 2011.
[18] M. Fauvel, Y. Tarabalka, J. A. Benediktsson, J. Chanussot, and J. C. Tilton, “Advances in spectral–spatial classification of hyperspectral images,” Proc. IEEE, vol. 101, no. 3, pp. 652–675, Mar. 2013.
[19] J. Benediktsson, M. Pesaresi, and K. Amason, “Classification and feature extraction for remote sensing images from urban areas based on morphological transformations,” IEEE Trans. Geosci. Remote Sens., vol. 41, no. 9, pp. 1940–1949, Sep. 2003.

[20] G. Camps-Valls, L. Gomez-Chova, J. Munoz-Mari, J. Vila-Frances, and J. Calpe-Maravilla, “Composite kernels for hyperspectral image classification,” IEEE Geosci. Remote Sens. Lett., vol. 3, no. 1, pp. 93–97, Jan. 2006.
[21] M. Fauvel, J. Chanussot, and J. A. Benediktsson, “A spatial–spectral kernel-based approach for the classification of remote-sensing images,” Pattern Recognit., vol. 45, no. 1, pp. 381–392, Jan. 2012.
[22] G. Camps-Valls, N. Shervashidze, and K. M. Borgwardt, “Spatio-spectral remote sensing image classification with graph kernels,” IEEE Geosci. Remote Sens. Lett., vol. 7, no. 4, pp. 741–745, Oct. 2010.
[23] A. Plaza, P. Martinez, J. Plaza, and R. Perez, “Dimensionality reduction and classification of hyperspectral image data using sequences of extended morphological transformations,” IEEE Trans. Geosci. Remote Sens., vol. 43, no. 3, pp. 466–479, Mar. 2005.
[24] D. M. Mura, A. Villa, J. A. Benediktsson, J. Chanussot, and L. Bruzzone, “Classification of hyperspectral images by using extended morphological attribute profiles and independent component analysis,” IEEE Geosci. Remote Sens. Lett., vol. 8, no. 3, pp. 542–546, May 2011.
[25] J. A. Benediktsson, J. A. Palmason, and J. R. Sveinsson, “Classification of hyperspectral data from urban areas based on extended morphological profiles,” IEEE Trans. Geosci. Remote Sens., vol. 43, no. 3, pp. 480–491, Mar. 2005.
[26] Y. Tarabalka, M. Fauvel, J. Chanussot, and J. A. Benediktsson, “SVM- and MRF-based method for accurate classification of hyperspectral images,” IEEE Geosci. Remote Sens. Lett., vol. 7, no. 4, pp. 736–740, Oct. 2010.
[27] G. Moser and S. B. Serpico, “Combining support vector machines and Markov random fields in an integrated framework for contextual image classification,” IEEE Trans. Geosci. Remote Sens., vol. 51, no. 5, pp. 2734–2752, May 2013.
[28] Y. Tarabalka, J. A. Benediktsson, and J. Chanussot, “Spectral–spatial classification of hyperspectral imagery based on partitional clustering techniques,” IEEE Trans. Geosci. Remote Sens., vol. 47, no. 8, pp. 2973–2987, Aug. 2009.
[29] Y. Tarabalka, J. Chanussot, and J. A. Benediktsson, “Segmentation and classification of hyperspectral images using watershed transformation,” Pattern Recognit., vol. 43, no. 7, pp. 2367–2379, Jul. 2010.
[30] Y. Tarabalka, J. A. Benediktsson, J. Chanussot, and J. C. Tilton, “Multiple spectral–spatial classification approach for hyperspectral data,” IEEE Trans. Geosci. Remote Sens., vol. 48, no. 11, pp. 4122–4132, Nov. 2010.
[31] Y. Tarabalka, J. Chanussot, and J. A. Benediktsson, “Segmentation and classification of hyperspectral images using minimum spanning forest grown from automatically selected markers,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 40, no. 5, pp. 1267–1279, Oct. 2010.
[32] J. Li, J. M. Bioucas-Dias, and A. Plaza, “Spectral–spatial hyperspectral image segmentation using subspace multinomial logistic regression and Markov random fields,” IEEE Trans. Geosci. Remote Sens., vol. 50, no. 3, pp. 809–823, Mar. 2012.
[33] C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in Proc. Int. Conf. Comput. Vis., Jan. 1998, pp. 839–846.
[34] K. He, J. Sun, and X. Tang, “Guided image filtering,” in Proc. Eur. Conf. Comput. Vision, Heraklion, Greece, Sep. 2010, pp. 1–14.
[35] S. Paris and F. Durand, “A fast approximation of the bilateral filter using a signal processing approach,” Int. J. Comput. Vis., vol. 81, no. 1, pp. 24–52, Jan. 2009.
[36] Z. Farbman, R. Fattal, D. Lischinski, and R. Szeliski, “Edge-preserving decompositions for multi-scale tone and detail manipulation,” ACM Trans. Graph., vol. 27, no. 3, pp. 67:1–67:10, Aug. 2008.
[37] K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 6, pp. 1397–1409, Jun. 2013.
[38] A. Hosni, C. Rhemann, M. Bleyer, C. Rother, and M. Gelautz, “Fast cost-volume filtering for visual correspondence and beyond,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 2, pp. 504–511, Feb. 2013.
[39] S. Li, X. Kang, and J. Hu, “Image fusion with guided filtering,” IEEE Trans. Image Process., vol. 22, no. 7, pp. 2864–2875, Jul. 2013.
[40] S. Li and X. Kang, “Fast multi-exposure image fusion with median filter and recursive filter,” IEEE Trans. Consum. Electron., vol. 58, no. 2, pp. 626–632, May 2012.
[41] B. Zhang and J. P. Allebach, “Adaptive bilateral filter for sharpness enhancement and noise removal,” IEEE Trans. Image Process., vol. 17, no. 5, pp. 664–678, May 2008.
[42] K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 12, pp. 2341–2353, Dec. 2011.
[43] C. H. Lin, J. S. Tsai, and C. T. Chiu, “Switching bilateral filter with a texture/noise detector for universal noise removal,” IEEE Trans. Image Process., vol. 19, no. 9, pp. 2307–2320, Sep. 2010.
[44] K. Kotwal and S. Chaudhuri, “Visualization of hyperspectral images using bilateral filtering,” IEEE Trans. Geosci. Remote Sens., vol. 48, no. 5, pp. 2308–2316, May 2010.
[45] E. S. L. Gastal and M. M. Oliveira, “Domain transform for edge-aware image and video processing,” ACM Trans. Graph., vol. 30, no. 4, pp. 69:1–69:12, Jul. 2011.
[46] T. Qiu, A. Wang, N. Yu, and A. Song, “LLSURE: Local linear SURE-based edge-preserving image filtering,” IEEE Trans. Image Process., vol. 22, no. 1, pp. 80–90, Jan. 2013.
[47] L. Xu, C. Lu, Y. Xu, and J. Jia, “Image smoothing via L0 gradient minimization,” ACM Trans. Graph., vol. 30, no. 6, pp. 174:1–174:12, Dec. 2011.
[48] X. Huang and L. Zhang, “An SVM ensemble approach combining spectral, structural, and semantic features for the classification of high-resolution remotely sensed imagery,” IEEE Trans. Geosci. Remote Sens., vol. 51, no. 1, pp. 257–272, Jan. 2013.
[49] P. R. Marpu, M. Pedergnana, M. D. Mura, J. A. Benediktsson, and L. Bruzzone, “Automatic generation of standard deviation attribute profiles for spectral–spatial classification of remote sensing data,” IEEE Geosci. Remote Sens. Lett., vol. 10, no. 2, pp. 293–297, Mar. 2013.
[50] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed. Boston, MA, USA: Addison-Wesley, 2001.
[51] C. C. Chang and C. J. Lin, “LIBSVM: A library for support vector machines,” ACM Trans. Intell. Syst. Technol., vol. 2, no. 3, pp. 27:1–27:27, Apr. 2011. [Online]. Available: http://www.csie.ntu.edu.tw/~cjlin/libsvm

Xudong Kang (S’13) received the B.Sc. degree from Northeast University, Shenyang, China, in 2007. He is currently working toward the Ph.D. degree in electrical engineering at Hunan University, Changsha, China.
He is currently a Visiting Ph.D. student in electrical engineering at the University of Iceland, Reykjavik, Iceland. He is engaged in image fusion, image superresolution, pansharpening, and hyperspectral image classification.

Shutao Li (M’07) received the B.Sc., M.Sc., and Ph.D. degrees in electrical engineering from Hunan University, Changsha, China, in 1995, 1997, and 2001, respectively.
In 2001, he joined the College of Electrical and Information Engineering, Hunan University. From May 2001 to October 2001, he was a Research Associate with the Department of Computer Science, Hong Kong University of Science and Technology, Kowloon, Hong Kong. From November 2002 to November 2003, he was a Postdoctoral Fellow with the Royal Holloway College, University of London, Egham, U.K., working with Prof. J.-S. Taylor. From April 2005 to June 2005, he was a Visiting Professor with the Department of Computer Science, Hong Kong University of Science and Technology. He is currently a Full Professor with the College of Electrical and Information Engineering, Hunan University. He has authored or coauthored more than 160 refereed papers. His professional interests are information fusion, pattern recognition, and image processing.
Jón Atli Benediktsson (S’84–M’90–SM’99–F’04) received the Cand.Sci. degree in electrical engineering from the University of Iceland, Reykjavik, Iceland, in 1984 and the M.S.E.E. and Ph.D. degrees in electrical engineering from Purdue University, West Lafayette, IN, USA, in 1987 and 1990, respectively.
He is currently the Pro Rector of Academic Affairs and a Professor of electrical and computer engineering with the University of Iceland. He is a cofounder of the biomedical startup company Oxymap. His research interests are in remote sensing, image analysis, pattern recognition, biomedical analysis of signals, and signal processing, and he has published extensively in these fields.
Prof. Benediktsson was the 2011–2012 President of the IEEE Geoscience and Remote Sensing Society (GRSS), and he has been on the GRSS Administrative Committee since 2000. He is a Fellow of SPIE. He is a member of Societas Scientiarum Islandica and Tau Beta Pi. He was the Editor of the IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING (TGRS) from 2003 to 2008, and he has served as an Associate Editor of TGRS since 1999 and of the IEEE GEOSCIENCE AND REMOTE SENSING LETTERS since 2003. He received the Stevan J. Kristof Award from Purdue University in 1991 as an outstanding graduate student in remote sensing. In 1997, he was the recipient of the Icelandic Research Council’s Outstanding Young Researcher Award; in 2000, he was granted the IEEE Third Millennium Medal; in 2004, he was a corecipient of the University of Iceland’s Technology Innovation Award; in 2006, he received the yearly research award from the Engineering Research Institute, University of Iceland; and in 2007, he received the Outstanding Service Award from the GRSS. He was a corecipient of the 2012 IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING Best Paper Award.
