Article

Hyperspectral Anomaly Detection Based on Spectral Similarity Variability Feature

1 School of Physics and Electronic Information, Yantai University, Yantai 264005, China
2 Shandong Yuweng Information Technology Co., Ltd., Yantai 264005, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(17), 5664; https://doi.org/10.3390/s24175664
Submission received: 14 July 2024 / Revised: 25 August 2024 / Accepted: 26 August 2024 / Published: 30 August 2024
(This article belongs to the Special Issue Advanced Optical Sensors Based on Machine Learning)

Abstract: In traditional methods for hyperspectral anomaly detection, spectral feature mapping is used to map hyperspectral data to a high-level feature space so that different ground objects become more easily distinguishable. However, the uncertainty of the mapping direction makes the mapped features ineffective in distinguishing anomalous targets from the background. To address this problem, a hyperspectral anomaly detection algorithm based on the spectral similarity variability feature (SSVF) is proposed. First, the high-dimensional similar neighborhoods are fused into similar features using autoencoder (AE) networks; the SSVF is then obtained using a residual autoencoder. Finally, the detection result of the SSVF is obtained using the Reed-Xiaoli (RX) detector. Compared with the best-performing comparison algorithm, the overall detection accuracy (AUCODP) of the SSVFRX algorithm is increased by 0.2106. The experimental results show that the SSVF has great advantages in both highlighting anomalous targets and improving separability between different ground objects.

1. Introduction

Hyperspectral remote sensing image processing technology is a branch of the signal processing field, and many signal processing methods provide theoretical and technical support for it. Exploiting the characteristics of hyperspectral remote sensing images, remarkable progress has been made in hyperspectral image classification [1,2,3], unmixing [4,5,6], super-resolution mapping [7,8], and target detection [9]. In recent years, many experts and scholars have systematically reviewed different types of hyperspectral remote sensing image processing methods. Examples include hyperspectral spatial enhancement techniques or super-resolution (SR) [10], the application of machine learning to lithology mapping and mineral exploration [11], and the application of deep learning to anomaly detection [12]. These systematic reviews provide important reference and guidance for further research and development in hyperspectral remote sensing image processing, and strongly promote the continuous innovation and improvement of related technologies.
Although hyperspectral images are rich in spectral and spatial information, they still face various challenges, including the redundancy of high-dimensional data, contamination by spectral noise and atmospheric effects, mixed pixels, and the phenomena of different objects sharing the same spectrum and the same object exhibiting different spectra. Spectral dimension transformation maps hyperspectral images to a corresponding feature space through a feature processing method, which makes ground objects that are indistinguishable in the original feature space separable in the new feature space. In hyperspectral anomaly target detection, spectral dimension transformation can improve the separability between the background and the anomaly target. The most common feature processing methods are principal component analysis (PCA) [13], independent component analysis (ICA) [14] and nonlinear principal component analysis [15]. The essence of spectral dimension transformation is to obtain higher-level features of the original hyperspectral image by a mapping method and to improve the accuracy of anomaly detection by using its ability to improve the separability between different ground objects. The hyperspectral anomaly detection (HAD) of differential images [16] utilizes difference images to estimate background changes during the feature extraction stage, so as to suppress background signals and highlight anomaly signals. Fractional Fourier entropy [17] employs the fractional Fourier transform for pre-processing, then uses a space-frequency representation to obtain features from the intermediate region between the original spectrum and its Fourier transform. Unsupervised spectral mapping and feature selection [18] highlights anomaly targets by searching for the optimal feature subset from the candidate feature space while mapping high-dimensional features to a low-dimensional space using unsupervised neural networks.
In addition, research on HAD based on linear models has also found some success. A linear model is able to obtain the error term of the hyperspectral image: the hyperspectral image is mapped to a feature space of another dimension, the mapped features are re-projected to the original feature space by the inverse method, and the reconstruction error is taken as the anomaly score. Because normal samples are easier to reconstruct than anomaly samples, samples with higher reconstruction errors are considered anomalous targets. For example, the residuals between the reconstructed image and the original image are obtained by PCA projection reconstruction, and the projection parameters are updated over several iterations [19]. The work of [20] builds on [19], filtering out potential anomaly targets according to the error value of each iteration. The reconstruction probability algorithm of an autoencoder (AE) [21] is also a detection model that obtains reconstruction errors through feature mapping. The joint graph detection (JGD) [22] model considers both spectral and spatial features. Through the spectral sub-model, the reconstruction error between the original hyperspectral sensing image (HSI) and the feature image after the graph Fourier transform (GFT) is mapped to fractional Fourier entropy (FrFE), which enhances the anomaly detection capability and shows an advantage in distinguishing anomalies from the background. To solve the problem of PCA being sensitive to feature scales and outliers, robust PCA (RPCA) [23] decomposes data into low-rank and sparse matrices to enhance robustness to noise and outliers. RPCA integrating sparse and low-rank priors (RPCA-SL) is a newer variant that achieves a more precise separation by combining prior targets and is solved using a proximal gradient algorithm.
A discriminant reconstruction method based on spectral learning (SLDR) [24] first uses a spectral error map (SEM) to detect the anomaly, and then uses the spectral angle distance (SAD) to restrict the AE to follow a unit Gaussian distribution. The obtained SEM reflects well the spectral similarity between the original and reconstructed data. The mixture-of-Gaussian low-rank and sparse decomposition [25] decomposes the HSI into a low-rank background and sparse components, then infers the mixture-of-Gaussian model of the sparse components by variational Bayes, and finally calculates the anomaly score by Manhattan distance. Pixel-associate AE [26] uses super-pixel distances to build two representative dictionaries, and then obtains the hidden-layer expression of the similarity measure by an AE.
In HAD, spectral dimension transformation maps the background and anomaly targets of hyperspectral data to another feature space, so that background and anomaly targets that cannot be separated in the original feature space become separable, thus improving detection accuracy. However, the traditional feature mapping method maps both the background and the anomaly target to the same feature space, which cannot effectively highlight the anomaly target. The main factor behind this problem is the uncertainty of the mapping direction: it is difficult to separate the anomaly target from the background effectively by conventional spectral dimension transformation.
To solve this problem, hyperspectral anomaly detection based on the spectral similarity variability feature (SSVF) is proposed. First, an AE network is used to fuse high-dimensional similar neighborhoods into lower-dimensional similar features, which carry information similar to the neighborhood pixels and reduce the computational burden of the subsequent networks. The SSVF is then obtained using a residual autoencoder, essentially acquiring the error between the image itself and its similar neighbors. Finally, the RX detector is used to obtain the final detection result of the SSVF; the proposed algorithm is therefore called SSVFRX. In hyperspectral images, a background pixel and its similar pixels have a high degree of similarity and can be initially judged to belong to the same ground feature. In contrast, anomalous targets and their similar pixels have a low probability of belonging to the same ground feature. The similarity variability feature therefore allows most pixels of the same ground feature to be mapped in the same direction, while anomaly targets are mapped in the opposite direction.
This paper evaluates, through experiments, the superiority of the SSVF in enlarging the difference between anomaly targets and the background. Comparative experiments are used to judge the effect of introducing similar neighborhoods on enlarging this difference. The SSVF aims to resolve the uncertainty of the mapping direction in spectral dimension transformation for unsupervised network models by introducing a similarity difference value, thereby obtaining a mapping direction that enlarges the difference between anomaly targets and the background.

2. Materials and Methods

2.1. Experiment Data Description

The superiority of the SSVFRX algorithm was verified using seven hyperspectral experimental datasets. The detailed parameters of the experimental datasets are shown in Table 1, and the false-color images and their ground truth images are shown in Figure 1. Additionally, the following must be explained:
(1)
D1 and D2 are from the Remote Sensing and Image Processing Group (RSIPG) repository [27], captured at an altitude of 1200 m on a sunny day. D1 is the full image, while D2 is a cropped portion containing an anomaly. Both datasets have undergone residual stripe removal, and D1 has been further processed with noise whitening and partial spectral discarding.
(2)
D3 and D4 are from the San Diego Airport, with the anomaly target being aircraft.
(3)
D5 is from the Digital Imaging and Remote Sensing (DIRS) laboratory, which is part of the Chester F. Carlson Center for Imaging Science at the Rochester Institute of Technology.
(4)
The high-spectral datasets D6 and D7 are from the personal website of Xudong Kang, School of Electrical and Information Engineering, Hunan University. The original images were downloaded from the AVIRIS website [28]. The authors extracted 100 × 100 sub-images and applied a noise level estimation method to remove the noisy bands.
Figure 1. False-color image and target position of experimental data.
Table 1. Experimental dataset parameters.

| Dataset | Imaging Sensor | Collected Location | Spectral Range (μm) | Spectral Resolution (nm) | Spatial Resolution (m) | Size of Original Image (pixels) | Size of Sub-Image (pixels) | Original Number of Bands | Bands after Processing |
|---|---|---|---|---|---|---|---|---|---|
| D1 | VNIR-SIM.GA | Parking lot in suburban vegetated area | 0.40–1.00 | 1.2 | 0.6 | 375 × 450 | 375 × 450 | 511 | 127 |
| D2 | VNIR-SIM.GA | Parking lot in suburban vegetated area | 0.40–1.00 | 1.2 | 0.6 | 375 × 450 | 200 × 100 | 511 | 511 |
| D3 | AVIRIS | San Diego | 0.36–2.50 | 9.0 | 3.0 | 400 × 400 | 80 × 80 | 224 | 126 |
| D4 | AVIRIS | San Diego | 0.36–2.50 | 9.0 | 3.0 | 400 × 400 | 60 × 60 | 224 | 126 |
| D5 | ProSpecTIR-VS2 | Avon, NY | 0.39–2.45 | 5.0 | 1.0 | – | 120 × 80 | 360 | 360 |
| D6 | AVIRIS | Los Angeles | 0.36–2.50 | 9.0 | 7.1 | 100 × 100 | 100 × 100 | 224 | 205 |
| D7 | AVIRIS | Los Angeles | 0.36–2.50 | 9.0 | 7.1 | 100 × 100 | 100 × 100 | 224 | 205 |

2.2. Hyperspectral Anomaly Detection Based on Spectral Similar Variability Feature

The proposed algorithm is divided into three main steps: data pre-processing, similar feature fusion (SFF) and spectral similarity variability feature extraction. The overall flow chart is shown in Figure 2. Data pre-processing involves processing the original HSI by PCA and whitening. SFF refers to the fusion of similar features from multiple similar neighborhoods, using AE networks to obtain a low-dimensional feature representation of the same dimension as the original image. Spectral similarity variability feature extraction refers to the calculation of the difference between the similar features and the original features using a residual autoencoder network. Finally, the detection result is obtained by the RX detector.

2.2.1. Data Pre-Processing

In hyperspectral image processing, the pre-processing stage is crucial for improving data quality and the effectiveness of subsequent analysis. Before network training, the hyperspectral images are usually pre-processed, for example by dimensionality reduction and whitening.
The hyperspectral dataset is represented as $X = \{x^1, x^2, \dots, x^N\}$, where $x^i = (x^i_1, x^i_2, \dots, x^i_n)$, $x^i_j$ is the $j$th dimension of the $i$th sample, $N$ is the number of samples, and $n$ is the sample dimension.
The data pre-processing process is shown in Figure 3. Firstly, principal component analysis (PCA) is used to obtain the feature after reducing dimension X p , and then whitening is used to obtain the whitened features X w .
$$X_w = X_p / \sqrt{\lambda_i} = \Sigma_k^{T} X / \sqrt{\lambda_i}$$
where $\Sigma = \frac{1}{N} \sum_{i=1}^{N} x^i (x^i)^T$ is the covariance matrix, $\lambda_i$ is the $i$th eigenvalue of the covariance matrix, and $\Sigma_k$ consists of its first $k$ eigenvector columns.
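As a concrete illustration of this pre-processing step, the NumPy sketch below centers the data, projects it onto the top-$k$ principal components, and divides each component by the square root of its eigenvalue. Function and variable names are illustrative, not the authors' code:

```python
import numpy as np

def pca_whiten(X, k):
    """PCA projection followed by whitening (illustrative sketch).
    X: (N, n) array of N pixels with n spectral bands."""
    Xc = X - X.mean(axis=0)                  # center the data
    cov = Xc.T @ Xc / X.shape[0]             # covariance matrix Sigma
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:k]    # keep the top-k components
    lam, U = eigvals[order], eigvecs[:, order]
    Xp = Xc @ U                              # reduced features X_p
    Xw = Xp / np.sqrt(lam + 1e-12)           # whitened features X_w
    return Xw

# toy example: 500 "pixels" with 20 bands, reduced to 5 whitened features
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20)) @ rng.normal(size=(20, 20))
Xw = pca_whiten(X, k=5)
```

After whitening, each retained component has approximately unit variance, which removes second-order correlations before the autoencoder stage.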

2.2.2. Similar Feature Fusion Based on Autoencoder

Hyperspectral images are strongly high-dimensional, and sets of similar spectral features can be reconstructed nearly losslessly by an autoencoder. This process helps to further improve the separability between classes through the autoencoder's own nonlinear transformation, while reducing the training burden of the subsequent residual network. The similar feature fusion model is shown in Figure 4.
The Euclidean distance is used as the similarity measure to find the nearest neighboring samples in the sample set. The feature of each sample is represented as $x^i$ and its neighborhood as $S^i = \{S^i_1, S^i_2, \dots, S^i_Q\}$, the set of the $Q$ samples nearest to $x^i$ in the dataset. The specific process is as follows:
First, calculate the similarity, as follows:
$$d^i = \left\{ \left\| x^i - x^j \right\|^2 \right\}_{j=1}^{N}$$
where $d^i$ is the similarity set of the $i$th sample.
Then, the similarity set $d^i$ is sorted in ascending order, and the first $Q$ samples are selected as the similarity neighborhood set $S^i = \{S^i_1, S^i_2, \dots, S^i_Q\}$.
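The neighborhood-selection step above can be sketched as a brute-force nearest-neighbor search; this is illustrative code, not the authors' implementation:

```python
import numpy as np

def similar_neighborhoods(X, Q):
    """For each sample x^i, return the indices of its Q nearest samples
    (squared Euclidean distance), excluding x^i itself."""
    # pairwise squared Euclidean distances, shape (N, N)
    sq = np.sum(X**2, axis=1)
    d = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    np.fill_diagonal(d, np.inf)           # exclude the sample itself
    return np.argsort(d, axis=1)[:, :Q]   # indices of the Q most similar

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 8))
S_idx = similar_neighborhoods(X, Q=5)     # S^i = X[S_idx[i]]
```

For large images, a KD-tree or approximate nearest-neighbor search would replace the O(N²) distance matrix, but the selected neighborhoods are the same.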
The autoencoder is used to perform similar feature fusion. The training sample set can be represented as $S = \{S^1, S^2, \dots, S^N\}$. The network structure is shown in Figure 4. The network uses gradient descent to minimize the objective function, as follows:
$$J(\alpha, \beta) = \frac{1}{M} \sum_{i=1}^{M} J\left(\alpha, \beta; S^{(i)}, S^{(i)}\right) + \frac{\lambda}{2} \sum_{l=1}^{n_l - 1} \sum_{i=1}^{s_l} \sum_{j=1}^{s_{l+1}} \left( \alpha^{(l)}_{ji} \right)^2$$
where $\alpha, \beta$ are the network parameters, $M$ is the batch size, $S^{(i)}$ is the $i$th input similar sample, $\lambda$ is the weight decay coefficient, $n_l$ is the number of layers of the network, and $s_l$ is the number of nodes in layer $l$.
After the training, the fixed parameters and the expression of the hidden layer are obtained.
$$Y = f(\alpha \times S)$$
Finally, a nonlinear similar feature $Y$, which contains the similar information, is obtained.
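A minimal single-hidden-layer autoencoder matching the description above (reconstruction error plus an L2 weight-decay penalty, sigmoid hidden layer) can be sketched in NumPy as follows; the architecture and hyperparameters are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_fusion_ae(S, n_hidden, lr=0.1, weight_decay=1e-4, epochs=200, seed=0):
    """Fuse stacked neighborhood spectra S (shape N x Q*n) into
    n_hidden-dimensional similar features Y by minimizing
    reconstruction MSE plus an L2 weight-decay term."""
    rng = np.random.default_rng(seed)
    N, d = S.shape
    a1 = rng.normal(scale=0.1, size=(d, n_hidden))   # encoder weights (alpha)
    a2 = rng.normal(scale=0.1, size=(n_hidden, d))   # decoder weights
    for _ in range(epochs):
        H = sigmoid(S @ a1)            # hidden representation
        R = H @ a2                     # reconstruction of S
        err = R - S
        # gradients of 0.5*MSE + (lambda/2)*||weights||^2
        g2 = H.T @ err / N + weight_decay * a2
        g1 = S.T @ (err @ a2.T * H * (1 - H)) / N + weight_decay * a1
        a1 -= lr * g1
        a2 -= lr * g2
    return sigmoid(S @ a1)             # fused similar feature Y

rng = np.random.default_rng(2)
S = rng.uniform(size=(200, 40))        # e.g. Q = 5 neighbors of 8-band pixels
Y = train_fusion_ae(S, n_hidden=8)
```

Because the hidden layer is a sigmoid, the fused feature Y lies in (0, 1) componentwise; its dimension is chosen to match the original spectral dimension in the paper's pipeline.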

2.2.3. Spectral Similar Variability Feature

Although hyperspectral images are very rich in spectral information, owing to illumination, noise and other factors, the spectral information of pixels exhibits the phenomenon of 'same object, different spectrum' (that is, the spectrum of the same object varies). This difference is defined as spectral variation (SV), and the extracted spectral variation information is called the spectral variation feature. How to extract the spectral variation information, and how to use it to enhance anomaly detection performance, are the highlights of this section.
Every pixel has similar pixels in the global scope, and the spectral features combined from other similar pixels are called similar features (SF). Assuming that similar pixels belong to different spectra of the same category, the variation between them is called the spectral similarity variability feature (SSVF). The advantages of the SSVF in hyperspectral anomaly detection may lie in the following aspects. First, the distinct characteristics of the background and anomaly targets mean that anomaly targets, as outliers, show a large variation from their similar features. Second, as can be seen from the different spectral changes of different ground object types, the SSVF can to a large extent distinguish different ground object types in a scene.

2.2.4. Spectral Similar Variability Feature Extraction Based on Residual Autoencoder

In the similar feature fusion stage, a similar fusion feature Y with similar information of multiple neighboring pixels is obtained by using the autoencoder. In order to obtain the variability feature between the SFF and the original features, the residual autoencoder network is used to take the SFF as inputs and the original features as labels. The structure of the residual autoencoder network is shown in Figure 5, and the method of obtaining the SSVF is as follows:
First, the activation value of the network is obtained by forwarding propagation, as follows:
$$Z = f_2\left(\theta_2 \times f_1\left(\theta_1 \times Y\right)\right) + Y$$
where $f_1(x) = \frac{1}{1 + e^{-x}}$, $f_2(x) = x$, $\theta_1$ and $\theta_2$ are the parameters of the network, and $Y$ is the similar feature.
The purpose of the residual autoencoder is to obtain the error generated when samples in the similar feature space are mapped to the original feature space. The parameters $\theta_1, \theta_2$ are adjusted through back-propagation to minimize the cost function $J$ (the mean square error over the sample set):
$$J(\theta_1, \theta_2) = \frac{1}{2} \sum_i \left\| Z^i - X^i \right\|^2$$
where $Z^i$ represents the $i$th activation value and $X^i$ represents the $i$th original hyperspectral sample.
The difference between the activation value $Z$ and the original data $X$ is then used as the error for back-propagation to update the network parameters $\theta_1$ and $\theta_2$.
After the training is completed, the spectral similarity variability feature is obtained as follows:
$$E = Z - X$$
The detection result is obtained as follows:
$$R_E = \mathrm{RXdetector}(E)$$
where $\mathrm{RXdetector}(\cdot)$ represents the RX anomaly detection algorithm and $R_E$ represents the detection result of the feature set.
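The two remaining steps, the residual mapping and the RX scoring, can be sketched together. The RX detector below is the standard global Mahalanobis-distance formulation; the residual network is a minimal stand-in for the structure described above (one sigmoid hidden layer, linear output, skip connection), with illustrative hyperparameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rx_detector(E):
    """Global RX: Mahalanobis distance of each sample from the mean,
    using the feature covariance (ridge added for invertibility)."""
    mu = E.mean(axis=0)
    C = np.cov(E, rowvar=False) + 1e-6 * np.eye(E.shape[1])
    D = E - mu
    return np.einsum('ij,jk,ik->i', D, np.linalg.inv(C), D)

def ssvf_scores(Y, X, n0, lr=0.05, epochs=300, seed=0):
    """Sketch of the SSVF stage: train Z = sigmoid(Y @ t1) @ t2 + Y
    toward X (linear output, skip connection), then score the
    variability E = Z - X with the RX detector."""
    rng = np.random.default_rng(seed)
    N, n = X.shape
    t1 = rng.normal(scale=0.1, size=(n, n0))   # theta_1
    t2 = rng.normal(scale=0.1, size=(n0, n))   # theta_2
    for _ in range(epochs):
        H = sigmoid(Y @ t1)
        Z = H @ t2 + Y                         # residual (skip) connection
        err = (Z - X) / N                      # gradient of the MSE cost
        g2 = H.T @ err
        g1 = Y.T @ ((err @ t2.T) * H * (1 - H))
        t1 -= lr * g1
        t2 -= lr * g2
    Z = sigmoid(Y @ t1) @ t2 + Y
    return rx_detector(Z - X)                  # R_E = RX(E)

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 10))
Y = X + 0.1 * rng.normal(size=(300, 10))       # stand-in similar features
scores = ssvf_scores(Y, X, n0=6)
```

The skip connection means the network only has to learn the residual between the similar feature and the original spectrum, which is exactly the quantity E that the RX detector then scores.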
Figure 5. SSVF extraction model.

3. Experimental Result

3.1. Comparison Algorithm

In this experiment, related comparison algorithms were selected to verify the superiority of the SSVFRX algorithm. The global RX detector (GRXD) [29] is the most basic method in anomaly target detection and is widely used in a variety of anomaly detection fields. The PCA baseline applies GRXD to features extracted by PCA [13], the most commonly used feature extraction method. Principal component reconstruction error (PCRE) [19] is an anomaly detection method based on the residual (error) caused by PCA projection when reconstructing the original image. Anomaly detection based on autoencoder (ADAE) [21] detects anomaly targets through the residual of an autoencoder. Hyperspectral anomaly detection by fractional Fourier entropy (FrFE) [17] is an anomaly detection method based on feature extraction and selection. The low-rank and sparse decomposition model with a mixture of Gaussians (LSDMMoG) [25] constructs a hybrid Gaussian model from sparse components and a low-rank background. Information entropy estimation based on point-set topology (IEEPST) [30] combines point-set topology and information entropy theory to reveal data characteristics and data arrangement in topological space. Hyperspectral anomaly detection based on chessboard topology (CTAD) [31] uses a checkerboard topology to mine high-dimensional data features. Hyperspectral anomaly detection with guided autoencoder (GAED) [32] is a guided multi-layer autoencoder that reduces the feature representation of anomaly targets by providing feedback.

3.2. Parameter Selection

In order to improve the generalization of the model, the parameter selection phase focuses on choosing common hyperparameters that apply to most of the data. This subsection therefore mainly explains how the parameters are adjusted within a certain range.
(1)
The first parameter to be adjusted is Q (the number of nearest neighbors). Because this number directly affects the dimension of the input data in the similar feature fusion phase, Q should not be too large, so as not to harm computational efficiency. Taking D3 as an example, as shown in Table 2, the anomaly detection accuracy reaches its maximum when Q = 9. However, with Q = 9 and a dataset dimension of 511, the input data dimension would be as high as 4599, which would affect the computational efficiency of the algorithm. Therefore, Q is set to 5 at this stage.
(2)
In order to ensure the stability of detection results, when the network reaches the convergence state, the error of detection performance is small. The hyperparameters can be adjusted to control the degree and speed of network convergence and avoid falling into a local optimum in the following ways:
The main parameters are the learning rate (a), the learning rate decay (b), the maximum number of iterations (T) and the batch size. The learning rate decay controls the attenuation speed: according to experience, a = 0.1, and as the number of iterations increases, a(t) = b × a(t − 1). However, debugging showed that the algorithm converges slowly and easily falls into a local optimum when b < 1, so b = 1. The batch size is the number of samples used in one training step; it is related to the number of training samples, and a small dataset may only need one batch. Although a large batch size can improve training speed, it may also cause slow convergence, poor generalization and even over-fitting, while a small batch size requires data to be loaded more frequently. Experience shows that the batch size is usually 1% of the sample size (batch size = N × 1%). The number of iterations, T, depends on the degree and speed of convergence once the above parameters are determined and is generally set to 100.
(3)
n0 is the implicit layer dimension of the residual autoencoder. The mapping direction of the hyperspectral image is controlled by adjusting n0. Different mapping spaces affect the separability of different features. Based on experience, this is usually set to n − 20, where n is the original data dimension.
(4)
n1 is the dimension of the last layer of the residual network. As the algorithm needs to obtain the difference between similar fusion features and the original data, it must be consistent with the original image dimension.
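Collected in one place, the settings described in (1)-(4) amount to a small configuration rule. The helper below is hypothetical and simply encodes the stated choices:

```python
def training_config(N, n):
    """Hypothetical helper encoding the hyperparameter choices above.
    N: number of training samples; n: original spectral dimension."""
    return {
        'Q': 5,                                  # number of nearest neighbors
        'lr': 0.1,                               # a: initial learning rate
        'lr_decay': 1.0,                         # b: a(t) = b * a(t-1), so a stays fixed
        'epochs': 100,                           # T: maximum number of iterations
        'batch_size': max(1, round(0.01 * N)),   # about 1% of the sample size
        'n0': n - 20,                            # hidden dim of the residual AE
        'n1': n,                                 # output dim must match the bands
    }

cfg = training_config(N=10000, n=224)
```

These are the paper's empirical defaults; in practice each would be re-tuned per dataset.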

3.3. Experimental Results

The basic evaluation indexes adopted in this section include the three-dimensional receiver operating characteristic (3D ROC) [33], statistical separability analysis (SSA) [34] and the detection result image (DRI) [35]. The seven experimental datasets and the comparison algorithms described in Section 3.1 were used to verify the superiority of SSVFRX.
The 3D ROC is an extension of the traditional ROC curve, where the threshold τ is used as an independent variable to illustrate the three-dimensional relationship among PD, PF, and τ. Here, PD represents the probability of correctly identifying a target when the true value is indeed a target, also known as the probability of detection. PF represents the probability of incorrectly identifying a target when the true value is a non-target, also known as the probability of false alarm. Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12 display the 3D ROC curves for the seven sets of experimental data, along with their corresponding 2D projections. Based on these, new quantitative performance metrics are defined. AUC(D,F) is the area under the PD-versus-PF curve, while AUC(D,τ) is the area under the PD-versus-τ curve. Both metrics are positively correlated with target detection performance: the higher the value, the better the detection performance. AUC(F,τ) is the area under the PF-versus-τ curve and is negatively correlated with background suppression performance: the lower the value, the better the background suppression.
In addition, several comprehensive indicators are defined below. AUCTD = AUC(D,F) + AUC(D,τ), which represents target detectability (TD). AUCBS = AUC(D,F) − AUC(F,τ), which represents background suppressibility (BS). AUCSNPR = AUC(D,τ)/AUC(F,τ), which measures the signal-to-noise ratio by treating the target as the signal and the background as noise. AUCTDBS = AUC(D,τ) − AUC(F,τ), which represents TD within the background. AUCODP = AUC(D,F) + AUC(D,τ) − AUC(F,τ), which represents overall detection accuracy.
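A sketch of how these AUC metrics can be computed from a score vector and a binary ground-truth mask is given below. The threshold grid and min-max normalization are implementation assumptions, and AUCODP is computed as AUC(D,F) + AUC(D,τ) − AUC(F,τ), the form consistent with the values reported in the tables:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integral (avoids np.trapz, renamed in NumPy 2.0)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def threeD_roc_aucs(scores, truth, n_tau=200):
    """Compute the 3D-ROC AUC metrics: PD and PF are evaluated on a
    grid of thresholds tau over the normalized score range, and each
    AUC is a trapezoidal integral."""
    s = (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)
    taus = np.linspace(0.0, 1.0, n_tau)
    pd = np.array([(s[truth == 1] >= t).mean() for t in taus])
    pf = np.array([(s[truth == 0] >= t).mean() for t in taus])
    auc_df = abs(_trapz(pd, pf))    # AUC(D,F): area under PD vs PF
    auc_dt = _trapz(pd, taus)       # AUC(D,tau)
    auc_ft = _trapz(pf, taus)       # AUC(F,tau)
    return {
        'AUC(D,F)': auc_df, 'AUC(D,tau)': auc_dt, 'AUC(F,tau)': auc_ft,
        'AUCTD':    auc_df + auc_dt,
        'AUCBS':    auc_df - auc_ft,
        'AUCSNPR':  auc_dt / (auc_ft + 1e-12),
        'AUCTDBS':  auc_dt - auc_ft,
        'AUCODP':   auc_df + auc_dt - auc_ft,
    }

# synthetic example: 900 background scores around 0.2, 100 targets around 0.8
rng = np.random.default_rng(4)
truth = np.concatenate([np.zeros(900, dtype=int), np.ones(100, dtype=int)])
scores = np.concatenate([rng.normal(0.2, 0.05, 900), rng.normal(0.8, 0.05, 100)])
m = threeD_roc_aucs(scores, truth)
```

On this well-separated synthetic example, AUC(D,F) approaches 1 while AUC(F,τ) stays small, the pattern the tables reward.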
The aforementioned metrics across the 7 experimental datasets are presented in Table 3, Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9, where ↑ indicates that the value is proportional to the performance, ↓ indicates that the value is inversely proportional to the performance, and bold indicates the optimal solution. After analyzing the AUC results for these datasets, the following results are obtained:
(1)
Background suppressibility (BS): AUC(F,τ) and AUCBS correlate with BS capacity.
The SSVFRX model exhibits a number of characteristics in the BS experiments. On most experimental datasets, SSVFRX has the best AUCBS (comprehensive BS) performance. Despite its lower AUC(F,τ) performance under a single hypothesis, its comprehensive BS is strong. In addition, on datasets D4 and D5 the AUCBS of SSVFRX is second only to one model (a different model in each case), with very small gaps of 0.0013 and 0.0185, respectively, which suggests that SSVFRX has superior background suppression performance.
Table 3. AUC performance comparison of different methods on D1.

| D1 | AUC(D,F) | AUC(D,τ) | AUC(F,τ) | AUCTD | AUCBS | AUCSNPR | AUCTDBS | AUCODP |
|---|---|---|---|---|---|---|---|---|
| GRXD | 0.8688 | 0.0903 | 0.0148 | 0.9591 | 0.8540 | 6.1026 | 0.0755 | 0.9443 |
| PCA | 0.8794 | 0.0912 | 0.0127 | 0.9706 | 0.8667 | 7.1904 | 0.0785 | 0.9579 |
| PCRE | 0.7000 | 0.1924 | 0.1131 | 0.8924 | 0.5869 | 1.7016 | 0.0793 | 0.7793 |
| ADAE | 0.6972 | 0.1779 | 0.0134 | 0.8750 | 0.6838 | 13.3162 | 0.1645 | 0.8617 |
| FrFE | 0.8506 | 0.0846 | 0.0132 | 0.9352 | 0.8373 | 6.3933 | 0.0714 | 0.9219 |
| LSDMMoG | 0.8164 | 0.1960 | 0.0725 | 1.0124 | 0.7440 | 2.7039 | 0.1235 | 0.9399 |
| IEEPST | 0.6724 | 0.0529 | 0.0002 | 0.7253 | 0.6722 | 321.4102 | 0.0527 | 0.7251 |
| CTAD | 0.6146 | 0.1481 | 0.0043 | 0.7627 | 0.6103 | 34.4880 | 0.1438 | 0.7584 |
| GAED | 0.7070 | 0.2073 | 0.0410 | 0.9143 | 0.6660 | 5.0568 | 0.1663 | 0.8733 |
| SSVFRX | 0.8826 | 0.0992 | 0.0076 | 0.9818 | 0.8750 | 13.1049 | 0.0917 | 0.9743 |
Figure 6. Performance comparison of the 3D ROC of different methods on D1. (a) Three-dimensional ROC curves. (b) Corresponding 2D ROC curves (PD,PF). (c) Corresponding 2D ROC curves (PD, τ ). (d) Corresponding 2D ROC curves (PF, τ ).
Table 4. AUC performance comparison of different methods on D2.

| D2 | AUC(D,F) | AUC(D,τ) | AUC(F,τ) | AUCTD | AUCBS | AUCSNPR | AUCTDBS | AUCODP |
|---|---|---|---|---|---|---|---|---|
| GRXD | 0.5667 | 0.1394 | 0.0900 | 0.7061 | 0.4768 | 1.5490 | 0.0494 | 0.6161 |
| PCA | 0.5714 | 0.1326 | 0.0820 | 0.7039 | 0.4894 | 1.6173 | 0.0506 | 0.6220 |
| PCRE | 0.6666 | 0.0573 | 0.0087 | 0.7240 | 0.6580 | 6.6090 | 0.0487 | 0.7153 |
| ADAE | 0.7674 | 0.0082 | 0.0014 | 0.7756 | 0.7660 | 6.0602 | 0.0069 | 0.7743 |
| FrFE | 0.5903 | 0.0971 | 0.0513 | 0.6874 | 0.5389 | 1.8906 | 0.0457 | 0.6360 |
| LSDMMoG | 0.6461 | 0.1630 | 0.0931 | 0.8091 | 0.5530 | 1.7513 | 0.0699 | 0.7160 |
| IEEPST | 0.6519 | 0.0009 | 0.0009 | 0.6528 | 0.6510 | 1.0352 | 0.0000 | 0.6520 |
| CTAD | 0.5694 | 0.0372 | 0.0380 | 0.6066 | 0.5314 | 0.9796 | −0.0008 | 0.5686 |
| GAED | 0.6520 | 0.0284 | 0.0057 | 0.6804 | 0.6463 | 4.9857 | 0.0227 | 0.6747 |
| SSVFRX | 0.8725 | 0.1556 | 0.0432 | 1.0281 | 0.8293 | 3.6058 | 0.1125 | 0.9849 |
Figure 7. Performance comparison of the 3D ROC of different methods on D2. (a) Three-dimensional ROC curves. (b) Corresponding 2D ROC curves (PD,PF). (c) Corresponding 2D ROC curves (PD, τ ). (d) Corresponding 2D ROC curves (PF, τ ).
Table 5. AUC performance comparison of different methods on D3.

| D3 | AUC(D,F) | AUC(D,τ) | AUC(F,τ) | AUCTD | AUCBS | AUCSNPR | AUCTDBS | AUCODP |
|---|---|---|---|---|---|---|---|---|
| GRXD | 0.8227 | 0.0858 | 0.0555 | 0.9086 | 0.7673 | 1.5466 | 0.0303 | 0.8531 |
| PCA | 0.8170 | 0.0882 | 0.0574 | 0.9052 | 0.7596 | 1.5355 | 0.0307 | 0.8478 |
| PCRE | 0.7546 | 0.0137 | 0.0101 | 0.7684 | 0.7446 | 1.3665 | 0.0037 | 0.7583 |
| ADAE | 0.7855 | 0.0238 | 0.0143 | 0.8092 | 0.7711 | 1.6623 | 0.0095 | 0.7949 |
| FrFE | 0.6637 | 0.3039 | 0.2909 | 0.9676 | 0.3727 | 1.0445 | 0.0129 | 0.6766 |
| LSDMMoG | 0.7368 | 0.4303 | 0.3719 | 1.1671 | 0.3649 | 1.1571 | 0.0584 | 0.7952 |
| IEEPST | 0.7141 | 0.0332 | 0.0122 | 0.7473 | 0.7019 | 2.7111 | 0.0210 | 0.7351 |
| CTAD | 0.8079 | 0.1436 | 0.0467 | 0.9515 | 0.7612 | 3.0775 | 0.0970 | 0.9049 |
| GAED | 0.7122 | 0.0379 | 0.0258 | 0.7502 | 0.6864 | 1.4692 | 0.0121 | 0.7244 |
| SSVFRX | 0.9495 | 0.0665 | 0.0148 | 1.0161 | 0.9347 | 4.4901 | 0.0517 | 1.0012 |
Figure 8. Performance comparison of the 3D ROC of different methods on D3. (a) Three-dimensional ROC curves. (b) Corresponding 2D ROC curves (PD,PF). (c) Corresponding 2D ROC curves (PD, τ ). (d) Corresponding 2D ROC curves (PF, τ ).
Table 6. AUC performance comparison of different methods on D4.

| D4 | AUC(D,F) | AUC(D,τ) | AUC(F,τ) | AUCTD | AUCBS | AUCSNPR | AUCTDBS | AUCODP |
|---|---|---|---|---|---|---|---|---|
| GRXD | 0.8139 | 0.0539 | 0.0296 | 0.8678 | 0.7842 | 1.8206 | 0.0243 | 0.8382 |
| PCA | 0.8168 | 0.0651 | 0.0369 | 0.8819 | 0.7799 | 1.7657 | 0.0283 | 0.8450 |
| PCRE | 0.7127 | 0.0321 | 0.0242 | 0.7448 | 0.6885 | 1.3285 | 0.0079 | 0.7207 |
| ADAE | 0.9417 | 0.0662 | 0.0101 | 1.0080 | 0.9316 | 6.5373 | 0.0561 | 0.9979 |
| FrFE | 0.9237 | 0.2895 | 0.0556 | 1.2132 | 0.8680 | 5.2037 | 0.2339 | 1.1575 |
| LSDMMoG | 0.7801 | 0.4411 | 0.3899 | 1.2213 | 0.3902 | 1.1313 | 0.0512 | 0.8313 |
| IEEPST | 0.8726 | 0.0666 | 0.0149 | 0.9392 | 0.8578 | 4.4808 | 0.0517 | 0.9243 |
| CTAD | 0.9335 | 0.4497 | 0.1232 | 1.3832 | 0.8103 | 3.6514 | 0.3266 | 1.2600 |
| GAED | 0.9048 | 0.2292 | 0.0123 | 1.1340 | 0.8925 | 18.6209 | 0.2169 | 1.1217 |
| SSVFRX | 0.9653 | 0.2759 | 0.0350 | 1.2412 | 0.9303 | 7.8855 | 0.2409 | 1.2062 |
Figure 9. Performance comparison of the 3D ROC of different methods on D4. (a) Three-dimensional ROC curves. (b) Corresponding 2D ROC curves (PD,PF). (c) Corresponding 2D ROC curves (PD, τ ). (d) Corresponding 2D ROC curves (PF, τ ).
Table 7. AUC performance comparison of different methods on D5.
| D5 | AUC(D,F) | AUC(D,τ) | AUC(F,τ) | AUCTD | AUCBS | AUCSNPR | AUCTDBS | AUCODP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GRXD | 0.9332 | 0.3309 | 0.0989 | 1.2641 | 0.8342 | 3.3450 | 0.2320 | 1.1652 |
| PCA | 0.9675 | 0.1913 | 0.0099 | 1.1589 | 0.9576 | 19.2935 | 0.1814 | 1.1489 |
| PCRE | 0.9652 | 0.1572 | 0.0088 | 1.1223 | 0.9564 | 17.9522 | 0.1484 | 1.1136 |
| ADAE | 0.9703 | 0.1076 | 0.0054 | 1.0779 | 0.9650 | 20.1104 | 0.1022 | 1.0726 |
| FrFE | 0.8675 | 0.3510 | 0.1238 | 1.2185 | 0.7437 | 2.8349 | 0.2272 | 1.0947 |
| LSDMMoG | 0.9309 | 0.2925 | 0.0781 | 1.2235 | 0.8528 | 3.7434 | 0.2144 | 1.1453 |
| IEEPST | 0.9885 | 0.2305 | 0.0024 | 1.2190 | 0.9861 | 96.8195 | 0.2281 | 1.2167 |
| CTAD | 0.9907 | 0.5718 | 0.0571 | 1.5625 | 0.9336 | 10.0140 | 0.5147 | 1.5054 |
| GAED | 0.9512 | 0.1424 | 0.0083 | 1.0936 | 0.9428 | 17.1093 | 0.1341 | 1.0852 |
| SSVFRX | 0.9968 | 0.3703 | 0.0292 | 1.3670 | 0.9676 | 12.6855 | 0.3411 | 1.3379 |
(2) Target detectability (TD): AUC(D,F), AUC(D,τ), AUCTD and AUCTDBS represent the TD in different cases.
Combining the detection results in Table 3, Table 4, Table 5, Table 6 and Table 7, the SSVFRX model achieves the best AUC(D,F) performance on all experimental data. However, SSVFRX generally performs worse on the single-hypothesis AUC(D,τ), which may reflect limited target detectability under certain threshold conditions.
The AUCTD of SSVFRX ranks 2nd, 1st, 2nd, 2nd, 2nd, 2nd, 1st, 3rd on D1~D7, respectively, indicating that its target detection performance is relatively stable across scenarios and strong in most cases. The AUCTDBS of SSVFRX ranks 5th, 1st, 3rd, 2nd, 2nd, 1st, 2nd on D1~D7, respectively, indicating that, in terms of target detection with background suppression, SSVFRX performs consistently and excels in most cases.
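For reference, all of the AUC metrics reported in Tables 3–9 can be derived from the three 2D ROC curves of the 3D ROC framework. The sketch below is a minimal illustration, assuming detection scores normalized to [0, 1]; the threshold grid and trapezoidal integration are implementation choices, not the paper's exact procedure:

```python
import numpy as np

def roc_auc_metrics(scores, labels, n_thresholds=500):
    """Derive the Tables 3-9 AUC metrics from the 3D ROC curves.

    scores: detection map normalized to [0, 1]; labels: binary ground truth.
    """
    scores = np.asarray(scores, dtype=float).ravel()
    labels = np.asarray(labels, dtype=bool).ravel()
    taus = np.linspace(0.0, 1.0, n_thresholds)
    pd = np.array([(scores[labels] >= t).mean() for t in taus])   # P_D(tau)
    pf = np.array([(scores[~labels] >= t).mean() for t in taus])  # P_F(tau)
    auc_df = -np.trapz(pd, pf)  # PD over PF; sign flipped since PF decreases
    auc_dt = np.trapz(pd, taus)
    auc_ft = np.trapz(pf, taus)
    return {
        "AUC(D,F)": auc_df,
        "AUC(D,tau)": auc_dt,
        "AUC(F,tau)": auc_ft,
        "AUC_TD": auc_df + auc_dt,             # target detectability
        "AUC_BS": auc_df - auc_ft,             # background suppression
        "AUC_SNPR": auc_dt / max(auc_ft, 1e-12),
        "AUC_TDBS": auc_dt - auc_ft,
        "AUC_ODP": auc_df + auc_dt - auc_ft,   # overall detection probability
    }
```

The derived quantities match the tables' columns, e.g. AUCTD = AUC(D,F) + AUC(D,τ) and AUCODP = AUC(D,F) + AUC(D,τ) − AUC(F,τ), which can be verified against any row above.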
Figure 10. Performance comparison of the 3D ROC of different methods on D5. (a) Three-dimensional ROC curves. (b) Corresponding 2D ROC curves (PD,PF). (c) Corresponding 2D ROC curves (PD, τ ). (d) Corresponding 2D ROC curves (PF, τ ).
Table 8. AUC performance comparison of different methods on D6.
| D6 | AUC(D,F) | AUC(D,τ) | AUC(F,τ) | AUCTD | AUCBS | AUCSNPR | AUCTDBS | AUCODP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GRXD | 0.8404 | 0.1841 | 0.0516 | 1.0245 | 0.7888 | 3.5691 | 0.1325 | 0.9729 |
| PCA | 0.9278 | 0.0988 | 0.0133 | 1.0266 | 0.9145 | 7.4488 | 0.0855 | 1.0133 |
| PCRE | 0.9348 | 0.1103 | 0.0091 | 1.0451 | 0.9257 | 12.1097 | 0.1012 | 1.0360 |
| ADAE | 0.8940 | 0.0531 | 0.0128 | 0.9471 | 0.8812 | 4.1437 | 0.0403 | 0.9343 |
| FrFE | 0.9441 | 0.1433 | 0.0241 | 1.0875 | 0.9200 | 5.9377 | 0.1192 | 1.0633 |
| LSDMMoG | 0.8420 | 0.2989 | 0.0946 | 1.1409 | 0.7474 | 3.1600 | 0.2043 | 1.0463 |
| IEEPST | 0.7970 | 0.0028 | 0.0012 | 0.7998 | 0.7959 | 2.3848 | 0.0016 | 0.7987 |
| CTAD | 0.7914 | 0.1991 | 0.0503 | 0.9905 | 0.7411 | 3.9597 | 0.1488 | 0.9402 |
| GAED | 0.8745 | 0.1209 | 0.0341 | 0.9954 | 0.8404 | 3.5434 | 0.0868 | 0.9613 |
| SSVFRX | 0.9767 | 0.2470 | 0.0227 | 1.2238 | 0.9541 | 10.9006 | 0.2244 | 1.2011 |
Figure 11. Performance comparison of the 3D ROC of different methods on D6. (a) Three-dimensional ROC curves. (b) Corresponding 2D ROC curves (PD,PF). (c) Corresponding 2D ROC curves (PD, τ ). (d) Corresponding 2D ROC curves (PF, τ ).
(3) Overall detection accuracy: AUCODP represents the overall detection accuracy.
The overall detection results show that the SSVFRX model has the higher AUCODP score on most of the datasets, revealing its advantage in overall detection accuracy. Notably, the AUCODP of SSVFRX is only 0.054, 0.1675 and 0.1374 lower than that of CTAD on D4, D5 and D7, respectively, and even there SSVFRX remains the best-performing model apart from CTAD. This shows that SSVFRX has better overall detection performance than the other global detection methods and outperforms the local detection methods on most datasets.
SSA is used to assess the separability of the anomaly target and the background. The red box indicates the range of values for the anomaly target and the green box indicates the range of values for the background. The distance between the lower limit of the red box and the upper limit of the corresponding green box reflects the degree of separability between the anomaly target and the background. A larger distance represents a higher degree of separability between the anomaly target and the background, or, in other words, a more prominent anomaly target. The height of the green box represents the degree of background suppression, and the smaller the height, the higher the degree of background suppression. As shown in Figure 13, SSVFRX can significantly improve the separability between the background and the anomaly target and suppress the background. In particular, in datasets D4, D5 and D7, SSVFRX has a lower degree of separability than CTAD, but a higher degree of background suppression.
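The box-based reading described above can be computed numerically. The sketch below is illustrative only: the quartile choices for the box edges are assumptions, not the paper's exact SSA definition.

```python
import numpy as np

def ssa_stats(scores, labels):
    """Illustrative SSA-style box statistics (quartile box edges assumed).

    separability: gap between the anomaly box's lower edge and the
    background box's upper edge (larger -> more prominent anomalies).
    bg_height: background box height (smaller -> stronger suppression).
    """
    scores = np.asarray(scores, dtype=float).ravel()
    labels = np.asarray(labels, dtype=bool).ravel()
    anomaly_lower = np.percentile(scores[labels], 25)
    bg_lower, bg_upper = np.percentile(scores[~labels], [25, 75])
    return {"separability": anomaly_lower - bg_upper,
            "bg_height": bg_upper - bg_lower}
```

A detector that both highlights anomalies and suppresses the background yields a large `separability` together with a small `bg_height`, which is the pattern Figure 13 shows for SSVFRX.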
Table 9. AUC performance comparison of different methods on D7.
| D7 | AUC(D,F) | AUC(D,τ) | AUC(F,τ) | AUCTD | AUCBS | AUCSNPR | AUCTDBS | AUCODP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GRXD | 0.9692 | 0.1461 | 0.0437 | 1.1153 | 0.9255 | 3.3406 | 0.1024 | 1.0716 |
| PCA | 0.9672 | 0.1170 | 0.0320 | 1.0842 | 0.9352 | 3.6581 | 0.0850 | 1.0522 |
| PCRE | 0.9645 | 0.1315 | 0.0390 | 1.0960 | 0.9255 | 3.3686 | 0.0924 | 1.0569 |
| ADAE | 0.9016 | 0.1080 | 0.0166 | 1.0096 | 0.8850 | 6.5042 | 0.0914 | 0.9930 |
| FrFE | 0.9663 | 0.1168 | 0.0281 | 1.0831 | 0.9382 | 4.1516 | 0.0887 | 1.0550 |
| LSDMMoG | 0.9509 | 0.3805 | 0.1843 | 1.3314 | 0.7665 | 2.0644 | 0.1962 | 1.1471 |
| IEEPST | 0.8584 | 0.0239 | 0.0017 | 0.8822 | 0.8567 | 14.2165 | 0.0222 | 0.8806 |
| CTAD | 0.9575 | 0.4095 | 0.0424 | 1.3670 | 0.9152 | 9.6661 | 0.3671 | 1.3246 |
| GAED | 0.8129 | 0.0865 | 0.0360 | 0.8994 | 0.7769 | 2.4027 | 0.0505 | 0.8634 |
| SSVFRX | 0.9775 | 0.2322 | 0.0224 | 1.2096 | 0.9550 | 10.3460 | 0.2097 | 1.1872 |
Figure 12. Performance comparison of the 3D ROC of different methods on D7. (a) Three-dimensional ROC curves. (b) Corresponding 2D ROC curves (PD,PF). (c) Corresponding 2D ROC curves (PD, τ ). (d) Corresponding 2D ROC curves (PF, τ ).
Figure 13. Comparison of SSA of different methods on different datasets.
DRI is a two-dimensional flat view that uses color depth to represent anomalies. As shown by the legend in Figure 14, the value represents the probability that a sample is an anomaly. The DRI contains spatial information that can be used to observe differences between categories, including the anomaly target and the background. As can be seen from Figure 14, the contours between the anomaly target and the background are clearer and more separable for SSVFRX than for the other comparison algorithms.
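The color depth of a DRI comes from the normalized detection scores. A minimal sketch of that rescaling follows; min-max scaling is an assumption about how the maps were normalized, not a detail stated in the text:

```python
import numpy as np

def to_dri(score_map):
    """Rescale a raw 2D detection map to [0, 1] so that color depth can be
    read as an anomaly probability (min-max scaling is an assumed choice)."""
    s = np.asarray(score_map, dtype=float)
    span = s.max() - s.min()
    return (s - s.min()) / span if span > 0 else np.zeros_like(s)
```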
As the running times in Table 10 show, the SSVFRX algorithm incurs a higher computational cost than the other compared algorithms. This is mainly because the algorithm is sensitive to the data dimension and has a complex structure containing two deep learning networks. Consequently, the runtime increases significantly when dealing with the higher-dimensional dataset (D2).

4. Discussion

The hypothesis is that SSVFRX produces a greater degree of difference between the background and the anomaly target, which leads to better detection accuracy. The experimental results confirm this hypothesis. Theoretical analysis suggests that the superiority of the SSVFRX algorithm may stem from the following reasons.
First, through network training, features of the same type are more likely to be mapped in the same direction. Second, the similar neighborhoods of an anomaly target tend to differ from the target itself to a greater extent. Third, the trained model tends to fit the characteristics of the majority of the data, so the errors arising from the small number of anomaly targets account for a lower proportion of the back-propagated loss.
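The final scoring stage named in the method, an RX detector applied to the extracted features, follows the classical global RX formulation (Mahalanobis distance of each pixel from the scene statistics). The sketch below is a generic global RX, assuming the SSVF arrives as a (rows, cols, bands) feature cube; it is not the authors' exact implementation:

```python
import numpy as np

def global_rx(features):
    """Global RX detector: Mahalanobis distance of each pixel from the
    scene mean, applied to a (rows, cols, bands) feature cube."""
    h, w, b = features.shape
    x = features.reshape(-1, b).astype(float)
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    cov_inv = np.linalg.pinv(cov)  # pseudo-inverse for numerical stability
    d = x - mu
    scores = np.einsum("ij,jk,ik->i", d, cov_inv, d)
    return scores.reshape(h, w)
```

Because the covariance is estimated globally, a pixel whose feature vector is an isolated point, as the third reason above describes anomaly targets, receives a large score, while background pixels mapped toward their similar neighborhoods score low.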
The possible reasons for the superiority of SSVFRX are analyzed based on the following experimental results:
First, the 3D ROC detection results (Table 3, Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9 and Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12) indicate that SSVFRX significantly improves both target detectability and background suppression in most cases, although on datasets D4, D5, and D7 its target detectability is relatively low compared with CTAD. A similar trend is observed in the SSA analysis (Figure 13): except on D4, D5, and D7, SSVFRX enhances the separability between the background and the anomaly target more effectively. A possible reason is that CTAD is a local anomaly detection method, which gives it some advantage in highlighting anomalies over the global detection scheme of SSVFRX. However, anomalies are defined relative to a background, and an anomaly in the global scope does not necessarily remain an anomaly in the local scope, and vice versa. Therefore, CTAD exhibits relatively weaker background suppression, as evidenced by its lower performance than SSVFRX on datasets D4, D5, and D7 (Table 6, Table 7 and Table 9). This is further validated by the detection results in Figure 14: comparing CTAD and SSVFRX, there are clearly more false detections in the CTAD background, while the background in SSVFRX is cleaner. A possible reason is that the background suppressed by CTAD is not the global background, so the background is easily mistaken for an anomaly.
Second, the DRI (Figure 14) shows that SSVFRX obtains a clearer contour of the anomaly target, representing better separation between the anomaly target and the background. It can be inferred that SSVFRX increases the difference between the background and the anomaly target.
Figure 14. Detection results of different methods on different datasets.
There are also clearer contours in the background, but they are shallower than those of the anomaly targets. A possible reason is that SSVFRX maps samples of different categories toward the direction of their similar neighborhoods. An anomaly target is an isolated point, so it differs greatly from its similar pixels, whereas the other ground-object categories in the background differ only slightly from their similar pixels. This is why SSVFRX can suppress the background better.
Third, the running time of the SSVFRX algorithm (Table 10) shows a relatively high computational cost. Nevertheless, it demonstrates significant advantages in key aspects such as detection accuracy, background suppression, and anomaly target highlighting. In practical applications, the algorithm's performance and efficiency must be balanced according to specific requirements; for scenarios where real-time processing is not critical but high detection accuracy is, the advantages of SSVFRX may far outweigh its runtime drawbacks. Moreover, the computational cost can be reduced. First, dimensionality reduction can be considered: the comparison between D1 and D2 shows that, when the sample size is large, a lower spectral dimension reduces the computational cost. Second, parallel computing or GPU acceleration techniques can be utilized to enhance the algorithm's execution speed.
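The dimensionality-reduction remedy mentioned above could be prototyped with a plain PCA band reduction applied before training. This is a sketch under stated assumptions: the component count and the SVD route are illustrative choices, not part of the published method.

```python
import numpy as np

def reduce_bands(cube, n_components=30):
    """Project a (rows, cols, bands) hyperspectral cube onto its leading
    principal components to cut the spectral dimension before training."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(float)
    x = x - x.mean(axis=0)  # center each band
    # Principal axes from the SVD of the centered data; keep the leading ones
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return (x @ vt[:n_components].T).reshape(h, w, n_components)
```

Since both deep networks in the pipeline scale with the spectral dimension, shrinking the band count this way would directly cut per-epoch cost on high-dimensional scenes such as D2.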
The SSVFRX algorithm shows significant advantages in anomalous target detection, improving both the separability between the background and anomalous targets and the degree of background suppression, the latter in particular. These advantages stem mainly from the efficient mapping of different classes toward their similar samples. While local detection methods have advantages in some specific cases, the global SSVFRX is more advantageous in terms of background suppression.

5. Conclusions

SSVFRX is capable of capturing rich anomaly and difference information, effectively distinguishing different types of ground objects, and highlighting anomaly targets. Experiments show that SSVFRX improves target-background separability and background suppression at the same time. Its advantages are mainly reflected in two aspects. First, an anomaly target appears as an isolated point whose similar features differ from its original features; SSVFRX accurately captures such differences and improves the accuracy of anomaly detection. Second, SSVFRX maps the background toward a similar direction, enhancing background suppression, which improves detection capability and reduces the false alarm rate. However, there is still room for improvement in computational efficiency, such as optimizing the network structure, developing high-dimensional data processing techniques, exploring optimal parameter configurations, and leveraging parallel computing or GPU acceleration. Through continuous optimization, the efficiency and performance of SSVFRX can be further improved so that it plays a greater role in the field of hyperspectral anomaly detection.

Author Contributions

Conceptualization, X.L.; methodology, X.L.; software, X.L.; validation, X.L.; formal analysis, X.L.; investigation, X.L.; resources, X.L.; data curation, X.L.; writing—original draft preparation, W.S.; writing—review and editing, W.S.; funding acquisition, W.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Yantai University (WL22B221).

Data Availability Statement

The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.

Acknowledgments

The authors would like to thank the handling editors and the reviewers for providing valuable comments.

Conflicts of Interest

Xueyuan Li is employed by Shandong Yuweng Information Technology Co., Ltd. The authors declare no conflicts of interest.

Figure 2. The overall flow chart.
Figure 3. The flow chart of pre-processing.
Figure 4. Similar feature fusion based on autoencoder.
Table 2. The relationship between parameter k and AUC.
| k | 3 | 5 | 7 | 9 | 11 |
| --- | --- | --- | --- | --- | --- |
| AUC | 0.9414 | 0.9575 | 0.9603 | 0.9613 | 0.9583 |
Table 10. Running time comparison of different methods on different datasets.
| Time | GRXD | PCA | PCRE | ADAE | FrFE | LSDMMoG | IEEPST | CTAD | GAED | SSVFRX |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| D1 | 0.37 | 1.03 | 4.14 | 464.02 | 99.54 | 46.60 | 1.9057 | 285.80 | 256.48 | 280.65 |
| D2 | 0.76 | 12.54 | 104.63 | 2983.21 | 416.08 | 72.46 | 1.1473 | 127.72 | 69.08 | 7556.20 |
| D3 | 0.07 | 7.38 | 74.82 | 195.52 | 7.80 | 3.59 | 0.1850 | 9.26 | 10.81 | 461.94 |
| D4 | 0.03 | 2.44 | 12.44 | 79.78 | 4.41 | 1.56 | 0.1961 | 5.21 | 6.50 | 203.16 |
| D5 | 0.24 | 2.49 | 28.83 | 938.90 | 121.58 | 26.65 | 1.8438 | 90.73 | 28.10 | 2355.64 |
| D6 | 0.09 | 1.01 | 15.62 | 126.97 | 17.53 | 10.98 | 0.6469 | 23.78 | 22.48 | 1213.36 |
| D7 | 0.10 | 1.12 | 7.12 | 133.86 | 18.87 | 14.79 | 0.6713 | 23.95 | 22.30 | 1219.95 |