Article

Joint Texture Search and Histogram Redistribution for Hyperspectral Image Quality Improvement

1 Key Laboratory of Spectral Imaging Technology of Chinese Academy of Sciences, Xi’an Institute of Optics and Precision Mechanics of CAS, Xi’an 710119, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Sensors 2023, 23(5), 2731; https://doi.org/10.3390/s23052731
Submission received: 9 February 2023 / Revised: 27 February 2023 / Accepted: 28 February 2023 / Published: 2 March 2023
(This article belongs to the Special Issue Machine Learning Based 2D/3D Sensors Data Understanding and Analysis)

Abstract

Due to optical noise, electrical noise, and compression error, data from hyperspectral remote sensing equipment are inevitably contaminated by various kinds of noise, which seriously affect the applications of hyperspectral data. Therefore, it is of great significance to enhance the quality of hyperspectral imaging data. To guarantee spectral accuracy during data processing, band-wise algorithms are not suitable for hyperspectral data. This paper proposes a quality enhancement algorithm based on texture search and histogram redistribution, combining denoising and contrast enhancement. Firstly, a texture-based search algorithm is proposed to improve the accuracy of denoising by improving the sparsity of 4D block-matching clustering. Then, histogram redistribution and Poisson fusion are used to enhance spatial contrast while preserving spectral information. Synthesized noisy data from public hyperspectral datasets are used to quantitatively evaluate the proposed algorithm, and multiple criteria are used to analyze the experimental results. At the same time, classification tasks are used to verify the quality of the enhanced data. The results show that the proposed algorithm is satisfactory for hyperspectral data quality improvement.

1. Introduction

In the past few decades, hyperspectral imaging has become a significant approach in many fields, and hyperspectral image (HSI) processing has attracted extensive attention [1,2]. For example, in urban planning, surveying and mapping, agriculture, forestry, and disaster monitoring [3,4,5,6], the quantitative analysis capability of hyperspectral remote sensing imaging data provides a means to identify material properties in depth. However, due to the limitations of equipment hardware (optical and electrical noise), compression difficulties, and information loss during transmission, hyperspectral image data are subject to many degradations. Consequently, the visual effect and application value are seriously affected. Therefore, HSI quality improvement technology is one of the main issues to be solved on the road to scientific and technological innovation.
During the past decade, there has been relatively little research on hyperspectral quality improvement. Recently, more researchers have focused on hyperspectral denoising [7,8,9]. With the development of artificial intelligence, researchers often use neural networks to denoise hyperspectral data [10,11,12,13]. However, on actual test data, the denoising effect largely depends on the training samples and the generalization performance of the network, so the quality of newly obtained HSI still needs improvement. In addition, few scholars have studied the contrast enhancement of each band in HSI. On the other hand, in practical applications, high DN values (overexposed clouds) and low DN values (dark pixels, water bodies) with few pixels confine the valuable objects to a narrow dynamic range [14,15,16]. Even with linear stretching, the visualization quality is still low, significantly reducing user perception. Better visualization requirements for HSI have been proposed in the literature [17,18,19]. However, there is no further analysis of whether the spectral information is well preserved: the visualization effect is enhanced but spectral information is lost, which is not conducive to later data processing such as classification and detection.
According to the research status of HSI degradation, an HSI quality improvement algorithm, combining denoising and contrast enhancement, is proposed. The main contributions of this paper are summarized as follows:
(1)
Considering the noise interference in degraded HSI, we innovatively propose extracting the edge features of 3D similar-block aggregation to restore the spatial-spectral features of HSI by optimizing the sparsity of similar blocks.
(2)
Aiming at the poor visual effect caused by low spatial contrast, we innovatively propose adaptive-threshold-constrained histogram shrinkage combined with Poisson fusion to improve global and local contrast.
The rest of the paper is organized as follows. In Section 2, a quality improvement algorithm for degraded HSI is proposed. The experimental results are presented and discussed in Section 3. Section 4 summarizes this paper.

2. Materials and Methods

As mentioned in the introduction, current research lacks a robust method to improve HSI quality (denoising and contrast enhancement). The Block Matching 4D (BM4D) algorithm [20,21] provides a relatively stable denoising model. However, its block matching is not optimal, which affects the denoising performance and may decrease the dynamic range. An HSI enhancement algorithm based on texture search and histogram redistribution is proposed to solve these problems. The process flow chart of the algorithm is shown in Figure 1.
Firstly, due to the redundancy of hyperspectral bands, the Minimum Noise Fraction (MNF) [22,23,24,25] dimension reduction algorithm is utilized to obtain a single-channel feature map. Secondly, the direction and amplitude of the gradient are calculated on the characteristic graph. The non-maximum suppression method enhances the edge information to obtain more accurate texture. Thirdly, the obtained precise texture image is divided into an edge and non-edge region. In the non-edge region, similar blocks are searched vertically and horizontally. However, in the edge region, similar blocks are searched along the edge direction (see Section 2.1). Additionally, the intermediate denoising results are obtained by initial 4D block grouping, 4D hard threshold filtering, primary aggregation, secondary 4D block estimation, Wiener filtering, and secondary aggregation. Finally, the hyperspectral contrast and spatial detail information are improved by histogram redistribution and Poisson fusion under the unchanged spectral characteristics (see Section 2.2).

2.1. Texture-Based 4D Block Aggregation

Traditional BM4D searches a local square area along the X and Y directions, so the similar blocks it finds are not optimal. Hyperspectral remote sensing data of collected scenes such as farmland, rivers, and mountains have clear edge texture information, and better similar blocks can be found by searching along the scene texture. Therefore, the search method for BM4D similar blocks should be improved.
When the center of the reference block falls on a non-edge pixel, fixed-size image blocks are searched within a rectangular window in the horizontal and vertical directions to determine which image blocks are similar to the reference block.
When the center point of the reference block falls on a target edge pixel, 4D similar blocks are clustered along the gradient and directional characteristics of the edge. However, HSI contains noise in each band, which makes edge extraction difficult. To obtain a clear edge texture, MNF is first used to obtain the feature map. Then, the Canny algorithm extracts and marks the target edge textures $\{S_1, S_2, \dots, S_n\}$. For the corresponding subset of curves, expand a distance $l$ along the normal direction of each curve to obtain $\{L_1, L_2, \dots, L_n\}$ and the corresponding search areas $\{S_n \times L_n\}$. Finally, the 3D cubes with the smallest distance $L(RMSE\_SAM)$ are selected and integrated into a 4D matrix. The $L(RMSE\_SAM)$ distance is defined as follows:
$L(RMSE\_SAM) = \lambda_1 \sqrt{\frac{1}{m}\sum_{i=1}^{m}(y_i - \hat{y}_i)^2} + \lambda_2 \frac{1}{m}\sum_{i=1}^{m}\cos^{-1}\frac{y_i^T \hat{y}_i}{(y_i^T y_i)^{1/2}(\hat{y}_i^T \hat{y}_i)^{1/2}},$
where $y_i$ represents the DN value of the reference block at position $i$, and $\hat{y}_i$ represents the DN value of the similar block corresponding to the reference block at position $i$.

2.2. Preserving Spectral Information for Contrast Enhancement

Most band histograms lie in a narrow range, and even with linear stretching the contrast is still low. Therefore, remote sensing imaging often uses a 2% truncation stretch to enlarge the contrast; however, this approach loses much detail. Based on the fact that information entropy is maximal when the image histogram is uniformly distributed, we try to achieve maximum information entropy and dynamic range by redistributing the corresponding hyperspectral histogram while preserving the details of the image. The specific algorithm is as follows.
Firstly, process the HSI data band by band. Determine the convergence threshold $T_{band} = wh/2^N$ of the histogram redistribution and the initial search DN value $DN_{start} = \frac{1}{wh}\sum_{j=1}^{w}\sum_{i=1}^{h} f_b(i,j)$ in each band, where $w$ and $h$ are the image width and height and $N$ is the bit depth.
Then, starting from the initial value $DN_{start}$, the searched DN values are accumulated along the positive and negative directions of the X-axis. Judge whether the current cumulative DN count is close to the convergence threshold $T_{band}$. If so, record and update the current DN value; otherwise, continue to search and accumulate until all DN values in this band are processed.
Next, find the spatially connected regions of the merged pixels. Obtain the original DN values of each connected region and perform local remapping to restore more local contrast information. The local remapping maps the spatially related pixels of a merged region back in sequence along the merged gray value, stopping when the mapping exceeds the maximum or minimum gray value after shrinkage.
Finally, perform Poisson fusion on the local remapped and originally merged connected regions. As a result, it increases the contrast and spatial details without changing the spectral information.
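The first two steps above (threshold, initial DN value, and outward accumulation) can be sketched as follows. This reflects our reading of the text; the outward visiting order, the exact merge rule, and the omission of local remapping and Poisson fusion are simplifications.

```python
import numpy as np

def redistribute_histogram(band, n_bits=8):
    """Sketch of the per-band histogram redistribution: bins are visited
    outward from the mean DN value, and a new merged gray level is opened
    each time the accumulated count reaches T_band = w*h / 2^N."""
    h, w = band.shape
    levels = 2 ** n_bits
    t_band = (w * h) / levels                  # convergence threshold T_band
    hist = np.bincount(band.ravel(), minlength=levels)
    dn_start = int(round(band.mean()))         # initial search DN value
    # visit DN values outward from dn_start: start, start+1, start-1, ...
    order = [dn_start]
    for step in range(1, levels):
        if dn_start + step < levels:
            order.append(dn_start + step)
        if dn_start - step >= 0:
            order.append(dn_start - step)
    mapping = np.zeros(levels, dtype=int)
    acc, new_level = 0, 0
    for dn in order:
        mapping[dn] = new_level
        acc += hist[dn]
        if acc >= t_band:                      # bin group full: open a new level
            acc = 0
            new_level += 1
    return mapping[band]
```

A real implementation would additionally remap the merged levels monotonically, apply the local remapping within each connected region, and fuse the result with Poisson blending as described above.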

3. Results and Discussion

In this section, experiments on the proposed algorithm are performed and the experimental results are discussed. The environment and settings for the experiment are described in Section 3.1. The experimental datasets are then described in Section 3.2. The experimental results using the proposed algorithm are shown in Section 3.3. In Section 3.4, we perform a subjective visual comparison with traditional quality improvement algorithms and use five evaluation indicators for objective comparison. In Section 3.5, we classify quality-improved HSI and degraded HSI and further discuss the classification results.

3.1. Experimental Environment and Setting

A computer with 16 GB RAM and a 12th Gen Intel(R) Core (TM) i9-12900H 2.50 GHz processor running Windows 11 was used. The studies were performed in MATLAB R2022b. The Indian_Pines, PaviaU, and Salinas datasets were utilized for validation in this paper to estimate the effectiveness of the proposed algorithm.

3.2. Dataset Description

In order to prove the denoising and contrast enhancement performance of the proposed algorithm on degraded HSI, we add different levels of Gaussian noise to three public hyperspectral remote sensing datasets (Indian_Pines, Pavia_University, and Salinas) as experimental data. Meanwhile, to further prove the reliability of quality enhancement, we classify both the quality-enhanced HSI and the unprocessed HSI.
The Indian Pines dataset [26] is a hyperspectral image of Indian pine trees in Indiana, USA, imaged by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) in 1992. Its size is 145 × 145 pixels, its wavelength range is 0.4–2.5 μm, 200 bands are the object of study [27], and the spatial resolution is about 20 m. During imaging, the data are easily affected by atmospheric and other factors, resulting in noise interference and a low dynamic range, which makes subsequent classification difficult. There are 16 types of ground objects; the detailed category information is shown in Table 1.
The PaviaU dataset [28] is hyperspectral data obtained by the German Reflective Optics Spectrographic Imaging System (ROSIS-03) in Pavia, Italy, in 2003. The spectral imager is sensitive to 0.43–0.86 μm, continuously images 115 bands, and has a spatial resolution of 1.3 m. We eliminate 12 bands due to the influence of noise, leaving 103 spectral bands. The image size is 610 × 340, containing 42,776 object pixels and 164,624 background pixels. The object pixels comprise nine classes of ground objects, such as trees, asphalt roads, bricks, and meadows. The detailed category information is shown in Table 1.
The Salinas dataset [28] was also acquired with the AVIRIS imaging spectrometer, over the Salinas Valley in California, USA, with a spatial resolution of 3.7 m. After removing bands 108–112, 154–167, and 224, which are affected by atmospheric water vapor [27], 204 band images remain. The image size is 512 × 217, giving 111,104 pixels, of which 56,975 are background pixels and 54,129 can be used for classification. These pixels are divided into 16 categories, such as fallow and celery. The detailed category information is shown in Table 1.
The three public datasets used for the experiments are available at https://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes (accessed on 10 August 2021).

3.3. Experimental Result of Quality Improvement on Three Public Datasets

First, we add Gaussian noise (σ = 5) to the Indian Pines dataset to generate degraded data. Then, we use the proposed algorithm to conduct quality improvement experiments. Figure 2 shows the spatial visualization results of seven bands selected from the Indian Pines dataset of 200 bands. Figure 3 shows the spectral curve.
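The degradation step can be reproduced with a few lines of NumPy. This is a sketch: the random generator and seed are our choices, and a zeros cube stands in for the real 145 × 145 × 200 Indian Pines data.

```python
import numpy as np

def degrade(hsi, sigma=5.0, seed=0):
    """Generate degraded data by adding zero-mean Gaussian noise with the
    same standard deviation (sigma, in DN) to every band."""
    rng = np.random.default_rng(seed)
    return hsi.astype(float) + rng.normal(0.0, sigma, hsi.shape)

cube = np.zeros((145, 145, 200))   # stand-in for the Indian Pines cube
noisy = degrade(cube, sigma=5.0)
print(round(noisy.std(), 1))       # → 5.0
```

Because each band has a different mean brightness, the same absolute sigma acts as low noise in bright bands and high noise in dark bands, which is exactly the effect discussed for bands 10 and 170 below.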
The first row in Figure 2 is the original data of the Indian Pines dataset. The second row is the degraded HSI with Gaussian noise (σ = 5). The last row is the quality improvement result using the proposed algorithm.
Using the proposed method, a clean HSI can be recovered from the degraded HSI, as shown in the third row. The second row shows that the degraded HSI still has a low dynamic range; after applying the proposed algorithm, the contrast, spatial information, and noise interference show a noticeable improvement in visual effect. Interestingly, the original data of band 1 (first row of Figure 2a) already contains noise. After adding Gaussian noise (σ = 5), both the added noise and the unknown original noise are removed, demonstrating that the proposed algorithm has good denoising performance.
Although we add the same noise to each band, the brightness of each band differs, so the effective noise interference varies between bands. As shown in the second row of Figure 2 and Figure 3, due to the high average brightness of band 10, the added Gaussian noise (σ = 5) constitutes relatively low noise; for band 170, due to its low average brightness, the same noise is relatively high, which is equivalent to introducing different noise levels into the dataset. In either case, at a low noise level (band 10) or a high noise level (band 170), the data can be recovered well after quality improvement. Moreover, Figure 3 shows that the spectral curve of the improved data is close to the true spectrum. This confirms that the proposed algorithm not only performs denoising and contrast enhancement, but also preserves the spectral structure information well.
To perform quality improvement on the degraded PaviaU data, we add noise (σ = 5). As shown in Figure 4, we uniformly selected the original data, degraded images, and quality-improved images of bands 1, 10, 20, 30, 50, 80, and 100 for visual display. In the second row of Figure 4, the degraded images are visually disturbed by different degrees of noise. At the same time, both the original data (first row of Figure 4) and the degraded data have low overall contrast. As a result, it is difficult to interpret the various targets in the data, which seriously affects the user’s ability to interpret it. After quality enhancement with the proposed algorithm, the visual effect, spatial contrast, and brightness are effectively improved in bands 1, 10, and 20; compared with the degraded data, the various targets become visible, which aids interpretation. In bands 80 and 100, the overall brightness is higher than in the earlier bands, but the noise interference is more serious. The proposed algorithm can effectively denoise and enhance the image’s contrast while preserving its details and recovering the object information. Figure 4 shows that the proposed algorithm effectively improves the overall hyperspectral spatial quality.
Next, we further prove that the proposed algorithm does not affect the spectral information of ground objects while improving the spatial quality. We select the pixel (311,311) to extract the spectral information and display it in Figure 5. The blue curve is the spectral curve of the original data. The red curve is the degraded spectral curve. The orange curve is the quality-improved spectrum. The spectral profile after quality enhancement can effectively restore the spectral features of the ground target in each band. Zoomed in between band 70 and band 83, the improved spectral curve is very close to the real spectrum, consistent with the expected quality improvement.
We next perform quality enhancement on the degraded Salinas dataset. As shown in Figure 6, we selected the experimental results of Salinas data bands 1, 10, 40, 80, 120, 160, and 210 for visualization. The experimental results of the proposed algorithm show good visual quality. In band 1 (the first row of Figure 7a), the original data has striping noise in the transverse direction. The second row of Figure 7a adds Gaussian noise, and the effective information of this band is submerged in the mixed noise. Using the proposed algorithm, both the Gaussian and the stripe noise are removed well, and the valid information is recovered. Band 40 also shows a typical quality improvement: not only is the noise removed, but the contrast and brightness of the spatial information are also improved. The spectral information at position (151,151) is shown in Figure 7; the spectral profile after quality enhancement still maintains the inherent spectral properties of the target.
This section visualizes the degraded and quality-enhanced HSI results in terms of space and spectrum on three datasets, demonstrating that the proposed algorithm can improve HSI quality. Section 3.4 compares the proposed algorithm with related quality improvement algorithms and analyzes them from subjective and objective aspects to show that the proposed algorithm improves the overall results. Moreover, in Section 3.5, the quality-improved HSI is applied to a classification task, further demonstrating the effectiveness of quality improvement for subsequent applications.

3.4. Evaluation of Quality Improvement Compared with Other Methods

Quality improvement targets HSI with low contrast and noise interference. The classical HSI denoising algorithm is BM4D, and classical contrast enhancement includes linear stretching (LS) and histogram equalization (HE). Therefore, to further evaluate the proposed algorithm, it is compared with BM4D [20], BM4D combined with linear stretching (BM4D + LS) [20,29], and BM4D combined with histogram equalization (BM4D + HE) [20,29].
We perform the relevant comparative experiments on the three datasets, and the visualization results are presented in Figure 8. Figure 8a shows that the degraded data of Indian Pines in band 50 has ambiguous targets and moderate noise interference. The BM4D denoising algorithm removes the noise effectively, but the result has low contrast. After applying linear stretching to the BM4D result, the overall contrast is still low: a hazy appearance remains, as shown in the fourth column of Figure 8a, because some denoised pixels are too bright or too dark while the other pixels are confined to a narrow histogram range, leading to low-quality visualizations. The comparison algorithms show similar behavior on the PaviaU and Salinas datasets. The overall brightness is low with the BM4D algorithm on these two datasets, and after linear stretching the contrast is not improved because the maximum and minimum values of the denoised data limit the histogram distribution. Histogram equalization based on BM4D improves the contrast, but over-exposure occurs in local areas, leading to the loss of local information. Moreover, applying linear stretching or histogram equalization to each band breaks the spectral characteristics of the ground truth, as shown in Figure 9. Although BM4D + LS and BM4D + HE improve the spatial contrast in some bands, the target spectral information is affected, which hampers subsequent spectral tasks such as classification and recognition. The proposed quality improvement method not only improves the spatial visual effect, but also maintains the spectral characteristics of the target. Therefore, it is beneficial for subsequent HSI processing and other related tasks.
To compare these results, we use five indicators: Spectral Angle (SA) [10], Peak Signal to Noise Ratio (PSNR) [30], Structural Similarity (SSIM) [31], Brightness [30], and Contrast [30], to evaluate the experiment results.
$SA(x) = \cos^{-1}\frac{d^T x}{(d^T d)^{1/2}(x^T x)^{1/2}},$
where $d$ represents the spectral vector of the ground truth and $x$ represents the spectral vector after quality improvement.
$SSIM(x, y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)},$
where $\mu_x$ and $\mu_y$ represent the mean values of the ground truth and the quality-improved HSI, $\sigma_x$ and $\sigma_y$ represent their standard deviations, and $\sigma_{xy}$ denotes their covariance. $C_1$ and $C_2$ are set to 0.0001 and 0.0009, respectively.
$PSNR = 10 \times \log_{10}\frac{(2^n - 1)^2}{MSE},$
where $MSE$ represents the mean square error and $n$ represents the digitizing bit depth of the HSI.
$Brightness = \sum_{l=0}^{2^n-1} l \times p(l),$
$Contrast = \sum_{l=0}^{2^n-1} (l - Brightness)^2 \times p(l),$
where $l$ represents the gray value of the HSI and $p(l)$ represents the probability of gray level $l$.
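For reference, the five indicators can be sketched in Python as below. SSIM is computed globally over a single window here (a simplification of the usual sliding-window form), with the constants $C_1 = 0.0001$ and $C_2 = 0.0009$ from the text.

```python
import numpy as np

def spectral_angle(d, x):
    """SA between a ground-truth spectrum d and an improved spectrum x (radians)."""
    cos = np.dot(d, x) / (np.linalg.norm(d) * np.linalg.norm(x))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def psnr(ref, img, n_bits=8):
    """PSNR with peak (2^n - 1) and mean square error over the band."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return 10.0 * np.log10((2 ** n_bits - 1) ** 2 / mse)

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Single-window SSIM (global means/variances instead of sliding windows)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def brightness_contrast(img, n_bits=8):
    """Brightness = mean gray level under p(l); Contrast = its variance."""
    levels = 2 ** n_bits
    p = np.bincount(img.ravel(), minlength=levels) / img.size
    l = np.arange(levels)
    bright = float(np.sum(l * p))
    contrast = float(np.sum((l - bright) ** 2 * p))
    return bright, contrast
```

These definitions make the reported trends easy to check: a lower SA means the improved spectrum is closer to the ground truth, while higher brightness and contrast indicate a wider, better-used dynamic range.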
Table 2 shows the results of five indicators on three public datasets, and Figure 10 shows the index variation diagram of each band in the Indian Pines dataset. The SA indicator on the Indian Pines dataset dropped from 9.740 to 2.891, confirming the stability of spectral information during quality improvement. Meanwhile, its spectral information is closer to the actual spectrum, proving that the quality improvement could provide a guarantee for subsequent classification tasks.
Similarly, the contrast index increased from 0.0345 to 0.0918, and the brightness index increased from 0.1963 to 0.2670. It can be seen from Figure 10d that the index of each band after enhancement is also higher than the ground truth. We prove the effectiveness of the histogram redistribution algorithm in Section 2.2. SSIM was improved from 0.4725 to 0.9408, proving that the proposed method recovered spatial structure information.

3.5. Applied to Classification Tasks

In order to show the influence of hyperspectral quality improvement on classification, we feed the quality-improved HSI and the degraded HSI from the three public datasets into a 3D CNN classification model [32].
The network model comprises four 3D convolutional layers with kernel sizes of 3 × 3 × 3, 1 × 1 × 3, and 1 × 1 × 2, with ReLU as the activation function; the final output passes through a fully connected layer. The patch size is set to 7, the number of epochs to 150, and the initial learning rate to 0.001. To further verify that quality enhancement can improve classification accuracy under different conditions, we use multiple data partition ratios for verification: the training/test dividing ratios are 50%, 25%, and 15%.
The average Kappa and classification accuracy of the ten experiments on three datasets are shown in Table 3. After using the proposed algorithm on three datasets, its classification accuracy is higher than degraded data. Specifically, the average classification accuracy of the quality enhanced by the proposed algorithm is 96.318%, 93.886%, and 89.556% on the three types of dividing ratios, respectively. Compared with the dataset with degraded HSI, the average classification accuracy is improved by 4.12%, 6.81%, and 13.93%, respectively. The Kappa coefficient improved under different partition ratios.
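The two scores reported in Table 3, overall accuracy and the Kappa coefficient, can be computed from predicted labels as follows (a standard implementation, not the authors' code):

```python
import numpy as np

def kappa_and_oa(y_true, y_pred):
    """Overall accuracy and Cohen's kappa from true and predicted labels."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    classes = np.unique(np.concatenate([y_true, y_pred]))
    idx = {c: i for i, c in enumerate(classes)}
    cm = np.zeros((len(classes), len(classes)))
    for t, p in zip(y_true, y_pred):
        cm[idx[t], idx[p]] += 1          # confusion matrix
    n = cm.sum()
    oa = np.trace(cm) / n                # observed agreement (overall accuracy)
    pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n ** 2   # chance agreement
    return oa, (oa - pe) / (1 - pe)
```

Kappa corrects the accuracy for chance agreement, which is why it is reported alongside overall accuracy when class sizes are unbalanced, as in these datasets.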
Table 4, Table 5 and Table 6 show the classification accuracy of the 16 categories in the Indian Pines dataset at a 50% dividing ratio, the eight categories in the PaviaU dataset, and the 16 categories in the Salinas dataset, respectively. In each category, the accuracy after quality improvement is higher than that of the degraded data.
Figure 11, Figure 12 and Figure 13 show the final classification results. The noise points of the image with improved quality in each category are significantly less than the classification results of degraded data. It also proves the effectiveness of data quality improvement for subsequent applications.

4. Conclusions

In this paper, a hyperspectral image quality improvement algorithm based on texture search and histogram redistribution is proposed. Firstly, a new clustering strategy that searches along the edge texture is proposed for 4D block clustering in denoising. Then, after the secondary aggregation, contrast enhancement is performed by histogram redistribution and Poisson fusion. There are two main advantages. On the one hand, we use the texture search strategy to make 3D block aggregation sparser and thereby improve the accuracy of the four-dimensional transformation. On the other hand, we exploit the histogram redistribution method to stabilize spectral information and enhance spatial contrast. Experimental results show that the final quality enhancement is significant and the spectral information is stably retained. More importantly, it effectively improves the accuracy of subsequent classification tasks.
This is a preliminary attempt to improve HSI quality by combining denoising and spatial contrast enhancement. However, the spatial-spectral quality has not been improved in some bands. In the future, more robust quality improvement algorithms should be considered to further improve HSI quality, for example, restoring more local spatial contrast information and improving spectral accuracy by learning spectral reflectance properties.

Author Contributions

Conceptualization, G.Z.; methodology, B.H. and J.C.; software, J.C.; validation, Y.W. and J.C.; writing—original draft preparation, J.C.; writing—review and editing, B.H. and H.L.; supervision, B.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Youth Innovation Promotion Association CAS, the National Natural Science Foundation of China (Grant No. 42176182), and the Foundation of Shaanxi Province (Grant No. 2023-YBGY-390).

Institutional Review Board Statement

Not Applicable.

Informed Consent Statement

Not Applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding authors.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study, in the collection, analyses, or interpretation of data, in the writing of the manuscript, or in the decision to publish the results.

References

  1. Islam, M.R.; Ahmed, B.; Hossain, M.A.; Uddin, M.P. Mutual Information-Driven Feature Reduction for Hyperspectral Image Classification. Sensors 2023, 23, 657.
  2. Zhang, X.; Cheng, X.; Xue, T.; Wang, Y. Linear Spatial Misregistration Detection and Correction Based on Spectral Unmixing for FAHI Hyperspectral Imagery. Sensors 2022, 22, 9932.
  3. Huang, S.-Y.; Mukundan, A.; Tsao, Y.-M.; Kim, Y.; Lin, F.-C.; Wang, H.-C. Recent Advances in Counterfeit Art, Document, Photo, Hologram, and Currency Detection Using Hyperspectral Imaging. Sensors 2022, 22, 7308.
  4. Md Noor, S.S.; Ren, J.; Marshall, S.; Michael, K. Hyperspectral Image Enhancement and Mixture Deep-Learning Classification of Corneal Epithelium Injuries. Sensors 2017, 17, 2644.
  5. Manian, V.; Alfaro-Mejía, E.; Tokars, R.P. Hyperspectral Image Labeling and Classification Using an Ensemble Semi-Supervised Machine Learning Approach. Sensors 2022, 22, 1623.
  6. Liu, Q.; Dong, Y.; Zhang, Y.; Luo, H. A Fast Dynamic Graph Convolutional Network and CNN Parallel Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5530215.
  7. Wei, X.; Xiao, J.; Gong, Y. Blind Hyperspectral Image Denoising with Degradation Information Learning. Remote Sens. 2023, 15, 490.
  8. Pang, L.; Gu, W.; Cao, X. TRQ3DNet: A 3D Quasi-Recurrent and Transformer Based Network for Hyperspectral Image Denoising. Remote Sens. 2022, 14, 4598.
  9. Zhuang, L.; Bioucas-Dias, J.M. Fast Hyperspectral Image Denoising and Inpainting Based on Low-Rank and Sparse Representations. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 730–742.
  10. Maffei, A.; Haut, J.M.; Paoletti, M.E.; Plaza, J.; Bruzzone, L.; Plaza, A. A Single Model CNN for Hyperspectral Image Denoising. IEEE Trans. Geosci. Remote Sens. 2020, 58, 2516–2529.
  11. Dou, H.-X.; Pan, X.-M.; Wang, C.; Shen, H.-Z.; Deng, L.-J. Spatial and Spectral-Channel Attention Network for Denoising on Hyperspectral Remote Sensing Image. Remote Sens. 2022, 14, 3338.
  12. Liu, W.; You, J.; Lee, J. A Classification-Aware HSI Denoising Model With Neural Adversarial Network. IEEE Geosci. Remote Sens. Lett. 2022, 19, 6013305.
  13. Sun, H.; Liu, M.; Zheng, K.; Yang, D.; Li, J.; Gao, L. Hyperspectral Image Denoising via Low-Rank Representation and CNN Denoiser. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 716–728.
  14. Wang, Y.; Niu, R.; Yu, X. Anisotropic Diffusion for Hyperspectral Imagery Enhancement. IEEE Sens. J. 2010, 10, 469–477.
  15. Zheng, Y.; Li, J.; Li, Y.; Cao, K.; Wang, K. Deep Residual Learning for Boosting the Accuracy of Hyperspectral Pansharpening. IEEE Geosci. Remote Sens. Lett. 2020, 17, 1435–1439.
  16. Pani, S.; Saifuddin, S.C.; Ferreira, F.I.M.; Henthorn, N.; Seller, P.; Sellin, P.J.; Stratmann, P.; Veale, M.C.; Wilson, M.D.; Cernik, R.J. High Energy Resolution Hyperspectral X-Ray Imaging for Low-Dose Contrast-Enhanced Digital Mammography. IEEE Trans. Med. Imaging 2017, 36, 1784–1795.
  17. Mahmood, Z.; Scheunders, P. Enhanced Visualization of Hyperspectral Images. In Proceedings of the 2010 IEEE International Geoscience and Remote Sensing Symposium, Honolulu, HI, USA, 25–30 July 2010; pp. 991–994.
  18. Fang, J.; Qian, Y. Local Detail Enhanced Hyperspectral Image Visualization. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 1092–1095.
  19. Erturk, S.; Suer, S.; Koc, H. A High-Dynamic-Range-Based Approach for the Display of Hyperspectral Images. IEEE Geosci. Remote Sens. Lett. 2014, 11, 2001–2004.
  20. Maggioni, M.; Katkovnik, V.; Egiazarian, K.; Foi, A. Nonlocal Transform-Domain Filter for Volumetric Data Denoising and Reconstruction. IEEE Trans. Image Process. 2013, 22, 119–133.
  21. Sun, L.; Jeon, B. Hyperspectral Mixed Denoising Via Subspace Low Rank Learning and BM4D Filtering. In Proceedings of the IGARSS 2018–2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 8034–8037.
  22. Nielsen, A.A. Kernel Maximum Autocorrelation Factor and Minimum Noise Fraction Transformations. IEEE Trans. Image Process. 2011, 20, 612–624.
  23. Zhao, B.; Gao, L.; Zhang, B. An Optimized Method of Kernel Minimum Noise Fraction for Dimensionality Reduction of Hyperspectral Imagery. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 48–51.
  24. Gómez-Chova, L.; Nielsen, A.A.; Camps-Valls, G. Explicit Signal to Noise Ratio in Reproducing Kernel Hilbert Spaces. In Proceedings of the 2011 IEEE International Geoscience and Remote Sensing Symposium, Vancouver, BC, Canada, 24–29 July 2011; pp. 3570–3573. [Google Scholar]
  25. Nielsen, A.A.; Vestergaard, J.S. Parameter Optimization in the Regularized Kernel Minimum Noise Fraction Transformation. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; pp. 370–373. [Google Scholar]
  26. Dong, Y.; Liu, Q.; Du, B.; Zhang, L. Weighted Feature Fusion of Convolutional Neural Network and Graph Attention Network for Hyperspectral Image Classification. IEEE Trans. Image Process. 2022, 31, 1559–1572. [Google Scholar] [CrossRef]
  27. Kang, X.; Li, S.; Fang, L.; Li, M.; Benediktsson, J.A. Extended Random Walker-Based Classification of Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2015, 53, 144–153. [Google Scholar] [CrossRef]
  28. Yu, C.; Xue, B.; Song, M.; Wang, Y.; Li, S.; Chang, C.-I. Iterative Target-Constrained Interference-Minimized Classifier for Hyperspectral Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1095–1117. [Google Scholar] [CrossRef]
  29. Digital Image Processing (3rd Edition): | Guide Books. Available online: https://dl.acm.org/doi/book/10.5555/1076432 (accessed on 16 January 2023).
  30. Menotti, D.; Najman, L.; Facon, J.; Araujo, A.D.A. Multi-Histogram Equalization Methods for Contrast Enhancement and Brightness Preserving. IEEE Trans. Consum. Electron. 2007, 53, 1186–1194. [Google Scholar] [CrossRef] [Green Version]
  31. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [Green Version]
  32. Ben Hamida, A.; Benoit, A.; Lambert, P.; Ben Amar, C. 3-D Deep Learning Approach for Remote Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4420–4434. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Process flow of the proposed algorithm. The function and process of each block are detailed in Section 2.
Figure 2. Visualization results of the Indian Pines dataset at different bands. The first row represents the raw grayscale. The second row represents the noise map with added noise. The third row represents the quality improvement result. Each column from left to right represents (a) band 1, (b) band 10, (c) band 20, (d) band 30, (e) band 50, (f) band 120, and (g) band 170.
Figure 3. Spectral curve at (40,40) in the Indian Pines dataset. Blue represents the original spectral curve, red represents the spectral curve of degraded data, and yellow represents the spectral curve after quality improvement.
Figure 4. Visualization results of the Pavia University dataset at different bands. The first row represents the original data. The second row represents the degraded map with added noise (σ = 5). The third row represents the quality improvement result. Each column from left to right represents (a) band 1, (b) band 10, (c) band 20, (d) band 30, (e) band 50, (f) band 80, and (g) band 100.
Figure 5. The spectral curve at (311,311) in the PaviaU dataset. Blue represents the original spectral curve, red represents the spectral curve of degraded data, and yellow represents the spectral curve after quality improvement.
Figure 6. Visualization results of the Salinas dataset at different bands. The first row represents the original data. The second row represents the noise map with added noise (σ = 10). The third row represents the quality improvement result. Each column from left to right represents (a) band 1, (b) band 10, (c) band 40, (d) band 80, (e) band 120, (f) band 160, and (g) band 210.
Figure 7. Spectral curve at (151,151) in the Salinas dataset. Blue represents the original spectral curve, red represents the degraded spectral curve, and yellow represents the spectral curve after quality improvement.
Figure 8. Visualization comparison results of three public datasets using different methods. (a) Experimental results of band 50 in the Indian Pines dataset. (b) Experimental results of band 1 in the PaviaU dataset. (c) Experimental results of band 11 in the Salinas dataset. Each row from left to right represents the ground truth, degraded data, BM4D, BM4D + LS, BM4D + HE, and the proposed method.
Figure 9. The spectral curve results of different comparison algorithms. Blue represents the original spectral curve, red represents the spectral curve of degraded data, yellow represents the curve obtained with the BM4D algorithm, purple with the BM4D + LS algorithm, green with the BM4D + HE algorithm, and light blue represents the spectral curve after quality improvement. (a) The spectral curve at (111,111) in the Indian Pines dataset. (b) The spectral curve at (171,171) in the PaviaU dataset. (c) The spectral curve at (101,101) in the Salinas dataset.
Figure 10. Index curves for each band of the Indian Pines dataset. The blue, red, yellow, purple, and dotted green represent each band’s index with degraded data, BM4D, BM4D + LS, BM4D + HE and proposed results, respectively. (a) PSNR evaluation. (b) SSIM evaluation. (c) Brightness evaluation. (d) Contrast evaluation.
Figure 11. Classification results on the Indian Pines dataset. (ac) represent category labels of ground truth, classification result on degraded data, and classification result after quality enhancement, respectively.
Figure 12. Classification results on the PaviaU dataset. (ac) represent category labels of ground truth, classification result on degraded data, and classification result after quality enhancement, respectively.
Figure 13. Classification results on the Salinas dataset. (ac) represent category labels of ground truth, classification result on degraded data, and classification result after quality enhancement, respectively.
Table 1. The category names of the three public datasets.
Class | Indian Pines | Salinas | PaviaU
C1 | Alfalfa | Brocoli_green_weeds_1 | Meadows
C2 | Corn-notill | Brocoli_green_weeds_2 | Gravel
C3 | Corn-mintill | Fallow | Trees
C4 | Corn | Fallow_rough_plow | Painted metal sheets
C5 | Grass-pasture | Fallow_smooth | Bare Soil
C6 | Grass-trees | Stubble | Bitumen
C7 | Grass-pasture-mowed | Celery | Self-Blocking Bricks
C8 | Hay-windrowed | Grapes_untrained | Shadows
C9 | Oats | Soil_vinyard_develop | Meadows
C10 | Soybean-notill | Corn_senesced_green_weeds | -
C11 | Soybean-mintill | Lettuce_romaine_4wk | -
C12 | Soybean-clean | Lettuce_romaine_5wk | -
C13 | Wheat | Lettuce_romaine_6wk | -
C14 | Woods | Lettuce_romaine_7wk | -
C15 | Buildings-Grass-Trees-Drives | Vinyard_untrained | -
C16 | Stone-Steel-Towers | Vinyard_vertical_trellis | -
Table 2. The quantitative results of five indicators on three public datasets. (Bold text indicates the best performance.)
Dataset | Method | SA | PSNR | SSIM | Brightness | Contrast
Indian Pines | GT | - | - | - | 0.1963 | 0.0339
Indian Pines | Degenerate | 9.7401 | 26.7679 | 0.4725 | 0.2004 | 0.0345
Indian Pines | BM4D | 3.3897 | 35.2064 | 0.8775 | 0.2002 | 0.0321
Indian Pines | BM4D + LS | 39.9986 | 9.6317 | 0.3205 | 0.4282 | 0.0451
Indian Pines | BM4D + HE | 43.4669 | 7.2456 | 0.2092 | 0.4998 | 0.0834
Indian Pines | Proposed algorithm | 2.8914 | 38.3781 | 0.9408 | 0.2670 | 0.0918
PaviaU | GT | - | - | - | 0.1736 | 0.0125
PaviaU | Degenerate | 15.561 | 26.2975 | 0.4868 | 0.1748 | 0.0146
PaviaU | BM4D | 4.0191 | 30.2303 | 0.9295 | 0.1746 | 0.0121
PaviaU | BM4D + LS | 4.4843 | 36.0436 | 0.9364 | 0.1651 | 0.0127
PaviaU | BM4D + HE | 34.4843 | 8.00190 | 0.3947 | 0.4999 | 0.0860
PaviaU | Proposed algorithm | 3.6901 | 38.6992 | 0.9432 | 0.3506 | 0.0603
Salinas | GT | - | - | - | 0.1310 | 0.0142
Salinas | Degenerate | 15.4885 | 26.7626 | 0.3911 | 0.1350 | 0.0154
Salinas | BM4D | 4.0345 | 33.4485 | 0.9214 | 0.1347 | 0.0134
Salinas | BM4D + LS | 37.1998 | 12.7385 | 0.4668 | 0.3017 | 0.0288
Salinas | BM4D + HE | 40.5397 | 6.73277 | 0.2363 | 0.5000 | 0.0845
Salinas | Proposed algorithm | 3.1011 | 40.0777 | 0.9391 | 0.3151 | 0.0753
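For readers reproducing the SA and PSNR indicators in Table 2, both can be computed directly from a reference cube and its restored estimate. The following is a minimal NumPy sketch under the standard definitions (mean spectral angle in degrees over all pixels, PSNR in dB against a given peak value); it is an illustration, not the authors' released evaluation code, and SSIM, brightness, and contrast follow their usual definitions elsewhere.

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    # Peak signal-to-noise ratio in dB; higher means closer to the reference.
    mse = np.mean((ref.astype(np.float64) - est.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def spectral_angle(ref, est):
    # Mean spectral angle (degrees) over all pixels of an H x W x B cube;
    # lower is better, 0 means identical spectral shapes.
    r = ref.reshape(-1, ref.shape[-1]).astype(np.float64)
    e = est.reshape(-1, est.shape[-1]).astype(np.float64)
    cos = np.sum(r * e, axis=1) / (
        np.linalg.norm(r, axis=1) * np.linalg.norm(e, axis=1) + 1e-12
    )
    return np.degrees(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))
```

Note that SA is invariant to per-pixel scaling of the spectrum, which is why contrast enhancement can raise brightness without necessarily degrading SA.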
Table 3. The classification results on different dividing ratios of three public datasets.
Dataset | Method | Acc. 50% (%) | Kappa 50% | Acc. 25% (%) | Kappa 25% | Acc. 15% (%) | Kappa 15%
Indian Pines | Degenerate | 93.776 | 92.9 | 81.865 | 79.3 | 72.922 | 68.9
Indian Pines | Proposed algorithm | 96.702 | 96.3 | 94.224 | 93.4 | 86.111 | 84.1
Salinas | Degenerate | 90.120 | 89.0 | 89.342 | 88.5 | 76.720 | 74.5
Salinas | Proposed algorithm | 96.586 | 96.2 | 92.815 | 91.9 | 89.600 | 88.4
PaviaU | Degenerate | 93.651 | 91.7 | 92.470 | 90.8 | 86.176 | 81.5
PaviaU | Proposed algorithm | 95.666 | 94.3 | 94.620 | 93.0 | 92.954 | 91.7
Average | Degenerate | 92.516 | 91.2 | 87.892 | 86.2 | 78.606 | 75.0
Average | Proposed algorithm | 96.318 | 95.6 | 93.886 | 92.7 | 89.556 | 88.1
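The Kappa columns in Table 3 report Cohen's kappa (scaled by 100), which corrects overall accuracy for chance agreement between the predicted and ground-truth label maps. A minimal sketch of the computation, assuming integer class labels; this is a generic illustration, not the authors' evaluation pipeline:

```python
import numpy as np

def cohen_kappa(y_true, y_pred, n_classes):
    # Cohen's kappa: 1.0 = perfect agreement, 0.0 = chance-level agreement.
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                                       # confusion matrix
    n = cm.sum()
    p_o = np.trace(cm) / n                                  # observed agreement (overall accuracy)
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # expected chance agreement
    return (p_o - p_e) / (1.0 - p_e)
```

For example, a kappa of 0.929 would appear in Table 3 as 92.9.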
Table 4. The classification accuracy of the 16 categories on the Indian Pines dataset.
Accuracy (%) | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8
Degenerate | 97.8 | 95.6 | 89.3 | 91.8 | 93.2 | 99.2 | 100 | 99.8
Proposed algorithm | 100 | 98.7 | 94.3 | 97.5 | 96.6 | 99.9 | 96.3 | 99.8
Accuracy (%) | C9 | C10 | C11 | C12 | C13 | C14 | C15 | C16
Degenerate | 94.7 | 92.8 | 94.9 | 93 | 99.5 | 98.5 | 88.3 | 98.9
Proposed algorithm | 100 | 98.4 | 97.9 | 98.8 | 99.5 | 99.9 | 89.1 | 96.7
Table 5. The classification accuracy of the 9 categories on the PaviaU dataset.
Accuracy (%) | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9
Degenerate | 96.3 | 95.2 | 88.9 | 98.1 | 100 | 92.9 | 92.3 | 93.5 | 100
Proposed algorithm | 97.6 | 96.7 | 92.0 | 98.7 | 100 | 97.5 | 97.2 | 95.9 | 99.7
Table 6. The classification accuracy of the 16 categories on the Salinas dataset.
Accuracy (%) | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8
Degenerate | 97.7 | 99.5 | 95.7 | 99.8 | 97.3 | 99.5 | 99.6 | 83.6
Proposed algorithm | 98.4 | 100 | 99.9 | 99.9 | 99.9 | 99.6 | 99.7 | 95.2
Accuracy (%) | C9 | C10 | C11 | C12 | C13 | C14 | C15 | C16
Degenerate | 98.7 | 95.9 | 92.3 | 98.6 | 98.0 | 98.3 | 60.6 | 94.9
Proposed algorithm | 99.8 | 98.4 | 96.5 | 99.0 | 99.2 | 99.0 | 91.1 | 94.1

Hu, B.; Chen, J.; Wang, Y.; Li, H.; Zhang, G. Joint Texture Search and Histogram Redistribution for Hyperspectral Image Quality Improvement. Sensors 2023, 23, 2731. https://doi.org/10.3390/s23052731
