Abstract
Hyperspectral and multispectral image (HS-MSI) fusion aims to generate a high spatial resolution hyperspectral image (HR-HSI) by exploiting the complementarity and redundancy of a low spatial resolution hyperspectral image (LR-HSI) and a high spatial resolution multispectral image (HR-MSI). Previous works usually assume that the spatial down-sampling operator between the HR-HSI and the LR-HSI, and the spectral response function between the HR-HSI and the HR-MSI, are known, which is infeasible in many cases. In this paper, we propose a coarse-to-fine HS-MSI fusion network that requires no prior knowledge of the mapping between the HR-HSI and the LR-HSI or HR-MSI; the result is further improved by iterating the proposed structure. Our model is composed of three blocks: a degradation block, an error map fusion block, and a reconstruction block. The degradation block simulates the spatial and spectral down-sampling of the hyperspectral image. Error maps in the spatial and spectral domains are then obtained by subtracting the degraded results from the inputs. The error map fusion block fuses these errors into error maps corresponding to the initialized HSI. Provided the learned degradation process represents the real mapping function, this block is able to generate accurate errors between the degraded images and the ground truth. The reconstruction block uses the fused maps to correct the HSI and finally produces a high-precision hyperspectral image. Experimental results on the CAVE and Harvard datasets indicate that the proposed method achieves good performance both visually and quantitatively compared with state-of-the-art methods.
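The degrade-then-correct cycle described above can be sketched numerically. The following is a minimal illustration only, not the paper's network: it assumes a fixed average-pooling operator as the spatial degradation and a known spectral response matrix `srf`, whereas the paper learns both, and it fuses the two error maps by simple addition rather than a learned fusion block. All function names (`spatial_down`, `spatial_up`, `correction_loop`) are hypothetical.

```python
import numpy as np

def spatial_down(z, s):
    """Average-pool each band by factor s (stand-in for the learned spatial degradation)."""
    H, W, B = z.shape
    return z.reshape(H // s, s, W // s, s, B).mean(axis=(1, 3))

def spatial_up(e, s):
    """Nearest-neighbour upsampling, used to lift a spatial-domain error map back to full size."""
    return np.repeat(np.repeat(e, s, axis=0), s, axis=1)

def correction_loop(lr_hsi, msi, srf, s, iters=50, step=0.5):
    """Iteratively correct an initial HSI estimate with fused spatial/spectral error maps.

    lr_hsi : (H/s, W/s, B) low spatial resolution hyperspectral image
    msi    : (H, W, b) high spatial resolution multispectral image
    srf    : (b, B) spectral response matrix mapping B hyperspectral bands to b MSI bands
    """
    z = spatial_up(lr_hsi, s)                           # coarse initialization
    for _ in range(iters):
        err_sp = lr_hsi - spatial_down(z, s)            # spatial-domain error map
        err_spec = msi - z @ srf.T                      # spectral-domain error map
        fused = spatial_up(err_sp, s) + err_spec @ srf  # naive additive fusion
        z = z + step * fused                            # reconstruction / correction step
    return z
```

With the operators fixed as above, each iteration is a gradient-descent step on the two degradation residuals, so the "error maps shrink as the estimate improves" intuition from the abstract can be observed directly; the paper's contribution is precisely to learn the degradation and fusion instead of assuming them.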
This work was supported in part by the National Natural Science Foundation of China (61772274, 62071233, 61671243, 61976117), the Jiangsu Provincial Natural Science Foundation of China (BK20211570, BK20180018, BK20191409), the Fundamental Research Funds for the Central Universities (30917015104, 30919011103, 30919011402, 30921011209), and in part by the China Postdoctoral Science Foundation under Grants 2017M611814 and 2018T110502.
Cite this paper
Wang, T., Xu, Y., Wu, Z., Wei, Z. (2022). Spatial Spectral Joint Correction Network for Hyperspectral and Multispectral Image Fusion. In: Wallraven, C., Liu, Q., Nagahara, H. (eds) Pattern Recognition. ACPR 2021. Lecture Notes in Computer Science, vol 13189. Springer, Cham. https://doi.org/10.1007/978-3-031-02444-3_2