DOI: 10.1145/3581807.3581849
NAF: Nest Axial Attention Fusion Network for Infrared and Visible Images

Published: 22 May 2023

Abstract
    In recent years, deep learning has been widely used for infrared and visible image fusion. However, existing deep-learning-based methods tend to lose details and give little consideration to long-range dependencies. To address this, we propose a novel encoder-decoder fusion model based on nest connections and axial attention, named NAF. The network extracts as much multi-scale information as possible and retains long-range dependencies thanks to the axial attention in each convolution block. The method comprises three parts: an encoder consisting of convolutional blocks, a fusion strategy based on spatial and channel attention, and a decoder that processes the fused features. Specifically, the source images are first fed into the encoder to extract multi-scale depth features. A fusion strategy then merges the depth features at each scale produced by the encoder. Finally, a decoder based on nested convolutional blocks reconstructs the fused image. Experimental results on public datasets demonstrate that the proposed method achieves better fusion performance than other state-of-the-art methods in both subjective and objective evaluation.
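    The paper's architecture is not reproduced in this excerpt, so the sketch below only illustrates, in NumPy, the two mechanisms the abstract names: (a) axial attention, which factorizes 2D self-attention into a pass along the height axis followed by one along the width axis, and (b) a fusion of two feature maps by spatial and channel attention weights (in the spirit of NestFuse-style strategies). The function names, the choice of Q = K = V without learned projections, and the averaging of the two attention results are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def axial_attention(x):
        # x: (H, W, C) feature map; Q = K = V = x (no learned projections).
        h, w, c = x.shape
        # Height axis: each column attends over its rows, cost O(W * H^2).
        scores = np.einsum('hwc,gwc->whg', x, x) / np.sqrt(c)   # (W, H, H)
        x = np.einsum('whg,gwc->hwc', softmax(scores), x)
        # Width axis: each row attends over its columns, cost O(H * W^2).
        scores = np.einsum('hwc,hvc->hwv', x, x) / np.sqrt(c)   # (H, W, W)
        return np.einsum('hwv,hvc->hwc', softmax(scores), x)

    def attention_fusion(a, b):
        # Fuse two (H, W, C) feature maps from the two source images.
        # Spatial weights: per-pixel L1 norm across channels, normalised
        # between the two sources with a numerically stable sigmoid.
        sa = np.abs(a).sum(-1)                      # (H, W)
        sb = np.abs(b).sum(-1)
        wa = 1.0 / (1.0 + np.exp(sb - sa))          # 2-way softmax over {a, b}
        spatial = wa[..., None] * a + (1.0 - wa)[..., None] * b
        # Channel weights: global average pooling per channel.
        ca, cb = a.mean((0, 1)), b.mean((0, 1))     # (C,)
        va = 1.0 / (1.0 + np.exp(cb - ca))
        channel = va * a + (1.0 - va) * b
        return 0.5 * (spatial + channel)            # average the two results
    ```

    Compared with full 2D self-attention, whose cost is quadratic in the number of pixels, the axial factorization above is what makes attention affordable inside every convolution block of an encoder.
    
    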


    Published In

    ICCPR '22: Proceedings of the 2022 11th International Conference on Computing and Pattern Recognition
    November 2022
    683 pages
    ISBN:9781450397056
    DOI:10.1145/3581807

    Publisher

    Association for Computing Machinery

    New York, NY, United States
