
Multi-autoencoder with Perceptual Loss-Based Network for Infrared and Visible Image Fusion

Published: 07 April 2023
DOI: 10.1145/3582649.3582682

Abstract

Infrared images have salient targets, while visible images contain abundant details. Fusing an infrared image with a visible image aims to preserve the important information of both, and there is still room for improvement in this field. In this paper, we propose a novel multi-autoencoder fusion network with a perceptual loss and weighted mean curvature (WMC) enhancement. First, we design a preprocessing module in the framework to enhance the visible source images. Then, to preserve more details and textures, we extract higher-level feature maps and use a perceptual loss that calculates the Euclidean distance between the feature maps of the fused image and those of the infrared image. To highlight the targets of the infrared image, we use a Sobel-based gradient loss function to train the proposed network. Finally, we use WMC to enhance the outputs of the proposed network. Extensive experiments on the public TNO and RoadScene datasets are conducted to evaluate the proposed method. The experimental results demonstrate that the proposed method outperforms several state-of-the-art methods both qualitatively and quantitatively.
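The abstract names two training losses: a perceptual loss that measures the Euclidean distance between feature maps of the fused and infrared images, and a Sobel-based gradient loss that emphasizes infrared targets. The authors' implementation is not given on this page; the following is a minimal PyTorch sketch of how such loss terms are commonly implemented, assuming grayscale (N, 1, H, W) inputs, a pretrained VGG-16 feature extractor, and an illustrative layer choice that is not taken from the paper.

```python
# Illustrative sketch only -- not the authors' code. Layer choice, epsilon,
# and the lack of ImageNet normalization are simplifying assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

# Fixed Sobel kernels for horizontal and vertical gradients.
_SOBEL_X = torch.tensor([[-1., 0., 1.],
                         [-2., 0., 2.],
                         [-1., 0., 1.]]).view(1, 1, 3, 3)
_SOBEL_Y = _SOBEL_X.transpose(2, 3)

def sobel_gradient(img):
    """Gradient magnitude of a single-channel image batch (N, 1, H, W)."""
    gx = F.conv2d(img, _SOBEL_X.to(img.device), padding=1)
    gy = F.conv2d(img, _SOBEL_Y.to(img.device), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def gradient_loss(fused, infrared):
    """Sobel-based gradient loss: keep the strong edges (targets) of the infrared image."""
    return F.l1_loss(sobel_gradient(fused), sobel_gradient(infrared))

class PerceptualLoss(torch.nn.Module):
    """Euclidean (MSE) distance between VGG-16 feature maps of two images."""
    def __init__(self, layer_index=8):  # layer_index=8 (~relu2_2) is an assumption
        super().__init__()
        vgg = models.vgg16(pretrained=True).features[:layer_index].eval()
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg

    def forward(self, fused, reference):
        # VGG expects 3-channel input; repeat the grayscale channel.
        f = self.vgg(fused.repeat(1, 3, 1, 1))
        r = self.vgg(reference.repeat(1, 3, 1, 1))
        return F.mse_loss(f, r)
```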



    Published In

    ICIGP '23: Proceedings of the 2023 6th International Conference on Image and Graphics Processing
    January 2023
    246 pages
    ISBN:9781450398572
    DOI:10.1145/3582649

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. deep learning
    2. image fusion
    3. infrared image
    4. perceptual loss
    5. visible image

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    ICIGP 2023
