Article

Residual Attention-Based Image Fusion Method with Multi-Level Feature Encoding

Hao Li, Tiantian Yang, Runxiang Wang, Cuichun Li, Shuyu Zhou and Xiqing Guo *

1 Chinese Academy of Sciences, Aerospace Information Research Institute, Beijing 100094, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing 100094, China
4 School of Future Technology, Tianjin University, Tianjin 300072, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(3), 717; https://doi.org/10.3390/s25030717
Submission received: 13 December 2024 / Revised: 22 January 2025 / Accepted: 22 January 2025 / Published: 24 January 2025
(This article belongs to the Section Sensing and Imaging)

Abstract

This paper presents a novel image fusion method designed to enhance the integration of infrared and visible images through the use of a residual attention mechanism. The primary objective is to generate a fused image that effectively combines the thermal radiation information from infrared images with the detailed texture and background information from visible images. To achieve this, we propose a multi-level feature extraction and fusion framework that encodes both shallow and deep image features. In this framework, deep features are utilized as queries, while shallow features function as keys and values within a residual cross-attention module. This architecture enables a more refined fusion process by selectively attending to and integrating relevant information from different feature levels. Additionally, we introduce a dynamic feature preservation loss function to optimize the fusion process, ensuring the retention of critical details from both source images. Experimental results demonstrate that the proposed method outperforms existing fusion techniques across various quantitative metrics and delivers superior visual quality.
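To make the attention arrangement described above concrete, the following is a minimal PyTorch sketch of a residual cross-attention block in which deep features serve as queries and shallow features serve as keys and values, with a residual connection around the attention output. This is an illustration only, not the authors' implementation; the class name, tensor shapes, and hyperparameters (ResidualCrossAttention, dim, num_heads) are assumptions.

    # Illustrative sketch only; not the paper's code. Assumes features have
    # already been flattened to token sequences of shape (batch, tokens, dim).
    import torch
    import torch.nn as nn

    class ResidualCrossAttention(nn.Module):
        def __init__(self, dim: int, num_heads: int = 8):
            super().__init__()
            self.norm_q = nn.LayerNorm(dim)
            self.norm_kv = nn.LayerNorm(dim)
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

        def forward(self, deep: torch.Tensor, shallow: torch.Tensor) -> torch.Tensor:
            q = self.norm_q(deep)        # queries come from deep features
            kv = self.norm_kv(shallow)   # keys/values come from shallow features
            fused, _ = self.attn(q, kv, kv)
            return deep + fused          # residual connection around attention

    # Hypothetical usage with made-up feature-map sizes:
    block = ResidualCrossAttention(dim=256)
    deep = torch.randn(1, 64, 256)       # flattened deep feature map
    shallow = torch.randn(1, 256, 256)   # flattened shallow feature map
    out = block(deep, shallow)           # shape (1, 64, 256)

Under this reading, the deep features decide where to look, the shallow features supply the fine texture being attended to, and the residual path preserves the deep representation when attention adds little.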
Keywords: image fusion; transformer; cross attention; deep learning; feature encoding

Share and Cite

MDPI and ACS Style

Li, H.; Yang, T.; Wang, R.; Li, C.; Zhou, S.; Guo, X. Residual Attention-Based Image Fusion Method with Multi-Level Feature Encoding. Sensors 2025, 25, 717. https://doi.org/10.3390/s25030717

AMA Style

Li H, Yang T, Wang R, Li C, Zhou S, Guo X. Residual Attention-Based Image Fusion Method with Multi-Level Feature Encoding. Sensors. 2025; 25(3):717. https://doi.org/10.3390/s25030717

Chicago/Turabian Style

Li, Hao, Tiantian Yang, Runxiang Wang, Cuichun Li, Shuyu Zhou, and Xiqing Guo. 2025. "Residual Attention-Based Image Fusion Method with Multi-Level Feature Encoding" Sensors 25, no. 3: 717. https://doi.org/10.3390/s25030717

APA Style

Li, H., Yang, T., Wang, R., Li, C., Zhou, S., & Guo, X. (2025). Residual Attention-Based Image Fusion Method with Multi-Level Feature Encoding. Sensors, 25(3), 717. https://doi.org/10.3390/s25030717

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
