Research Article

Medical image fusion based on convolutional neural networks and non-subsampled contourlet transform

Published: 01 June 2021

Highlights

Solves the problem that CNNs cannot be directly applied to medical image fusion.
Uses a CNN to replace artificially designed fusion rules.
Proposes a new CNN (PHF-CNN) to select high-frequency subband coefficients.

Abstract

Although many powerful convolutional neural networks (CNNs) have been applied to various image-processing tasks, CNNs cannot be used directly for medical image fusion (MIF): datasets for network training are scarce, and diverse multi-modal source images often have significantly different intensities at the same location. This is a major problem that limits the development of the field. This article presents a novel multi-modal medical image fusion method based on the non-subsampled contourlet transform (NSCT) and CNNs. The proposed algorithm not only solves this problem but also exploits the advantages of both NSCT and CNNs to obtain better fusion results. First, the multi-modality source images are decomposed into low- and high-frequency subbands. For the high-frequency subbands, a new perceptual high-frequency CNN (PHF-CNN), trained in the frequency domain, serves as an adaptive fusion rule. For the low-frequency subband, two result maps are combined to generate a decision map. Finally, the fused frequency subbands are integrated by the inverse NSCT. To verify its effectiveness, the proposed algorithm is compared with ten state-of-the-art MIF algorithms. Subjective evaluations by five doctors, together with objective evaluations by seven image-quality metrics, demonstrate that the proposed algorithm outperforms the comparative algorithms in fusing multi-modal medical images.
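The overall pipeline the abstract describes (decompose each source image into low- and high-frequency subbands, fuse each band with its own rule, then apply the inverse transform) can be illustrated with a minimal numpy-only sketch. Note the substitutions: a Gaussian low-pass/high-pass split stands in for the NSCT, a choose-max-absolute rule stands in for the trained PHF-CNN, and simple averaging stands in for the paper's decision-map rule for the low-frequency subband, since neither the NSCT nor the trained network is available off the shelf. All function names here are illustrative, not from the paper.

```python
import numpy as np

def gaussian_blur(img, sigma=2.0):
    """Separable Gaussian filter with edge padding (numpy only)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    padded = np.pad(img, radius, mode="edge")
    # Convolve rows, then columns; "valid" mode restores the original shape.
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def fuse(a, b, sigma=2.0):
    """Toy two-band fusion of two grayscale images of equal shape."""
    # Stand-in for NSCT decomposition: low = blur, high = residual.
    low_a, low_b = gaussian_blur(a, sigma), gaussian_blur(b, sigma)
    high_a, high_b = a - low_a, b - low_b
    # Low-frequency rule: plain average (the paper derives a decision map instead).
    low_f = 0.5 * (low_a + low_b)
    # High-frequency rule: keep the larger-magnitude coefficient
    # (the paper uses the trained PHF-CNN to make this choice).
    high_f = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
    # Stand-in for the inverse NSCT: the split is additive, so just sum the bands.
    return low_f + high_f
```

A sanity check on the design: because the decomposition here is additive, fusing an image with itself returns the image unchanged, which any reasonable fusion rule should satisfy.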


Published In

Expert Systems with Applications: An International Journal, Volume 171, Issue C, June 2021, 587 pages.

Publisher

Pergamon Press, Inc., United States


Author Tags

1. Medical image fusion
2. Convolutional neural network
3. Non-subsampled contourlet transform
