
Recent advances via convolutional sparse representation model for pixel-level image fusion

  • Published in Multimedia Tools and Applications

Abstract

Image fusion aims to integrate complementary information from different source images into a single output image and plays a significant role in high-level vision tasks. However, image fusion methods based on sparse representation (SR) or conventional multiscale transforms (MST) have drawbacks that are difficult to overcome. As an alternative form of SR, convolutional sparse representation (CSR) offers detail preservation and shift-invariance, which overcome the shortcomings of SR- and MST-based fusion methods. Since CSR has been widely applied to image fusion and has advanced the field considerably, a comprehensive investigation of CSR-based image fusion is warranted. To the best of our knowledge, no previous paper has reviewed and evaluated CSR-based fusion methods, and this study is the first such survey. In this paper, we review recent advances in pixel-level image fusion based on CSR. In the experimental part, multi-focus images, infrared-visible images, and multimodal medical images are used as test images to compare and evaluate the performance of different fusion methods. In addition, future trends of CSR-based image fusion are discussed. This paper is intended to serve as a reference for both researchers and general learners seeking an overview of CSR-based image fusion.
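To make the CSR-based fusion pipeline discussed above concrete, the following is a minimal illustrative sketch, not the authors' implementation, of a widely used two-scale CSR fusion scheme: each source image is split into a lowpass base layer and a highpass detail layer, the detail layers are sparse-coded with a shared convolutional dictionary, the coefficient maps are fused by a choose-max rule on their per-pixel l1-norm activity, and the fused image is reconstructed. It assumes the SPORCO Python package for convolutional sparse coding; the dictionary `D`, the Gaussian smoothing scale, and the regularization weight `lmbda` are placeholder choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve
from sporco.admm import cbpdn


def csr_fuse(img_a, img_b, D, lmbda=0.01):
    """Fuse two pre-registered grayscale images (float arrays in [0, 1])."""
    # 1. Two-scale decomposition: lowpass base layer + highpass detail layer.
    base_a = gaussian_filter(img_a, sigma=5)
    base_b = gaussian_filter(img_b, sigma=5)
    det_a, det_b = img_a - base_a, img_b - base_b

    # 2. Convolutional sparse coding of each detail layer with a shared
    #    dictionary D (e.g. M learned 8x8 filters stored as an (8, 8, M) array).
    opt = cbpdn.ConvBPDN.Options({'Verbose': False, 'MaxMainIter': 100})
    X_a = np.squeeze(cbpdn.ConvBPDN(D, det_a, lmbda, opt).solve())  # (H, W, M)
    X_b = np.squeeze(cbpdn.ConvBPDN(D, det_b, lmbda, opt).solve())

    # 3. Activity level = l1 norm of the coefficient vector at each pixel;
    #    choose-max fusion of the coefficient maps.
    act_a = np.abs(X_a).sum(axis=-1)
    act_b = np.abs(X_b).sum(axis=-1)
    X_f = np.where((act_a >= act_b)[..., None], X_a, X_b)

    # 4. Fused detail layer = sum of filter responses; base layers are averaged.
    det_f = sum(fftconvolve(X_f[..., m], D[..., m], mode='same')
                for m in range(D.shape[-1]))
    return 0.5 * (base_a + base_b) + det_f
```

In practice, the filter bank passed as `D` would be learned offline (SPORCO also provides convolutional dictionary learning solvers), and the surveyed methods differ mainly in how the decomposition, the activity measure, and the coefficient-fusion rule in steps 1-3 are designed.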


Data Availability

The datasets used or analyzed during the current study are available from the corresponding author upon reasonable request.

Code Availability

Not applicable

Notes

  1. http://www.imagefusion.org

  2. https://figshare.com/articles/dataset/TNO_Image_dataset/1008029

  3. http://www.med.harvard.edu/AANLIB/home.html


Acknowledgements

This work was supported by the Sichuan Science and Technology Program (2023NSFSC0495), the Sichuan University and Luzhou Municipal People's Government strategic cooperation project (2020CDLZ-10), and the Colleague Project of the Intelligent Policing Key Laboratory of Sichuan Province (ZNJW2022ZZMS001, ZNJW2023ZZQN004).


Author information

Authors and Affiliations

Authors

Contributions

Yue Pan: Experiment design, Data analysis and interpretation, Paper writing. Tianye Lan: Literature collection, Manuscript polishing. Chongyang Xu: Data collection, Manuscript polishing. Chengfang Zhang: Project supervision, Experiment guidance, Manuscript review. Ziliang Feng: Project supervision, Manuscript review.

Corresponding author

Correspondence to Chengfang Zhang.

Ethics declarations

Ethical approval

This article does not contain any study performed on humans or animals by the authors.

Consent to participate

This article does not contain any study performed on humans or animals by the authors.

Consent for publication

Not applicable

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Pan, Y., Lan, T., Xu, C. et al. Recent advances via convolutional sparse representation model for pixel-level image fusion. Multimed Tools Appl 83, 52899–52930 (2024). https://doi.org/10.1007/s11042-023-17584-z
