Infrared and visible images fusion using visual saliency and optimized spiking cortical model in non-subsampled shearlet transform domain

Published in: Multimedia Tools and Applications

Abstract

Existing infrared and visible image fusion methods suffer from problems such as edge blurring, low contrast, and loss of detail. To address these, a novel fusion scheme is proposed based on the non-subsampled shearlet transform (NSST), visual saliency, and a spiking cortical model (SCM) optimized by a multi-objective artificial bee colony (MOABC) algorithm. NSST offers advantages such as multi-scale analysis and sparse representation; the visual saliency map improves the low-frequency fusion strategy; and SCM has coupling and pulse-synchronization properties. First, NSST decomposes each source image into a low-frequency subband and a series of high-frequency subbands. Second, the low-frequency subbands are fused by an SCM stimulated by the edge saliency maps of the low-frequency subbands of the source images, and the high-frequency subbands are likewise fused by an SCM, whose input stimulus is the modified spatial frequency of the high-frequency subbands; the SCM parameters are optimized by the novel multi-objective artificial bee colony technique. Finally, the fused image is reconstructed by the inverse NSST. Experimental results indicate that the proposed scheme performs well and clearly outperforms other current typical methods in both subjective visual quality and objective criteria.
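The SCM-based fusion rule described in the abstract can be sketched roughly as follows. This is an illustrative NumPy sketch, not the authors' implementation: the parameter values (`f`, `g`, `h`, `beta`), the number of iterations, and the "keep the coefficient whose neuron fires more often" selection rule are assumptions, and the NSST decomposition, the saliency/modified-spatial-frequency stimuli, and the MOABC parameter optimization are omitted.

```python
import numpy as np

def scm_fire_counts(S, iterations=40, f=0.8, g=0.7, h=20.0, beta=0.1):
    """Illustrative spiking cortical model (SCM) iteration.

    S    : 2-D input stimulus (e.g. a normalized subband)
    f, g : decay factors of the internal activity U and threshold E
    h    : amplitude added to the threshold after a neuron fires
    beta : linking strength to the 3x3 neighbourhood of pulses Y
    Returns the per-pixel firing count accumulated over all iterations.
    """
    U = np.zeros_like(S, dtype=float)     # internal activity
    E = np.ones_like(S, dtype=float)      # dynamic threshold
    Y = np.zeros_like(S, dtype=float)     # pulse output
    fire = np.zeros_like(S, dtype=float)  # firing-count map
    for _ in range(iterations):
        # linking input: sum of pulses in the 3x3 neighbourhood (centre excluded)
        L = sum(np.roll(np.roll(Y, dy, axis=0), dx, axis=1)
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)) - Y
        U = f * U + S * (1.0 + beta * L)  # leaky integration of the stimulus
        Y = (U > E).astype(float)         # neuron fires when U exceeds E
        E = g * E + h * Y                 # threshold jumps after a spike, then decays
        fire += Y
    return fire

def fuse_subbands(A, B, **kw):
    """Toy fusion rule: keep the coefficient whose SCM neuron fires more often."""
    fa, fb = scm_fire_counts(A, **kw), scm_fire_counts(B, **kw)
    return np.where(fa >= fb, A, B)
```

In the paper's full pipeline, this kind of rule would be applied separately to the low-frequency and high-frequency NSST subbands, with the SCM stimuli replaced by the edge saliency map and the modified spatial frequency respectively.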




Acknowledgements

The authors’ work is supported by the National Natural Science Foundation of China (Nos. 61463052 and 61365001).

Author information

Correspondence to Dongming Zhou.

Ethics declarations

Conflict of interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

About this article

Cite this article

Hou, R., Nie, R., Zhou, D. et al. Infrared and visible images fusion using visual saliency and optimized spiking cortical model in non-subsampled shearlet transform domain. Multimed Tools Appl 78, 28609–28632 (2019). https://doi.org/10.1007/s11042-018-6099-x
