
Region- and Pixel-Level Multi-Focus Image Fusion through Convolutional Neural Networks

Published in: Mobile Networks and Applications

Abstract

Capturing all-in-focus images of 3D scenes is typically challenging due to depth-of-field limitations, and various multi-focus image fusion methods have been employed to generate all-in-focus images. However, existing methods have difficulty achieving real-time operation and superior fusion quality simultaneously. In this paper, we propose a region- and pixel-based method that recognizes focused and defocused regions or pixels from neighborhood information in the source images. The proposed method obtains satisfactory fusion results while achieving improved real-time performance. First, a convolutional neural network (CNN)-based classifier quickly generates a coarse region-based trimap containing focused, defocused and boundary regions. Then, precise fine-tuning is performed in the boundary regions to handle the boundary pixels that existing methods find difficult to discriminate. A high-quality dataset with abundant, precise pixel-level labels is constructed from a public database, so that the proposed method can accurately classify regions and pixels without artifacts. Furthermore, an image interpolation method called NEAREST_Gaussian is proposed to improve recognition ability at the boundary. Experimental results show that the proposed method outperforms other state-of-the-art methods in both visual perception and objective metrics. Additionally, the proposed method achieves an approximately 80% efficiency improvement over conventional CNN-based methods.
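As a reading aid, the Python sketch below (not taken from the paper) illustrates the flow the abstract describes: a coarse focus/defocus/boundary trimap, upsampling of that trimap, pixel-level refinement in the boundary band, and a weighted blend of the two source images. It is a minimal sketch under stated assumptions: the blockwise variance score is a stand-in for the authors' CNN classifier, the sharpness-based refinement is a stand-in for their boundary fine-tuning, and NEAREST_Gaussian is assumed here to mean nearest-neighbour upsampling followed by Gaussian smoothing. All function names, thresholds, and parameters are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom


def coarse_trimap(score_a, score_b, margin=0.1):
    """Region level: label each block focused-in-A (1), focused-in-B (0) or boundary (0.5)."""
    trimap = np.full(score_a.shape, 0.5)
    trimap[score_a > score_b + margin] = 1.0
    trimap[score_b > score_a + margin] = 0.0
    return trimap


def nearest_gaussian_upsample(trimap, scale, sigma=2.0):
    """Assumed reading of NEAREST_Gaussian: nearest-neighbour upsampling, then Gaussian smoothing."""
    up = zoom(trimap, scale, order=0)          # order=0 -> nearest-neighbour interpolation
    return gaussian_filter(up, sigma=sigma)


def refine_boundary(weights, img_a, img_b, band=(0.25, 0.75)):
    """Pixel level (stand-in for the paper's fine-tuning): inside the uncertain band,
    keep the source with the larger local high-frequency energy."""
    sharp = lambda x: np.abs(x - gaussian_filter(x, 1.0))   # crude sharpness proxy
    boundary = (weights > band[0]) & (weights < band[1])
    a_sharper = sharp(img_a) >= sharp(img_b)
    out = weights.copy()
    out[boundary & a_sharper] = 1.0
    out[boundary & ~a_sharper] = 0.0
    return out


def fuse(img_a, img_b, block=16):
    """Blockwise focus scores -> coarse trimap -> upsample -> boundary refinement -> blend.
    Assumes grayscale float images whose sides are divisible by `block`."""
    h, w = img_a.shape
    gh, gw = h // block, w // block
    # Blockwise variance as a placeholder focus measure (the paper uses a CNN classifier).
    score = lambda img: img.reshape(gh, block, gw, block).var(axis=(1, 3))
    trimap = coarse_trimap(score(img_a), score(img_b))
    weights = refine_boundary(nearest_gaussian_upsample(trimap, block), img_a, img_b)
    return weights * img_a + (1.0 - weights) * img_b


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, b = rng.random((128, 128)), rng.random((128, 128))
    print(fuse(a, b).shape)   # -> (128, 128)
```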




Availability of data and material

Not applicable.


Acknowledgments

This work was supported by the National Key R&D Program (2018AAA0102600).

Funding

National Key R&D Program (2018AAA0102600).

Author information


Contributions

Not applicable.

Corresponding author

Correspondence to Huihua Yang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflicts of interest.

Code availability

Not applicable.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Zhao, W., Yang, H., Wang, J. et al. Region- and Pixel-Level Multi-Focus Image Fusion through Convolutional Neural Networks. Mobile Netw Appl 26, 40–56 (2021). https://doi.org/10.1007/s11036-020-01719-9

