A Precise Multi-Exposure Image Fusion Method Based on Low-level Features
Abstract
1. Introduction
- An image registration algorithm based on a priori exposure quality is proposed. It minimizes the local exposure distortion of the fused image caused by improper selection of the reference image, improving the robustness of MEF in dynamic scenes. Additionally, both a structure consistency test and a connectivity test are introduced to identify ghost regions during ghost removal; the structure consistency test avoids a large amount of explicit motion estimation (see the first sketch after this list).
- An MEF framework based on low-level image features is proposed. It integrates spatial-domain scale decomposition, image patch structure decomposition, and a moderate exposure evaluation that optimizes both global and local exposure quality to improve the visual quality of the fused image. In addition, low-level features such as brightness, contrast, and intensity are used to improve fusion efficiency and preserve more detailed scene information (see the second sketch after this list), thereby achieving precise fusion of multi-exposure images.
- The proposed MEF framework applies not only to static scenes but also to dynamic scenes. Compared with existing MEF solutions, it improves the robustness of ghost removal in dynamic scenes and performs well in color saturation, sharpness, and local detail processing. Its performance is confirmed by both subjective and objective evaluations.
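As a rough illustration of the first contribution's structure consistency and connectivity tests, the sketch below flags ghost regions by thresholding a local zero-mean normalized cross-correlation against the reference image and then discarding small connected components. This is a minimal sketch under assumed parameters (window radius, thresholds, grayscale inputs in [0, 1]), not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter, label

def structure_consistency(src, ref, r=4, eps=1e-6):
    """Local zero-mean normalized cross-correlation between two aligned
    grayscale images; values near 1 indicate consistent structure."""
    mean = lambda x: uniform_filter(x, size=2 * r + 1)
    mu_s, mu_r = mean(src), mean(ref)
    cov = mean(src * ref) - mu_s * mu_r
    var_s = mean(src * src) - mu_s ** 2
    var_r = mean(ref * ref) - mu_r ** 2
    return cov / np.sqrt(np.maximum(var_s * var_r, eps))

def ghost_mask(src, ref, ncc_thresh=0.6, min_area=64):
    """Structure consistency test followed by a connectivity test:
    small isolated detections are treated as noise and dropped."""
    # ncc_thresh and min_area are illustrative assumptions, not the paper's values.
    inconsistent = structure_consistency(src, ref) < ncc_thresh
    labels, n = label(inconsistent)          # connected-component labeling
    mask = np.zeros_like(inconsistent)
    for i in range(1, n + 1):
        component = labels == i
        if component.sum() >= min_area:      # keep only sizable moving regions
            mask |= component
    return mask
```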
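The "moderate exposure evaluation" and low-level features (brightness, contrast) in the second contribution can be pictured with a Mertens-style per-pixel weight: a Gaussian well-exposedness term centered at mid-intensity multiplied by a local contrast term. The constants below (sigma = 0.2, an absolute-Laplacian contrast measure) are assumptions borrowed from classical exposure fusion, not the paper's exact definitions.

```python
import numpy as np
from scipy.ndimage import laplace

def exposure_weight(gray, sigma=0.2):
    """Gaussian well-exposedness: pixels near mid-intensity (0.5) score
    high; under- and over-exposed pixels score low. `gray` is in [0, 1]."""
    # sigma = 0.2 is the classical exposure-fusion choice, assumed here.
    return np.exp(-((gray - 0.5) ** 2) / (2 * sigma ** 2))

def contrast_weight(gray):
    # Absolute Laplacian response as a simple local-contrast measure.
    return np.abs(laplace(gray))

def fusion_weights(grays, eps=1e-12):
    """Per-pixel weights for a sequence of aligned grayscale exposures,
    normalized so the weights sum to 1 at every pixel."""
    w = np.stack([exposure_weight(g) * contrast_weight(g) + eps for g in grays])
    return w / w.sum(axis=0)
```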
2. Related Work
2.1. MEF Algorithms in a Static Scene
2.2. Ghost Removal Algorithms in a Dynamic Scene
3. Multi-Exposure Image Registration Fusion Method
3.1. Dynamic Scene Registration
3.1.1. Reference Image Selection
3.1.2. Intensity Map Replacement
3.2. A Precise Multi-Exposure Image Fusion
3.2.1. Image Space-Domain Decomposition by the Guided Filter
3.2.2. Fusion Based on Global and Local Exposure Optimization
3.2.3. Exposure Fusion Using the Gaussian Weight Method
3.2.4. The Workflow of the Proposed FPM Algorithm
Algorithm 1 The proposed FPM algorithm.
Input: Source image sequences
Output: A fused image F
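Reading the outline above together with Sections 3.2.1-3.2.3, a plausible skeleton of the workflow is: decompose each registered exposure into base and detail layers with a guided filter, blend the base layers with spatially smoothed weights and the detail layers with the raw weights, and recombine. The guided filter below follows He et al.'s box-filter formulation; the radii, regularization values, and the simple weighted blending are illustrative assumptions rather than the published FPM algorithm.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, r=8, eps=1e-3):
    """Edge-preserving smoothing of `src` guided by `guide` (He et al.'s
    box-filter formulation; grayscale arrays with values in [0, 1])."""
    # r and eps are illustrative; the paper's settings may differ.
    mean = lambda x: uniform_filter(x, size=2 * r + 1)
    mu_g, mu_s = mean(guide), mean(src)
    a = (mean(guide * src) - mu_g * mu_s) / (mean(guide * guide) - mu_g ** 2 + eps)
    b = mu_s - a * mu_g
    return mean(a) * guide + mean(b)

def fuse_exposures(grays, weights, smooth_r=20):
    """Two-scale fusion: base layers blended with smoothed weights
    (avoids halos), detail layers blended with the raw sharp weights."""
    bases = [guided_filter(g, g) for g in grays]        # large-scale brightness
    details = [g - b for g, b in zip(grays, bases)]     # residual texture
    wb = [uniform_filter(w, size=2 * smooth_r + 1) for w in weights]
    norm = sum(wb)                                      # re-normalize smoothed weights
    fused = sum(w / norm * b for w, b in zip(wb, bases)) + \
            sum(w * d for w, d in zip(weights, details))
    return np.clip(fused, 0.0, 1.0)
```

With `fusion_weights` from the earlier sketch, `fuse_exposures(grays, fusion_weights(grays))` yields a fused grayscale image; a color pipeline would apply the same weights to each channel.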
4. Comparative Experiments
4.1. Experiment Preparation
4.2. Comparison of the Fused Images from Static Scenes
4.3. Comparison of the Fused Images from Dynamic Scenes
4.4. Objective Evaluation Index
5. Conclusion and Future Work
Author Contributions
Funding
Conflicts of Interest
References
- Mann, S.; Picard, R. Being 'undigital' with digital cameras. MIT Media Lab Perceptual 1994, 1, 2. [Google Scholar]
- Qi, G.; Zhang, Q.; Zeng, F.; Wang, J.; Zhu, Z. Multi-focus image fusion via morphological similarity-based dictionary construction and sparse representation. CAAI Trans. Intell. Technol. 2018, 3, 83–94. [Google Scholar] [CrossRef]
- Jacobs, K.; Loscos, C.; Ward, G. Automatic High-Dynamic Range Image Generation for Dynamic Scenes. IEEE Comput. Graph. Appl. 2008, 28, 84–93. [Google Scholar] [CrossRef] [PubMed]
- DiCarlo, J.M.; Wandell, B.A. Rendering high dynamic range images. In Proceedings of the Electronic Imaging, San Jose, CA, USA, 22–28 January 2000; pp. 392–402. [Google Scholar]
- Debevec, P.E.; Malik, J. Recovering high dynamic range radiance maps from photographs. In Proceedings of the SIGGRAPH ’08: Special Interest Group on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA, 11–15 August 2008; p. 31. [Google Scholar]
- Mertens, T.; Kautz, J.; Van Reeth, F. Exposure Fusion: A Simple and Practical Alternative to High Dynamic Range Photography; Computer Graphics Forum; Wiley Online Library: Hoboken, NJ, USA, 2009; Volume 28, pp. 161–171. [Google Scholar]
- Li, Z.G.; Zheng, J.H.; Rahardja, S. Detail-enhanced exposure fusion. IEEE Trans. Image Process. 2012, 21, 4672–4676. [Google Scholar]
- Li, S.; Kang, X.; Hu, J. Image fusion with guided filtering. IEEE Trans. Image Process. 2013, 22, 2864–2875. [Google Scholar]
- Li, S.; Kang, X. Fast multi-exposure image fusion with median filter and recursive filter. IEEE Trans. Consum. Electron. 2012, 58, 626–632. [Google Scholar] [CrossRef] [Green Version]
- Wang, Z.; Liu, Q.; Ikenaga, T. Visual salience and stack extension based ghost removal for high-dynamic-range imaging. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 2244–2248. [Google Scholar]
- Zhang, W.; Hu, S.; Liu, K. Patch-based correlation for deghosting in exposure fusion. Inf. Sci. 2017, 415, 19–27. [Google Scholar] [CrossRef]
- An, J.; Lee, S.H.; Kuk, J.G.; Cho, N.I. A multi-exposure image fusion algorithm without ghost effect. In Proceedings of the 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, Czech Republic, 22–27 May 2011; pp. 1565–1568. [Google Scholar]
- Pece, F.; Kautz, J. Bitmap movement detection: HDR for dynamic scenes. In Proceedings of the 2010 Conference on Visual Media Production, London, UK, 17–18 November 2010; pp. 1–8. [Google Scholar]
- Hu, J.; Gallo, O.; Pulli, K.; Sun, X. HDR Deghosting: How to Deal with Saturation? In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 1163–1170. [Google Scholar]
- Sen, P.; Kalantari, N.K.; Yaesoubi, M.; Darabi, S.; Goldman, D.B.; Shechtman, E. Robust patch-based HDR reconstruction of dynamic scenes. ACM Trans. Graph. 2012, 31, 203:1–203:11. [Google Scholar] [CrossRef]
- Zhang, W.; Cham, W. Gradient-directed composition of multi-exposure images. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 530–536. [Google Scholar]
- Li, Z.; Zheng, J.; Zhu, Z.; Wu, S. Selectively Detail-Enhanced Fusion of Differently Exposed Images With Moving Objects. IEEE Trans. Image Process. 2014, 23, 4372–4382. [Google Scholar] [CrossRef]
- Nejati, M.; Karimi, M.; Soroushmehr, S.M.R.; Karimi, N.; Samavi, S.; Najarian, K. Fast exposure fusion using exposedness function. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 2234–2238. [Google Scholar]
- Li, H.; Wang, Y.; Yang, Z.; Wang, R.; Li, X.; Tao, D. Discriminative dictionary learning-based multiple component decomposition for detail-preserving noisy image fusion. IEEE Trans. Instrum. Meas. 2019, 69, 1082–1102. [Google Scholar] [CrossRef]
- Zhu, Z.; Yin, H.; Chai, Y.; Li, Y.; Qi, G. A novel multi-modality image fusion method based on image decomposition and sparse representation. Inf. Sci. 2018, 432, 516–529. [Google Scholar] [CrossRef]
- Zhao, Q.; Sbert, M.; Feixas, M.; Xu, Q. Multi-Exposure Image Fusion Based on Information-Theoretic Channel. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 1872–1876. [Google Scholar]
- Shen, J.; Zhao, Y.; Yan, S.; Li, X. Exposure Fusion Using Boosting Laplacian Pyramid. IEEE Trans. Cybern. 2014, 44, 1579–1590. [Google Scholar] [CrossRef] [PubMed]
- Zhu, Z.; Zheng, M.; Qi, G.; Wang, D.; Xiang, Y. A Phase Congruency and Local Laplacian Energy Based Multi-Modality Medical Image Fusion Method in NSCT Domain. IEEE Access 2019, 7, 20811–20824. [Google Scholar] [CrossRef]
- Li, Y.; Sun, Y.; Huang, X.; Qi, G.; Zheng, M.; Zhu, Z. An Image Fusion Method Based on Sparse Representation and Sum Modified-Laplacian in NSCT Domain. Entropy 2018, 20, 522. [Google Scholar] [CrossRef] [Green Version]
- Li, H.; He, X.; Tao, D.; Tang, Y.; Wang, R. Joint medical image fusion, denoising and enhancement via discriminative low-rank sparse dictionaries learning. Pattern Recognit. 2018, 79, 130–146. [Google Scholar] [CrossRef]
- Kinoshita, Y.; Shiota, S.; Kiya, H.; Yoshida, T. Multi-Exposure Image Fusion Based on Exposure Compensation. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 1388–1392. [Google Scholar]
- Prabhakar, K.R.; Babu, R.V. Ghosting-free multi-exposure image fusion in gradient domain. In Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016; pp. 1766–1770. [Google Scholar]
- Liu, Y.; Wang, Z. Dense SIFT for ghost-free multi-exposure fusion. J. Visual Commun. Image Represent. 2015, 31, 208–224. [Google Scholar] [CrossRef]
- Kou, F.; Wei, Z.; Chen, W.; Wu, X.; Wen, C.; Li, Z. Intelligent Detail Enhancement for Exposure Fusion. IEEE Trans. Multimedia 2018, 20, 484–495. [Google Scholar] [CrossRef]
- Ma, K.; Li, H.; Yong, H.; Wang, Z.; Meng, D.; Zhang, L. Robust multi-exposure image fusion: a structural patch decomposition approach. IEEE Trans. Image Process. 2017, 26, 2519–2532. [Google Scholar] [CrossRef]
- Ma, K.; Duanmu, Z.; Yeganeh, H.; Wang, Z. Multi-Exposure Image Fusion by Optimizing A Structural Similarity Index. IEEE Trans. Comput. Imag. 2018, 4, 60–72. [Google Scholar] [CrossRef]
- Li, Y.; Sun, Y.; Zheng, M.; Huang, X.; Qi, G.; Hu, H.; Zhu, Z. A Novel Multi-Exposure Image Fusion Method Based on Adaptive Patch Structure. Entropy 2018, 20, 935. [Google Scholar] [CrossRef] [Green Version]
- Qin, Z.; Fan, J.; Liu, Y.; Gao, Y.; Li, G.Y. Sparse Representation for Wireless Communications: A Compressive Sensing Approach. IEEE Signal Process Mag. 2018, 35, 40–58. [Google Scholar] [CrossRef] [Green Version]
- Liu, S.; Shi, M.; Zhu, Z.; Zhao, J. Image fusion based on complex-shearlet domain with guided filtering. Multidimension. Syst. Signal Process. 2017, 28, 207–224. [Google Scholar] [CrossRef]
- Ma, K.; Duanmu, Z.; Zhu, H.; Fang, Y.; Wang, Z. Deep Guided Learning for Fast Multi-Exposure Image Fusion. IEEE Trans. Image Process. 2020, 29, 2808–2819. [Google Scholar] [CrossRef] [PubMed]
- Wang, K.; Qi, G.; Zhu, Z.; Chai, Y. A Novel Geometric Dictionary Construction Approach for Sparse Representation Based Image Fusion. Entropy 2017, 19, 306. [Google Scholar] [CrossRef] [Green Version]
- Ma, K.; Zeng, K.; Wang, Z. Perceptual Quality Assessment for Multi-Exposure Image Fusion. IEEE Trans. Image Process. 2015, 24, 3345–3356. [Google Scholar] [CrossRef]
- Petrovic, V. Subjective tests for image fusion evaluation and objective metric validation. Inform. Fusion 2007, 8, 208–216. [Google Scholar] [CrossRef]
- Zhu, Z.; Qi, G.; Chai, Y.; Chen, Y. A Novel Multi-Focus Image Fusion Method Based on Stochastic Coordinate Coding and Local Density Peaks Clustering. Future Internet 2016, 8, 53. [Google Scholar] [CrossRef] [Green Version]
- Qu, G. Information measure for performance of image fusion. Electron. Lett. 2002, 38, 313–315. [Google Scholar] [CrossRef] [Green Version]
| APS | DSIFT-EF | EFM | Fast-expo | FMMR | SPD-MEF | FPM |
|---|---|---|---|---|---|---|
| 0.9678 | 0.9573 | 0.9733 | 0.9744 | 0.9557 | 0.9693 | 0.9746 |
| 0.6765 | 0.6973 | 0.6214 | 0.7301 | 0.5623 | 0.7147 | 0.7411 |
| 1.3598 | 1.4552 | 1.1876 | 1.6724 | 1.2157 | 1.8353 | 1.8551 |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).