Abstract
Multi-focus image fusion produces a single, fully focused image from a set of images of the same scene captured under the same imaging conditions but with different focus points. To obtain a clear image that contains all relevant objects in an area, a multi-focus image fusion algorithm based on the wavelet transform is proposed. First, the multi-focus images are decomposed by the wavelet transform. Second, the wavelet coefficients of the approximation and detail sub-images are fused according to the fusion rules. Finally, the fused image is obtained by the inverse wavelet transform. For the low-frequency and high-frequency coefficients, we present fusion rules based on weighted ratios and on a weighted gradient with an improved edge detection operator. The experimental results illustrate that the proposed algorithm is effective in retaining image details.
1 Introduction
As a crucial image processing technology, image fusion has received wide attention in recent years. Image fusion reconstructs a new image that provides richer visual information than the original images; its main purpose is that all of the important salient visual information of the input images should be well preserved in the fused image. Multi-focus image fusion is a process in which the blurred parts of an image become clearer through appropriate image processing. At present, it is difficult for optical imaging systems to keep targets at different distances within the same scene in focus at the same time. Two or more images of the same scene with different focused targets constitute multi-focus images. Multi-focus image fusion was developed to eliminate this defocus: images focused on different objects in the same scene are fused according to a specific algorithm to obtain a clear image of the whole scene. For multi-focus image fusion, two or more images of the same scene taken with different focus settings are combined into a single all-in-focus image with an extended depth of field [17]. As a branch of image fusion, multi-focus image fusion can be widely applied in fields such as machine vision, feature recognition, and medical image processing.
Currently, popular methods for multi-focus image fusion include the weighted average algorithm, principal component analysis, the hue-saturation-intensity transform, the contourlet transform, and the wavelet transform [13, 16, 21, 22]. Existing multi-focus fusion methods can be divided mainly into two groups: spatial domain and transform domain techniques [1]. Spatial domain methods fuse the source images directly by a linear combination; they are, however, often susceptible to noise and misregistration [6]. With good localization in both the time and frequency domains, the wavelet transform can analyze a signal in detail at multiple scales through scaling and translation [4]. Additionally, wavelet analysis can decompose the image layer by layer according to the wavelet basis, expand the image information only to the level required by the processing task, and effectively control the amount of computation [5, 8, 9, 10, 19, 20]. Therefore, multi-focus fusion based on the wavelet transform remains a very active research direction.
Based on the wavelet transform, this paper proposes a multi-focus fusion algorithm that combines a ratio-weighted method and a gradient-weighted method. The features of multi-focus images and the differences between their coefficients are analyzed, and the fusion rules for the low-frequency and high-frequency coefficients are improved.
2 Realization of Two-Dimensional Wavelet Transform
The wavelet transform decomposes an image from fine scales to coarse scales, dividing it into a collection of sub-images. On the first level of decomposition, the original image is divided into one low-frequency sub-image and three high-frequency sub-images. On the second level, the low-frequency sub-image produced on the first level is again divided into one low-frequency sub-image and three high-frequency sub-images, while the three high-frequency sub-images from the first level remain unchanged. The decomposition can continue for a chosen number of levels, producing further collections of sub-images.
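As an illustration of this sub-image structure, the following minimal sketch uses the PyWavelets library (Python) rather than the MATLAB environment used for the experiments in Section 4; the wavelet 'bior2.2' is chosen arbitrarily as one member of the biorthogonal family discussed below:

```python
import numpy as np
import pywt

# Toy image standing in for one of the registered source images.
image = np.random.rand(256, 256)

# Two-level 2D wavelet decomposition. The result is the list
# [cA2, (cH2, cV2, cD2), (cH1, cV1, cD1)]: one low-frequency sub-image
# plus three high-frequency sub-images per decomposition level.
coeffs = pywt.wavedec2(image, wavelet='bior2.2', level=2)

cA2 = coeffs[0]               # level-2 approximation (low frequency)
cH2, cV2, cD2 = coeffs[1]     # level-2 details (horizontal, vertical, diagonal)
cH1, cV1, cD1 = coeffs[2]     # level-1 details, unchanged by the second level
print(cA2.shape, cH2.shape, cH1.shape)
```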
Any given image f(x, y) of size M×N is divided, after each level of the wavelet transform, into four images, each one quarter the size of the original. The four images result from downsampling both rows and columns by a factor of two after taking the inner product of the original image with a wavelet basis image. The first-level wavelet decomposition can be expressed as follows, and further decompositions follow by analogy [3, 11, 23].
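In the standard separable formulation, which matches the symbols ϕ, ψ, j, l, m and n used below, these analysis equations take the form (the exact notation of the original may differ slightly):

\[
W_{\phi}(j, m, n) = \frac{1}{\sqrt{MN}} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y)\, \phi_{j, m, n}(x, y),
\]
\[
W_{\psi}^{l}(j, m, n) = \frac{1}{\sqrt{MN}} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y)\, \psi_{j, m, n}^{l}(x, y), \qquad l = 1, 2, 3.
\]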
Here, ϕ is a scaling function and ψ is a wavelet function; ϕ and ψ satisfy the relations below:
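In the usual separable construction (a standard relationship, stated here under the assumption that the paper follows the common two-dimensional formulation), the two-dimensional scaling and wavelet functions are built from their one-dimensional counterparts and then dilated and translated:

\[
\phi(x, y) = \phi(x)\,\phi(y), \qquad
\psi^{1}(x, y) = \psi(x)\,\phi(y), \qquad
\psi^{2}(x, y) = \phi(x)\,\psi(y), \qquad
\psi^{3}(x, y) = \psi(x)\,\psi(y),
\]
\[
\phi_{j, m, n}(x, y) = 2^{j/2}\,\phi(2^{j}x - m,\; 2^{j}y - n), \qquad
\psi^{l}_{j, m, n}(x, y) = 2^{j/2}\,\psi^{l}(2^{j}x - m,\; 2^{j}y - n).
\]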
Here, j ≥ 0, l = 1, 2, 3, and j, l, m, n are integers. After studying the available wavelet basis functions, the paper chooses the biorNr.Nd biorthogonal spline wavelet basis. This kind of wavelet uses two filter banks: a decomposition (analysis) filter and a reconstruction (synthesis) filter. When a filter has linear phase characteristics, the output signal does not suffer phase distortion. This property is important for the analysis and processing of image signals because visual perception is especially sensitive to phase distortion. Figure 1 shows the low-frequency and high-frequency images produced by the first- and second-level decompositions after the original image has been transformed with the biorNr.Nd wavelet basis [14].
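The linear-phase property follows from the symmetry of the biorthogonal spline filters; this can be checked quickly with PyWavelets (again using 'bior2.2' as an arbitrary member of the family):

```python
import pywt

# One member of the biorNr.Nd family; the filters are symmetric,
# which is what gives the linear-phase (no phase distortion) property.
w = pywt.Wavelet('bior2.2')
print(w.dec_lo)    # decomposition (analysis) low-pass filter
print(w.rec_lo)    # reconstruction (synthesis) low-pass filter
print(w.symmetry)  # 'symmetric'
```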
3 Design of Fusion Rules
The framework of wavelet-based image fusion is shown in Figure 2. First, the source images A and B are decomposed by the wavelet transform. Second, the low-frequency and high-frequency coefficients produced by the decomposition are fused separately with different fusion rules, producing new low-frequency and high-frequency coefficients. Finally, the fused coefficients are reconstructed by the inverse wavelet transform to produce a new image, which is the fused image.
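A skeleton of this pipeline can be sketched as follows (a hypothetical Python/PyWavelets illustration, not the paper's MATLAB implementation; simple placeholder rules are used here, and Sections 3.1 and 3.2 describe the rules actually proposed):

```python
import numpy as np
import pywt

def fuse_wavelet(img_a, img_b, wavelet='bior2.2', level=2):
    """Fuse two registered source images in the wavelet domain."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)

    fused = []
    # Low-frequency (approximation) sub-band: placeholder simple average;
    # the paper replaces this with the ratio-weighted rule of Section 3.1.
    fused.append((ca[0] + cb[0]) / 2.0)

    # High-frequency (detail) sub-bands: placeholder max-absolute selection;
    # the paper replaces this with the gradient-weighted rule of Section 3.2.
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in ((ha, hb), (va, vb), (da, db))))

    # Inverse wavelet transform reconstructs the fused image.
    return pywt.waverec2(fused, wavelet)

# Example usage with toy data standing in for registered multi-focus images.
a = np.random.rand(256, 256)
b = np.random.rand(256, 256)
f = fuse_wavelet(a, b)
```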
3.1 Fusion Algorithm of Low-Frequency Coefficient Based on Ratio Weighted Analysis
The low-frequency coefficients produced by the wavelet transform preserve the overall appearance and basic information of the original image; the low-frequency image reflects its average characteristics. At present, most image fusion methods simply average the low-frequency coefficients, which reduces the image contrast to a certain extent. To retain more valid information, the paper adopts ratio weighting as the fusion rule for the low-frequency coefficients, which also allows edge information to be reflected in the result. The fusion rule for the low-frequency coefficients is described below.
Let CA(m, n) and CB(m, n) be the low-frequency coefficients of the two images to be fused, and let CF(m, n) be the low-frequency coefficient of the fused image. Considering that the low-frequency sub-band varies slowly and carries most of the overall image content, the ratio-weighted fusion of the low-frequency sub-band coefficients uses a weighting coefficient a.
The fused low-frequency coefficient CF(m, n) is then obtained by combining CA(m, n) and CB(m, n) according to this ratio weighting.
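A plausible sketch of such a ratio-weighted rule, assuming the weight a is taken as the relative magnitude of the two low-frequency coefficients (the paper's exact expression may differ), is:

```python
import numpy as np

def fuse_low_ratio(ca, cb, eps=1e-12):
    """Hypothetical ratio-weighted fusion of two low-frequency sub-bands.

    The weight a is assumed to be |CA| / (|CA| + |CB|); the fused
    coefficient is then CF = a * CA + (1 - a) * CB.
    """
    a = np.abs(ca) / (np.abs(ca) + np.abs(cb) + eps)
    return a * ca + (1.0 - a) * cb

# Example usage on toy approximation sub-bands.
ca = np.random.rand(64, 64)
cb = np.random.rand(64, 64)
cf = fuse_low_ratio(ca, cb)
```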
3.2 Fusion Algorithm of High-Frequency Coefficient Based on the Weighted Analysis of Improved Edge Detection Operator Gradient
The high-frequency coefficients produced by the wavelet transform reflect the edge features and gray-level discontinuities of the original image; they carry much of the detail, including edges, boundary information, and abrupt changes in pixel luminance. A simple fusion rule for the high-frequency coefficients is to select the coefficient with the greater value. However, this rule ignores the influence of the surrounding pixels on the central pixel and can easily select a large noise value as the high-frequency coefficient, which degrades the fusion quality. In fact, the human visual system is highly sensitive to features such as boundaries and orientations, and relatively insensitive to the absolute brightness at each location. Therefore, this paper adopts gradient weighting with an improved edge detection operator as the fusion rule for the high-frequency coefficients.
To capture the influence of the surrounding pixels on the central pixel, a 3×3 pixel neighborhood is selected in the high-frequency sub-bands and a gradient operation is performed over this neighborhood. The gradient operator chosen is an improved Sobel edge detection operator. The classic Sobel operator computes gradients only from the differences of neighboring pixels in the horizontal and vertical directions and ignores the relations of neighboring pixels in other directions, which can lose some edge information [2, 18]. Therefore, templates at 45° and 135° are added to the Sobel operator [4, 12] and included in the gradient operation; these templates are shown in Figure 3. The gradient magnitude can be expressed as a norm of the vector [Gx, Gy, G45°, G135°]T. The paper uses the 1-norm as the gradient magnitude, i.e. G = |Gx| + |Gy| + |G45°| + |G135°|, where Gx and Gy are the gradient magnitudes of the image in the horizontal and vertical directions.
Suppose that the gradient values of the two images are GA(i, j) and GB(i, j), and let T be an empirical constant [7]. When the difference between the two gradient values is larger than T, the image with the larger gradient value is considered to carry more luminance-change information, so the gray value of the high-frequency coefficient with the larger gradient is retained. When the difference is smaller than T, the two images contain similar detail; in that case, denoting the high-frequency coefficients of the two images by FA(i, j) and FB(i, j), the weighted gradient fusion method combines them to obtain the fused coefficient FF(i, j).
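A sketch of this rule is given below. The 45° and 135° templates are assumed to be the standard diagonal Sobel kernels (the exact templates are those of Figure 3), the 1-norm gradient G = |Gx| + |Gy| + |G45°| + |G135°| is computed over each 3×3 neighborhood, and the threshold t stands in for the empirical constant T:

```python
import numpy as np
from scipy.ndimage import convolve

# Sobel templates: horizontal, vertical, and assumed 45/135-degree diagonals.
KX   = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY   = KX.T
K45  = np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], dtype=float)
K135 = np.array([[2, 1, 0], [1, 0, -1], [0, -1, -2]], dtype=float)

def gradient_magnitude(c):
    """1-norm gradient G = |Gx| + |Gy| + |G45| + |G135| of a sub-band."""
    return sum(np.abs(convolve(c, k, mode='nearest'))
               for k in (KX, KY, K45, K135))

def fuse_high_gradient(fa, fb, t=0.5):
    """Hypothetical gradient-weighted fusion of two detail sub-bands.

    Where the gradients differ by more than the empirical threshold t,
    the coefficient with the larger gradient is kept; elsewhere the two
    coefficients are blended with gradient-based weights.
    """
    ga, gb = gradient_magnitude(fa), gradient_magnitude(fb)
    wa = ga / (ga + gb + 1e-12)
    blended = wa * fa + (1.0 - wa) * fb        # similar detail: weighted blend
    selected = np.where(ga >= gb, fa, fb)      # clear winner: keep larger gradient
    return np.where(np.abs(ga - gb) > t, selected, blended)

# Example usage on toy detail sub-bands.
fa = np.random.rand(64, 64)
fb = np.random.rand(64, 64)
ff = fuse_high_gradient(fa, fb)
```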
4 Analysis of the Experiment Outcomes and Evaluation of Indicators
To test the validity of the proposed algorithm, three groups of multi-focus images are chosen for a simulation experiment, and the algorithm is compared mainly against common fusion rules based on the wavelet transform. Figure 4 shows the multi-focus fusion result of the improved algorithm proposed in this paper. The fusion rule used in Figure 5 averages the low-frequency coefficients and takes the greatest absolute value for the high-frequency coefficients. The fusion rule used in Figure 6 takes the greatest absolute value for both the low-frequency and high-frequency coefficients. Figure 7 shows the fusion result of the contourlet transform method, and Figure 8 shows the result of the non-subsampled contourlet transform (NSCT) method. All experiments are carried out in MATLAB R2015b (MathWorks, USA) on a PC with an Intel(R) Core(TM) i7-5820K CPU at 3.30 GHz and a Titan X GPU (NVIDIA, USA).
The paper uses information entropy, mutual information, cross entropy, average gradient, peak signal-to-noise ratio, and running time as objective indicators to evaluate the fusion algorithms [12, 15]. Information entropy is defined as follows.
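In the standard Shannon form, with the symbols explained below, the information entropy is:

\[
E = -\sum_{i=1}^{n} p_{i} \log_{2} p_{i}
\]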
where pi is the probability of gray level i occurring in the given image and n is the total number of gray levels.
The mutual information between the fused image F and the original image A can be defined as follows.
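A standard form, using the joint and marginal gray-level distributions described below, is:

\[
MI_{FA} = \sum_{i=1}^{n} \sum_{j=1}^{n} \gamma_{i,j} \log_{2} \frac{\gamma_{i,j}}{p_{i}\, q_{j}}
\]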
where γi,j is the joint probability distribution function of images F and A. The gray distribution of image F is p={p1, p2, …, pi, …, pn}. The gray distribution of image A is q={q1, q2, …, qi, …, qn}.
The mutual information between the fused image F and the original images A and B can be defined as follows.
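This is commonly taken as the sum of the two pairwise mutual information values; the exact combination used in the original may differ:

\[
MI_{F}^{AB} = MI_{FA} + MI_{FB}
\]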
The cross entropy for fusion image F and original image A over a given set is defined as follows:
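In the form commonly used in the image fusion literature, with p and q the gray-level distributions defined below, the cross entropy is:

\[
CE_{FA} = \sum_{i=1}^{n} p_{i} \log_{2} \frac{p_{i}}{q_{i}}
\]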
The gray-level distribution of image F is p={p1, p2, …, pi, …, pn}, and the gray-level distribution of image A is q={q1, q2, …, qi, …, qn}. The total cross entropy over both original images is defined as follows.
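A common choice, assumed here, is the arithmetic mean of the two pairwise cross entropies (some works use a root-mean-square combination instead):

\[
CE = \frac{CE_{FA} + CE_{FB}}{2}
\]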
The average gradient is defined as follows.
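A widely used definition, assumed here, with the fused image F of size M×N and horizontal and vertical first differences, is:

\[
\bar{G} = \frac{1}{(M-1)(N-1)} \sum_{i=1}^{M-1} \sum_{j=1}^{N-1}
\sqrt{\frac{\big(F(i+1, j) - F(i, j)\big)^{2} + \big(F(i, j+1) - F(i, j)\big)^{2}}{2}}
\]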
Among these objective indicators, a larger entropy means that the fused image contains more information. A larger mutual information means that the fused image extracts more information from the original images, i.e. the fusion result is better. A smaller cross entropy means a smaller difference between the images. The average gradient reflects minor differences in detail and texture changes; the larger the average gradient, the clearer the image. Table 1 evaluates the fused images of the multi-focus images produced by the different fusion rules (taking the second group of images for each algorithm as an example).
Fusion rules | Information entropy | Mutual information | Cross entropy | Average gradient | Peak signal-to-noise ratio (dB) | Running time (s) |
---|---|---|---|---|---|---|
Low-frequency coefficient (average), high-frequency coefficient (greatest) | 6.7801 | 6.3501 | 0.0719 | 4.6903 | 71.1651 | 1.5241 |
Low-frequency coefficient (greatest), high-frequency coefficient (greatest) | 6.7869 | 6.3048 | 0.0810 | 4.6896 | 68.9566 | 1.5003 |
Contourlet transform method | 6.7919 | 6.3776 | 0.0728 | 4.6997 | 72.4750 | 1.7887 |
NSCT transform method | 6.7989 | 6.4016 | 0.0716 | 4.7003 | 72.8485 | 3.0930 |
Proposed algorithm | 6.8007 | 6.4105 | 0.0712 | 4.7001 | 73.0062 | 1.6332 |
Comparing and analyzing the experimental results shows that the fused image produced by the proposed algorithm has a better visual effect than the fused images produced by the other fusion algorithms. The traditional rule of taking the greater absolute value can lose much of the information in the original images that is related to visual features. According to the objective evaluation indicators, the rules that take the average or the greater value of the low-frequency coefficients and the greater value of the high-frequency coefficients do not perform as well, on the five objective indicators, as the proposed combination of ratio-weighted fusion of the low-frequency coefficients and gradient-weighted fusion of the high-frequency coefficients based on the improved edge detection operator. For image fusion by taking the greater value, the number of decomposition levels n determines the decomposition type: wavelet decomposition if n is zero, and pyramid decomposition otherwise.
5 Conclusion
The multi-resolution property of the wavelet transform allows images to be decomposed on different scales, so that outline and detail information of the target image can be obtained at different levels. Moreover, during the wavelet transform, the decomposition equations and filter coefficients remain the same between neighboring scales. Through the wavelet transform, a multi-focus image yields detail and edge information in different directions, providing more information for multi-focus image fusion. In addition, the wavelet transform concentrates signal energy, which helps decrease the time and space complexity of the transform and gives it a comparative advantage in multi-focus image fusion applications. The paper applies the wavelet transform to multi-focus image fusion and proposes a weighted fusion algorithm based on ratio weighting and on the gradient of an improved edge detection operator, and applies this algorithm in a multi-focus image fusion experiment. Evaluation with objective indicators and comparison with traditional fusion algorithms show that the algorithm can effectively fuse multi-focus image information. The evaluation with entropy, peak signal-to-noise ratio, and average gradient shows that the proposed fusion rules preserve more image information while producing clearer fused images. As the fusion algorithms used for multi-focus image fusion differ, establishing a rational and optimized multi-focus image fusion system and evaluation system remains a key direction for future research.
Acknowledgments
The study was supported by a research project of Zhejiang Province Department of Education (Y201430709), by the Zhejiang Province Natural Science Foundation of China under grant no. LY14F020032, by the Science and Technology Plan Projects of Wenzhou City (no. G20150021), by the National Natural Science Foundation of China (no. 51305255), and by the Natural Science Foundation of Shanghai City (no. 13ZR1455900).
Bibliography
[1] X. Bai, Y. Zhang, F. Zhou and B. Xue, Quadtree-based multi-focus image fusion using a weighted focus-measure, Inf. Fus. 22 (2015), 105–118. doi:10.1016/j.inffus.2014.05.003.
[2] J. H. Cai and W. W. Hu, Feature extraction of gear fault signal based on Sobel operator and WHT, Shock Vibrat. 20 (2013), 551–559. doi:10.1155/2013/367045.
[3] L. Guo, H. H. Li and Y. S. Bao, Image fusion, pp. 183–248, Electronic Industry Press, Beijing, 2008.
[4] J. Li and L. Chang, A SAR image compression algorithm based on Mallat tower-type wavelet decomposition, Optik – Int. J. Light Electron Optics 126 (2015), 3982–3986. doi:10.1016/j.ijleo.2015.07.196.
[5] Y. Liu, J. Jin, Q. Wang, Y. Shen and X. Dong, Region level based multi-focus image fusion using quaternion wavelet and normalized cut, Signal Process. 97 (2014), 9–30. doi:10.1016/j.sigpro.2013.10.010.
[6] X. Luo, Z. Zhang, C. Zhang and X. Wu, Multi-focus image fusion using HOSVD and edge intensity, J. Vis. Commun. Image Represent. 45 (2017), 46–61. doi:10.1016/j.jvcir.2017.02.006.
[7] Q. Q. Meng, G. Yang, T. Tong and J. F. Zhang, Fusion algorithm of multifocus images based on wavelet transform, Remote Sens. Land Resour. 26 (2014).
[8] M. F. Shen, Z. F. Su, J. Y. Yang and L. S. Sun, An image fusion algorithm based on redundant wavelet transform, Appl. Mech. Mater. 687–691 (2014), 3656–3661. doi:10.4028/www.scientific.net/AMM.687-691.3656.
[9] R. Singh and R. S. Dhanoa, Development of multi-focus image fusion technique using discrete wavelet transform (DWT) for digital images, Int. J. Eng. Sci. Res. Technol. 3 (2014), 1–5.
[10] J. Tian and L. Chen, Adaptive multi-focus image fusion using a wavelet-based statistical sharpness measure, Signal Process. 92 (2012), 2137–2146. doi:10.1016/j.sigpro.2012.01.027.
[11] H. Wang and J. F. Ma, Digital image analysis and pattern recognition, pp. 100–102, Science Press, Beijing, 2011.
[12] Z. Wang and A. C. Bovik, A universal image quality index, IEEE Signal Process. Lett. 9 (2002), 81–84. doi:10.1109/97.995823.
[13] J. Xiao, T. Liu, Y. Zhang, B. Zhou, J. Lei and Q. Li, Multi-focus image fusion based on depth extraction with inhomogeneous diffusion equation, Signal Process. 125 (2016), 171–186. doi:10.1016/j.sigpro.2016.01.014.
[14] H. Xuan, Y. Zhuangzhi and L. Shupeng, Wavelet-based analysis of masses in digital mammograms, J. Shanghai Univ. (Nat. Sci.) 6 (2000), 538–540.
[15] C. S. Xydeas and V. Petrovic, Objective image fusion performance measure, Electron. Lett. 36 (2000), 308–309. doi:10.1049/el:20000267.
[16] X. Yan, H. Qin, J. Li, H. Zhou and T. Yang, Multi-focus image fusion using a guided-filter-based difference image, Appl. Optics 55 (2016), 2230–2238. doi:10.1364/AO.55.002230.
[17] X. Yan, H. Qin and J. Li, Multi-focus image fusion based on dictionary learning with rolling guidance filter, J. Opt. Soc. Am. A Optics Image Sci. Vis. 34 (2017), 432. doi:10.1364/JOSAA.34.000432.
[18] W. Yang, X. Wang, B. Moran, A. Wheaton and N. Cooley, Efficient registration of optical and infrared images via modified Sobel edging for plant canopy temperature estimation, Comput. Elect. Eng. 38 (2012), 1213–1221. doi:10.1016/j.compeleceng.2012.05.014.
[19] Y. Yang, S. Huang, J. Gao and Z. Qian, Multi-focus image fusion using an effective discrete wavelet transform based algorithm, Measure. Sci. Rev. 14 (2014), 102–108. doi:10.2478/msr-2014-0014.
[20] T. Yong-Zheng, Research on medical image fusion based on improved redundant complex wavelet transform, J. Chem. Pharmaceut. Res. 6 (2014), 823–830.
[21] B. Yu, B. Jia, L. Ding, Z. Cai, Q. Wu, R. Law, J. Huang, L. Song and S. Fu, Hybrid dual-tree complex wavelet transform and support vector machine for digital multi-focus image fusion, Neurocomputing 182 (2016), 1–9. doi:10.1016/j.neucom.2015.10.084.
[22] Q. Zhang and M. D. Levine, Robust multi-focus image fusion using multi-task sparse representation and spatial context, IEEE Trans. Image Process. 25 (2016), 2045–2058. doi:10.1109/TIP.2016.2524212.
[23] X. C. Zhao, H. He and Y. C. Miu, MATLAB digital image processing of actual combat, pp. 75–93, China Machine Press, Beijing, 2013.
©2019 Walter de Gruyter GmbH, Berlin/Boston
This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.