PATCH-BASED OUT-OF-FOCUS BLUR RECONSTRUCTION
G. Fyaz, Department of ECE, Adhiparasakthi Engineering College, Melmaruvathur, Tamil Nadu, India, Fyaz2402@gmail.com
Dr. V. Nagarajan, Vice Principal, Adhiparasakthi Engineering College, Melmaruvathur, Tamil Nadu, India, Nagarajanece31@rediffmail.com
Abstract—A multi-exposure and multi-focus image fusion algorithm is proposed. The algorithm is developed for color images and is based on blending the gradients of the luminance components of the input images, using the maximum gradient magnitude at each pixel location, and then obtaining the fused luminance with a Haar wavelet-based image reconstruction technique. This image reconstruction algorithm is of O(N) complexity and includes a Poisson solver at each resolution to eliminate artifacts that may appear due to the non-conservative nature of the resulting gradient. The fused chrominance, on the other hand, is obtained as a weighted mean of the chrominance channels. The particular case of grayscale images is treated as luminance fusion. Experimental results and comparisons with other fusion techniques indicate that the proposed algorithm is fast and produces similar or better results than existing techniques for both multi-exposure and multi-focus images.

Keywords—Multi-focus and multi-exposure, out-of-focus blurring, frame enhancement
I. Introduction

In applications such as computer vision, medical imagery, photography and remote sensing, there is a need for algorithms that merge the information acquired by either single or multiple image sensors at the same or different time instants. Generally speaking, image fusion integrates information from a stack of images into a single image that has more details than the individual images. In static image fusion, it is assumed that the input images are aligned and that there are no differences in terms of depth or viewpoint of the imaged scenes. In dynamic image fusion, the imaged scenes in the input images contain some perturbations and are not exactly the same in terms of depth or viewpoint. Many researchers [1, 2] tend to first identify the perturbations and then align all the images by image registration, producing a static sequence of images with similar geometry; after registration, the algorithms for static fusion can be applied to these images. There are also algorithms in which the two steps of registration and fusion are integrated. Such algorithms can handle motion of some of the objects in the source images, provided that the position of the camera is kept constant.

Static image fusion algorithms can be classified, in terms of the way in which the image is processed, into pixel-based [3, 4] or region-based [5, 6] algorithms. In pixel-based methods, the simplest way of obtaining a fused image is by taking a weighted sum at each pixel location, with weights depending on some function of the input images (see the sketch after this section). In region-based techniques, the input images are represented in a multi-resolution framework, using pyramid or wavelet transforms, and then operations such as taking the maximum or averaging the resulting coefficients are used to integrate the information into a more comprehensive image. A multi-resolution singular value decomposition (SVD)-based image fusion technique has also been proposed: the images to be fused are decomposed into approximation and detail coefficients, in a structure similar to that of wavelet decomposition; then, at each decomposition level, the largest absolute values of the detail coefficients are selected, and an average of the approximation coefficients is used to obtain the fused image. [Zheng et al (2007)] proposed a fusion rule based on principal component analysis (PCA) for multi-scale decomposed images. [Lewis et al (2005)] presented a comparative study of pixel- and region-based fusion and indicated that in most cases the region-based techniques provide better results.

Image fusion can be applied to multi-focus or multi-exposure images. In the multi-focus case, the input images are those in which only some portion of the image is well focused, whereas other portions appear blurred. [Haghighat et al (2011)] proposed a multi-focus image fusion technique that operates in the discrete cosine transform (DCT) domain: the variance of the 8x8 DCT coefficients of each image is computed, and the fused blocks are those having the highest variance of DCT coefficients. [Song et al (2006)] proposed a wavelet decomposition-based algorithm for multi-focus image fusion; they fuse the wavelet coefficients using an activity measure that depends on the gradients of the wavelet coefficients. A multi-resolution approach was also adopted in the algorithms of [Biswas et al (2015)], where a survey of multi-focus image fusion techniques can be found. More recent research makes use of edge detection techniques for color image fusion.
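To make the pixel-based approach above concrete, the following sketch (in Python with NumPy/SciPy, tools this paper does not itself use) fuses an aligned grayscale stack with a per-pixel weighted sum. The local-contrast weighting function and the function name are illustrative assumptions, not the method of any cited reference.

```python
import numpy as np
from scipy.ndimage import laplace

def weighted_sum_fusion(stack):
    """Pixel-based fusion: per-pixel weighted sum of an aligned image stack.

    stack: float array of shape (N, H, W), values in [0, 1].
    The weights here are a simple local-contrast measure (absolute
    Laplacian response); this choice is illustrative only.
    """
    # One weight map per image, derived from local contrast.
    weights = np.stack([np.abs(laplace(img)) for img in stack])
    weights += 1e-12                      # avoid division by zero
    weights /= weights.sum(axis=0)        # weights sum to 1 at each pixel
    return (weights * stack).sum(axis=0)  # weighted sum at each pixel location
```

Any per-pixel quality measure (variance, saturation, entropy) could replace the Laplacian response without changing the structure of the method.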
II. Literature Survey

[C.-T. Shen et al (2012)], "Spatially-varying out-of-focus image deblurring with L1-2 optimization and a guided blur map," Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., pp. 1069–1072, proposed to first estimate a blur map by combining a modified local contrast prior with the guided filter; second, to deblur the whole out-of-focus blurry input with L1-2 optimization, scale by scale, obtaining a set of deblurred images as output candidates; and, in the last step, to select, according to the blur map, the deblurred pixels that reconstruct the all-in-focus output image. The experimental results show that the proposed method outperforms the existing space-invariant deblurring methods of [Q. Shan et al (2010)] and [N. Wiener (1949)], respectively. Moreover, the proposed method still outperforms the spatially varying approach that uses the cascaded combination of a blur map and the image deblurring method of [Q. Shan et al (2008)].

[Y.-W. Tai et al (2011)], "Detail recovery for single-image defocus blur," observed that optical imaging systems have a limited depth of field, which may lead to defocus blur. Most blind deconvolution algorithms focus on estimating shift-invariant point spread functions (PSFs), or shift-varying PSFs that can be treated as projections of a globally constant blur descriptor caused by camera shake. However, estimating defocus blur is a challenging task, mainly because the corresponding PSFs are spatially varying and cannot be represented by any global descriptor. Indeed, spatially varying defocus PSFs for a given camera can be pre-calibrated and described through a simple model (e.g., disc or Gaussian) characterized by a single scale parameter (radius, standard deviation, etc.).

[W. H. Richardson et al (1972)], "Bayesian-based iterative method of image restoration," noted that motion blur caused by camera shake has been one of the prime causes of poor image quality in digital imaging, especially when using a telephoto lens or long shutter speeds. In the past, many researchers have worked on recovering clear images from such motion-blurred images.

[M. W. Tao, J. Malik et al (2013)], "Sharpening out-of-focus images using high-frequency transfer," noted that camera motion during longer exposure times, e.g., in low-light situations, is a common problem in handheld photography; it causes image blur that destroys details in the captured photo. Single-image blind deconvolution, or motion deblurring, aims at restoring the sharp latent image from its blurred picture without knowing the camera motion that took place during the exposure. Blind deconvolution has several challenging aspects: modeling the image formation process, formulating tractable priors based on natural image statistics, and devising efficient optimization methods.

The patch-based out-of-focus blur reconstruction method builds on the idea of utilizing the information sharing and complementation of consecutive frames to recover the lost details in the input. For a blurry frame, the blurry patches may be clear in other frames; given a blurry patch and its corresponding clear patches in the surrounding frames, the clear ones can be used to reconstruct the blurry one.

III. Objective

The objective is a multi-focus image fusion method whose core idea is to utilize the information sharing and complementation of consecutive frames to recover the lost details in the input. For a blurry frame, the blurry patches may be clear in other frames; given a blurry patch and its corresponding clear patches in the surrounding frames, the clear ones can be used to reconstruct the blurry one.

IV. Related Works

In the multi-exposure case, the input images have different exposures. These images have details only in a part of the image, while the rest of the image is either under- or over-exposed. Fusion of such images is done to integrate the details from all images into a single, more comprehensive result. [Mertens et al (2007)] proposed such an algorithm, in which the images are decomposed into Laplacian pyramids and then combined at each level using weights that depend on the contrast, saturation and well-exposedness of the given images. A technique for image contrast enhancement using image fusion has been presented in Ref. 18 and is similar to Ref. 17; in Ref. 18, the input images to the fusion algorithm are obtained from the original image after applying local and/or global enhancements. [Shen et al (2011)] use a probabilistic model based on local contrast and color consistency to combine multi-exposure images. [Li et al (2012)] fuse the multi-exposure images using a weighted-sum methodology based on local contrast, brightness and color dissimilarity; they use a pixel-based method instead of a multi-resolution approach to increase the speed of execution. In Ref. 20, the input images are first divided into blocks, and the blocks with maximum entropy are used to obtain the fused image; a genetic algorithm (GA) is used to optimize the block size, which may require a considerable amount of time to converge.
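As a rough illustration of the exposure-weighting idea of [Mertens et al (2007)], the sketch below computes per-pixel saturation and well-exposedness weights and blends the stack with a plain weighted average. The original method additionally uses a contrast term and Laplacian-pyramid blending, so this is a deliberately reduced approximation, not the published algorithm.

```python
import numpy as np

def exposure_weights(rgb, sigma=0.2):
    """Per-pixel quality weights in the spirit of Mertens et al. (2007).

    rgb: float array (H, W, 3) in [0, 1]. Returns an (H, W) weight map
    combining saturation and well-exposedness. The full method also uses
    a contrast term and blends with Laplacian pyramids; this is a
    deliberately reduced sketch.
    """
    # Saturation: standard deviation across the color channels.
    saturation = rgb.std(axis=2)
    # Well-exposedness: Gaussian closeness of each channel to mid-gray 0.5.
    well_exposed = np.exp(-((rgb - 0.5) ** 2) / (2 * sigma ** 2)).prod(axis=2)
    return saturation * well_exposed + 1e-12

def fuse_exposures(stack):
    """Naive per-pixel weighted average of a multi-exposure stack (N, H, W, 3)."""
    w = np.stack([exposure_weights(img) for img in stack])  # (N, H, W)
    w /= w.sum(axis=0)
    return (w[..., None] * stack).sum(axis=0)
```

The pyramid blending step in the full method exists precisely because such naive per-pixel averaging can produce halos at strong exposure boundaries.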
Image fusion in the gradient domain has also been studied by several researchers. Socolinsky proposed an image fusion approach that integrates information from a multi-spectral image dataset to produce a one-band visualization of the image. They generalize image contrast, which is closely related to image gradients, by defining it for multi-spectral images in terms of differential geometry, and they use this contrast information to reconstruct the optimal gradient vector field from which the fused image is produced. Later, [Wang et al (2006)] fused the images in the gradient domain using weights that depend on local variations in intensity of the input images: at each pixel position, they construct an importance-weighted contrast matrix; the square root of the largest eigenvalue of this matrix yields the fused gradient magnitude, and the corresponding eigenvector gives the direction of the fused gradient. Recently, [Hara et al (2014)] used an inter-image weighting scheme to optimize the weighted sum of the gradient magnitudes and then reconstructed the fused gradients to produce the fused image. The optimization step tends to slow down this technique; additionally, it relies on a manually thresholded intra-image weight saliency map, requiring user intervention. An interesting block-based approach was recently proposed by Ma and Wang in Ref. 24. This approach is unique in the way it processes color images: the RGB color channels of an image are processed together, and the images are instead split into three conceptually independent components, namely signal strength, signal structure and mean intensity [24]. This idea was inspired by the increasingly popular structural similarity (SSIM) index [25], developed by the same main author as an objective measure of similarity between two images.

In this paper, a gradient-based image fusion algorithm is proposed. The algorithm works for the fusion of both color and grayscale images. In the case of color images, one of its key ideas is that it treats the luminance and chrominance channels of the images to be fused in a different manner. This different treatment of the channels is motivated by the fact that the luminance channel contains a major part of the information about image details and contrast, whereas the chrominance channels contain only color information, to which the human visual system is less sensitive. The fusion of the luminance channels is done in the gradient domain, by taking the gradients with the maximal magnitude of the input images at each pixel location. The luminance channel of the fused image is then obtained by integrating the fused gradients. This is done using a wavelet-based method [26], which includes a Poisson solver [27] at each resolution. This algorithm is known [28] to produce good results, free from artifacts, when the gradient field is non-conservative, as is the case when gradients of different images are combined. Next, for the chrominance part of the color images, fusion is done by taking a weighted sum of the input chrominance channels, with weights depending on the channel intensities, which convey the color information; a sketch of this step is given below. Grayscale images may be dealt with in the same way as the luminance component of color images. The proposed algorithm can be applied to multi-exposure as well as multi-focus images. The rest of the paper is organized as follows: Section V presents the proposed algorithm, followed by experimental results and comparisons with other image fusion algorithms; the main conclusions are drawn in the concluding section.
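A minimal sketch of the chrominance step follows, assuming a concrete weighting by distance from the neutral chroma value; the paper specifies only that the weights depend on the channel intensities, so this particular weight is an assumption made for illustration.

```python
import numpy as np

def fuse_chrominance(chroma_stack, neutral=0.5, eps=1e-12):
    """Weighted mean of one chrominance channel (Cb or Cr) across a stack.

    chroma_stack: float array (N, H, W) with values in [0, 1], where
    `neutral` (0.5 here) is the no-color value. Weighting by distance
    from neutral chroma is an assumed concrete choice: pixels carrying
    stronger color information contribute more to the fused channel.
    """
    weights = np.abs(chroma_stack - neutral) + eps
    weights /= weights.sum(axis=0)          # normalize per pixel
    return (weights * chroma_stack).sum(axis=0)
```

The same function is applied independently to the Cb and Cr stacks; because chrominance carries no fine detail, a simple weighted mean suffices here, unlike the gradient-domain treatment of luminance.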
V. Proposed Algorithm

A. Image Fusion in the Gradient Domain

In this section, a new image fusion algorithm is proposed. The proposed algorithm can be applied to fuse a sequence of either color or grayscale images (minimum two images). A flowchart of the algorithm in its most general case (i.e., fusion of multiple color images) is illustrated in Fig. 1. The proposed algorithm operates in the YCbCr color space. The luminance (Y) channel represents the image brightness information, and it is in this channel where variations and details are most visible, since the human visual system is more sensitive to luminance (Y) than to chrominance (Cb, Cr). This important observation has two main consequences for the proposed fusion algorithm. Firstly, it indicates that the fusion of the luminance and chrominance channels should be done in a different manner, and that it is in the luminance channel where the most advanced part of the fusion is to be performed. Secondly, it reveals that the same procedure used for luminance channel fusion can be used to fuse single-channel images (i.e., images in grayscale representation). In what follows, the proposed luminance fusion technique is described, followed by chrominance fusion.

B. Luminance Fusion

As mentioned in the previous sections, luminance fusion can be carried out on grayscale images, or on color images that are in the YCbCr color coordinate system. If the input images are in RGB representation, conversion to YCbCr should be performed first. Luminance fusion is performed in the gradient domain. This domain choice is motivated by the fact that the image gradient depicts information on detail content, to which the human visual system is more sensitive under certain illumination conditions. For example, a blurred, over- or under-exposed region in an image will have a much lower gradient magnitude of the luminance channel than the same region in an image with better focus or exposure. This observation implies that taking the gradients with the maximal magnitude at each pixel position will lead to an image which has much more detail than any other image in the stack.

Let the luminance channels of a stack of N input images be I = {I_1, I_2, ..., I_N}, where N ≥ 2. According to a commonly employed discretization model, the gradient of the luminance channel of the nth image in the stack may be defined as

\Phi_n^x(x, y) = I_n(x+1, y) - I_n(x, y) \quad (1)

\Phi_n^y(x, y) = I_n(x, y+1) - I_n(x, y) \quad (2)

where \Phi_n^x and \Phi_n^y are the gradient components in the x- and y-directions. The magnitude of the gradient may be defined as

|\Phi_n(x, y)| = \sqrt{\Phi_n^x(x, y)^2 + \Phi_n^y(x, y)^2} \quad (3)

Let the image number having the maximum gradient magnitude at the pixel location (x, y) be p(x, y). It may be mathematically represented as

p(x, y) = \arg\max_{n \in \{1, \ldots, N\}} |\Phi_n(x, y)| \quad (4)

Using (4), the fused luminance gradient may be represented as

\Phi(x, y) = [\Phi_{p(x,y)}^x(x, y), \; \Phi_{p(x,y)}^y(x, y)]^T \quad (5)

where \Phi_{p(x,y)}^x(x, y) and \Phi_{p(x,y)}^y(x, y) denote the values of the x and y gradient components of the image with index p(x, y) at pixel position (x, y). So, the fused luminance gradient is \Phi = [\Phi^x, \Phi^y]^T. It may be noted that the fused luminance gradient has details from all the luminance channels in the stack, and in order to get the fused luminance channel, reconstruction from the gradient domain is required. The relationship between the fused gradient \Phi and the fused luminance channel I may be represented as

\nabla I = \Phi \quad (6)

which is solved for I (up to an additive constant) via the Poisson equation \nabla^2 I = \mathrm{div}\,\Phi.
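The following sketch illustrates the luminance fusion of Eqs. (1)-(5) together with a reconstruction step. As a stand-in for the Haar wavelet-based reconstruction with a Poisson solver at each resolution [26, 27], it solves a single FFT-based Poisson equation with periodic boundaries; that solver choice is an assumption made only for this illustration.

```python
import numpy as np

def fuse_luminance(lum_stack):
    """Gradient-domain luminance fusion following Eqs. (1)-(5).

    lum_stack: float array (N, H, W). Forward-difference gradients are
    taken per image, the gradient with maximal magnitude is selected at
    each pixel, and the fused luminance is recovered by solving
    laplacian(I) = div(Phi) with an FFT-based Poisson solver.
    """
    # Forward-difference gradients, Eqs. (1)-(2) (periodic wrap via roll).
    gx = np.roll(lum_stack, -1, axis=2) - lum_stack
    gy = np.roll(lum_stack, -1, axis=1) - lum_stack
    mag = np.sqrt(gx ** 2 + gy ** 2)                   # Eq. (3)
    p = mag.argmax(axis=0)                             # Eq. (4)
    fx = np.take_along_axis(gx, p[None], axis=0)[0]    # Eq. (5): fused gradient
    fy = np.take_along_axis(gy, p[None], axis=0)[0]

    # Divergence of the fused gradient field (backward differences).
    div = (fx - np.roll(fx, 1, axis=1)) + (fy - np.roll(fy, 1, axis=0))

    # FFT-based Poisson solve of laplacian(I) = div, periodic boundaries.
    H, W = div.shape
    wx = 2 * np.pi * np.fft.fftfreq(W)
    wy = 2 * np.pi * np.fft.fftfreq(H)
    denom = (2 * np.cos(wx)[None, :] - 2) + (2 * np.cos(wy)[:, None] - 2)
    denom[0, 0] = 1.0                    # avoid division by zero at DC
    I_hat = np.fft.fft2(div) / denom
    I_hat[0, 0] = 0.0                    # fix the free additive constant
    return np.real(np.fft.ifft2(I_hat))
```

Because the selected gradient field is generally non-conservative, any integrator only finds a least-squares fit; the paper's multi-resolution Haar scheme addresses the artifacts this can cause, which the single global solve above does not.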
Fig. 3. Output image after corrected exposure and focus.
Fig. 5. Output image after corrected exposure and focus.

The input images should be named Img_1, Img_2, ..., Img_n and kept in the same folder as the image fusion code. A few additional outputs of the proposed method are included: the images were taken outdoors with different exposure values, the input images were fed to MATLAB, and the histograms of the same images are shown below.

The fused image is reconstructed from the input images, and the output image shows the optimal exposure, with the focus area deblurred. The whole image appears properly exposed with respect to the light source.

The advantages of the proposed method are a decreased noise level, better histogram generation, enhanced post-processing support, and easy control of overexposure.
VIII. CONCLUSION

In this paper, we address a new vision optimization problem, out-of-focus blur reconstruction, and propose a patch-based method to reconstruct an all-in-focus video. We first divide the original frames into a grid of small patches and search for the corresponding patches in the surrounding frames to compose a candidate target patch set. Then, an MRF model is built to identify the optimal target patches, which should be sharp, similar to the original patches, and continuous with the reconstruction results of the neighboring patches. We restore all frames in the order of their sharpness, and an all-in-focus result is generated after several iterations.

Finally, we adopt the idea of a bilateral filter to improve the temporal consistency of the reconstructed video. The experiments demonstrate that the method can effectively recover most of the out-of-focus blurry regions.
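As an illustration of the patch search step described above, the following greedy sketch scores candidate patches by a gradient-magnitude sharpness measure. The MRF-based selection and bilateral-filter refinement are omitted, and the similarity threshold and search radius are hypothetical values chosen only for this example.

```python
import numpy as np

def sharpness(patch):
    """Mean gradient magnitude as a simple sharpness measure."""
    gy, gx = np.gradient(patch.astype(float))
    return np.hypot(gx, gy).mean()

def best_replacement_patch(blurry_patch, candidate_frames, top_left, radius=8):
    """Search surrounding frames for the sharpest patch matching a blurry one.

    blurry_patch: (h, w) array; candidate_frames: list of (H, W) arrays;
    top_left: (row, col) of the patch in the original frame; radius:
    half-size of the search window. The full method selects patches with
    an MRF balancing sharpness, similarity and neighbor continuity; this
    sketch greedily maximizes sharpness among sufficiently similar
    candidates.
    """
    h, w = blurry_patch.shape
    r0, c0 = top_left
    best, best_sharpness = blurry_patch, sharpness(blurry_patch)
    for frame in candidate_frames:
        for dr in range(-radius, radius + 1):
            for dc in range(-radius, radius + 1):
                r, c = r0 + dr, c0 + dc
                if r < 0 or c < 0 or r + h > frame.shape[0] or c + w > frame.shape[1]:
                    continue
                cand = frame[r:r + h, c:c + w]
                ssd = ((cand - blurry_patch) ** 2).mean()     # similarity check
                if ssd < 0.01 and sharpness(cand) > best_sharpness:
                    best, best_sharpness = cand, sharpness(cand)
    return best
```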
REFERENCES
[1] Y. Zheng, X. Hou, T. Bian and Z. Qin, "Effective image fusion rules of multi-scale image decomposition," Int. Symp. Image and Signal Processing and Analysis, Istanbul, 2007, pp. 362–366.
[2] J. J. Lewis, R. J. O'Callaghan, S. G. Nikolov, D. R. Bull and C. N. Canagarajah, "Pixel- and region-based image fusion with complex wavelets," Inf. Fusion 8 (2007) 119–130.
[3] M. B. A. Haghighat, A. Aghagolzadeh and H. Seyedarabi, "Multi-focus image fusion for visual sensor networks in DCT domain," Comput. Electr. Eng. 37 (2011) 789–797.
[4] Y. Song, M. Li, Q. Li and L. Sun, "A new wavelet based multi-focus image fusion scheme and its application on optical microscopy," IEEE Int. Conf. Robotics and Biomimetics, Kunming, 2006, pp. 401–405.
[5] B. Biswas, R. Choudhuri, K. N. Dey and A. Chakrabarti, "A new multi-focus image fusion method using principal component analysis in shearlet domain," 2nd Int. Conf. Perception and Machine Intelligence, 2015, pp. 92–98, doi: 10.1145/2708463.2709064.
[6] C.-T. Shen, W.-L. Hwang and S.-C. Pei, "Spatially-varying out-of-focus image deblurring with L1-2 optimization and a guided blur map," Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., 2012, pp. 1069–1072.
[7] Q. Shan, J. Jia and A. Agarwala, "High-quality motion deblurring from a single image," ACM Trans. Graph. 27(3) (2008) 73.
[8] Y.-W. Tai and M. S. Brown, "Single image defocus map estimation using local contrast prior," Proc. 16th IEEE Int. Conf. Image Process., 2009, pp. 1797–1800.
[9] W. H. Richardson, "Bayesian-based iterative method of image restoration," J. Opt. Soc. Amer. 62(1) (1972) 55–59.
[10] M. W. Tao, J. Malik and R. Ramamoorthi, "Sharpening out-of-focus images using high-frequency transfer," Comput. Graph. Forum 32(2–4) (2013) 489–498.
[11] T. Mertens, J. Kautz and F. Van Reeth, "Exposure fusion," Pacific Conf. Computer Graphics and Applications, 2007, pp. 382–390.
[12] R. Shen, I. Cheng, J. Shi and A. Basu, "Generalized random walks for fusion of multi-exposure images," IEEE Trans. Image Process. 20 (2011) 3634–3646.
[13] X. Li and M. Wang, "Research of multi-focus image fusion algorithm based on Gabor filter bank," 12th Int. Conf. Signal Processing, 2014, pp. 693–697.
[14] K. Hara, K. Inoue and K. Urahama, "A differentiable approximation approach to contrast-aware image fusion," IEEE Signal Process. Lett. 21 (2014) 742–745.