Abstract
Under foggy or hazy weather conditions, the visibility and color fidelity of outdoor images are prone to degradation. Hazy images can cause serious errors in many computer vision systems. Consequently, image haze removal has practical significance for real-world applications. In this study, we first analyze the inherent weaknesses of the atmospheric scattering model and propose an improvement to address them. Then, we present a fast image haze removal algorithm based on the improved model. In our proposed method, the input image is partitioned into several scenes based on the haze thickness. Next, averaging and erosion operations are used to calculate a rough scene luminance map in a scene-wise manner. We obtain a rough scene transmission map by maximizing the contrast in each scene and then remove the haze gently by adaptively adjusting the scene transmission according to scene features. In addition, we propose a guided total variation model for edge optimization, which prevents block artifacts and eliminates the negative effects of erroneous scene segmentation. The experimental results demonstrate that our method is effective in solving a series of common problems, including uneven illuminance, overenhancement, and oversaturation. Moreover, our method outperforms most current dehazing algorithms in terms of visual effect, universality, and processing speed.
1 Introduction
Often, when outdoor images are acquired under poor weather conditions, such as haze and fog, the visibility of the captured scene is prone to significant degradation (see Fig. 1a). Narasimhan [1] showed how the interactions of light with particles suspended in the atmosphere (scattering, absorption, and emission) result in reduced contrast, faded colors, and low saturation. Many computer vision applications rely on the assumption that the input image is haze free; consequently, degraded images may cause catastrophic errors. Hence, research on image dehazing is of practical significance, and the search for effective haze removal methods has attracted increasing attention in recent years.
Early studies adopted image enhancement techniques to increase the visibility of hazy images; the Retinex algorithm [2] and Choi's perceptual defogging method [3] are typical examples. However, because these techniques do not take the spatial distribution of haze into account and ignore the fact that the haze thickness depends on the scene depth, their dehazing effect is not visually compelling.
Therefore, subsequent research focused mainly on haze removal based on the atmospheric scattering model, which has proved to be more attractive than traditional image enhancement techniques. When using the atmospheric scattering model, it is critical to estimate the scene depth accurately. Several studies [4–6] proposed using multiple images or external information to derive the scene depth map; however, this requirement is difficult to fulfill in many real-world applications.
More recently, single-image haze removal methods have attracted the most research attention, and remarkable progress has been made. Generally, these methods take advantage of strong prior knowledge or assumptions to produce the depth map. For example, by assuming that clear images possess higher local contrast than hazy ones, Tan [7] proposed deriving the transmission map based on a Markov random field (MRF) model and removing haze by maximizing the local contrast. However, Tan's results tend to be oversaturated because, in spirit, the method is similar to contrast stretching. Nishino [8] exploited the statistical properties hidden in images by adopting a Bayesian posterior probability model to remove haze. The results show its superiority for heavily hazed images, but for misty images, the color tends to be overenhanced after restoration. Fattal [9] assumed that transmission and surface shading are locally uncorrelated and removed haze on the basis of color statistics. However, Fattal's algorithm does not work for heavily hazed images. Tarel [10] used the median filter to estimate the dissipation function. However, because the median filter shows poor edge-preserving performance, the method leaves small amounts of mist around depth discontinuities in the dehazed image. To solve this problem, Xiao [11] proposed a guided joint bilateral filter for haze removal. Meng [12] estimated a rough transmission map using a boundary constraint and proposed a regularization method to smooth the map. Although this method is fast, it tends to distort the color fidelity when dealing with white objects. He [13] obtained a rough estimate of transmission via the dark channel prior and adopted soft matting for transmission refinement. Although the dehazing results are almost perfect in their visual effect, He's algorithm is not applicable to real-time systems, because the soft matting operation incurs expensive computation and memory consumption overhead. To solve this problem, He [14] replaced the soft matting with a guided filter, which proved to be more efficient, but at the cost of a degraded visual effect. Gibson [15] presented the median dark channel prior method based on [13], which accelerates the haze removal process to some extent, because it requires no refinement of the transmission map. Nevertheless, this method fails to achieve good visual results; in particular, it is prone to leaving dark spots in the dehazed image. Li [16] exploited the change of detail prior to estimate the airlight. However, because the result contains excessive texture details, the haze removal is unsatisfactory. Zhu [17] created a linear model of scene depth under the color attenuation prior and learned the parameters of the model with a supervised learning method. However, because the scattering coefficient in the atmospheric scattering model cannot actually be regarded as a constant, Zhu's method proved to be unstable in its haze removal performance.
As mentioned above, the quality of existing dehazing methods still leaves room for improvement, especially for images with uneven illumination. Although Li [18] adopted post-enhancement processing to improve the visual quality, that work did not analyze the underlying key problem and, consequently, made no essential improvement to dehazing. In this study, we first analyze the inherent weaknesses of the atmospheric scattering model and propose an improvement. Then, we present a fast image haze removal algorithm based on the modified model. Our method does not use the traditional way of estimating the global atmospheric light and the transmission map; instead, we perform scene segmentation based on the haze thickness and estimate the scene luminance and scene transmission for each scene region. To eliminate the block effect and the negative effects caused by scene segmentation errors, we propose a guided total variation (GTV) model to perform guided smoothing, a capability that the original total variation (TV) model lacks [19, 20]. Compared to traditional enhancement techniques, our method yields a better visual effect and improved color fidelity, as shown in Fig. 1.
2 Analysis of and improvement on the atmospheric scattering model
In computer vision and computer graphics, the atmospheric scattering model has been widely used to describe the formation of a hazy image [1, 7] and is defined as follows:
\(I\left( {x,y} \right) =A\cdot \rho \left( {x,y} \right) \cdot t\left( {x,y} \right) +A\cdot \left( {1-t\left( {x,y} \right) } \right) \)   (1)
where I is the observed image, \(\rho \) is the scene reflectance, A is the global atmospheric light (regarded as constant in the input image), and t denotes the transmission; if we assume that the haze is homogeneous, t can be expressed as follows:
\(t\left( {x,y} \right) =e^{-\beta _{0} d\left( {x,y} \right) }\)   (2)
where \(\beta _{0}\) is the scattering coefficient and d represents the scene depth. Evidently, it is an ill-posed problem to estimate A and t from a single input image. In recent years, many studies have exploited stronger priors or used assumptions as constraints to solve this challenging problem. Although significant progress has been made, the visual effect after restoration is still less than satisfactory.
Figure 2 shows various restoration results under different global atmospheric light levels. Clearly, for smaller values of A, the local contrast in the dark tree shadow is enhanced, but a large number of the detail structures are lost in the bright region. As the Retinex theory states [2], scene reflectance is an intrinsic feature of objects and is independent of the incident light. The problem is that no single value of A can recover the ideal scene reflectance, which is a consequence of the assumption in Eq. 1 that the atmospheric light is constant. However, that assumption is not always true in the real world: the intensity of atmospheric light may vary among different regions. As shown in Fig. 2a, the light intensity tends to be zero in the shadowed area but approaches one at the distant horizon. Thus, assuming that A is constant has obvious limitations. In addition, the estimation of the transmission map involves considerable redundant computation, because the transmission map is estimated in a pixelwise manner, whereas in reality the depth changes relatively smoothly within the same scene.
To overcome the weaknesses described above, we first discard the assumption that the atmospheric light level is constant. Then, we perform scene partition and adaptively estimate the incident light in each separate scene. Because pixels in the same scene are likely to have similar depth, we can increase the efficiency of this scheme by calculating the transmission in a scene-wise manner rather than pixel by pixel. According to the analysis above, the transmission map, t, and the atmospheric light, A, can be redefined as the scene transmission map, T, and the scene luminance map, L, respectively. Therefore, the redefined model can be expressed as follows:
\(I\left( {x,y} \right) =L\left( i \right) \cdot \rho \left( {x,y} \right) \cdot T\left( i \right) +L\left( i \right) \cdot \left( {1-T\left( i \right) } \right) ,\quad \left( {x,y} \right) \in \Omega _{i}\)   (3)
where \(\Omega _{i}\) stands for the \(i{\mathrm{th}}\) scene, and \(L(i)\) and \(T(i)\) denote the scene luminance and scene transmission, respectively, both of which are constant within the \(i{\mathrm{th}}\) scene. The redefined model significantly simplifies the estimation of transmission, because the scene luminance and scene transmission need to be estimated for only a limited number of scenes (see Fig. 3).
3 A single image dehazing method
From the redefined model in Sect. 2, it can be inferred that all the scenes in the input image should be recognized first. Then, the scene luminance and scene transmission need to be estimated for each scene separately based on the results of scene segmentation. To eliminate negative effects caused by this scene-wise operation, it is also necessary to refine the scene transmission map and scene luminance map with the goal of preserving the essential depth structure while achieving local smoothness. Figure 4 shows a flowchart of our method.
3.1 Scene segmentation
It is worth noting that the brightness and texture features in a hazy image vary sharply along with the changes in haze concentration. In other words, in regions with heavy haze, the pixel brightness tends to be very high, while the texture detail is prone to be seriously blurred. Hence, we first partition the input image into several nonoverlapping patches \(B_{i}\) and, then, define a quantitative measurement of the haze density in each patch as follows:
where i denotes the patch index, and \(\varphi \) and \(\phi \) are the mean and standard deviation functions, respectively. The haze distribution map V is constructed after all the patches have been traversed in the hazy image.
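A minimal sketch of this patch-wise measurement in Python follows, assuming a 16 × 16 patch size and assuming that Eq. 4 combines the two statistics as the mean minus the standard deviation, so that bright, texture-poor patches score as dense haze; both the patch size and the exact combination are illustrative assumptions rather than the paper's exact formula.

```python
import numpy as np

def haze_distribution(gray, patch=16):
    """Patch-wise haze distribution map V (Sect. 3.1 sketch).

    Assumes V scores each patch B_i by mean(B_i) - std(B_i), i.e.,
    bright and texture-poor patches read as dense haze; the exact
    combination used in Eq. 4 may differ.
    """
    h, w = gray.shape
    V = np.empty((h, w), dtype=np.float64)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            B = gray[y:y + patch, x:x + patch]   # patch B_i
            V[y:y + patch, x:x + patch] = B.mean() - B.std()
    V -= V.min()                                 # normalize to [0, 1]
    return V / max(V.max(), 1e-12)
```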
Based on the haze distribution, we perform scene partition using the method from [21], which is attractive due to its low complexity. Assuming that the map V is divided into k scenes, the pixel (x, y) belongs to the following scene:
where C is the scene segmentation map, \(V_{\mathrm{sort}}\) is the vector of the haze thickness coefficients of all pixels sorted in ascending order, and l denotes the total number of pixels in the image.
Figure 5 shows several groups of segmentation results produced by the above method (note that identical colors indicate the same scene). As Fig. 5 shows, larger k values result in finer scene segmentation; however, they also make the subsequent estimation procedure more complicated. Taking both computational complexity and partition accuracy into account, we set k to 15 throughout our experiments.
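The quantile-style assignment of Eq. 5 can be sketched as follows, under the assumption that the k scenes correspond to equal-population bins of the sorted coefficient vector \(V_{\mathrm{sort}}\):

```python
import numpy as np

def segment_scenes(V, k=15):
    """Scene segmentation map C from the haze distribution V (Eq. 5 sketch).

    Assumes scene i collects the pixels whose haze thickness falls in the
    i-th equal-population bin of V_sort (l = total number of pixels).
    """
    l = V.size
    V_sort = np.sort(V.ravel())                     # ascending haze thickness
    edges = V_sort[(np.arange(1, k) * l // k).clip(0, l - 1)]
    return np.searchsorted(edges, V, side='right')  # scene index 0..k-1 per pixel
```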
3.2 The rough estimate of scene luminance
As defined above, the scene luminance evaluates the intensity of the incident light in a scene. Simply choosing the intensity of the brightest pixel in the scene as the scene luminance is susceptible to interference from white objects. Moreover, we must take possible scene partition mistakes into account, which can lead to an incorrect scene luminance estimate. Therefore, inspired by He [13], we adopt the erosion operation to reduce the negative impact of white objects and apply the averaging operation to weaken the interference of scene segmentation errors. In particular, for color hazy images, we first perform the erosion operation on the three RGB color channels separately, as follows:
\(I_{E}^{c} =I^{c}\,\Theta \,\Lambda \)   (6)
where \(I^{c}\) is a color channel of image I, \(\Theta \) is the erosion operator, and \(\Lambda \) denotes the template used in erosion. For each scene, the brightest 0.1 % of pixels in each eroded color channel \(I_{E}^{c}\) are averaged to obtain the corresponding scene luminance. Figure 6 shows the three separate components of the rough scene luminance map in RGB color space. Clearly, this scene luminance map conforms more closely to the realistic distribution of ambient light than a fixed global atmospheric light level does.
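A sketch of this step is given below, using a grayscale erosion with an assumed 15 × 15 template \(\Lambda\) (the template size is not restated above) and averaging the brightest 0.1 % of eroded pixels inside each scene:

```python
import numpy as np
from scipy.ndimage import grey_erosion

def scene_luminance(I, C, k=15, size=15, top=0.001):
    """Rough scene luminance maps L^R, L^G, L^B (Sect. 3.2 sketch)."""
    L = np.empty_like(I, dtype=np.float64)
    for c in range(3):
        eroded = grey_erosion(I[..., c], size=(size, size))  # I^c eroded by template
        for i in range(k):
            vals = eroded[C == i]
            if vals.size == 0:
                continue
            n = max(1, int(round(top * vals.size)))       # brightest 0.1% of the scene
            L[..., c][C == i] = np.sort(vals)[-n:].mean() # scene-wise constant value
    return L
```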
3.3 The rough estimate of scene transmission
From Eq. 3, we obtain the scene reflectance and its gradient:
\(\rho \left( {x,y} \right) =\frac{I\left( {x,y} \right) -L\left( i \right) \cdot \left( {1-T\left( i \right) } \right) }{L\left( i \right) \cdot T\left( i \right) },\quad \nabla \rho \left( {x,y} \right) =\frac{\nabla I\left( {x,y} \right) }{L\left( i \right) \cdot T\left( i \right) }\)   (7)
Because \(L\left( i \right) \cdot T\left( i \right) \le 1\), we can obtain the following:
\(\left| {\nabla \rho \left( {x,y} \right) } \right| \ge \left| {\nabla I\left( {x,y} \right) } \right| \)   (8)
As can be inferred from Eq. 8, the goal of haze removal is to enhance the local contrast in hazy images. Inspired by this prior, we can derive the scene transmission by maximizing the contrast of each scene, as:
Equation 9 poses a typical minimum-search problem, and the classical Fibonacci method works well for obtaining optimal solutions quickly. Unfortunately, simply enhancing the contrast leads to poor visual effects, such as oversaturation in textured areas and overenhancement in the sky region. Therefore, we propose an adaptive way to adjust the scene transmission. The basic idea is to design an effective metric to distinguish scenes with various features; this metric then determines the magnitude of the scene transmission adjustment. As defined in Eq. 4, the quantitative coefficient V reflects the haze thickness. In the same manner, we can measure the haze thickness of a scene by averaging V(x, y) over all the pixels in the scene, as follows:
\(\chi \left( i \right) =\frac{1}{\left| {\Omega _i } \right| }\sum \nolimits _{\left( {x,y} \right) \in \Omega _i } V\left( {x,y} \right) \)   (10)
where \({\vert }\Omega _{i}{\vert }\) denotes the number of pixels contained in the \(i{\mathrm{th}}\) scene. We randomly select 200 hazy images from the Internet as test samples and perform scene segmentation on them using Eq. 5. By observing the scene segmentation output for all these images, we can roughly classify all scenes into four types: texture, mist, dense haze, and sky. Then, we calculate the corresponding value of \({\chi }\) for each type. The statistical result is shown in Fig. 7, from which we obtain the following approximate relationship:
Obviously, it is difficult to distinguish regions of dense haze from sky regions; however, the likelihood that the scene contains sky tends to increase as the value of \({\chi }\) becomes larger. To prevent overenhancement, the adjustment magnitude should be increased accordingly (\(0.5<\chi \le 1\)). Moreover, the adjustment magnitude should be decreased from the texture region to the mist region (\(0\le \chi \le 0.5\)). According to this principle, we can define the adjustment of scene transmission as follows:
where \(M_{i}\) denotes the adjustment magnitude and is explicitly expressed as follows:
where \(\omega \) controls the slope of the function. After repeated testing, we found that the adjusted transmission behaves well and preserves the consistency of the original scene depth when \(\omega =0.15\) (see Fig. 9); the corresponding adjustment magnitude function is shown in Fig. 8. It can be clearly observed that the magnitude of the adjustment becomes smaller from texture to mist regions, while it becomes larger from heavy haze to sky areas. In this way, we eliminate the problems of overenhancement in sky regions and oversaturation in texture regions while still removing as much haze as possible.
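Because Eq. 9 itself is not reproduced above, the sketch below shows only the classical Fibonacci search that solves it, applied to a generic unimodal objective on an assumed interval \([0.05, 1]\); the contrast cost shown is an illustrative stand-in for the true objective of Eq. 9.

```python
import numpy as np

def fibonacci_min(f, lo=0.05, hi=1.0, n=20):
    """Classical Fibonacci search for the minimum of a unimodal f on [lo, hi]."""
    F = [1, 1]
    while len(F) < n + 2:
        F.append(F[-1] + F[-2])
    a, b = lo, hi
    x1 = a + (F[n - 1] / F[n + 1]) * (b - a)
    x2 = a + (F[n] / F[n + 1]) * (b - a)
    f1, f2 = f(x1), f(x2)
    for i in range(n - 1, 0, -1):
        if f1 > f2:                      # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + (F[i] / F[i + 1]) * (b - a)
            f2 = f(x2)
        else:                            # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + (F[i - 1] / F[i + 1]) * (b - a)
            f1 = f(x1)
    return 0.5 * (a + b)

def scene_cost(I_scene, L_i, T_i):
    """Illustrative stand-in for Eq. 9: negated gradient energy of the
    scene reflectance restored with candidate transmission T_i."""
    rho = (I_scene - L_i * (1.0 - T_i)) / (L_i * T_i)
    return -np.abs(np.diff(rho)).sum()   # more contrast -> lower cost
```

In this sketch, \(T(i)\) would be obtained as fibonacci_min(lambda t: scene_cost(I_scene, L_i, t)) for each scene, after which the adaptive adjustment described above is applied.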
3.4 Edge optimization based on a guided total variation model
As described in Sect. 3.1, scene partition is inherently a patchwise process that blurs the edges in the scene transmission map \((\tilde{T})\) as well as in the three scene luminance maps \((L^{R},\, L^{G},\, L^{B})\) and will thus produce halo artifacts in the dehazing result. At the same time, the accuracy of the estimates for scene transmission and scene luminance may suffer from erroneous scene segmentation. Moreover, both the scene transmission map and the scene luminance map should possess the characteristic of local spatial smoothness, because excessive texture details may have a negative impact on the dehazing effect [11]. Intuitively, adopting a filter with a guiding function, such as the joint bilateral filter or the guided filter, is a natural choice for solving this problem [14, 22]. However, these methods are extremely sensitive to parameter values, and different parameter selections can greatly affect the filtering results.
Instead, to achieve the goal of local smoothing, we can apply the TV model described in [20, 23]:
where \(\alpha \) is the regularization factor. In this model, the first term ensures the correlation between \(T_{\mathrm{refine}}\) and \(\tilde{T}\), while the second term guarantees the local smoothness of \(T_{\mathrm{refine}}\) itself. Note that the texture details are reliably blurred in \(\tilde{T}\) through this total variation optimization; however, edge inconsistencies still exist in \(T_{\mathrm{refine}}\) where the original depth changes. Inspired by the advantages of the joint bilateral filter and the guided filter, we propose a GTV model with the guiding function described as follows:
Here, \(\beta \) and \(\gamma \) are regularization parameters. The last term is introduced to ensure that the edge features in \(T_{\mathrm{refine}}\) remain consistent with those of the guiding image, G. The weight W is defined as:
Obviously, the weight W increases as the gradient increases. This means that the importance of the second term is reduced, but the third term becomes more important, thus achieving the goal of both blurring the texture details and preserving the edges around areas with sudden depth changes.
To speed up these calculations, we do not solve the GTV model from the perspective of the energy function; instead, we use the gradient approximation method [24] in the \(\hbox {r}\times \hbox {r}\) neighborhood as follows:
where \(T_{\mathrm{refine}-i}\) and \(G_{i}\) are the neighboring pixels of \(T_{\mathrm{refine}}\) and G. According to the Euler–Lagrange equation, Eq. (17) satisfies:
It can also be expressed in an iterative form [24]:
where Iter denotes the number of iterations. When \(\xi ^{\mathrm{Iter}}={\left\| {T_{{refine}}^{{Iter}} -T_{{refine}}^{{Iter}-1} } \right\| }_2^2\big /l\le 10^{-4}\) is satisfied, the iteration process terminates, and the outcome of the last iteration is the refined scene transmission \(T_{\mathrm{refine}}\). In the numerator, the first two terms depend only on the input information and must be calculated only once, whereas the third term involves a sum over the \(\hbox {r}\times \hbox {r}\) neighborhood and must be updated at each iteration. In effect, the computational complexity of this last term can be decreased to O(1) per pixel per iteration if the box filter [14] is adopted to speed up the processing. We initialize the iteration with \(T_{\mathrm{refine}}^0 =\tilde{T}\). The guiding image, G, is the gray component of the hazy input image. After repeated testing, we found that this approach achieves good dehazing results when \({\alpha }=3,\, {\beta }=3\cdot \) (Iter\(-1\)), \({\gamma }=4\), and \(\hbox {r}=\hbox {max}(l_{h},l_{w})/15\), where \((l_{h}, l_{w})\) are the height and width of the image, respectively. As Fig. 10 shows, the GTV model converges quickly. Moreover, after only a few iterations, the output scene transmission map effectively highlights the depth structure while blurring a large amount of the texture detail (in fact, the outcome of the first iteration already preserves the depth structure of the original hazy image; as subsequent iterations proceed, the texture details become increasingly blurred). The three scene luminance maps \(L^{R}\), \(L^{G}\), and \(L^{B}\) can be refined in the same way.
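Because Eqs. 14–19 are not reproduced above, the following sketch reconstructs the iterative update from this subsection's description: a fidelity term weighted by \(\alpha\), a neighborhood smoothness term weighted by \(\beta\), and a guide-consistency term weighted by \(\gamma W\), with the box filter realized by a uniform filter. The form of W (here, the normalized squared gradient of G) and the exact shape of the update are illustrative assumptions, not the authors' formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter, sobel

def gtv_refine(T0, G, alpha=3.0, gamma=4.0, tol=1e-4, max_iter=50):
    """Guided total variation refinement (Sect. 3.4); a hedged reconstruction."""
    r = max(3, max(T0.shape) // 15)          # r = max(l_h, l_w) / 15
    n = float(r * r)                         # pixels in the r x r window
    gx, gy = sobel(G, axis=0), sobel(G, axis=1)
    W = gx ** 2 + gy ** 2
    W = gamma * W / max(W.max(), 1e-12)      # assumed edge weight, scaled by gamma
    term1 = alpha * T0                                   # fixed: fidelity to T~
    term2 = W * n * (G - uniform_filter(G, size=r))      # fixed: guide edge term
    T, l = T0.astype(np.float64), T0.size
    for it in range(1, max_iter + 1):
        beta = 3.0 * (it - 1)                            # beta = 3 * (Iter - 1)
        term3 = (beta + W) * n * uniform_filter(T, size=r)  # updated every pass
        T_new = (term1 + term2 + term3) / (alpha + n * (beta + W))
        if np.sum((T_new - T) ** 2) / l <= tol:          # xi^Iter <= 10^-4
            return T_new
        T = T_new
    return T
```

The first two numerator terms are computed once, while the third is refreshed each pass with a single box filtering of the current estimate, mirroring the low per-iteration cost noted above.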
3.5 Image restoration
Once the refined scene transmission map \(T_{\mathrm{refine}}\) and the refined scene luminance maps \(L_{\mathrm{refine}}^{c}\) are known, we can derive the scene reflectance \(\rho \) from Eq. 3. For convenience, we rewrite Eq. 3 as follows:
\(\rho ^{c}\left( {x,y} \right) =\frac{I^{c}\left( {x,y} \right) -L_{\mathrm{refine}}^{c} \left( {x,y} \right) \cdot \left( {1-T_{\mathrm{refine}} \left( {x,y} \right) } \right) }{L_{\mathrm{refine}}^{c} \left( {x,y} \right) \cdot T_{\mathrm{refine}} \left( {x,y} \right) }\)
Finally, the restoration result is obtained by restricting \(\rho \) to the range [0, 1] with the min–max operation:
\(R^{c}\left( {x,y} \right) =\min \left( {\max \left( {\rho ^{c}\left( {x,y} \right) ,0} \right) ,1} \right) \)
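For concreteness, a minimal sketch of this restoration step follows, assuming I, \(T_{\mathrm{refine}}\), and \(L_{\mathrm{refine}}\) are floating-point arrays scaled to [0, 1]; the small epsilon guarding the division is an added safeguard, not part of the original formulation.

```python
import numpy as np

def restore(I, T_refine, L_refine):
    """Invert the redefined model (Eq. 3) with the refined maps and
    restrict the reflectance to [0, 1] (Sect. 3.5 sketch)."""
    T = T_refine[..., None]                  # broadcast T over the RGB channels
    rho = (I - L_refine * (1.0 - T)) / np.maximum(L_refine * T, 1e-6)
    return np.clip(rho, 0.0, 1.0)            # the min-max operation
```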
4 Experiments
In this section, we compare the quality of image haze removal using our proposed method with other typical dehazing algorithms. In the following experiments, our algorithm is implemented in MATLAB on a computer with an Intel(R) Core(TM) i5-4210U CPU and 8.00 GB of RAM. All the parameters of our proposed method are set as described in Sect. 3.
4.1 The visual effect
Without loss of generality, we select six hazy images of different types from the Internet and process them with our algorithm. The dehazing results in Fig. 11 show that our method is capable of estimating the luminance of various regions accurately and, thus, overcoming the limitation of using a fixed value for the global atmospheric light level; consequently, the visual effect of the restored images is significantly improved.
4.2 Comprehensive comparison
Next, we show the haze removal results of both our method and several other representative algorithms. (The test images in Fig. 14 were downloaded from the Internet, and the test images in Figs. 12, 13, and 15 originate from Fattal’s website: http://www.cs.huji.ac.il/~raananf/). Figure 12 shows the results obtained by Tan [7], Kratz [26] and our method, respectively. As Fig. 12 shows, both algorithms proposed by Tan and Kratz work well for contrast enhancement but result in oversaturated color and halo effects near areas with discontinuous depths. Comparatively, our algorithm performs better in terms of color fidelity, and the visual results seem more natural.
From left to right, the panels in Fig. 13 show the input image, the results obtained by Choi [3], Kopf [5], Fattal [9], and our method, respectively. Clearly, the results of all those algorithms except ours lose some information in the sky region, causing an unsatisfactory visual effect. Our method performs well in preserving the information in the sky region, and exhibits clearer visibility after restoration.
In Fig. 14, we compare our method with the algorithms presented by Tarel [10], Meng [12], He [13], Gibson [15], Zhu [17], and Qi [27]. Obviously, the sky color is overenhanced in the results of Tarel, Meng, He, Gibson, and Qi, whereas it is not in our method or Zhu's; however, our method outperforms Zhu's algorithm in terms of dehazing visual effect.
Finally, we compare our method with the algorithms of Nishino [8] and Fattal [25]. As shown in Fig. 15, the algorithms presented by Nishino and Fattal generally achieve a good dehazing effect except when the illumination is insufficient: when the incident light is not strong enough, the global contrast of their dehazed results tends to be low. In comparison, our method not only provides comparable haze removal results but also performs well in low-luminance conditions.
4.3 The objective assessment
We employ the rate of new visible edges e recommended by [28] and the structural similarity f proposed by Wang [29] to assess our approach quantitatively. The measures e and f are defined as follows:
\(e=\frac{n_r -n_0 }{n_0 }\)
\(f=\frac{1}{N}\sum \nolimits _{i=1}^N {\frac{\left( {2\mu _{\hat{{B}}_i } \mu _{\tilde{B}_i } +c_1 } \right) \left( {2\sigma _{\hat{{B}}_i \tilde{B}_i } +c_2 } \right) }{\left( {\mu _{\hat{{B}}_i }^2 +\mu _{\tilde{B}_i }^2 +c_1 } \right) \left( {\sigma _{\hat{{B}}_i }^2 +\sigma _{\tilde{B}_i }^2 +c_2 } \right) }}\)
where \(n_{0}\) and \(n_{r}\) represent the number of visible edges in the hazy image and in the corresponding dehazed image, respectively; N is the number of patches; \(\hat{{B}}_i\) and \(\tilde{B}_i\) are the \(i{\mathrm{th}}\) nonoverlapping patches in the original image I and the restored image R, respectively; \(\mu _{\hat{{B}}_i}\) and \(\sigma _{\hat{{B}}_i}^2\) denote the mean and the variance of \(\hat{{B}}_i\); \(\mu _{\tilde{B}_i}\) and \(\sigma _{\tilde{B}_i}^2\) denote the mean and the variance of \(\tilde{B}_i\); and \(\sigma _{\hat{{B}}_i \tilde{B}_i}\) is the covariance between \(\hat{{B}}_i\) and \(\tilde{B}_i \). The constants \(c_{1}\) and \(c_{2}\) are included to avoid instability.
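Both measures are straightforward to compute from these definitions; the sketch below assumes grayscale inputs in [0, 1] and assumes the usual small stabilizing values for \(c_{1}\) and \(c_{2}\).

```python
import numpy as np

def metric_e(n0, nr):
    """Rate of new visible edges [28]: e = (n_r - n_0) / n_0."""
    return (nr - n0) / n0

def metric_f(I, R, patch=8, c1=1e-4, c2=9e-4):
    """Mean structure similarity f over nonoverlapping patches [29]."""
    scores = []
    h, w = I.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            a = I[y:y + patch, x:x + patch].ravel()   # patch B_i (hat)
            b = R[y:y + patch, x:x + patch].ravel()   # patch B_i (tilde)
            mu_a, mu_b = a.mean(), b.mean()
            cov = ((a - mu_a) * (b - mu_b)).mean()
            s = ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
                ((mu_a ** 2 + mu_b ** 2 + c1) * (a.var() + b.var() + c2))
            scores.append(s)
    return float(np.mean(scores))
```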
For the sake of fairness, we test the most up-to-date dehazing algorithms on two benchmark images from Fattal’s website. The visual results for all the algorithms are shown in Fig. 16a, c; the corresponding quantitative comparison results are listed in Table 1. As Table 1 shows, Tarel [10] achieves the maximum e value, followed by Meng [12], Gibson [15], and He [13]. However, this does not mean these algorithms are superior to our method, because the number of visible edges can increase when excessive dehazing leads to noise amplification in the image. This problem can be solved by the approaches described in [30, 31]. The f values listed in Table 1 demonstrate that our method achieves the maximum similarity in structure, which indicates that the depth structure of our result conforms better to the original image.
In addition, we adopt the method of Aydin [32] to detect loss of visible contrast, amplification of invisible contrast, and reversal of visible contrast. As can be seen in Fig. 16b, d, except for Kopf [5], Fattal [25], and ours, the algorithms cause varying degrees of distortion and overenhancement (e.g., the rock region in Fig. 16b).
Processing speed is also important when evaluating algorithmic performance. Algorithms such as Tan [7], Nishino [8], and He [13] involve complex operations (e.g., MRF optimization or soft matting) that greatly reduce the speed of haze removal. Therefore, we compare our method only with algorithms of lower complexity. Figure 17 shows the curves of computation time consumed in processing images at different resolutions. As the results show, our method is only slightly slower than Gibson [15] and faster than the others. Considering both the dehazing effect and computational efficiency, our proposed algorithm is well suited for practical dehazing applications.
4.4 Situations not suited to our method
Our method may not work well in certain specific types of scenes. Figure 18 shows the haze removal result for such an unsuitable image. The hazy imaging formulation is modeled under the assumption that the scattering particles consist of the same ingredients and are uniformly distributed in the atmosphere [1]. Consequently, when this assumption is violated, it is difficult for haze removal techniques to satisfy the requirements of practical applications. To address haze removal under inhomogeneous atmospheric conditions, Shi [33] presented a more robust scattering model. However, when processing the original hazy image shown in Fig. 18a, Shi's model fails to achieve a satisfactory dehazing result, because it takes only the impact of the earth's gravity on the scattering particles into account (see Fig. 18c).
5 Conclusion and future work
In this study, we propose a single-image haze removal approach based on an improved atmospheric scattering model. We first analyze the weaknesses of the atmospheric scattering model and propose an improvement to it. Then, taking the improved model as the starting point, we develop methods to automatically partition scenes and to estimate the scene luminance and scene transmission maps in a scene-wise manner. Finally, we present a GTV model to achieve edge optimization. The experimental results demonstrate that our approach outperforms most up-to-date algorithms in terms of both visual effect and processing speed.
It is possible to further accelerate the procedure of haze removal; for example, we can reduce the number of scenes segmented for images with smooth changes in depth. Therefore, our future work will focus on the following two aspects: (1) adaptively setting the number of scenes based on the features of the image, and (2) investigating improvements to the atmospheric scattering model, which we expect to be more applicable in inhomogeneous atmospheric conditions.
References
Narasimhan, S.G., Nayar, S.K.: Vision and the atmosphere. Int. J. Comput. Vis. 48, 233–254 (2002)
Land, E.H., Mccann, J.J.: Lightness and retinex theory. J. Opt. Soc. Am. 61, 1–11 (1971)
Choi, L.K., You, J., Bovik, A.C.: Referenceless prediction of perceptual fog density and perceptual image defogging. IEEE Trans. Image Process. 24, 3888–3901 (2015)
Shwartz, S., Namer, E., Schechner, Y.Y.: Blind haze separation. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 1984–1991. IEEE Computer Society, New York (2006)
Kopf, J., Neubert, B., Chen, B., Cohen, M., Cohen-Or, D., Deussen, O., Uyttendaele, M., Lischinski, D.: Deep photo. ACM Trans. Graph. 27, 1–9 (2008)
Schechner, Y.Y., Narasimhan, S.G., Nayar, S.K.: Polarization-based vision through haze. Appl. Opt. 42, 511–525 (2003)
Tan, R.T.: Visibility in bad weather from a single image. In: IEEE Conference on Computer Vision and Pattern Recognition 2008, pp. 1–8. IEEE, Piscataway (2008)
Nishino, K., Kratz, L., Lombardi, S.: Bayesian defogging. Int. J. Comput. Vis. 98, 263–278 (2012)
Fattal, R.: Single image dehazing. ACM Trans. Graph. 27, 72 (2008)
Tarel, J.P., Hautiere, N.: Fast visibility restoration from a single color or gray level image. In: 2009 IEEE 12th International Conference on Computer Vision, pp. 2201–2208. IEEE, Piscataway (2009)
Xiao, C., Gan, J.: Fast image dehazing using guided joint bilateral filter. Vis. Comput. 28, 713–721 (2012)
Meng, G., Wang, Y., Duan, J., Xiang, S. Pan, C.: Efficient image dehazing with boundary constraint and contextual regularization. In: 2013 IEEE International Conference on Computer Vision (ICCV), pp. 617–624. IEEE, Piscataway (2013)
He, K., Sun, J., Tang, X.: Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 33, 2341–2353 (2011)
He, K., Jian, S., Tang, X.: Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 35, 1397–1409 (2013)
Gibson, K.B., Vo, D.T., Nguyen, T.Q.: An investigation of dehazing effects on image and video coding. IEEE Trans. Image Process. 21, 662–673 (2011)
Li, J., Zhang, H., Yuan, D., Sun, M.: Single image dehazing using the change of detail prior. Neurocomputing 156, 1–11 (2015)
Zhu, Q., Mai, J., Shao, L.: A fast single image haze removal algorithm using color attenuation prior. IEEE Trans. Image Process. 24, 3522–3533 (2015)
Li, B., Wang, S., Zheng, J., Zheng, L.: Single image haze removal using content-adaptive dark channel and post enhancement. IET Comput. Vis. 8, 131–140 (2013)
Ding, K., Chen, W., Wu, X.: Optimum inpainting for depth map based on L0 total variation. Vis. Comput. 30, 1311–1320 (2013)
Liu, X., Zeng, F., Huang, Z. Ji, Y.: Single color image dehazing based on digital total variation filter with color transfer. In: IEEE International Conference on Image Processing, pp. 909–913. IEEE, Piscataway (2013)
Qian, L., Chen, M.Y., Zhou, D.H.: Single image haze removal via depth-based contrast stretching transform. Sci. China Inf. Sci. 58, 1–17 (2015)
Liu, C., Zhao, J., Shen, Y., Zhou, Y., Wang, X., Ouyang, Y.: Texture filtering based physically plausible image dehazing. Vis. Comput. 32, 911–920 (2016)
Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 60, 259–268 (1992)
Nan, D., Bi, D.Y., Ma, S.P., He, L.Y., Lou, X.L.: Single image dehazing method based on scene depth constraint. Chin. J. Electron. 43, 500–504 (2015)
Fattal, R.: Dehazing using color-lines. ACM Trans. Graph. 34, 1–14 (2014)
Kratz, L., Nishino, K.: Factorizing scene albedo and depth from a single foggy image. In: IEEE International Conference on Computer Vision, pp. 1701–1708 (2009)
Qi, M., Hao, Q., Guan, Q., Kong, J., Zhang, Y.: Image dehazing based on structure preserving. Optik Int. J. Light Electron Opt. 27, 21153–21160 (2015)
Hautiere, N., Tarel, J.P., Aubert, D., Dumont, E.: Blind contrast enhancement assessment by gradient ratioing at visible edges. Image Anal. Stereol. 27, 87–95 (2008)
Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13, 600–612 (2004)
Khmag, A., Ramli, A.R., bin Hashim, S.J., Al-Haddad, S.A.R.: Additive noise reduction in natural images using second-generation wavelet transform hidden Markov models. IEEJ Trans. Electr. Electron. Eng. 11, 339–347 (2016)
Shao, L., Yan, R., Li, X., Liu, Y.: From heuristic optimization to dictionary learning: a review and comprehensive comparison of image denoising algorithms. IEEE Trans. Cybern. 44, 1001–1013 (2014)
Aydin, T.O., Mantiuk, R., Myszkowski, K., Seidel, H.P.: Dynamic range independent image quality assessment. ACM Trans. Graph. 27, 1–69 (2008)
Shi, Z., Long, J., Tang, W., Zhang, C.: Single image dehazing in inhomogeneous atmosphere. Optik Int. J. Light Electron Opt. 125, 3868–3875 (2014)
Acknowledgments
The authors wish to thank Dr. Pengfei Wu and Dr. Zhenfei Gu for their help with proofreading. We would also like to thank the reviewers for their valuable comments. This work is supported by the National Natural Science Foundation of P. R. China (Grant No. 61571241), the Jiangsu Province Graduate Research and Innovation Project (Grant No. CXZZ130476), and the Science Research Fund of NUPT (Grant No. NY215169).