
A Convex Variational Model for Restoring Blurred Images with Large Rician Noise

Journal of Mathematical Imaging and Vision

Abstract

In this paper, a new convex variational model for restoring images degraded by blur and Rician noise is proposed. The new method is inspired by previous works in which a non-convex variational model, obtained by maximum a posteriori estimation, was presented. Based on the statistical properties of Rician noise, we propose adding an additional data-fidelity term to the non-convex model, which leads to a new strictly convex model under a mild condition. Due to the convexity, the solution of the new model is unique and independent of the initialization of the algorithm. We utilize a primal–dual algorithm to solve the model. Numerical results demonstrate that, with respect to both restoration quality and CPU time, our model outperforms some state-of-the-art models on both medical and natural images.


References

  1. Aelterman, J., Goossens, B., Pizurica, A., Philips, W.: Removal of correlated Rician noise in magnetic resonance imaging. In: EUSIPCO-2008, pp. 25–29 (2008)

  2. Ambrosio, L., Fusco, N., Pallara, D.: Functions of Bounded Variation and Free Discontinuity Problems. Oxford University Press, London (2000)

  3. Aubert, G., Aujol, J.F.: A variational approach to removing multiplicative noise. SIAM J. Appl. Math. 68, 925–946 (2008)

  4. Aubert, G., Kornprobst, P.: Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations, vol. 147. Springer, Heidelberg (2006)

  5. Abramowitz, M., Stegun, I.: Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables (Tenth printing) (1972)

  6. Bae, E., Yuan, J., Tai, X.: Global minimization for continuous multiphase partitioning problems using a dual approach. Int. J. Comput. Vis. 92, 112–129 (2011)

  7. Bao, P., Zhang, L.: Noise reduction for magnetic resonance images via adaptive multiscale products thresholding. IEEE Trans. Med. Imaging 22, 1089–1099 (2003)

  8. Basu, S., Fletcher, T., Whitaker, R.: Rician noise removal in diffusion tensor MRI. In: MICCAI-2006, pp. 117–125. Springer, Berlin (2006)

  9. Bowman, F.: Introduction to Bessel Functions. Dover Publications, Mineola (2012)

  10. Buades, A., Coll, B., Morel, J.: A non-local algorithm for image denoising. In: IEEE Conference on CVPR, vol. 2, pp. 60–65 (2005)

  11. Cai, J., Chan, R., Nikolova, M.: Two-phase approach for deblurring images corrupted by impulse plus Gaussian noise. Inverse Probl. Imaging 2, 187–204 (2008)

  12. Cai, J.F., Chan, R.H., Shen, Z.: A framelet-based image inpainting algorithm. Appl. Comput. Harmonic Anal. 24(2), 131–149 (2008)

  13. Cai, J., Dong, B., Osher, S., Shen, Z.: Image restoration: total variation, wavelet frames, and beyond. J. Am. Math. Soc. 25, 1033–1089 (2012)

  14. Chambolle, A.: An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 20, 89–97 (2004)

  15. Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40, 120–145 (2011)

  16. Chan, R., Chan, T., Yip, A.: Numerical methods and applications in total variation image restoration. In: Handbook of Mathematical Methods in Imaging, pp. 1059–1094 (2011)

  17. Chan, R., Chen, K.: Fast multigrid algorithm for a minimization problem in impulse noise removal. SIAM J. Sci. Comput. 30, 1474–1489 (2008)

  18. Chan, R., Yang, H., Zeng, T.: A two-stage image segmentation method for blurry images with Poisson or multiplicative Gamma noise. SIAM J. Imaging Sci. 7(1), 98–127 (2014)

  19. Chan, T., Shen, J.: Image Processing and Analysis: Variational, PDE, Wavelet, and Stochastic Methods. SIAM, Philadelphia (2005)

  20. Chen, H., Song, J., Tai, X.: A dual algorithm for minimization of the LLT model. Adv. Comput. Math. 31, 115–130 (2009)

  21. Chen, K., Tai, X.: A nonlinear multigrid method for total variation minimization from image restoration. J. Sci. Comput. 33(2), 115–138 (2007)

  22. Coupé, P., Yger, P., Prima, S., Hellier, P., Kervrann, C., Barillot, C.: An optimized blockwise nonlocal means denoising filter for 3-d magnetic resonance images. IEEE Trans. Med. Imaging 27, 425–441 (2008)

  23. Dong, Y., Hintermüller, M., Neri, M.: An efficient primal-dual method for L1-TV image restoration. SIAM J. Imaging Sci. 2, 1168–1189 (2009)

  24. Dong, Y., Zeng, T.: A convex variational model for restoring blurred images with multiplicative noise. SIAM J. Imaging Sci. 6, 1598–1625 (2013)

  25. Elad, M., Aharon, M.: Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 15, 3736–3745 (2006)

  26. Foi, A.: Noise estimation and removal in MR imaging: the variance-stabilization approach. In: IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 1809–1814 (2011)

  27. Gerig, G., Kubler, O., Kikinis, R., Jolesz, F.A.: Nonlinear anisotropic filtering of MRI data. IEEE Trans. Med. Imaging. 11, 221–232 (1992)

  28. Getreuer, P., Tong, M., Vese, L.: A variational model for the restoration of MR images corrupted by blur and Rician noise. In: Advances in Visual Computing, pp. 686–698 (2011)

  29. Goldstein, T., Osher, S.: The split Bregman method for L1-regularized problems. SIAM J. Imaging Sci. 2, 323–343 (2009)

  30. Gudbjartsson, H., Patz, S.: The Rician distribution of noisy MRI data. J. Magn. Reson. Med. 34(6), 910–914 (1995)

  31. Henkelman, R.: Measurement of signal intensities in the presence of noise in MR images. J. Med. Phys. 12, 232–233 (1985)

  32. Hore, A., Ziou, D.: Image quality metrics: PSNR vs. SSIM. In: International Conference on Pattern Recognition (ICPR), IEEE, pp. 2366–2369 (2010)

  33. Huang, Y., Ng, M., Wen, Y.: A new total variation method for multiplicative noise removal. SIAM J. Imaging Sci. 2, 20–40 (2009)

  34. Kornprobst, P., Deriche, R., Aubert, G.: Image sequence analysis via partial differential equations. J. Math. Imaging Vision. 11, 5–26 (1999)

  35. Le, T., Chartrand, R., Asaki, T.: A variational approach to reconstructing images corrupted by Poisson noise. J. Math. Imaging Vision. 27, 257–263 (2007)

  36. Lézoray, O., Grady, L.: Image Processing and Analysis with Graphs: Theory and Practice. CRC Press/Taylor and Francis, Boca Raton (2012)

  37. Liu, G., Huang, T., Liu, J., Lv, X.: Total variation with overlapping group sparsity for image deblurring under impulse noise. arXiv preprint arXiv:1312.6208 (2013)

  38. Manjón, J.V., Thacker, N.A., Lull, J.J., Garcia-Martí, G., Martí-Bonmatí, L., Robles, M.: Multicomponent MR image denoising. J. Biomed. Imaging 18 (2009)

  39. Nowak, R.: Wavelet-based Rician noise removal for magnetic resonance imaging. IEEE Trans. Image Process. 8, 1408–1419 (1999)

  40. Papoulis, A.: Probability, Random Variables and Stochastic Processes, 2nd edn. McGraw-Hill, Tokyo (1984)

  41. Payne, L., Weinberger, H.F.: An optimal Poincaré inequality for convex domains. Arch. Ration. Mech. Anal. 5, 286–292 (1960)

  42. Perona, P., Malik, J.: Scale-space and edge detection using anisotropic diffusion. IEEE Trans. Pattern Anal. Mach. Intell. 12, 629–639 (1990)

  43. Peyré, G., Bougleux, S., Cohen, L.: Non-local regularization of inverse problems. In: Computer Vision – ECCV 2008, pp. 57–68 (2008)

  44. Zaitsev, V.F., Polyanin, A.D.: Handbook of Exact Solutions for Ordinary Differential Equations. CRC Press, Boca Raton (2002)

  45. Rudin, L., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Physica D 60, 259–268 (1992)

  46. Sanders, M.: Characteristic Function of the Central Chi-Squared Distribution. Retrieved 2009-03-06

  47. Sijbers, J., Dekker, A., Dyck, D., Raman, E.: Estimation of signal and noise from Rician distributed data. In: SPCOM-1998, pp. 140–142 (1998)

  48. Snyman, J.: Practical Mathematical Optimization: An Introduction to Basic Optimization Theory and Classical and New Gradient-Based Algorithms, vol. 97. Springer, New York (2005)

  49. Tai, X.-C., Borok, S., Hahn, J.: Image denoising using TV-Stokes equation with an orientation-matching minimization. In: SSVM-2009, pp. 490–501. Springer, Berlin (2009)

  50. Wang, Z.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004)

  51. Wiest-Daesslé, N., Prima, S., Coupé, P., Morrissey, S., Barillot, C.: Non-local means variants for denoising of diffusion-weighted and diffusion tensor MRI. In: MICCAI-2007, pp. 344–351 (2007)

  52. Wiest-Daesslé, N., Prima, S., Coupé, P., Morrissey, S., Barillot, C.: Rician noise removal by non-local means filtering for low signal-to-noise ratio MRI: applications to DT-MRI. In: MICCAI-2008, pp. 171–179 (2008)

  53. Wood, J., Johnson, K.: Wavelet packet denoising of magnetic resonance images: importance of Rician noise at low SNR. Magn. Reson. Med. 41, 631–635 (1999)

  54. Wu, C., Tai, X.: Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models. SIAM J. Imaging Sci. 3, 300–339 (2010)

  55. Zhang, B., Fadili, J., Starck, J.: Wavelets, ridgelets, and curvelets for Poisson noise removal. IEEE Trans. Image Process. 17, 1093–1108 (2008)

  56. Zhu, M., Wright, S., Chan, T.: Duality-based algorithms for total-variation-regularized image restoration. Comput. Optim. Appl. 47, 377–400 (2010)

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant 11271049, in part by the Research Grants Council (RGC) under Grant 211911 and Grant 12302714, and in part by the Research Fellowship Grant, Hong Kong Baptist University, Hong Kong.

Author information

Correspondence to Tieyong Zeng.

Appendix

1.1 Appendix 1: Proof of Lemma 1

Proof

It suffices to prove

$$\begin{aligned} \sqrt{u}-\sqrt{|a|}-\sqrt{|b|}&\le (u^2+2au+b^2)^{\frac{1}{4}}\\&\le \sqrt{u}+\sqrt{|a|}+\sqrt{|b|}. \end{aligned}$$

The second “\(\le \)” is evident since

$$\begin{aligned} (\sqrt{u}+\sqrt{|a|}+\sqrt{|b|})^2 \ge u+|a|+|b| \ge \sqrt{u^2+2au+b^2}. \end{aligned}$$

For the first “\(\le \)”, one can assume that \(\sqrt{u}\ge \sqrt{|a|}+\sqrt{|b|}\): otherwise the left-hand side is negative and the inequality holds trivially. Under this assumption, we have

$$\begin{aligned}&(\sqrt{u}-\sqrt{|a|}-\sqrt{|b|})^2 \\&\quad = u+|a|+|b|-2(\sqrt{|a|}+\sqrt{|b|})\sqrt{u}+2\sqrt{|a|}\sqrt{|b|} \\&\quad \le u+|a|+|b|-2(\sqrt{|a|}+\sqrt{|b|})^2+2\sqrt{|a|}\sqrt{|b|} \\&\quad \le u-|a|. \end{aligned}$$

Moreover, since \(u\ge |a|\) (by the assumption \(\sqrt{u}\ge \sqrt{|a|}+\sqrt{|b|}\)) and \(|b|\ge |a|\), there holds that

$$\begin{aligned} u-|a|=(u^2-2|a|u+a^2)^{\frac{1}{2}}\le (u^2+2au+b^2)^{\frac{1}{2}}. \end{aligned}$$

Thus, \(\sqrt{u}-\sqrt{|a|}-\sqrt{|b|}\le (u^2+2au+b^2)^{\frac{1}{4}}\).

This completes the proof. \(\square \)
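As an illustration only (not part of the proof), the two-sided bound above can be spot-checked numerically. The sketch below is ours; it samples a small grid with \(u\ge 0\) and \(|b|\ge |a|\), matching the setting of the argument.

```python
import itertools

def bounds(u, a, b):
    """Lower bound, middle quantity, and upper bound from Lemma 1."""
    lower = u ** 0.5 - abs(a) ** 0.5 - abs(b) ** 0.5
    middle = (u * u + 2 * a * u + b * b) ** 0.25
    upper = u ** 0.5 + abs(a) ** 0.5 + abs(b) ** 0.5
    return lower, middle, upper

def check_lemma1():
    """Check lower <= middle <= upper over a small deterministic grid
    of u >= 0 and (a, b) with |b| >= |a|."""
    us = [0.0, 0.3, 1.0, 7.5, 100.0]
    ab = [(0.0, 0.0), (0.5, 1.0), (-0.5, 1.0), (2.0, -3.0), (-2.0, -3.0)]
    return all(lo <= mid <= hi
               for u, (a, b) in itertools.product(us, ab)
               for lo, mid, hi in [bounds(u, a, b)])
```

Note that \(|b|\ge |a|\) together with \(u\ge 0\) guarantees \(u^2+2au+b^2\ge (u-|a|)^2\ge 0\), so the fourth root is always real.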

1.2 Appendix 2: Proof of Lemma 2

Proof

When \(t=0\), we have \(I_0(0)=1, I_1(0)=I_2(0)=0\), so \(h(t)=0\). When \(t>0\), by the recurrence formulas for modified Bessel functions in [44], we have

$$\begin{aligned} I_2(x)=I_0(x)-\frac{2}{x}I_1(x). \end{aligned}$$

Denoting \(y(t)=\frac{I_1(t)}{I_0(t)}\), \(h(t)\) can be expressed as follows:

$$\begin{aligned} h(t)=2t^{\frac{3}{2}}\left[ 1-\frac{1}{t}y(t)-y(t)^2\right] . \end{aligned}$$

It’s sufficient to prove

$$\begin{aligned} y(t)>-\frac{1}{2t}+\sqrt{1-\frac{1}{2t^{\frac{3}{2}}}+\frac{1}{4t^2}} \end{aligned}$$
(30)

for any \(0<t\le 3902\), in order to conclude \(0\le h(t)<1\).
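As a numerical illustration of the claim (our own sketch, not the paper's MATLAB computation): the Bessel values below come from the power series, and we sample only \(0<t\le 30\) to stay comfortably within floating-point range.

```python
import math

def bessel_i(nu, t):
    """I_nu(t) summed from its power series; adequate for moderate t."""
    term = (t / 2.0) ** nu / math.factorial(nu)
    total = term
    k = 1
    while term > 1e-17 * total:
        term *= (t / 2.0) ** 2 / (k * (k + nu))
        total += term
        k += 1
    return total

def h(t):
    """h(t) = 2 t^{3/2} [1 - y(t)/t - y(t)^2] with y = I_1/I_0."""
    y = bessel_i(1, t) / bessel_i(0, t)
    return 2.0 * t ** 1.5 * (1.0 - y / t - y * y)

def check_h_range():
    """Verify 0 <= h(t) < 1 on a sample grid 0 < t <= 30."""
    ts = [0.05 * i for i in range(1, 601)]
    return all(0.0 <= h(t) < 1.0 for t in ts)
```

On this grid, \(h\) peaks below 1 (near \(t\approx 1.8\)) and decays like \(t^{-1/2}\) for large \(t\), consistent with the lemma.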

Based on the continued-fraction expression of \(\frac{I_\nu (t)}{I_{\nu -1}(t)}\) in [44] (valid when \(I_{\nu -1}(t)\ne 0\)), taking \(\nu =1\) and noting that \(I_0(t)>0\), we obtain

$$\begin{aligned} \frac{I_1(t)}{I_0(t)}&= \frac{\frac{1}{2}t}{1+}\frac{\frac{1}{4}t^2/2}{1+}\frac{\frac{1}{4}t^2/6}{1+}\frac{\frac{1}{4}t^2/12}{1+}\frac{\frac{1}{4}t^2/20}{1+}\ldots \\&> \frac{\frac{1}{2}t}{1+}\frac{\frac{1}{4}t^2/2}{1+}\frac{\frac{1}{4}t^2/6}{1+}\frac{\frac{1}{4}t^2/12}{1+}\\&= \frac{4t(48+3t^2)}{8(48+3t^2)+t^2(48+t^2)}. \end{aligned}$$

So it suffices to prove

$$\begin{aligned} \frac{4t(48+3t^2)}{8(48+3t^2)+t^2(48+t^2)}\ge -\frac{1}{2t}+\sqrt{1-\frac{1}{2t^{\frac{3}{2}}}+\frac{1}{4t^2}}. \end{aligned}$$
(31)
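The truncated continued fraction can be compared against series values of \(I_1(t)/I_0(t)\). The following is a float-level sketch of ours (helper names are not from the paper); we sample \(t\in[0.5,4]\) because for very small \(t\) the two sides agree to within machine precision.

```python
import math

def bessel_i(nu, t):
    """I_nu(t) by power series; fine for the range 0 < t <= 4 used here."""
    term = (t / 2.0) ** nu / math.factorial(nu)
    total = term
    k = 1
    while term > 1e-17 * total:
        term *= (t / 2.0) ** 2 / (k * (k + nu))
        total += term
        k += 1
    return total

def cf_lower_bound(t):
    """The truncated continued fraction 4t(48+3t^2) / (8(48+3t^2) + t^2(48+t^2))."""
    s = 48 + 3 * t * t
    return 4 * t * s / (8 * s + t * t * (48 + t * t))

def check_cf_bound():
    """Verify the lower bound strictly under I_1(t)/I_0(t) on [0.5, 4]."""
    ts = [0.5 + 0.01 * i for i in range(351)]
    return all(cf_lower_bound(t) < bessel_i(1, t) / bessel_i(0, t) for t in ts)
```

At \(t=2\), for instance, the bound gives \(480/688\approx 0.69767\) against the true ratio \(\approx 0.69778\): tight but strictly below.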

Letting \(z=\sqrt{t}\), (31) can be written as

$$\begin{aligned} Z(z)&= (2z^{16}+48z^{15}+288z^{12}+11904z^{8}\\&\quad {}+110592z^4+294912)\\&\quad {}-(4z^{19}+1152z^{11}+294912z^{3}) \\&\equiv A(z)-B(z)\ge 0. \end{aligned}$$

Select \(z_i=0.01(i-1)\) for \(i=1,\ldots ,201\). Using MATLAB, we verify that \(Z(z_i)=A(z_i)-B(z_i)>0\) for all \(i=1,\ldots ,201\). Meanwhile, since \(A\) and \(B\) are increasing on \([0,2]\), for any \(z\in [z_{i-1}, z_i]\), \(i=2,\ldots ,201\), with \(|z_i-z_{i-1}|=0.01\),

$$\begin{aligned} Z(z)>A(z_{i-1})-B(z_i). \end{aligned}$$

Calculating \(A(z_{i-1})-B(z_i)\) using MATLAB, we get, for \(i=2,\ldots ,201\),

$$\begin{aligned} A(z_{i-1})-B(z_i)>0. \end{aligned}$$

Thus, we conclude that (31) holds for \(0<z\le 2\), i.e., \(0<t\le 4\).
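This grid computation can be reproduced in exact rational arithmetic (our sketch; Python's `fractions` stands in for MATLAB here, removing any floating-point doubt from the check):

```python
from fractions import Fraction

def A(z):
    # The positive-coefficient part of Z(z)
    return 2*z**16 + 48*z**15 + 288*z**12 + 11904*z**8 + 110592*z**4 + 294912

def B(z):
    # The subtracted part of Z(z)
    return 4*z**19 + 1152*z**11 + 294912*z**3

def grid_check():
    """Verify Z(z_i) > 0 at z_i = 0.01 (i-1), i = 1..201, and the cell bound
    A(z_{i-1}) - B(z_i) > 0; since A and B are increasing on [0, 2], this
    forces Z(z) = A(z) - B(z) > 0 on all of (0, 2]."""
    zs = [Fraction(i - 1, 100) for i in range(1, 202)]
    ok_nodes = all(A(z) - B(z) > 0 for z in zs)
    ok_cells = all(A(zs[i - 1]) - B(zs[i]) > 0 for i in range(1, 201))
    return ok_nodes and ok_cells
```

The cell-wise bound is what turns 201 point evaluations into a statement about the whole interval.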

In [5], Formulas (9.8.2) and (9.8.4) give polynomial approximations of \(t^{\frac{1}{2}}e^{-t}I_0(t)\) and \(t^{\frac{1}{2}}e^{-t}I_1(t)\), respectively, for \(t\ge 3.75\), as follows

$$\begin{aligned}&t^{\frac{1}{2}}e^{-t}I_0(t) \\&\quad = 0.39894228+0.01328592w^{-1}+0.00225319w^{-2}\\&\quad \quad -0.00157565w^{-3}+0.00916281w^{-4}-0.02057706w^{-5}\\&\quad \quad +0.02635537w^{-6}-0.01647633w^{-7}+0.00392377w^{-8}\\&\quad \quad +\epsilon _0, \quad |\epsilon _0|<1.9\times 10^{-7}\\&t^{\frac{1}{2}}e^{-t}I_1(t)\\&\quad = 0.39894228-0.03988024w^{-1}-0.00362018w^{-2}\\&\quad \quad +0.00163801w^{-3}-0.01031555w^{-4}+0.02282967w^{-5}\\&\quad \quad -0.02895312w^{-6}+0.01787654w^{-7}-0.00420059w^{-8}\\&\quad \quad +\epsilon _1, \quad |\epsilon _1|<2.2\times 10^{-7}, \end{aligned}$$

where \(w=\frac{t}{3.75}\). Denoting \(v=w^{-1}\), we define

$$\begin{aligned} C_0(v)&= 0.39894228+0.01328592v+0.00225319v^2\\&\quad {}+0.00916281v^4+0.02635537v^6\\&\quad {}+0.00392377v^8+|\epsilon _0|, \\ D_0(v)&= 0.00157565v^3+0.02057706v^5+0.01647633v^7, \\ C_1(v)&= 0.39894228+0.00163801v^3+0.02282967v^5\\&\quad {}+0.01787654v^7, \\ D_1(v)&= 0.03988024v+0.00362018v^2+0.01031555v^4\\&\quad {}+0.02895312v^6+0.00420059v^8+|\epsilon _1| . \end{aligned}$$

Thus,

$$\begin{aligned} \frac{I_1(t)}{I_0(t)}>\frac{C_1(v)-D_1(v)}{C_0(v)-D_0(v)}. \end{aligned}$$

What we want to prove thus reduces to

$$\begin{aligned}&\frac{C_1(v)-D_1(v)}{C_0(v)-D_0(v)}\nonumber \\&\quad \ge -\frac{v}{3.75}+\sqrt{1-\frac{1}{2}\left( \frac{v}{3.75}\right) ^{\frac{3}{2}}+\frac{1}{4}\left( \frac{v}{3.75}\right) ^2}. \end{aligned}$$
(32)
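A float-level sanity check of this lower bound (our sketch; helper names are ours, and a small slack absorbs the fits' \(\pm 2\times 10^{-7}\) error budget where the bound is nearly tight):

```python
import math

EPS0, EPS1 = 1.9e-7, 2.2e-7  # error bounds of the polynomial fits

def bessel_i(nu, t):
    """I_nu(t) by power series; adequate for the moderate t sampled below."""
    term = (t / 2.0) ** nu / math.factorial(nu)
    total = term
    k = 1
    while term > 1e-17 * total:
        term *= (t / 2.0) ** 2 / (k * (k + nu))
        total += term
        k += 1
    return total

def ratio_lower_bound(t):
    """(C1(v) - D1(v)) / (C0(v) - D0(v)) with v = 3.75 / t."""
    v = 3.75 / t
    c0 = (0.39894228 + 0.01328592 * v + 0.00225319 * v ** 2 + 0.00916281 * v ** 4
          + 0.02635537 * v ** 6 + 0.00392377 * v ** 8 + EPS0)
    d0 = 0.00157565 * v ** 3 + 0.02057706 * v ** 5 + 0.01647633 * v ** 7
    c1 = 0.39894228 + 0.00163801 * v ** 3 + 0.02282967 * v ** 5 + 0.01787654 * v ** 7
    d1 = (0.03988024 * v + 0.00362018 * v ** 2 + 0.01031555 * v ** 4
          + 0.02895312 * v ** 6 + 0.00420059 * v ** 8 + EPS1)
    return (c1 - d1) / (c0 - d0)

def check_ratio_bound():
    """Sample 3.75 <= t <= 30; slack 1e-6 covers the fits' error budget
    at points where the bound is nearly attained."""
    ts = [3.75 + 0.25 * i for i in range(106)]
    return all(ratio_lower_bound(t) <= bessel_i(1, t) / bessel_i(0, t) + 1e-6
               for t in ts)
```

At \(t=3.75\) (i.e. \(v=1\)) the bound evaluates to about \(0.85317\), essentially on top of the true ratio, which is why the \(\epsilon\)-terms are kept on the safe side of each fit.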

Denoting \(\alpha =v^{\frac{1}{2}}\), we simplify (32) to the inequality

$$\begin{aligned} E(\alpha )-F(\alpha )\ge 0, \end{aligned}$$
(33)

where

$$\begin{aligned} E(\alpha )&\equiv 0.028172379280\alpha ^{11}+0.079495137490\alpha ^{15}\\&\quad {}+0.011465887700\alpha ^{19}+0.0037800230360\alpha ^{23}\\&\quad {}+0.0055948339250\alpha ^{27}+0.0018524257440\alpha ^{31}\\&\quad {}+0.010820557840\alpha ^{18}+0.018990475470\alpha ^{20}\\&\quad {}+0.039847515640\alpha ^{24}+0.021617041410\alpha ^{28}\\&\quad {}+0.0022168115010\alpha ^{32}+0.6164050299\alpha ^3\\&\quad {}+0.000059628339330\alpha ^{35}+0.041056084680\alpha ^5\\&\quad {}+0.0076464389140\alpha ^7+0.1409786028\alpha ^6\\&\quad {}+1.974703980\alpha ^{10}+1.628367766\alpha ^{14}\\&\quad {}+0.0011738925590\alpha ^2, \end{aligned}$$

and

$$\begin{aligned} F(\alpha )&\equiv 0.0016658780970\alpha ^{21}+0.0054180558250\alpha ^{25}\\&\quad {}+0.0039890134660\alpha ^{29}+0.00050077155190\alpha ^{33}\\&\quad {}+0.8804467730\alpha ^8+2.545996803\alpha ^{12}\\&\quad {}+0.4240716899\alpha ^{16}+0.033477345850\alpha ^{22}\\&\quad {}+0.034871745910\alpha ^{26}+0.0089576010460\alpha ^{30}\\&\quad {}+0.00024723223530\alpha ^{34}+1.924496377\cdot 10^{-15}\alpha ^{36}\\&\quad {}+0.0046371849340\alpha ^9+0.062671663900\alpha ^{13}\\&\quad {}+0.048673748070\alpha ^{17}+0.000018401211970\\&\quad {}+0.3432139728\alpha ^4. \end{aligned}$$

Similarly, select \(\alpha _i=0.03+0.001i\) for \(i=1,\ldots ,970\). Calculating \(E(\alpha _i)-F(\alpha _i)\) using MATLAB, we get

$$\begin{aligned} E(\alpha _i)-F(\alpha _i)>0, \quad \quad \text {for} \quad i=1,\ldots ,970. \end{aligned}$$

Moreover, since \(E\) and \(F\) are increasing for \(\alpha >0\), for any \(\alpha \in [\alpha _{i-1}, \alpha _i]\), \(i=2,\ldots ,970\),

$$\begin{aligned} E(\alpha )-F(\alpha )>E(\alpha _{i-1})-F(\alpha _i). \end{aligned}$$

Again, we calculate that \(E(\alpha _{i-1})-F(\alpha _i)>0\) for \(i=2,\ldots ,970\) using MATLAB. Thus, (33) holds for \(0.031\le \alpha \le 1\), which implies that (32) holds for \(0.000961\le v \le 1\); that is, (30) holds for \(3.75\le t \le 3902\).

Overall, we have proved that \(0\le h(t)<1\) on \([0,3902]\). \(\square \)

1.3 Appendix 3: Proof of Lemma 3

Proof

In order to prove that the function \(I_0(x)\) is strictly log-convex in \((0,+\infty )\), it suffices to show that the second derivative of its logarithm is positive in \((0,+\infty )\). Note that

$$\begin{aligned} (\log {I_0(x)})^{\prime \prime }=\frac{\frac{1}{2}(I_0(x)+I_2(x))I_0(x)-I_1(x)^2}{I_0^2(x)}. \end{aligned}$$

From (5), we get:

$$\begin{aligned} I_1(x)&= \frac{1}{\pi }\int _0^\pi \cos {\theta }e^{x\cos {\theta }} d\theta ,\\ I_2(x)&= \frac{1}{\pi }\int _0^\pi \cos {2\theta }e^{x\cos {\theta }} d\theta . \end{aligned}$$

Then, using Cauchy–Schwarz inequality, we obtain

$$\begin{aligned}&\frac{1}{2}(I_0(x)+I_2(x))I_0(x) \\&\quad = \frac{1}{\pi }\int ^\pi _0 \frac{1+\cos {2\theta }}{2}e^{x\cos {\theta }} d\theta \cdot \frac{1}{\pi }\int ^\pi _0 e^{x\cos {\theta }} d\theta \\&\quad = \frac{1}{\pi }\int ^\pi _0 \cos ^2{\theta }e^{x\cos {\theta }} d\theta \cdot \frac{1}{\pi }\int ^\pi _0e^{x\cos {\theta }} d\theta \\&\quad \ge \left( \frac{1}{\pi }\int ^\pi _0 \cos {\theta }e^{x\cos {\theta }} d\theta \right) ^2=(I_1(x))^2. \end{aligned}$$

Since \(\cos {\theta }e^{\frac{1}{2}x\cos {\theta }}\) and \(e^{\frac{1}{2}x\cos {\theta }}\) are linearly independent as functions of \(\theta \), the inequality above is strict. This completes the proof. \(\square \)
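The strict inequality can also be checked numerically via the same integral representation (5); the quadrature-based sketch below is ours (trapezoidal rule, with node count chosen for convenience):

```python
import math

def bessel_i_int(nu, t, n=4000):
    """I_nu(t) via (1/pi) * integral_0^pi cos(nu*theta) e^{t cos(theta)} dtheta,
    evaluated with the trapezoidal rule (very accurate here since the
    integrand has vanishing derivative at both endpoints)."""
    h = math.pi / n
    s = 0.0
    for k in range(n + 1):
        theta = k * h
        w = 0.5 if k in (0, n) else 1.0
        s += w * math.cos(nu * theta) * math.exp(t * math.cos(theta))
    return s * h / math.pi

def log_i0_second_derivative(t):
    """((I_0 + I_2)/2 * I_0 - I_1^2) / I_0^2, the formula from the proof."""
    i0, i1, i2 = (bessel_i_int(n, t) for n in (0, 1, 2))
    return (0.5 * (i0 + i2) * i0 - i1 * i1) / (i0 * i0)

def check_logconvexity():
    """Positivity of (log I_0)'' on a sample of t values."""
    return all(log_i0_second_derivative(t) > 0 for t in [0.1, 0.5, 1, 2, 5, 10, 20])
```

At \(t=0\) the formula gives exactly \(1/2\), matching the Taylor expansion \(\log I_0(t)\approx t^2/4\) near the origin; for large \(t\) the value decays like \(1/(2t^2)\) but stays positive.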

1.4 Appendix 4: Proof of Lemma 4

Proof

Since \(g(x)\) is strictly convex on \((0,+\infty )\), \(g^{\prime }(x)\) is increasing on \((0,+\infty )\).

By symmetry, let us assume that \(bc\ge ad\). Then we have

$$\begin{aligned} g(bd)-g(bc)&\ge g^{\prime }(bc)b(d-c) \\&\ge g^{\prime }(ad)b(d-c) \\&> g^{\prime }(ad)a(d-c) \\&\ge g(ad)-g(ac). \end{aligned}$$

This completes the proof. \(\square \)
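For intuition only: with a concrete choice of strictly convex, increasing \(g\) (here \(g=\exp\), our assumption purely for illustration; the lemma's actual hypotheses are as stated above), the conclusion \(g(bd)-g(bc)>g(ad)-g(ac)\) can be checked on random samples with \(b>a>0\) and \(d>c>0\):

```python
import math
import random

def lemma4_gap(g, a, b, c, d):
    """g(bd) - g(bc) - (g(ad) - g(ac)); positive under the lemma's hypotheses."""
    return g(b * d) - g(b * c) - (g(a * d) - g(a * c))

def check_lemma4(trials=1000, seed=0):
    """Random sampling with b > a > 0 and d > c > 0, g = exp."""
    rng = random.Random(seed)
    g = math.exp  # strictly convex with increasing, positive derivative
    for _ in range(trials):
        a = rng.uniform(0.01, 2.0)
        b = a + rng.uniform(0.01, 2.0)   # b > a > 0
        c = rng.uniform(0.01, 2.0)
        d = c + rng.uniform(0.01, 2.0)   # d > c > 0
        if lemma4_gap(g, a, b, c, d) <= 0:
            return False
    return True
```

Equivalently, the lemma says \((x,y)\mapsto g(xy)\) is supermodular on the positive quadrant, which is what the chain of inequalities in the proof establishes.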


Cite this article

Chen, L., Zeng, T. A Convex Variational Model for Restoring Blurred Images with Large Rician Noise. J Math Imaging Vis 53, 92–111 (2015). https://doi.org/10.1007/s10851-014-0551-y
