
Denoising of Image Gradients and Total Generalized Variation Denoising

Journal of Mathematical Imaging and Vision

Abstract

We revisit total variation denoising and study an augmented model in which we assume that an estimate of the image gradient is available. We show that this increases the reconstruction quality and derive that the resulting model resembles the total generalized variation denoising method, thus providing a new motivation for this model. Further, we propose a constrained denoising model and develop a variational denoising model that is essentially parameter free, i.e., all model parameters are estimated directly from the noisy image. Moreover, we employ Chambolle–Pock's primal-dual method as well as the Douglas–Rachford method for the new models. For the latter, one has to solve large discretizations of partial differential equations. We propose to do this inexactly using the preconditioned conjugate gradient method and derive suitable preconditioners. Numerical experiments show that the resulting method has good denoising properties and that preconditioning increases convergence speed significantly. Finally, we analyze the duality gap of different formulations of the TGV denoising problem and derive a simple stopping criterion.


Notes

  1. We used the MATLAB function ichol with default parameters in the experiments. Varying the parameters did not lead to significantly different results.

References

  1. Acosta, G., Durán, R.G.: An optimal Poincaré inequality in \(L^1\) for convex domains. Proc. Am. Math. Soc. 132(1), 195–202 (2004)


  2. Alt, H.W.: Linear Functional Analysis: An Application-Oriented Introduction. Universitext. Springer, London (2016). Translated from the German by R. Nürnberg

  3. Bourgain, J., Brezis, H.: On the equation \(\operatorname{div} Y = f\) and application to control of phases. J. Am. Math. Soc. 16(2), 393–426 (2003)


  4. Bredies, K., Kunisch, K., Pock, T.: Total generalized variation. SIAM J. Imaging Sci. 3(3), 492–526 (2010)


  5. Bredies, K., Kunisch, K., Valkonen, T.: Properties of \(L^1-TGV^2\): the one-dimensional case. J. Math. Anal. Appl. 398(1), 438–454 (2013)


  6. Bredies, K., Sun, H.P.: Preconditioned Douglas–Rachford algorithms for TV- and TGV-regularized variational imaging problems. J. Math. Imaging Vis. 52(3), 317–344 (2015)


  7. Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40(1), 120–145 (2011)


  8. Ciarlet, P.G.: Linear and nonlinear functional analysis with applications, pp. xiv+832. Society for Industrial and Applied Mathematics, Philadelphia, PA (2013)

  9. Combettes, P.L.: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 53(5–6), 475–504 (2004)


  10. Douglas, J., Jr., Rachford, H.H., Jr.: On the numerical solution of heat conduction problems in two and three space variables. Trans. Am. Math. Soc. 82, 421–439 (1956)


  11. Eckstein, J., Bertsekas, D.P.: On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1–3), 293–318 (1992)


  12. Ekeland, I., Temam, R.: Convex analysis and variational problems. Translated from the French, Studies in Mathematics and its Applications, vol. 1, pp. ix+402. North-Holland Publishing Co., Amsterdam-Oxford; American Elsevier Publishing Co., Inc., New York (1976)

  13. Knoll, F., Bredies, K., Pock, T., Stollberger, R.: Second order total generalized variation (TGV) for MRI. Magn. Reson. Med. 65(2), 480–491 (2011)


  14. Komander, B., Lorenz, D., Fischer, M., Petz, M., Tutsch, R.: Data fusion of surface normals and point coordinates for deflectometric measurements. J. Sens. Sens. Syst. 3, 281–290 (2014)


  15. Komander, B., Lorenz, D.A.: Denoising of image gradients and constrained total generalized variation. In: Lauze, F., Dong, Y., Dahl, A.B. (eds.) Scale Space and Variational Methods in Computer Vision: 6th International Conference, SSVM 2017, Kolding, Denmark, June 4–8, 2017, pp. 435–446. Springer, Cham (2017)

  16. Lions, P.-L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16(6), 964–979 (1979)


  17. Liu, X., Tanaka, M., Okutomi, M.: Noise level estimation using weak textured patches of a single noisy image. In: 19th IEEE International Conference on Image Processing (ICIP), pp. 665–668 (2012)

  18. Liu, X., Tanaka, M., Okutomi, M.: Single-image noise level estimation for blind denoising. IEEE Trans. Image Process. 22(12), 5226–5237 (2013)


  19. Lorenz, D.A., Worliczek, N.: Necessary conditions for variational regularization schemes. Inverse Prob. 29, 075016 (2013)


  20. Lysaker, M., Osher, S., Tai, X.-C.: Noise removal using smoothed normals and surface fitting. IEEE Trans. Image Process. 13(10), 1345–1357 (2004)


  21. Mardal, K.-A., Winther, R.: Preconditioning discretizations of systems of partial differential equations. Numer. Linear Algebra Appl. 18(1), 1–40 (2011)


  22. O’Connor, D., Vandenberghe, L.: Primal-dual decomposition by operator splitting and applications to image deblurring. SIAM J. Imaging Sci. 7(3), 1724–1754 (2014)


  23. Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenom. 60(1), 259–268 (1992)


  24. Songsiri, J.: Projection onto an \(l_1\)-norm ball with application to identification of sparse autoregressive models. In: ASEAN Symposium on Automatic Control, Vietnam (2011)


Author information

Corresponding author

Correspondence to Dirk A. Lorenz.

Additional information

This material was based upon work partially supported by the National Science Foundation under Grant DMS-1127914 to the Statistical and Applied Mathematical Sciences Institute. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Appendix


1.1 Discretization

We equip the space of \(M\times N\) images with the inner product and induced norm

$$\begin{aligned}&\langle u,v\rangle =\sum _{i=1}^M\sum _{j=1}^N u_{i,j}v_{i,j},\quad \Vert u\Vert _2 =\left( \sum _{i=1}^M\sum _{j=1}^N u_{i,j}^2\right) ^{\frac{1}{2}}. \end{aligned}$$

For \(u\in \mathbf {R}^{M\times N}\), we define the discrete partial forward derivatives (with constant boundary extension \(u_{M+1,j}=u_{M,j}\) and \(u_{i,N+1}=u_{i,N}\)) as

$$\begin{aligned} (\partial _1u)_{i,j}&=u_{i+1,j}-u_{i,j}, \\ (\partial _2u)_{i,j}&=u_{i,j+1}-u_{i,j}. \end{aligned}$$

The discrete gradient \(\nabla :\mathbf {R}^{M\times N}\rightarrow \mathbf {R}^{M\times N\times 2}\) is defined by

$$\begin{aligned} (\nabla u)_{i,j,k}=(\partial _ku)_{i,j}. \end{aligned}$$

The symmetrized gradient \(\mathcal {E}\) maps \(\mathbf {R}^{M\times N \times 2}\) to \(\mathbf {R}^{M\times N\times 4}\). To simplify notation, the four blocks are written in one plane:

$$\begin{aligned} \mathcal {E}v&=\frac{1}{2}\left( \nabla v+(\nabla v)^T\right) \\&= \begin{pmatrix} \partial _1v_1 &{} \frac{1}{2}(\partial _1v_2+\partial _2v_1)\\ \frac{1}{2}(\partial _1v_2+\partial _2v_1) &{} \partial _2v_2 \end{pmatrix}. \end{aligned}$$
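For illustration, the discrete operators above can be transcribed directly; the following NumPy sketch (our own naming, not part of the paper's MATLAB implementation) realizes \(\partial _1,\partial _2\), \(\nabla \) and \(\mathcal {E}\) with the constant boundary extension:

```python
import numpy as np

def d1(u):
    """Forward difference in the first index; the constant extension
    makes the last row of the difference zero."""
    out = np.zeros_like(u, dtype=float)
    out[:-1] = u[1:] - u[:-1]
    return out

def d2(u):
    """Forward difference in the second index, same boundary rule."""
    out = np.zeros_like(u, dtype=float)
    out[:, :-1] = u[:, 1:] - u[:, :-1]
    return out

def grad(u):
    """Discrete gradient: (M, N) -> (M, N, 2)."""
    return np.stack([d1(u), d2(u)], axis=-1)

def sym_grad(v):
    """Symmetrized gradient E: (M, N, 2) -> (M, N, 4); the 2x2 matrix at
    each pixel is stored as [d1 v1, m, m, d2 v2] with
    m = (d1 v2 + d2 v1) / 2."""
    v1, v2 = v[..., 0], v[..., 1]
    m = 0.5 * (d1(v2) + d2(v1))
    return np.stack([d1(v1), m, m, d2(v2)], axis=-1)

# Small demo on a linear ramp image.
u = np.arange(12.0).reshape(3, 4)
g = grad(u)          # g[..., 0] is 4 on the first two rows, 0 on the last
e = sym_grad(g)      # the four blocks of E(grad u), written in one plane
```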

The norm \(\Vert |\cdot |\Vert _1\) in the space \(\mathbf {R}^{M\times N\times K}\) reflects that for \(v\in \mathbf {R}^{M\times N \times K}\) we consider \(v_{i,j}\) as a vector in \(\mathbf {R}^K\) on which we use the Euclidean norm:

$$\begin{aligned}&\Vert |v|\Vert _1:=\sum _{i=1}^M\sum _{j=1}^N|v_{i,j}|\\&\quad \text {with }|v_{i,j}|:=\left( \sum _{k=1}^Kv_{i,j,k}^2\right) ^{\frac{1}{2}}. \end{aligned}$$

The discrete divergence is the negative adjoint of \(\nabla \), i.e., the unique linear mapping \( \text {div}: \mathbf {R}^{M\times N \times 2} \rightarrow \mathbf {R}^{M\times N}\), which satisfies

$$\begin{aligned} \langle \nabla u,v\rangle =-\langle u,\text {div}\, v\rangle ,\quad \text {for all }u,v. \end{aligned}$$
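Since \(\text {div}\) is characterized by this adjoint relation, it can be realized by backward differences with matching boundary rows, and the identity can be checked numerically. A small self-contained sketch (repeating the forward differences for completeness; our own naming):

```python
import numpy as np

def d1(u):
    out = np.zeros_like(u); out[:-1] = u[1:] - u[:-1]; return out

def d2(u):
    out = np.zeros_like(u); out[:, :-1] = u[:, 1:] - u[:, :-1]; return out

def div(v):
    """Discrete divergence: the negative adjoint of the forward-difference
    gradient, i.e. backward differences with the matching boundary rows."""
    p, q = v[..., 0], v[..., 1]
    dp = np.zeros_like(p)
    dp[0] = p[0]
    dp[1:-1] = p[1:-1] - p[:-2]
    dp[-1] = -p[-2]
    dq = np.zeros_like(q)
    dq[:, 0] = q[:, 0]
    dq[:, 1:-1] = q[:, 1:-1] - q[:, :-2]
    dq[:, -1] = -q[:, -2]
    return dp + dq

# Verify <grad u, v> = -<u, div v> on random data.
rng = np.random.default_rng(0)
u = rng.standard_normal((5, 7))
v = rng.standard_normal((5, 7, 2))
grad_u = np.stack([d1(u), d2(u)], axis=-1)
lhs = np.sum(grad_u * v)      # <grad u, v>
rhs = -np.sum(u * div(v))     # -<u, div v>
```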

The adjoint of the symmetrized gradient is the unique linear mapping \( {{\mathrm{\mathcal {E}}}}^*: \mathbf {R}^{M\times N \times 4} \rightarrow \mathbf {R}^{M\times N \times 2}\), which satisfies

$$\begin{aligned} \langle {{\mathrm{\mathcal {E}}}}v,p\rangle =\langle v,{{\mathrm{\mathcal {E}}}}^* p\rangle ,\quad \text {for all }v,p. \end{aligned}$$

1.2 Prox Operators and Duality Gaps for Considered Problems

To carry out experiments with the methods proposed in the previous sections, in this section we state all primal and dual functionals for the general optimization problem \(\min _x F(x) + G(Kx)\), study the corresponding primal-dual gaps, and discuss how feasibility of the iterates can be ensured throughout the algorithm by reformulating the problems with a substitution variable. We also give the proximal operators needed for the Chambolle–Pock and Douglas–Rachford algorithms.
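The generic primal-dual scheme with a gap-based stopping rule can be sketched as follows. This is a minimal illustration of Chambolle–Pock's method; the function names and the tiny demo instance are ours, not the paper's implementation:

```python
import numpy as np

def chambolle_pock(K, Kt, prox_tau_F, prox_sigma_Gstar, gap,
                   x0, y0, tau, sigma, tol=1e-6, max_iter=10000):
    """Chambolle-Pock iteration for min_x F(x) + G(Kx), stopped once the
    primal-dual gap drops below tol. Extrapolation parameter theta = 1;
    the step sizes must satisfy tau * sigma * ||K||^2 < 1."""
    x, y = x0.astype(float), y0.astype(float)
    x_bar = x.copy()
    for _ in range(max_iter):
        y = prox_sigma_Gstar(y + sigma * K(x_bar))       # dual ascent step
        x_new = prox_tau_F(x - tau * Kt(y))              # primal descent step
        x_bar = 2.0 * x_new - x                          # extrapolation
        x = x_new
        if gap(x, y) < tol:
            break
    return x, y

# Tiny demo instance (K = Id): min_x 0.5*||x - b||_2^2 + ||x||_1,
# whose solution is the soft thresholding of b.
b = np.array([3.0, 0.5, -2.0])
tau = sigma = 0.99
prox_F = lambda x: (x + tau * b) / (1.0 + tau)   # prox of 0.5*||. - b||^2
proj_inf = lambda y: np.clip(y, -1.0, 1.0)       # prox of G* (unit inf-ball)

def pd_gap(x, y):
    # F(x) + G(x) + F*(-y) + G*(y); y is feasible after the projection.
    return (0.5 * np.sum((x - b) ** 2) + np.sum(np.abs(x))
            + 0.5 * np.sum(y ** 2) - y @ b)

x, y = chambolle_pock(lambda z: z, lambda z: z, prox_F, proj_inf,
                      pd_gap, np.zeros(3), np.zeros(3), tau, sigma)
```

Since the demo's data term is strongly convex, the duality gap bounds the primal suboptimality, which is what makes it usable as a stopping criterion.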

1.2.1 DGTV

In Sect. 2, we formulated a two-stage denoising method in two ways, first as a constrained version, (4) and (5). In this formulation, the primal functionals for the first problem (4) are

$$\begin{aligned} F(v)&= {{\mathrm{\mathcal {I}}}}_{{\Vert {|{{\mathrm{\nabla }}}u_0 -\,\cdot \,|}\Vert _{}}_{1} \le \delta _2}(v),\nonumber \\ G(\psi )&= {\Vert {|\psi |}\Vert _{}}_{1} \end{aligned}$$
(27)

with operator \(K = {{\mathrm{\mathcal {E}}}}\). The dual problem, in general written as

$$\begin{aligned} \max _y -F^*(-K^*y)-G^*(y), \end{aligned}$$

here has the conjugate functionals

$$\begin{aligned} F^*(t)&= \delta _2{\Vert {|t|}\Vert _{}}_{\infty } + \langle t,{{\mathrm{\nabla }}}u_0\rangle ,\nonumber \\ G^*(q)&= {{\mathrm{\mathcal {I}}}}_{{\Vert {|\,\cdot \,|}\Vert _{}}_{\infty }\le 1}(q) \end{aligned}$$
(28)

with operator \(K^* = {{\mathrm{\mathcal {E}}}}^*\). The primal-dual gap reads

$$\begin{aligned} {{\mathrm{gap}}}_{{{\mathrm{DGTV}}}}^{(1)}(v,q)= & {} {{\mathrm{\mathcal {I}}}}_{{\Vert {|{{\mathrm{\nabla }}}u_0 - \,\cdot \,|}\Vert _{}}_{1}\le \delta _2}(v) + {\Vert {|{{\mathrm{\mathcal {E}}}}v|}\Vert _{}}_{1}\nonumber \\&+ \delta _2{\Vert {|{{\mathrm{\mathcal {E}}}}^* q|}\Vert _{}}_{\infty } - \langle {{\mathrm{\mathcal {E}}}}^* q,{{\mathrm{\nabla }}}u_0\rangle \nonumber \\&+ {{\mathrm{\mathcal {I}}}}_{{\Vert {|\,\cdot \,|}\Vert _{}}_{\infty }\le 1}(q). \end{aligned}$$
(29)

The proximal operators are

$$\begin{aligned} {{\mathrm{prox}}}_{\tau F}(v)&= {{\mathrm{proj}}}_{{\Vert {|{{\mathrm{\nabla }}}u_0 - \,\cdot \,|}\Vert _{}}_{1} \le \delta _2}(v),\nonumber \\ {{\mathrm{prox}}}_{\sigma G^*}(q)&= {{\mathrm{proj}}}_{{\Vert {|\,\cdot \,|}\Vert _{}}_{\infty }\le 1}(q). \end{aligned}$$
(30)

For the second problem within DGTV, we consider the denoising of the image with respect to the denoised gradient \(\widehat{v}\), the output of the first problem, cf. problem (5). Here, the primal functionals with \(K = {{\mathrm{\nabla }}}\) are

$$\begin{aligned} F(u)&= {{\mathrm{\mathcal {I}}}}_{{\Vert {u_0 - \,\cdot \,}\Vert _{}}_{2}\le \delta _1}(u) \nonumber \\ G(\phi )&= {\Vert {|\phi - \widehat{v}|}\Vert _{}}_{1} \end{aligned}$$
(31)

and the corresponding dual functionals are

$$\begin{aligned} F^*(s)&= \delta _1{\Vert {s}\Vert _{}}_{2} + \langle u_0,s\rangle \nonumber \\ G^*(p)&= {{\mathrm{\mathcal {I}}}}_{{\Vert {|\,\cdot \,|}\Vert _{}}_{\infty }\le 1}(p) + \langle \widehat{v},p\rangle . \end{aligned}$$
(32)

Hence, the primal-dual gap for this problem is

$$\begin{aligned} {{\mathrm{gap}}}_{{{\mathrm{DGTV}}}}^{(2)}(u,p)&= {{\mathrm{\mathcal {I}}}}_{{\Vert {u_0-\,\cdot \,}\Vert _{}}_{2}\le \delta _1}(u)\nonumber \\&\qquad +\, {\Vert {|{{\mathrm{\nabla }}}u - \widehat{v}|}\Vert _{}}_{1}\nonumber \\&\qquad +\, \delta _1{\Vert {{{\mathrm{\nabla }}}^* p}\Vert _{}}_{2}\nonumber \\&\qquad -\, \langle u_0,{{\mathrm{\nabla }}}^* p\rangle \nonumber \\&\qquad +\, {{\mathrm{\mathcal {I}}}}_{{\Vert {|\,\cdot \,|}\Vert _{}}_{\infty }\le 1}(p)\nonumber \\&\qquad +\, \langle \widehat{v},p\rangle . \end{aligned}$$
(33)

The proximal operators are given by

$$\begin{aligned} {{\mathrm{prox}}}_{\tau F}(u)&= {{\mathrm{proj}}}_{{\Vert {u_0 - \,\cdot \,}\Vert _{}}_{2}\le \delta _1}(u) \nonumber \\ {{\mathrm{prox}}}_{\sigma G^*}(p)&= {{\mathrm{proj}}}_{{\Vert {|\,\cdot \,|}\Vert _{}}_{\infty }\le 1}( p - \sigma \widehat{v}). \end{aligned}$$
(34)

1.2.2 DGTGV

We reformulate problem (6) using the substitution \(w = {{\mathrm{\nabla }}}u_0 - v\), also considered in Sect. 3, where we calculated another duality gap. With operator \(K = {{\mathrm{\mathcal {E}}}}\), the primal functionals for the gradient denoising problem are

$$\begin{aligned} F(w)&= {\Vert {|w|}\Vert _{}}_{1}\nonumber \\ G(\psi )&= \alpha {\Vert {|{{\mathrm{\mathcal {E}}}}{{\mathrm{\nabla }}}u_0 - \psi |}\Vert _{}}_{1}. \end{aligned}$$
(35)

Therefore, the dual functionals are

$$\begin{aligned} F^*(t)&= {{\mathrm{\mathcal {I}}}}_{{\Vert {|\,\cdot \,|}\Vert _{}}_{\infty }\le 1}(t)\nonumber \\ G^*(q)&= {{\mathrm{\mathcal {I}}}}_{{\Vert {|\,\cdot \,|}\Vert _{}}_{\infty }\le \alpha }(q)\nonumber \\&\quad +\, \langle {{\mathrm{\mathcal {E}}}}{{\mathrm{\nabla }}}u_0,q\rangle . \end{aligned}$$
(36)

With this, the primal-dual gap is

$$\begin{aligned} {{\mathrm{gap}}}^{(1)}_{{{\mathrm{DGTGV}}}}(w,q)&= {\Vert {|w|}\Vert _{}}_{1}\nonumber \\&\quad +\, \alpha {\Vert {|{{\mathrm{\mathcal {E}}}}{{\mathrm{\nabla }}}u_0 - {{\mathrm{\mathcal {E}}}}w|}\Vert _{}}_{1} \nonumber \\&\quad +\, {{\mathrm{\mathcal {I}}}}_{{\Vert {|\,\cdot \,|}\Vert _{}}_{\infty }\le 1}({{\mathrm{\mathcal {E}}}}^*q)\nonumber \\&\quad +\, {{\mathrm{\mathcal {I}}}}_{{\Vert {|\,\cdot \,|}\Vert _{}}_{\infty }\le \alpha }(q)\nonumber \\&\quad +\, \langle {{\mathrm{\mathcal {E}}}}{{\mathrm{\nabla }}}u_0,q\rangle . \end{aligned}$$
(37)

The proximal operators are, with Moreau’s identity for the first one,

$$\begin{aligned}&{{\mathrm{prox}}}_{\tau F}(w)= w - \tau {{\mathrm{prox}}}_{\tau ^{-1} F^{*}}(\tau ^{-1}w)\nonumber \\&\qquad \qquad \qquad = w - \tau {{\mathrm{proj}}}_{{\Vert {|\,\cdot \,|}\Vert _{}}_{\infty }\le 1} (\tau ^{-1}w) \nonumber \\&{{\mathrm{prox}}}_{\sigma G^*}(q) = {{\mathrm{proj}}}_{{\Vert {|\,\cdot \,|}\Vert _{}}_{\infty }\le \alpha } (q - \sigma {{\mathrm{\mathcal {E}}}}{{\mathrm{\nabla }}}u_0). \end{aligned}$$
(38)
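The two prox formulas in (38) can be illustrated in NumPy: Moreau's identity reduces the prox of the mixed 1-norm to a pointwise projection (a shrinkage), and the linear term in \(G^*\) turns into a shift before projecting. A sketch under our own naming:

```python
import numpy as np

def proj_unit_ball(v):
    """Pointwise projection onto |v_{ij}| <= 1, with the Euclidean norm
    taken over the last axis."""
    n = np.maximum(np.linalg.norm(v, axis=-1, keepdims=True), 1.0)
    return v / n

def prox_mixed_l1(tau, w):
    """prox of tau * |||.|||_1 via Moreau's identity,
    prox_{tau F}(w) = w - tau * proj(w / tau): pointwise shrinkage of
    the vectors w_{ij}."""
    return w - tau * proj_unit_ball(w / tau)

def prox_linear_shift(sigma, q, c, alpha):
    """prox of G*(q) = indicator(|||q|||_inf <= alpha) + <c, q>: the
    linear term becomes a shift by sigma*c before the projection (in
    (38), c plays the role of E grad u0)."""
    shifted = q - sigma * c
    n = np.maximum(np.linalg.norm(shifted, axis=-1, keepdims=True) / alpha, 1.0)
    return shifted / n
```

For a pixel vector of norm 3 and \(\tau =1\), the shrinkage reduces the norm by exactly 1, as expected of soft thresholding.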

For the second problem, we already derived all functionals, gaps and proximal operators in Sect. 1.2.1, equations (31)–(34).

1.2.3 CTGV

The Morozov-type constrained total generalized variation denoising problem was formulated in Sect. 2.3 (cf. (12)). For this formulation, the primal functionals are

$$\begin{aligned} F(u,w)&= {{\mathrm{\mathcal {I}}}}_{{\Vert {u - u_0}\Vert _{}}_{2}\le \delta _1}(u) \nonumber \\&\quad + {{\mathrm{\mathcal {I}}}}_{{\Vert {|\,\cdot \,|}\Vert _{}}_{1}\le \delta _2}(w),\nonumber \\ G(\phi )&= {\Vert {|\phi |}\Vert _{}}_{1} \end{aligned}$$
(39)

with block operator

$$\begin{aligned} K = {{\mathrm{\mathcal {E}}}}\begin{pmatrix} {{\mathrm{\nabla }}}&-{{\mathrm{Id}}} \end{pmatrix}. \end{aligned}$$
(40)

The corresponding dual functionals are

$$\begin{aligned} F^*(s,t)&= \delta _1{\Vert {s}\Vert _{}}_{2} + \langle s,u_0\rangle \nonumber \\&\quad + \delta _2{\Vert {|t|}\Vert _{}}_{\infty },\nonumber \\ G^*(q)&= {{\mathrm{\mathcal {I}}}}_{{\Vert {|\,\cdot \,|}\Vert _{}}_{\infty }\le 1}(q) \end{aligned}$$
(41)

with dual block operator

$$\begin{aligned} K^* = \begin{pmatrix} {{\mathrm{\nabla }}}^* \\ -{{\mathrm{Id}}} \end{pmatrix}{{\mathrm{\mathcal {E}}}}^*. \end{aligned}$$
(42)

The proximal operators are accordingly given by

$$\begin{aligned}&{{\mathrm{prox}}}_{\tau F}(u,w) = \begin{pmatrix} {{\mathrm{proj}}}_{{\Vert {\,\cdot \,- u_0}\Vert _{}}_{2}\le \delta _1}(u)\nonumber \\ {{\mathrm{proj}}}_{{\Vert {|\,\cdot \,|}\Vert _{}}_{1}\le \delta _2}(w) \end{pmatrix},\nonumber \\&{{\mathrm{prox}}}_{\sigma G^*}(q) = {{\mathrm{proj}}}_{{\Vert {|\,\cdot \,|}\Vert _{}}_{\infty }\le 1}(q) \end{aligned}$$
(43)

and the primal-dual gap reads

$$\begin{aligned} {{\mathrm{gap}}}&_{{\text {CTGV}}}(u,w,q)\nonumber \\&\quad ={\Vert {|{{\mathrm{\mathcal {E}}}}({{\mathrm{\nabla }}}u - w)|}\Vert _{}}_{1}\nonumber \\&\quad +\,{{\mathrm{\mathcal {I}}}}_{{\Vert {\,\cdot \,-u_0}\Vert _{}}_{2}\le \delta _1}(u)\nonumber \\&\quad +\,{{\mathrm{\mathcal {I}}}}_{{\Vert {|\,\cdot \,|}\Vert _{}}_{1}\le \delta _2}(w) \nonumber \\&\quad +\,\delta _1{\Vert {{{\mathrm{\nabla }}}^*{{\mathrm{\mathcal {E}}}}^*q}\Vert _{}}_{2}\nonumber \\&\quad -\, \langle {{\mathrm{\nabla }}}^*{{\mathrm{\mathcal {E}}}}^*q,u_0\rangle \nonumber \\&\quad +\,\delta _2 {\Vert {|{{\mathrm{\mathcal {E}}}}^*q|}\Vert _{}}_{\infty } \nonumber \\&\quad +\, {{\mathrm{\mathcal {I}}}}_{{\Vert {|\,\cdot \,|}\Vert _{}}_{\infty }\le 1}(q). \end{aligned}$$
(44)

1.2.4 MTGV

In Sect. 2.3, we defined the MTGV optimization problem (9) as a mixed version of the TGV (10) and CTGV (12) problems. Thus, the primal functionals are

$$\begin{aligned} F(u,v)&= {{\mathrm{\mathcal {I}}}}_{{\Vert {\,\cdot \,- u_0}\Vert _{}}_{2}\le \delta _1}(u),\nonumber \\ G(\phi , \psi )&= {\Vert {|\phi |}\Vert _{}}_{1} \nonumber \\&\quad + \alpha {\Vert {|\psi |}\Vert _{}}_{1} \end{aligned}$$
(45)

with the block operator

$$\begin{aligned} K = \begin{pmatrix} {{\mathrm{\nabla }}}&{} -{{\mathrm{Id}}}\\ 0 &{}{{\mathrm{\mathcal {E}}}} \end{pmatrix}. \end{aligned}$$
(46)

Accordingly, the dual functionals are

$$\begin{aligned} F^*(s,t)&= \delta _1{\Vert {s}\Vert _{}}_{2} \nonumber \\&\quad + \langle s,u_0\rangle + {{\mathrm{\mathcal {I}}}}_{\{0\}}(t),\nonumber \\ G^*(p,q)&= {{\mathrm{\mathcal {I}}}}_{{\Vert {|\,\cdot \,|}\Vert _{}}_{\infty }\le 1}(p)\nonumber \\&\quad + {{\mathrm{\mathcal {I}}}}_{{\Vert {|\,\cdot \,|}\Vert _{}}_{\infty }\le \alpha }(q) \end{aligned}$$
(47)

and the dual operator

$$\begin{aligned} K^* = \begin{pmatrix} {{\mathrm{\nabla }}}^* &{} 0\\ -{{\mathrm{Id}}} &{}{{\mathrm{\mathcal {E}}}}^* \end{pmatrix}. \end{aligned}$$
(48)

The proximal operators are given by

$$\begin{aligned}&{{\mathrm{prox}}}_{\tau F}(u,v)\\&\quad =({{\mathrm{proj}}}_{{\Vert {\,\cdot \,-u_0}\Vert _{}}_{2}\le \delta _1}(u),v)\\&{{\mathrm{prox}}}_{\sigma G^*}(p,q)\\&\quad =({{\mathrm{proj}}}_{{\Vert {|\,\cdot \,|}\Vert _{}}_{\infty }\le 1}(p),{{\mathrm{proj}}}_{{\Vert {|\,\cdot \,|}\Vert _{}}_{\infty }\le \alpha }(q)), \end{aligned}$$

and the gap function is given by

$$\begin{aligned} {{\mathrm{gap}}}&_{{\text {MTGV}}}(u,v,p,q)\\&={\Vert {|{{\mathrm{\nabla }}}u - v|}\Vert _{}}_{1} + \alpha {\Vert {|{{\mathrm{\mathcal {E}}}}(v)|}\Vert _{}}_{1} +\,{{\mathrm{\mathcal {I}}}}_{\{{\Vert {\,\cdot \,-u_0}\Vert _{}}_{2}\le \delta _1\}}(u)\\&\quad +\,\delta _1{\Vert {{{\mathrm{\nabla }}}^* p}\Vert _{}}_{2} - \langle p, {{\mathrm{\nabla }}}u_0\rangle +\,{{\mathrm{\mathcal {I}}}}_{\{0\}}(p - {{\mathrm{\mathcal {E}}}}^*q) \\&\quad +\,{{\mathrm{\mathcal {I}}}}_{\{{\Vert {|\,\cdot \,|}\Vert _{}}_{\infty }\le 1\}}(p) \\&\quad +\, {{\mathrm{\mathcal {I}}}}_{\{{\Vert {|\,\cdot \,|}\Vert _{}}_{\infty }\le \alpha \}}(q). \end{aligned}$$

To circumvent the feasibility problem, as introduced in Sect. 3, one can use the modified gap function

$$\begin{aligned} {{\mathrm{gap}}}&_{{\text {MTGV}}}(u,v,\tilde{q})\\&= {\Vert {|{{\mathrm{\nabla }}}u - v|}\Vert _{}}_{1} + \alpha {\Vert {|{{\mathrm{\mathcal {E}}}}(v)|}\Vert _{}}_{1}\\&\quad +\,{{\mathrm{\mathcal {I}}}}_{\{{\Vert {\,\cdot \,-u_0}\Vert _{}}_{2}\le \delta _1\}}(u)\\&\quad +\,\delta _1{\Vert {{{\mathrm{\nabla }}}^* ({{\mathrm{\mathcal {E}}}}^*\tilde{q})}\Vert _{}}_{2} - \langle {{\mathrm{\mathcal {E}}}}^*\tilde{q}, {{\mathrm{\nabla }}}u_0\rangle + {{\mathrm{\mathcal {I}}}}_{\{{\Vert {|\,\cdot \,|}\Vert _{}}_{\infty }\le \alpha \}}(\tilde{q}), \end{aligned}$$

where \(\tilde{q}:=\frac{q}{\max \left( 1,{\Vert {|{{\mathrm{\mathcal {E}}}}^*q|}\Vert _{}}_{\infty }\right) }\), so that \({\Vert {|{{\mathrm{\mathcal {E}}}}^*\tilde{q}|}\Vert _{}}_{\infty }\le 1\).

1.2.5 TGV

For TGV (10), we have the primal functionals

$$\begin{aligned}&F(u,v)= \frac{1}{2}{\Vert {u-u_0}\Vert _{}}_{2}^2\\&G(\phi ,\psi )=\alpha _1{\Vert {|\phi |}\Vert _{}}_{1} \\&\qquad \qquad \qquad +\alpha _0{\Vert {|\psi |}\Vert _{}}_{1}, \end{aligned}$$

with the same block operator (46). The corresponding dual functionals are

$$\begin{aligned} F^*(s,t)&= \frac{1}{2}{\Vert {s}\Vert _{}}_{2}^2 \\&\quad + \langle s,u_0\rangle + {{\mathrm{\mathcal {I}}}}_{\{0\}}(t),\\ G^*(p,q)&={{\mathrm{\mathcal {I}}}}_{\{{\Vert {|\,\cdot \,|}\Vert _{}}_{\infty }\le \alpha _1\}}(p)\\&\quad + {{\mathrm{\mathcal {I}}}}_{\{{\Vert {|\,\cdot \,|}\Vert _{}}_{\infty }\le \alpha _0\}}(q), \end{aligned}$$

and the dual operator is (48). The proximal operators are given by

$$\begin{aligned}&{{\mathrm{prox}}}_{\tau F}(u,v)=\left( \frac{u+\tau u_0}{1+\tau } ,v\right) \\&{{\mathrm{prox}}}_{\sigma G^*}(p,q)=({{\mathrm{proj}}}_{{\Vert {|\,\cdot \,|}\Vert _{}}_{\infty }\le \alpha _1}(p),{{\mathrm{proj}}}_{{\Vert {|\,\cdot \,|}\Vert _{}}_{\infty }\le \alpha _0}(q)), \end{aligned}$$

and the gap function is given by

$$\begin{aligned} {{\mathrm{gap}}}_{{{\mathrm{TGV}}}}(u,v,p,q)&= \alpha _1{\Vert {|{{\mathrm{\nabla }}}u - v|}\Vert _{}}_{1} + \alpha _0{\Vert {|{{\mathrm{\mathcal {E}}}}(v)|}\Vert _{}}_{1}\\&\quad +\,\frac{1}{2}{\Vert {u-u_0}\Vert _{}}_{2}^2 +\frac{1}{2}{\Vert {{{\mathrm{\nabla }}}^* p}\Vert _{}}_{2}^2 - \langle p, {{\mathrm{\nabla }}}u_0\rangle \\&\quad +{{\mathrm{\mathcal {I}}}}_{\{0\}}(p - {{\mathrm{\mathcal {E}}}}^*q) +\,{{\mathrm{\mathcal {I}}}}_{\{{\Vert {|\cdot |}\Vert _{}}_{\infty }\le \alpha _1\}}(p) + {{\mathrm{\mathcal {I}}}}_{\{{\Vert {|\cdot |}\Vert _{}}_{\infty }\le \alpha _0\}}(q). \end{aligned}$$

To circumvent the feasibility problem, as introduced in Sect. 3, one can use the modified gap function

$$\begin{aligned} {{\mathrm{gap}}}&_{{{\mathrm{TGV}}}}(u,v,\tilde{q})=\alpha _1{\Vert {|{{\mathrm{\nabla }}}u - v|}\Vert _{}}_{1} + \alpha _0{\Vert {|{{\mathrm{\mathcal {E}}}}(v)|}\Vert _{}}_{1}\\&\quad +\,\frac{1}{2}{\Vert {u-u_0}\Vert _{}}_{2}^2 +\frac{1}{2}{\Vert {{{\mathrm{\nabla }}}^* ({{\mathrm{\mathcal {E}}}}^*\tilde{q})}\Vert _{}}_{2}^2 - \langle {{\mathrm{\mathcal {E}}}}^*\tilde{q}, {{\mathrm{\nabla }}}u_0\rangle \\&\quad +\, {{\mathrm{\mathcal {I}}}}_{\{{\Vert {|\cdot |}\Vert _{}}_{\infty }\le \alpha _0\}}(\tilde{q}), \end{aligned}$$

where \(\tilde{q}:=\frac{q}{\max \left( 1,\frac{{\Vert {|{{\mathrm{\mathcal {E}}}}^*q|}\Vert _{}}_{\infty }}{\alpha _1}\right) }\), so that \({\Vert {|{{\mathrm{\mathcal {E}}}}^*\tilde{q}|}\Vert _{}}_{\infty }\le \alpha _1\).
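As a quick sanity check, the closed-form prox of the quadratic data term used above, \({{\mathrm{prox}}}_{\tau F}(u)=(u+\tau u_0)/(1+\tau )\), can be verified against the optimality condition of its defining minimization problem (an illustrative NumPy check with our own naming):

```python
import numpy as np

rng = np.random.default_rng(1)
u0 = rng.standard_normal(4)   # data
u = rng.standard_normal(4)    # current iterate
t = 0.7                       # step size tau

# Closed-form prox of F(u) = 0.5 * ||u - u0||_2^2.
x = (u + t * u0) / (1.0 + t)

def prox_objective(z):
    """The objective defining prox_{t F}: 0.5||z - u0||^2 + ||z - u||^2/(2t)."""
    return 0.5 * np.sum((z - u0) ** 2) + np.sum((z - u) ** 2) / (2.0 * t)

# The optimality condition (z - u0) + (z - u)/t = 0 should hold at x.
residual = (x - u0) + (x - u) / t
```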

1.3 Projections

The projections used for the algorithms are

$$\begin{aligned}&{{\mathrm{proj}}}_{{\Vert {\cdot -u_0}\Vert _{}}_{2}\le \delta }(u) \\&\quad =u_0 +\, \frac{u-u_0}{\max \left( 1,\frac{{\Vert {u-u_0}\Vert _{}}_{2}}{\delta }\right) },\\&{{\mathrm{proj}}}_{{\Vert {|\cdot |}\Vert _{}}_{\infty }\le \delta }(v) =\frac{v}{\max (1,\frac{|v|}{\delta })}, \end{aligned}$$

with \(|v|=\left( \sum _{k=1}^dv_k^2\right) ^{\frac{1}{2}}\) in the pointwise sense.
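These two projections have a direct NumPy transcription (our own naming; an illustrative sketch):

```python
import numpy as np

def proj_l2_ball(u, u0, delta):
    """Projection onto the ball ||u - u0||_2 <= delta: rescale the
    offset from the center when it is too long."""
    d = u - u0
    return u0 + d / max(1.0, np.linalg.norm(d) / delta)

def proj_pointwise_inf(v, delta):
    """Pointwise projection onto |v_{ij}| <= delta, with the Euclidean
    norm over the last axis, realizing the |||.|||_inf-ball projection."""
    n = np.maximum(np.linalg.norm(v, axis=-1, keepdims=True) / delta, 1.0)
    return v / n
```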

The idea of how to project onto a mixed-norm ball, i.e., a \({\Vert {|\,\cdot \,|}\Vert _{}}_{1}\)-ball, can be found in [24]. There, the author develops an algorithm that first projects onto an \(l_1\)-norm ball and then onto a sum of \(l_2\)-norm balls. The projection itself cannot be stated in a simple closed form.
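The \(l_1\)-ball ingredient can be sketched with the standard sort-and-threshold construction; this is a generic routine for vectors, not the mixed-norm algorithm of [24]:

```python
import numpy as np

def proj_l1_ball(v, radius=1.0):
    """Euclidean projection of a vector onto the l1 ball of the given
    radius: soft thresholding with the threshold chosen (via sorting)
    so that the result has l1 norm exactly equal to the radius."""
    a = np.abs(v)
    if a.sum() <= radius:
        return v.copy()          # already feasible
    s = np.sort(a)[::-1]         # magnitudes in decreasing order
    cssv = np.cumsum(s) - radius
    idx = np.arange(1, len(s) + 1)
    rho = idx[s > cssv / idx][-1]
    theta = cssv[rho - 1] / rho  # the soft-thresholding level
    return np.sign(v) * np.maximum(a - theta, 0.0)
```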


Cite this article

Komander, B., Lorenz, D.A. & Vestweber, L. Denoising of Image Gradients and Total Generalized Variation Denoising. J Math Imaging Vis 61, 21–39 (2019). https://doi.org/10.1007/s10851-018-0819-8
