Abstract
We revisit total variation denoising and study an augmented model in which an estimate of the image gradient is assumed to be available. We show that this increases the reconstruction quality and that the resulting model resembles the total generalized variation denoising method, thus providing a new motivation for that model. Further, we propose a constrained denoising model and develop a variational denoising model that is essentially parameter free, i.e., all model parameters are estimated directly from the noisy image. Moreover, we apply Chambolle–Pock’s primal-dual method as well as the Douglas–Rachford method to the new models. For the latter, one has to solve large discretizations of partial differential equations. We propose to do this inexactly using the preconditioned conjugate gradient method and derive suitable preconditioners. Numerical experiments show that the resulting method has good denoising properties and that preconditioning increases convergence speed significantly. Finally, we analyze the duality gap of different formulations of the TGV denoising problem and derive a simple stopping criterion.
Notes
We used the MATLAB function ichol with default parameters in the experiments. Varying the parameters did not lead to significantly different results.
References
Acosta, G., Durán, R.G.: An optimal Poincaré inequality in \(L^1\) for convex domains. Proc. Am. Math. Soc. 132(1), 195–202 (2004)
Alt, H.W.: Linear Functional Analysis: An Application-Oriented Introduction. Universitext. Springer, London (2016). Translated from the German by R. Nürnberg
Bourgain, J., Brezis, H.: On the equation \({\text{div}}\, Y = f\) and application to control of phases. J. Am. Math. Soc. 16(2), 393–426 (2003)
Bredies, K., Kunisch, K., Pock, T.: Total generalized variation. SIAM J. Imaging Sci. 3(3), 492–526 (2010)
Bredies, K., Kunisch, K., Valkonen, T.: Properties of \(L^1-TGV^2\): the one-dimensional case. J. Math. Anal. Appl. 398(1), 438–454 (2013)
Bredies, K., Sun, H.P.: Preconditioned Douglas–Rachford algorithms for TV- and TGV-regularized variational imaging problems. J. Math. Imaging Vis. 52(3), 317–344 (2015)
Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40(1), 120–145 (2011)
Ciarlet, P.G.: Linear and Nonlinear Functional Analysis with Applications. Society for Industrial and Applied Mathematics, Philadelphia (2013)
Combettes, P.L.: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 53(5–6), 475–504 (2004)
Douglas, J., Jr., Rachford, H.H., Jr.: On the numerical solution of heat conduction problems in two and three space variables. Trans. Am. Math. Soc. 82, 421–439 (1956)
Eckstein, J., Bertsekas, D.P.: On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1–3), 293–318 (1992)
Ekeland, I., Temam, R.: Convex Analysis and Variational Problems. Studies in Mathematics and its Applications, vol. 1. North-Holland, Amsterdam; American Elsevier, New York (1976). Translated from the French
Knoll, F., Bredies, K., Pock, T., Stollberger, R.: Second order total generalized variation (TGV) for MRI. Magn. Reson. Med. 65(2), 480–491 (2011)
Komander, B., Lorenz, D., Fischer, M., Petz, M., Tutsch, R.: Data fusion of surface normals and point coordinates for deflectometric measurements. J. Sens. Sens. Syst. 3, 281–290 (2014)
Komander, B., Lorenz, D.A.: Denoising of image gradients and constrained total generalized variation. In: Lauze, F., Dong, Y., Dahl, A.B. (eds.) Scale Space and Variational Methods in Computer Vision: 6th International Conference, SSVM 2017, Kolding, Denmark, June 4–8, 2017, Proceedings, pp. 435–446. Springer, Cham (2017)
Lions, P.-L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16(6), 964–979 (1979)
Liu, X., Tanaka, M., Okutomi, M.: Noise level estimation using weak textured patches of a single noisy image. In: 19th IEEE International Conference on Image Processing (ICIP), pp. 665–668 (2012)
Liu, X., Tanaka, M., Okutomi, M.: Single-image noise level estimation for blind denoising. IEEE Trans. Image Process. 22(12), 5226–5237 (2013)
Lorenz, D.A., Worliczek, N.: Necessary conditions for variational regularization schemes. Inverse Prob. 29, 075016 (2013)
Lysaker, M., Osher, S., Tai, X.-C.: Noise removal using smoothed normals and surface fitting. IEEE Trans. Image Process. 13(10), 1345–1357 (2004)
Mardal, K.-A., Winther, R.: Preconditioning discretizations of systems of partial differential equations. Numer. Linear Algebra Appl. 18(1), 1–40 (2011)
O’Connor, D., Vandenberghe, L.: Primal-dual decomposition by operator splitting and applications to image deblurring. SIAM J. Imaging Sci. 7(3), 1724–1754 (2014)
Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenom. 60(1), 259–268 (1992)
Songsiri, J.: Projection onto an \(l_1\)-norm ball with application to identification of sparse autoregressive models. In: ASEAN Symposium on Automatic Control, Vietnam (2011)
Additional information
This material was based upon work partially supported by the National Science Foundation under Grant DMS-1127914 to the Statistical and Applied Mathematical Sciences Institute. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Appendix
1.1 Discretization
We equip the space of \(M\times N\) images with the inner product and induced norm
\(\langle u,v\rangle = \sum _{i=1}^{M}\sum _{j=1}^{N} u_{i,j}v_{i,j},\qquad \Vert u\Vert = \langle u,u\rangle ^{1/2}.\)
For \(u\in \mathbf {R}^{M\times N}\), we define the discrete partial forward derivatives (with constant boundary extension \(u_{M+1,j}=u_{M,j}\) and \(u_{i,N+1}=u_{i,N}\)) as
\((\partial _x u)_{i,j} = u_{i+1,j}-u_{i,j},\qquad (\partial _y u)_{i,j} = u_{i,j+1}-u_{i,j}.\)
The discrete gradient \(\nabla :\mathbf {R}^{M\times N}\rightarrow \mathbf {R}^{M\times N\times 2}\) is defined by \(\nabla u = (\partial _x u,\, \partial _y u)\).
The symmetrized gradient \(\mathcal {E}\) maps from \(\mathbf {R}^{M\times N \times 2}\) to \(\mathbf {R}^{M\times N\times 4}\) and takes the symmetric part of the Jacobian. For simplification of notation, the 4 blocks are written in one plane: for \(v = (v^1, v^2)\),
\(\mathcal {E}(v) = \bigl (\partial _x v^1,\ \tfrac{1}{2}(\partial _y v^1 + \partial _x v^2),\ \tfrac{1}{2}(\partial _y v^1 + \partial _x v^2),\ \partial _y v^2\bigr ).\)
The norm \(\Vert |\cdot |\Vert _1\) in the space \(\mathbf {R}^{M\times N\times K}\) reflects that for \(v\in \mathbf {R}^{M\times N \times K}\) we consider \(v_{i,j}\) as a vector in \(\mathbf {R}^K\) on which we use the Euclidean norm:
\({\Vert |v|\Vert }_1 = \sum _{i=1}^{M}\sum _{j=1}^{N}\Bigl (\sum _{k=1}^{K} v_{i,j,k}^2\Bigr )^{1/2}.\)
The discrete divergence is the negative adjoint of \(\nabla \), i.e., the unique linear mapping \( \text {div}: \mathbf {R}^{M\times N \times 2} \rightarrow \mathbf {R}^{M\times N}\), which satisfies
\(\langle \nabla u, v\rangle = -\langle u, \text {div}\, v\rangle \) for all \(u\in \mathbf {R}^{M\times N}\) and \(v\in \mathbf {R}^{M\times N\times 2}\).
The adjoint of the symmetrized gradient is the unique linear mapping \( {{\mathrm{\mathcal {E}}}}^*: \mathbf {R}^{M\times N \times 4} \rightarrow \mathbf {R}^{M\times N \times 2}\), which satisfies
\(\langle \mathcal {E}(v), w\rangle = \langle v, {{\mathrm{\mathcal {E}}}}^* w\rangle \) for all \(v\in \mathbf {R}^{M\times N\times 2}\) and \(w\in \mathbf {R}^{M\times N\times 4}\).
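When implementing these discrete operators, it is useful to check the adjoint relation numerically. The following sketch (plain Python, function names ours) implements the forward-difference gradient with constant boundary extension and the matching divergence, and verifies \(\langle \nabla u, v\rangle = -\langle u, \text {div}\, v\rangle \) on random data:

```python
import random

def grad(u):
    # Forward differences with constant boundary extension:
    # the difference in the last row/column is zero.
    M, N = len(u), len(u[0])
    gx = [[(u[i + 1][j] - u[i][j]) if i + 1 < M else 0.0 for j in range(N)]
          for i in range(M)]
    gy = [[(u[i][j + 1] - u[i][j]) if j + 1 < N else 0.0 for j in range(N)]
          for i in range(M)]
    return gx, gy

def div(gx, gy):
    # Negative adjoint of grad: backward differences with the matching
    # boundary convention (first entry kept, last entry negated).
    M, N = len(gx), len(gx[0])
    d = [[0.0] * N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            if i == 0:
                dx = gx[0][j]
            elif i == M - 1:
                dx = -gx[i - 1][j]
            else:
                dx = gx[i][j] - gx[i - 1][j]
            if j == 0:
                dy = gy[i][0]
            elif j == N - 1:
                dy = -gy[i][j - 1]
            else:
                dy = gy[i][j] - gy[i][j - 1]
            d[i][j] = dx + dy
    return d

def inner(a, b):
    return sum(x * y for ra, rb in zip(a, b) for x, y in zip(ra, rb))

random.seed(0)
M, N = 5, 7
u = [[random.random() for _ in range(N)] for _ in range(M)]
vx = [[random.random() for _ in range(N)] for _ in range(M)]
vy = [[random.random() for _ in range(N)] for _ in range(M)]
gx, gy = grad(u)
lhs = inner(gx, vx) + inner(gy, vy)
rhs = -inner(u, div(vx, vy))
gap = abs(lhs - rhs)  # should be at round-off level
```

The same check applies verbatim to \(\mathcal {E}\) and \({{\mathrm{\mathcal {E}}}}^*\) once they are discretized.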
1.2 Prox Operators and Duality Gaps for Considered Problems
To carry out the experiments with the methods proposed in the previous sections, we state here the primal and dual functionals for each problem in the general form \(\min _x F(x) + G(Kx)\), study the corresponding primal-dual gaps, and show how feasibility of the iterates can be ensured throughout the computation by reformulating the problems with a substitution variable. We also give the proximal operators needed for the Chambolle–Pock and Douglas–Rachford algorithms.
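For orientation, the basic Chambolle–Pock iteration for \(\min _x F(x)+G(Kx)\) [13] alternates a dual proximal step, a primal proximal step, and an extrapolation. The sketch below illustrates this on a 1D toy instance of our own choosing (quadratic data term \(F\), \(G = \alpha \Vert \cdot \Vert _1\), \(K\) a forward difference); it is not the paper's implementation.

```python
# Sketch of the Chambolle-Pock iteration for min_x F(x) + G(Kx),
# on a 1D toy instance: F(x) = 0.5*||x - b||^2, G = alpha*||.||_1,
# and K the 1D forward difference operator.

def K(x):
    # forward difference; last entry zero (constant boundary extension)
    return [x[i + 1] - x[i] for i in range(len(x) - 1)] + [0.0]

def Kt(y):
    # adjoint of K
    n = len(y)
    return [(y[i - 1] if i > 0 else 0.0) - (y[i] if i < n - 1 else 0.0)
            for i in range(n)]

b = [1.0] * 16          # data (constant, so the exact minimizer is b itself)
alpha, tau, sigma = 0.1, 0.45, 0.45   # tau*sigma*||K||^2 <= 0.81 < 1

x = [0.0] * len(b)
xbar = list(x)
y = [0.0] * len(b)
for _ in range(500):
    # dual step: prox of sigma*G^* is the projection onto [-alpha, alpha]
    y = [max(-alpha, min(alpha, yi + sigma * ki))
         for yi, ki in zip(y, K(xbar))]
    # primal step: prox of tau*F is (x + tau*b) / (1 + tau)
    x_new = [(xi - tau * gi + tau * bi) / (1.0 + tau)
             for xi, gi, bi in zip(x, Kt(y), b)]
    # extrapolation
    xbar = [2.0 * xn - xo for xn, xo in zip(x_new, x)]
    x = x_new

err = max(abs(xi - bi) for xi, bi in zip(x, b))
```

Since \(b\) is constant, the total variation term vanishes at the minimizer and the iterates converge to \(b\); for general data only the prox formulas change, exactly as listed in the subsections below.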
1.2.1 DGTV
In Sect. 2, we formulated a two-stage denoising method in two ways, first as the constrained versions (4) and (5). In this formulation, we get the primal functionals for the first problem (4)
with operator \(K = {{\mathrm{\mathcal {E}}}}\). The dual problems, in general written as
are
with operator \(K^* = {{\mathrm{\mathcal {E}}}}^*\). The primal-dual gap reads
The proximal operators are
For the second problem within DGTV, we denoise the image with respect to the denoised gradient \(\widehat{v}\) obtained as output of the previous problem, cf. problem (5). There, the primal functionals with \(K = {{\mathrm{\nabla }}}\) are
and the corresponding dual functionals read
Hence, the primal-dual gap for this problem is
The proximal operators are given by
1.2.2 DGTGV
We reformulate problem (6) by using a substitution \(w = {{\mathrm{\nabla }}}u_0 - v\), also considered in Sect. 3, where we calculated another duality gap. Hence, we get the primal functionals for the gradient denoising problem with operator \(K = {{\mathrm{\mathcal {E}}}}\) as
Therefore, the dual functionals read
With that the primal-dual gap is
The proximal operators are, with Moreau’s identity for the first one,
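Moreau's identity referred to here is the standard one; in the notation of this section it reads

```latex
% Moreau's identity: the prox of sigma*G^* expressed via the prox of G
\[
  \operatorname{prox}_{\sigma G^*}(y)
  = y - \sigma \, \operatorname{prox}_{\sigma^{-1} G}\!\bigl(\sigma^{-1} y\bigr).
\]
```

It allows the dual prox to be evaluated whenever the primal prox is available in closed form.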
For the second problem, we already derived all functionals, gaps and proximal operators in Sect. A.2.1, equations (31)–(34).
1.2.3 CTGV
The Morozov-type constrained total generalized variation denoising problem was formulated in Sect. 2.3 (cf. (12)). For this formulation, the primal functionals are
with block operator
The corresponding dual functionals are
with dual block operator
The proximal operators are accordingly given by
and the primal-dual gap reads
1.2.4 MTGV
In Sect. 2.3, we defined the MTGV optimization problem (9) as a mixed version of the TGV (10) and the CTGV (12) problems. Thus, the primal functionals are
with the block operator
Accordingly, the dual functionals are
and the dual operator
The proximal operators are given by
and the gap function is given by
To circumvent the feasibility problem, as introduced in Sect. 3, one can use the modified gap function
where \(\tilde{q}:=\frac{q}{\max \left( 1,{\Vert |{{\mathrm{\mathcal {E}}}}^*q|\Vert }_{\infty }/\alpha \right) }\).
1.2.5 TGV
For TGV (10), we have the primal functionals
with the same block operator (46). The corresponding dual functionals are
and the dual operator is (48). The proximal operators are given by
and the gap function is given by
To circumvent the feasibility problem, as introduced in Sect. 3, one can use the modified gap function
where \(\tilde{q}:=\frac{q}{\max \left( 1,{\Vert |{{\mathrm{\mathcal {E}}}}^*q|\Vert }_{\infty }/\alpha \right) }\).
1.3 Projections
The projections used for the algorithms are
with \(|v|=\left( \sum _{k=1}^dv_k^2\right) ^{\frac{1}{2}}\) in the pointwise sense.
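The pointwise projection onto a ball \(\{v : |v_{i,j}|\le \alpha \}\), which both algorithms use for the dual variables, rescales each pixel vector by \(1/\max (1, |v_{i,j}|/\alpha )\). A minimal sketch (our naming, pixels stored as a flat list of \(\mathbf {R}^K\) vectors):

```python
def project_ball(v, alpha):
    # Pointwise projection onto {v : |v_ij| <= alpha}: each vector
    # v_ij in R^K is rescaled by 1 / max(1, |v_ij| / alpha).
    out = []
    for vij in v:
        norm = sum(c * c for c in vij) ** 0.5
        scale = 1.0 / max(1.0, norm / alpha)
        out.append([scale * c for c in vij])
    return out

p = project_ball([[3.0, 4.0], [0.1, 0.0]], alpha=1.0)
# [3, 4] has norm 5 and is rescaled to norm 1; [0.1, 0] lies inside
# the ball and is left unchanged
```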
The idea of how to project onto a mixed-norm ball, i.e., onto a \({\Vert {|\,\cdot \,|}\Vert _{}}_{1}\) ball, can be found in [24]. There, the author develops an algorithm that first projects onto an \(l_1\)-norm ball and then onto a sum of \(l_2\)-norm balls. The projection itself cannot be stated in a simple closed form.
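For the \(l_1\)-ball step, a common scheme (a sketch of the standard sorting-based algorithm, not necessarily the exact implementation of [24]) sorts the absolute values, finds a threshold \(\theta \), and soft-thresholds:

```python
def project_l1_ball(v, z):
    # Euclidean projection of v onto the l1-ball of radius z,
    # via the standard sorting-based threshold computation.
    if sum(abs(x) for x in v) <= z:
        return list(v)          # already feasible
    u = sorted((abs(x) for x in v), reverse=True)
    css, rho, theta = 0.0, 0, 0.0
    for j, uj in enumerate(u, start=1):
        css += uj
        if uj > (css - z) / j:  # uj still above the candidate threshold
            rho, theta = j, (css - z) / j
    # soft-threshold each component by theta
    return [(1.0 if x >= 0 else -1.0) * max(abs(x) - theta, 0.0) for x in v]
```

The projection onto the mixed-norm ball then applies this to the vector of pointwise Euclidean norms and rescales each pixel vector accordingly.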
Komander, B., Lorenz, D.A. & Vestweber, L. Denoising of Image Gradients and Total Generalized Variation Denoising. J Math Imaging Vis 61, 21–39 (2019). https://doi.org/10.1007/s10851-018-0819-8