
A constrained optimization reformulation and a feasible descent direction method for \(L_{1/2}\) regularization

Published in Computational Optimization and Applications

Abstract

In this paper, we first propose a constrained optimization reformulation of the \(L_{1/2}\) regularization problem. The constrained problem minimizes a smooth function subject to quadratic constraints and nonnegativity constraints. A useful property of the constrained problem is that, at any feasible point, the set of all feasible directions coincides with the set of all linearized feasible directions. Consequently, a KKT point always exists. Moreover, we show that the KKT points coincide with the stationary points of the \(L_{1/2}\) regularization problem. Based on the constrained optimization reformulation, we propose a feasible descent direction method, called the feasible steepest descent method, for solving the unconstrained \(L_{1/2}\) regularization problem. It is an extension of the steepest descent method for smooth unconstrained optimization problems. The feasible steepest descent direction has an explicit expression and the method is easy to implement. Under very mild conditions, we show that the proposed method is globally convergent. We apply the proposed method to practical problems arising from compressed sensing. The results demonstrate its efficiency.
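To fix notation for readers coming from compressed sensing, the following minimal sketch (in Python with NumPy) evaluates the unconstrained \(L_{1/2}\) regularized least-squares objective \(\frac{1}{2}\|Ax-b\|_2^2 + \lambda \sum_i |x_i|^{1/2}\) commonly studied in this literature. The matrix A, vector b, weight lam, and the random test data are illustrative placeholders, not data from the paper, and the sketch does not implement the constrained reformulation or the feasible steepest descent method proposed here.

    import numpy as np

    def l_half_objective(x, A, b, lam):
        # Unconstrained L_{1/2} regularized least squares:
        #   0.5 * ||A x - b||_2^2 + lam * sum_i |x_i|^{1/2}
        residual = A @ x - b
        data_fit = 0.5 * residual @ residual
        penalty = lam * np.sum(np.sqrt(np.abs(x)))
        return data_fit + penalty

    # Illustrative usage on a random underdetermined system (placeholder data).
    rng = np.random.default_rng(0)
    m, n = 64, 256
    A = rng.standard_normal((m, n))
    x_true = np.zeros(n)
    x_true[rng.choice(n, size=8, replace=False)] = rng.standard_normal(8)  # sparse signal
    b = A @ x_true
    print(l_half_objective(x_true, A, b, lam=0.1))

The nonsmooth, nonconvex term \(\sum_i |x_i|^{1/2}\) is what motivates the smooth constrained reformulation and the feasible descent approach developed in the paper.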



Acknowledgments

The authors would like to thank the two anonymous referees for their valuable suggestions and comments. This work was supported by NSF of China Grants 11371154, 11071087, 11201197, and 11126147.

Author information

Corresponding author

Correspondence to Lei Wu.

About this article


Cite this article

Li, DH., Wu, L., Sun, Z. et al. A constrained optimization reformulation and a feasible descent direction method for \(L_{1/2}\) regularization. Comput Optim Appl 59, 263–284 (2014). https://doi.org/10.1007/s10589-014-9683-7
