On solving a rank regularized minimization problem via equivalent factorized column-sparse regularized models

  • Full Length Paper
  • Series A
  • Published: 2024
Mathematical Programming

Abstract

The rank regularized minimization problem is an ideal model for low-rank matrix completion/recovery. The matrix factorization approach transforms the high-dimensional rank regularized problem into a low-dimensional factorized column-sparse regularized problem. The latter greatly facilitates fast computation in applicable algorithms, but must overcome the simultaneous non-convexity of the loss and regularization functions. In this paper, we consider the factorized column-sparse regularized model. First, we augment this model with bound constraints and establish a certain equivalence between the resulting factorization problem and the rank regularized problem. Further, we strengthen the optimality condition for stationary points of the factorization problem and define the notion of a strong stationary point. Moreover, we establish the equivalence between the factorization problem and its nonconvex relaxation in the sense of global minimizers and strong stationary points. To solve the factorization problem, we design two types of algorithms and give an adaptive method to reduce their computational cost. The first algorithm works from the relaxation point of view, and after finitely many iterations its iterates share some properties of global minimizers of the factorization problem. We analyze the convergence of its iterates to a strong stationary point. The second algorithm solves the factorization problem directly: we improve the PALM algorithm introduced by Bolte et al. (Math Program Ser A 146:459–494, 2014) for the factorization problem and give improved convergence results. Finally, we conduct numerical experiments to show the promising performance of the proposed model and algorithms for low-rank matrix completion.
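
To make the two formulations in the abstract concrete: a representative instance (not spelled out in this excerpt, but consistent with the column \(\ell _{2,0}\)-norm factorization model of [20]) is the rank regularized problem \(\min _X \frac{1}{2}\Vert P_\Omega (X-M)\Vert _F^2+\lambda \,\textrm{rank}(X)\) together with its factorized counterpart \(\min _{U,V} \frac{1}{2}\Vert P_\Omega (UV^\top -M)\Vert _F^2+\lambda (\Vert U\Vert _{2,0}+\Vert V\Vert _{2,0})\), where \(\Vert \cdot \Vert _{2,0}\) counts the nonzero columns of its argument. The sketch below applies the basic PALM scheme of Bolte et al. [28] to this factorized model: each block update is a proximal gradient step whose proximal operator reduces to columnwise hard thresholding. It is a minimal illustration under these assumed formulations, not the paper's improved algorithm (which adds bound constraints, a relaxation-based variant, strong-stationarity guarantees, and adaptive column reduction); all function and parameter names are hypothetical.

```python
import numpy as np

def palm_column_sparse(M, mask, r, lam=0.1, max_iter=500, tol=1e-8, seed=0):
    """Basic PALM iterations for the (assumed) factorized model
        min_{U,V} 0.5*||mask*(U V^T - M)||_F^2
                  + lam*(#nonzero cols of U + #nonzero cols of V)."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = rng.standard_normal((m, r)) / np.sqrt(r)
    V = rng.standard_normal((n, r)) / np.sqrt(r)

    def col_hard_threshold(X, t):
        # Proximal map of t*||.||_{2,0}: keep column x_j iff 0.5*||x_j||^2 > t.
        keep = 0.5 * np.sum(X * X, axis=0) > t
        return X * keep

    def objective(U, V):
        fit = 0.5 * np.linalg.norm(mask * (U @ V.T - M)) ** 2
        nnz = np.count_nonzero(np.abs(U).sum(axis=0)) \
            + np.count_nonzero(np.abs(V).sum(axis=0))
        return fit + lam * nnz

    prev = objective(U, V)
    for _ in range(max_iter):
        # U-step: proximal gradient step with step size 1/cU, where cU exceeds
        # the Lipschitz constant of grad_U (bounded by the squared spectral
        # norm of V).
        cU = 1.1 * max(np.linalg.norm(V, 2) ** 2, 1e-12)
        G = (mask * (U @ V.T - M)) @ V          # gradient of the smooth part in U
        U = col_hard_threshold(U - G / cU, lam / cU)

        # V-step: symmetric update with the roles of U and V swapped.
        cV = 1.1 * max(np.linalg.norm(U, 2) ** 2, 1e-12)
        G = (mask * (U @ V.T - M)).T @ U        # gradient of the smooth part in V
        V = col_hard_threshold(V - G / cV, lam / cV)

        cur = objective(U, V)
        if abs(prev - cur) <= tol * max(1.0, abs(prev)):
            break
        prev = cur
    return U, V
```

For instance, calling palm_column_sparse(M, mask, r=10) with mask the 0/1 observation pattern returns factors whose product U @ V.T approximates the completed matrix; its rank is at most the number of column indices left nonzero in both factors, which is how column sparsity in the factors stands in for a rank penalty.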

References

  1. West, J.D., Wesley-Smith, I., Bergstrom, C.T.: A recommendation system based on hierarchical clustering of an article-level citation network. IEEE Trans. Big Data 2(2), 113–123 (2016)

  2. Liu, Z., Vandenberghe, L.: Interior-point method for nuclear norm approximation with application to system identification. SIAM J. Matrix Anal. Appl. 31, 1235–1256 (2009)

  3. Liu, Q., Lai, Z., Zhou, Z., Kuang, F., Jin, Z.: A truncated nuclear norm regularization method based on weighted residual error for matrix completion. IEEE Trans. Image Process. 25(1), 316–330 (2016)

  4. Cabral, R., Torre, F.D., Costeira, J.P., Bernardino, A.: Matrix completion for weakly-supervised multi-label image classification. IEEE Trans. Pattern Anal. Mach. Intell. 37(1), 121–135 (2015)

  5. Fan, J., Chow, T.W.S.: Exactly robust kernel principal component analysis. IEEE Trans. Neural Netw. Learn. Syst. 31(3), 749–761 (2020)

  6. Huber, P.J.: Robust regression: asymptotics, conjectures and Monte Carlo. Ann. Stat. 1(5), 799–821 (1973)

  7. Koenker, R., Hallock, K.F.: Quantile regression. J. Econ. Perspect. 15(4), 143–156 (2001)

  8. Srebro, N., Shraibman, A.: Rank, trace-norm and max-norm. In: Learning theory, pp. 545–560. Springer, Berlin (2005)

  9. Gu, S., Zhang, L., Zuo, W., Feng, X.: Weighted nuclear norm minimization with application to image denoising. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2862–2869 (2014)

  10. Hu, Y., Zhang, D., Ye, J., Li, X., He, X.: Fast and accurate matrix completion via truncated nuclear norm regularization. IEEE Trans. Pattern Anal. Mach. Intell. 35(9), 2117–2130 (2013)

  11. Shang, F., Liu, Y., Cheng, J.: Scalable algorithms for tractable Schatten quasi-norm minimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, pp. 2016–2022 (2016)

  12. Cai, J.-F., Candès, E.J., Shen, Z.: A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 20(4), 1956–1982 (2010)

  13. Toh, K.-C., Yun, S.: An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems. Pac. J. Optim. 6(3), 615–640 (2010)

  14. Candès, E.J., Recht, B.: Exact matrix completion via convex optimization. Found. Comput. Math. 9(6), 717–772 (2009)

  15. Cai, T.T., Zhou, W.-X.: Matrix completion via max-norm constrained optimization. Electron. J. Stat. 10(1), 1493–1525 (2016)

  16. Fang, E.X., Liu, H., Toh, K.-C., Zhou, W.-X.: Max-norm optimization for robust matrix recovery. Math. Program. 167(1), 5–35 (2018)

  17. Salakhutdinov, R., Srebro, N.: Collaborative filtering in a non-uniform world: learning with the weighted trace norm. In: Advances in Neural Information Processing Systems, vol. 23 (2010)

  18. Yao, Q., Kwok, J.T., Wang, T., Liu, T.-Y.: Large-scale low-rank matrix learning with nonconvex regularizers. IEEE Trans. Pattern Anal. Mach. Intell. 41(11), 2628–2643 (2018)

  19. Fan, J., Ding, L., Chen, Y., Udell, M.: Factor group-sparse regularization for efficient low-rank matrix recovery. In: Advances in Neural Information Processing Systems, vol. 32. Curran Associates, Inc. (2019)

  20. Tao, T., Qian, Y., Pan, S.: Column \(\ell _{2,0}\)-norm regularized factorization model of low-rank matrix recovery and its computation. SIAM J. Optim. 32(2), 959–988 (2022)

  21. Peleg, D., Meir, R.: A bilinear formulation for vector sparsity optimization. Signal Process. 88(2), 375–389 (2008)

  22. Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 96(456), 1348–1360 (2001)

  23. Zhang, C.-H.: Nearly unbiased variable selection under minimax concave penalty. Ann. Stat. 38(2), 894–942 (2010)

  24. Bian, W., Chen, X.: A smoothing proximal gradient algorithm for nonsmooth convex regression with cardinality penalty. SIAM J. Numer. Anal. 58(1), 858–883 (2020)

  25. Le Thi, H.A., Pham Dinh, T., Le, H.M., Vo, X.T.: DC approximation approaches for sparse optimization. Eur. J. Oper. Res. 244(1), 26–46 (2015)

  26. Li, W., Bian, W., Toh, K.-C.: Difference-of-convex algorithms for a class of sparse group \(\ell _0\) regularized optimization problems. SIAM J. Optim. 32(3), 1614–1641 (2022)

  27. Soubies, E., Blanc-Féraud, L., Aubert, G.: A unified view of exact continuous penalties for \(\ell _2\)-\(\ell _0\) minimization. SIAM J. Optim. 27(3), 2034–2060 (2017)

  28. Bolte, J., Sabach, S., Teboulle, M.: Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Math. Program. 146(1–2), 459–494 (2014)

  29. Pock, T., Sabach, S.: Inertial proximal alternating linearized minimization (iPALM) for nonconvex and nonsmooth problems. SIAM J. Imaging Sci. 9, 1756–1787 (2016)

  30. Xu, Y., Yin, W.: A globally convergent algorithm for nonconvex optimization based on block coordinate update. J. Sci. Comput. 72, 700–734 (2017)

  31. Phan, D.N., Le Thi, H.A.: DCA based algorithm with extrapolation for nonconvex nonsmooth optimization (2021). arXiv:2106.04743v1

  32. Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer, Berlin (1998)

  33. Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation I: Basic Theory. Springer, Berlin (2006)

  34. Attouch, H., Bolte, J., Redont, P., Soubeyran, A.: Proximal alternating minimization and projection methods for nonconvex problems: an approach based on the Kurdyka-Łojasiewicz inequality. Math. Oper. Res. 35(2), 438–457 (2010)

  35. Attouch, H., Bolte, J., Svaiter, B.F.: Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss–Seidel methods. Math. Program. 137(1–2), 91–129 (2013)

  36. Ruszczyński, A.: Nonlinear Optimization. Princeton University Press, Princeton (2006)

  37. Yang, L., Pong, T.K., Chen, X.: A non-monotone alternating updating method for a class of matrix factorization problems. SIAM J. Optim. 28(4), 3402–3430 (2018)

  38. Penot, J.-P.: Calculus without Derivatives. Springer, New York (2013)

  39. Pang, J.-S., Razaviyayn, M., Alvarado, A.: Computing B-stationary points of nonsmooth DC programs. Math. Oper. Res. 42(1), 95–118 (2017)

  40. Attouch, H., Bolte, J.: On the convergence of the proximal algorithm for nonsmooth functions involving analytic features. Math. Program. 116(1–2), 5–16 (2009)

  41. Yu, Y.-L.: On decomposing the proximal map. In: Burges, C.J., Bottou, L., Welling, M., Ghahramani, Z., Weinberger, K.Q. (eds.) Advances in Neural Information Processing Systems, vol. 26, pp. 91–99. Curran Associates, Inc. (2013)

  42. Hastie, T., Mazumder, R., Lee, J.D., Zadeh, R.: Matrix completion and low-rank SVD via fast alternating least squares. J. Mach. Learn. Res. 16(1), 3367–3402 (2015)

Acknowledgements

The authors would like to thank Shaohua Pan and Ethan X. Fang for providing some of the codes from [16] and [20] for the numerical comparison in Sect. 6. The authors would also like to thank the Associate Editor and the referees for their helpful comments and suggestions, which improved this paper.

Funding

Part of this work was done while Wenjing Li was with the Department of Mathematics, National University of Singapore. The research of Wenjing Li is supported by the National Natural Science Foundation of China Grant (No. 12301397) and the China Postdoctoral Science Foundation Grant (No. 2023M730876). The research of Wei Bian is supported by the National Natural Science Foundation of China Grants (No. 12271127, 62176073) and the Fundamental Research Funds for the Central Universities of China (HIT.OCEF.2024050, 2022FRFK060017). The research of Kim-Chuan Toh is supported by the Ministry of Education, Singapore, under its Academic Research Fund Tier 3 grant call (MOE-2019-T3-1-010).

Author information

Corresponding author

Correspondence to Wei Bian.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Li, W., Bian, W. & Toh, KC. On solving a rank regularized minimization problem via equivalent factorized column-sparse regularized models. Math. Program. (2024). https://doi.org/10.1007/s10107-024-02103-1

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1007/s10107-024-02103-1

Keywords

Mathematics Subject Classification