
Fast Primal-Dual Gradient Method for Strongly Convex Minimization Problems with Linear Constraints

  • Conference paper
  • First Online:
Discrete Optimization and Operations Research (DOOR 2016)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9869)

Abstract

In this paper, we consider a class of optimization problems with a strongly convex objective function and a feasible set given by the intersection of a simple convex set with a set defined by a number of linear equality and inequality constraints. Many applied optimization problems can be stated in this form; examples include entropy-linear programming, ridge regression, the elastic net, and regularized optimal transport. We extend the Fast Gradient Method applied to the dual problem so that it becomes primal-dual: it not only solves the dual problem but also constructs a nearly optimal and nearly feasible solution of the primal problem. We also prove a theorem on the convergence rate of the proposed algorithm in terms of the objective function residual and the linear constraints infeasibility.
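To make the construction concrete, the following is a minimal numerical sketch of the idea described above: a fast gradient method is run on the dual problem, and an approximate primal solution is recovered as a weighted average of the primal points \(x(\lambda)\) computed while evaluating the dual gradient. The specific instance (a quadratic objective \(f(x)=\tfrac12\|x-c\|^2\), the simple set \(Q=\mathbb{R}^n\), equality constraints \(Ax=b\) only), the particular accelerated scheme, and the averaging weights are illustrative assumptions of this sketch and are not claimed to reproduce the paper's exact algorithm or its analysis.

import numpy as np

# Illustrative instance (A, b, c and the sizes below are assumptions of this sketch):
#   minimize f(x) = 0.5 * ||x - c||^2   subject to   A x = b,   x in Q = R^n.
# f is 1-strongly convex, so the dual gradient is Lipschitz with constant L = ||A||_2^2.
rng = np.random.default_rng(0)
n, m = 20, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
c = rng.standard_normal(n)
L = np.linalg.norm(A, 2) ** 2

def x_of(lam):
    # x(lambda) = argmin_x { f(x) + <lambda, A x - b> }, available in closed form here.
    return c - A.T @ lam

# Accelerated (fast) gradient method on the concave dual
#   phi(lambda) = min_x { f(x) + <lambda, A x - b> },
# with the approximate primal solution recovered by averaging x(y_k) along the iterations.
K = 2000
lam = np.zeros(m)       # dual iterate produced by the gradient step
zeta = np.zeros(m)      # dual iterate aggregating weighted gradients
y = np.zeros(m)         # points where the dual gradient is queried
x_hat = np.zeros(n)     # running weighted sum of primal points x(y_k)
weight_sum = 0.0
for k in range(K):
    x_k = x_of(y)
    g = A @ x_k - b                          # gradient of phi at y
    alpha = (k + 1) / 2.0
    lam = y + g / L                          # gradient (ascent) step
    zeta = zeta + alpha * g / L              # weighted gradient aggregation
    y = (k + 1) / (k + 3) * lam + 2.0 / (k + 3) * zeta
    x_hat += alpha * x_k                     # primal averaging with the same weights
    weight_sum += alpha
x_hat /= weight_sum

# Sanity check against the exact solution of this toy problem (via its KKT system).
lam_star = np.linalg.solve(A @ A.T, A @ c - b)
x_star = c - A.T @ lam_star
print("constraint infeasibility ||A x_hat - b||:", np.linalg.norm(A @ x_hat - b))
print("distance to the true solution ||x_hat - x*||:", np.linalg.norm(x_hat - x_star))

On this instance both printed quantities decrease as the number of iterations grows, which is the primal-dual behaviour the abstract refers to: the averaged point is simultaneously nearly feasible and nearly optimal.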



Notes

  1. The absolute value here is crucial since \(x_k\) may not satisfy the linear constraints and, hence, \(f(x_k)-Opt[P_1]\) could be negative (see the small example below).
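A one-dimensional illustration (introduced here for exposition; not an example from the paper):

\[
\min_{x\in\mathbb{R}}\ \{\, f(x)=x^2 \ :\ x=1 \,\},\qquad Opt[P_1]=1,
\]

so the infeasible point \(x_k=0\) gives \(f(x_k)-Opt[P_1]=-1<0\), which is why the accuracy guarantee is stated for \(|f(x_k)-Opt[P_1]|\).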


Acknowledgements

The research by A. Gasnikov and P. Dvurechensky presented in Sect. 3 was conducted at IITP RAS and supported by the Russian Science Foundation grant (project 14-50-00150); the research by A. Gasnikov and P. Dvurechensky presented in Sect. 4 was partially supported by RFBR, research project No. 15-31-20571 mol_a_ved. The research by A. Chernov presented in Sect. 4 was partially supported by RFBR, research project No. 14-01-00722-a.

Author information

Corresponding author

Correspondence to Pavel Dvurechensky.


Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Chernov, A., Dvurechensky, P., Gasnikov, A. (2016). Fast Primal-Dual Gradient Method for Strongly Convex Minimization Problems with Linear Constraints. In: Kochetov, Y., Khachay, M., Beresnev, V., Nurminski, E., Pardalos, P. (eds.) Discrete Optimization and Operations Research. DOOR 2016. Lecture Notes in Computer Science, vol. 9869. Springer, Cham. https://doi.org/10.1007/978-3-319-44914-2_31


  • DOI: https://doi.org/10.1007/978-3-319-44914-2_31

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-44913-5

  • Online ISBN: 978-3-319-44914-2

  • eBook Packages: Computer Science, Computer Science (R0)
