
On the convergence of the DFP algorithm for unconstrained optimization when there are only two variables

  • Published in: Mathematical Programming

Abstract.

Let the DFP algorithm for unconstrained optimization be applied to an objective function that has continuous second derivatives and bounded level sets, where each line search finds the first local minimum. It is proved that the calculated gradients are not bounded away from zero if there are only two variables. The new feature of this work is that there is no need for the objective function to be convex.
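
For orientation, the DFP (Davidon-Fletcher-Powell) method maintains a positive definite approximation to the inverse Hessian of the objective function and revises it after each line search. The sketch below states the update in the usual quasi-Newton notation; the symbols x_k, g_k, s_k, y_k and H_k are standard conventions and are not quoted from the paper itself.

% DFP update of the inverse-Hessian approximation H_k (standard quasi-Newton
% notation; a sketch for orientation, not an excerpt from the paper).
\[
  d_k = -H_k g_k, \qquad
  x_{k+1} = x_k + \alpha_k d_k, \qquad
  s_k = x_{k+1} - x_k, \qquad
  y_k = g_{k+1} - g_k,
\]
\[
  H_{k+1} = H_k
  + \frac{s_k s_k^{\mathsf T}}{s_k^{\mathsf T} y_k}
  - \frac{H_k y_k y_k^{\mathsf T} H_k}{y_k^{\mathsf T} H_k y_k},
\]

where g_k is the gradient of the objective function at x_k and the step length \alpha_k is chosen by the line search; in the setting of the abstract, \alpha_k corresponds to the first local minimum of the objective function along the direction d_k.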



Additional information

Received: June 16, 1999 / Accepted: December 24, 1999 / Published online: March 15, 2000


About this article

Cite this article

Powell, M. On the convergence of the DFP algorithm for unconstrained optimization when there are only two variables. Math. Program. 87, 281–301 (2000). https://doi.org/10.1007/s101070050115

  • DOI: https://doi.org/10.1007/s101070050115