Abstract
In this paper we propose the use of damped techniques within Nonlinear Conjugate Gradient (NCG) methods. Damped techniques were introduced by Powell and recently revived by Al-Baali; so far they have been applied only in the framework of quasi-Newton methods. We extend their use to NCG methods for large scale unconstrained optimization, aiming to improve the efficiency and the robustness of the latter methods, especially when solving difficult problems. We consider both unpreconditioned and preconditioned NCG. In the latter case, we embed damped techniques within a class of preconditioners based on quasi-Newton updates. Our purpose is to provide efficient preconditioners which approximate, in some sense, the inverse of the Hessian matrix, while still preserving the information provided by the secant equation or some of its modifications. The results of extensive numerical experiments highlight that the proposed approach is quite promising.
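As a point of reference for the damping mentioned in the abstract, the following minimal sketch (in Python with NumPy) implements Powell's classical damping rule for a quasi-Newton pair \((s, y)\); the function name, the threshold \(\sigma = 0.2\) and the use of a full Hessian approximation \(B\) are illustrative assumptions, and the sketch does not reproduce the damped NCG variants or the preconditioners proposed in the paper.

```python
import numpy as np

def powell_damped_y(s, y, B, sigma=0.2):
    """Powell's damping rule: replace the gradient difference y by
    y_hat = phi*y + (1 - phi)*B @ s, with phi chosen so that
    s^T y_hat >= sigma * s^T B s > 0, which keeps BFGS-type updates
    (and preconditioners built from them) positive definite."""
    Bs = B @ s
    sBs = float(s @ Bs)
    sy = float(s @ y)
    if sy >= sigma * sBs:
        phi = 1.0                           # enough curvature: no damping needed
    else:
        phi = (1.0 - sigma) * sBs / (sBs - sy)
    return phi * y + (1.0 - phi) * Bs       # damped vector y_hat
```

Here \(s\) and \(y\) would typically be the step and gradient differences, \(s_k = x_{k+1} - x_k\) and \(y_k = g_{k+1} - g_k\); when damping is active, the returned vector satisfies \(s^{T}\hat{y} = \sigma\, s^{T} B s > 0\) by construction.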
References
Al-Baali M (1985) Descent property and global convergence of the Fletcher–Reeves method with inexact line search. IMA J Numer Anal 5:121–124
Al-Baali M (2014) Damped techniques for enforcing convergence of quasi-Newton methods. Optim Methods Softw 29:919–936
Al-Baali M, Fletcher R (1996) On the order of convergence of preconditioned nonlinear conjugate gradient methods. SIAM J Sci Comput 17:658–665
Al-Baali M, Grandinetti L (2009) On practical modifications of the quasi-Newton BFGS methods. AMO Adv Model Optim 11:63–76
Al-Baali M, Grandinetti L (2017) Improved damped quasi-Newton methods for unconstrained optimization. Pacific J Optim (to appear)
Al-Baali M, Purnama A (2012) Numerical experience with damped quasi-Newton optimization methods when the objective function is quadratic. SQU J Sci 17:1–11
Al-Baali M, Grandinetti L, Pisacane O (2014a) Damped techniques for the limited memory BFGS method for large-scale optimization. J Optim Theory Appl 161:688–699
Al-Baali M, Spedicato E, Maggioni F (2014b) Broyden’s quasi-Newton methods for a nonlinear system of equations and unconstrained optimization: a review and open problems. Optim Methods Softw 29:937–954
Boyd S, Vandenberghe L (2004) Convex optimization. Cambridge University Press, New York
Caliciotti A, Fasano G, Roma M (2016) Preconditioning strategies for nonlinear conjugate gradient methods, based on quasi-Newton updates. In: Sergeyev Y, Kvasov D, Dell’Accio F, Mukhametzhanov M (eds) AIP conference proceedings, vol 1776. American Institute of Physics
Caliciotti A, Fasano G, Roma M (2017a) Novel preconditioners based on quasi-Newton updates for nonlinear conjugate gradient methods. Optim Lett 11:835–853
Caliciotti A, Fasano G, Roma M (2017b) Preconditioned nonlinear conjugate gradient methods based on a modified secant equation. Appl Math Comput (submitted)
Dolan ED, Moré J (2002) Benchmarking optimization software with performance profiles. Math Program 91:201–213
Fasano G, Roma M (2013) Preconditioning Newton–Krylov methods in nonconvex large scale optimization. Comput Optim Appl 56:253–290
Fasano G, Roma M (2016) A novel class of approximate inverse preconditioners for large scale positive definite linear systems in optimization. Comput Optim Appl 65:399–429
Fletcher R (1987) Practical methods of optimization. Wiley, New York
Gilbert J, Nocedal J (1992) Global convergence properties of conjugate gradient methods for optimization. SIAM J Optim 2:21–42
Gould NIM, Orban D, Toint PL (2015) CUTEst: a constrained and unconstrained testing environment with safe threads. Comput Optim Appl 60:545–557
Gratton S, Sartenaer A, Tshimanga J (2011) On a class of limited memory preconditioners for large scale linear systems with multiple right-hand sides. SIAM J Optim 21:912–935
Grippo L, Lucidi S (1997) A globally convergent version of Polak–Ribière conjugate gradient method. Math Program 78:375–391
Grippo L, Lucidi S (2005) Convergence conditions, line search algorithms and trust region implementations for the Polak–Ribière conjugate gradient method. Optim Methods Softw 20:71–98
Grippo L, Sciandrone M (2011) Metodi di ottimizzazione non vincolata. Springer, Milan
Hager W, Zhang H (2013) The limited memory conjugate gradient method. SIAM J Optim 23:2150–2168
Kelley CT (1999) Iterative methods for optimization. Frontiers in applied mathematics. SIAM, Philadelphia, PA
Morales J, Nocedal J (2000) Automatic preconditioning by limited memory quasi-Newton updating. SIAM J Optim 10:1079–1096
Moré J, Thuente D (1994) Line search algorithms with guaranteed sufficient decrease. ACM Trans Math Softw (TOMS) 20:286–307
Nocedal J, Wright S (2006) Numerical optimization, 2nd edn. Springer, New York
Powell MJD (1978) Algorithms for nonlinear constraints that use Lagrangian functions. Math Program 14:224–248
Powell MJD (1986) How bad are the BFGS and DFP methods when the objective function is quadratic? Math Program 34:34–47
Pytlak R (2009) Conjugate gradient algorithms in nonconvex optimization. Springer, Berlin
Appendix
In this appendix, we report a technical result used in the proof of Proposition 1 (see Grippo and Sciandrone 2011).
Lemma 1
Let \(\{\xi _k\}\) be a sequence of nonnegative real numbers, let \(\varOmega >0\) and \(q \in (0,1)\), and suppose that there exists \(k_1 \ge 1\) such that
$$ \xi _{k+1} \le q\, \xi _k + \varOmega , \quad \text{for all } k \ge k_1. $$
Then the sequence \(\{\xi _k\}\) is bounded; in particular,
$$ \xi _k \le q^{\,k-k_1}\, \xi _{k_1} + \frac{\varOmega }{1-q}, \quad \text{for all } k \ge k_1. $$
Proof
Starting from the relation \(\xi _{k+1} \le q\, \xi _k + \varOmega \) and applying it repeatedly over the \(k-k_1\) iterations from \(k_1\) to \(k\), we get
$$ \xi _k \le q^{\,k-k_1}\, \xi _{k_1} + \varOmega \sum _{j=0}^{k-k_1-1} q^{\,j}, $$
from which, since \(\sum _{j=0}^{\infty } q^{\,j} = 1/(1-q)\) for \(q \in (0,1)\), we obtain
$$ \xi _k \le q^{\,k-k_1}\, \xi _{k_1} + \frac{\varOmega }{1-q}. $$
\(\square \)
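A small numerical illustration of the bound in Lemma 1 (not part of the original paper; the values of \(q\), \(\varOmega \) and the generated sequence are arbitrary assumptions) can be written as follows.

```python
import numpy as np

# Build a nonnegative sequence obeying xi_{k+1} <= q*xi_k + Omega and check
# the bound xi_k <= q**(k - k1) * xi_{k1} + Omega / (1 - q) from Lemma 1.
q, Omega = 0.7, 0.5
rng = np.random.default_rng(0)
xi = [3.0]                                    # value at k = k_1
for _ in range(50):
    xi.append(q * xi[-1] + rng.uniform(0.0, Omega))   # satisfies the recursion
for j, x in enumerate(xi):
    assert x <= q**j * xi[0] + Omega / (1.0 - q) + 1e-12
```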
Cite this article
Al-Baali, M., Caliciotti, A., Fasano, G. et al. Exploiting damped techniques for nonlinear conjugate gradient methods. Math Meth Oper Res 86, 501–522 (2017). https://doi.org/10.1007/s00186-017-0593-1