Abstract
We survey and extend a general approach to analyzing the convergence and the rate of convergence of feasible descent methods that does not require any nondegeneracy assumption on the problem. This approach is based on a certain error bound for estimating the distance to the solution set and is applicable to a broad class of methods.
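As a concrete illustration of the setting, the sketch below runs a feasible descent method (projected gradient descent) on a small box-constrained convex quadratic and monitors the projection residual ||x - [x - ∇f(x)]⁺||, a computable quantity of the kind used in error bounds to estimate the distance to the solution set. The problem data, step size, and tolerance are illustrative choices, not taken from the paper.

```python
import numpy as np

# Minimize f(x) = 0.5 x'Qx - b'x over the box [0, 1]^n by projected
# gradient descent. The projection residual r(x) = x - P(x - step*grad f(x))
# vanishes exactly at solutions, and for this problem class its norm bounds
# the distance to the solution set up to a problem-dependent constant.

def project_box(x, lo=0.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def projected_gradient(Q, b, x0, step, tol=1e-10, max_iter=10000):
    x = x0
    for k in range(max_iter):
        g = Q @ x - b                      # gradient of the quadratic
        x_new = project_box(x - step * g)  # feasible descent step
        residual = np.linalg.norm(x - x_new)
        if residual < tol:
            return x_new, k, residual
        x = x_new
    return x, max_iter, residual

# Illustrative data: Q positive definite, step < 2 / lambda_max(Q).
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -1.0])
x_star, iters, res = projected_gradient(Q, b, np.array([0.5, 0.5]), step=0.4)
# The constrained minimizer is (0.5, 0): the second coordinate is pinned to
# the lower bound, where the corresponding gradient component is positive.
```

Because the objective is strongly convex and the feasible set polyhedral, the iterates converge linearly, and the residual shrinks at the same geometric rate, which is the pattern of result the error-bound approach establishes for the broader method class.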
Additional information
The research of the first author is supported by the Natural Sciences and Engineering Research Council of Canada, Grant No. OPG0090391, and the research of the second author is supported by the National Science Foundation, Grant No. CCR-9103804.
Luo, ZQ., Tseng, P. Error bounds and convergence analysis of feasible descent methods: a general approach. Ann Oper Res 46, 157–178 (1993). https://doi.org/10.1007/BF02096261