Abstract
We present a primal-dual active-set framework for solving large-scale convex quadratic optimization problems (QPs). In contrast to classical active-set methods, our framework allows for multiple simultaneous changes in the active-set estimate, which often leads to rapid identification of the optimal active set regardless of the initial estimate. The iterates of our framework are the active-set estimates themselves, where for each estimate a primal-dual solution is uniquely defined via a reduced subproblem. Through the introduction of an index set auxiliary to the active-set estimate, our approach is globally convergent for strictly convex QPs. Moreover, the computational cost of each iteration is typically only modestly greater than that of solving a reduced linear system. Numerical results are provided, illustrating that two proposed instances of our framework are efficient in practice, even on poorly conditioned problems. We attribute these latter benefits to the relationship between our framework and semi-smooth Newton techniques.
References
Aganagić, M.: Newton’s method for linear complementarity problems. Math. Program. 28(3), 349–362 (1984)
Bergounioux, M., Ito, K., Kunisch, K.: Primal-dual strategy for constrained optimal control problems. SIAM J. Control Optim. 37(4), 1176–1194 (1999)
Bergounioux, M., Kunisch, K.: Primal-dual strategy for state-constrained optimal control problems. Comput. Optim. Appl. 22(2), 193–224 (2002)
Birgin, E.G., Floudas, C.A., Martínez, J.M.: Global minimization using an augmented Lagrangian method with variable lower-level constraints. Math. Program. 125(1), 139–162 (2010)
Byrd, R.H., Chin, G.M., Nocedal, J., Oztoprak, F.: A family of second-order methods for convex \(\ell _1\)-regularized optimization. Technical report, Department of Industrial Engineering and Management Sciences, Northwestern University, Evanston, IL (2012)
Chen, L., Wang, Y., He, G.: A feasible active set QP-free method for nonlinear programming. SIAM J. Optim. 17(2), 401–429 (2006)
Ciarlet, P.G.: The Finite Element Method for Elliptic Problems. Classics in Applied Mathematics. Society for Industrial and Applied Mathematics, Philadelphia, PA (2002)
Conn, A.R., Gould, N.I.M., Toint, Ph.L.: A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds. SIAM J. Numer. Anal. 28(2), 545–572 (1991)
Conn, A.R., Gould, N.I.M., Toint, Ph.L.: Trust-Region Methods. Society for Industrial and Applied Mathematics, Philadelphia, PA (2000)
Cryer, C.W.: The solution of a quadratic programming problem using systematic overrelaxation. SIAM J. Control 9(3), 385–392 (1971)
Feng, L., Linetsky, V., Morales, J.L., Nocedal, J.: On the solution of complementarity problems arising in American options pricing. Optim. Method. Softw. 26(4–5), 813–825 (2011)
Ferreau, H.J., Kirches, C., Potschka, A., Bock, H.G., Diehl, M.: qpOASES: a parametric active-set algorithm for quadratic programming. Math. Program. Comput., 1–37 (2014)
Gharbia, I.B., Gilbert, J.C.: Nonconvergence of the plain Newton-min algorithm for linear complementarity problems with a P-matrix. Math. Program. 134(2), 349–364 (2012)
Gill, P.E., Murray, W., Saunders, M.A.: SNOPT: an SQP algorithm for large-scale constrained optimization. SIAM Rev. 47(1), 99–131 (2005)
Gill, P.E., Murray, W., Saunders, M.A.: User’s guide for SQOPT version 7: software for large-scale linear and quadratic programming. Systems Optimization Laboratory, Stanford University, Palo Alto, CA (2006)
Gill, P.E., Murray, W., Wright, M.H.: Practical Optimization. Emerald Group Publishing Limited, Bingley (1982)
Gill, P.E., Robinson, D.P.: Regularized sequential quadratic programming methods. Technical report, Department of Mathematics, University of California, San Diego, La Jolla, CA (2011)
Gould, N.I.M., Robinson, D.P.: A second derivative SQP method: global convergence. SIAM J. Optim. 20(4), 2023–2048 (2010)
Gould, N.I.M., Robinson, D.P.: A second derivative SQP method: local convergence and practical issues. SIAM J. Optim. 20(4), 2049–2079 (2010)
Gould, N.I.M., Robinson, D.P.: A second-derivative SQP method with a “trust-region-free” predictor step. IMA J. Numer. Anal. 32(2), 580–601 (2011)
Gould, N.I.M., Toint, Ph.L.: An iterative working-set method for large-scale nonconvex quadratic programming. Appl. Numer. Math. 43(1), 109–128 (2002)
Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton’s method. SIAM J. Numer. Anal. 23(4), 707–716 (1986)
Hager, W.W.: The dual active set algorithm. In: Pardalos, P.M. (ed.) Advances in Optimization and Parallel Computing, pp. 137–142. North Holland, Amsterdam (1992)
Hager, W.W., Hearn, D.W.: Application of the dual active set algorithm to quadratic network optimization. Comput. Optim. Appl. 1(4), 349–373 (1993)
Hintermüller, M., Ito, K., Kunisch, K.: The primal-dual active set strategy as a semismooth Newton method. SIAM J. Optim. 13(3), 865–888 (2003)
Kostreva, M.M.: Block pivot methods for solving the complementarity problem. Linear Algebra Appl. 21(3), 207–215 (1978)
Kočvara, M., Zowe, J.: An iterative two-step algorithm for linear complementarity problems. Numer. Math. 68(1), 95–106 (1994)
Kunisch, K., Rendl, F.: An infeasible active set method for quadratic problems with simple bounds. SIAM J. Optim. 14(1), 35–52 (2003)
Maros, I., Mészáros, C.: A repository of convex quadratic programming problems. Optim. Method. Softw. 11(1–4), 671–681 (1999)
Moré, J., Toraldo, G.: On the solution of large quadratic programming problems with bound constraints. SIAM J. Optim. 1(1), 93–113 (1991)
Nocedal, J., Wright, S.J.: Numerical Optimization, 2nd edn. Springer Series in Operations Research and Financial Engineering. Springer, New York (2006)
Portugal, L.F., Júdice, J.J., Vicente, L.N.: A comparison of block pivoting and interior-point algorithms for linear least squares problems with nonnegative variables. Math. Comput. 63(208), 625–643 (1994)
Robinson, D.P., Feng, L., Nocedal, J., Pang, J.S.: Subspace accelerated matrix splitting algorithms for asymmetric and symmetric linear complementarity problems. SIAM J. Optim. 23(3), 1371–1397 (2013)
Toint, Ph.L.: Non-monotone trust-region algorithms for nonlinear optimization subject to convex constraints. Math. Program. 77(3), 69–94 (1997)
Ulbrich, M., Ulbrich, S.: Non-monotone trust region methods for nonlinear equality constrained optimization without a penalty function. Math. Program. 95(1), 103–135 (2003)
Cortes, C., Vapnik, V.: Support-vector networks. Mach. Learn. 20(3), 273–297 (1995)
Vardi, Y., Shepp, L.A., Kaufman, L.: A statistical model for positron emission tomography. J. Am. Statist. Assoc. 80(389), 8–20 (1985)
Acknowledgments
Frank E. Curtis and Zheng Han were supported in part by National Science Foundation Grant DMS–1016291. Daniel P. Robinson was supported in part by National Science Foundation Grant DMS–1217153.
Appendix: Primal-dual active-set as a semi-smooth Newton method
In this appendix, we show that Algorithm 3 is equivalent to a semi-smooth Newton method under certain conditions. The following theorem utilizes the concept of a slant derivative of a slantly differentiable function [25].
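To make the semi-smooth Newton viewpoint concrete, the following is a minimal sketch (not the paper's Algorithm 3) for the simplified bound-constrained problem \(\min \tfrac12 x^TQx + c^Tx\) s.t. \(x \ge 0\) with \(Q\) symmetric positive definite, whose KKT conditions reduce to the complementarity residual \(F(x) = \min(x, Qx + c) = 0\). The function name and interface are illustrative; the slant derivative of \(\min(a,b)\) is taken row-wise, picking the identity row where the first argument attains the minimum and the corresponding row of \(Q\) otherwise.

```python
import numpy as np

def semismooth_newton_lcp(Q, c, x, max_iter=50, tol=1e-12):
    """Semi-smooth Newton sketch for F(x) = min(x, Qx + c) = 0,
    the KKT system of: min 0.5 x'Qx + c'x  s.t.  x >= 0,
    with Q symmetric positive definite.

    Row i of the slant derivative is e_i when min() picks x_i
    (i.e., x_i <= (Qx + c)_i), and row i of Q otherwise."""
    n = len(c)
    for _ in range(max_iter):
        z = Q @ x + c
        F = np.minimum(x, z)
        if np.linalg.norm(F) <= tol:
            break
        # Build the slant derivative by selecting rows of I or Q.
        J = np.where((x <= z)[:, None], np.eye(n), Q)
        x = x + np.linalg.solve(J, -F)
    return x
```

Because the residual is piecewise linear, each Newton step lands exactly on the solution of a reduced linear system, which is what links this iteration to an active-set update.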
Theorem 5
Let \(\{(x_k,y_k,z^\ell _k,z^u_k)\}\) be generated by Algorithm 3 with Step 6 employing Algorithm 4, where we suppose that, for all \(k\), \(({\mathcal A}_k^\ell ,{\mathcal A}_k^u,{\mathcal I}_k,{\mathcal U}_k)\) with \({\mathcal U}_k=\emptyset \) is a feasible partition at the start of Step 3. Then, \(\{(x_k,y_k,z^\ell _k,z^u_k)\}\) is the sequence of iterates generated by the semi-smooth Newton method for finding a zero of the function \(\mathrm{KKT}\) defined by (2) with initial value \((x_0,y_0,z^\ell _0,z^u_0) = \mathrm{SM}({\mathcal A}^\ell _0,{\mathcal A}^u_0,{\mathcal I}_0,\emptyset )\) and slant derivative \(M(a,b)\) of the slantly differentiable function \(m(a,b)=\min (a,b)\) defined by
Proof
To simplify the proof, let us assume that \(\ell = -\infty \) so that problem (1) has upper bounds only. This ensures that \(z^\ell _k = 0\) and \({\mathcal A}^\ell _k = \emptyset \) for all \(k\), so in this proof we remove all references to these quantities. The proof of the case with both lower and upper bounds follows similarly.
Under the assumptions of the theorem, the point \((x_0,y_0,z^u_0) \leftarrow \mathrm{SM}(\emptyset ,{\mathcal A}^u_0,{\mathcal I}_0,\emptyset )\) is the first primal-dual iterate for both algorithms, i.e., Algorithm 3 and the semi-smooth Newton method. Furthermore, it follows from (4)–(6) that
We now proceed to show that both algorithms generate the same subsequent iterate, namely \((x_1,y_1,z^u_1)\). The result then follows as a similar argument can be used to show that both algorithms generate the same iterate \((x_k,y_k,z^u_k)\) for each \(k\).
Partitioning the variable indices into four sets, namely \(\mathrm{I}\), \(\mathrm{II}\), \(\mathrm{III}\), and \(\mathrm{IV}\), we find:
Here, the implications after each set follow from Step 2 of Algorithm 2. Next, (16) implies
Algorithm 3 computes the next iterate as the unique point \((x_1,y_1,z^u_1)\) satisfying
Now, let us consider one iteration of the semi-smooth Newton method on the function KKT defined by (2) using the slant derivative function \(M\). It follows from (27), Table 13, and the definition of \(M\) that the semi-smooth Newton system may be written as
The first five block equations of (30) combined with (26) yield
while the last four blocks of equations of (30) and (27) imply
so that
It now follows from (29), (31), and (36) that \((x_1,y_1,z^u_1)\) generated by the semi-smooth Newton method is the same as that generated by Algorithm 3.\(\square \)
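The equivalence proved above can be illustrated from the active-set side. The following is a hedged sketch of a primal-dual active-set iteration for the same simplified problem \(\min \tfrac12 x^TQx + c^Tx\) s.t. \(x \ge 0\) (nonnegativity bounds only, so a single active set suffices); the function name and the convergence cap are illustrative, and this is not the paper's Algorithm 3, which handles general bounds and an auxiliary index set. Each iterate is an active-set estimate from which the primal-dual pair is recovered via a reduced linear solve, and the estimate is updated by testing which argument wins in \(\min(x, z)\).

```python
import numpy as np

def pdas_qp(Q, c, max_iter=50):
    """Primal-dual active-set sketch for min 0.5 x'Qx + c'x  s.t.  x >= 0,
    with Q symmetric positive definite.  The iterate is the active-set
    estimate itself; (x, z) is defined by a reduced subproblem."""
    n = len(c)
    active = np.zeros(n, dtype=bool)   # indices estimated to satisfy x_i = 0
    x = z = np.zeros(n)
    for _ in range(max_iter):
        x, z = np.zeros(n), np.zeros(n)
        inact = ~active
        # Reduced subproblem: free variables solve the unconstrained system;
        # bound variables stay at x_i = 0 and pick up the multiplier z_i.
        x[inact] = np.linalg.solve(Q[np.ix_(inact, inact)], -c[inact])
        z[active] = Q[np.ix_(active, inact)] @ x[inact] + c[active]
        # New estimate: index i is active when the bound wins min(x_i, z_i).
        new_active = (x - z) < 0
        if np.array_equal(new_active, active):
            break                      # KKT point: x >= 0, z >= 0, x'z = 0
        active = new_active
    return x, z
```

Note that the update rule allows arbitrarily many indices to enter or leave the active-set estimate in a single iteration, which is the feature the paper's framework retains while adding a globalization mechanism.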
Curtis, F.E., Han, Z. & Robinson, D.P. A globally convergent primal-dual active-set framework for large-scale convex quadratic optimization. Comput Optim Appl 60, 311–341 (2015). https://doi.org/10.1007/s10589-014-9681-9