We present an active-set algorithm for finding a local minimizer of a nonconvex bound-constrained quadratic problem. Our algorithm extends the ideas developed by Dostal and Schoberl, which are based on the linear conjugate gradient algorithm for (approximately) solving a linear system with a positive-definite coefficient matrix. This is achieved by making two key changes. First, we perform a line search along negative curvature directions when they are encountered in the linear conjugate gradient iteration. Second, we use Lanczos iterations to compute approximations to leftmost eigenpairs, which are needed to promote convergence to points satisfying certain second-order optimality conditions. Preliminary numerical results show that our method is efficient and robust on nonconvex bound-constrained quadratic problems.
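To make the negative-curvature handling concrete, the following is a minimal Python sketch of a conjugate gradient loop on the quadratic model 0.5·xᵀAx − bᵀx that detects a direction of nonpositive curvature and returns it instead of taking a CG step along it. The function name, interface, and stopping rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cg_with_negative_curvature(A, b, x0, tol=1e-8, max_iter=200):
    """CG on 0.5*x'Ax - b'x that watches for negative curvature (sketch only)."""
    x = x0.copy()
    r = A @ x - b          # gradient of the quadratic model
    p = -r
    for _ in range(max_iter):
        Ap = A @ p
        curv = p @ Ap
        if curv <= 0:
            # Nonpositive curvature: the quadratic is unbounded below along p,
            # so report the direction and let an outer routine follow it to a bound.
            return x, p, "negative_curvature"
        alpha = (r @ r) / curv
        x = x + alpha * p
        r_new = r + alpha * Ap
        if np.linalg.norm(r_new) <= tol:
            return x, None, "converged"
        beta = (r_new @ r_new) / (r @ r)
        p = -r_new + beta * p
        r = r_new
    return x, None, "max_iter"
```

An approximate leftmost eigenpair of A, of the kind used to promote convergence to second-order points, can be obtained with a few Lanczos iterations, for example via `scipy.sparse.linalg.eigsh(A, k=1, which='SA')`.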
We present a new algorithm for nonconvex bound-constrained quadratic optimization. In the strictly convex case, our method is equivalent to the state-of-the-art algorithm by Dostal and Schoberl [Comput. Optim. Appl., 30 (2005), pp. 23–43]. Unlike their method, however, we establish a convergence theory for our algorithm that holds even when the problems are nonconvex. This is achieved by carefully addressing the challenges associated with directions of negative curvature, in particular those that may naturally arise when applying the conjugate gradient algorithm to an indefinite system of equations. Our presentation and analysis deal explicitly with both lower and upper bounds on the optimization variables, whereas the work by Dostal and Schoberl considers only strictly convex problems with lower bounds. To handle this generality, we introduce the reduced chopped gradient, which is analogous to the reduced free gradient used previously. The reduced chopped gradient leads to a new condition that determines when optimization over a given subspace should be terminated. Although not equivalent to it, this condition is motivated by a similar condition used by Dostal and Schoberl. Numerical results illustrate the superior performance of our method over commonly used solvers that employ gradient projection steps and subspace acceleration.
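As a rough illustration of the quantities involved, the sketch below splits the gradient of the quadratic into its free and chopped parts with respect to lower and upper bounds, following the standard decomposition used in gradient-projection QP methods. It is an assumption-laden simplification: the "reduced" variants referred to in the abstract additionally cap each component by a scaled distance to the nearest bound, and the exact definitions are those in the paper.

```python
import numpy as np

def free_and_chopped_gradients(g, x, lo, hi, tol=0.0):
    """Split the gradient g = Ax - b into free and chopped parts (sketch only)."""
    at_lo = x <= lo + tol
    at_hi = x >= hi - tol
    free = ~(at_lo | at_hi)

    phi = np.where(free, g, 0.0)                 # free gradient: free components only
    beta = np.zeros_like(g)
    beta[at_lo] = np.minimum(g[at_lo], 0.0)      # chopped gradient at active lower bounds
    beta[at_hi] = np.maximum(g[at_hi], 0.0)      # chopped gradient at active upper bounds
    return phi, beta
```

Both pieces vanishing at a feasible point corresponds to first-order stationarity, and comparing their sizes is the kind of test that decides whether to keep optimizing over the current face or to change the active set.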
Quadratic Programming Active-Set (QP-as) solvers are often used in Model Predictive Control (MPC). Increasing the speed at which such problems are solved can improve the quality of the systems that depend on them. With newer-generation Field Programmable Gate Arrays (FPGAs), it is possible to create new architectures that decrease the time required to solve optimization problems. We propose a low-latency fixed-point FPGA implementation that takes advantage of the larger amount of available local memory and a high degree of functional-block reuse to decrease the execution time of solving optimization problems.
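As a small illustration of the arithmetic underlying a fixed-point design, the snippet below shows multiplication in a Q4.12 format; the word length and number of fraction bits are arbitrary choices for the sketch and are not taken from the paper.

```python
# Q4.12 fixed point: 12 fractional bits in a 16-bit word (illustrative choice).
FRAC_BITS = 12

def to_fixed(x):
    # Represent a real number as a scaled integer.
    return int(round(x * (1 << FRAC_BITS)))

def fixed_mul(a, b):
    # The product of two Q4.12 numbers carries 24 fractional bits; shift back down.
    return (a * b) >> FRAC_BITS

a, b = to_fixed(1.5), to_fixed(-0.75)
print(fixed_mul(a, b) / (1 << FRAC_BITS))   # approximately -1.125
```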
In this article we compare two contrasting methods, the active set method (ASM) and genetic algorithms, for learning the weights in aggregation operators such as the weighted mean (WM), ordered weighted average (OWA), and weighted ordered weighted average (WOWA). We give the formal definitions of each aggregation operator, explain the two learning methods, report results for each method and operator on simple test datasets, and contrast the approaches and results.
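For concreteness, the sketch below evaluates the weighted mean and OWA operators as they are usually defined in the aggregation-operator literature; the input values and weights are invented for illustration, and WOWA, which combines both kinds of weights through an interpolated weighting function, is omitted here.

```python
import numpy as np

def weighted_mean(x, p):
    """WM: the weights p attach to the individual sources/criteria."""
    return float(np.dot(p, x))

def owa(x, w):
    """OWA: the weights w attach to rank positions of the sorted inputs."""
    return float(np.dot(w, np.sort(x)[::-1]))

# Toy example (values and weights are made up):
x = np.array([0.9, 0.4, 0.7])
p = np.array([0.5, 0.2, 0.3])   # importance of each source
w = np.array([0.6, 0.3, 0.1])   # emphasis on the largest values
print(weighted_mean(x, p), owa(x, w))   # 0.74 and 0.79
```

Learning the weights then amounts to fitting p or w to example input/output pairs subject to nonnegativity and a sum-to-one constraint, which is where the active set method and genetic algorithms enter as alternative optimizers.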