Nonlinear Programming Solvers for Unconstrained and Constrained Optimization Problems
ARTICLE HISTORY
Compiled April 12, 2022
Abstract
In this paper we propose a set of guidelines to select a solver for the solution of
nonlinear programming problems. With this in mind, we present a comparison of
the convergence performances of commonly used solvers for both unconstrained and
constrained nonlinear programming problems. The comparison involves accuracy,
convergence rate, and convergence speed. Because of its popularity among research
teams in academia and industry, MATLAB is used as common implementation plat-
form for the solvers. Our study includes solvers which are either freely available, or
require a license, or are fully described in literature. In addition, we differentiate
solvers if they allow the selection of different optimal search methods. As result,
we examine the performances of 23 algorithms to solve 60 benchmark problems. To
enrich our analysis, we will describe how, and to what extent, convergence speed
and accuracy can be improved by changing the inner settings of each solver.
KEYWORDS
NLP; unconstrained; constrained; optimization
1. Introduction
The current technological era prioritizes, more than ever, high performance and ef-
ficiency of complex processes controlled by a set of variables. Examples of these
processes are (Lasdon and Warren 1980; Grossmann 1996; Charalambous 1979;
Grossmann and Kravanja 1997; Wu and William 1992; Wansuo and Haiying 2010;
Rustagi 1994; Ziemba and Vickson 1975): engineering designs, chemical plant reac-
tions, manufacturing processes, grid power management, power generation/conversion
process, path planning for autonomous vehicles, climate simulations, etc. Quite often,
the search for the best performance, or the highest efficiency, can be transcribed into the
form of a Nonlinear Programming (NLP) problem. Namely, the need to minimize (or
maximize) a scalar cost function subjected to a set of constraints. In some instances
these functions are linear but, in general, one or both of them are characterized by
nonlinearities. For simple, one-time use problems, one might successfully use any of
Section 5. Finally, the main contributions of the paper are outlined in Section 6.
In general, a constrained NLP problem aims to minimize a nonlinear real scalar objective function, with respect to a set of variables, while satisfying a set of nonlinear constraints. If the problem entails the minimization of a function without constraints, the problem is defined as unconstrained (Nocedal and Wright 2006). In the following, the general forms of the unconstrained and constrained nonlinear optimization problems, in minimization form, are stated.
The unconstrained problem reads

min_{x ∈ R^n} f(x),   (1)

with f a smooth real scalar function; a first-order necessary condition for x* to be a local minimizer is

∇f(x*) = 0.   (2)

The constrained problem reads

min_{x ∈ R^n} f(x)   (3)

subject to

c_i(x) = 0,  i ∈ E,   (4)
c_j(x) ≤ 0,  j ∈ I,   (5)
with c(x) a smooth real-valued function on a subset of R^n. Here, c_i(x) and c_j(x) represent the sets of equality constraints and inequality constraints, respectively. The feasible set is identified as the set of points x that satisfy the constraints (Eqs. 4, 5). It must be pointed out that some of the solvers considered in this study are only able to handle equality constraints. In these instances, we introduce a set of slack variables s_j and convert Eq. 5 into the following set of equality constraints

c_j(x) + s_j^2 = 0,  j ∈ I.   (6)

This necessary expedient obviously induces additional computational burden on the solvers affected by this constraint-type limitation.
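To make the conversion concrete, the following MATLAB sketch augments the decision vector with squared slack variables; the example constraints and function names are illustrative assumptions, not taken from any specific solver.

% Convert an inequality-constrained problem, c(x) <= 0, into an
% equality-constrained one by appending squared slack variables:
%   c_j(x) + s_j^2 = 0,  j = 1,...,p.
% Illustrative sketch only; the augmented variable vector is z = [x; s].

c  = @(x) [x(1)^2 + x(2)^2 - 1;      % example inequality c_1(x) <= 0
           -x(1) + 0.5];             % example inequality c_2(x) <= 0
p  = 2;                              % number of inequality constraints
n  = 2;                              % number of original variables

% Equality constraints in the augmented variables z = [x; s]
h  = @(z) c(z(1:n)) + z(n+1:n+p).^2;

% Initial guess: original variables plus unit slacks (as done in Sec. 4.2.7)
z0 = [0.5; 0.5; ones(p,1)];
disp(h(z0))                          % residual of the converted constraints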
For the constrained problem, a local minimizer x* must satisfy the Karush-Kuhn-Tucker (KKT) first-order necessary conditions, where g_i, i = 1, ..., m, and h_j, j = 1, ..., ℓ, denote the inequality and equality constraints, with associated multipliers µ_i and λ_j:
• Stationarity:

∇f(x*) + Σ_{i=1}^{m} µ_i ∇g_i(x*) + Σ_{j=1}^{ℓ} λ_j ∇h_j(x*) = 0.   (7)
• Primal feasibility:

g_i(x*) ≤ 0,  for i = 1, ..., m,   (8)
h_j(x*) = 0,  for j = 1, ..., ℓ.   (9)

• Dual feasibility:

µ_i ≥ 0,  for i = 1, ..., m.   (10)
• Complementary slackness:

Σ_{i=1}^{m} µ_i g_i(x*) = 0.   (11)

A simple numerical check of these conditions is sketched below.
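The MATLAB sketch evaluates the stationarity, feasibility, and complementarity residuals of conditions (7)-(11) with finite-difference gradients; the example problem, the candidate multiplier, and the step size are illustrative assumptions.

% Numerical check of the KKT conditions (7)-(11) at a candidate point.
% Illustrative example: min f(x) = x1^2 + x2^2  s.t.  g(x) = 1 - x1 - x2 <= 0.
f  = @(x) x(1)^2 + x(2)^2;
g  = @(x) 1 - x(1) - x(2);           % single inequality constraint, m = 1

xs = [0.5; 0.5];                     % candidate minimizer
mu = 1;                              % candidate multiplier (assumed)

% Forward-difference gradient (sufficient for this illustration)
grad = @(fun,x) arrayfun(@(i) (fun(x + 1e-6*((1:numel(x))'==i)) - fun(x))/1e-6, ...
                         (1:numel(x))');

stat  = grad(f,xs) + mu*grad(g,xs);  % stationarity residual, Eq. (7)
prim  = max(g(xs), 0);               % primal feasibility violation, Eq. (8)
dual  = min(mu, 0);                  % dual feasibility violation, Eq. (10)
compl = mu*g(xs);                    % complementary slackness, Eq. (11)

fprintf('stationarity = %.2e, primal = %.2e, dual = %.2e, compl = %.2e\n', ...
        norm(stat), prim, abs(dual), abs(compl));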
The selection of the NLP solvers considered in this work is based on the following
aspects. First of all, we are only considering algorithms that can be implemented in
MATLAB. Secondly, we have included solvers that are either open source or, for commercial software, available as a free trial version. The remaining part of this section briefly describes the 23 solvers included in our analysis and the most direct source for each algorithm.
3.1. APSO
The Accelerated Particle Swarm Optimization (APSO) is an algorithm developed by
Yang at Cambridge University in 2007, and it is based on a swarm-intelligence search for the optimum (Yang 2014). APSO is an evolution of the standard particle swarm optimization (PSO), developed to accelerate the convergence of the standard version of the
algorithm. The standard PSO is characterized by two elements, the swarm, that is the
population, and the members of the population, called particles. The search is based
on a randomly initialized population that moves in randomly chosen directions. In par-
ticular, each particle moves through the searching space, remembers the best earlier
positions, velocity, and accelerations of itself and its neighbors. This information is
shared among the particles while they dynamically adjust their own position, velocity
and acceleration derived from the best position of all particles. The next step starts
when all particles have been shifted. Finally, all particles aim to find the global best
among all the current best solutions till the objective function no longer improves or
after a certain number of iterations (Yang 2014). The standard PSO uses both the
current global best and the individual best, whereas the simplified version APSO is
able to accelerate the convergence of the algorithm by using the global best only. Due
to the nature of the algorithm, only constrained nonlinear programming problems can
be solved. The MATLAB version of the APSO algorithm is provided in (Yang 2014).
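To make the mechanism concrete, the MATLAB sketch below implements the accelerated update in which each particle is attracted toward the current global best only; the test function, bounds, and parameter values are illustrative assumptions (the actual implementation used in this study is the one provided in (Yang 2014)).

% Minimal sketch of the accelerated PSO update (global best only).
% Assumed problem: minimize the 2-D sphere function on [-5, 5]^2.
fun   = @(x) sum(x.^2, 2);                % objective, one row per particle
nVars = 2;  nPart = 15;  nIter = 300;
alpha = 0.2;  beta = 0.5;  gamma = 0.9;   % cf. the parameters listed in Table 1
lb = -5;  ub = 5;

X = lb + (ub - lb)*rand(nPart, nVars);    % random initial swarm
[fBest, iBest] = min(fun(X));
gBest = X(iBest, :);                      % current global best

for t = 1:nIter
    a = alpha*gamma^t;                    % shrink the random-roaming amplitude
    % accelerated update: pull every particle toward the global best
    X = (1 - beta)*X + beta*gBest + a*randn(nPart, nVars);
    X = min(max(X, lb), ub);              % keep particles inside the domain
    [fNew, iNew] = min(fun(X));
    if fNew < fBest
        fBest = fNew;  gBest = X(iNew, :);
    end
end
fprintf('APSO sketch: best value %.3e at [%g %g]\n', fBest, gBest);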
3.2. BARON
The Branch and Reduced Optimization Navigator (BARON) is a commercial global
optimization software that solves both NLP and mixed-integer nonlinear programs
(MINLP). BARON uses deterministic global optimization algorithms of the branch
and bound search type which, by applying general assumptions, solve the global opti-
mization problem. It comes with embedded linear programming (LP) and NLP solvers,
such as CLP/CBC, IPOPT, FilterSD and FilterSQP. By default, BARON selects the
NLP solver and may switch between different NLP solvers during the search according
to problem characteristics and solver performance. To refer to the default option, the
name BARON (auto) is chosen. Unlike many other NLP algorithms, BARON doesn’t
explicitly require the user to provide an initial guess of the solution but leaves this
as an option. If a user doesn’t provide the initial guess, then the software shrewdly
initializes the variables. In this paper, we use the demo version of the software in con-
junction with the MATLAB interface, which can be retrieved from (Firm 2021). It must be noted that the free demo version is characterized by some limitations: it can only handle problems with up to ten variables and ten constraints, and it does not support trigonometric functions. Details and documentation about the BARON software are
provided in (Tawarmalani and Sahinidis 2004; Sahinidis n.d.).
3.2.1. CLP/CBC
The Computational Optimization Infrastructure for Operations Research (COIN-
OR) Branch and Cut (CBC) is an open-source mixed-integer linear programming
solver based on the COIN-OR LP solver (CLP) and the COIN-OR Cut generator
library (Cgl). The code has been written primarily by John J. Forrest (COIN-OR
2016).
3.2.2. IPOPT
COIN-OR Interior Point Optimizer (IPOPT) is an open-source solver for large-scale
NLP and it has been mainly developed by Andreas Wächter (Wächter and Biegler
2006). IPOPT implements an interior point line search filter method for nonlinear pro-
gramming models. The problem functions are not required to be convex, but they should be
twice continuously differentiable. Mathematical details of the algorithm and documen-
tation can be found in (COIN-OR 2021).
3.2.3. FilterSD
FilterSD is a package of Fortran 77 subroutines for solving nonlinear programming
problems and linearly constrained problems in continuous optimization. The solver aims to find a solution of the NLP problem in which the objective function and the constraint functions are continuously differentiable at points that satisfy
the bounds on x. The code has been developed to avoid the use of second derivatives,
and to prevent storing an approximate reduced Hessian matrix by using a new limited
memory spectral gradient approach based on Ritz values. The basic approach is that
of Robinson’s method, globalised by using a filter and trust region (FilterSD 2020).
3.2.4. FilterSQP
FilterSQP is a Sequential Quadratic Programming solver suitable for solving large,
sparse or dense linear, quadratic and nonlinear programming problems. The method
implements a trust region algorithm with a filter to promote global convergence. The
filter accepts a trial point whenever the objective or the constraint violation is improved
compared to all previous iterations. The size of the trust region is reduced if the step
is rejected, and increased if it is accepted (Fletcher and Leyffer 1999).
3.3. FMINCON
FMINCON is a MATLAB optimization toolbox used to solve constrained nonlinear
programming problems. FMINCON provides the user the option to select amongst five
different algorithms to solve nonlinear problems: Active-set, Interior-point, Sequential
Quadratic Programming, Sequential Quadratic Programming legacy, and Trust region
reflective. Four, out of the five, algorithms are implemented in our analysis as one of
them, the Trust Region Reflective algorithm, does not accept the type of constraint
considered in our benchmark cases.
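For reference, the algorithm is selected through the 'Algorithm' option of optimoptions; the snippet below shows a minimal call on an illustrative constrained problem (objective, constraint, and starting point are our own example, not one of the benchmark cases).

% Minimal FMINCON call selecting one of the available algorithms.
obj     = @(x) (x(1) - 1)^2 + (x(2) - 2)^2;          % example objective
nonlcon = @(x) deal(x(1)^2 + x(2)^2 - 4, []);        % c(x) <= 0, no ceq
x0      = [0; 0];

opts = optimoptions('fmincon', ...
    'Algorithm', 'interior-point', ...   % or 'sqp', 'sqp-legacy', 'active-set'
    'Display', 'off');

[xOpt, fOpt] = fmincon(obj, x0, [], [], [], [], [], [], nonlcon, opts);
fprintf('x* = [%g %g], f* = %g\n', xOpt, fOpt);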
3.3.1. Active-set
The Active-set algorithm, unlike the Interior-point one (described next), does not use a barrier term to ensure that the inequality constraints are met; instead, it solves the optimality equations by estimating which constraints are active at the solution. A general active-set algorithm for convex quadratic
programming can be found in (Nocedal and Wright 2006).
3.3.2. Interior point
This method, also known as the barrier method, determines the optimum by iteratively approaching the optimal solution from the interior of the feasible set (Nocedal and Wright 2006). Since the interior point algorithm relies on the feasible set, the following requirements must be met for the method to be used:
• the set of feasible interior points should not be empty;
• all the iterations should occur in the interior of this feasible set.
3.4. FMINUNC
FMINUNC is another solver from the MATLAB Optimization Toolbox, used to solve unconstrained nonlinear programming problems (MathWorks 2020b). In this case, FMINUNC gives the user the option of choosing between two different algorithms to solve nonlinear minimization problems: Quasi-Newton and Trust region.
3.4.1. Quasi-Newton
The Quasi-Newton methods build up curvature information at each iteration to formulate a quadratic model problem, whose optimal solution occurs when the stationarity conditions are satisfied. Newton-type methods, as opposed to quasi-Newton methods, calculate the Hessian matrix directly and proceed in a descent direction to locate the minimum after a number of iterations, which is numerically expensive. On the contrary, quasi-Newton methods exploit the observed behavior of the objective function and its gradient to build up curvature information and form an approximation of the Hessian matrix using an appropriate updating technique (MathWorks 2020c). In particular, the quasi-Newton algorithm uses the formula of Broyden, Fletcher, Goldfarb, and Shanno (BFGS) to update the approximation of the Hessian matrix, combined with a cubic line search procedure (MathWorks 2020b).
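As an illustration of how the curvature information is built up, the following sketch performs a single BFGS update of an approximate Hessian from a pair of successive gradients; it reproduces the generic textbook formula (Nocedal and Wright 2006) on an assumed quadratic example, not the toolbox code itself.

% One BFGS update of the approximate Hessian B using the step s = x1 - x0
% and the gradient change y = grad(x1) - grad(x0).
f     = @(x) 0.5*x(1)^2 + 2*x(2)^2;            % illustrative quadratic
gradf = @(x) [x(1); 4*x(2)];                   % its exact gradient

x0 = [1; 1];   B = eye(2);                     % initial point and Hessian guess
d  = -B\gradf(x0);                             % quasi-Newton search direction
x1 = x0 + d;                                   % unit step (no line search here)

s = x1 - x0;   y = gradf(x1) - gradf(x0);
if s.'*y > 0                                   % curvature condition
    B = B - (B*s)*(s.'*B)/(s.'*B*s) + (y*y.')/(y.'*s);   % BFGS formula
end
disp(B)                                        % B now reflects the curvature of f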
3.5. GCMMA
GCMMA, the Globally Convergent Method of Moving Asymptotes, is a modified version of the MMA with guaranteed convergence properties. Unlike MMA, GCMMA consists of so-called inner and outer iterations, and otherwise follows the same steps as the MMA with small changes. In each outer iteration, an approximate subproblem is created by replacing the objective and constraint functions with convex approximations, and the subproblem is solved to obtain a candidate iteration point. If the candidate point is acceptable, the algorithm moves to the next outer iteration; otherwise, inner iterations kick off, in which new, more conservative subproblems are generated and solved until an acceptable point is found (Svanberg 2002).
The GCMMA algorithm is fully described in (Svanberg 2007), and the MATLAB code
is freely available at (Svanberg 2020).
3.6. KNITRO
ARTELYS KNITRO is a commercially available nonlinear optimization software pack-
age developed by Ziena Optimization since 2001 (Knitro 2021b). KNITRO, short for
Nonlinear Interior point Trust Region Optimization, is a software package for finding
local solutions of both continuous optimization problems, with or without constraints,
and discrete optimization problems with integer or binary variables. The KNITRO
package provides efficient and robust solutions of small and large problems, both continuous and discrete, and also offers derivative-free options. It supports the most popular
operating systems and several modeling languages and programmatic interfaces (Knitro
2021a). Multiple versions of the software are available to download at (Knitro 2021b).
In this work, the software free trial license is used, in conjunction with the MATLAB
interface. Several algorithms are included in the software, such as Interior point, Active-
set, and Sequential Quadratic Programming. The description of these algorithms can
be found in Section 3.3.
3.7. MIDACO
The Mixed Integer Distributed Ant Colony Optimization (MIDACO) is a global opti-
mization solver that combines an extended evolutionary probabilistic technique, called
the Ant Colony Optimization algorithm, with the Oracle Penalty method for constraint handling (MIDACO-Solver, user manual 2021). Ant Colony Optimization is modelled on the behavior of ants finding the quickest path between their colony and the food source. Like the majority of evolutionary optimization algorithms, MIDACO considers the objective and constraint functions as black-box functions. MIDACO was created in a collaboration between the European Space Agency and EADS Astrium to solve constrained mixed-integer nonlinear programming (MINLP) space applications (Schlueter et al. 2013).
We use the trial version of MIDACO, in conjunction with the MATLAB interface. The
trial version has a limitation, namely, it doesn’t support more than four variables per
problem. The solver can be downloaded from (MIDACO-Solver, user manual 2021).
3.8. MMA
The Method of Moving Asymptotes (MMA) solves a nonlinear problem by generating a sequence of approximate convex subproblems. The convex functions used in the subproblems are chosen using gradient information at the current iteration point, together with parameters, called the moving asymptotes, that are updated at each iteration stage. The
subproblem is solved at the current iteration point, and the solution is used as the
next iteration point. Similarly, a new subproblem is generated at this new iteration
point, which again is solved to create the next iteration point (Svanberg 1987). The
MMA algorithm is fully described in (Svanberg 2007), and the MATLAB code is freely
available at (Svanberg 2020).
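To illustrate the construction, the sketch below builds the basic MMA approximation of a one-variable function around the current iterate, in a simplified form that omits the small positive safeguard terms of the full method; the asymptote values and the test function are illustrative assumptions (see (Svanberg 1987, 2007) for the complete formulation).

% Simplified MMA approximation of f around the current iterate xk:
%   fapp(x) = r + p/(U - x) + q/(x - L),
% with p, q built from the sign of the gradient (basic form, Svanberg 1987).
f   = @(x) x.^3 - 3*x;                 % illustrative objective
df  = @(x) 3*x.^2 - 3;                 % its gradient
xk  = 2;                               % current iteration point
L   = xk - 1.5;  U = xk + 1.5;         % assumed moving asymptotes

g   = df(xk);
p   = (U - xk)^2 * max(g, 0);          % weight of the 1/(U - x) branch
q   = -(xk - L)^2 * min(g, 0);         % weight of the 1/(x - L) branch
r   = f(xk) - p/(U - xk) - q/(xk - L); % match the function value at xk

fapp = @(x) r + p./(U - x) + q./(x - L);   % convex approximation on (L, U)
fprintf('f(xk) = %g, fapp(xk) = %g\n', f(xk), fapp(xk));   % identical at xk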
3.9. MQA
The Modified Quasilinearization Algorithm (MQA) is the modified version of the Stan-
dard Quasilinearization Algorithm (SQA) (Eloe and Jonnalagadda 2019; Yeo 1974)
described below. These quasilinearization algorithms base their solution search on the
linear approximation of the NLP, namely, on the Hessian matrix and gradient of the
objective and constraint functions. Ultimately, the goal is the progressive reduction
of the performance index. For unconstrained NLP problems, the performance index is
defined as Q̃ = fxT fx , where fx is the gradient of the objective function. On the other
hand, for constrained NLP problems the performance index is defined as R̃ = P̃ + Q̃,
which comprises both the feasibility index P̃ = hT h, and optimality index Q̃ = FxT Fx ,
with F = f +λT h, where f is the objective function, h is the constraint function, and λ
is the vector of Lagrange multipliers associated with the constraint function. Convergence
to the desired solution is achieved when the performance index Q̃ ≤ ε1 or R̃ ≤ ε2 , with
ε1 and ε2 small preselected positive constants, for the unconstrained and constrained
case respectively (Miele and Iyer 1971; Miele, Mangiavacchi, and Aggarwal 1974). Un-
like SQA, which is characterized by a unitary step size, MQA progressively reduces the step size 0 < α < 1 to enforce an improvement in optimality. In turn, the main advantage of MQA over SQA is its descent property: if the step size α is sufficiently small, a reduction in the performance index is guaranteed. It must be pointed
out that the MQA for NLP problems can only treat equality constraints. Therefore,
in our implementation, all the inequality constraints are converted into equality con-
straints by introducing the slack variables. We have implemented the algorithm on
MATLAB in order to solve both unconstrained and constrained NLP problems, based
on (Miele and Iyer 1971; Miele, Mangiavacchi, and Aggarwal 1974).
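The sketch below evaluates these performance indices on an illustrative equality-constrained problem using a finite-difference gradient; the problem data, the candidate multiplier, and the tolerance are assumptions made for the example.

% Performance indices used by MQA and SQA (and SGRA): feasibility P = h'*h,
% optimality Q = Fx'*Fx with F = f + lambda'*h, and R = P + Q.
% Illustrative problem: min x1^2 + x2^2  s.t.  h(x) = x1 + x2 - 2 = 0.
f   = @(x) x(1)^2 + x(2)^2;
h   = @(x) x(1) + x(2) - 2;
x   = [1; 1];                        % candidate solution
lam = -2;                            % candidate Lagrange multiplier

F   = @(x) f(x) + lam.'*h(x);        % augmented function F = f + lambda'*h
e   = 1e-6;  I = eye(numel(x));
Fx  = arrayfun(@(i) (F(x + e*I(:,i)) - F(x))/e, 1:numel(x)).';  % numerical grad

P   = h(x).'*h(x);                   % feasibility index  (P tilde)
Q   = Fx.'*Fx;                       % optimality index   (Q tilde)
R   = P + Q;                         % combined index     (R tilde)
converged = (R <= 1e-8);             % compare with the tolerance eps_2
fprintf('P = %.2e, Q = %.2e, R = %.2e, converged = %d\n', P, Q, R, converged);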
3.10. PENLAB
PENLAB is a free open source software package implemented in MATLAB for nonlinear
optimization, linear and nonlinear semidefinite optimization and any combination of
these. It derives from PENNON, the original implementation of the algorithm which
is not open source (Fiala, Kočvara, and Stingl 2013). Originally, PENNON was an
implementation of the PBM method developed by Ben-Tal and Zibulevsky for problems
of structural optimization, that has grown into a stand alone program for solving
general problems (Kocvara and Stingl 2003). It is based on a generalized Augmented
Lagrangian method pioneered by R. Polyak (Polyak 1992). PENLAB can be freely
downloaded from (Kocvara 2017).
3.11. SGRA
The Sequential Gradient-Restoration Algorithm (SGRA) is a first order nonlinear pro-
gramming solver developed by Angelo Miele and his research group in 1969 (Coker
1985; Miele, Huang, and Heideman 1969). It is based on a cyclical scheme whereby,
first, the constraints are satisfied to a prescribed accuracy (restoration phase); then,
using a first-order gradient method, a step is taken toward the optimal direction to
improve the performance index (gradient phase). The performance index is defined
as R̃ = P̃ + Q̃, which includes both the feasibility index P̃ = hT h, and optimality
index Q̃ = FxT Fx , with F = f + λT h, where f is the objective function, h is the con-
straint function, and λ is the vector of Lagrange multipliers associated with the constraint
function. Convergence is achieved when the constraint error, and the optimality con-
dition error are P̃ ≤ ε1 , Q̃ ≤ ε2 , respectively, with ε1 , ε2 small preselected positive
constants. It must be pointed out that the SGRA for NLP problems can only treat
equality constraints. Therefore, in our implementation, all the inequality constraints
are converted into equality constraints by introducing the slack variables. We have
programmed the algorithm on MATLAB in order to solve both unconstrained and
constrained NLP problems, based on (Miele, Huang, and Heideman 1969). The SGRA
version used to solve unconstrained NLP problems differs from the original formulation
by the omission of the restoration phase in the iterative process.
3.12. SNOPT
The Sparse Nonlinear OPTimizer (SNOPT) is a commercial software package for solv-
ing large-scale optimization problems, linear and nonlinear programs. It minimizes a
linear or nonlinear function subject to bounds on the variables and sparse linear or non-
linear constraints. SNOPT implements a sequential quadratic programming method
for solving constrained optimization problems with functions and gradients that are
expensive to evaluate, and with smooth nonlinear functions in the objective and con-
straints (Gill et al. 2001). SNOPT is implemented in Fortran 77 and distributed as
source code. In this paper, we use the free trial version of the software in conjunction
with the MATLAB interface, that can be retrieved at (Gill et al. 2018).
3.13. SOLNP
SOLNP was originally implemented in MATLAB to solve general nonlinear programming
problems, characterized by nonlinear smooth functions in the objective and constraints
(Ye 1989). Inequality constraints are converted into equality constraints by means of
slack variables. The major iteration of SOLNP solves a linearly constrained optimiza-
tion problem with an augmented Lagrangian objective function. Within the major
iteration, the first step checks whether the solution is feasible for the linearized equality constraints; if it is not, an interior linear programming procedure is called to find an interior feasible (or near-feasible) solution. Subsequently, a sequential quadratic programming (QP) step solves the linearly constrained problem. If the QP solu-
tion is both feasible and optimal, the algorithm stops, otherwise it solves another QP
problem as minor iteration. Both major and minor processes repeat until the optimal
solution is found or the user-specified maximum number of iterations is reached (Ye
1989). The SOLNP module in MATLAB can be freely downloaded from (Ye 2020).
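To give a flavour of the major iteration, the sketch below forms a generic augmented Lagrangian for a single equality constraint and performs one multiplier update; the penalty form, the inner minimizer, and the data are illustrative assumptions and do not reproduce SOLNP's exact internal formulation.

% Generic augmented Lagrangian objective for an equality-constrained problem,
% of the kind minimized within each SOLNP major iteration (illustrative form).
f   = @(x) (x(1) - 1)^2 + (x(2) - 2)^2;   % objective
g   = @(x) x(1) + x(2) - 1;               % equality constraint g(x) = 0
lam = 0;  rho = 1;                        % multiplier and penalty (cf. Table 11)

La  = @(x) f(x) - lam*g(x) + (rho/2)*g(x)^2;   % augmented Lagrangian

% One outer step: minimize La (here with fminsearch for simplicity),
% then update the multiplier estimate with the usual first-order rule.
x    = fminsearch(La, [0; 0]);
lam  = lam - rho*g(x);                    % multiplier update (assumed sign convention)
fprintf('x = [%g %g], constraint residual = %.2e\n', x, g(x));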
3.14. SQA
The Standard Quasilinearization Algorithm (SQA) is the standard version of the quasilinearization approach: it solves the nonlinear problem by generating a sequence of solutions of linearized problems (Eloe and Jonnalagadda 2019; Yeo 1974). SQA differs from MQA in the value of the step size α, which is unitary for SQA. As mentioned before, the SQA
can only treat equality constraints. Therefore, in our implementation, all the inequality
constraints are converted into equality constraints by introducing the slack variables.
In this section we describe the convergence metrics considered in our analysis and the key implementation steps for each solver.
assessment of the convergence speed, we will require the solver to repeat the same search
several times and average out the total CPU time. As a result, given N benchmark test
functions, M solvers/algorithms, K randomly generated initial guesses, and Z repeated
identical search runs, a total of N × M × K × Z runs have been executed.
The following performance metrics are considered:
• Mean error [%]:

Ē_m = (1/N) Σ_{n=1}^{N} Ē_n,    Ē_n = (1/K) Σ_{k=1}^{K} E_k,    E_k = 100 |f(x) − f(x*)| / max(|f(x*)|, 0.001),    (12)
with f(x) the benchmark test function evaluated at the numerical solution x provided by the solver, f(x*) the benchmark test function evaluated at the optimal solution x*, E_k the error associated with the run from the k-th randomly generated initial guess, Ē_n the mean error associated with the n-th benchmark test function, and Ē_m the mean error delivered by the m-th solver. The particular choice of the denominator of E_k accounts for the fact that some benchmark test functions have zero value at the optimal solution; in those cases, a value of 0.001 is chosen as reference value instead.
• Mean variance [%]:
σ̄_m = (1/N) Σ_{n=1}^{N} σ_n,    σ_n = (1/(K − 1)) Σ_{k=1}^{K} (E_k − Ē_n)^2,    (13)
where σ_n is the variance corresponding to the n-th benchmark test function, and σ̄_m the mean variance delivered by the m-th solver.
• Mean convergence rate [%]:
γ̄_m = (1/N) Σ_{n=1}^{N} γ_n,    γ_n = 100 K_conv / K,    (14)
with Kconv the number of runs (from a pool of K distinct initial guesses) which
successfully reach convergence for the n-th function, γn the convergence rate for
the n-th function, and γ̄m the mean convergence rate delivered by the m-th solver.
A run is considered successful if it satisfies the convergence threshold conditions E_k ≤ E_max = 5% and CPU_k ≤ CPU_max = 10 s, with CPU_k the CPU time required by the run starting from the k-th initial guess.
• Mean CPU time [s]:
CPU_m = (1/N) Σ_{n=1}^{N} CPU_n,    (15)

CPU_n = (1/Z) Σ_{z=1}^{Z} CPU_z,    CPU_z = (1/K) Σ_{k=1}^{K} CPU_k,    (16)
where CPU_z is the mean CPU time of the z-th repetition, CPU_n is the mean CPU time related to the n-th benchmark test function, and CPU_m is the mean CPU time delivered by the m-th solver.
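For completeness, the following sketch computes the four metrics for a single solver from a hypothetical results array; the array layout and the sample data are assumptions, while the error threshold E_max = 5% follows the definition above.

% Metrics (12)-(16) for one solver, from hypothetical run data.
% E{n}(k): percentage error of run k on test function n, as in Eq. (12);
% T(n,z,k): CPU time of repetition z, run k, function n.
N = 2;  K = 3;  Z = 2;
E = {[0.1 4.0 20.0], [0.5 0.7 1.2]};          % sample errors [%] (assumed)
T = rand(N, Z, K);                            % sample CPU times [s] (assumed)
Emax = 5;

Ebar_n = cellfun(@mean, E);                   % mean error per function
Ebar_m = mean(Ebar_n);                        % Eq. (12)
sig_n  = cellfun(@var,  E);                   % sample variance, 1/(K-1), Eq. (13)
sig_m  = mean(sig_n);
CPU_z  = mean(T, 3);                          % mean over the K initial guesses
CPU_n  = mean(CPU_z, 2);                      % mean over the Z repetitions, Eq. (16)
CPU_m  = mean(CPU_n);                         % Eq. (15)
Kconv  = cellfun(@(e) sum(e <= Emax), E);     % runs meeting the error threshold
% (the additional CPU-time condition, CPU_k <= 10 s, is omitted for brevity)
gam_n  = 100*Kconv/K;                         % convergence rate, Eq. (14)
gam_m  = mean(gam_n);
fprintf('E = %.2f%%, var = %.2f, rate = %.1f%%, CPU = %.3f s\n', ...
        Ebar_m, sig_m, gam_m, CPU_m);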
4.2.1. APSO
The three settings considered in the analysis are reported in Table 1, where no.
particles is the number of particles, no. iterations is the total number of iterations,
and γ is a control parameter that multiplies, at each iteration, the learning parameter α. The two learning parameters (or acceleration constants) α and β control the random amplitude of the roaming particles and the speed of convergence, respectively. APSO also requires the number of problem
variables, no. vars, to be defined but this parameter is, obviously, invariant for the
three settings.
Table 1. APSO settings.
Settings P&P HA QS
no. particles 15 50 10
no. iterations 300 500 100
γ 0.9 0.95 0.95
4.2.2. BARON
The three settings considered in the analysis are reported in Table 2, with EpsA
the absolute termination tolerance, EpsR the relative termination tolerance, and
AbsConFeasTol the absolute constraint feasibility tolerance. Due to the limitations of
the trial version of the solver, trigonometric functions and problems with more than ten
variables are not supported by the solver; for this reason, the following test functions
are excluded in the analysis: A.2, A.3, A.4, A.5, A.7, A.11, A.13, A.14, A.16, A.17,
A.18, A.22, A.24, A.26 for unconstrained problems, and B.1, B.2, B.5, B.8, B.20 for
constrained problems.
Table 2. BARON settings.
Settings P&P HA QS
4.2.3. FMINCON/FMINUNC
The three settings considered in the analysis are reported in Table 3, with
StepTolerance the lower bound on the size of a step, ConstraintTolerance the upper bound on the magnitude of any constraint function, FunctionTolerance the lower bound on the change in the value of the objective function during a step, and OptimalityTolerance the tolerance for the first-order optimality measure.
Table 3. FMINCON/FMINUNC settings.
Settings P&P HA QS
FMINCON
StepTolerance 1e-10 1e-10 1e-6
ConstraintTolerance 1e-6 1e-10 1e-3
FunctionTolerance 1e-6 1e-10 1e-3
OptimalityTolerance 1e-6 1e-10 1e-3
FMINUNC (quasi-newton)
StepTolerance 1e-6 1e-12 1e-6
FunctionTolerance 1e-6 1e-12 1e-3
OptimalityTolerance 1e-6 1e-12 1e-3
FMINUNC (trust-region)
StepTolerance 1e-6 1e-12 1e-6
FunctionTolerance 1e-6 1e-12 1e-3
OptimalityTolerance 1e-6 1e-6 1e-3
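As an example, the high accuracy (HA) column of Table 3 can be passed to the MATLAB solvers through optimoptions as sketched below; only the construction of the option structures is shown, the solver call being analogous to the example in Section 3.3.

% FMINCON options corresponding to the HA column of Table 3.
optsHA = optimoptions('fmincon', ...
    'StepTolerance',        1e-10, ...
    'ConstraintTolerance',  1e-10, ...
    'FunctionTolerance',    1e-10, ...
    'OptimalityTolerance',  1e-10);

% FMINUNC (quasi-newton) options, HA column of Table 3.
optsHAunc = optimoptions('fminunc', ...
    'Algorithm',            'quasi-newton', ...
    'StepTolerance',        1e-12, ...
    'FunctionTolerance',    1e-12, ...
    'OptimalityTolerance',  1e-12);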
4.2.4. GCMMA/MMA
The three settings considered in the analysis are reported in Table 4, where
epsimin is a prescribed small positive tolerance that terminates the algorithm, whereas
maxoutit is the maximum number of iterations for MMA, and the maximum number
of outer iterations for GCMMA.
Table 4. GCMMA/MMA settings.
Settings P&P HA QS
4.2.5. KNITRO
The three settings considered in the analysis are reported in Table 5, where MaxIter is the maximum number of iterations before termination, TolX is a tolerance that terminates the optimization process if the relative change of the solution point estimate is less than that value, TolFun specifies the final relative stopping tolerance for the KKT (optimality) error, and TolCon specifies the final relative stopping tolerance for
the feasibility error.
Table 5. KNITRO settings.
Settings P&P HA QS
4.2.6. MIDACO
The three settings considered in the analysis are reported in Table 6, where maxeval
is the maximum number of function evaluations; a distinctive feature of MIDACO is that the solver stops exactly after that number of function evaluations. Due
to the limitations of the trial version of the solver, test functions with more than four
variables are not supported by the solver; for this reason, the following test functions
are excluded in the analysis: A.7, A.10, A.11, A.13, A.14, A.15, A.16, A.18, A.22, A.24
for unconstrained problems, and B.1, B.2, B.3, B.4, B.7, B.9, B.10, B.12, B.18, B.19,
B.20, B.21, B.22, B.26, B.29, B.30 for constrained problems.
Table 6. MIDACO settings.
Settings P&P HA QS
4.2.7. MQA
The three settings considered in the analysis are reported in Table 7, with ε1 and
ε2 the prescribed small positive tolerances that allow the solver to stop, when the
inequality Q̃ ≤ ε1 or R̃ ≤ ε2 is met. As mentioned in Section 3.9, MQA for NLP
problems can only treat equality constraints, namely all the inequality constraints are
converted into equality constraints by introducing the slack variables. In this study, for
all the three settings considered in the analysis, a value of 1 is chosen as initial guess
for all the slack variables.
Table 7. MQA settings.
Settings P&P HA QS
4.2.8. PENLAB
The three settings considered in the analysis are reported in Table 8, where
max_inner_iter is the maximum number of inner iterations, max_outer_iter
is the maximum number of outer iterations, mpenalty_min is the lower bound
for penalty parameters, inner_stop_limit is the termination tolerance for the in-
ner iterations, outer_stop_limit is the termination tolerance for the outer itera-
tions, kkt_stop_limit is the termination tolerance for the KKT optimality conditions, and
unc_dir_stop_limit is the stopping tolerance for the unconstrained minimization.
Table 8. PENLAB settings.
Settings P&P HA QS
4.2.9. SGRA
The three settings considered in the analysis are reported in Table 9, with ε1 the
tolerance related to the constraint error P̃ , and ε2 the tolerance related to the optimal-
ity condition error Q̃. Considering that the SGRA can only treat equality constraints,
all the inequality constraints are converted into equality constraints by introducing the
slack variables. In this study, for all the three settings considered in the analysis, a
value of 1 is chosen for all the slack variables.
Table 9. SGRA settings.
Settings P&P HA QS
4.2.10. SNOPT
The three settings considered in the analysis are reported in Table 10, where
major_iterations_limit is the limit on the number of major iterations in the SQP
method, minor_iterations_limit is the limit on minor iterations in the QP subprob-
lems, major_feasibility_tolerance is the tolerance for feasibility of the nonlinear
constraints, major_optimality_tolerance is the tolerance for the dual variables, and
minor_feasibility_tolerance is the tolerance for the variables and their bounds.
Table 10. SNOPT settings.
Settings P&P HA QS
4.2.11. SOLNP
The three settings considered in the analysis are reported in Table 11, with ρ the
penalty parameter in the augmented Lagrangian objective function, maj the maximum
number of major iterations, min the maximum number of minor iterations, δ the
perturbation parameter for numerical gradient calculation, and ε the relative tolerance
on optimality and feasibility. During the HA scenario implementation, we learned that
different convergence settings are required for unconstrained and constrained problems.
This peculiarity might be induced by the stringent tolerances adopted in this scenario.
Table 11. SOLNP settings. Tuning values for the HA scenario are divided for unconstrained (left-side) and
constrained (right-side) problems.
Settings P&P HA QS
ρ 1 1 1
maj 10 500|10 10
min 10 500|10 10
δ 1e-5 1e-10|1e-6 1e-3
ε 1e-4 1e-12|1e-7 1e-3
4.2.12. SQA
The three settings considered in the analysis are reported in Table 12, with ε1 and
ε2 the prescribed small positive tolerances that allow the solver to stop, when the
inequality Q̃ ≤ ε1 or R̃ ≤ ε2 is met. As mentioned earlier, SQA can only treat equality
constraints. To overcome this limitation, the inequality constraints are converted into
equality constraints by introducing slack variables. In this study, for all the three
settings considered in the analysis, a value of 1 is chosen for all the slack variables.
Table 12. SQA settings.
Settings P&P HA QS
5. Benchmark test functions and results
average convergence time. MIDACO is now able to reach the highest convergence rate
together with all the versions of BARON. Overall, PENLAB is the solver which delivers a good trade-off in performance. With respect to the P&P settings, SOLNP significantly improves its convergence rate, whereas SGRA only slightly increases its performance. It is interesting to observe that KNITRO (interior-point) and KNITRO (sqp), besides improving their convergence rate, also increase their mean error and variance. Despite our efforts, we are not able to explain this unexpected behaviour. Regarding the QS settings, Table 15, generally all the solvers reduce their convergence time and also decrease their convergence rate, except for BARON (auto), BARON (ipopt), and BARON (sqp), which remain unaltered. SQA, SOLNP, and both FMINUNC algorithms are amongst the fastest to reach the solution, but their convergence rate is quite low. In addition, in contrast to all the other solvers, which experience a smaller CPU time, BARON is not always able to achieve a faster CPU time with respect to the P&P settings. The same happens to SGRA, probably due to its intrinsic iterative nature.
Table 13. All unconstrained problems, plug and play (P&P) settings. Solvers ranked w.r.t. convergence rate.
Table 14. All unconstrained problems, high accuracy (HA) settings. Solvers ranked w.r.t. mean error.
Table 15. All unconstrained problems, quick solution (QS) settings. Solvers ranked w.r.t. mean CPU time.
is able to achieve the second best convergence rate, with an average CPU time that
is more than 50% faster than BARON. PENLAB obtains the best mean error and
variance, but this performance is tempered by a low convergence rate, shared with SGRA, MQA, and SQA, which are also quite slow to reach the solution. FMINCON
(interior-point), KNITRO (interior-point), and SNOPT reach a convergence rate lower
than BARON and MIDACO, but they are significantly faster. Regarding the HA set-
tings, Table 17, similar considerations can be made for BARON, but in this case its CPU time increases considerably. MIDACO shows an improvement in the conver-
gence rate, reaching values very similar to BARON. PENLAB still obtains the best
mean error and variance, but it has one of the lowest convergence rates, together with SGRA. In general, most of the solvers increase their convergence rate and decrease
their mean error, except for GCMMA and PENLAB. Regarding the QS settings, Ta-
ble 18, generally all the solvers decrease their convergence rate except for BARON
and PENLAB. The same considerations made for BARON and PENLAB in the two previous scenarios apply here. MIDACO reports a significant decrease in the convergence
rate. The different versions of BARON have similar CPU time with respect to the P&P
settings. FMINCON (interior-point), KNITRO (interior-point), and SNOPT reach a
convergence rate lower than BARON, but they are significantly faster. The worst re-
sults in terms of convergence rate and CPU time are obtained by MQA and SQA.
Table 16. All constrained problems, plug and play (P&P) settings. Solvers ranked w.r.t. convergence rate.
Table 17. All constrained problems, high accuracy (HA) settings. Solvers ranked w.r.t. mean error.
Table 18. All constrained problems, quick solution (QS) settings. Solvers ranked w.r.t. mean CPU time.
6. Conclusions
In this paper we provide an explicit comparison of a set of NLP solvers. The compar-
ison includes popular solvers which are readily available in MATLAB, a few gradient
descent methods that have been extensively used in literature, and a particle swarm
optimization. Because of its widespread use among research groups, both in academia
and private sector, we have used MATLAB as common implementation platform. Con-
strained and unconstrained NLP problems have been selected amongst the standard
benchmark problems with up to thirty variables and up to nine scalar constraints. Results for the unconstrained problems show that BARON is the algorithm that delivers the best convergence rate and accuracy, but it is the slowest. PENLAB is the algorithm with the best trade-off between accuracy, convergence rate, and speed. For the constrained NLP problems, again, BARON is the solver which delivers excellent accuracy and convergence rate but is amongst the slowest. FMINCON, KNITRO, SNOPT, and MIDACO are the ones able to deliver a fair compromise of accuracy, convergence rate, and speed.
Disclosure statement
The authors declare that they have no known competing financial interests or personal
relationships that could have appeared to influence the work reported in this paper.
Funding
References
Box, M. J. 1966. “A comparison of several current optimization methods, and the use of
transformations in constrained problems.” The Computer Journal 9 (1): 67–77.
Boyd, S., and L. Vandenberghe. 2004. Convex Optimiza-
tion. Cambridge, United Kingdom: Cambridge University Press.
https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf.
Charalambous, C. 1979. “Acceleration of the Least pth Algorithm for MiniMax Optimization
with Engineering Applications.” Mathematical Programming 17: 270–297.
COIN-OR. 2016. “Computational Optimization Infrastructure for Operations Research.”
https://www.coin-or.org/.
COIN-OR. 2021. “IPOPT.” https://coin-or.github.io/Ipopt/.
Coker, E. M. 1985. “Sequential gradient-restoration algorithm for optimal control problems with control inequality constraints and general boundary conditions.” PhD diss., Rice University.
https://www.proquest.com/dissertations-theses/sequential-gradient-restoration-algorithm-optimal
Eloe, P. W., and J. Jonnalagadda. 2019. “Quasilinearization and boundary value problems
for Riemann-Liouville fractional differential equations.” Electron. J. Differential Equations
2019 (58): 1–15. https://ejde.math.txstate.edu/Volumes/2019/58/eloe.pdf.
Fiala, Jan, Michal Kočvara, and Michael Stingl. 2013. “PENLAB: A MATLAB solver for
nonlinear semidefinite optimization.” https://arxiv.org/abs/1311.5240.
FilterSD. 2020. “Computational Infrastructure for Operations Research, COIN-OR project.”
https://projects.coin-or.org/filterSD/export/19/trunk/filterSD.pdf.
Firm, The Optimization. 2021. “Analytics and Optimization Software.”
https://minlp.com/baron-downloads.
Fletcher, R., and S. Leyffer. 1999. User manual for filterSQP. Technical Report.
University of Dundee, Department of Mathematics, Dundee, Scotland, U.K.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.139.7769&rep=rep1&type=pdf.
Floudas, C. A., P. M. Pardalos, et al. 1999. “Handbook of Test Problems in Local and
Global Optimization.” In Nonconvex Optimization and Its Applications, Vol. 33. Dordrecht,
The Netherlands: Kluwer Academic Publishers.
Frank, P. D., and G. R. Shubin. 1992. “A Comparison of Optimization-Based Approaches for
a Model Computational Aerodynamics Design Problem.” Journal of Computational Physics
98 (1): 74–89.
Gearhart, J. L., K. L. Adair, R. J. Detry, J. D. Durfee, K. A. Jones, and N. Martin. 2013.
Comparison of Open-Source Linear Programming Solvers. Technical Report. Sandia Na-
tional Laboratory.
George, G., and K. Raimond. 2013. “A Survey on Optimization Algorithms for Optimizing
the Numerical Functions.” International Journal of Computer Applications 61: 41–46.
https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.303.5096&rep=rep1&type=pdf.
Gill, Philip, Walter Murray, Michael Saunders, Arne Drud, and Erwin Kalvelagen. 2001.
“SNOPT: An SQP algorithm for large-scale constrained optimization.” SIAM Review 47.
Gill, Philip E., Walter Murray, Michael A. Saunders, and Elizabeth Wong. 2018. User’s Guide
for SNOPT 7.7: Software for Large-Scale Nonlinear Programming. Center for Computa-
tional Mathematics Report CCoM 18-1. La Jolla, CA: Department of Mathematics, Uni-
versity of California, San Diego. https://ccom.ucsd.edu/~optimizers/downloads/.
Grossmann, I. E. 1996. Global Optimization in Engineering Design. Berlin/Heidelberg, Ger-
many: Springer-Science+Business, B.V.
Grossmann, I. E., and Zd. Kravanja. 1997. “Mixed-Integer Nonlinear Programming: A Survey
of Algorithms and Applications.” In Biegler L.T., Coleman T.F., Conn A.R., Santosa F.N.
(eds) Large-Scale Optimization with Applications, Vol. 93, 73–100. New York, NY: Springer.
Hamdy, M., A. Nguyen, and J. L. Hensen. 2016. “A Performance Comparison of Multi-objective
Optimization Algorithms for Solving Nearly-zero-energy-building Design Problems.” Energy
and Buildings 121: 57–71.
Haupt, Randy. 1995. “Comparison Between Genetic and Gradient-Based Optimization Al-
gorithms for Solving Electromagnetics Problems.” IEEE Transactions on Magnetics 31:
1932–1935.
Hedar, Abdel Rahman. 2020. “GLOBAL OPTIMIZATION TEST PROBLEMS.”
http://www-optima.amp.i.kyoto-u.ac.jp/member/student/hedar/Hedar_files/TestGO.htm.
Karaboga, D., and B. Basturk. 2008. “On the Performance of Artificial Bee Colony (ABC)
Algorithm.” Applied Soft Computing 8 (1): 687–697.
Knitro, Artelys. 2021a. Artelys Knitro User’s Manual.
https://www.artelys.com/docs/knitro//index.html.
Knitro, Artelys. 2021b. “Artelys Optimization Solutions.”
https://www.artelys.com/solvers/knitro/.
Kocvara, Michal. 2017. “PENLAB.” http://web.mat.bham.ac.uk/kocvara/penlab/.
Kocvara, Michal, and Michael Stingl. 2003. “PENNON - A generalized augmented Lagrangian
method for semidefinite programming.” In High Performance Algorithms and Software for
Nonlinear Optimization, edited by Almerico Murli Gianni Di Pillo, Vol. 82 of Applied Op-
timization, 303–321.
Kronqvist, J., D. E. Bernal, A. Lundell, and I. E. Grossmann. 2018. “A Review and Comparison
of Solvers for Convex MINLP.” Optimization and Engineering 20: 397–455.
Lasdon, L. A., and A. D. Warren. 1980. “Survey of Nonlinear Programming Applications.”
Journal of Operations Research Society of America 28 (5): 1029–1073.
Levy, A. V., and V. Guerra. 1976. On the Optimization of Constrained Functions: Comparison
of Sequential Gradient-Restoration Algorithm and Gradient-Projection Algorithm. Amster-
dam, The Netherlands: American Elsevier Publishing Company.
MathWorks. 2020a. “Fmincon.” https://www.mathworks.com/help/optim/ug/fmincon.html#busp5fq-6.
MathWorks. 2020b. “Fminunc.” https://www.mathworks.com/help/optim/ug/fminunc.html#but9q82-2_head.
MathWorks. 2020c. “Quasi-Newton algorithm.” https://www.mathworks.com/help/optim/ug/unconstrained-nonli
MATLAB. 2020. “The MathWorks Inc.” Natick, MA, USA.
https://www.mathworks.com/products/matlab.html.
McIlhagga, M., P. Husbands, and R. Ives. 1996. “A Comparison of Optimization Techniques for
Integrated Manufacturing Planning and Scheduling.” In Voigt HM., Ebeling W., Rechenberg
I., Schwefel HP. (eds) Parallel Problem Solving from Nature, Vol. 1141, 604–613. Berlin,
Heidelberg: Springer.
MIDACO-Solver, user manual. 2021. MIDACO-SOLVER: Numerical High-Performance Opti-
mization Software. http://www.midaco-solver.com/index.php/download.
Miele, A., H. Y. Huang, and J. C. Heideman. 1969. “Sequential gradient-restoration algorithm
for the minimization of constrained functions—Ordinary and conjugate gradient versions.”
Journal of Optimization Theory and Applications 4 (4): 213–243.
Miele, A., and R. R. Iyer. 1971. “Modified quasilinearization method for solving nonlinear,
two-point boundary-value problems.” Journal of Mathematical Analysis and Applications
36 (3): 674–692.
Miele, A., A. Mangiavacchi, and A. K. Aggarwal. 1974. “Modified quasilinearization algorithm
for optimal control problems with nondifferential constraints.” Journal of Optimization The-
ory and Applications 14 (5): 529–556.
Neumaier, A., O. Shcherbina, W. Huyer, and T. Vinko. 2005. “A Comparison of Complete
Global Optimization Solvers.” Mathematical Programming 103: 335–356.
Nocedal, Jorge, and Stephen J. Wright. 2006. Numerical Optimization.
Berlin/Heidelberg, Germany: Springer Science+Business Media, LLC.
https://link.springer.com/book/10.1007/978-0-387-40065-5.
Obayash, S., and T. Tsukahara. 1997. “Comparison of Optimization Algorithms for Aerody-
namic Shape Design.” AIAA Journal 35: 1413–1415.
Polyak, R.A. 1992. “Modified barrier functions (theory and methods).” Mathematical Program-
ming, Series B 54: 177–222.
Pucher, H., and V. Stix. 2008. “Comparison of Nonlinear Opti-
mization Methods on a Multinomial Logit-Model in R.” Interna-
tional Multi-Conference on Engineering and Technological Innovation
https://www.iiis.org/cds2009/cd2009sci/imeti2009/PapersPdf/F216BU.pdf.
Rustagi, J. 1994. Optimization Techniques in Statistics. Cambridge, Massachusetts: Academic
Press Limited.
Sahinidis, N. n.d. BARON user manual. The Optimization Firm LLC.
http://www.minlp.com/.
Saxena, P. 2012. “Comparison of Linear and Nonlinear Programming Techniques for Animal
Diet.” Applied Mathematics 1: 106–108.
Schittkowski, K. 2009. Test Examples for Nonlinear Pro-
gramming Codes. Technical Report. University of Bayreuth.
http://www.apmath.spbu.ru/cnsa/pdf/obzor/Schittkowski_Test_problem.pdf.
Schittkowski, K., C. Zillober, and R. Zotemantel. 1994. “Numerical Comparison of Nonlinear
Programming Algorithms for Structural Optimization.” Structural Optimization 7: 1–19.
Schlueter, M., S. O. Erb, M. Gerdts, S. Kemble, and J. Rückmann. 2013. “MIDACO on MINLP
space applications.” Advances in Space Research 51 (7): 1116–1131.
Svanberg, K. 1987. “The method of moving asymptotes - A new method for structural opti-
mization.” International Journal for Numerical Methods in Engineering 24: 359 – 373.
Svanberg, K. 2002. “A Class of Globally Convergent Optimization Methods Based on Conser-
vative Convex Separable Approximations.” SIAM Journal on Optimization 12: 555–573.
Svanberg, K. 2020. “MMA and GCMMA Matlab code.” http://www.smoptit.se/.
Svanberg, Krister. 2007. “MMA and GCMMA – two methods for nonlinear optimization.” vol
1: 1–15. https://people.kth.se/~krille/mmagcmma.pdf.
Tawarmalani, M., and N. V. Sahinidis. 2004. “Global Optimization of Mixed-integer Nonlinear
Programs: A Theoretical and Computational Study.” Mathematical Programming 99: 563–
591.
Wansuo, D., and L. Haiying. 2010. “A New Strategy for Solving a Class of Constrained Non-
linear Optimization Problems Related to Weather and Climate Predictability.” Advances in
Atmospheric Sciences 27: 741–749.
Wu, X., and S. L. William. 1992. “Assimilation of ERBE Data with a Nonlinear Programming
Technique to Improve Cloud-Cover Diagnosis.” American Meteorological Society 120: 2009–
2024.
Wächter, A., and L. Biegler. 2006. “On the Implementation of an Interior-Point Filter Line-
Search Algorithm for Large-Scale Nonlinear Programming.” Mathematical programming 106:
25–57.
Yang, X. 2014. Nature-inspired metaheuristic algorithms. United Kingdom: Luniver press.
Ye, Yinyu. 1989. SOLNP USERS’ GUIDE - A Nonlinear Optimization Program in MATLAB.
https://web.stanford.edu/~yyye/matlab/manual.ps.
Ye, Yinyu. 2020. “SOLNP.” https://web.stanford.edu/~yyye/matlab.html.
Yeo, B. P. 1974. “A quasilinearization algorithm and its application to a manipulator problem.”
International Journal of Control 20 (4): 623–640.
Yuan, G., K. Chang, C. Hsieh, and C. Lin. 2010. “A Comparison of Opti-
mization Methods and Software for Large-scale L1-regularized Linear Clas-
sification.” The Journal of Machine Learning Research 11: 3183–3234.
https://www.jmlr.org/papers/volume11/yuan10c/yuan10c.pdf.
Ziemba, W. T., and R. G. Vickson. 1975. Stochastic Optimization Models in Finance. Cam-
bridge, Massachusetts: Academic Press INC.
• Function:
Appendix A.7. Dixon & Price Function
• Dimension: 25;
• Domain: −10 ≤ xi ≤ 10;
• Function:
f(x) = (x1 − 1)^2 + Σ_{i=2}^{n} i (2x_i^2 − x_{i−1})^2;   (A7)
• Global minimum at x_i* = 2^{−(2^i − 2)/2^i}, i = 1, ..., n, f(x*) = 0.
• Function:
f(x) = Σ_{i=1}^{n/4} [ (x_{4i−3} + 10 x_{4i−2})^2 + 5 (x_{4i−1} − x_{4i})^2 + (x_{4i−2} − x_{4i−1})^2 + 10 (x_{4i−3} − x_{4i})^2 ];   (A11)
• Function:
f(x) = Σ_{i=1}^{n} (x_i − 1)^2 − Σ_{i=2}^{n} x_i x_{i−1};   (A15)
• Global minimum at x∗ = (0, 0), f (x∗ ) = −200.
• Function:
f(x) = −e^{−0.5 Σ_{i=1}^{n} x_i^2};   (A24)
Appendix A.29. Rump Function
• Dimension: 2;
• Domain: −500 ≤ xi ≤ 500;
• Function:
f(x) = (333.75 − x1^2) x2^6 + x1^2 (11 x1^2 x2^2 − 121 x2^4 − 2) + 5.5 x2^8 + x1/(2 x2);   (A29)
f(x) = (2/3) x1^3 − 8 x1^2 + 33 x1 − x1 x2 + 5 + ((x1 − 4)^2 + (x2 − 5)^2 − 4)^2;   (A30)
Appendix B.1.
• Dimension: 13;
• Domain and search space: 0 ≤ xi ≤ ui , with
u = (1, 1, 1, ..., 1, 100, 100, 100, 1);
• Function:
f(x) = 5 Σ_{i=1}^{4} x_i − 5 Σ_{i=1}^{4} x_i^2 − Σ_{i=5}^{n} x_i;   (B1)
• Constraints:
c1 (x) = 2x1 + 2x2 + x10 + x11 − 10 ≤ 0;
c2 (x) = 2x1 + 2x3 + x10 + x12 − 10 ≤ 0;
c3 (x) = 2x2 + 2x3 + x11 + x12 − 10 ≤ 0;
c4 (x) = −8x1 + x10 ≤ 0;
c5 (x) = −8x2 + x11 ≤ 0; (B2)
c6 (x) = −8x3 + x12 ≤ 0;
c7 (x) = −2x4 − x5 + x10 ≤ 0;
c8 (x) = −2x6 − x7 + x11 ≤ 0;
c9 (x) = −2x8 − x9 + x12 ≤ 0;
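As an illustration of how a benchmark case is passed to the MATLAB solvers, the inequality constraints (B2) can be coded in a nonlinear-constraint function of the following form (the function name and the FMINCON-style interface are our own illustrative choices, not taken from the paper).

function [c, ceq] = nonlconB1(x)
% Inequality constraints (B2) of benchmark problem B.1, in the c(x) <= 0
% form expected by FMINCON-like solvers.
c = [ 2*x(1) + 2*x(2) + x(10) + x(11) - 10;
      2*x(1) + 2*x(3) + x(10) + x(12) - 10;
      2*x(2) + 2*x(3) + x(11) + x(12) - 10;
     -8*x(1) + x(10);
     -8*x(2) + x(11);
     -8*x(3) + x(12);
     -2*x(4) - x(5) + x(10);
     -2*x(6) - x(7) + x(11);
     -2*x(8) - x(9) + x(12) ];
ceq = [];        % problem B.1 has no equality constraints
end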
Appendix B.2.
• Dimension: 20;
• Domain and search space: 0 ≤ xi ≤ 10;
• Function:
f(x) = − ( Σ_{i=1}^{n} cos^4(x_i) − 2 Π_{i=1}^{n} cos^2(x_i) ) / √( Σ_{i=1}^{n} i x_i^2 );   (B3)
• Constraints:
c1(x) = − Π_{i=1}^{n} x_i + 0.75 ≤ 0;
c2(x) = Σ_{i=1}^{n} x_i − 7.5 n ≤ 0;   (B4)
Appendix B.3.
• Dimension: 10;
• Domain and search space: 0 ≤ xi ≤ 1;
• Function:
f(x) = −(√n)^n Π_{i=1}^{n} x_i;   (B5)
• Constraints:
c1(x) = Σ_{i=1}^{n} x_i^2 − 1 = 0;   (B6)
• Global minimum at x* = ((1/10)^{0.5}, ..., (1/10)^{0.5}), f(x*) = −1.
Appendix B.4.
• Dimension: 5;
• Domain and search space: li ≤ xi ≤ ui , with l = (78, 33, 27, 27, 27)
and u = (102, 45, 45, 45, 45);
• Function:
• Constraints:
c1 (x) = −u ≤ 0;
c2 (x) = u − 92 ≤ 0;
c3 (x) = −v + 90 ≤ 0;
c4 (x) = v − 110 ≤ 0;
c5 (x) = −w + 20 ≤ 0; (B8)
c6 (x) = w − 25 ≤ 0;
u = 85.334407 + 0.0056858x2 x5 + 0.0006262x1 x4 − 0.0022053x3 x5 ;
v = 80.51249 + 0.0071317x2 x5 + 0.0029955x1 x2 + 0.0021813x23 ;
w = 9.300961 + 0.0047026x3 x5 + 0.0012547x1 x3 + 0.0019085x3 x4 ;
Appendix B.5.
• Dimension: 4;
• Domain and search space: li ≤ xi ≤ ui , with l = (0, 0, −0.55, −0.55)
and u = (1200, 1200, 0.55, 0.55);
• Function:
f(x) = 3 x1 + 10^{−6} x1^3 + 2 x2 + (2/3)·10^{−6} x2^3;   (B9)
• Constraints:
c1 (x) = x3 − x4 − 0.55 ≤ 0;
c2 (x) = x4 − x3 − 0.55 ≤ 0;
c3 (x) = 1000(sin(−x3 − 0.25) + sin(−x4 − 0.25)) + 894.8 − x1 = 0;
c4 (x) = 1000(sin(x3 − 0.25) + sin(x3 − x4 − 0.25)) + 894.8 − x2 = 0;
c5 (x) = 1000(sin(x4 − 0.25) + sin(x4 − x3 − 0.25)) + 1294.8 = 0;   (B10)
• Global minimum at x∗ = (679.9453, 1026, 0.118876, −0.3962336), f (x∗ ) =
5126.4981.
Appendix B.6.
• Dimension: 2;
• Domain and search space: li ≤ xi ≤ 100, with l = (13, 0);
• Function:
• Constraints:
c1 (x) = −(x1 − 5)2 − (x2 − 5)2 + 100 ≤ 0;
(B12)
c2 (x) = (x1 − 6)2 + (x2 − 5)2 − 82.81 ≤ 0;
• Global minimum at x∗ = (14.095, 0.84296), f (x∗ ) = −6961.81388.
Appendix B.7.
• Dimension: 10;
• Domain and search space: −10 ≤ xi ≤ 10;
• Function:
f(x) = 2(x6 − 1)^2 − 16 x2 − 14 x1 + (x5 − 3)^2 + 4(x4 − 5)^2 + (x3 − 10)^2 + (x10 − 7)^2 + 7(x8 − 11)^2 + 2(x9 − 10)^2 + x1 x2 + x1^2 + x2^2 + 5 x7^2 + 45;   (B13)
• Constraints:
c1(x) = 4 x1 + 5 x2 − 3 x7 + 9 x8 − 105 ≤ 0;
c2(x) = 10 x1 − 8 x2 − 17 x7 + 2 x8 ≤ 0;
c3(x) = −8 x1 + 2 x2 + 5 x9 − 2 x10 − 12 ≤ 0;
c4(x) = 3(x1 − 2)^2 + 4(x2 − 3)^2 + 2 x3^2 − 7 x4 − 120 ≤ 0;
c5(x) = 5 x1^2 + 8 x2 + (x3 − 6)^2 − 2 x4 − 40 ≤ 0;   (B14)
c6(x) = 0.5(x1 − 8)^2 + 2(x2 − 4)^2 + 3 x5^2 − x6 − 30 ≤ 0;
c7(x) = x1^2 + 2(x2 − 2)^2 − 2 x1 x2 + 14 x5 − 6 x6 ≤ 0;
c8(x) = −3 x1 + 6 x2 + 12(x9 − 8)^2 − 7 x10 ≤ 0;
Appendix B.8.
• Dimension: 2;
• Domain and search space: 0 ≤ xi ≤ 10;
• Function:
f(x) = − sin^3(2πx1) sin(2πx2) / (x1^3 (x1 + x2));   (B15)
• Constraints:
c1 (x) = x21 − x2 + 1 ≤ 0;
(B16)
c2 (x) = 1 − x1 + (x2 − 4)2 ≤ 0;
Appendix B.9.
• Dimension: 7;
• Domain and search space: −10 ≤ xi ≤ 10;
• Function:
• Constraints:
Appendix B.10.
• Dimension: 8;
• Domain and search space: li ≤ xi ≤ ui , with l = 10(10, 100, 100, 1, 1, 1, 1, 1) and
u = 1000(10, 10, 10, 1, 1, 1, 1, 1);
• Function:
f (x) = x1 + x2 + x3 ; (B19)
• Constraints:
c1(x) = −1 + 0.0025(x4 + x6) ≤ 0;
c2(x) = −1 + 0.0025(−x4 + x5 + x7) ≤ 0;
c3(x) = −1 + 0.01(−x5 + x8) ≤ 0;
c4(x) = 100 x1 − x1 x6 + 833.33252 x4 − 83333.333 ≤ 0;   (B20)
c5(x) = x2 x4 − x2 x7 − 1250 x4 + 1250 x5 ≤ 0;
c6(x) = x3 x5 − x3 x8 − 2500 x5 + 1250000 ≤ 0;
Appendix B.11.
• Dimension: 2;
• Domain and search space: −1 ≤ xi ≤ 1;
• Function:
• Constraints:
• Global minimum at x* = ±((1/2)^{0.5}, 1/2), f(x*) = 0.75.
Appendix B.12.
• Dimension: 5;
• Domain and search space: li ≤ xi ≤ ui , with l = −u, u =(2.3,2.3,3.2,3.2,3.2);
• Function:
• Constraints:
c1(x) = x1^2 + x2^2 + x3^2 + x4^2 + x5^2 − 10 = 0;
c2(x) = x2 x3 − 5 x4 x5 = 0;   (B24)
c3(x) = x1^3 + x2^3 + 1 = 0;
Appendix B.13.
• Dimension: 2;
• Domain and search space: −10 ≤ xi ≤ 10;
• Function:
• Constraints:
c1 (x) = x1 + x2 − 8 = 0; (B26)
Appendix B.14.
• Dimension: 3;
• Domain and search space: −10 ≤ xi ≤ 10;
• Function:
• Constraints:
Appendix B.15.
• Dimension: 3;
• Domain and search space: −10 ≤ xi ≤ 10;
• Function:
• Constraints:
c1 (x) = x1 x2 x3 − 72 = 0;
(B30)
c2 (x) = x1 − 2x2 = 0;
Appendix B.16.
• Dimension: 2;
• Domain and search space: −10 ≤ xi ≤ 10;
• Function:
f(x) = ln(1 + x1^2) − x2;   (B31)
• Constraints:
Appendix B.17.
• Dimension: 3;
• Domain and search space: −10 ≤ xi ≤ 10;
• Function:
• Constraints:
Appendix B.18.
• Dimension: 5;
• Domain and search space: 0 ≤ xi ≤ 1;
• Function:
• Constraints:
Appendix B.19.
• Dimension: 6;
• Domain and search space: 0 ≤ x1,...,5 ≤ 1, 0 ≤ y;
• Function:
• Constraints:
c1 (x) = 6x1 + 3x2 + 3x3 + 2x4 + 1x5 − 6.5 ≤ 0
(B38)
c2 (x) = 10x1 + 10x3 + y − 20 ≤ 0
Appendix B.20.
• Dimension: 12;
• Domain and search space: 0 ≤ xi ≤ 1, 0 ≤ y1,...,5 ≤ 1, 0 ≤ y6,...,8 ≤ 3;
• Function:
• Constraints:
c1 (x) = 2x1 + 2x2 + y6 + y7 − 10 ≤ 0
c2 (x) = 2x1 + 2x3 + y6 + y8 − 10 ≤ 0
c3 (x) = 2x2 + 2x3 + y7 + y8 − 10 ≤ 0
c4 (x) = −8x1 + y6 ≤ 0
c5 (x) = −8x2 + y7 ≤ 0 (B40)
c6 (x) = −8x3 + y8 ≤ 0
c7 (x) = −2x4 − y1 + y6 ≤ 0
c8 (x) = −2y2 − y3 + y7 ≤ 0
c9 (x) = −2y4 − y5 + y8 ≤ 0
Appendix B.21.
• Dimension: 10;
• Domain and search space: 0 ≤ xi ;
• Function:
f(x) = − Σ_{i=1}^{n−1} x_i x_{i+1} − Σ_{i=1}^{n−2} x_i x_{i+2} − x1 x9 − x1 x10 − x2 x10 − x1 x5 − x4 x7;   (B41)
• Constraints:
c1(x) = Σ_{i=1}^{n} x_i − 1 = 0   (B42)
• Global minimum at x∗ = (0, 0, 0, 0.25, 0.25, 0.25, 0.25, 0, 0, 0), f (x∗ ) = −0.375.
Appendix B.22.
• Dimension: 6;
• Domain and search space: li ≤ xi ≤ ui , with l = (0, 0, 1, 0, 1, 0), u =
(−, −, 5, 6, 5, 10);
• Function:
f (x) = −25(x1 −2)2 −(x2 −2)2 −(x3 −1)2 −(x4 −4)2 −(x5 −1)2 −(x6 −4)2 ; (B43)
• Constraints:
c1(x) = 4 − (x3 − 3)^2 − x4 ≤ 0
c2(x) = 4 − (x5 − 3)^2 − x6 ≤ 0
c3(x) = x1 − 3 x2 − 2 ≤ 0   (B44)
c4(x) = −x1 + x2 − 2 ≤ 0
c5(x) = x1 + x2 − 6 ≤ 0
c6(x) = 2 − x1 − x2 ≤ 0
Appendix B.23.
• Dimension: 3;
• Domain and search space: li ≤ xi ≤ ui , with l = (0, 0, 0), u = (2, −, 3);
• Function:
• Constraints:
c1(x) = −x^T A^T A x + 2 y^T A y − ||y||^2 + 0.25 ||b − z||^2 ≤ 0
c2(x) = x1 + x2 + x3 − 4 ≤ 0   (B46)
c3(x) = 3 x2 + x3 − 6 ≤ 0
with A = [0 0 1; 0 −1 0; −2 1 −1], b = (3, 0, −4)^T
Appendix B.24.
• Dimension: 2;
• Domain and search space: li ≤ xi ≤ ui , with l = (0, 0), u = (2, 3);
• Function:
• Constraints:
Appendix B.25.
• Dimension: 2;
• Domain and search space: li ≤ xi ≤ ui , with l = (0, 0), u = (3, 4);
• Function:
• Constraints:
c1 (x) = x2 − 2 − 2x41 + 8x31 − 8x21 ≤ 0
(B50)
c2 (x) = x2 − 4x41 + 32x31 − 88x21 + 96x1 − 36 ≤ 0
Appendix B.26.
• Dimension: 5;
• Domain and search space: li ≤ xi ≤ ui , with l = (78, 33, 27, 27, 27), u =
(102, 45, 45, 45, 45);
• Function:
• Constraints:
c1(x) = 0.00002584 x3 x5 − 0.00006663 x2 x5 − 0.0000734 x1 x4 − 1 ≤ 0
c2(x) = 0.000853007 x2 x5 + 0.00009395 x1 x4 − 0.00033085 x3 x5 − 1 ≤ 0
c3(x) = 1330.3294 x2^{−1} x5^{−1} − 0.42 x1 x5^{−1} − 0.30586 x2^{−1} x3^2 x5^{−1} − 1 ≤ 0
c4(x) = 0.00024186 x2 x5 + 0.00010159 x1 x2 + 0.00007379 x3^2 − 1 ≤ 0   (B52)
c5(x) = 2275.1327 x3^{−1} x5^{−1} − 0.2668 x1 x5^{−1} − 0.40584 x4 x5^{−1} − 1 ≤ 0
c6(x) = 0.00029955 x3 x5 + 0.00007992 x1 x3 + 0.00012157 x3 x4 − 1 ≤ 0
• Global minimum at x* = (78, 33, 29.998, 45, 36.7673), f(x*) = 10122.696.
Appendix B.27.
• Dimension: 3;
• Domain and search space: 1 ≤ xi ≤ 100;
• Function:
• Constraints:
Appendix B.28.
• Dimension: 4;
• Domain and search space: 0.1 ≤ xi ≤ 10;
• Function:
• Constraints:
c1(x) = 0.05882 x3 x4 + 0.1 x1 − 1 ≤ 0   (B56)
c2(x) = 4 x2 x4^{−1} + 2 x2^{−0.71} x4^{−1} + 0.05882 x2^{−1.3} x3 − 1 ≤ 0
Appendix B.29.
• Dimension: 8;
• Domain and search space: 0.01 ≤ xi ≤ 10;
• Function:
• Constraints:
c1(x) = 0.05882 x3 x4 + 0.1 x1 − 1 ≤ 0
c2(x) = 0.05882 x7 x8 + 0.1 x1 + 0.1 x5 − 1 ≤ 0
c3(x) = 4 x2 x4^{−1} + 2 x2^{−0.71} x4^{−1} + 0.05882 x2^{−1.3} x3 − 1 ≤ 0   (B58)
c4(x) = 4 x6 x8^{−1} + 2 x6^{−0.71} x8^{−1} + 0.05882 x6^{−1.3} x7 − 1 ≤ 0
Appendix B.30.
• Dimension: 5;
• Domain and search space: −5 ≤ xi ≤ 5;
• Function:
• Constraints:
c1(x) = x1 + x2^2 + x3^3 − 3√2 − 2 = 0;
c2(x) = x2 − x3^2 + x4 − 2√2 + 2 = 0;   (B60)
c3(x) = x1 x5 − 2 = 0;