

Nonlinear Programming Solvers for Unconstrained and Constrained


Optimization Problems: a Benchmark Analysis

Giovanni Lavezzi^a, Kidus Guye^b and Marco Ciarcià^a

^a Department of Mechanical Engineering, South Dakota State University, 1451 Stadium Rd,
Brookings, South Dakota, 57006, USA; ^b Department of Mechanical Engineering and
Materials Science, Washington University in St. Louis, 1 Brookings Dr, St. Louis, Missouri,
63130, USA

CONTACT: Giovanni Lavezzi, Ph.D. student. Email: giovanni.lavezzi@sdstate.edu
CONTACT: Kidus Guye, Ph.D. student. Email: g.kidus@wustl.edu
Corresponding author: Marco Ciarcià, Assistant professor. Email: marco.ciarcia@sdstate.edu

ARTICLE HISTORY
Compiled April 12, 2022

Abstract
In this paper we propose a set of guidelines to select a solver for the solution of
nonlinear programming problems. With this in mind, we present a comparison of
the convergence performance of commonly used solvers for both unconstrained and
constrained nonlinear programming problems. The comparison involves accuracy,
convergence rate, and convergence speed. Because of its popularity among research
teams in academia and industry, MATLAB is used as the common implementation
platform for the solvers. Our study includes solvers which are freely available,
require a license, or are fully described in the literature. In addition, we treat solvers
that allow the selection of different search methods as distinct algorithms. As a result,
we examine the performance of 23 algorithms on 60 benchmark problems. To
enrich our analysis, we also describe how, and to what extent, convergence speed
and accuracy can be improved by changing the inner settings of each solver.

KEYWORDS
NLP; unconstrained; constrained; optimization

1. Introduction

The current technological era prioritizes, more than ever, high performance and efficiency
of complex processes controlled by a set of variables. Examples of these
processes are (Lasdon and Warren 1980; Grossmann 1996; Charalambous 1979;
Grossmann and Kravanja 1997; Wu and William 1992; Wansuo and Haiying 2010;
Rustagi 1994; Ziemba and Vickson 1975): engineering designs, chemical plant reactions,
manufacturing processes, grid power management, power generation/conversion
processes, path planning for autonomous vehicles, climate simulations, etc. Quite often,
the search for the best performance, or the highest efficiency, can be transcribed into the
form of a Nonlinear Programming (NLP) problem, namely, the need to minimize (or
maximize) a scalar cost function subject to a set of constraints. In some instances
these functions are linear but, in general, one or both of them are characterized by
nonlinearities. For simple, one-time use problems, one might successfully use any of
the solvers available, like fmincon in MATLAB (MathWorks 2020a; MATLAB 2020).
Nevertheless, if the NLP derives from some specific application, like real-time process
optimization, then the solver choice demands a more careful selection.
The first research efforts toward the characterization of optimization solvers started
in the 1960s. In (Box 1966) the authors compare eight solvers on twenty benchmark unconstrained
NLP problems containing up to 20 variables. Notably, they illustrate techniques
to transform particular constrained NLPs into equivalent unconstrained problems.
The authors of (Levy and Guerra 1976) analyze the convergence properties of
two gradient-based solvers applied to 16 test problems. In the last few decades, with
the development of new methodologies and optimization applications, more studies
have aimed to illustrate differences in performance among NLP solvers. Schittkowski et al.
(Schittkowski, Zillober, and Zotemantel 1994) performed a comparison of eleven different
mathematical programming codes applied to structural optimization through
finite element analysis. George et al. summarize a qualitative comparison of a few optimization
methodologies reported by several other sources (George and Raimond 2013).
In the research document prepared by Sandia National Laboratory (Gearhart et al.
2013), a study was conducted on four open source Linear Programming (LP) solvers
applied to 201 benchmark problems. In (Kronqvist et al. 2018) Kronqvist et al. carried
out a performance comparison of mixed integer NLP solvers limited to convex benchmark
problems. Pratiksha Saxena presents a comparison between linear and nonlinear
programming techniques for the diet formulation of animals (Saxena 2012). Another
work by Hannes Pucher uses the programming language R to analyze multiple
nonlinear optimization methods applied to real life problems (Pucher and Stix 2008).
State-of-the-art optimization methods were compared on their application to
L1-regularized classifiers (Yuan et al. 2010). On a similar note, multiple global optimization
solvers were compared in a work by Arnold Neumaier (Neumaier et al.
2005). The authors of (Obayash and Tsukahara 1997; McIlhagga, Husbands, and Ives
1996; Haupt 1995; Hamdy, Nguyen, and Hensen 2016) conducted performance comparisons
of optimization techniques for specific applications, which include aerodynamic
shape design, integrated manufacturing planning and scheduling, solving electromagnetic
problems, and building energy design problems, respectively. Similarly, Frank
et al. conducted a comparison between three optimization methods for solving aerodynamic
design problems (Frank and Shubin 1992). In (Karaboga and Basturk 2008),
Karaboga et al. compare the performance of the artificial bee colony algorithm with the
differential evolution, evolutionary, and particle swarm optimization algorithms using
multi-dimensional numerical problems.
In this paper we want to provide an explicit comparison of a set of NLP solvers. We
include in our comparison popular solvers readily available in MATLAB, a few gradient
descent methods that have been extensively used in the literature, and a particle swarm
optimization algorithm. Because of its widespread use among research groups, both in academia
and the private sector, we have decided to use MATLAB as the common implementation
platform. For this reason we focus on solvers that are either written in or
can be interfaced with MATLAB. The NLP problems used in this comparison have
been selected amongst the standard benchmark problems (Hedar 2020; Schittkowski
2009; Floudas and Pardalos 1999) with up to thirty variables and up to nine scalar
constraints. The paper is organized as follows. Section 2 describes the statement of
unconstrained and constrained NLP problems. In Section 3, we enumerate the NLP
solvers included in our analysis and their main features. Subsequently, an overview of
the different convergence metrics and the solvers' implementations is carried out in
Section 4. The results of the comparison on the benchmark problems are discussed in
Section 5. Finally, the main contributions of the paper are outlined in Section 6.

2. Nonlinear programming problem statements

In general, a constrained NLP problem aims to minimize a nonlinear real scalar objective
function, with respect to a set of variables, while satisfying a set of nonlinear
constraints. If the problem entails the minimization of a function without the presence
of constraints, the problem is defined as unconstrained (Nocedal and Wright 2006). In
the following sections, the general forms of unconstrained and constrained nonlinear
optimization problems are stated in minimization form.

2.1. Unconstrained optimization problem


2.1.1. Statement
Let x ∈ R^n be a real vector with n ≥ 1 components and let f : R^n → R be a smooth
function. Then, the unconstrained optimization problem is defined as

min_{x ∈ R^n} f(x).    (1)

2.1.2. Optimality conditions


For a one-dimensional function f (x) defined and differentiable over an interval (a, b),
the necessary condition for a point x∗ ∈ (a, b) to be a local maximum or minimum
is that f′(x∗) = 0. This is also known as Fermat's theorem. The multidimensional
extension of this condition states that the gradient must be zero at a local optimum
point, namely

∇f(x∗) = 0.    (2)

Eq. 2 is referred to as a first-order optimality condition, as it is expressed in terms of
the first-order derivatives.
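As a quick numerical illustration (a minimal sketch on a toy function of our own choosing, not one of the benchmark problems), the condition of Eq. 2 can be checked with a central-difference gradient:

```matlab
% Minimal sketch, assuming a toy function: verify Eq. (2) numerically at a
% candidate minimizer via central differences.
f  = @(x) (x(1) - 1)^2 + 10*(x(2) - 3)^2;    % toy objective, minimum at (1, 3)
xs = [1; 3];                                 % candidate stationary point x*
h  = 1e-6;                                   % finite-difference step
g  = zeros(2, 1);
for i = 1:2
    e = zeros(2, 1); e(i) = h;
    g(i) = (f(xs + e) - f(xs - e)) / (2*h);  % central-difference gradient
end
fprintf('||grad f(x*)|| = %.2e\n', norm(g)); % ~0 at a stationary point
```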

2.2. Constrained optimization problem


2.2.1. Statement
The constrained optimization problem is formulated as

min_{x ∈ R^n} f(x)    (3)

subject to

ci(x) ≤ 0,  i = 1, 2, ..., w,    (4)

cj(x) = 0,  j = 1, 2, ..., l,    (5)

with c(x) a smooth real-valued function on a subset of R^n. Notably, ci(x) and cj(x)
represent the sets of inequality constraints and equality constraints, respectively. The
feasible set is identified as the set of points x that satisfy all the constraints (Eqs. 4,
5). It must be pointed out that some of the solvers considered in this study are only
able to handle equality constraints. In these instances, we will introduce a set of slack
variables si and convert Eq. 4 into the following set of equality constraints

ci(x) + si^2 = 0,  i = 1, 2, ..., w.    (6)

Such a necessary expedient will obviously induce more computational burden on the
particular solvers affected by this constraint-type limitation.
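As an illustration of this conversion (the constraint below is a hypothetical example, not one of the benchmark problems), a MATLAB sketch might look as follows:

```matlab
% Minimal sketch of the conversion in Eq. (6), on a hypothetical inequality
% constraint: c(x) <= 0 becomes c(x) + s^2 = 0 by augmenting the decision
% vector z = [x; s] with one slack variable per inequality.
c_ineq = @(x) x(1)^2 + x(2)^2 - 4;        % original inequality, c(x) <= 0
c_eq   = @(z) c_ineq(z(1:2)) + z(3)^2;    % equality form fed to the solver
z0 = [1; 1; 1];                           % initial guess, slack set to 1
disp(c_eq(z0))                            % equality-constraint residual at z0
```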

2.2.2. Optimality conditions


The measure of first-order optimality for constrained problems derives from the
Karush-Kuhn-Tucker (KKT) conditions (Boyd and Vandenberghe 2004). These necessary
conditions are defined as follows. Let the objective function f and the constraint
functions gi and hj (denoting here the inequality and equality constraints of Eqs. 4
and 5) be continuously differentiable at x∗ ∈ R^n. If x∗ is a local optimum and the
optimization problem satisfies some regularity conditions (Nocedal and Wright 2006),
then there exist constants µi (i = 1, . . . , w) and λj (j = 1, . . . , ℓ), called KKT
multipliers, such that the following four groups of conditions hold:
• Stationarity:

∇f(x∗) + Σ_{i=1}^{w} µi ∇gi(x∗) + Σ_{j=1}^{ℓ} λj ∇hj(x∗) = 0.    (7)

• Primal feasibility:

gi (x∗ ) ≤ 0, for i = 1, . . . , w. (8)

hj (x∗ ) = 0, for j = 1, . . . , ℓ. (9)

• Dual feasibility:

µi ≥ 0, for i = 1, . . . , w. (10)

• Complementary slackness:
Σ_{i=1}^{w} µi gi(x∗) = 0.    (11)
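As a minimal sketch (a hypothetical problem of our own, with an assumed optimum and multiplier), conditions (7)-(11) can be checked numerically:

```matlab
% Minimal sketch (hypothetical problem): check the KKT conditions (7)-(11)
% for min x1^2 + x2^2 subject to g(x) = 1 - x1 - x2 <= 0, whose candidate
% optimum and multiplier are assumed to be x* = (0.5, 0.5) and mu = 1.
xs = [0.5; 0.5];  mu = 1;
gradf = 2*xs;                              % gradient of the objective at x*
gradg = [-1; -1];                          % gradient of the constraint at x*
stat  = gradf + mu*gradg;                  % stationarity, Eq. (7)
prim  = 1 - sum(xs);                       % primal feasibility, Eq. (8)
comp  = mu*(1 - sum(xs));                  % complementary slackness, Eq. (11)
fprintf('stationarity %.1e, g(x*) %.1e, mu %.1f, mu*g %.1e\n', ...
        norm(stat), prim, mu, comp);
```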

3. Selection of NLP solvers and algorithms

The selection of the NLP solvers considered in this work is based on the following
aspects. First of all, we only consider algorithms that can be implemented in
MATLAB. Secondly, we have included solvers that are either freely available or, for
commercial software, offer a free trial version. The remaining part of this section briefly
describes the 23 solvers included in our analysis and the most direct source for each
algorithm.

3.1. APSO
The Accelerated Particle Swarm Optimization (APSO) is an algorithm developed by
Yang at Cambridge University in 2007, and it is based on a swarm-intelligent search of
the optimum (Yang 2014). APSO is an evolution of the standard particle swarm
optimization (PSO), developed to accelerate the convergence of the standard version of the
algorithm. The standard PSO is characterized by two elements, the swarm, that is the
population, and the members of the population, called particles. The search is based
on a randomly initialized population that moves in randomly chosen directions. In
particular, each particle moves through the search space and remembers the best earlier
positions, velocities, and accelerations of itself and its neighbors. This information is
shared among the particles while they dynamically adjust their own position, velocity,
and acceleration derived from the best position of all particles. The next step starts
when all particles have been shifted. Finally, all particles aim to find the global best
among all the current best solutions until the objective function no longer improves or
after a certain number of iterations (Yang 2014). The standard PSO uses both the
current global best and the individual best, whereas the simplified version APSO is
able to accelerate the convergence of the algorithm by using the global best only. Due
to the nature of the algorithm, only constrained nonlinear programming problems can
be solved. The MATLAB version of the APSO algorithm is provided in (Yang 2014).
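For context, a minimal sketch of the simplified APSO position update, as we read it from (Yang 2014), is shown below; the toy objective and parameter values are our own assumptions and not the settings used in the benchmark runs, and constraint handling is omitted.

```matlab
% Minimal sketch of the simplified APSO position update (our reading of
% Yang 2014); values are illustrative assumptions, constraint handling omitted.
f     = @(X) sum((X - 2).^2, 2);        % toy objective, evaluated row-wise
n     = 15;  d = 2;                     % number of particles and dimensions
alpha = 0.2; beta = 0.5; gamma = 0.95;  % roaming amplitude, attraction, decay
X     = -5 + 10*rand(n, d);             % random initial swarm in [-5, 5]^d
for t = 1:100
    [~, idx] = min(f(X));
    gbest = X(idx, :);                                   % current global best
    X = (1 - beta)*X + beta*gbest + alpha*randn(n, d);   % move toward the best
    alpha = gamma*alpha;                                 % shrink the random step
end
[~, idx] = min(f(X));
disp(X(idx, :))                          % best particle after the last update
```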

3.2. BARON
The Branch and Reduce Optimization Navigator (BARON) is a commercial global
optimization software that solves both NLPs and mixed-integer nonlinear programs
(MINLP). BARON uses deterministic global optimization algorithms of the branch
and bound type which, under fairly general assumptions, solve the global optimization
problem. It comes with embedded linear programming (LP) and NLP solvers,
such as CLP/CBC, IPOPT, FilterSD and FilterSQP. By default, BARON selects the
NLP solver and may switch between different NLP solvers during the search according
to problem characteristics and solver performance. To refer to the default option, the
name BARON (auto) is used. Unlike many other NLP algorithms, BARON doesn't
explicitly require the user to provide an initial guess of the solution but leaves this
as an option. If a user doesn't provide the initial guess, then the software shrewdly
initializes the variables. In this paper, we use the demo version of the software in
conjunction with the MATLAB interface, which can be retrieved from (Firm 2021). It must
be noted that the free demo version is characterized by some limitations, namely, it can
only handle problems with up to ten variables and ten constraints, and it doesn't support
trigonometric functions. Details and documentation about the BARON software are
provided in (Tawarmalani and Sahinidis 2004; Sahinidis n.d.).

3.2.1. CLP/CBC
The Computational Infrastructure for Operations Research (COIN-OR) Branch and
Cut (CBC) solver is an open-source mixed integer linear programming solver based on
the COIN-OR LP solver (CLP) and the COIN-OR Cut generator library (Cgl). The
code has been written primarily by John J. Forrest (COIN-OR 2016).

3.2.2. IPOPT
COIN-OR Interior Point Optimizer (IPOPT) is an open-source solver for large-scale
NLP and it has been mainly developed by Andreas Wächter (Wächter and Biegler
2006). IPOPT implements an interior point line search filter method for nonlinear pro-
gramming models. The problem functions are not required to be convex but should be
twice continuously differentiable. Mathematical details of the algorithm and documen-
tation can be found in (COIN-OR 2021).

3.2.3. FilterSD
FilterSD is a package of Fortran 77 subroutines for solving nonlinear programming
problems and linearly constrained problems in continuous optimization. The NLP
solver filterSD aims to find a solution of the NLP problem, where the objective func-
tion and the constraint function are continuously differentiable at points that satisfy
the bounds on x. The code has been developed to avoid the use of second derivatives,
and to prevent storing an approximate reduced Hessian matrix by using a new limited
memory spectral gradient approach based on Ritz values. The basic approach is that
of Robinson’s method, globalised by using a filter and trust region (FilterSD 2020).

3.2.4. FilterSQP
FilterSQP is a Sequential Quadratic Programming solver suitable for solving large,
sparse or dense linear, quadratic and nonlinear programming problems. The method
implements a trust region algorithm with a filter to promote global convergence. The
filter accepts a trial point whenever the objective or the constraint violation is improved
compared to all previous iterations. The size of the trust region is reduced if the step
is rejected, and increased if it is accepted (Fletcher and Leyffer 1999).

3.3. FMINCON
FMINCON is a MATLAB Optimization Toolbox solver used to solve constrained nonlinear
programming problems. FMINCON provides the user the option to select amongst five
different algorithms to solve nonlinear problems: Active-set, Interior-point, Sequential
Quadratic Programming, Sequential Quadratic Programming legacy, and Trust region
reflective. Four of the five algorithms are implemented in our analysis, as the remaining
one, the Trust region reflective algorithm, does not accept the type of constraints
considered in our benchmark cases.
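As a minimal sketch (the toy problem below is our own assumption; the option names follow MATLAB's documentation), the algorithm is selected through optimoptions before calling the solver:

```matlab
% Minimal sketch of selecting one of the FMINCON algorithms via optimoptions;
% the toy problem is an assumption, not one of the benchmark cases.
fun     = @(x) (x(1) - 2)^2 + (x(2) - 2)^2;      % objective
nonlcon = @(x) deal(x(1) + x(2) - 1, []);        % inequality c(x) <= 0, no ceq
x0      = [0; 0];                                % initial guess
opts    = optimoptions('fmincon', 'Algorithm', 'sqp', 'Display', 'off');
[x, fval] = fmincon(fun, x0, [], [], [], [], [], [], nonlcon, opts);
```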

3.3.1. Active-set
The Active-set algorithm, unlike the Interior point one (mentioned next), doesn't use a barrier term
to ensure that the inequality constraints are met, but solves the optimality equations by
estimating the true active set of constraints. A general active-set algorithm for convex quadratic
programming can be found in (Nocedal and Wright 2006).

3.3.2. Interior point
This method, also known as the barrier method, is a class of nonlinear programming
algorithms that determine the optimum by iteratively approaching
the optimal solution from the interior of the feasible set (Nocedal and Wright 2006).
Since the interior point algorithm depends on a feasible set, the following requirements
must be met for the method to be used:
• the set of feasible interior points should not be empty;
• all the iterations should occur in the interior of this feasible set.

3.3.3. Sequential Quadratic Programming


The basic idea behind Sequential Quadratic Programming (SQP) is to find
a minimizer of a subproblem, which is generated as an approximate model of the
optimization problem at the current iteration point. This is then used to define a
new iteration point, which in turn is used to define another minimizer, and the process
is iterated. SQP is similar to the active-set method, but some of the differences are listed as
follows:
• strict feasibility with respect to bounds;
• robustness to non-double results;
• refactored linear algebra routines;
• reformulated feasibility routines.
A general line-search algorithm framework for SQP can be found in
(Nocedal and Wright 2006).

3.3.4. Sequential Quadratic Programming legacy


Sequential Quadratic Programming legacy (SQP-legacy) is similar to SQP, with the
difference that it uses more memory and is therefore slower in determining the
problem solution (Nocedal and Wright 2006).

3.3.5. Trust region reflective


The Trust region reflective algorithm solves an NLP by defining a region that is assumed
to represent the objective function as accurately as possible. From the selected
trust region, a step is taken and used as a minimizer. If that specific step doesn't
generate an acceptable solution, a different region with a reduced size is selected. Then a new
step is executed and considered as the new minimizer for the region, and the process is
iterated (Nocedal and Wright 2006). The Trust region reflective algorithm in FMINCON
accepts only bounds or linear equality constraints. Due to this limitation, this algorithm
isn't included in the analysis.

3.4. FMINUNC
FMINUNC is another MATLAB Optimization Toolbox solver, used to solve unconstrained
nonlinear programming problems (MathWorks 2020b). In this case, FMINUNC gives
the user the option of choosing between two different algorithms to solve nonlinear
minimization problems: Quasi-Newton, and Trust region.

3.4.1. Quasi-Newton
The Quasi-Newton methods build up curvature information at each iteration to
formulate a quadratic model problem, with the optimal solution occurring when the
stationarity conditions are satisfied. Newton-type methods, as opposed to quasi-Newton
methods, calculate the Hessian matrix directly and proceed in a direction of descent
to locate the minimum after a number of iterations, which numerically involves a large
amount of computation. On the contrary, quasi-Newton methods use the observed
behavior of the objective function and its gradient to build up curvature information and
approximate the Hessian matrix with an appropriate updating technique
(MathWorks 2020c). In particular, the quasi-Newton algorithm uses the formula of
Broyden, Fletcher, Goldfarb, and Shanno (BFGS) to update the approximation of the
Hessian matrix, combined with a cubic line search procedure (MathWorks 2020b).
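For reference, the textbook BFGS update of the Hessian approximation B_k, with s_k = x_{k+1} − x_k and y_k = ∇f(x_{k+1}) − ∇f(x_k), reads (a standard formula quoted here only for context, not taken from the solver documentation):

```latex
B_{k+1} = B_k - \frac{B_k s_k s_k^{\mathsf T} B_k}{s_k^{\mathsf T} B_k s_k}
              + \frac{y_k y_k^{\mathsf T}}{y_k^{\mathsf T} s_k}.
```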

3.4.2. Trust region


The trust region algorithm is a subspace trust-region method, based on the interior-
reflective Newton method. Each iteration involves the approximate solution of a
large linear system using the method of preconditioned conjugate gradients (PCG)
(MathWorks 2020b). In a minimization context, the Hessian matrix can be assumed
symmetric, but it is guaranteed to be positive definite only in the neighborhood of
a strong minimizer. Algorithm PCG exits when it encounters a direction of negative
or zero curvature. The PCG output direction is either a direction of negative curvature
or an approximate solution to the Newton system; in either case it helps to define the
two-dimensional subspace used in the trust-region approach (MathWorks 2020c).

3.5. GCMMA
GCMMA, the Globally Convergent Method of Moving Asymptotes, is a modified version
of MMA with convergence guaranteed from any starting point. Unlike MMA,
GCMMA consists of so-called inner and outer iterations. The GCMMA follows the
same steps as the MMA except for small changes. In GCMMA, an approximate
subproblem is created at each outer iteration by replacing the objective and constraint
functions with convex approximations. The subproblem is solved and, if its solution is
acceptable, it is taken as the next outer iteration point; otherwise the inner iterations
kick off. At each inner iteration, a new subproblem is generated and solved, until an
acceptable iteration point is found. The algorithm then moves to the next outer iteration
(Svanberg 2002). The GCMMA algorithm is fully described in (Svanberg 2007), and the
MATLAB code is freely available at (Svanberg 2020).

3.6. KNITRO
ARTELYS KNITRO is a commercially available nonlinear optimization software package
developed by Ziena Optimization since 2001 (Knitro 2021b). KNITRO, short for
Nonlinear Interior point Trust Region Optimization, is a software package for finding
local solutions of both continuous optimization problems, with or without constraints,
and discrete optimization problems with integer or binary variables. The KNITRO
package provides efficient and robust solution of small or large problems, for both
continuous and discrete problems, and offers derivative-free options. It supports the most popular
operating systems and several modeling languages and programmatic interfaces (Knitro
2021a). Multiple versions of the software are available for download at (Knitro 2021b).
In this work, the software free trial license is used, in conjunction with the MATLAB
interface. Several algorithms are included in the software, such as Interior point, Active-set,
and Sequential Quadratic Programming. The description of these algorithms can
be found in Section 3.3.

3.7. MIDACO
The Mixed Integer Distributed Ant Colony Optimization (MIDACO) is a global optimization
solver that combines an extended evolutionary probabilistic technique, called
the Ant Colony Optimization algorithm, with the Oracle Penalty method for constraint
handling (MIDACO-Solver, user manual 2021). Ant Colony Optimization is
modelled on the behavior of ants finding the quickest path between their colony and
the food source. Like the majority of evolutionary optimization algorithms, MIDACO
considers the objective and constraint functions as black-box functions. MIDACO was
created in collaboration with the European Space Agency and EADS Astrium to solve
constrained mixed-integer nonlinear programming (MINLP) space applications (Schlueter et al. 2013).
We use the trial version of MIDACO, in conjunction with the MATLAB interface. The
trial version has a limitation, namely, it doesn't support more than four variables per
problem. The solver can be downloaded from (MIDACO-Solver, user manual 2021).

3.8. MMA
The Method of Moving Asymptotes (MMA) solves a nonlinear problem by generating
approximate subproblems. The convex functions used in these subproblems are
chosen using gradient information at the current iteration point, together with parameters
that are updated at each iteration stage, called the moving asymptotes. The
subproblem is solved at the current iteration point, and the solution is used as the
next iteration point. Similarly, a new subproblem is generated at this new iteration
point, which again is solved to create the next iteration point (Svanberg 1987). The
MMA algorithm is fully described in (Svanberg 2007), and the MATLAB code is freely
available at (Svanberg 2020).

3.9. MQA
The Modified Quasilinearization Algorithm (MQA) is a modified version of the Standard
Quasilinearization Algorithm (SQA) (Eloe and Jonnalagadda 2019; Yeo 1974)
described below. These quasilinearization algorithms base their solution search on the
linear approximation of the NLP, namely, on the Hessian matrix and gradient of the
objective and constraint functions. Ultimately, the goal is the progressive reduction
of the performance index. For unconstrained NLP problems, the performance index is
defined as Q̃ = fx^T fx, where fx is the gradient of the objective function. On the other
hand, for constrained NLP problems the performance index is defined as R̃ = P̃ + Q̃,
which comprises both the feasibility index P̃ = h^T h and the optimality index Q̃ = Fx^T Fx,
with F = f + λ^T h, where f is the objective function, h is the constraint function, and λ
is the vector of Lagrange multipliers associated with the constraint function. Convergence
to the desired solution is achieved when the performance index satisfies Q̃ ≤ ε1 or R̃ ≤ ε2,
with ε1 and ε2 small preselected positive constants, for the unconstrained and constrained
case respectively (Miele and Iyer 1971; Miele, Mangiavacchi, and Aggarwal 1974). Unlike
SQA, characterized by a unitary step size, MQA progressively reduces the step
size 0 < α < 1 to enforce an improvement in optimality. In turn, the main advantage
of the MQA over the SQA is its descent property: if the step size α is sufficiently
small, the reduction in the performance index is indeed guaranteed. It must be pointed
out that the MQA for NLP problems can only treat equality constraints. Therefore,
in our implementation, all the inequality constraints are converted into equality
constraints by introducing slack variables. We have implemented the algorithm in
MATLAB in order to solve both unconstrained and constrained NLP problems, based
on (Miele and Iyer 1971; Miele, Mangiavacchi, and Aggarwal 1974).
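A minimal sketch of how these stopping indices can be evaluated (the problem, candidate solution, and multiplier below are hypothetical, chosen only for illustration) is:

```matlab
% Minimal sketch (hypothetical problem) of the MQA/SQA stopping indices:
% P = h'*h (feasibility) and Q = Fx'*Fx (optimality), with F = f + lambda'*h
% evaluated through its gradient Fx.
f_x    = @(x) [2*x(1); 2*x(2)];     % gradient of f(x) = x1^2 + x2^2
h      = @(x) x(1) + x(2) - 1;      % single equality constraint
h_x    = @(x) [1; 1];               % constraint gradient
x      = [0.5; 0.5];  lambda = -1;  % candidate solution and multiplier
P  = h(x)' * h(x);                  % feasibility index P~
Fx = f_x(x) + h_x(x) * lambda;      % gradient of F = f + lambda'*h
Q  = Fx' * Fx;                      % optimality index Q~
R  = P + Q;                         % overall performance index R~ = P~ + Q~
```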

3.10. PENLAB
PENLAB is a free open source software package implemented in MATLAB for nonlinear
optimization, linear and nonlinear semidefinite optimization and any combination of
these. It derives from PENNON, the original implementation of the algorithm which
is not open source (Fiala, Kočvara, and Stingl 2013). Originally, PENNON was an
implementation of the PBM method developed by Ben-Tal and Zibulevsky for problems
of structural optimization, which has grown into a stand-alone program for solving
general problems (Kocvara and Stingl 2003).
Lagrangian method pioneered by R. Polyak (Polyak 1992). PENLAB can be freely
downloaded from (Kocvara 2017).

3.11. SGRA
The Sequential Gradient-Restoration Algorithm (SGRA) is a first order nonlinear
programming solver developed by Angelo Miele and his research group in 1969 (COKER
1985; Miele, Huang, and Heideman 1969). It is based on a cyclical scheme whereby,
first, the constraints are satisfied to a prescribed accuracy (restoration phase); then,
using a first-order gradient method, a step is taken toward the optimal direction to
improve the performance index (gradient phase). The performance index is defined
as R̃ = P̃ + Q̃, which includes both the feasibility index P̃ = h^T h and the optimality
index Q̃ = Fx^T Fx, with F = f + λ^T h, where f is the objective function, h is the
constraint function, and λ is the vector of Lagrange multipliers associated with the constraint
function. Convergence is achieved when the constraint error and the optimality
condition error satisfy P̃ ≤ ε1 and Q̃ ≤ ε2, respectively, with ε1, ε2 small preselected positive
constants. It must be pointed out that the SGRA for NLP problems can only treat
equality constraints. Therefore, in our implementation, all the inequality constraints
are converted into equality constraints by introducing slack variables. We have
programmed the algorithm in MATLAB in order to solve both unconstrained and
constrained NLP problems, based on (Miele, Huang, and Heideman 1969). The SGRA
version used to solve unconstrained NLP problems differs from the original formulation
by the omission of the restoration phase in the iterative process.

3.12. SNOPT
The Sparse Nonlinear OPTimizer (SNOPT) is a commercial software package for solving
large-scale optimization problems, both linear and nonlinear programs. It minimizes a
linear or nonlinear function subject to bounds on the variables and sparse linear or
nonlinear constraints. SNOPT implements a sequential quadratic programming method
for solving constrained optimization problems with functions and gradients that are
expensive to evaluate, and with smooth nonlinear functions in the objective and
constraints (Gill et al. 2001). SNOPT is implemented in Fortran 77 and distributed as
source code. In this paper, we use the free trial version of the software in conjunction
with the MATLAB interface, which can be retrieved at (Gill et al. 2018).

3.13. SOLNP
SOLNP was originally implemented in MATLAB to solve general nonlinear programming
problems, characterized by nonlinear smooth functions in the objective and constraints
(Ye 1989). Inequality constraints are converted into equality constraints by means of
slack variables. The major iteration of SOLNP solves a linearly constrained optimization
problem with an augmented Lagrangian objective function. Within the major
iteration, as a first step it is checked whether the current solution is feasible for the linear
equality constraints of the problem; if it is not, an interior linear programming procedure
is called to find an interior feasible (or near-feasible) solution. Subsequently, a sequential
quadratic programming (QP) procedure solves the linearly constrained problem. If the QP
solution is both feasible and optimal, the algorithm stops, otherwise it solves another QP
problem as a minor iteration. Both major and minor processes repeat until the optimal
solution is found or the user-specified maximum number of iterations is reached (Ye
1989). The SOLNP module in MATLAB can be freely downloaded from (Ye 2020).

3.14. SQA
The Standard Quasilinearization Algorithm (SQA) is the standard version of the
quasilinearization algorithm, and it solves nonlinear problems by generating a sequence of
solutions of linearized problems (Eloe and Jonnalagadda 2019; Yeo 1974). SQA differs from
MQA in the value associated with the scaling factor α. As mentioned before, the SQA
can only treat equality constraints. Therefore, in our implementation, all the inequality
constraints are converted into equality constraints by introducing slack variables.

4. Convergence metrics and solvers implementation

In this section we describe the convergence metrics considered in our analysis and the
key implementation steps for each solver.

4.1. Convergence metrics


The main goal of this paper is to characterize the convergence performance, in terms of
speed and accuracy, of the different solvers under analysis. We have selected a number
of benchmark NLPs and compared the numerical solutions returned by each solver
with the true analytical solution. Moreover, considering that the choice of the initial
guess critically affects the convergence process, we also want to assess the capability
to converge to the true optimum, rather than converging to local minima or not
converging at all. With this in mind, we define convergence robustness as the ability of
a solver to achieve the solution when the search process is initiated from a broad set of
initial guesses randomly chosen within the search domain. Finally, to have an accurate
assessment of the convergence speed, we require each solver to repeat the same search
several times and average out the total CPU time. As a result, given N benchmark test
functions, M solvers/algorithms, K randomly generated initial guesses, and Z repeated
identical search runs, a total of N × M × K × Z runs have been executed.
The following performance metrics are in order:

• Mean error [%]:

Ēm = (1/N) Σ_{n=1}^{N} Ēn ,   Ēn = (1/K) Σ_{k=1}^{K} Ek ,   Ek = 100 |f(x) − f(x∗)| / max(|f(x∗)|, 0.001)    (12)

with f(x) the benchmark test function evaluated at the numerical solution x
provided by the solver, f(x∗) the benchmark test function evaluated at the
optimal solution x∗, Ek the error associated with the run from the k-th randomly
generated initial guess, Ēn the mean error associated with the n-th benchmark test
function, and Ēm the mean error delivered by the m-th solver. The choice of the
denominator of Ek is motivated by the fact that some benchmark test functions
have zero value at the optimal solution; in those cases, a value of 0.001 is
used instead as the reference value.
• Mean variance [%]:

σ̄m = (1/N) Σ_{n=1}^{N} σn ,   σn = (1/(K−1)) Σ_{k=1}^{K} (Ek − Ēn)^2    (13)

where σn is the variance corresponding to the n-th benchmark test function, and
σ̄m the mean variance delivered by the m-th solver.
• Mean convergence rate [%]:

γ̄m = (1/N) Σ_{n=1}^{N} γn ,   γn = 100 Kconv / K    (14)

with Kconv the number of runs (from a pool of K distinct initial guesses) which
successfully reach convergence for the n-th function, γn the convergence rate for
the n-th function, and γ̄m the mean convergence rate delivered by the m-th solver.
A run is considered successful if it satisfies the convergence threshold conditions
Ek ≤ Emax = 5% and CPUk ≤ CPUmax = 10 s, where CPUk is the CPU time
required by the run starting from the k-th initial guess.
• Mean CPU time [s]:

CPUm = (1/N) Σ_{n=1}^{N} CPUn ,    (15)

CPUn = (1/Z) Σ_{z=1}^{Z} CPUz ,   CPUz = (1/K) Σ_{k=1}^{K} CPUk    (16)

where CPUz is the mean CPU time of the z-th repetition, CPUn is the mean CPU
time related to the n-th benchmark test function, and CPUm is the mean CPU
time delivered by the m-th solver.
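As a minimal sketch of the per-run error of Eq. (12), with made-up values used only for illustration:

```matlab
% Minimal sketch of the per-run error Ek of Eq. (12); values are made up.
f_num  = 1.0203;                                        % solver objective value
f_star = 1.0000;                                        % known optimal value
E_k = 100*abs(f_num - f_star)/max(abs(f_star), 0.001);  % error in percent
% E_n is then the mean of E_k over the K initial guesses of one test function.
```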

4.2. Solvers implementation


In this paper we analyze the convergence performance of the different solvers in terms
of robustness, accuracy, and convergence speed. Considering that the user might decide
to tune the convergence parameters to favor one of these metrics, we have decided to
perform the comparison for three separate implementation scenarios: plug and play
(P&P), high accuracy (HA), and quick solution (QS). The plug and play settings, as
the name suggests, are the "out-of-the-box" settings of each solver. The high accuracy
settings are based on more stringent tolerances and/or on a higher number of maximum
iterations with respect to the plug and play settings. This tuning aims to achieve a more
precise solution. Finally, the quick solution settings are characterized by more relaxed
convergence tolerances and a lower number of maximum iterations with respect to
the plug and play settings. In this scenario the algorithms should reach a less accurate
solution but in a shorter time. In general, the objective function, its gradient, the
initial conditions, the constraint function (for constrained problems only), and the
solver options are the elements provided as input to each solver. The objective function
gradient is not necessary for APSO, BARON, MIDACO, and SOLNP, and it is optional
for FMINCON/FMINUNC and KNITRO. For GCMMA/MMA, SGRA, and SNOPT,
the gradient of both the objective and constraint functions is necessary. MQA/SQA and
PENLAB, in addition to these inputs, require the Hessian of the objective function.
In the following subsections, details on each solver and on its three different settings
are described. It must be noted that, in most cases, the setting names reported here
are the same as the solver option names used in the code implementation. In this way,
the reader can have a better understanding of which solver parameter has been tuned.

4.2.1. APSO
The three settings considered in the analysis are reported in Table 1, where no.
particles is the number of particles, no. iterations is the total number of iterations,
and γ is a control parameter that multiplies α; α and β are the two learning parameters,
or acceleration constants, representing the random amplitude of the roaming particles
and the speed of convergence, respectively. APSO does also require the number of problem
variables, no. vars, to be defined, but this parameter is, obviously, invariant across the
three settings.
Table 1. APSO settings.

Settings P&P HA QS

no. particles 15 50 10
no. iterations 300 500 100
γ 0.9 0.95 0.95

4.2.2. BARON
The three settings considered in the analysis are reported in Table 2, with EpsA
the absolute termination tolerance, EpsR the relative termination tolerance, and
AbsConFeasTol the absolute constraint feasibility tolerance. Due to the limitations of
the trial version of the solver, trigonometric functions and problems with more than ten
variables are not supported; for this reason, the following test functions
are excluded from the analysis: A.2, A.3, A.4, A.5, A.7, A.11, A.13, A.14, A.16, A.17,
A.18, A.22, A.24, A.26 for unconstrained problems, and B.1, B.2, B.5, B.8, B.20 for
constrained problems.
Table 2. BARON settings.

Settings P&P HA QS

EpsA 1e-6 1e-10 1e-3


EpsR 1e-4 1e-10 1e-3
AbsConFeasTol 1e-5 1e-10 1e-3

4.2.3. FMINCON/FMINUNC
The three settings considered in the analysis are reported in Table 3, with
StepTolerance the lower bound on the size of a step, ConstraintTolerance the upper
bound on the magnitude of any constraint functions, FunctionTolerance the
lower bound on the change in the value of the objective function during a step, and
OptimalityTolerance the tolerance for the first-order optimality measure.
Table 3. FMINCON/FMINUNC settings.

Settings P&P HA QS

FMINCON
StepTolerance 1e-10 1e-10 1e-6
ConstraintTolerance 1e-6 1e-10 1e-3
FunctionTolerance 1e-6 1e-10 1e-3
OptimalityTolerance 1e-6 1e-10 1e-3

FMINUNC (quasi-newton)
StepTolerance 1e-6 1e-12 1e-6
FunctionTolerance 1e-6 1e-12 1e-3
OptimalityTolerance 1e-6 1e-12 1e-3

FMINUNC (trust-region)
StepTolerance 1e-6 1e-12 1e-6
FunctionTolerance 1e-6 1e-12 1e-3
OptimalityTolerance 1e-6 1e-6 1e-3
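As an example, a minimal sketch reproducing the FMINCON high-accuracy (HA) column of Table 3 via optimoptions (option names as documented by MathWorks) is:

```matlab
% Minimal sketch: FMINCON high-accuracy (HA) settings of Table 3.
optsHA = optimoptions('fmincon', ...
    'StepTolerance',       1e-10, ...
    'ConstraintTolerance', 1e-10, ...
    'FunctionTolerance',   1e-10, ...
    'OptimalityTolerance', 1e-10);
```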

4.2.4. GCMMA/MMA
The three settings considered in the analysis are reported in Table 4, where
epsimin is a prescribed small positive tolerance that terminates the algorithm, whereas
maxoutit is the maximum number of iterations for MMA, and the maximum number
of outer iterations for GCMMA.

Table 4. GCMMA/MMA settings.

Settings P&P HA QS

epsimin 1e-7 1e-10 1e-3


maxoutit 80 150 30

4.2.5. KNITRO
The three settings considered in the analysis are reported in Table 5, where MaxIter
is the maximum number of iterations before termination, TolX is a tolerance that
terminates the optimization process if the relative change of the solution point estimate
is less than that value, TolFun specifies the final relative stopping tolerance for the
KKT (optimality) error, and TolCon specifies the final relative stopping tolerance for
the feasibility error.
Table 5. KNITRO settings.

Settings P&P HA QS

MaxIter 1000 10000 100

TolX 1e-6 1e-10 1e-3
TolFun 1e-6 1e-10 1e-3
TolCon 1e-6 1e-10 1e-3

4.2.6. MIDACO
The three settings considered in the analysis are reported in Table 6, where maxeval
is the maximum number of function evaluations. It is a distinctive feature of MIDACO
that allows the solver to stop exactly after that number of function evaluations. Due
to the limitations of the trial version of the solver, test functions with more than four
variables are not supported; for this reason, the following test functions
are excluded from the analysis: A.7, A.10, A.11, A.13, A.14, A.15, A.16, A.18, A.22, A.24
for unconstrained problems, and B.1, B.2, B.3, B.4, B.7, B.9, B.10, B.12, B.18, B.19,
B.20, B.21, B.22, B.26, B.29, B.30 for constrained problems.
Table 6. MIDACO settings.

Settings P&P HA QS

maxeval 50000 150000 10000

4.2.7. MQA
The three settings considered in the analysis are reported in Table 7, with ε1 and
ε2 the prescribed small positive tolerances that allow the solver to stop, when the
inequality Q̃ ≤ ε1 or R̃ ≤ ε2 is met. As mentioned in Section 3.9, MQA for NLP
problems can only treat equality constraints, namely all the inequality constraints are
converted into equality constraints by introducing the slack variables. In this study, for
all the three settings considered in the analysis, a value of 1 is chosen as initial guess
for all the slack variables.

Table 7. MQA settings.

Settings P&P HA QS

ε1 1e-5 1e-8 1e-2


ε2 1e-4 1e-5 1e-3

4.2.8. PENLAB
The three settings considered in the analysis are reported in Table 8, where
max_inner_iter is the maximum number of inner iterations, max_outer_iter
is the maximum number of outer iterations, mpenalty_min is the lower bound
for penalty parameters, inner_stop_limit is the termination tolerance for the in-
ner iterations, outer_stop_limit is the termination tolerance for the outer itera-
tions, kkt_stop_limit is the termination tolerance for the KKT optimality conditions, and
unc_dir_stop_limit is the stopping tolerance for the unconstrained minimization.
Table 8. PENLAB settings.

Settings P&P HA QS

max_inner_iter 100 1000 25


max_outer_iter 100 1000 25
mpenalty_min 1e-6 1e-9 1e-3
inner_stop_limit 1e-2 1e-9 1e-1
outer_stop_limit 1e-6 1e-9 1e-3
kkt_stop_limit 1e-4 1e-6 1e-2
unc_dir_stop_limit 1e-2 1e-9 1e-1

4.2.9. SGRA
The three settings considered in the analysis are reported in Table 9, with ε1 the
tolerance related to the constraint error P̃ , and ε2 the tolerance related to the optimal-
ity condition error Q̃. Considering that the SGRA can only treat equality constraints,
all the inequality constraints are converted into equality constraints by introducing the
slack variables. In this study, for all the three settings considered in the analysis, a
value of 1 is chosen for all the slack variables.
Table 9. SGRA settings.

Settings P&P HA QS

ε1 1e-9 1e-10 1e-8


ε2 1e-4 1e-6 1e-2

4.2.10. SNOPT
The three settings considered in the analysis are reported in Table 10, where
major_iterations_limit is the limit on the number of major iterations in the SQP
method, minor_iterations_limit is the limit on minor iterations in the QP subproblems,
major_feasibility_tolerance is the tolerance for feasibility of the nonlinear
constraints, major_optimality_tolerance is the tolerance for the dual variables, and
minor_feasibility_tolerance is the tolerance for the variables and their bounds.
Table 10. SNOPT settings.

Settings P&P HA QS

major_iterations_limit 1000 10000 100

minor_iterations_limit 500 5000 100
major_feasibility_tolerance 1e-6 1e-12 1e-3
major_optimality_tolerance 1e-6 1e-12 1e-3
minor_feasibility_tolerance 1e-6 1e-12 1e-3

4.2.11. SOLNP
The three settings considered in the analysis are reported in Table 11, with ρ the
penalty parameter in the augmented Lagrangian objective function, maj the maximum
number of major iterations, min the maximum number of minor iterations, δ the
perturbation parameter for numerical gradient calculation, and ǫ the relative tolerance
on optimality and feasibility. During the HA scenario implementation, we learned that
different convergence settings are required for unconstrained and constrained problems.
This peculiarity might be induced by the stringent tolerances adopted in this scenario.
Table 11. SOLNP settings. Tuning values for the HA scenario are divided for unconstrained (left-side) and
constrained (right-side) problems.

Settings P&P HA QS

ρ 1 1 1
maj 10 500|10 10
min 10 500|10 10
δ 1e-5 1e-10|1e-6 1e-3
ǫ 1e-4 1e-12|1e-7 1e-3

4.2.12. SQA
The three settings considered in the analysis are reported in Table 12, with ε1 and
ε2 the prescribed small positive tolerances that allow the solver to stop, when the
inequality Q̃ ≤ ε1 or R̃ ≤ ε2 is met. As mentioned earlier, SQA can only treat equality
constraints. To overcome this limitation, the inequality constraints are converted into
equality constraints by introducing slack variables. In this study, for all the three
settings considered in the analysis, a value of 1 is chosen for all the slack variables.
Table 12. SQA settings.

Settings P&P HA QS

ε1 1e-5 1e-8 1e-2


ε2 1e-4 1e-5 1e-3

5. Benchmark test functions and results

We present a collection of unconstrained and constrained optimization test problems


that are used to validate the performance of the various optimization algorithms pre-
sented above for the different implementation scenarios. The comparison results are
also discussed in depth in this section.
For performance comparison purposes, an equivalent environment and set of control
parameters have been created to run each NLP solver. All outputs tabulated in this paper
are calculated using MATLAB running on a desktop computer with the following
specs: Intel(R) Core(TM) i7-6700 CPU 3.40 GHz processor, 16.0 GB of RAM,
running a 64-bit Windows 10 operating system. To assess the true computational time
required by each algorithm to reach convergence, processes that are expected to have an
impact on the computer's performance are deactivated during the runs. The internet
connection and other unrelated applications are turned off throughout the analysis,
ensuring that unnecessary background activities do not access computational resources
during the solvers' execution. A collection of unconstrained and constrained benchmark
problems is used to test the solvers, selected from, but not limited to, (Hedar 2020;
Schittkowski 2009; Floudas and Pardalos 1999). Specifically, the benchmark problems
include combinations of logarithmic, trigonometric, and exponential terms, non-convex
and convex functions, a minimum of two to a maximum of thirty variables, and a
maximum of nine constraint functions for the constrained optimization problems. For
the sake of completeness, all the benchmark test functions are listed in Appendix A and B.
For each test function, the dimension, domain and search space, objective function,
constraints, and minimum solution are listed. As mentioned in Section 4.2, the comparison
between the solvers is carried out by considering three different settings: plug and play,
high accuracy, and quick solution. In this way we want to assess the robustness, accuracy,
and convergence speed of every solver. For each benchmark problem, all solvers use the
same set of randomly generated initial guesses.
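A minimal sketch of how such a shared set of random initial guesses can be generated (the bounds and the seed below are illustrative assumptions, not the actual benchmark domains) is:

```matlab
% Minimal sketch: K shared random initial guesses inside a search domain
% [lb, ub]; bounds and seed are illustrative assumptions.
K  = 50;  rng(1);                          % fixed seed: all solvers see the same guesses
lb = [-5; -5];  ub = [5; 5];               % example search-domain bounds
X0 = lb + (ub - lb).*rand(numel(lb), K);   % one initial guess per column
```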

5.1. Results for unconstrained optimization problems


A collection of 30 unconstrained optimization test problems is used to validate the
performance of the optimization algorithms. The benchmark test functions for uncon-
strained global optimization are listed in Appendix A. For the purpose of this analysis,
given N = 30 benchmark test functions, M = 16 solvers and algorithms, K = 50 ran-
domly generated initial guesses, and Z = 3 iterations, a set of N × M × K × Z runs are
executed. Tables 13, 14, and 15 report the results for the plug and play (P&P), high
accuracy (HA), and quick solution (QS) settings, respectively. From the analysis of the
results for the P&P settings, Table 13, we observe that BARON (auto) and BARON
(ipopt) are able to reach the minimum mean error and variance, the highest conver-
gence rate, but they are not the fastest ones to reach the solution. Moreover, BARON
(sd), BARON (sqp), SNOPT, and PENLAB are able to obtain good results in terms
of mean error and variance. Overall, PENLAB is also able to reach a convergence rate
similar to BARON (auto) and BARON (ipopt), with the advantage of being 10 times
faster than them. The worst results in terms of accuracy and convergence rate are
obtained by SOLNP and SGRA. For the HA settings, Table 14, we can observe similar
trends. In general, as expected, all the solvers manage to achieve a more accurate solu-
tion as they reduce the average error, increase their convergence rate, and increase the

18
average convergence time. MIDACO is now able to reach the highest convergence rate
together with all the versions of BARON. Overall PENLAB is the solver which delivers
a good trade-off in performance. With respect the P&P settings, SOLNP significantly
improves its convergence rate, whereas SGRA just slightly increase its performances.
It is interesting to observe that, KNITRO (interior-point) and KNITRO (sqp), aside
improving their convergence rate, increase their mean error and variance increase. De-
spite our effort, we are not sure how to explain this unexpected behaviour. Regarding
the QS settings, Table 15, generally all the solvers reduce their convergence time and
also decrease their convergence rate except for BARON (auto), BARON (ipopt), and
BARON (sqp) which remain unaltered. SQA, FMINUNC, SOLNP, and FMINUNC are
amongst the fastest to reach the solution but their convergence rate is quite low. In ad-
dition, conversely to all the other solvers that experience a smaller CPU time, BARON
is not always able to achieve a faster CPU time with respect to the P&P settings. The
same happens to the SGRA, probably due to its intrinsic iterative nature.
Table 13. All unconstrained problems, plug and play (P&P) settings. Solvers ranked w.r.t. convergence rate.

Ranking Solver Ē[%] σ̄[%] γ̄[%] CPU [s]

1 BARON (auto) 2.559e-07 2.171e-31 92.3 0.3434


2 BARON (ipopt) 2.559e-07 2.162e-31 92.3 0.2845
3 BARON (sd) 1.596e-03 2.615e-09 92.3 0.3863
4 BARON (sqp) 2.536e-07 4.231e-20 92.3 0.3389
12 FMINUNC (quasi-newton) 8.643e-02 1.669e-01 52.8 0.0045
8 FMINUNC (trust-region) 7.836e-02 2.068e-02 68.8 0.0153
11 KNITRO (active-set) 2.299e-01 4.979e-02 60.5 0.0200
10 KNITRO (interior-point) 2.703e-02 3.235e-02 61.2 0.0194
9 KNITRO (sqp) 1.121e-02 1.274e-02 61.7 0.0365
6 MIDACO 3.130e-01 1.626e-01 85.2 0.3193
14 MQA 2.031e-01 8.748e-02 51.7 0.1345
5 PENLAB 1.016e-03 5.340e-37 88.5 0.0125
16 SGRA 5.921e-01 8.627e-02 40.8 0.2227
7 SNOPT 7.008e-03 2.444e-02 73.8 0.0071
15 SOLNP 4.648e-01 1.908e-01 48.2 0.0097
13 SQA 2.362e-01 1.383e-01 52.5 0.0005

Table 14. All unconstrained problems, high accuracy (HA) settings. Solvers ranked w.r.t. mean error.

Ranking Solver Ē[%] σ̄[%] γ̄[%] CPU [s]

2 BARON (auto) 2.563e-07 2.171e-31 92.3 0.6550


3 BARON (ipopt) 2.563e-07 2.162e-31 92.3 0.6624
4 BARON (sd) 1.186e-06 3.611e-13 92.3 0.8777
1 BARON (sqp) 2.536e-07 4.231e-20 92.3 0.6334
9 FMINUNC (quasi-newton) 3.526e-03 8.852e-04 59.2 0.0062
11 FMINUNC (trust-region) 1.860e-02 1.423e-02 68.8 0.0238
13 KNITRO (active-set) 7.153e-02 1.883e-01 67.9 0.0273
12 KNITRO (interior-point) 5.329e-02 1.258e-01 68.3 0.0440
14 KNITRO (sqp) 9.411e-02 1.571e-01 69.1 0.0731
15 MIDACO 1.756e-01 9.387e-02 92.2 1.0174
8 MQA 3.160e-03 5.835e-05 52.1 0.1520
5 PENLAB 4.042e-06 8.944e-42 88.5 0.0121
16 SGRA 2.709e-01 1.335e-01 44.9 0.2555
7 SNOPT 1.260e-03 1.298e-03 74.2 0.0099
6 SOLNP 9.420e-04 7.900e-04 69.1 0.0095
10 SQA 3.984e-03 9.000e-05 53.2 0.0003

Table 15. All unconstrained problems, quick solution (QS) settings. Solvers ranked w.r.t. mean CPU time.

Ranking Solver Ē[%] σ̄[%] γ̄[%] CPU [s]

15 BARON (auto) 2.556e-07 2.171e-31 92.3 0.3692


16 BARON (ipopt) 2.556e-07 2.162e-31 92.3 0.3743
13 BARON (sd) 4.295e-06 2.853e-09 84.6 0.3684
14 BARON (sqp) 2.536e-07 4.231e-20 92.3 0.3690
2 FMINUNC (quasi-newton) 6.076e-01 8.157e-01 33.8 0.0024
5 FMINUNC (trust-region) 1.924e-01 1.997e-01 49.3 0.0108
8 KNITRO (active-set) 3.522e-01 3.677e-01 48.7 0.0171
7 KNITRO (interior-point) 3.231e-01 4.066e-01 49.1 0.0169
9 KNITRO (sqp) 3.900e-01 5.835e-01 50.4 0.0256
10 MIDACO 5.852e-02 7.128e-02 72.3 0.0692
11 MQA 2.405e-01 2.930e-01 42.2 0.1819
6 PENLAB 5.452e-05 5.623e-39 84.6 0.0118
12 SGRA 8.640e-01 2.211e-01 23.8 0.3033
3 SNOPT 1.581e-01 1.367e-01 66.4 0.0040
4 SOLNP 5.357e-01 3.847e-01 41.2 0.0093
1 SQA 1.964e-01 1.609e-01 43.3 0.0002

5.2. Results for constrained optimization problems


A collection of 30 constrained optimization test problems is used to validate the
performance of the optimization algorithms. The benchmark test functions for constrained
global optimization are listed in Appendix B. For the purpose of the analysis, given
N = 30 benchmark test functions, M = 21 solvers and algorithms, K = 50 randomly
generated initial guesses, and Z = 3 iterations, a set of N × M × K × Z runs are
executed. Tables 16, 17, and 18 report the results for the P&P, HA, and QS settings,
respectively. From the analysis of the results for the P&P settings, Table 16, we observe
that all the versions of BARON are able to reach almost the highest accuracy and the
best convergence rate, but they are not the fastest to reach the solution. MIDACO
is able to achieve the second best convergence rate, with an average CPU time that
is more than 50% faster than BARON. PENLAB obtains the best mean error and
variance, but this performance is tempered by a low convergence rate, together with
the SGRA, MQA, and SQA, which are also quite slow to reach the solution. FMINCON
(interior-point), KNITRO (interior-point), and SNOPT reach a convergence rate lower
than BARON and MIDACO, but they are significantly faster. Regarding the HA settings,
Table 17, similar considerations can be made for BARON, although in this case the
CPU time increases considerably. MIDACO shows an improvement in the convergence
rate, reaching values very similar to BARON. PENLAB still obtains the best
mean error and variance, but it has one of the lowest convergence rates, together with
the SGRA. In general, most of the solvers increase their convergence rate and decrease
their mean error, except for GCMMA and PENLAB. Regarding the QS settings,
Table 18, generally all the solvers decrease their convergence rate except for BARON
and PENLAB. The same considerations about BARON and PENLAB hold as in
the two previous scenarios. MIDACO reports a significant decrease in the convergence
rate. The different versions of BARON have similar CPU times with respect to the P&P
settings. FMINCON (interior-point), KNITRO (interior-point), and SNOPT reach a
convergence rate lower than BARON, but they are significantly faster. The worst
results in terms of convergence rate and CPU time are obtained by MQA and SQA.
Table 16. All constrained problems, plug and play (P&P) settings. Solvers ranked w.r.t. convergence rate.

Ranking Solver Ē[%] σ̄[%] γ̄[%] CPU [s]

17 APSO 1.512e+00 1.025e+00 39.2 0.1772


1 BARON (auto) 2.153e-03 2.728e-08 92.0 0.7016
2 BARON (ipopt) 2.441e-03 2.920e-08 92.0 0.8052
3 BARON (sd) 2.162e-03 2.566e-08 92.0 0.7539
4 BARON (sqp) 2.183e-03 1.478e-08 92.0 0.8188
10 FMINCON (active-set) 1.795e-01 2.123e-01 71.9 0.0204
6 FMINCON (interior-point) 1.985e-01 2.413e-01 75.9 0.0271
13 FMINCON (sqp) 1.908e-01 2.446e-01 69.3 0.0093
11 FMINCON (sqp-legacy) 1.893e-01 2.429e-01 69.4 0.0111
15 GCMMA 4.490e-01 3.742e-01 45.7 0.9681
12 KNITRO (active-set) 1.908e-01 2.759e-01 69.4 0.0472
7 KNITRO (interior-point) 1.718e-01 1.962e-01 74.6 0.0303
8 KNITRO (sqp) 1.788e-01 2.027e-01 72.9 0.1016
5 MIDACO 4.739e-01 2.718e-01 81.0 0.3331
16 MMA 7.188e-01 5.743e-01 44.1 0.5856
20 MQA 5.125e-01 3.460e-01 20.8 3.1559
18 PENLAB 1.127e-04 3.258e-41 31.0 0.0379
19 SGRA 6.360e-01 7.011e-01 30.3 0.9815
9 SNOPT 1.689e-01 2.010e-01 72.1 0.0040
14 SOLNP 3.243e-01 3.211e-01 48.1 0.0095
21 SQA 3.990e-01 5.778e-01 20.2 3.1822

Table 17. All constrained problems, high accuracy (HA) settings. Solvers ranked w.r.t. mean error.

Ranking Solver Ē[%] σ̄[%] γ̄[%] CPU [s]

21 APSO 1.173e+00 1.014e+00 45.9 1.0168


2 BARON (auto) 2.054e-03 4.556e-17 92.0 1.7958
5 BARON (ipopt) 2.055e-03 1.210e-09 92.0 1.9877
3 BARON (sd) 2.054e-03 4.680e-17 92.0 1.8107
4 BARON (sqp) 2.054e-03 4.120e-17 92.0 1.8730
13 FMINCON (active-set) 1.770e-01 2.082e-01 72.3 0.0214
10 FMINCON (interior-point) 1.985e-01 2.413e-01 75.9 0.0326
11 FMINCON (sqp) 1.881e-01 2.388e-01 69.4 0.0082
12 FMINCON (sqp-legacy) 1.857e-01 2.365e-01 69.7 0.0110
17 GCMMA 5.112e-01 5.668e-01 45.2 1.0599
8 KNITRO (active-set) 1.881e-01 2.691e-01 70.1 0.0698
6 KNITRO (interior-point) 1.718e-01 1.962e-01 75.2 0.0357
7 KNITRO (sqp) 1.785e-01 2.027e-01 73.0 0.1360
15 MIDACO 2.735e-01 2.648e-01 85.9 1.0503
20 MMA 9.786e-01 5.748e-01 42.1 0.7101
18 MQA 5.358e-01 4.601e-01 20.8 3.2012
1 PENLAB 1.502e-04 6.711e-39 31.0 0.0488
19 SGRA 6.248e-01 7.673e-01 30.0 0.9632
9 SNOPT 1.689e-01 2.010e-01 72.4 0.0069
16 SOLNP 2.949e-01 3.106e-01 44.8 0.0112
14 SQA 2.754e-01 2.838e-01 20.1 3.1838

Table 18. All constrained problems, quick solution (QS) settings. Solvers ranked w.r.t. mean CPU time.

Ranking Solver Ē[%] σ̄[%] γ̄[%] CPU [s]

10 APSO 1.531e+00 5.677e-01 35.2 0.0538


17 BARON (auto) 7.925e-03 9.069e-05 92.0 0.7393
16 BARON (ipopt) 1.152e-02 1.109e-03 92.0 0.7357
18 BARON (sd) 2.766e-02 2.935e-04 92.0 0.7670
19 BARON (sqp) 1.534e-02 7.491e-05 92.0 0.8652
5 FMINCON (active-set) 2.850e-01 3.484e-01 68.9 0.0165
6 FMINCON (interior-point) 2.166e-01 2.554e-01 72.3 0.0262
2 FMINCON (sqp) 1.916e-01 2.448e-01 69.0 0.0071
4 FMINCON (sqp-legacy) 1.902e-01 2.431e-01 69.1 0.0092
15 GCMMA 6.967e-01 4.256e-01 45.5 0.5574
8 KNITRO (active-set) 2.148e-01 2.767e-01 66.0 0.0295
7 KNITRO (interior-point) 2.105e-01 2.826e-01 69.5 0.0268
11 KNITRO (sqp) 2.207e-01 3.002e-01 70.3 0.0632
12 MIDACO 8.355e-01 5.483e-01 58.6 0.0723
14 MMA 1.161e+00 1.189e+00 41.0 0.1324
20 MQA 5.844e-01 4.193e-01 20.8 3.1174
9 PENLAB 1.896e-04 1.454e-37 31.0 0.0323
13 SGRA 8.774e-01 1.198e+00 27.5 0.9369
1 SNOPT 1.767e-01 2.045e-01 70.2 0.0027
3 SOLNP 4.790e-01 6.452e-01 46.6 0.0087
21 SQA 3.316e-01 3.131e-01 20.1 3.1361

6. Conclusions

In this paper we provide an explicit comparison of a set of NLP solvers. The compar-
ison includes popular solvers which are readily available in MATLAB, a few gradient
descent methods that have been extensively used in the literature, and a particle swarm
optimization algorithm. Because of its widespread use among research groups, both in
academia and the private sector, we have used MATLAB as the common implementation
platform. Constrained and unconstrained NLP problems have been selected amongst the
standard benchmark problems, with up to thirty variables and up to nine scalar constraints.
Results for the unconstrained problems show that BARON is the algorithm that delivers
the best convergence rate and accuracy, but it is the slowest. PENLAB is the algorithm
that offers the best trade-off between accuracy, convergence rate, and speed. For the
constrained NLP problems, again, BARON is the solver which delivers excellent accu-
racy and convergence rate but is amongst the slowest. FMINCON, KNITRO, SNOPT,
and MIDACO are the ones that deliver a fair compromise between accuracy,
convergence rate, and speed.

Data availability statement

Data available on request from the authors.

Disclosure statement

The authors declare that they have no known competing financial interests or personal
relationships that could have appeared to influence the work reported in this paper.

Funding

This research received no external funding.

References

Box, M. J. 1966. “A comparison of several current optimization methods, and the use of
transformations in constrained problems.” The Computer Journal 9 (1): 67–77.
Boyd, S., and L. Vandenberghe. 2004. Convex Optimization. Cambridge, United Kingdom: Cambridge University Press.
https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf.
Charalambous, C. 1979. “Acceleration of the Least pth Algorithm for MiniMax Optimization
with Engineering Applications.” Mathematical Programming 17: 270–297.
COIN-OR. 2016. “Computational Optimization Infrastructure for Operations Research.”
https://www.coin-or.org/.
COIN-OR. 2021. “IPOPT.” https://coin-or.github.io/Ipopt/.
Coker, Estelle Mathilda. 1985. “Sequential gradient-restoration algorithm for optimal control problems with control inequality constraints and general boundary conditions.” PhD diss., Rice University.
https://www.proquest.com/dissertations-theses/sequential-gradient-restoration-algorithm-optimal

Eloe, P. W., and J. Jonnalagadda. 2019. “Quasilinearization and boundary value problems
for Riemann-Liouville fractional differential equations.” Electron. J. Differential Equations
2019 (58): 1–15. https://ejde.math.txstate.edu/Volumes/2019/58/eloe.pdf.
Fiala, Jan, Michal Kočvara, and Michael Stingl. 2013. “PENLAB: A MATLAB solver for
nonlinear semidefinite optimization.” https://arxiv.org/abs/1311.5240.
FilterSD. 2020. “Computational Infrastructure for Operations Research, COIN-OR project.”
https://projects.coin-or.org/filterSD/export/19/trunk/filterSD.pdf.
Firm, The Optimization. 2021. “Analytics and Optimization Software.”
https://minlp.com/baron-downloads.
Fletcher, R., and S. Leyffer. 1999. User manual for filterSQP. Technical Report.
University of Dundee, Department of Mathematics, Dundee, Scotland, U.K.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.139.7769&rep=rep1&type=pdf.
Floudas, C.A., P.M. Pardalos, et al. 1999. “Handbook of Test Problems in Local and
Global Optimization.” In Nonconvex Optimization and Its Applications, Vol. 33. Dordrecht,
The Netherlands: Kluwer Academic Publishers.
Frank, P. D., and G. R. Shubin. 1992. “A Comparison of Optimization-Based Approaches for
a Model Computational Aerodynamics Design Problem.” Journal of Computational Physics
98 (1): 74–89.
Gearhart, J. L., K. L. Adair, R. J. Detry, J. D. Durfee, K. A. Jones, and N. Martin. 2013.
Comparison of Open-Source Linear Programming Solvers. Technical Report. Sandia Na-
tional Laboratory.
George, G., and K. Raimond. 2013. “A Survey on Optimization Algorithms for Optimizing
the Numerical Functions.” International Journal of Computer Applications 61: 41–46.
https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.303.5096&rep=rep1&type=pdf.
Gill, Philip, Walter Murray, Michael Saunders, Arne Drud, and Erwin Kalvelagen. 2001.
“SNOPT: An SQP algorithm for large-scale constrained optimization.” SIAM Review 47.
Gill, Philip E., Walter Murray, Michael A. Saunders, and Elizabeth Wong. 2018. User’s Guide
for SNOPT 7.7: Software for Large-Scale Nonlinear Programming. Center for Computa-
tional Mathematics Report CCoM 18-1. La Jolla, CA: Department of Mathematics, Uni-
versity of California, San Diego. https://ccom.ucsd.edu/~optimizers/downloads/.
Grossmann, I. E. 1996. Global Optimization in Engineering Design. Berlin/Heidelberg, Ger-
many: Springer-Science+Business, B.V.
Grossmann, I. E., and Zd. Kravanja. 1997. “Mixed-Integer Nonlinear Programming: A Survey
of Algorithms and Applications.” In Biegler L.T., Coleman T.F., Conn A.R., Santosa F.N.
(eds) Large-Scale Optimization with Applications, Vol. 93, 73–100. New York, NY: Springer.
Hamdy, M., A. Nguyen, and J. L. Hensen. 2016. “A Performance Comparison of Multi-objective
Optimization Algorithms for Solving Nearly-zero-energy-building Design Problems.” Energy
and Buildings 121: 57–71.
Haupt, Randy. 1995. “Comparison Between Genetic and Gradient-Based Optimization Al-
gorithms for Solving Electromagnetics Problems.” IEEE Transactions on Magnetics 31:
1932–1935.
Hedar, Abdel Rahman. 2020. “Global Optimization Test Problems.”
http://www-optima.amp.i.kyoto-u.ac.jp/member/student/hedar/Hedar_files/TestGO.htm.
Karaboga, D., and B. Basturk. 2008. “On the Performance of Artificial Bee Colony (ABC)
Algorithm.” Applied Soft Computing 8 (1): 687–697.
Knitro, Artelys. 2021a. Artelys Knitro User’s Manual.
https://www.artelys.com/docs/knitro//index.html.
Knitro, Artelys. 2021b. “Artelys Optimization Solutions.”
https://www.artelys.com/solvers/knitro/.
Kocvara, Michal. 2017. “PENLAB.” http://web.mat.bham.ac.uk/kocvara/penlab/.
Kocvara, Michal, and Michael Stingl. 2003. “PENNON - A generalized augmented Lagrangian
method for semidefinite programming.” In High Performance Algorithms and Software for
Nonlinear Optimization, edited by Almerico Murli Gianni Di Pillo, Vol. 82 of Applied Op-
timization, 303–321.

Kronqvist, J., D. E. Bernal, A. Lundell, and I. E. Grossmann. 2018. “A Review and Comparison
of Solvers for Convex MINLP.” Optimization and Engineering 20: 397–455.
Lasdon, L. A., and A. D. Warren. 1980. “Survey of Nonlinear Programming Applications.”
Journal of Operations Research Society of America 28 (5): 1029–1073.
Levy, A. V., and V. Guerra. 1976. On the Optimization of Constrained Functions: Comparison
of Sequential Gradient-Restoration Algorithm and Gradient-Projection Algorithm. Amster-
dam, The Netherlands: American Elsevier Publishing Company.
MathWorks. 2020a. “Fmincon.” https://www.mathworks.com/help/optim/ug/fmincon.html#busp5fq-6.
MathWorks. 2020b. “Fminunc.” https://www.mathworks.com/help/optim/ug/fminunc.html#but9q82-2_head.
MathWorks. 2020c. “Quasi-Newton algorithm.” https://www.mathworks.com/help/optim/ug/unconstrained-nonli
MATLAB. 2020. “The MathWorks Inc.” Natick, MA, USA.
https://www.mathworks.com/products/matlab.html.
McIlhagga, M., P. Husbands, and R. Ives. 1996. “A Comparison of Optimization Techniques for
Integrated Manufacturing Planning and Scheduling.” In Voigt HM., Ebeling W., Rechenberg
I., Schwefel HP. (eds) Parallel Problem Solving from Nature, Vol. 1141, 604–613. Berlin,
Heidelberg: Springer.
MIDACO-Solver, user manual. 2021. MIDACO-SOLVER: Numerical High-Performance Opti-
mization Software. http://www.midaco-solver.com/index.php/download.
Miele, A., H. Y. Huang, and J. C. Heideman. 1969. “Sequential gradient-restoration algorithm
for the minimization of constrained functions—Ordinary and conjugate gradient versions.”
Journal of Optimization Theory and Applications 4 (4): 213–243.
Miele, A., and R. R. Iyer. 1971. “Modified quasilinearization method for solving nonlinear,
two-point boundary-value problems.” Journal of Mathematical Analysis and Applications
36 (3): 674–692.
Miele, A., A. Mangiavacchi, and A. K. Aggarwal. 1974. “Modified quasilinearization algorithm
for optimal control problems with nondifferential constraints.” Journal of Optimization The-
ory and Applications 14 (5): 529–556.
Neumaier, A., O. Shcherbina, W. Huyer, and T. Vinko. 2005. “A Comparison of Complete
Global Optimization Solvers.” Mathematical Programming 103: 335–356.
Nocedal, Jorge, and Stephen J. Wright. 2006. Numerical Optimization.
Berlin/Heidelberg, Germany: Springer Science+Business Media, LLC.
https://link.springer.com/book/10.1007/978-0-387-40065-5.
Obayash, S., and T. Tsukahara. 1997. “Comparison of Optimization Algorithms for Aerody-
namic Shape Design.” AIAA Journal 35: 1413–1415.
Polyak, R.A. 1992. “Modified barrier functions (theory and methods).” Mathematical Program-
ming, Series B 54: 177–222.
Pucher, H., and V. Stix. 2008. “Comparison of Nonlinear Optimization Methods on a Multinomial Logit-Model in R.” International Multi-Conference on Engineering and Technological Innovation
https://www.iiis.org/cds2009/cd2009sci/imeti2009/PapersPdf/F216BU.pdf.
Rustagi, J. 1994. Optimization Techniques in Statistics. Cambridge, Massachusetts: Academic
Press Limited.
Sahinidis, N. n.d. BARON user manual. The Optimization Firm LLC.
http://www.minlp.com/.
Saxena, P. 2012. “Comparison of Linear and Nonlinear Programming Techniques for Animal
Diet.” Applied Mathematics 1: 106–108.
Schittkowski, K. 2009. Test Examples for Nonlinear Pro-
gramming Codes. Technical Report. University of Bayreuth.
http://www.apmath.spbu.ru/cnsa/pdf/obzor/Schittkowski_Test_problem.pdf.
Schittkowski, K., C. Zillober, and R. Zotemantel. 1994. “Numerical Comparison of Nonlinear
Programming Algorithms for Structural Optimization.” Structural Optimization 7: 1–19.
Schlueter, M., S. O. Erb, M. Gerdts, S. Kemble, and J. Rückmann. 2013. “MIDACO on MINLP
space applications.” Advances in Space Research 51 (7): 1116–1131.
Svanberg, K. 1987. “The method of moving asymptotes - A new method for structural optimization.” International Journal for Numerical Methods in Engineering 24: 359–373.
Svanberg, K. 2002. “A Class of Globally Convergent Optimization Methods Based on Conser-
vative Convex Separable Approximations.” SIAM Journal on Optimization 12: 555–573.
Svanberg, K. 2020. “MMA and GCMMA Matlab code.” http://www.smoptit.se/.
Svanberg, Krister. 2007. “MMA and GCMMA – two methods for nonlinear optimization.” vol
1: 1–15. https://people.kth.se/~krille/mmagcmma.pdf.
Tawarmalani, M., and N. V. Sahinidis. 2004. “Global Optimization of Mixed-integer Nonlinear
Programs: A Theoretical and Computational Study.” Mathematical Programming 99: 563–
591.
Wansuo, D., and L. Haiying. 2010. “A New Strategy for Solving a Class of Constrained Non-
linear Optimization Problems Related to Weather and Climate Predictability.” Advances in
Atmospheric Sciences 27: 741–749.
Wu, X., and S. L. William. 1992. “Assimilation of ERBE Data with a Nonlinear Programming
Technique to Improve Cloud-Cover Diagnosis.” American Meteorological Society 120: 2009–
2024.
Wächter, A., and L. Biegler. 2006. “On the Implementation of an Interior-Point Filter Line-
Search Algorithm for Large-Scale Nonlinear Programming.” Mathematical programming 106:
25–57.
Yang, X. 2014. Nature-inspired metaheuristic algorithms. United Kingdom: Luniver press.
Ye, Yinyu. 1989. SOLNP USERS’ GUIDE - A Nonlinear Optimization Program in MATLAB.
https://web.stanford.edu/~yyye/matlab/manual.ps.
Ye, Yinyu. 2020. “SOLNP.” https://web.stanford.edu/~yyye/matlab.html.
Yeo, B. P. 1974. “A quasilinearization algorithm and its application to a manipulator problem.”
International Journal of Control 20 (4): 623–640.
Yuan, G., K. Chang, C. Hsieh, and C. Lin. 2010. “A Comparison of Opti-
mization Methods and Software for Large-scale L1-regularized Linear Clas-
sification.” The Journal of Machine Learning Research 11: 3183–3234.
https://www.jmlr.org/papers/volume11/yuan10c/yuan10c.pdf.
Ziemba, W. T., and R. G. Vickson. 1975. Stochastic Optimization Models in Finance. Cam-
bridge, Massachusetts: Academic Press INC.

Appendix A. Benchmark Test Functions for Unconstrained Global Optimization

Appendix A.1. Beale Function


• Dimension: 2;
• Domain: −4.5 ≤ xi ≤ 4.5;
• Function:
f(x) = (1.5 − x1 + x1 x2)^2 + (2.25 − x1 + x1 x2^2)^2 + (2.625 − x1 + x1 x2^3)^2; (A1)

• Global minimum at x∗ = (3, 0.5), f (x∗ ) = 0.
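As an illustration of how these unconstrained benchmarks can be used, the following MATLAB sketch (assumed, not taken from the paper's code) minimizes the Beale function of Eq. (A1) with fminunc from a random initial guess inside the search domain.

% Beale function of Eq. (A1) minimized with fminunc (quasi-Newton).
beale = @(x) (1.5 - x(1) + x(1)*x(2))^2 + ...
             (2.25 - x(1) + x(1)*x(2)^2)^2 + ...
             (2.625 - x(1) + x(1)*x(2)^3)^2;
x0 = -4.5 + 9*rand(2,1);                       % random point in [-4.5, 4.5]^2
opts = optimoptions('fminunc', 'Algorithm', 'quasi-newton', 'Display', 'off');
[xopt, fopt] = fminunc(beale, x0, opts);       % expected: xopt ≈ [3; 0.5], fopt ≈ 0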

Appendix A.2. Bohachevsky 1 Function


• Dimension: 2;
• Domain: −100 ≤ xi ≤ 100;

• Function:

f(x) = x1^2 + 2 x2^2 − 0.3 cos(3πx1) − 0.4 cos(4πx2) + 0.7; (A2)

• Global minimum at x∗ = (0, 0), f (x∗ ) = 0.

Appendix A.3. Bohachevsky 2 Function


• Dimension: 2;
• Domain: −100 ≤ xi ≤ 100;
• Function:

f(x) = x1^2 + 2 x2^2 − 0.3 cos(3πx1) cos(4πx2) + 0.3; (A3)

• Global minimum at x∗ = (0, 0), f (x∗ ) = 0.

Appendix A.4. Bohachevsky 3 Function


• Dimension: 2;
• Domain: −100 ≤ xi ≤ 100;
• Function:

f(x) = x1^2 + 2 x2^2 − 0.3 cos(3πx1 + 4πx2) + 0.3; (A4)

• Global minimum at x∗ = (0, 0), f (x∗ ) = 0.

Appendix A.5. Branin RCOS Function


• Dimension: 2;
• Domain: −5 ≤ x1 ≤ 10, 0 ≤ x2 ≤ 15;
• Function:
f(x) = (−(5.1/(4π^2)) x1^2 + (5/π) x1 + x2 − 6)^2 + 10 (1 − 1/(8π)) cos(x1) + 10; (A5)

• Global minimum at x∗ = (−π, 12.275), (π, 2.275), (9.42478, 2.475), f(x∗) = 0.397887.

Appendix A.6. Colville Function


• Dimension: 4;
• Domain: −10 ≤ xi ≤ 10;
• Function:

f(x) = 100 (x1^2 − x2)^2 + (x1 − 1)^2 + (x3 − 1)^2 + 90 (x3^2 − x4)^2 + 10.1 ((x2 − 1)^2 + (x4 − 1)^2) + 19.8 (x2 − 1)(x4 − 1); (A6)

• Global minimum at x∗ = (1, 1, 1, 1), f (x∗ ) = 0.

Appendix A.7. Dixon & Price Function
• Dimension: 25;
• Domain: −10 ≤ xi ≤ 10;
• Function:

f(x) = (x1 − 1)^2 + Σ_{i=2}^{n} i (2 xi^2 − x_{i−1})^2; (A7)

• Global minimum at x_i∗ = 2^{−(2^i − 2)/2^i} with i = 1, ..., n, f(x∗) = 0.

Appendix A.8. Hump Function


• Dimension: 2;
• Domain: −5 ≤ xi ≤ 5;
• Function:
f(x) = x1^2 (x1^4/3 − 2.1 x1^2 + 4) + x1 x2 + x2^2 (4 x2^2 − 4); (A8)

• Global minimum at x∗ = {0.0898, −0.7126}, {−0.0898, 0.7126}, f (x∗ ) = 0.

Appendix A.9. Matyas Function


• Dimension: 2;
• Domain: −10 ≤ xi ≤ 10;
• Function:

f(x) = 0.26 (x1^2 + x2^2) − 0.48 x1 x2; (A9)

• Global minimum at x∗ = (0, 0), f (x∗ ) = 0.

Appendix A.10. Perm(n,β) Function


• Dimension: 10;
• Domain: −10 ≤ xi ≤ 10;
• Function:
f(x) = Σ_{k=1}^{n} [ Σ_{i=1}^{n} (i^k + β) ((xi/i)^k − 1) ]^2, with β = 0.5; (A10)

• Global minimum at x∗ = i with i = 1, ..., n, f (x∗ ) = 0.

Appendix A.11. Powell singular Function


• Dimension: 16;
• Domain: −4 ≤ xi ≤ 5;

• Function:

f(x) = Σ_{i=1}^{n/4} [ (x_{4i−3} + 10 x_{4i−2})^2 + 5 (x_{4i−1} − x_{4i})^2 + (x_{4i−2} − 2 x_{4i−1})^4 + 10 (x_{4i−3} − x_{4i})^4 ]; (A11)

• Global minimum at x∗ = (0, ..., 0), f (x∗ ) = 0.

Appendix A.12. Power Sum Function


• Dimension: 4;
• Domain: 0 ≤ xi ≤ 256;
• Function:
f(x) = Σ_{k=1}^{n} [ ( Σ_{i=1}^{n} xi^k ) − b_k ]^2, with b = (8, 18, 44, 114); (A12)

• Global minimum at x∗ = (1, 2, 2, 3), f(x∗) = 0.

Appendix A.13. Sphere Function


• Dimension: 30;
• Domain: −5.12 ≤ xi ≤ 5.12;
• Function:

f(x) = Σ_{i=1}^{n} xi^2; (A13)

• Global minimum at x∗ = (0, ..., 0), f (x∗ ) = 0.

Appendix A.14. Sum Squares Function


• Dimension: 30;
• Domain: −10 ≤ xi ≤ 10;
• Function:

f(x) = Σ_{i=1}^{n} i xi^2; (A14)

• Global minimum at x∗ = (0, ..., 0), f (x∗ ) = 0.

Appendix A.15. Trid Function


• Dimension: 10;
• Domain: −100 ≤ xi ≤ 100;

• Function:

f(x) = Σ_{i=1}^{n} (xi − 1)^2 − Σ_{i=2}^{n} xi x_{i−1}; (A15)

• Global minimum at x∗ = i ∗ (11 − i) with i = 1, ..., n, f (x∗ ) = −210.

Appendix A.16. Zakharov Function


• Dimension: 20;
• Domain: −5 ≤ xi ≤ 10;
• Function:

f(x) = Σ_{i=1}^{n} xi^2 + ( 0.5 Σ_{i=1}^{n} i xi )^2 + ( 0.5 Σ_{i=1}^{n} i xi )^4; (A16)

• Global minimum at x∗ = (0, ..., 0), f (x∗ ) = 0.
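For the higher-dimensional benchmarks such as this one, a vectorized MATLAB handle keeps the problem definition compact; the sketch below (an assumed implementation of Eq. (A16), not the paper's code) can be passed to any of the solvers that accept function handles.

% Vectorized handle for the 20-dimensional Zakharov function of Eq. (A16).
n = 20;
zakharov = @(x) sum(x(:).^2) + (0.5*sum((1:n)'.*x(:)))^2 + (0.5*sum((1:n)'.*x(:)))^4;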

Appendix A.17. Branin RCOS 2 Function


• Dimension: 2;
• Domain: −5 ≤ xi ≤ 15;
• Function:
f(x) = (−(5.1/(4π^2)) x1^2 + (5/π) x1 + x2 − 6)^2 + 10 (1 − 1/(8π)) cos(x1) cos(x2) ln(x1^2 + x2^2 + 1) + 10; (A17)

• Global minimum at x∗ = (−3.2, 12.53), f (x∗ ) = 5.559037.

Appendix A.18. Ackley 1 Function


• Dimension: 10;
• Domain: −15 ≤ xi ≤ 30;
• Function:

f(x) = −20 e^{−0.2 √((1/n) Σ_{i=1}^{n} xi^2)} − e^{(1/n) Σ_{i=1}^{n} cos(2π xi)} + 20 + e; (A18)

• Global minimum at x∗ = (0, ..., 0), f (x∗ ) = 0.

Appendix A.19. Ackley 2 Function


• Dimension: 2;
• Domain: −32 ≤ xi ≤ 32;
• Function:

f(x) = −200 e^{−0.02 √(x1^2 + x2^2)}; (A19)
• Global minimum at x∗ = (0, 0), f (x∗ ) = −200.

Appendix A.20. Camel 3 Function


• Dimension: 2;
• Domain: −5 ≤ xi ≤ 5;
• Function:
f(x) = 2 x1^2 − 1.05 x1^4 + x1^6/6 + x1 x2 + x2^2; (A20)

• Global minimum at x∗ = (0, 0), f (x∗ ) = 0.

Appendix A.21. Booth Function


• Dimension: 2;
• Domain: −10 ≤ xi ≤ 10;
• Function:

f(x) = (x1 + 2x2 − 7)^2 + (2x1 + x2 − 5)^2; (A21)

• Global minimum at x∗ = (1, 3), f (x∗ ) = 0.

Appendix A.22. Brown Function


• Dimension: 14;
• Domain: −1 ≤ xi ≤ 4;
• Function:

f(x) = Σ_{i=1}^{n−1} [ (xi^2)^{(x_{i+1}^2 + 1)} + (x_{i+1}^2)^{(xi^2 + 1)} ]; (A22)

• Global minimum at x∗ = (0, ..., 0), f (x∗ ) = 0.

Appendix A.23. Cube Function


• Dimension: 2;
• Domain: −10 ≤ xi ≤ 10;
• Function:
f(x) = 100 (x2 − x1^3)^2 + (1 − x1)^2; (A23)

• Global minimum at x∗ = (−1, 1), f (x∗ ) = 0.

Appendix A.24. Exponential Function


• Dimension: 18;
• Domain: −1 ≤ xi ≤ 1;

• Function:

f(x) = −e^{−0.5 Σ_{i=1}^{n} xi^2}; (A24)

• Global minimum at x∗ = (0, ..., 0), f(x∗) = −1.

Appendix A.25. Freudenstein Roth Function


• Dimension: 2;
• Domain: −10 ≤ xi ≤ 10;
• Function:

f(x) = (x1 − 13 + x2 ((5 − x2) x2 − 2))^2 + (x1 − 29 + x2 ((x2 + 1) x2 − 14))^2; (A25)

• Global minimum at x∗ = (5, 4), f (x∗ ) = 0.

Appendix A.26. Miele Cantrell Function


• Dimension: 4;
• Domain: −1 ≤ xi ≤ 1;
• Function:
f(x) = (e^{−x1} − x2)^4 + 100 (x2 − x3)^6 + (tan(x3 − x4))^4 + x1^8; (A26)

• Global minimum at x∗ = (0, 1, 1, 1), f (x∗ ) = 0.

Appendix A.27. Quadratic Function


• Dimension: 2;
• Domain: −10 ≤ xi ≤ 10;
• Function:

f(x) = −3803.84 − 138.08 x1 − 232.92 x2 + 128.08 x1^2 + 203.64 x2^2 + 182.25 x1 x2; (A27)

• Global minimum at x∗ = (0.19388, 0.48513), f (x∗ ) = −3873.7243.

Appendix A.28. Rotated Ellipse Function


• Dimension: 2;
• Domain: −500 ≤ xi ≤ 500;
• Function:

f(x) = 7 x1^2 − 6√3 x1 x2 + 13 x2^2; (A28)

• Global minimum at x∗ = (0, 0), f (x∗ ) = 0.

Appendix A.29. Rump Function
• Dimension: 2;
• Domain: −500 ≤ xi ≤ 500;
• Function:
f(x) = (333.75 − x1^2) x2^6 + x1^2 (11 x1^2 x2^2 − 121 x2^4 − 2) + 5.5 x2^8 + x1/(2 + x2); (A29)

• Global minimum at x∗ = (0, 0), f (x∗ ) = 0.

Appendix A.30. Wayburn Seader 3 Function


• Dimension: 2;
• Domain: −500 ≤ xi ≤ 500;
• Function:

f(x) = (2/3) x1^3 − 8 x1^2 + 33 x1 − x1 x2 + 5 + ((x1 − 4)^2 + (x2 − 5)^2 − 4)^2; (A30)

• Global minimum at x∗ = (5.611, 6.187), f(x∗) = 21.35.

Appendix B. Benchmark Test Functions for Constrained Global Optimization

Appendix B.1.
• Dimension: 13;
• Domain and search space: 0 ≤ xi ≤ ui , with
u = (1, 1, 1, ..., 1, 100, 100, 100, 1);
• Function:

f(x) = 5 Σ_{i=1}^{4} xi − 5 Σ_{i=1}^{4} xi^2 − Σ_{i=5}^{n} xi; (B1)

• Constraints:

c1(x) = 2x1 + 2x2 + x10 + x11 − 10 ≤ 0;
c2(x) = 2x1 + 2x3 + x10 + x12 − 10 ≤ 0;
c3(x) = 2x2 + 2x3 + x11 + x12 − 10 ≤ 0;
c4(x) = −8x1 + x10 ≤ 0;
c5(x) = −8x2 + x11 ≤ 0;
c6(x) = −8x3 + x12 ≤ 0;
c7(x) = −2x4 − x5 + x10 ≤ 0;
c8(x) = −2x6 − x7 + x11 ≤ 0;
c9(x) = −2x8 − x9 + x12 ≤ 0; (B2)

• Global minimum at x∗ = (1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1), f (x∗ ) = −15.
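Since all nine constraints of this problem are linear, it can be passed to a solver such as fmincon using the linear-inequality interface A*x ≤ b rather than a nonlinear-constraint function. The MATLAB sketch below is an assumed illustration, not the authors' script.

% Problem B.1 set up for fmincon with linear inequalities A*x <= b.
n  = 13;
f  = @(x) 5*sum(x(1:4)) - 5*sum(x(1:4).^2) - sum(x(5:13));
A  = zeros(9, n);
b  = [10; 10; 10; 0; 0; 0; 0; 0; 0];
A(1,[1 2 10 11]) = [2 2 1 1];   A(2,[1 3 10 12]) = [2 2 1 1];
A(3,[2 3 11 12]) = [2 2 1 1];
A(4,[1 10]) = [-8 1];  A(5,[2 11]) = [-8 1];  A(6,[3 12]) = [-8 1];
A(7,[4 5 10]) = [-2 -1 1];  A(8,[6 7 11]) = [-2 -1 1];  A(9,[8 9 12]) = [-2 -1 1];
lb = zeros(n,1);
ub = [ones(9,1); 100; 100; 100; 1];
x0 = lb + rand(n,1).*(ub - lb);
[xopt, fopt] = fmincon(f, x0, A, b, [], [], lb, ub);   % expected fopt ≈ -15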

Appendix B.2.
• Dimension: 20;
• Domain and search space: 0 ≤ xi ≤ 10;
• Function:

f(x) = −( Σ_{i=1}^{n} cos^4(xi) − 2 Π_{i=1}^{n} cos^2(xi) ) / √( Σ_{i=1}^{n} i xi^2 ); (B3)

• Constraints:

c1(x) = −Π_{i=1}^{n} xi + 0.75 ≤ 0;
c2(x) = Σ_{i=1}^{n} xi − 7.5n ≤ 0; (B4)

• Global minimum (best known) at f (x∗ ) = −0.803619.

Appendix B.3.
• Dimension: 10;
• Domain and search space: 0 ≤ xi ≤ 1;
• Function:

f(x) = −(√n)^n Π_{i=1}^{n} xi; (B5)

• Constraints:

c1(x) = Σ_{i=1}^{n} xi^2 − 1 = 0; (B6)

• Global minimum at x∗ = ((1/10)^0.5, ..., (1/10)^0.5), f(x∗) = −1.

Appendix B.4.
• Dimension: 5;
• Domain and search space: li ≤ xi ≤ ui , with l = (78, 33, 27, 27, 27)
and u = (102, 45, 45, 45, 45);
• Function:

f(x) = 5.3578547 x3^2 + 0.8356891 x1 x5 + 37.293239 x1 − 40792.141; (B7)

• Constraints:

c1(x) = −u ≤ 0;
c2(x) = u − 92 ≤ 0;
c3(x) = −v + 90 ≤ 0;
c4(x) = v − 110 ≤ 0;
c5(x) = −w + 20 ≤ 0;
c6(x) = w − 25 ≤ 0; (B8)

with
u = 85.334407 + 0.0056858 x2 x5 + 0.0006262 x1 x4 − 0.0022053 x3 x5;
v = 80.51249 + 0.0071317 x2 x5 + 0.0029955 x1 x2 + 0.0021813 x3^2;
w = 9.300961 + 0.0047026 x3 x5 + 0.0012547 x1 x3 + 0.0019085 x3 x4;

• Global minimum at x∗ = (78, 33, 29.995, 45, 36.7758), f (x∗ ) = −30665.539.

Appendix B.5.
• Dimension: 4;
• Domain and search space: li ≤ xi ≤ ui , with l = (0, 0, −0.55, −0.55)
and u = (1200, 1200, 0.55, 0.55);
• Function:

f(x) = 3 x1 + 10^{−6} x1^3 + 2 x2 + (2/3)·10^{−6} x2^3; (B9)
• Constraints:

c1(x) = x3 − x4 − 0.55 ≤ 0;
c2(x) = x4 − x3 − 0.55 ≤ 0;
c3(x) = 1000 (sin(−x3 − 0.25) + sin(−x4 − 0.25)) + 894.8 − x1 = 0;
c4(x) = 1000 (sin(x3 − 0.25) + sin(x3 − x4 − 0.25)) + 894.8 − x2 = 0;
c5(x) = 1000 (sin(x4 − 0.25) + sin(x4 − x3 − 0.25)) + 1294.8 = 0; (B10)
• Global minimum at x∗ = (679.9453, 1026, 0.118876, −0.3962336), f (x∗ ) =
5126.4981.

Appendix B.6.
• Dimension: 2;
• Domain and search space: li ≤ xi ≤ 100, with l = (13, 0);
• Function:

f(x) = (x1 − 10)^3 + (x2 − 20)^3; (B11)

• Constraints:

c1(x) = −(x1 − 5)^2 − (x2 − 5)^2 + 100 ≤ 0;
c2(x) = (x1 − 6)^2 + (x2 − 5)^2 − 82.81 ≤ 0; (B12)
• Global minimum at x∗ = (14.095, 0.84296), f (x∗ ) = −6961.81388.
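This small two-variable problem also illustrates how nonlinear inequality constraints are supplied to fmincon through a nonlcon function (c ≤ 0), with the bounds passed as lb/ub. The MATLAB sketch below is an assumed setup, not the authors' implementation.

% Problem B.6 set up for fmincon with nonlinear inequality constraints.
fun = @(x) (x(1)-10)^3 + (x(2)-20)^3;
nonlcon = @(x) deal([ -(x(1)-5)^2 - (x(2)-5)^2 + 100;          % c1(x) <= 0
                       (x(1)-6)^2 + (x(2)-5)^2 - 82.81 ], ...  % c2(x) <= 0
                    []);                                       % no equalities
lb = [13; 0];   ub = [100; 100];
x0 = lb + rand(2,1).*(ub - lb);
opts = optimoptions('fmincon', 'Algorithm', 'sqp', 'Display', 'off');
[xopt, fopt] = fmincon(fun, x0, [], [], [], [], lb, ub, nonlcon, opts);
% expected: xopt ≈ [14.095; 0.84296], fopt ≈ -6961.81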

Appendix B.7.
• Dimension: 10;
• Domain and search space: −10 ≤ xi ≤ 10;
• Function:

f(x) = 2(x6 − 1)^2 − 16x2 − 14x1 + (x5 − 3)^2 + 4(x4 − 5)^2 + (x3 − 10)^2 + (x10 − 7)^2 + 7(x8 − 11)^2 + 2(x9 − 10)^2 + x1 x2 + x1^2 + x2^2 + 5 x7^2 + 45; (B13)

• Constraints:

c1(x) = 4x1 + 5x2 − 3x7 + 9x8 − 105 ≤ 0;
c2(x) = 10x1 − 8x2 − 17x7 + 2x8 ≤ 0;
c3(x) = −8x1 + 2x2 + 5x9 − 2x10 − 12 ≤ 0;
c4(x) = 3(x1 − 2)^2 + 4(x2 − 3)^2 + 2x3^2 − 7x4 − 120 ≤ 0;
c5(x) = 5x1^2 + 8x2 + (x3 − 6)^2 − 2x4 − 40 ≤ 0;
c6(x) = 0.5(x1 − 8)^2 + 2(x2 − 4)^2 + 3x5^2 − x6 − 30 ≤ 0;
c7(x) = x1^2 + 2(x2 − 2)^2 − 2x1 x2 + 14x5 − 6x6 ≤ 0;
c8(x) = −3x1 + 6x2 + 12(x9 − 8)^2 − 7x10 ≤ 0; (B14)

• Global minimum at x∗ = (2.171996, 2.363683, 8.773926, 5.095984, 0.9906548, 1.430574, 1.321644, 9.828726, 8.280092, 8.375927), f(x∗) = 24.3062091.

Appendix B.8.
• Dimension: 2;
• Domain and search space: 0 ≤ xi ≤ 10;
• Function:

f(x) = − sin^3(2π x1) sin(2π x2) / (x1^3 (x1 + x2)); (B15)

• Constraints:

c1(x) = x1^2 − x2 + 1 ≤ 0;
c2(x) = 1 − x1 + (x2 − 4)^2 ≤ 0; (B16)

• Global minimum at x∗ = (1.2279713, 4.2453733), f (x∗ ) = −0.095825.

Appendix B.9.
• Dimension: 7;
• Domain and search space: −10 ≤ xi ≤ 10;

• Function:

f(x) = (x1 − 10)^2 − 8x7 − 10x6 + 5(x2 − 12)^2 + 3(x4 − 11)^2 − 4x6 x7 + x3^4 + 7x6^2 + 10x5^6 + x7^4; (B17)

• Constraints:

c1(x) = 2x1^2 + 3x2^4 + x3 + 4x4^2 + 5x5 − 127 ≤ 0;
c2(x) = 7x1 + 3x2 + 10x3^2 + x4 − x5 − 282 ≤ 0;
c3(x) = 23x1 + x2^2 + 6x6^2 − 8x7 − 196 ≤ 0;
c4(x) = 4x1^2 + x2^2 − 3x1 x2 + 2x3^2 + 5x6 − 11x7 ≤ 0; (B18)

• Global minimum at x∗ = (2.330499, 1.951372, −0.4775414, 4.365726, −0.6244870, 1.038131, 1.594227), f(x∗) = 680.6300573.

Appendix B.10.
• Dimension: 8;
• Domain and search space: li ≤ xi ≤ ui , with l = 10(10, 100, 100, 1, 1, 1, 1, 1) and
u = 1000(10, 10, 10, 1, 1, 1, 1, 1);
• Function:

f (x) = x1 + x2 + x3 ; (B19)

• Constraints:

c1(x) = −1 + 0.0025 (x4 + x6) ≤ 0;
c2(x) = −1 + 0.0025 (−x4 + x5 + x7) ≤ 0;
c3(x) = −1 + 0.01 (−x5 + x8) ≤ 0;
c4(x) = 100x1 − x1 x6 + 833.33252 x4 − 83333.333 ≤ 0;
c5(x) = x2 x4 − x2 x7 − 1250 x4 + 1250 x5 ≤ 0;
c6(x) = x3 x5 − x3 x8 − 2500 x5 + 1250000 ≤ 0; (B20)

• Global minimum at x∗ = (579.3167, 1359.943, 5110.071, 182.0174, 295.5985, 217.9799, 286.4162, 395.5979), f(x∗) = 7049.3307.

Appendix B.11.
• Dimension: 2;
• Domain and search space: −1 ≤ xi ≤ 1;
• Function:

f(x) = x1^2 + (x2 − 1)^2; (B21)

• Constraints:

c1(x) = x2 − x1^2 = 0; (B22)

• Global minimum at x∗ = (±(1/2)^0.5, 1/2), f(x∗) = 0.75.

Appendix B.12.
• Dimension: 5;
• Domain and search space: li ≤ xi ≤ ui , with l = −u, u =(2.3,2.3,3.2,3.2,3.2);
• Function:

f(x) = e^{x1 x2 x3 x4 x5}; (B23)

• Constraints:

c1(x) = x1^2 + x2^2 + x3^2 + x4^2 + x5^2 − 10 = 0;
c2(x) = x2 x3 − 5 x4 x5 = 0;
c3(x) = x1^3 + x2^3 + 1 = 0; (B24)

• Global minimum at x∗ = (−1.717143, 1.595709, 1.827247, −0.7636413, −0.763645), f(x∗) = 0.0539498.
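For problems with only equality constraints, such as this one, the constraints are returned in the second (ceq) output of the nonlcon function when fmincon is used. The MATLAB sketch below is an assumed setup of problem B.12, not the authors' implementation.

% Problem B.12: three nonlinear equality constraints via the ceq output.
fun = @(x) exp(prod(x));
nonlcon = @(x) deal([], [ sum(x.^2) - 10;               % c1(x) = 0
                          x(2)*x(3) - 5*x(4)*x(5);      % c2(x) = 0
                          x(1)^3 + x(2)^3 + 1 ]);       % c3(x) = 0
ub = [2.3; 2.3; 3.2; 3.2; 3.2];   lb = -ub;
x0 = lb + rand(5,1).*(ub - lb);
opts = optimoptions('fmincon', 'Algorithm', 'interior-point', 'Display', 'off');
[xopt, fopt] = fmincon(fun, x0, [], [], [], [], lb, ub, nonlcon, opts);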

Appendix B.13.
• Dimension: 2;
• Domain and search space: −10 ≤ xi ≤ 10;
• Function:

f(x) = x1^3 + x2^3; (B25)

• Constraints:

c1 (x) = x1 + x2 − 8 = 0; (B26)

• Global minimum at x∗ = (4, 4), f (x∗ ) = 128.

Appendix B.14.
• Dimension: 3;
• Domain and search space: −10 ≤ xi ≤ 10;
• Function:

f(x) = (x1 − 1)^2 + (x2 − 2)^2 + x3^2 + 2; (B27)

• Constraints:

c1(x) = x1^2 + x2 − 3 = 0; (B28)

• Global minimum at x∗ = (1, 2, 0), f (x∗ ) = 2.

Appendix B.15.
• Dimension: 3;
• Domain and search space: −10 ≤ xi ≤ 10;
• Function:

f (x) = 2(x1 x2 + x2 x3 + x1 x3 ); (B29)

• Constraints:

c1(x) = x1 x2 x3 − 72 = 0;
c2(x) = x1 − 2 x2 = 0; (B30)

• Global minimum at x∗ = (6, 3, 4), f (x∗ ) = 108.

Appendix B.16.
• Dimension: 2;
• Domain and search space: −10 ≤ xi ≤ 10;
• Function:

f(x) = ln(1 + x1^2) − x2; (B31)

• Constraints:

c1(x) = (1 + x1^2)^2 + x2^2 − 4 = 0; (B32)


• Global minimum at x∗ = (0, √3), f(x∗) = −√3.

Appendix B.17.
• Dimension: 3;
• Domain and search space: −10 ≤ xi ≤ 10;
• Function:

f(x) = 0.01 (x1 − 1)^2 + (x2 − x1^2)^2; (B33)

• Constraints:

c1(x) = x1 + x3^2 + 1 = 0; (B34)

• Global minimum at x∗ = (-1, 1, 0), f (x∗ ) = 0.04.

Appendix B.18.
• Dimension: 5;
• Domain and search space: 0 ≤ xi ≤ 1;

• Function:

f(x) = c^T x − 0.5 x^T Q x; (B35)

with c = (42, 44, 45, 47, 47.5)^T, Q = 100I

• Constraints:

c1 (x) = 20x1 + 12x2 + 11x3 + 7x4 + 4x5 − 40 ≤ 0 (B36)

• Global minimum at x∗ = (1, 1, 0, 1, 0), f (x∗ ) = −17.

Appendix B.19.
• Dimension: 6;
• Domain and search space: 0 ≤ x1,...,5 ≤ 1, 0 ≤ y;
• Function:

f(x, y) = c^T x − 0.5 x^T Q x − 10y; (B37)

with c = (−10.5, −7.5, −3.5, −2.5, −1.5)^T, Q = I

• Constraints:

c1(x) = 6x1 + 3x2 + 3x3 + 2x4 + x5 − 6.5 ≤ 0;
c2(x) = 10x1 + 10x3 + y − 20 ≤ 0; (B38)

• Global minimum at x∗ = (0, 1, 0, 1, 1), y ∗ = 20, f (x∗ ) = −213.

Appendix B.20.
• Dimension: 12;
• Domain and search space: 0 ≤ xi ≤ 1, 0 ≤ y1,...,5 ≤ 1, 0 ≤ y6,...,8 ≤ 3;
• Function:

f(x, y) = c^T x − 0.5 x^T Q x + d^T y; (B39)

with c = (5, 5, 5, 5)^T, Q = 100I, d = (−1, −1, −1, −1, −1, −1, −1, −1)^T

• Constraints:

c1(x) = 2x1 + 2x2 + y6 + y7 − 10 ≤ 0;
c2(x) = 2x1 + 2x3 + y6 + y8 − 10 ≤ 0;
c3(x) = 2x2 + 2x3 + y7 + y8 − 10 ≤ 0;
c4(x) = −8x1 + y6 ≤ 0;
c5(x) = −8x2 + y7 ≤ 0;
c6(x) = −8x3 + y8 ≤ 0;
c7(x) = −2x4 − y1 + y6 ≤ 0;
c8(x) = −2y2 − y3 + y7 ≤ 0;
c9(x) = −2y4 − y5 + y8 ≤ 0; (B40)

• Global minimum at x∗ = (1, 1, 1, 1), y ∗ = (1,1,1,1,1,3,3,3), f (x∗ ) = −194.

Appendix B.21.
• Dimension: 10;
• Domain and search space: 0 ≤ xi ;
• Function:

f(x) = −Σ_{i=1}^{n−1} xi x_{i+1} − Σ_{i=1}^{n−2} xi x_{i+2} − x1 x9 − x1 x10 − x2 x10 − x1 x5 − x4 x7; (B41)

• Constraints:

c1(x) = Σ_{i=1}^{n} xi − 1 = 0; (B42)

• Global minimum at x∗ = (0, 0, 0, 0.25, 0.25, 0.25, 0.25, 0, 0, 0), f (x∗ ) = −0.375.

Appendix B.22.
• Dimension: 6;
• Domain and search space: li ≤ xi ≤ ui , with l = (0, 0, 1, 0, 1, 0), u =
(−, −, 5, 6, 5, 10);
• Function:

f(x) = −25(x1 − 2)^2 − (x2 − 2)^2 − (x3 − 1)^2 − (x4 − 4)^2 − (x5 − 1)^2 − (x6 − 4)^2; (B43)

• Constraints:

c1(x) = 4 − (x3 − 3)^2 − x4 ≤ 0;
c2(x) = 4 − (x5 − 3)^2 − x6 ≤ 0;
c3(x) = x1 − 3x2 − 2 ≤ 0;
c4(x) = −x1 + x2 − 2 ≤ 0;
c5(x) = x1 + x2 − 6 ≤ 0;
c6(x) = 2 − x1 − x2 ≤ 0; (B44)

• Global minimum at x∗ = (5, 1, 5, 0, 5, 10), f (x∗ ) = −310.

Appendix B.23.
• Dimension: 3;
• Domain and search space: li ≤ xi ≤ ui , with l = (0, 0, 0), u = (2, −, 3);
• Function:

f (x) = −2x1 + x2 − x3 ; (B45)

• Constraints:

c1(x) = −x^T A^T A x + 2 y^T A x − ‖y‖^2 + 0.25 ‖b − z‖^2 ≤ 0;
c2(x) = x1 + x2 + x3 − 4 ≤ 0;
c3(x) = 3x2 + x3 − 6 ≤ 0; (B46)

with A = (0 0 1; 0 −1 0; −2 1 −1), b = (3, 0, −4)^T, y = (1.5, −0.5, −5)^T, z = (0, −1, −6)^T

• Global minimum at x∗ = (0.5, 0, 3), f (x∗ ) = −4.

Appendix B.24.
• Dimension: 2;
• Domain and search space: li ≤ xi ≤ ui , with l = (0, 0), u = (2, 3);
• Function:

f(x) = −12x1 − 7x2 + x2^2; (B47)

• Constraints:

c1(x) = −2x1^4 + 2 − x2 = 0; (B48)

• Global minimum at x∗ = (0.7175, 1.47), f (x∗ ) = −16.73889.

Appendix B.25.
• Dimension: 2;
• Domain and search space: li ≤ xi ≤ ui , with l = (0, 0), u = (3, 4);
• Function:

f (x) = −x1 − x2 ; (B49)

• Constraints:

c1(x) = x2 − 2 − 2x1^4 + 8x1^3 − 8x1^2 ≤ 0;
c2(x) = x2 − 4x1^4 + 32x1^3 − 88x1^2 + 96x1 − 36 ≤ 0; (B50)

• Global minimum at x∗ = (2.3295, 3.17846), f (x∗ ) = −5.50796.

Appendix B.26.
• Dimension: 5;
• Domain and search space: li ≤ xi ≤ ui , with l = (78, 33, 27, 27, 27), u =
(102, 45, 45, 45, 45);
• Function:

f(x) = 5.3578 x3^2 + 0.8357 x1 x5 + 37.2392 x1; (B51)

• Constraints:

c1(x) = 0.00002584 x3 x5 − 0.00006663 x2 x5 − 0.0000734 x1 x4 − 1 ≤ 0;
c2(x) = 0.000853007 x2 x5 + 0.00009395 x1 x4 − 0.00033085 x3 x5 − 1 ≤ 0;
c3(x) = 1330.3294 x2^{−1} x5^{−1} − 0.42 x1 x5^{−1} − 0.30586 x2^{−1} x3^2 x5^{−1} − 1 ≤ 0;
c4(x) = 0.00024186 x2 x5 + 0.00010159 x1 x2 + 0.00007379 x3^2 − 1 ≤ 0;
c5(x) = 2275.1327 x3^{−1} x5^{−1} − 0.2668 x1 x5^{−1} − 0.40584 x4 x5^{−1} − 1 ≤ 0;
c6(x) = 0.00029955 x3 x5 + 0.00007992 x1 x3 + 0.00012157 x3 x4 − 1 ≤ 0; (B52)
• Global minimum at x∗ = (78, 33, 29.998, 45, 36.7673), f(x∗) = 10122.696.

Appendix B.27.
• Dimension: 3;
• Domain and search space: 1 ≤ xi ≤ 100;
• Function:

f(x) = 0.5 x1 x2^{−1} − x1 − 5 x2^{−1}; (B53)

• Constraints:

c1(x) = 0.01 x2 x3^{−1} + 0.01 x1 + 0.0005 x1 x3 − 1 ≤ 0; (B54)

• Global minimum at x∗ = (88.2890, 7.7737, 1.3120), f (x∗ ) = −83.254.

Appendix B.28.
• Dimension: 4;
• Domain and search space: 0.1 ≤ xi ≤ 10;
• Function:

f(x) = −x1 + 0.4 x1^{0.67} x3^{−0.67}; (B55)

• Constraints:

c1(x) = 0.05882 x3 x4 + 0.1 x1 − 1 ≤ 0;
c2(x) = 4 x2 x4^{−1} + 2 x2^{−0.71} x4^{−1} + 0.05882 x2^{−1.3} x3 − 1 ≤ 0; (B56)

• Global minimum at x∗ = (8.1267, 0.6154, 0.5650, 5.6368), f (x∗ ) = −5.7398.

Appendix B.29.
• Dimension: 8;
• Domain and search space: 0.01 ≤ xi ≤ 10;
• Function:

f(x) = −x1 − x5 + 0.4 x1^{0.67} x3^{−0.67} + 0.4 x5^{0.67} x7^{−0.67}; (B57)

• Constraints:

c1(x) = 0.05882 x3 x4 + 0.1 x1 − 1 ≤ 0;
c2(x) = 0.05882 x7 x8 + 0.1 x1 + 0.1 x5 − 1 ≤ 0;
c3(x) = 4 x2 x4^{−1} + 2 x2^{−0.71} x4^{−1} + 0.05882 x2^{−1.3} x3 − 1 ≤ 0;
c4(x) = 4 x6 x8^{−1} + 2 x6^{−0.71} x8^{−1} + 0.05882 x6^{−1.3} x7 − 1 ≤ 0; (B58)

• Global minimum at x∗ = (6.4225, 0.6686, 1.0239, 5.9399, 2.2673, 0.5960, 0.4029, 5.5288), f(x∗) = −6.0482.

Appendix B.30.
• Dimension: 5;
• Domain and search space: −5 ≤ xi ≤ 5;
• Function:

f(x) = (x1 − 1)^2 + (x1 − x2)^2 + (x2 − x3)^3 + (x3 − x4)^4 + (x4 − x5)^4; (B59)

• Constraints:

c1(x) = x1 + x2^2 + x3^3 − 3√2 − 2 = 0;
c2(x) = x2 − x3^2 + x4 − 2√2 + 2 = 0;
c3(x) = x1 x5 − 2 = 0; (B60)

• Global minimum at x∗ = (1.1166, 1.2204, 1.5378, 1.9728, 1.7911), f(x∗) = 0.0293.
