Article

A New Hybrid Evolutionary Algorithm for the Treatment of Equality Constrained MOPs

1 Department of Computer Science, Cinvestav-IPN, Mexico City 07360, Mexico
2 Metropolitan Autonomous University, Azcapotzalco Unit, Av. San Pablo No. 180, Col. Reynosa Tamaulipas, Azcapotzalco 02200, Mexico
3 Instituto Politécnico Nacional, Mexico City 07738, Mexico
4 Department of Applied Mathematics and Systems, Metropolitan Autonomous University, Cuajimalpa Unit (UAM-C), Vasco de Quiroga 4871, Santa Fe Cuajimalpa 05370, Mexico
5 Dr. Rodolfo Quintero Ramirez Chair, Metropolitan Autonomous University, Cuajimalpa Unit (UAM-C), Vasco de Quiroga 4871, Santa Fe Cuajimalpa 05370, Mexico
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(1), 7; https://doi.org/10.3390/math8010007
Submission received: 27 November 2019 / Revised: 5 December 2019 / Accepted: 5 December 2019 / Published: 18 December 2019
(This article belongs to the Special Issue Recent Trends in Multiobjective Optimization and Optimal Control)

Abstract

Multi-objective evolutionary algorithms are widely used by researchers and practitioners to solve multi-objective optimization problems (MOPs), since they require minimal assumptions and are capable of computing a finite size approximation of the entire solution set in one run of the algorithm. So far, however, the adequate treatment of equality constraints has played a minor role. Equality constraints are particular since they typically reduce the dimension of the search space, which causes problems for stochastic search algorithms such as evolutionary strategies. In this paper, we show that multi-objective evolutionary algorithms hybridized with continuation-like techniques lead to fast and reliable numerical solvers. For this, we first propose three new problems with different characteristics that are indeed hard to solve by evolutionary algorithms. Next, we develop a variant of NSGA-II with a continuation method. We present numerical results on several equality-constrained MOPs to show that the resulting method is highly competitive to state-of-the-art evolutionary algorithms.

1. Introduction

In many applications, one is faced with the problem that several objectives have to be optimized concurrently. One main characteristic of such multi-objective optimization problems (MOPs) is that their solution sets typically form objects of dimension k − 1, where k is the number of objectives in the problem. Multi-objective evolutionary algorithms (MOEAs, e.g., [1,2]) have caught the interest of many researchers for the treatment of such problems, as they have shown their efficiency on both academic and real-world problems. The population-based approach of MOEAs allows computing finite-size approximations of the entire solution set in one single run of the algorithm. Further, these methods are of global nature and require only minimal assumptions on the model (e.g., they can work without gradient information).
Equality constraints represent an important class of constraints that naturally arise in many applications (e.g., [3,4,5,6,7,8,9]), and their importance will only grow as decision-making processes become more and more complex. Such constraints, however, still represent a challenge for state-of-the-art MOEAs ([10]).
In this work, we argue that a hybridization of the recently proposed Pareto Tracer (PT, [11]) with MOEAs leads to a new solver for continuous constrained MOPs that is fast and reliable. To this end, we first propose and discuss three equality constrained MOPs to complement the comparisons. Next, we propose ϵ-NSGA-II/PT, a hybrid of a variant of the well-known and widely used NSGA-II and PT. PT is currently one of the fastest multi-objective continuation methods: it traces solutions along the Pareto set/front, can be applied in higher dimensions, and reliably handles constraints. However, like all continuation methods, it is of a local nature. We demonstrate the strength of the novel approach by showing results on selected benchmark problems and comparisons against some state-of-the-art MOEAs. A preliminary study of this work can be found in [12]; it is restricted to the treatment of bi-objective problems, and its discussion and comparison of the novel method are more limited.
This paper is structured as follows: in Section 2, we shortly recall some required background. In Section 3, we propose three new equality constrained MOPs that we will use for our comparisons. In Section 4, we propose the hybrid evolutionary algorithm ϵ -NSGA-II/PT. In Section 5, we present results on some benchmark problems including a comparison to the performance of related algorithms. In Section 6, we finally conclude our work and discuss possible further steps that can be made in this line of research.

2. Background and Related Work

2.1. Multi-Objective Optimization Problem (MOP)

In the sequel, we will consider continuous MOPs
$$ \min_{x \in \mathbb{R}^n} F(x) \quad \text{s.t.} \quad h_i(x) = 0, \ i = 1, \dots, p, \qquad g_j(x) \le 0, \ j = 1, \dots, m, \tag{1} $$
where F(x) = (f_1(x), …, f_k(x))^T defines the objective functions. Q denotes the domain (feasible set) of (1), Q := { x ∈ ℝ^n : h_i(x) = 0 ∀ i, g_j(x) ≤ 0 ∀ j }. We say that y ∈ ℝ^n is dominated by x ∈ ℝ^n (in short, x ≺ y) if f_i(x) ≤ f_i(y) for all i = 1, …, k, and f_j(x) < f_j(y) for at least one j ∈ {1, …, k}. Otherwise, we say that y is non-dominated by x. A point x ∈ Q is said to be (Pareto) optimal, or simply optimal, if there exists no y ∈ Q with y ≺ x. The Pareto set P_Q is the set of all optimal points, and its image F(P_Q) is called the Pareto front. In order to apply PT, we have to assume that the gradients of all functions (objectives and constraints) can at least be approximated.
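To make the preceding definitions concrete, the following minimal Python sketch (our illustration, not code from the paper; all function names are ours) implements the dominance relation and a feasibility test with a tolerance for the equality constraints:

```python
import numpy as np

def dominates(fx, fy):
    """Return True if objective vector fx dominates fy (minimization)."""
    fx, fy = np.asarray(fx), np.asarray(fy)
    return bool(np.all(fx <= fy) and np.any(fx < fy))

def is_feasible(x, h_list=(), g_list=(), tol=1e-4):
    """Check x against equality constraints h_i(x)=0 and inequalities g_j(x)<=0.

    Equalities are accepted up to the tolerance `tol`, as is common practice
    in evolutionary computation (cf. Section 5).
    """
    eq_ok = all(abs(h(x)) <= tol for h in h_list)
    ineq_ok = all(g(x) <= 0.0 for g in g_list)
    return eq_ok and ineq_ok

# Tiny usage example with two objective vectors
print(dominates([1.0, 2.0], [1.0, 3.0]))   # True
print(dominates([1.0, 2.0], [0.5, 3.0]))   # False (incomparable)
```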

2.2. Related Work

The development of solution tools for the constrained MOP described in the previous subsection has generated recent interest, mainly within the evolutionary computation community. This subsection provides a review of previous works related to this field, focusing both on pure evolutionary techniques and on hybrid strategies.
Regarding MOEAs, two constraint handling mechanisms classically used in the single-objective optimization framework are adapted in [13] to MOEA/D [14]. The first one is stochastic ranking (SR, [15]), which accounts for the need to direct the search according to both feasibility and objective values. Therefore, when comparing two solutions of which at least one is infeasible, the comparison criterion is the objective function value with probability p_f, while, with probability 1 − p_f, individuals are distinguished through the overall constraint violation ϕ (where, for any solution x, ϕ(x) = Σ_{i=1}^{p} |h_i(x)| + Σ_{j=1}^{m} max{0, g_j(x)}). On the other hand, the constraint-domination principle (CDP) is a multi-objective extension [16] of Deb’s feasibility rules, where feasible solutions are compared according to dominance. The computational experiments, performed on the CTP series [17] and CF series [18], demonstrate that MOEA/D-CDP consistently outperforms MOEA/D-SR concerning hypervolume, inverted generational distance (IGD), and set coverage. It also performs reasonably well when compared with IDEA [19] and DMOEA-DD [20]. A rough sketch of both comparison mechanisms is given below.
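The sketch below is a simplified, illustrative reading of the SR and CDP comparisons described above (cf. [13,15,16]); the function names and the default p_f are our own choices:

```python
import random

def violation(x, h_list=(), g_list=()):
    """Overall constraint violation: sum of |h_i(x)| plus positive parts of g_j(x)."""
    return sum(abs(h(x)) for h in h_list) + sum(max(0.0, g(x)) for g in g_list)

def _dominates(fx, fy):
    return all(a <= b for a, b in zip(fx, fy)) and any(a < b for a, b in zip(fx, fy))

def cdp_better(fx, fy, phix, phiy):
    """Constraint-domination principle (CDP): feasible beats infeasible,
    less-violating beats more-violating, dominance decides among feasible ones."""
    if phix == 0.0 and phiy > 0.0:
        return True
    if phix > 0.0 and phiy > 0.0:
        return phix < phiy
    if phix == 0.0 and phiy == 0.0:
        return _dominates(fx, fy)
    return False

def sr_better(fx, fy, phix, phiy, p_f=0.45):
    """Stochastic-ranking flavour: if both are feasible, or with probability p_f,
    compare by the objectives; otherwise compare by the violation phi."""
    if (phix == 0.0 and phiy == 0.0) or random.random() < p_f:
        return _dominates(fx, fy)
    return phix < phiy
```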
Besides, the Infeasibility Driven Evolutionary Algorithm (IDEA) [19] is a modification of the classical CDP-based NSGA-II [16] that enforces the participation of infeasible solutions through a parameter α, which represents the fraction of the current population allocated to those solutions. During the selection step, the pool combining parent and offspring populations is first divided into two (feasible and infeasible) sets. Then, non-dominated ranking is applied to both of these sets, considering a function of the constraint violations as an additional objective for the infeasible set. Then, if N is the population size, the α·N infeasible solutions with the best ranking values, as well as the (1 − α)·N feasible solutions with the best ranks, are included in the next population. Note that the constraint violation function used as an additional objective for infeasible solutions is computed as the sum of the rankings of the solutions, sorted in increasing order of the magnitude of the constraint violation for each constraint (instead of the number of violated constraints formerly used [21]). The experimental results, conducted over some test functions of the CTP series [17], demonstrate that IDEA consistently outperforms CDP-NSGA-II.
Another adaptation of MOEA/D for handling constrained MOPs is introduced in [22] where, for each solution, a modified function for the overall constraint violation accounts for the number of active constraints, in addition to the simple constraint violation. The mean value of this modified function over the population is weighted by the ratio of feasible individuals in the population, in order to produce a threshold on the allowed amount of constraint violation. Solutions that are within this threshold are considered as feasible and compared in terms of their objective values. Furthermore, a gradient-based local search is periodically invoked in order to repair infeasible solutions. The resulting algorithm, tested on some of the CTP-series test functions [17], performs similarly or better than NSGA-II with CDP.
In [23], the ϵ-constraint technique originally developed for single-objective optimization is extended to the solution of MOPs. This strategy, proposed in [24], consists in relaxing the tolerance level on the constraints up to a value ϵ. Thus, when two solutions both have an overall constraint violation lower than ϵ, they are both considered feasible and compared in terms of their objective values. In [24], the value of ϵ is monotonically decreased according to a polynomial function until some generation T_c; from then on, ϵ is set to 0 in order to narrow the search to the feasible space. In [23], the authors allow increasing the ϵ level when the ratio of feasible solutions is greater than a threshold value, in order to promote exploration. This strategy, embedded in MOEA/D, is compared with MOEA/D-CDP [13] and with the original ϵ-constraint mechanism (decreasing ϵ pattern) over a set of nine constrained MOPs introduced earlier by the same authors [25]. Furthermore, this strategy is also compared with classical MOEAs (either dominance- or decomposition-based) in a later work [26], over the CTP [17] and CF series [18], using IGD as a performance indicator. In any case, the MOEA/D-IEpsilon algorithm outperforms all its contenders, except IDEA [19], which obtains similar performance levels.
More recently, the same authors developed a two-stage procedure (Push and Pull Search, PPS), which first solves the unconstrained MOP (solutions are pushed towards the unconstrained Pareto front) and, in a second stage, includes the constraints to modify the first (unconstrained) approximation and identify the constrained Pareto front (solutions are pulled from infeasible regions towards the feasible space). The switching criterion between the two phases is based on the stagnation of the identified ideal and nadir points. During the “push stage”, the canonical MOEA/D (with the Tchebycheff scalarizing function) is employed, while a modified ϵ-constraint technique is applied in the second stage (still with MOEA/D) to find feasible solutions. A specific decreasing scheme for the ϵ level is adopted, where an exponential or polynomial (as in [24]) decrease can be used, depending on the feasibility ratio. Tested over a benchmark of 14 functions proposed earlier by the same authors [25], the resulting PPS-MOEA/D outperforms some classical MOEAs quoted in this section, but requires tuning many parameters.
As the last example of MOEA-based solution procedures, an innovative idea is introduced in [27], where MOEA/D is modified in such a way that two solutions are assigned to each weight vector. The aim is to have one individual on each side of the feasibility boundary (one feasible and one infeasible), in order to focus the search on this region, where the Pareto-optimal solutions might lie. The consequences of this working mode are: (i) the size of the neighborhood of each weight vector used for offspring generation is doubled (since each neighboring weight vector has two associated solutions); (ii) for solution replacement, the created offspring is now compared to two individuals, in terms of both the scalarizing function used within MOEA/D and the overall constraint violation. In this bi-objective space, dominance is used to select the two surviving individuals among the three contenders. If the three are mutually non-dominated, the one with the largest constraint violation is discarded, while if one solution dominates the other two, it is the only one to survive. The algorithm is successfully compared with MOEA/D-CDP over several functions of the C-DTLZ test suite [28]. However, for the CTP and CF series [17,18], this Dual-grid MOEA/D is outperformed by IDEA [19] or MOEA/D-IEpsilon [26].
Besides, as for single-objective optimization, hybrid strategies have been adapted for solving MOPs in recent years. In general, these so-called memetic algorithms combine a global search engine (a MOEA) with some local search technique based on exact algorithms. In this framework, proposals differ from one another according to:
  • The kind of local search technique used, which may be gradient-based (quasi-Newton in [29,30], or sequential quadratic programming in [31]) or direct search for nonlinear problems (Nelder and Mead’s algorithm in [32]).
  • The problem reformulation on which local search is applied, which may be based on ϵ -constraint [33] or a scalarization of the MOP [31].
  • The hybridization scheme, which can consist of seeding the initial population of the MOEA [34], interleaving global and local search steps by applying local search to some selected individuals of the population [33] or periodically (every t generations) [35], or using the non-dominated solutions obtained by the exact algorithm to reconstruct the whole Pareto front [36].
However, to the best of our knowledge, most of these hybrid strategies were applied to unconstrained MOPs through classical test suites (ZDT, DTLZ, WFG) and there is almost no proposal for dealing with constraints, particularly equality constraints.
Some strategies for dealing with equality constraints already exist: for instance, if all equalities are linear, one can use orthogonal projections in order to obtain feasible points near a given candidate solution ([8]). For vehicle routing problems, several repair mechanisms can be found in [37,38,39], and in [4,40] such mechanisms can be found for portfolio selection problems. Finally, [41] proposes a repair mechanism that makes use of first-order Taylor approximations of the constraints.

2.3. Test Suites for Constrained MOPs

Benchmark problems are important to assess the performance of solution techniques. In the literature, several such test suites can be found for the case of constrained MOPs. Most of these problems, however, contain only box constraints. Established test suites whose domains are more complex are the CTP problems ([17]) and the CF problems (see [18]). More recently, the C-DTLZ ([28]) and the LIRCMOP problems ([25]) have been proposed. Remarkably, not one of these test problems contains an equality constraint. Thus, the operating mode commonly proposed to handle equalities (i.e., dividing the constraint into two inequalities) has never been seriously tested. Indeed, this methodology does not work well in practice, since it is very unlikely that a solution respects both inequalities; as a consequence, the first feasible solution identified attracts the rest of the population, causing drastic diversity losses. This is why the specific treatment of equality constraints represents a relevant issue on its own, as shown by the present study.
In [42], the CZDT functions are proposed that contain equality constraints and that are derived from the well-known ZDT functions ([43]). We will use this test suite for our comparisons. The particularity of this suite, however, is that the Pareto sets of the constrained problems are identical to the Pareto sets of the respective unconstrained problems. To further test the ability of MOEAs to handle constraints, we will in this paper propose three new equality constrained MOPs where the inclusion of the equality constraint(s) has an influence on the location of the Pareto sets.

2.4. Pareto Tracer (PT)

PT is a continuation method that is capable of performing a movement along the set of Karush–Kuhn–Tucker (KKT) points of a given MOP, starting from an initial KKT point [11]. The algorithm can handle both equality and inequality constraints and is applicable, in principle, to any number of objectives. For the convenience of the reader, the main steps of PT for equality constrained MOPs are briefly described in Appendix A.

3. Proposed Test Problems

Here we propose two bi-objective test problems, Eq1-ZDT1 and Eq2-ZDT1, and one three-objective problem, Eq-Quad. All problems are scalable in the number of decision variables, and for all problems the inclusion of the equality constraint(s) has an influence on the location of the Pareto set.

3.1. Eq1-ZDT1

The original ZDT1 is a bi-objective problem with box constraints that can be defined for an arbitrary number n of decision variables. The proposed Eq1-ZDT1 problem, in turn, is stated as follows:
$$ f_1(x) = x_1, \qquad f_2(x) = g(x)\left(2 - \frac{f_1(x)}{g(x)}\right), \tag{2} $$
$$ \text{s.t.} \quad h(x) = (x_1 - 0.5)^2 + (x_2 - 0.4)^2 - 0.25 = 0, \tag{3} $$
where
$$ g(x) = 1 + \frac{9}{n-1}\sum_{i=2}^{n} x_i^2. \tag{4} $$
As we can see, Eq1-ZDT1 (2) is also a bi-objective problem that is scalable in the number of variables; it replaces the box constraints by an equality constraint (together with the implicit inequality constraint x_1 ≥ 0). The constraint (3) of this problem defines a kind of “hyper-cylinder”: the variables x_1 and x_2 are placed on a circle, while the remaining variables x_i, i = 3, …, n, can take any value (see Figure 1).
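For reference, a direct Python transcription of Eq1-ZDT1 as we read Equations (2)–(4) might look as follows; it is only a sketch meant to make the problem easy to reproduce:

```python
import numpy as np

def eq1_zdt1(x):
    """Objectives (f1, f2) and equality constraint h of Eq1-ZDT1, Eqs. (2)-(4)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    g = 1.0 + 9.0 / (n - 1) * np.sum(x[1:] ** 2)
    f1 = x[0]
    f2 = g * (2.0 - f1 / g)                      # i.e., f2 = 2*g - f1
    h = (x[0] - 0.5) ** 2 + (x[1] - 0.4) ** 2 - 0.25
    return np.array([f1, f2]), h

# Example: a point on the lower half of the circle with x_i = 0 for i >= 3
x = np.zeros(30)
x[0] = 0.3
x[1] = 0.4 - np.sqrt(0.25 - (x[0] - 0.5) ** 2)
F, h = eq1_zdt1(x)
print(F, h)   # h is (numerically) zero for this point
```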
In the following, we provide the Pareto set of Eq1-ZDT1. For this, we need the set P_h ⊂ Q defined as
$$ P_h := \{ x \in \mathbb{R}^n : (x_1 - 0.5)^2 + (x_2 - 0.4)^2 = 0.25, \ x_i = 0, \ i = 3, \dots, n \}. \tag{5} $$
Theorem 1.
Let P_h be defined as in (5) and let n = 30. Then the subset P_Eq1 ⊂ P_h given by
$$ P_{Eq1} = \{ x \in \mathbb{R}^n : x_1 \in [0, \gamma], \ x_2 = 0.4 - \sqrt{0.25 - (x_1 - 0.5)^2}, \ x_i = 0, \ i = 3, \dots, n \}, \tag{6} $$
where γ ≈ 0.977336, is the Pareto set of Eq1-ZDT1.
Proof. 
(a) First, we prove that there is no y ∈ Q \ P_h such that y ≺ x, where x ∈ P_h.
Suppose that there exists y ∈ ℝ^n \ P_h such that h(y) = 0 and y ≺ x for some x ∈ P_h. First, let
$$ x := (y_1, y_2, 0, \dots, 0)^T, $$
with (y_1 − 0.5)^2 + (y_2 − 0.4)^2 = 0.25. Then, let Δ := (Δ_1, …, Δ_n)^T ∈ ℝ^n \ {0} with Δ_1 = Δ_2 = 0; since y ∈ ℝ^n \ P_h, we can write y as follows:
$$ y := x + \Delta = (y_1, y_2, \Delta_3, \dots, \Delta_n)^T. $$
Now, from (2) note that
  • For the first objective we have that f 1 ( y ) = f 1 ( x ) = y 1 .
  • For the second objective we have that
    $$ g(x) = 1 + \frac{9}{n-1}\sum_{i=2}^{n} x_i^2 = 1 + \frac{9}{n-1}\Big( y_2^2 + \sum_{i=3}^{n} 0 \Big) = 1 + \frac{9}{n-1} y_2^2, \qquad g(y) = 1 + \frac{9}{n-1}\sum_{i=2}^{n} y_i^2 = 1 + \frac{9}{n-1}\Big( y_2^2 + \sum_{i=3}^{n} \Delta_i^2 \Big) \ \Longrightarrow \ g(x) < g(y). $$
    Then,
    $$ \frac{y_1}{g(x)} > \frac{y_1}{g(y)} \ \Longrightarrow \ -\frac{y_1}{g(x)} < -\frac{y_1}{g(y)} \ \Longrightarrow \ 2 - \frac{f_1(x)}{g(x)} < 2 - \frac{f_1(y)}{g(y)}. $$
    Finally,
    $$ g(x) < g(y) \ \Longrightarrow \ g(x)\left(2 - \frac{f_1(x)}{g(x)}\right) < g(y)\left(2 - \frac{f_1(y)}{g(y)}\right) \ \Longrightarrow \ f_2(x) < f_2(y), $$
    which contradicts the hypothesis. Thus, there is no y ∈ Q \ P_h with y ≺ x for x ∈ P_h.
(b) Now we show that the points of P_Eq1 are mutually non-dominated and that they dominate all the points in the set P_h \ P_Eq1.
Let x, x′ ∈ P_h be such that x_1 = x_1′ and |x_2| < |x_2′|. Notice that we can express x_2 in terms of x_1 as
$$ x_2 = 0.4 \pm \sqrt{0.25 - (x_1 - 0.5)^2}, $$
and it is clear that the points of the form (x_1, 0.4 + √(0.25 − (x_1 − 0.5)^2)) are dominated by the points (x_1, 0.4 − √(0.25 − (x_1 − 0.5)^2)) (that is, by the inferior half of the circle), since g(x) < g(x′) implies f_2(x) < f_2(x′).
Now, we can write f_2(x), with x ∈ P_h, in terms of x_1 as
$$ f_2(x_1) = \frac{(n-1) + 9\,C^2(x_1)}{n-1}\left( 2 - \frac{x_1 (n-1)}{(n-1) + 9\,C^2(x_1)} \right), \tag{10} $$
where C(x_1) = 0.4 − √(0.25 − (x_1 − 0.5)^2).
Computing the derivative of Equation (10) we have
$$ f_2'(x_1) = -\,\frac{\big(5(n-1) + 180\,x_1 - 90\big)\sqrt{0.25 - (x_1 - 0.5)^2} - 72\,x_1 + 36}{5(n-1)\sqrt{0.25 - (x_1 - 0.5)^2}}. $$
The derivative f_2' has only one root, at γ = x_1^* ≈ 0.977336. Also, note that if a ∈ (0, γ), then f_2'(a) < 0; on the other hand, if a ∈ (γ, 1), then f_2'(a) > 0. Hence, f_2(x) is monotonically decreasing on [0, γ] and, consequently, the points in P_Eq1 are not dominated by each other.
Finally, by (a) and (b) we have that P E q 1 is the Pareto set of Eq1-ZDT1. □
To obtain γ for the formulation of the Pareto set, we needed to consider f_2', which depends on n. The proof is analogous for other values of n, with a corresponding change in the value of γ. Some of these values can be found in Table A1 of Appendix D.

3.2. Eq2-ZDT1

By adding box constraints to Eq1-ZDT1, we define the Eq2-ZDT1 problem as follows:
$$ f_1(x) = x_1, \qquad f_2(x) = g(x)\left(2 - \frac{f_1(x)}{g(x)}\right), $$
$$ \text{s.t.} \quad h(x) = 0, $$
$$ 0 \le x_i \le 1, \ i = 1, \dots, n, $$
where h ( x ) and g ( x ) are defined as in (3) and (4), respectively.
Next, we will provide the analytical Pareto set for Eq2-ZDT1.
Theorem 2.
For n = 30 and x ∈ ℝ^n, the Pareto set of the Eq2-ZDT1 problem (see Figure 2) is given by
$$ P_{Eq2} := \{ x \in \mathbb{R}^n : x_1 \in I_1 \cup I_2 \cup I_3, \ x_2 = I(x_1), \ x_i = 0, \ i = 3, \dots, n \}, $$
where I_1 := [0, 0.2], I_2 := [η, 0.8), I_3 := [0.8, γ], η ≈ 0.670021, and
$$ I(x_1) := \begin{cases} 0.4 - \sqrt{0.25 - (x_1 - 0.5)^2}, & x_1 \in I_1 \cup I_3, \\ 0.4 + \sqrt{0.25 - (x_1 - 0.5)^2}, & x_1 \in I_2. \end{cases} $$
Proof.  
(a)
Let P_h be defined as in (5) and P_Eq1 as in (6); the first part of this proof is analogous to the previous analysis for Eq1-ZDT1.
(b)
As a second step, we need to remove all the points in P_Eq1 that do not satisfy the box constraints. In particular, since x_i = 0, i = 3, …, n, we focus on x_1 and x_2. For x ∈ P_h we have x_1 ∈ [0, 1] and, on the lower half of the circle, x_2 ∈ [−0.1, 0.4], i.e., some values of x_2 do not satisfy the lower bound.
We can express x 1 as follows:
$$ x_1 = 0.5 \pm \sqrt{0.25 - (x_2 - 0.4)^2}; $$
thus, for x 2 = 0 we can find the values of x 1 that define I 1 and I 3 via:
$$ x_1 = 0.5 \pm 0.3 \ \Longrightarrow \ I_1 = [0, 0.2], \quad I_3 = [0.8, \gamma]. $$
After removing the non-feasible points from P_Eq1, we have a gap in the Pareto set/front. Now, notice that some points x ∈ P_h with x_2 = 0.4 + √(0.25 − (x_1 − 0.5)^2) (that is, x ∉ P_Eq1) could lie within this gap. That is, we have to find the values of x ∈ P_h \ P_Eq1 such that f_2(x_1) ∈ [f_2(0.8), f_2(0.2)].
For this we consider:
$$ \bar{f}_2(x_1) = \frac{(n-1) + 9\,C_2^2(x_1)}{n-1}\left( 2 - \frac{x_1 (n-1)}{(n-1) + 9\,C_2^2(x_1)} \right), $$
where C_2(x_1) = 0.4 + √(0.25 − (x_1 − 0.5)^2).
Notice that [f_2(0.8), f_2(0.2)] ⊂ [f̄_2(0.8), f̄_2(0.2)] and that f̄_2 is a continuous function; then, by the intermediate value theorem, there exist x_{1,0.8} and x_{1,0.2} such that f̄_2(x_{1,0.2}) = f_2(0.2) and f̄_2(x_{1,0.8}) = f_2(0.8), respectively.
For n = 30, these values are x_{1,0.8} ≈ 0.9773356 and x_{1,0.2} ≈ 0.670021. Then I_2 = [η, 0.8), with η = x_{1,0.2}, since x_{1,0.8} > 0.8 and f_2(0.8) < f̄_2(0.8).
Finally, by (a) and (b) we have that P_Eq2 is the Pareto set of Eq2-ZDT1. □
As we can observe, the values of γ and η depend on n (see Theorems 1 and 2). We refer to Appendix D for these values for other dimensions of the decision variable space. See Figure 2 for the Pareto set and front of Eq2-ZDT1.
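The constants γ and η of Table A1 can be reproduced numerically. The following sketch (our own check, under the reconstruction of f_2 given above) obtains γ as the minimizer of f_2 along the lower half of the circle and η from the intermediate-value argument of Theorem 2; for n = 30 it returns approximately 0.977336 and 0.670021:

```python
import numpy as np
from scipy.optimize import minimize_scalar, brentq

def gamma_eta(n=30):
    s = lambda x1: np.sqrt(max(0.25 - (x1 - 0.5) ** 2, 0.0))

    # f2 restricted to the lower half of the circle (Eq. (10))
    def f2_low(x1):
        g = 1.0 + 9.0 / (n - 1) * (0.4 - s(x1)) ** 2
        return 2.0 * g - x1

    # f2 restricted to the upper half of the circle (used in Theorem 2)
    def f2_up(x1):
        g = 1.0 + 9.0 / (n - 1) * (0.4 + s(x1)) ** 2
        return 2.0 * g - x1

    # gamma: f2 decreases on [0, gamma] and increases afterwards
    gamma = minimize_scalar(f2_low, bounds=(0.5, 1.0), method="bounded").x
    # eta: value with f2_up(eta) = f2_low(0.2), found by bisection on [0.2, 0.8]
    eta = brentq(lambda x1: f2_up(x1) - f2_low(0.2), 0.2, 0.8)
    return gamma, eta

print(gamma_eta(30))   # approx (0.977336, 0.670021), cf. Table A1
```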

3.3. Eq-Quad

Finally, we propose a modification of the problem taken from [11], which has three quadratic objectives and two equality constraints:
$$ f_j(x) = \| x - a^{(j)} \|_2^2, \quad j = 1, \dots, k, $$
where x ∈ ℝ^n, k = 3, a^(1) = (1, 1.4, 0.4)^T, a^(2) = (1.4, 1, 0.4)^T, and a^(3) = (0.4, 0.4, 0.8)^T, subject to
$$ \begin{aligned} h_1(x) &= r^2 - x_3^2 - \Big( R - \sqrt{x_1^2 + x_2^2} \Big)^2 = 0, \\ h_2(x) &= x_1 + x_2 - x_3 = 0, \\ & -1.5 \le x_1 \le 1, \quad -1.5 \le x_2 \le 1, \quad 0 \le x_3 \le 1. \end{aligned} $$
Figure 3 shows the constraints and the Pareto set for n = 3. As can be seen, the Pareto set consists of two connected components that can both be expressed by curves (and which are hence one-dimensional).
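A possible Python sketch of Eq-Quad is given below. The torus radii r and R are left as inputs because their numerical values are fixed in [11] and are not repeated here, and the anchor points a^(j) are taken as printed above:

```python
import numpy as np

A = np.array([[1.0, 1.4, 0.4],     # a^(1), a^(2), a^(3) as printed above
              [1.4, 1.0, 0.4],
              [0.4, 0.4, 0.8]])

def eq_quad(x, r, R):
    """Objectives and the two equality constraints of Eq-Quad (k = 3, n = 3).

    r and R are the radii of the torus-like constraint h1; their concrete
    values follow [11] and are treated here as inputs.
    """
    x = np.asarray(x, dtype=float)
    f = np.array([np.sum((x - a) ** 2) for a in A])          # f_j = ||x - a^(j)||^2
    h1 = r ** 2 - x[2] ** 2 - (R - np.sqrt(x[0] ** 2 + x[1] ** 2)) ** 2
    h2 = x[0] + x[1] - x[2]
    return f, np.array([h1, h2])
```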

4. Proposed Algorithm (ϵ-NSGA-II/PT)

This section is devoted to the description of the hybrid algorithm proposed in this work. As mentioned above, the Pareto Tracer is a continuation strategy that is able to efficiently perform movements along the Pareto front of a MOP. However, this reconstruction process is carried out locally, which means that PT must be provided with a reduced set of relevant approximate solutions, i.e., with reasonably good convergence and dispersion over the front. Therefore, a multi-objective evolutionary algorithm, based on a modified implementation of NSGA-II [16], is developed here as the first stage of the hybrid algorithm, with the aim of producing this first set of promising solutions. This set subsequently serves as the input of PT, which refines it to produce the final approximation of the Pareto front/set. The two stages of the resulting ϵ-NSGA-II/PT are presented in detail in what follows.

4.1. First Stage: Rough Approximation via ϵ -NSGA-II

The first stage of the hybrid algorithm presented in this work consists in determining a rough approximation of the Pareto front via a MOEA. Each approximated solution will subsequently feed the Pareto Tracer, which generates local reconstructions of the real Pareto front and combines them to finally obtain the whole Pareto set. Therefore, the MOEA used in the first step should meet the following characteristics:
  • The number of solutions in the roughly approximated set should be small, in order to reduce the computational burden of the local search (PT). Indeed, in the case of a completely connected front, one single approximate solution might allow us to build the entire Pareto front.
  • The MOEA should promote diversity, since the rough approximation produced should cover the whole extent of the Pareto front and identify all the different components, in the case of a disconnected front.
  • The MOEA must be able to handle equality constraints. As mentioned before, a severely constrained problem might cause diversity issues that should be overcome by the MOEA.
Note that the first two features are conflicting, since the number of elements of the approximated front should be small enough to avoid unnecessary computations. Nonetheless, there must be enough approximate solutions to ensure the identification of all the components in the case of a disconnected Pareto front. To deal with two- or three-objective problems such as those treated in this work, we observed that 20 points in the rough approximation represent a good trade-off between these two requirements. This means that the MOEA should either work with small populations or maintain a small external archive.
Regarding the search engine and diversity preservation issue, two classical dominance and decomposition-based algorithms (NSGA-II and MOEA/D, respectively) were initially investigated. However, preliminary computations led to the observations that, first, the use of small populations deteriorated the performance of MOEA/D and, even with larger populations (and a small archive to be returned), diversity remained an issue. Therefore, NSGA-II was used in this study as baseline algorithm.
In the proposed implementation, NSGA-II does not necessarily use small populations, but the final population is reduced to 20 at the end of the search (according to crowding distance). Regarding constraint handling, the constraint-domination principle CDP generally used within NSGA-II does not obtain good results on the equality constrained problems tackled here. Indeed, any technique establishing the superiority of feasible solutions over infeasible ones will face severe diversity issues, due to a premature convergence towards the first feasible solution found.
As a simple way to modify this operating mode, the ϵ-constraint strategy proposed in [24] is adopted here. Note that, although this method was previously integrated within MOEA/D in [23], to the best of our knowledge there is no reference to NSGA-II working with the ϵ-constraint for solving constrained MOPs. As explained earlier, the ϵ-constraint strategy relaxes the tolerance level on the constraint violation up to a value ϵ, i.e., a solution x with an overall constraint violation ϕ(x) ≤ ϵ is viewed as feasible and, thus, evaluated and compared in terms of its objective values. Then, ϵ is smoothly reduced according to a specific decreasing scheme. This technique allows solutions with a small (but positive) overall constraint violation to be compared with feasible individuals in terms of convergence (through dominance-based sorting) and spread uniformity (thanks to the crowding distance). Accordingly, the following operator is used to compare two solutions x and y:
$$ x \preceq_{\epsilon} y \ \Longleftrightarrow \ \begin{cases} x \preceq y & \text{if } \phi(x), \phi(y) \le \epsilon, \\ x \preceq y & \text{if } \phi(x) = \phi(y), \\ \phi(x) < \phi(y) & \text{otherwise}, \end{cases} \tag{19} $$
where
$$ \phi(x) = \sum_{j=1}^{m} \max\{ g_j(x), 0 \} + \sum_{i=1}^{p} | h_i(x) | \tag{20} $$
denotes the constraint violation. At the beginning of the run, the first value of ϵ, denoted ϵ(0), is set in such a way that at least some solutions are “ϵ-feasible”. More precisely, ϵ(0) is the overall constraint violation of the individual x_θ, which is the θ-th solution of the first population sorted in decreasing order of ϕ: ϵ(0) = ϕ(x_θ). For subsequent generations, the ϵ decreasing schedule originally proposed in [24] is adopted here: ϵ is a decreasing function of the generation number until a critical generation T_c is reached. From then on, ϵ is set to ϵ_min:
$$ \epsilon(t) = \begin{cases} \max\big( \epsilon_{min}, \ \epsilon(0)\,(1 - t/T_c)^{c_p} \big) & \text{if } 0 < t < T_c, \\ \epsilon_{min} & \text{if } t \ge T_c, \end{cases} \tag{21} $$
where t is the generation number, c_p is a parameter controlling the decreasing speed, and ϵ(0) is the constraint relaxation level at the first generation. Note that other ϵ decreasing schedules, such as the one embedded within MOEA/D-IEpsilon [23], have been tested, but better results were obtained using Equation (21).
Finally, in order to promote diversity, another parameter is introduced to bias the parent selection operator, which is based on a tournament implementing the ϵ-constraint strategy described in Equation (19). However, as suggested in [44], the result of the tournament is respected only with probability p_f; otherwise (i.e., with probability 1 − p_f), the winner is chosen randomly. Empirical results showed that this small modification sometimes allows significant improvements in the algorithm performance. The whole process is briefly described in Algorithm 1, and a sketch of the comparison operator is given below.
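The following Python fragment sketches how the comparison operator (19) and the schedule (21) could be realized inside the tournament of ϵ-NSGA-II; it reflects our reading of the text, and the parameter names and default values (theta, cp, eps_min, p_f) are illustrative only:

```python
import random

def eps_better(fx, fy, phix, phiy, eps):
    """Comparison operator of Eq. (19): solutions with phi <= eps are treated as feasible."""
    def dom(a, b):
        return all(u <= v for u, v in zip(a, b)) and any(u < v for u, v in zip(a, b))
    if (phix <= eps and phiy <= eps) or (phix == phiy):
        return dom(fx, fy)            # compare by dominance
    return phix < phiy                # otherwise, the smaller violation wins

def eps_schedule(t, eps0, Tc, cp=5.0, eps_min=0.0):
    """Decreasing epsilon level of Eq. (21)."""
    if t >= Tc:
        return eps_min
    return max(eps_min, eps0 * (1.0 - t / Tc) ** cp)

def eps0_from_population(phis, theta):
    """eps(0) is the violation of the theta-th individual when the initial
    population is sorted in decreasing order of phi."""
    return sorted(phis, reverse=True)[theta - 1]

def tournament(p1, p2, eps, p_f=0.9):
    """Binary tournament; the eps-comparison is respected only with probability p_f."""
    fx, phix = p1
    fy, phiy = p2
    if random.random() > p_f:
        return random.choice((p1, p2))
    return p1 if eps_better(fx, fy, phix, phiy, eps) else p2
```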

4.2. Second Stage: Refinement via PT

The main task of this stage is to process the resulting archive P provided by ϵ-NSGA-II. The main challenge here is to avoid unnecessary effort, e.g., computing non-optimal KKT points along local Pareto fronts that are already dominated by previously computed solutions. While this is relatively easy for k = 2 objectives (see [12]), the task becomes more complicated with increasing k. The following procedure works for general k.
Before PT can be executed, the following post-processing has to be done on P:
  • Let τ be the desired minimal distance between two solutions in objective space. In this first step, go over P and eliminate elements that are too close to each other (if needed). This leads to the new archive P ˜ .
  • Apply the Newton method  (A11) to all elements of P ˜ . Remove all dominated solutions, and elements that are too close to each other as in the first step. This leads to the archive  P ¯ .
  • To obtain a “global picture” of the part of the Pareto front that will be computed by PT, construct a partition of a potentially interesting subset S of the image space into a set of hyper-cubes (or k-dimensional boxes) with radius τ. This partition can easily be constructed by using a binary tree whose root represents S (see [45] for details, where, however, the partition is used in decision variable space). S is a box that is constructed from P̄ as follows: denote by m_i and M_i the minimal and maximal values of the i-th objective over all elements of P̄, respectively. Then the i-th element of the center of S is set to (m_i + M_i)/2 and the i-th element of its radius to (M_i − m_i)/2. In the computations, we will only allow storing one candidate solution within each of these boxes in the archive A, in order to guarantee a spread of the solutions.
Algorithm 1: ϵ-NSGA-II
 P ← pop_init()
 Evaluate each individual x_i ∈ P to obtain F(x_i) and ϕ(x_i)
 Compute ϵ(0) and set ϵ = ϵ(0)
 for t ← 1 to MaxGen do
   P′ ← crossover(P)         ▹ Parent selection through tournament and ϵ-constraint
   P′ ← mutation(P′)
   Q ← P ∪ P′
   Q_F ← Feasible(Q, ϵ), Q_I ← Infeasible(Q, ϵ)
   Q_F ← FastNonDominatedSorting(Q_F)
   Q_I ← SortConstraintViolation(Q_I)
   Fill P with Q_F, using crowding distance if necessary
   if |Q_F| < PopSize then
     Complete P with Q_I
   end if
   Update ϵ through Equation (21)
 end for
 P ← reduce(P)                           ▹ return 20 solutions
 Return P and F(P)
Then, in the first step, the element p^(1) ∈ P̄ with f_1(p^(1)) = m_1 is chosen as the starting point for PT. An external archive A is created that serves as the reference for PT and that contains the set of solutions returned after the application of ϵ-NSGA-II/PT. At the beginning, A := {p^(1)}. In parallel, a box collection C is created that contains all the (unique) boxes of the above partition that contain elements of A; in the first step, C is set to the box that contains p^(1). The application of PT leads to a sequence of candidate solutions x_i, i = 1, …, s. For each x_i, the following steps are performed:
  • If x_i is dominated by any element of A, then the current application of PT is stopped.
  • Else, it is checked whether the unique box that contains x_i is already contained in C. If this is not the case, this box is added to C and x_i is added to A. Otherwise, x_i is discarded and we proceed with x_{i+1}.
After this, the element of P̄ \ {p^(1)} with the smallest value of f_2 is taken as the starting point for PT, and we proceed as above. We continue in this way, cycling through the objectives, until all elements p ∈ P̄ have been chosen as starting points for PT. A sketch of the acceptance test used to fill A is shown below.
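A minimal sketch of the box-based archiving described above is given below: a candidate produced by PT is accepted only if it is not dominated by the archive and if its image falls into an empty box of radius τ. For simplicity, the box index is computed by rounding against the lower corner of S (the vector of the m_i values) instead of using the binary tree of [45]:

```python
import numpy as np

def dominated_by_archive(f, archive_F):
    """True if objective vector f is dominated by some member of the archive."""
    for fa in archive_F:
        if np.all(fa <= f) and np.any(fa < f):
            return True
    return False

def box_index(f, lower, tau):
    """Index of the hyper-box of radius tau that contains f (grid anchored at `lower`)."""
    return tuple(np.floor((np.asarray(f) - lower) / (2.0 * tau)).astype(int))

def try_insert(x, f, archive_X, archive_F, boxes, lower, tau):
    """Accept a PT candidate only if it is non-dominated and its box is still empty.
    Returns False when the current PT run should be stopped (dominated candidate)."""
    if dominated_by_archive(f, archive_F):
        return False                      # stop the current continuation run
    idx = box_index(f, lower, tau)
    if idx not in boxes:                  # one solution per box keeps the spread
        boxes.add(idx)
        archive_X.append(np.asarray(x))
        archive_F.append(np.asarray(f))
    return True
```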

5. Numerical Results

Here we present some numerical results that compare the behavior of a well-known mathematical programming technique and some state-of-the-art MOEAs on the selected test problems. As test functions, we have chosen the three test problems proposed above, Eq1-Quad from [11], the CZDT test suite, as well as a modification of the Das and Dennis (D&D) problem stated in [11]. First, we test the normal boundary intersection (NBI) method on the selected suite. With this experiment we want to show why the use of MOEAs is convenient and, moreover, why a special hybrid algorithm capable of solving this type of MOP is needed. The second experiment is the comparison of state-of-the-art MOEAs against the proposed ϵ-NSGA-II/PT. Here, a point x is considered to be feasible if ‖h(x)‖ < ε, where we have taken ε = 1e−04, which is common in evolutionary computation.

5.1. Performance Assessment

The multi-objective solvers were evaluated by adopting two performance indicators taken from the literature.
Δ p indicator
The Δ p indicator [46,47] can be viewed as an averaged Hausdorff distance between an approximation set and the real Pareto front of a MOP. This indicator is defined by slight modifications of the indicators Generational Distance (GD) [48] and Inverted Generational Distance (IGD) [49]. Formally, the Δ p indicator can be written as follows.
Let P = {x_1, …, x_{|P|}} be an approximation set and R = {r_1, …, r_{|R|}} be a discretization of the real Pareto front of a MOP; then
$$ \Delta_p(P, R) = \max\{ GD_p(P, R), \ IGD_p(P, R) \}, $$
where GD_p(P, R) = ( (1/|P|) Σ_{i=1}^{|P|} d_i^p )^{1/p} and IGD_p(P, R) = ( (1/|R|) Σ_{j=1}^{|R|} d̂_j^p )^{1/p}, and where d_i is the Euclidean distance from x_i to its closest member r ∈ R and d̂_j is the Euclidean distance from r_j to its closest member x ∈ P. Here we have chosen p = 2. The ideal indicator value is 0, and a low Δ_p value indicates that P is a good approximation of the Pareto front.
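For completeness, a compact Python implementation of Δ_p (with p = 2, as used in the experiments) could read as follows:

```python
import numpy as np

def delta_p(P, R, p=2):
    """Averaged Hausdorff distance Delta_p between an approximation P and a
    discretization R of the Pareto front; rows of P and R are objective vectors."""
    P, R = np.asarray(P, float), np.asarray(R, float)
    D = np.linalg.norm(P[:, None, :] - R[None, :, :], axis=2)   # pairwise distances
    d = D.min(axis=1)        # distance of each element of P to its closest r in R
    dhat = D.min(axis=0)     # distance of each r in R to its closest element of P
    gd = np.mean(d ** p) ** (1.0 / p)
    igd = np.mean(dhat ** p) ** (1.0 / p)
    return max(gd, igd)

# Example: a perfect approximation yields Delta_p = 0
R = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
print(delta_p(R, R))   # 0.0
```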
Feasibility Ratio
The feasibility ratio (I_F) indicator refers to the proportion of feasible solutions found in the final approximation P given by a MOEA. Mathematically, this indicator can be stated as follows.
$$ I_F(P) = \frac{P_f}{|P|}, $$
where P f denotes the number of feasible solutions in P and | P | represents the cardinality of the population P.

5.2. Solving Equality Constrained MOPs with Mathematical Programming Techniques

In this section, we test the NBI technique by applying it to the selected test problems. First, we compute the extreme points of the CHIM by optimizing each objective separately, taking the center of the box defined by each MOP as the initial solution of the optimization process. Then, we compute the CHIM using the previously obtained solutions; here we use a partition of 100 well-distributed points. Finally, we solve the NBI subproblem for each NBI weight. We compute the extreme points of the MOP and solve the NBI subproblems via the Matlab function fmincon. Table 1 shows the Δ_2 value and the computational cost, in terms of function evaluations, for each problem. The Feval column contains the total number of function evaluations required by the NBI technique, excluding the cost of optimizing each objective; the Feval f_i column contains the number of function evaluations required to optimize each objective. Figure A1 of Appendix B shows the NBI results for each test problem; each plot shows the real Pareto front, the computed CHIM, and the NBI solution.
From Table 1, we observe that for CZDT1, CZDT2, CZDT4, and Das and Dennis the Δ_2 value is close to zero, which means that NBI adequately solves these problems; on the other hand, the number of function evaluations needed is very high. For the remaining test functions, neither the performance indicator value nor the number of function evaluations is good. Thus, NBI is not capable of solving these MOPs.

5.3. Solving Equality Constrained MOPs with MOEAs

In this section, the proposed approach is compared against four state-of-the-art MOEAs that incorporate different constraint-handling strategies in their environmental selection procedures. We also include a comparison with a two-stage algorithm that combines the above-mentioned ϵ-NSGA-II (see Section 4.1) with the NBI technique.
  • NSGA-II. The popular non-dominated sorting genetic algorithm II [50] was adopted in our comparative study. NSGA-II employs a binary tournament based on feasibility in the mating selection procedure. In order to determine the next generation, the crowding comparison operator considers the feasibility of solutions. In our study, NSGA-II was run using the standard parameters given by its authors, i.e., P_c = 1, P_m = 1/n, η_c = 20, η_m = 20.
  • GDE3. The third evolution step of generalized differential evolution [51] was also adopted in our experimental analysis. GDE3 introduces the concept of constraint-domination explained before to discriminate solutions. GDE3 was employed using C R = 1 and F = 0.5 .
  • cMOEA/D-DE. We also adopted the first version of the multi-objective evolutionary algorithm based on decomposition for constrained multi-objective optimization [52]. cMOEA/D-DE utilizes a penalty function in order to satisfy the constraints of the problem. The penalty function is directly added to the scalarizing function employed by MOEA/D-DE [53] to approximate the PF of a constrained MOP. cMOEA/D-DE was run using CR = 1, F = 0.5, T = 0.1 × N, n_r = 0.01 × N, P_m = 1/n, η = 20, s_1 = 0.01, and s_2 = 20.
  • eMOEA/D-DE. A version of MOEA/D-DE based on the ε-constraint method for constrained optimization [54] was also adopted in our experimental study. eMOEA/D-DE employs the ε-constraint method to satisfy the constraints of the problem by obtaining information about feasible solutions in the neighborhoods of MOEA/D-DE. Thus, the neighboring solutions are used to define the ε-constraint value, which is dynamically adapted during the search process of eMOEA/D-DE. eMOEA/D-DE was run using the standard parameters suggested by its authors, i.e., CR = 1, F = 0.5, T = 0.1 × N, n_r = 0.01 × N, P_m = 1/n, η = 20, τ = 0.3, and δ_e = 0.7.
In the above description, N represents the population size, which is implicitly defined by the number of subproblems of the decomposition-based MOEAs (i.e., cMOEA/D-DE and eMOEA/D-DE). Such subproblems were generated by employing the simplex-lattice design [55] and using the penalty boundary intersection (PBI) approach with a penalty value θ = 5, as suggested in [14]. Therefore, the number of weight vectors is given by $N = \binom{H + M - 1}{M - 1}$, where M is the number of objective functions. Consequently, the setting of N is controlled by the parameter H. Here, we use H = 99 (for two-objective problems) and H = 23 (for three-objective problems), i.e., 100 and 300 weight vectors for problems having two and three objectives, respectively.
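To illustrate the setting of N, the following sketch enumerates the simplex-lattice weight vectors for given H and M and confirms that H = 99, M = 2 yields 100 vectors and H = 23, M = 3 yields 300 vectors (the enumeration via stars and bars is our own; any simplex-lattice generator would do):

```python
from itertools import combinations
from math import comb

def simplex_lattice(H, M):
    """All weight vectors with components in {0, 1/H, ..., 1} summing to 1."""
    weights = []
    # choose the positions of the "bars" in a stars-and-bars arrangement
    for bars in combinations(range(H + M - 1), M - 1):
        parts, prev = [], -1
        for b in list(bars) + [H + M - 1]:
            parts.append(b - prev - 1)
            prev = b
        weights.append([k / H for k in parts])
    return weights

for H, M in ((99, 2), (23, 3)):
    W = simplex_lattice(H, M)
    assert len(W) == comb(H + M - 1, M - 1)
    print(H, M, len(W))     # 100 vectors for (99, 2), 300 for (23, 3)
```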
For each MOP, 30 independent executions were performed with each MOEA.

5.4. Analysis

Our experiments have shown that the new hybrid needs between 15,000 and 17,000 function evaluations (FEs) to obtain good results for the bi-objective problems and 110,000 to 150,000 FEs for the three-objective problems. In order to make the comparison fair, we have set a budget of 20,000 FEs for all the selected MOEAs on the bi-objective problems and 150,000 FEs on the three-objective problems. For ϵ-NSGA-II/PT, we have split the budget for the bi-objective problems into 15,000 FEs for ϵ-NSGA-II and the remaining 5000 FEs for PT (and 100,000 plus 50,000 FEs for the three-objective problems). For the realization of PT, we have used automatic differentiation to compute the gradients, which allows us to express the cost of the continuation method in terms of FEs. For all experiments, we have executed 30 independent runs. We performed the Wilcoxon test as a statistical significance proof to validate the results, considering α = 0.05. Based on the test results, for the comparison between ϵ-NSGA-II/PT and any of the chosen MOEAs, the symbol “↑” means that the obtained results are statistically significant. The symbol “–” means that no Δ_2 value could be computed for the algorithm; this was the case if a MOEA found no feasible solutions in at least 25 of the runs.
Figure 4 and Figure 5 show the results for Eq1-Quad and Eq2-Quad, respectively. The reader is referred to Appendix C for more figures. The theoretical Pareto front is marked with dots (.), while the approximations obtained by the algorithms are marked with triangles (△). The Pareto fronts for the other test problems have been omitted due to space limitations; however, Table 2 shows the indicator values, Δ_2 and I_F, for all test problems. Smaller values of Δ_2 correspond to better qualities of the approximation, while larger values of I_F indicate more feasible solutions in the approximation. The best values are displayed in bold for each problem and each indicator. We can see that the new hybrid algorithm significantly outperforms the other algorithms on nine out of the ten test functions, considering the compromise between the Δ_2 value and the number of required function evaluations. Note that on CZDT1, CZDT2, and CZDT6, ϵ-NSGA-II/NBI performs better than our proposal, but it needs more function evaluations to achieve these results. In some cases, the MOEAs are not able to find any feasible solution within the given FE budget. ϵ-NSGA-II/PT, however, loses against cMOEA/D-DE on Eq2-Quad. This is due to the fact that the real Pareto front is disconnected, and in most of the runs the first stage of our proposal was not able to find adequate solutions near both components; therefore, in the second stage, we were in most cases only able to compute one of the two components. Note, however, that all the solutions of the final approximation are feasible in all the independent runs.

6. Conclusions

Here we have addressed the treatment of equality constrained MOPs. To this end, we first proposed three such problems that have different characteristics and that can serve as future benchmark problems. Next, we proposed a two-phase hybrid evolutionary algorithm, ϵ-NSGA-II/PT, which combines a variant of the well-known and widely used MOEA NSGA-II with the recently proposed continuation method Pareto Tracer (PT). More precisely, in a first step the evolutionary algorithm, ϵ-NSGA-II, computes a small but ideally well-spread set of solutions around the Pareto front of a given MOP. In the next step, PT takes over to refine the given rough approximation. Empirical results on several benchmark functions, including a comparison against the state of the art, have demonstrated that this new hybrid evolutionary algorithm is highly competitive and yields highly satisfying results using only a moderate budget of function evaluations.
For future work, it might be interesting to reduce the requirement of directly computing the gradient information, e.g., via approximation strategies like the one proposed in [41], and to apply the new hybrid to the treatment of real-world applications. For instance, our proposal could improve the results presented in [56], where the authors showed that some classic MOEAs have an unsatisfactory performance when solving the portfolio problem with complex equality constraints. On the other hand, equality constraints also appear in some single-objective optimization problems (for instance, the development of computer-controlled material [57], predictive scheduling [58], or production system design [59]), and more applications of our proposal could emerge from the extension of such problems to the multi-objective case.

Author Contributions

Conceptualization, O.S. and A.L.; evolutionary algorithms, A.P. and S.Z.-M.; local search, O.C. and L.U. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Conacyt Basic Science project No. 285599, SEP Cinvestav project No. 231, and IPN SIP project No. 20196444.

Acknowledgments

The authors acknowledge funding from the Conacyt Basic Science project No. 285599, SEP Cinvestav project No. 231, and IPN SIP project No. 20196444.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Pareto Tracer (PT)

Here we briefly describe the main steps of the continuation method PT for equality constrained MOPs; more details, e.g., on how to treat inequalities, can be found in [11].
Starting with a solution x_i, the next solution x_{i+1} is computed in the following two steps: first, the vector ν_μ that is tangent to the KKT set at x_i is chosen by solving
$$ \begin{pmatrix} W_\alpha & H^T \\ H & 0 \end{pmatrix} \begin{pmatrix} \nu_\mu \\ \xi \end{pmatrix} = \begin{pmatrix} -J^T \mu \\ 0 \end{pmatrix}, \tag{A1} $$
where J denotes the Jacobian of F at x, μ ∈ ℝ^k, ξ ∈ ℝ^p, and
$$ W_\alpha := \sum_{i=1}^{k} \alpha_i \nabla^2 f_i(x) \in \mathbb{R}^{n \times n}, \tag{A2} $$
where α ∈ ℝ^k is the vector of Lagrange multipliers, and
$$ H := \begin{pmatrix} \nabla h_1(x)^T \\ \vdots \\ \nabla h_p(x)^T \end{pmatrix} \in \mathbb{R}^{p \times n}. \tag{A3} $$
If the ranks of W α and H are maximal, then ν μ is determined uniquely. In that case, it holds
$$ \sum_{i=1}^{k} \mu_i = 0, \tag{A4} $$
as well as
$$ \nu_\mu = -W_\alpha^{-1} J^T \mu. \tag{A5} $$
For a given vector d ∈ ℝ^k, the direction ν_{μ_d} ∈ ℝ^n with J ν_{μ_d} = d can be computed using (A5):
$$ J \nu_{\mu_d} = -J W_\alpha^{-1} J^T \mu_d = d. \tag{A6} $$
Since (A4) also has to hold, μ_d is determined by solving
$$ \begin{pmatrix} -J W_\alpha^{-1} J^T \\ e^T \end{pmatrix} \mu_d = \begin{pmatrix} d \\ 0 \end{pmatrix}. \tag{A7} $$
The direction vector d can now be chosen to steer the search along the Pareto front. For this, we have to choose d orthogonal to α: let α = QR = (q_1, …, q_k) R be a QR decomposition of α; then define
$$ d_i := q_{i+1}, \quad i = 1, \dots, k-1, \tag{A8} $$
and select the μ_{d_i}'s by solving (A7). Then, the vectors ν_{μ_{d_i}} are tangent to P_Q, and the d_i's are the respective directions in objective space.
Using ν μ , we can now compute the predictor p i
$$ p_i := x_i + t_i \nu_\mu, \tag{A9} $$
with step size t i as follows:
$$ t_i := \frac{\tau}{\| J \nu_\mu \|}. \tag{A10} $$
Here, τ > 0 is a user-specified value that represents the desired distance between the images of two consecutive solutions in objective space.
In the next step, a Newton method is used to project p i to the KKT set. The Newton direction is hereby chosen as the solution of
$$ \min_{(\nu, \delta) \in \mathbb{R}^n \times \mathbb{R}} \delta \quad \text{s.t.} \quad \nabla f_i(x)^T \nu + \tfrac{1}{2} \nu^T \nabla^2 f_i(x) \nu \le \delta, \ i = 1, \dots, k, \qquad h_i(x) + \nabla h_i(x)^T \nu = 0, \ i = 1, \dots, p. \tag{A11} $$
Quasi-Newton techniques can be used to realize the algorithm without using Hessians. We will use this implementation here.
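To illustrate the predictor described by Equations (A1)–(A10), the following numerical sketch performs one predictor step for an equality constrained MOP. It is not the authors' implementation: for clarity, the linear map μ ↦ Jν is assembled column by column through the bordered system (A1) instead of using the closed form (A7), and the Newton corrector (A11) is omitted:

```python
import numpy as np

def pt_predictor(x, J, W, H, alpha, tau):
    """One predictor step of PT for an equality constrained MOP (sketch of
    Eqs. (A1)-(A10)); J is the k x n Jacobian of F, W the weighted Hessian
    W_alpha, H the p x n Jacobian of h, and alpha the KKT multiplier vector."""
    n, p, k = J.shape[1], H.shape[0], J.shape[0]

    def tangent(mu):
        # solve the bordered system (A1) for (nu, xi) given mu
        M = np.block([[W, H.T], [H, np.zeros((p, p))]])
        rhs = np.concatenate([-J.T @ mu, np.zeros(p)])
        return np.linalg.solve(M, rhs)[:n]

    # directions d_i orthogonal to alpha via a QR factorization, cf. (A8)
    Q, _ = np.linalg.qr(alpha.reshape(-1, 1), mode="complete")
    predictors = []
    for i in range(1, k):
        d = Q[:, i]
        # choose mu_d with J nu = d and sum(mu_d) = 0, cf. (A6)-(A7)
        L = np.array([J @ tangent(e) for e in np.eye(k)]).T   # columns: J nu(e_j)
        A = np.vstack([L, np.ones((1, k))])
        mu_d = np.linalg.lstsq(A, np.concatenate([d, [0.0]]), rcond=None)[0]
        nu = tangent(mu_d)
        t = tau / np.linalg.norm(J @ nu)          # step size (A10)
        predictors.append(x + t * nu)             # predictor point (A9)
    return predictors
```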

Appendix B. Graphical Results for the NBI

Figure A1. Pareto front approximations for the selected test functions using the NBI method.

Appendix C. Graphical Results for the ϵ-NSGA-II/PT

Figure A2 shows the numerical results obtained by ϵ-NSGA-II/PT on all ten considered test functions.
Figure A2. Best result of ϵ-NSGA-II/PT for all ten test functions.

Appendix D. Values of γ and η

Table A1 shows the values of γ and η for different dimensions n of the decision variable space. γ and η are used to describe the Pareto sets of Eq1-ZDT1 and Eq2-ZDT1.
Table A1. Values of γ and η.

n	γ	η
16	0.954380	0.863336
17	0.957029	0.848048
18	0.959445	0.832853
19	0.961656	0.817805
20	0.963686	0.802946
21	0.965554	0.788312
22	0.967278	0.773932
23	0.968874	0.759830
24	0.970353	0.746025
25	0.971727	0.732530
26	0.973006	0.719359
27	0.974199	0.706518
28	0.975314	0.694012
29	0.976357	0.681847
30	0.977336	0.670021
31	0.978253	0.658536
32	0.979116	0.647389

References

  1. Kalyanmoy, D. Multi Objective Optimization Using Evolutionary Algorithms; John Wiley and Sons: Hoboken, NJ, USA, 2001. [Google Scholar]
  2. Coello, C.A.C.; Lamont, G.B.; Van Veldhuizen, D.A. Evolutionary Algorithms for Solving Multi-Objective Problems; Springer: Cham, Switzerland, 2007; Volume 5. [Google Scholar]
  3. Ullah, A.S.B.; Sarker, R.; Lokan, C. Handling equality constraints in evolutionary optimization. Eur. J. Oper. Res. 2012, 221, 480–490. [Google Scholar] [CrossRef]
  4. Ponsich, A.; Jaimes, A.L.; Coello, C.A.C. A Survey on Multiobjective Evolutionary Algorithms for the Solution of the Portfolio Optimization Problem and Other Finance and Economics Applications. IEEE Trans. Evol. Comput. 2013, 17, 321–344. [Google Scholar] [CrossRef]
  5. Chen, X.L.; Wang, P.H.; Wang, Q.; Dong, Y.H. A Two-Stage strategy to handle equality constraints in ABC-based power economic dispatch problems. Soft Comput. 2019, 23, 6679–6696. [Google Scholar] [CrossRef]
  6. Liao, Z.; Rittscher, J. A multi-objective supplier selection model under stochastic demand conditions. Int. J. Prod. Econ. 2007, 105, 150–159. [Google Scholar] [CrossRef]
  7. Bernardino, H.S.; Barbosa, H.J.C.; Angelo, J.S. Differential Evolution with Adaptive Penalty and Tournament Selection for Optimization Including Linear Equality Constraints. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8. [Google Scholar] [CrossRef]
  8. Barbosa, H.J.C.; Bernardino, H.S.; Angelo, J.S. An improved differential evolution algorithm for optimization including linear equality constraints. Memetic Comput. 2018, 11, 317–329. [Google Scholar] [CrossRef]
  9. Ghannadpour, S.F.; Noori, S.; Tavakkoli-Moghaddam, R.; Ghoseiri, K. A multi-objective dynamic vehicle routing problem with fuzzy time windows: Model, solution and application. Appl. Soft Comput. 2014, 14, 504–527. [Google Scholar] [CrossRef]
  10. Cuate, O.; Uribe, L.; Lara, A.; Schütze, O. A benchmark for equality constrained multi-objective optimization. Swarm Evol. Comput. 2020, 52, 100619. [Google Scholar] [CrossRef]
  11. Martín, A.; Schütze, O. Pareto Tracer: A predictor–corrector method for multi-objective optimization problems. Eng. Optim. 2018, 50, 516–536. [Google Scholar] [CrossRef]
Figure 1. Pareto set and front of the Eq1-ZDT1 problem with n = 30. (a) Projection of the Pareto set onto the x1-x2 plane; (b) Pareto front.
Figure 2. Pareto set and front of the Eq2-ZDT1 problem with n = 30. (a) Projection of the Pareto set onto the x1-x2 plane; (b) Pareto front.
Figure 3. Constraints and feasible region for the torus problem.
Figure 4. Pareto front approximations for Eq1-Quad for the selected MOEAs using a budget of 150,000 function evaluations.
Figure 5. Pareto front approximations for Eq2-Quad for the selected multi-objective evolutionary algorithms (MOEAs) using a budget of 150,000 function evaluations.
Table 1. Values of Δ2 and number of function calls used by normal boundary intersection (NBI) for the selected test problems.

Function     Δ2        Feval     Feval f1   Feval f2   Feval f3
CZDT1        0.0010    25,759    314        314        -
CZDT2        0.0008    30,221    314        693        -
CZDT3        0.2094    4587      638        830        -
CZDT4        0.0011    22,543    186        186        -
CZDT6        0.0325    6902      106        240        -
D&D          0.1005    84,531    1430       498        -
Eq1-ZDT1     2.7541    89,838    4054       528        -
Eq2-ZDT1     2.6567    87,539    3735       181        -
Eq1-Quad     0.9019    22,679    357        231        649
Eq2-Quad     3.5263    10,449    1471       444        739
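For readability, recall that the Δ2 values reported in Table 1 (and in Table 2 below) are instances of the averaged Hausdorff distance Δp with p = 2. The following is only a reminder of the indicator's standard definition; the concrete reference fronts and settings used in the experiments are not restated here. For finite sets A (the computed front approximation) and B (a discretization of the true Pareto front):

\[
\mathrm{GD}_p(A,B) = \left( \frac{1}{|A|} \sum_{a \in A} d(a,B)^p \right)^{1/p}, \qquad
\mathrm{IGD}_p(A,B) = \left( \frac{1}{|B|} \sum_{b \in B} d(b,A)^p \right)^{1/p},
\]
\[
\Delta_p(A,B) = \max\left\{ \mathrm{GD}_p(A,B),\ \mathrm{IGD}_p(A,B) \right\},
\]

where d(a, B) denotes the distance (here taken, as is standard, as the Euclidean distance in objective space) from the point a to the set B.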
Table 2. Values of Δ2 and I_F on the selected test problems. Standard deviations of Δ2 are given in parentheses.

Problem     Method            Δ2       (std. dev)   I_F (F. ratio)   Feval
CZDT1       ϵ-NSGA-IIPT       0.0038   (0.0002)     1.0000           15,981
            ϵ-NSGA-IINBI ↑    0.0015   (0.0014)     0.9881           26,317
            c-MOEAD           –        (–)          0.0000           20,000
            e-MOEAD           –        (–)          0.0000           20,000
            GDE3              –        (–)          0.0000           20,000
            NSGA-II           –        (–)          0.0000           20,000
CZDT2       ϵ-NSGA-IIPT       0.0038   (0.0002)     1.0000           15,700
            ϵ-NSGA-IINBI ↑    0.0008   (0.0001)     0.9980           29,298
            c-MOEAD           –        (–)          0.0000           20,000
            e-MOEAD           –        (–)          0.0000           20,000
            GDE3              –        (–)          0.0000           20,000
            NSGA-II           –        (–)          0.0000           20,000
CZDT3       ϵ-NSGA-IIPT       0.0156   (0.0164)     1.0000           16,235.1
            ϵ-NSGA-IINBI ↑    0.2681   (0.2298)     0.8386           201,139
            c-MOEAD           –        (–)          0.0000           20,000
            e-MOEAD           –        (–)          0.0000           20,000
            GDE3              –        (–)          0.0000           20,000
            NSGA-II           –        (–)          0.0000           20,000
CZDT4       ϵ-NSGA-IIPT       0.0031   (0.0016)     1.0000           16,265.4
            ϵ-NSGA-IINBI ↑    0.0073   (0.0055)     0.9990           529,146
            c-MOEAD           –        (–)          0.0000           20,000
            e-MOEAD           –        (–)          0.0000           20,000
            GDE3              –        (–)          0.0000           20,000
            NSGA-II           –        (–)          0.0000           20,000
CZDT6       ϵ-NSGA-IIPT       0.0884   (0.0180)     1.0000           15,739.5
            ϵ-NSGA-IINBI ↑    0.0177   (0.0102)     0.8545           22,362
            c-MOEAD           –        (–)          0.0000           20,000
            e-MOEAD           –        (–)          0.0000           20,000
            GDE3              –        (–)          0.0000           20,000
            NSGA-II           –        (–)          0.0000           20,000
D&D         ϵ-NSGA-IIPT       0.3442   (0.6553)     1.0000           17,175.5
            ϵ-NSGA-IINBI ↑    1.2701   (1.8398)     0.9887           66,414
            c-MOEAD ↑         4.5168   (2.1485)     0.0270           20,000
            e-MOEAD           –        (–)          0.0000           20,000
            GDE3              –        (–)          0.0000           20,000
            NSGA-II           –        (–)          0.0000           20,000
Eq1-ZDT1    ϵ-NSGA-IIPT       0.0158   (0.0015)     1.0000           15,763.2
            ϵ-NSGA-IINBI ↑    0.7960   (0.4744)     1.0000           161,525
            c-MOEAD ↑         0.4088   (0.2504)     0.5060           20,000
            e-MOEAD ↑         0.1683   (0.0488)     0.3787           20,000
            GDE3 ↑            3.0997   (0.5521)     0.6653           20,000
            NSGA-II ↑         –        (–)          0.0013           20,000
Eq2-ZDT1    ϵ-NSGA-IIPT       0.1251   (0.0428)     1.0000           16,285.3
            ϵ-NSGA-IINBI ↑    0.6204   (0.4138)     1.0000           159,702
            c-MOEAD ↑         0.6624   (0.2215)     0.4700           20,000
            e-MOEAD ↑         0.7800   (0.1235)     0.4617           20,000
            GDE3 ↑            3.6144   (0.5234)     0.8873           20,000
            NSGA-II ↑         2.4662   (1.6368)     0.0037           20,000
Eq1-Quad    ϵ-NSGA-IIPT       0.1261   (0.0043)     1.0000           149,826.3
            ϵ-NSGA-IINBI ↑    0.1880   (0.0439)     0.2478           40,633
            c-MOEAD ↑         0.5714   (0.0953)     0.2533           150,000
            e-MOEAD ↑         3.1760   (0.6012)     0.0014           150,000
            GDE3 ↑            0.9133   (0.0931)     0.2666           150,000
            NSGA-II           –        (–)          0.0001           150,000
Eq2-Quad    ϵ-NSGA-IIPT       1.9969   (1.0378)     1.0000           149,049.2
            ϵ-NSGA-IINBI      –        (–)          0.0000           63,299
            c-MOEAD ↑         0.4737   (0.2000)     0.1583           150,000
            e-MOEAD           –        (–)          0.0000           150,000
            GDE3 ↑            2.8142   (1.1008)     0.0047           150,000
            NSGA-II           –        (–)          0.0000           150,000
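As a complement to the tables, the following minimal Python/NumPy sketch (not the authors' implementation) shows how a Δ2 value of the kind reported above can be computed from a front approximation A and a reference front B. The ZDT1-shaped reference front and the sample sizes below are assumptions made only for illustration.

import numpy as np

def averaged_hausdorff(A, B, p=2):
    """Averaged Hausdorff distance Delta_p between two finite point sets.
    A: (m, k) array, Pareto front approximation; B: (n, k) array, reference front."""
    # Pairwise Euclidean distances between all points of A and B in objective space.
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    gd = np.mean(np.min(D, axis=1) ** p) ** (1.0 / p)   # GD_p(A, B)
    igd = np.mean(np.min(D, axis=0) ** p) ** (1.0 / p)  # IGD_p(A, B)
    return max(gd, igd)

# Illustrative example (assumed data): reference front f2 = 1 - sqrt(f1), ZDT1-like shape.
t = np.linspace(0.0, 1.0, 500)
reference = np.column_stack((t, 1.0 - np.sqrt(t)))
s = np.linspace(0.0, 1.0, 100)
approximation = np.column_stack((s, 1.0 - np.sqrt(s) + 0.01))
print(averaged_hausdorff(approximation, reference, p=2))  # small Delta_2 -> good approximation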
