
An Interactive Framework to Compare Multi-criteria Optimization Algorithms: Preliminary Results on NSGA-II and MOPSO

David F. Dorado-Sevilla, Diego H. Peluffo-Ordóñez, Leandro L. Lorente-Leyva, Erick P. Herrera-Granda, and Israel D. Herrera-Granda

Abstract A multi-criteria optimization problem, depending on its formulation, consists of either minimizing or maximizing a group of at least two objective functions in order to find the best possible set of solutions. Several multi-criteria optimization methods exist, and the quality of the resulting solutions varies with the method used and the complexity of the posed problem. A bibliographical review allowed us to notice that methods derived from evolutionary computation deliver good results and are commonly used in research works. Although comparative studies among these optimization methods can be found, the conclusions they offer do not allow defining a general rule that determines when one method is better than another. Therefore, choosing a well-adapted optimization method can be a difficult task for non-experts in the field. This work proposes a graphical interface that allows users who are not experts in multi-objective optimization to interact with and compare the performance of the NSGA-II and MOPSO algorithms, chosen qualitatively from a group of five preselected algorithms as representatives of evolutionary algorithms and swarm intelligence. A comparison methodology is also proposed that lets the user analyze graphical and numerical results, observe the behavior of the algorithms, and determine the best-suited one according to their needs.

Keywords Evolutionary computation · Multi-objective optimization · Swarm intelligence

D. F. Dorado-Sevilla
Universidad de Nariño, Pasto, Colombia
D. H. Peluffo-Ordóñez · L. L. Lorente-Leyva (B) · E. P. Herrera-Granda · I. D. Herrera-Granda
SDAS Research Group, Ibarra, Ecuador
e-mail: leandro.lorente@sdas-group.com
D. H. Peluffo-Ordóñez
e-mail: dpeluffo@yachaytech.edu.ec
D. H. Peluffo-Ordóñez
Yachay Tech University, Urcuquí, Ecuador
Corporación Universitaria Autónoma de Nariño, Pasto, Colombia

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
V. Bindhu et al. (eds.), International Conference on Communication, Computing and Electronics Systems, Lecture Notes in Electrical Engineering 733, https://doi.org/10.1007/978-981-33-4909-4_5

1 Introduction

Most of the optimization problems that people commonly face involve more than one objective simultaneously. This type of problem does not admit a single solution that satisfies all the stated objectives, but rather a set of possible solutions. This set can be very extensive, and if the best results are desired, the objective functions must be optimized to find the subset that contains the best solutions. The quality of the obtained solution set can vary according to the applied method, taking into account that no general rule exists that allows declaring a method A better than a method B. This article describes the development of an interactive interface for comparing the NSGA-II [1] and MOPSO [2] optimization methods, which were selected after a review of the state of the art to represent two of the most used optimization branches: algorithms inspired by evolutionary theories and those inspired by swarm intelligence. Applications of these algorithms have been proposed for performance optimization and for adaptive, intelligent, energy-optimal routing in wireless networks [3, 4]. The interface, developed in MATLAB, allows its user to apply the mentioned algorithms to five different two-objective test problems and provides the information necessary to conclude which method best suits the user's needs.
This paper is organized as follows: Sect. 2 describes multi-criteria optimization
and metaheuristics. Section 3 shows the comparison methodology. Section 4 presents
the experimental setup, and Sect. 5 depicts the results and discussion. Finally, the
conclusion and the future scope are drawn in Sect. 6.

2 Multi-criteria Optimization

Multi-criteria optimization helps to reach a specific goal by looking for the set of solutions that best fits the criteria of the proposed problem. Depending on the characteristics of the problem, optimizing may involve maximizing or minimizing the objectives. Thus, a multi-criteria optimization problem is formally defined, in terms of minimization, as [5]:

Optimize  y = f(x) = (f_1(x), f_2(x), ..., f_k(x))

s.t.  g(x) = (g_1(x), ..., g_m(x)) ≤ 0    (1)

where

x = (x_1, ..., x_n) ∈ X ⊆ R^n
y = (y_1, ..., y_k) ∈ Y ⊆ R^k

The function f(x) comprises k objective functions and can represent real numbers, binary numbers, lists, to-do tasks, etc. The decision vector x contains n decision variables that identify each solution in the problem space X, which is the set of all possible elements of the problem. The m restrictions in g(x) limit the feasible search area in which the decision vector x is located. The objective vector y, with k objectives, belongs to the objective space Y, which is the co-domain of the objective functions. The values found after evaluating the objective functions at the decision variables are known as functionals.
To classify the best solutions of the solution set, the term dominance (Vilfredo Pareto, 1896) is used: a Pareto optimal solution is one that reaches an equilibrium in which it cannot be improved in one objective without deteriorating another. Formally, let u and v be vectors contained in the decision space and f(u) and f(v) their corresponding functionals; in minimization terms, the dominant vector is the one with the smaller functional. Then,

u ≺ v (u dominates v) if and only if f(u) < f(v)

v ≺ u (v dominates u) if and only if f(v) < f(u)

Two solutions are not comparable if neither vector dominates the other, that is:

u ∼ v (u and v are not comparable) if and only if f(u) ≮ f(v) ∧ f(v) ≮ f(u)

The optimization methods try to find, in the decision space, the set called the Pareto optimal set, defined as X_true = {x ∈ X | x is not dominated with respect to X}, and thereby to reach, in the objective space, the Pareto front, defined as Y_true = F(X_true) [6].
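As a minimal illustration (ours, not from the chapter), the dominance test and the extraction of the nondominated subset of a finite solution set can be sketched in Python as follows; the function names are our own:

```python
import numpy as np

def dominates(fu, fv):
    """True if functional fu dominates fv (minimization): fu is no worse
    in every objective and strictly better in at least one."""
    fu, fv = np.asarray(fu), np.asarray(fv)
    return bool(np.all(fu <= fv) and np.any(fu < fv))

def nondominated(F):
    """Indices of the nondominated rows of an (N x k) matrix of
    functionals, i.e., the current approximation of the Pareto set."""
    return [i for i in range(len(F))
            if not any(dominates(F[j], F[i]) for j in range(len(F)) if j != i)]

# Toy example with two objectives: [2, 2] is dominated by the other two,
# while [1, 2] and [2, 1] are mutually not comparable.
F = np.array([[1.0, 2.0], [2.0, 1.0], [2.0, 2.0]])
print(nondominated(F))  # -> [0, 1]
```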

2.1 Metaheuristics

Metaheuristics are algorithms that modify variables through time, guided by expert knowledge, traversing the feasible area of the decision space in an iterative manner. The best results are obtained by applying improvements to a set of initial solutions, based on the dominance concept mentioned above, to discard the least suitable solutions [7].

2.1.1 NSGA-II (Non-dominated Sorting Genetic Algorithm)

NSGA-II is a genetic algorithm chosen here to represent evolutionary algorithms [8]. It is widely used in the literature for solving multi-criteria optimization problems, as shown in [9, 10]. It is considered one of the best methods for its strategies to maintain elitism and diversity in the search for optimal solutions, following the Darwinian natural-selection analogy, which establishes that only the fittest individuals survive and reproduce to produce a new generation with improved traits. Algorithm 1 details the pseudocode proposed in [1].

Initially, the algorithm randomly creates an initial population P0 of N feasible solutions and then forms a population Q0, also of size N, using binary tournament selection, recombination, and mutation. The next step combines the two populations into R0 = P0 ∪ Q0, from which, using selection, mutation, and recombination, a new population P1 is born. The process is repeated in the following generations, as shown in Fig. 1.
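The survivor-selection step just described can be sketched compactly in Python. This is a schematic reading of the NSGA-II selection scheme, not the authors' implementation: a naive O(kN²) non-dominated sort plus the crowding-distance tie-breaker; all names are ours.

```python
import numpy as np

def dominates(fu, fv):
    # Minimization: no worse everywhere, strictly better somewhere.
    return bool(np.all(fu <= fv) and np.any(fu < fv))

def nondominated_sort(F):
    """Peel the (N x k) functionals F into successive fronts of indices."""
    remaining, fronts = set(range(len(F))), []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(F[j], F[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

def crowding_distance(F, front):
    """Crowding distance per index in a front; extremes get infinity."""
    d = {i: 0.0 for i in front}
    for m in range(F.shape[1]):
        order = sorted(front, key=lambda i: F[i, m])
        d[order[0]] = d[order[-1]] = np.inf
        span = float(F[order[-1], m] - F[order[0], m]) or 1.0
        for lo, mid, hi in zip(order, order[1:], order[2:]):
            d[mid] += (F[hi, m] - F[lo, m]) / span
    return d

def survivors(F, N):
    """Keep N of the merged population R_t = P_t U Q_t: fill by front
    rank, breaking ties in the last front by crowding distance."""
    chosen = []
    for front in nondominated_sort(F):
        if len(chosen) + len(front) <= N:
            chosen += front
        else:
            d = crowding_distance(F, front)
            chosen += sorted(front, key=lambda i: d[i], reverse=True)[:N - len(chosen)]
            break
    return chosen
```

The surviving indices define P_{t+1}, from which binary tournament selection, recombination, and mutation would then breed Q_{t+1}.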

2.1.2 Multi-objective Particle Swarm Optimization (MOPSO)

MOPSO is an algorithm representative of swarm intelligence, popular in the literature for solving multi-criteria optimization problems due to its good performance, as can be seen in [11, 12]. It is a collective intelligence algorithm with search behavior similar to the flight of starlings. These birds move in flocks and coordinate the direction and speed of their flight, so that a subgroup of the population, in response to an external stimulus, transmits the state of its movement clearly and immediately to the rest of the group. Each individual maintains a maximum susceptibility to any change in the flight of its neighbors, which react quickly to external stimuli and transmit the information to the whole group [13]. The pseudocode proposed in [2] is shown in Algorithm 2.

Fig. 1 NSGA-II algorithm search process



Fig. 2 Individual’s change of position

MOPSO searches for optimal solutions by imitating the behavior of a flock in search of food. The position of each individual is obtained from the following equations:

v_id(t + 1) = w · v_id(t) + c1 · r1 · [pbest_id − x_id(t)] + c2 · r2 · [gbest_d − x_id(t)]    (2)

x_id(t + 1) = x_id(t) + v_id(t + 1)    (3)

where v_id is the velocity of individual i in dimension d; c1 is the cognitive learning factor; c2 is the global learning factor; r1 and r2 are random values uniformly distributed in the range [0, 1]; x_id is the position of individual i in dimension d; pbest_id is the value in dimension d of the best position found by individual i; and gbest_d is the value in dimension d of the best position found by the population. The inertia weight w is important for the convergence of the algorithm. It is suggested that c1 and c2 take values in the range [1.5, 2] and w in the range [0.1, 0.5] [14]. The change of position is shown in Fig. 2.
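A direct transcription of Eqs. (2) and (3) in Python (a sketch under our own naming; in the full MOPSO the leader gbest is drawn from an external archive of nondominated solutions, which is omitted here for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.5, c1=1.5, c2=2.0):
    """One velocity/position update, Eqs. (2)-(3), for a whole swarm.
    x, v, pbest: (N x d) arrays; gbest: (d,) leader position."""
    r1 = rng.random(x.shape)  # uniform in [0, 1], per individual and dimension
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

# Toy usage: 5 individuals in 2 dimensions, starting from rest.
x, v = rng.random((5, 2)), np.zeros((5, 2))
x, v = pso_step(x, v, pbest=x.copy(), gbest=x[0])
```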

3 Comparison Methodology

It is necessary to establish guidelines that allow understanding how the two optimization methods perform against certain objective functions. To evaluate their performance, four metrics are used: the error ratio, the generational distance, and the spacing described below, which measure convergence to the optimal Pareto front and the dispersion of the solutions [15, 16], together with the execution time reported alongside the results.

3.1 Error Ratio (E)

This measure determines the portion of individuals in the set of solutions found by the algorithm, Y_known, that do not belong to the Pareto optimal set Y_true; a value of E = 0 is ideal. Formally, it is defined as follows:

E = ( Σ_{i=1}^{N} e_i ) / N    (4)

where e_i = 0 if the i-th vector of Y_known is in Y_true, and e_i = 1 otherwise.    (5)
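A sketch of Eqs. (4)-(5) in Python (ours; membership in Y_true is tested within a numerical tolerance, an assumption, since the chapter does not specify how equality is decided on sampled fronts):

```python
import numpy as np

def error_ratio(Y_known, Y_true, tol=1e-8):
    """Fraction of found functionals that are NOT on the true front."""
    Y_known, Y_true = np.asarray(Y_known), np.asarray(Y_true)
    e = [0 if np.min(np.linalg.norm(Y_true - y, axis=1)) <= tol else 1
         for y in Y_known]
    return sum(e) / len(e)
```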

3.2 Generational Distance (DG)

This measure determines how far the set of solutions found by the algorithm is from the Pareto optimal front. Mathematically, it is defined as:

DG = √( Σ_{i=1}^{N} d_i² ) / N    (6)

where d_i is the Euclidean distance between each objective vector in the found solution set and its closest corresponding member in the real optimal Pareto front.
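Equation (6) translates directly (a sketch with our own names, using the nearest-member distances just defined):

```python
import numpy as np

def generational_distance(Y_known, Y_true):
    """Eq. (6): sqrt of the summed squared nearest-member distances,
    divided by the number N of solutions found."""
    Y_known, Y_true = np.asarray(Y_known), np.asarray(Y_true)
    d = [np.min(np.linalg.norm(Y_true - y, axis=1)) for y in Y_known]
    return float(np.sqrt(np.sum(np.square(d))) / len(d))
```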

3.3 Spacing (S)

This measure verifies the dispersion of the elements of the Pareto set X found by the algorithm. Knowing the individuals at the extremes of the set, it uses the variance of the distances between neighboring vectors of the current set X:

S = √( (1/(n − 1)) · Σ_{i=1}^{n} ( d̄ − d_i )² )    (7)

For two objective functions, d_i = min_j ( |f_1^i(x) − f_1^j(x)| + |f_2^i(x) − f_2^j(x)| ) is the distance between neighboring solutions of Y_known, i, j = 1, 2, ..., n, where n is the number of individuals in the set and d̄ is the mean of all d_i.
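A sketch of Eq. (7) (ours), using the sum of absolute objective differences for d_i as in the definition above:

```python
import numpy as np

def spacing(Y_known):
    """Eq. (7): spread of nearest-neighbor distances in the found set."""
    Y = np.asarray(Y_known)
    n = len(Y)
    d = np.array([min(np.sum(np.abs(Y[i] - Y[j])) for j in range(n) if j != i)
                  for i in range(n)])
    return float(np.sqrt(np.sum((d.mean() - d) ** 2) / (n - 1)))
```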

4 Experimental Setup

To test the performance of the optimization algorithms, the test functions proposed by Zitzler, Deb, and Thiele in [17] are used. The functions ZDT1, ZDT2, ZDT3, ZDT4, and ZDT6 allow analyzing the behavior of the algorithms when optimizing five different Pareto fronts. The optimal fronts of the five functions are obtained for g(x) = 1.

4.1 ZDT1 Function

This function has a convex and continuous front, with n = 30 decision variables and x_i in the range [0, 1].

f_1(x) = x_1    (8)

g(x) = 1 + (9/(n − 1)) · Σ_{i=2}^{n} x_i,

h(f_1, g) = 1 − √(f_1/g),

f_2(x) = g(x) · h(f_1(x), g(x))    (9)
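Equations (8)-(9) translate directly into code; the remaining ZDT functions follow the same f_1/g/h pattern with different g and h. A sketch in Python (ours):

```python
import numpy as np

def zdt1(x):
    """ZDT1, Eqs. (8)-(9), for a decision vector of length n = 30
    with components in [0, 1]."""
    x = np.asarray(x)
    f1 = x[0]
    g = 1.0 + 9.0 / (len(x) - 1) * np.sum(x[1:])
    h = 1.0 - np.sqrt(f1 / g)
    return np.array([f1, g * h])

# On the optimal front g(x) = 1, i.e., x_2 = ... = x_n = 0:
x = np.zeros(30); x[0] = 0.25
print(zdt1(x))  # -> [0.25, 0.5], since f2 = 1 - sqrt(0.25)
```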

4.2 ZDT2 Function

This function has a non-convex and continuous Pareto front, with n = 30 decision variables and x_i in the range [0, 1].

f_1(x) = x_1,    (10)

g(x) = 1 + (9/(n − 1)) · Σ_{i=2}^{n} x_i,

h(f_1, g) = 1 − (f_1/g)²,

f_2(x) = g(x) · h(f_1(x), g(x))    (11)



4.3 ZDT3 Function

This function has a discontinuous Pareto front segmented into five parts, with n = 30 decision variables and x_i in the range [0, 1].

f_1(x) = x_1    (12)

g(x) = 1 + (9/(n − 1)) · Σ_{i=2}^{n} x_i,

h(f_1, g) = 1 − √(f_1/g) − (f_1/g) · sin(10π f_1),

f_2(x) = g(x) · h(f_1(x), g(x))    (13)

4.4 ZDT4 Function

This is a multi-modal function with several local Pareto fronts; its global front is convex and continuous. It has n = 10 decision variables, with x_1 in the range [0, 1] and x_i in the range [−5, 5] for i = 2, ..., n.

f_1(x) = x_1    (14)

g(x) = 1 + 10(n − 1) + Σ_{i=2}^{n} [ x_i² − 10 cos(4π x_i) ],

h(f_1, g) = 1 − √(f_1/g),

f_2(x) = g(x) · h(f_1(x), g(x))    (15)

4.5 ZDT6 Function

This function has a non-convex and continuous Pareto front, with n = 10 decision variables and x_i in the range [0, 1].

f_1(x) = 1 − exp(−4x_1) · sin⁶(6π x_1)    (16)

g(x) = 1 + 9 · [ ( Σ_{i=2}^{n} x_i ) / 9 ]^0.125,

h(f_1, g) = 1 − (f_1/g)²,

f_2(x) = g(x) · h(f_1(x), g(x))    (17)

4.6 Parameters

Tables 1 and 2 show the parameters used in the execution and simulation of NSGA-II
and MOPSO algorithms.

Table 1 Parameters for execution of the NSGA-II algorithm

  Parameter                   Value
  Population size             100
  m                           30
  Number of iterations        200
  Range, decision variables   [0 1]
  Crossover rate              0.8
  Mutation rate               0.033
  Number of mutants           20

Table 2 Parameters for execution of the MOPSO algorithm

  Parameter                   Value
  Population size             100
  Decision variables          30
  Number of iterations        200
  Range, decision variables   [0 1]
  w                           0.5
  Wdamp                       0.99
  Mutation rate               0.01
  c1                          1
  c2                          2

5 Results and Discussion

A graphical interface (Fig. 3) was developed in MATLAB, in which the user can apply the NSGA-II and MOPSO algorithms to the five ZDT test functions mentioned above and obtain numerical results for the proposed performance measures. In addition, the interface shows, iteration by iteration, how each algorithm tracks the best possible solutions in the search space. To execute the interface, certain evaluation parameters that guide the search of each algorithm must be introduced; these parameters are loaded automatically for each test function. Tables 3 and 4 show the parameters loaded in the interface for the two algorithms and their respective test functions.
Since the NSGA-II algorithm is population-based, the size N of this population must be defined, together with a stopping criterion, in this case a maximum of MaxIt iterations. To create each new population, the number of parents used to generate a group of descendants is defined, where Pc is the crossover rate. The number of mutants is defined as nm = round(Pm1 · N), where Pm1 is the mutation rate. Table 3 shows the parameters used in the NSGA-II to evaluate the five defined test functions.
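The derived quantities can be reproduced from the table values; only nm = round(Pm1 · N) is given in the text, while the even offspring count nc below is our assumption about a typical pairing rule:

```python
N, MaxIt, Pc, Pm1 = 100, 500, 0.67, 0.33  # Table 3, ZDT1 column

nm = round(Pm1 * N)         # number of mutants, as defined in the text
nc = 2 * round(Pc * N / 2)  # offspring bred in crossover pairs (our assumption)
print(nm, nc)               # -> 33 68
```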
As with the previous algorithm, in MOPSO the size N of the swarm of individuals that will take flight in search of optimal solutions must be defined, along with a MaxIt stopping parameter. Since the change of each individual's position is fundamental to the search procedure, the parameters w, c1, and c2 of Eq. (2) must also be set.
Fig. 3 Interactive comparator developed in a graphical interface using MATLAB. More information and MATLAB scripts available at: https://sites.google.com/site/degreethesisdiegopeluffo/interactive-comparator

Table 3 Evaluation parameters NSGA-II

  Parameters   ZDT1   ZDT2   ZDT3   ZDT4   ZDT6
  N            100    100    100    100    100
  MaxIt        500    500    500    500    500
  Pc           0.67   0.63   0.63   0.67   0.67
  Pm1          0.33   0.33   0.33   0.33   0.33

Table 4 Evaluation parameters MOPSO

  Parameters   ZDT1   ZDT2   ZDT3   ZDT4   ZDT6
  N            100    100    100    100    100
  MaxIt        100    100    100    100    200
  w            0.5    0.5    0.5    0.5    0.5
  c1           1.5    1.5    1.5    1.5    1.5
  c2           2      2      2.5    2      2.5
  Pm2          0.1    0.1    0.5    0.1    0.5

To generate diversity, the algorithm simulates turbulence in flight using a mutation operator: in each iteration, every individual is assigned a mutation probability (Pm2). Table 4 shows the parameters used to evaluate the five defined test functions.
To help the user draw conclusions from the results easily, a "Create Comparative Table" function was added to the interface; it automatically executes each algorithm ten times in a row and creates an Excel file containing a table with the numerical results of the performance measures for each execution.
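The batch routine behind that button can be approximated as follows (a sketch, not the interface's MATLAB code): run_algorithm is a hypothetical stand-in for one NSGA-II or MOPSO execution, assumed to return the final set of functionals; the metric functions are the ones sketched in Sect. 3, and writing .xlsx with pandas requires the openpyxl package:

```python
import time
import pandas as pd

def comparative_table(run_algorithm, Y_true, runs=10, path="results.xlsx"):
    """Execute the algorithm `runs` times and tabulate E, DG, S, and time."""
    rows = []
    for r in range(1, runs + 1):
        t0 = time.time()
        Y_known = run_algorithm()  # hypothetical single-run entry point
        elapsed = time.time() - t0
        rows.append({"Execution": r,
                     "E": error_ratio(Y_known, Y_true),
                     "DG": generational_distance(Y_known, Y_true),
                     "S": spacing(Y_known),
                     "Time": elapsed})
    df = pd.DataFrame(rows)
    # Append the average row, as in the interface's comparative table.
    avg = {"Execution": "Average", **df.drop(columns="Execution").mean().to_dict()}
    df = pd.concat([df, pd.DataFrame([avg])], ignore_index=True)
    df.to_excel(path, index=False)
    return df
```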

5.1 ZDT1 Results

Table 5 presents the results obtained with the ZDT1 executions. They show that the performance of MOPSO when optimizing a problem with a continuous, convex front is better than that of NSGA-II: its E and DG metrics indicate solutions very close to the real Pareto front, and the swarm intelligence algorithm is also much faster. According to the S metric, however, NSGA-II achieves a better (lower) dispersion than MOPSO.
Figures 4 and 5 show the Pareto front of the ZDT1 function and the solutions
distribution.
Table 5 ZDT1 results

  Execution  E_NSGA  E_MOPSO  DG_NSGA  DG_MOPSO  Time_NSGA  Time_MOPSO  S_NSGA  S_MOPSO
  1          1.000   0.000    0.012    0.000     318.006    44.153      0.007   0.035
  2          0.990   0.000    0.013    0.001     384.542    42.192      0.006   0.023
  3          0.970   0.000    0.016    0.001     466.853    40.592      0.009   0.023
  4          1.000   0.000    0.013    0.001     517.460    40.701      0.009   0.018
  5          1.000   0.000    0.021    0.001     574.547    41.070      0.016   0.022
  6          1.000   0.000    0.013    0.000     645.206    42.438      0.007   0.020
  7          1.000   0.000    0.018    0.001     712.545    41.705      0.013   0.021
  8          0.990   0.000    0.013    0.000     777.596    40.710      0.007   0.021
  9          0.970   0.000    0.013    0.001     893.973    42.343      0.007   0.018
  10         1.000   0.000    0.015    0.001     1196.094   39.570      0.008   0.019
  Average    0.992   0.000    0.015    0.001     648.682    41.547      0.009   0.022

Fig. 4 NSGA-II solutions with ZDT1 optimization versus continuous convex Pareto front

Fig. 5 MOPSO solutions with ZDT1 optimization versus continuous convex Pareto front

The figures above illustrate the analysis carried out for the ZDT1 function with the developed interface. The user can perform the same analysis in the interface for the remaining problems (ZDT2, ZDT3, ZDT4, and ZDT6), observing in this way the behavior of each algorithm under all the given conditions and determining which one achieves the best performance and results.

6 Conclusion and Future Scope

The results produced by an optimization algorithm can reach different quality levels depending on the variation of the evaluation parameters. Therefore, the comparative studies of multi-criteria optimization methods available in the literature, by basing their experiments on fixed evaluation parameters, limit the reader's analysis of the algorithms' performance.
The development of the interactive comparative interface offers the possibility of easily carrying out an optimization process in an intuitive way. The interface allows the user to choose the optimization algorithm and the test function to optimize according to the Pareto front of interest, and establishes a man–machine interaction through several inputs, defined as evaluation parameters, which can be modified to obtain a dynamic graphical and numerical response.

Using the mapping of the objective functions, the set of solutions found in the objective space at the end of each iteration can be observed, allowing the user to follow the search procedure in a dynamic way. The user can accurately measure the performance of the algorithms by evaluating the numerical results of the performance measures computed on the final set of solutions found. For the above reasons, a not necessarily expert user will gain a greater understanding of the optimization process and will more easily choose the appropriate method according to their needs. As future work, it is proposed to expand the number of optimization algorithms and to add new test functions, such as problems with three or more objectives.

Acknowledgements The authors are grateful for the support given by the SDAS Research Group (https://sdas-group.com/).

References

1. Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multiobjective genetic
algorithm: NSGA-II. IEEE Trans Evol Comput 6(2):182–197
2. Coello Coello C, Lechuga M (2002) MOPSO: a proposal for multiple objective particle swarm
optimization. In: Proceedings of the 2002 Congress on evolutionary computation, CEC’02, pp
1051–1056
3. Rahimunnisa K (2019) Hybridized genetic-simulated annealing algorithm for performance
optimization in wireless Adhoc network. J Soft Comput Paradigm 1(01):1–13
4. Shakya S, Pulchowk LN (2020) Intelligent and adaptive multi-objective optimization in
WANET using bio inspired algorithms. J Soft Comput Paradigm 2(01):13–23
5. Deb K, Agrawal S, Pratap A, Meyarivan T (2000) A fast elitist non-dominated sorting genetic
algorithm for multi-objective optimization: Nsga-II. In: International conference on parallel
problem solving from nature. Springer, pp 849–858
6. Veldhuizen DAV, Lamont GB (2000) Multiobjective evolutionary algorithms: analyzing the
state-of-the-art. Evolut Comput 8(2):125–147
7. Melián B, Pérez JAM, Vega JMM (2003) Metaheurísticas: Una visión global. Inteligencia
Artificial. Revista Iberoamericana de Inteligencia Artificial 7(19)
8. Kannan S, Baskar S, McCalley JD, Murugan P (2009) Application of NSGA-II algorithm to
generation expansion planning. IEEE Trans Power Syst 24(1):454–461
9. Kwong WY, Zhang PY, Romero D, Moran J, Morgenroth M, Amon C (2014) Multi-objective
wind farm layout optimization considering energy generation and noise propagation with Nsga-
II. J Mech Des 136(9):091010
10. Lorente-Leyva LL et al (2019) Optimization of the master production scheduling in a textile industry using genetic algorithm. In: Pérez García H, Sánchez González L, Castejón Limas M,

Quintián Pardo H, Corchado Rodríguez E (eds) HAIS 2019. LNCS 11734, Springer, Cham,
pp 674–685
11. Robles-Rodriguez C, Bideaux C, Guillouet S, Gorret N, Roux G, Molina-Jouve C, Aceves-
Lara CA (2016) Multi-objective particle swarm optimization (MOPSO) of lipid accumulation in
fed-batch cultures. In: 2016 24th Mediterranean conference on control and automation (MED).
IEEE, pp 979–984
12. Borhanazad H, Mekhilef S, Ganapathy VG, Modiri-Delshad M, Mirtaheri A (2014) Optimiza-
tion of micro-grid system using MOPSO. Renew Energy 71:295–306
13. Marro J (2011) Los estorninos de san lorenzo, o cómo mejorar la eficacia del grupo. Revista
Española De Física 25(2):62–64
14. Parsopoulos KE, Vrahatis MN (2002) Recent approaches to global optimization problems
through particle swarm optimization. Nat Comput 1:235–306
15. Van Veldhuizen DA, Lamont GB (1999) Multiobjective evolutionary algorithm test suites. In:
Proceedings of the 1999 ACM symposium on applied computing. ACM, pp 351–357
16. Eberhart R, Kennedy J (1995) A new optimizer using particle swarm theory. In: Proceedings of the sixth international symposium on micro machine and human science (MHS'95). IEEE, pp 39–43
17. Zitzler E, Deb K, Thiele L (2000) Comparison of multi-objective evolutionary algorithms: empirical results. Evolut Comput 8(2):173–195
