An Interactive Framework to Compare Multi-criteria Optimization Algorithms …
D. F. Dorado-Sevilla
Universidad de Nariño, Pasto, Colombia

D. H. Peluffo-Ordóñez · L. L. Lorente-Leyva (B) · E. P. Herrera-Granda · I. D. Herrera-Granda
SDAS Research Group, Ibarra, Ecuador
e-mail: leandro.lorente@sdas-group.com

D. H. Peluffo-Ordóñez
Yachay Tech University, Urcuquí, Ecuador
Corporación Universitaria Autónoma de Nariño, Pasto, Colombia
e-mail: dpeluffo@yachaytech.edu.ec
1 Introduction
Most optimization problems that arise in practice involve more than one objective simultaneously. Such problems do not admit a single solution that satisfies all the stated objectives, but rather a set of possible solutions. This set can be very large, so if the best results are desired, the objective functions must be optimized to find the subset that contains the best solutions. The quality of the obtained set of solutions varies with the applied method, taking into account that no general rule exists for declaring a method A better than a method B. This article describes the development of an interactive interface for comparing the NSGA-II [1] and MOPSO [2] optimization methods, which were selected, after a review of the state of the art, as representatives of two of the most widely used optimization branches: algorithms inspired by evolutionary theories and those inspired by swarm intelligence. Applications of these algorithms have been proposed for performance optimization and for adaptive, intelligent, energy-efficient routing in wireless networks [3, 4]. The interface, developed in MATLAB, allows its user to apply the mentioned algorithms to five different two-objective test problems and provides the information needed to conclude which method best suits the user's needs.
This paper is organized as follows: Sect. 2 describes multi-criteria optimization
and metaheuristics. Section 3 shows the comparison methodology. Section 4 presents
the experimental setup, and Sect. 5 depicts the results and discussion. Finally, the
conclusions and future scope are drawn in Sect. 6.
2 Multi-criteria Optimization
Multi-criteria optimization helps to reach a specific goal by looking for the set of solutions that best fits the criteria of the proposed problem. Depending on the characteristics of the problem, optimizing can involve maximizing or minimizing the objectives. Thus, a multi-criteria optimization problem, in terms of minimization, is formally defined as [5]:

$$\min \; \mathbf{y} = f(\mathbf{x}) = (f_1(\mathbf{x}), f_2(\mathbf{x}), \ldots, f_k(\mathbf{x})) \quad \text{subject to} \quad g(\mathbf{x}) = (g_1(\mathbf{x}), \ldots, g_m(\mathbf{x})) \le 0 \quad (1)$$

where

$$\mathbf{x} = (x_1, \ldots, x_n) \in X \subseteq \mathbb{R}^n$$
$$\mathbf{y} = (y_1, \ldots, y_k) \in Y \subseteq \mathbb{R}^k$$
The function f(x) comprises k objective functions, which can represent real numbers, binary numbers, lists, to-do tasks, etc. The decision vector x contains n decision variables that identify each solution in the problem's decision space X, which is the set of all possible elements of the problem. The m restrictions g(x) limit the feasible search region in which the vector x is located. The objective vector y, with k objectives, belongs to the objective space Y, which is the co-domain of the objective functions. The values found by evaluating the objective functions at the decision variables are known as functionals.
To classify the best solutions of the solution set, the notion of dominance is used (Vilfredo Pareto, 1896): a Pareto optimal solution is one that reaches an equilibrium in which it cannot be improved in one objective without deteriorating another. Formally, let u and v be vectors contained in the decision space and f(u) and f(v) their corresponding functionals. In minimization terms, the dominant vector is the one with the smaller functional:

$$\mathbf{u} \prec \mathbf{v} \; (\mathbf{u} \text{ dominates } \mathbf{v}) \iff f_i(\mathbf{u}) \le f_i(\mathbf{v}) \;\; \forall i \in \{1, \ldots, k\} \;\wedge\; \exists j : f_j(\mathbf{u}) < f_j(\mathbf{v})$$

Solutions are not comparable if neither of the vectors dominates the other, that is:

$$\mathbf{u} \sim \mathbf{v} \; (\mathbf{u} \text{ and } \mathbf{v} \text{ are not comparable}) \iff f(\mathbf{u}) \nprec f(\mathbf{v}) \wedge f(\mathbf{v}) \nprec f(\mathbf{u})$$

The optimization methods try to find, in the decision space, the set called the Pareto optimal set, defined as $X_{true} = \{x \in X \mid x \text{ is not dominated with respect to } X\}$, and thereby to reach the Pareto front in the objective space, defined as $Y_{true} = F(X_{true})$ [6].
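To make the dominance relation concrete, the following sketch, written in Python for illustration (the interface described in this chapter is implemented in MATLAB), tests minimization dominance between two objective vectors and filters a set of vectors down to its non-dominated members. The function names `dominates` and `pareto_front` are ours, not part of the interface.

```python
import numpy as np

def dominates(fu, fv):
    """True if fu dominates fv in minimization terms: no worse in
    every objective and strictly better in at least one."""
    fu, fv = np.asarray(fu), np.asarray(fv)
    return bool(np.all(fu <= fv) and np.any(fu < fv))

def pareto_front(F):
    """Keep only the non-dominated rows of an (N x k) matrix of functionals."""
    F = np.asarray(F)
    keep = [i for i in range(len(F))
            if not any(dominates(F[j], F[i]) for j in range(len(F)) if j != i)]
    return F[keep]

print(dominates([1.0, 1.0], [2.0, 1.0]))  # True: better in f1, equal in f2
print(dominates([1.0, 2.0], [2.0, 1.0]))  # False: the vectors are not comparable
```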
2.1 Metaheuristics
Metaheuristics are algorithms that iteratively modify variables over time, guided by expert knowledge, through the feasible region of the decision space. The best results are obtained by applying improvements to a set of initial solutions, using the dominance concept introduced above to discard the least suitable solutions [7].
The MOPSO performs the search for optimal solutions by imitating the behavior of a flock in search of food. The position of each individual is updated with the following equations:

$$v_{id} = w\,v_{id} + c_1 r_1 (pbest_{id} - x_{id}) + c_2 r_2 (gbest_d - x_{id}) \quad (2)$$
$$x_{id} = x_{id} + v_{id} \quad (3)$$
where $v_{id}$ is the velocity of individual i in dimension d; $c_1$ is the cognitive learning factor; $c_2$ is the global learning factor; $r_1$ and $r_2$ are random values uniformly distributed in the range [0, 1]; $x_{id}$ is the position of individual i in dimension d; $pbest_{id}$ is the value in dimension d of the best position found by individual i; and $gbest_d$ is the value in dimension d of the individual in the population with the best position. The inertia weight w is important for the convergence of the algorithm. It is suggested that $c_1$ and $c_2$ take values in the range [1.5, 2] and w in the range [0.1, 0.5] [14]. The change of position is shown in Fig. 2.
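A minimal sketch of the update of Eqs. (2) and (3) follows, again in Python for illustration. The values w = 0.4 and c1 = c2 = 1.7 are example settings of ours chosen from the suggested ranges, and a full MOPSO would draw the leader gbest from an external archive of non-dominated solutions rather than from a single global best as done here.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.4, c1=1.7, c2=1.7):
    """One velocity/position update per Eqs. (2)-(3) for an (N x d) swarm."""
    r1 = rng.random(x.shape)  # uniform in [0, 1], drawn per component
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

# Toy swarm: 5 individuals in 2 dimensions, starting at rest.
x = rng.random((5, 2))
v = np.zeros_like(x)
x, v = pso_step(x, v, pbest=x.copy(), gbest=x[0])
```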
3 Comparison Methodology
3.1 Error Ratio (E)

This measure determines the portion of individuals in the set of solutions found by the algorithm, $Y_{known}$, that does not belong to the Pareto optimal set $Y_{true}$; a value of E = 0 is ideal. Formally, it is defined as follows:

$$E = \frac{\sum_{i=1}^{N} e_i}{N} \quad (4)$$

$$e_i = \begin{cases} 0, & \text{if a vector of } Y_{known} \text{ is in } Y_{true} \\ 1, & \text{otherwise} \end{cases} \quad (5)$$
3.2 Generational Distance (DG)

This measure determines how far the solutions found by the algorithm are from the Pareto optimal front. Mathematically, it is defined as:

$$DG = \frac{\sqrt{\sum_{i=1}^{N} d_i^2}}{N} \quad (6)$$

where $d_i$ is the Euclidean distance between each objective vector belonging to the found solution set and its closest corresponding member of the true Pareto optimal front.
3.3 Spacing (S)

This measure verifies the dispersion of the elements of the Pareto set X found by the algorithm. Knowing the individuals at the extremes of the set, it uses the variance of the distance between neighboring vectors of the current set X:

$$S = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} \left(\bar{d} - d_i\right)^2} \quad (7)$$

For two objective functions, $d_i = \min_j \left( \left| f_1^i(x) - f_1^j(x) \right| + \left| f_2^i(x) - f_2^j(x) \right| \right)$, with $i, j = 1, 2, \ldots, n$, is the distance between consecutive solutions of $Y_{known}$, $\bar{d}$ is the mean of all $d_i$, and n is the number of individuals in the set.
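The three measures can be sketched as follows, assuming $Y_{known}$ and $Y_{true}$ are given as N × k NumPy arrays. The tolerance `tol` used to decide membership in $Y_{true}$ is our assumption, since the chapter does not state how that membership is tested numerically.

```python
import numpy as np
from scipy.spatial.distance import cdist

def error_ratio(Y_known, Y_true, tol=1e-3):
    """Eqs. (4)-(5): fraction of found vectors farther than tol from Y_true."""
    d = cdist(Y_known, Y_true).min(axis=1)  # distance to nearest true-front member
    return float((d > tol).mean())

def generational_distance(Y_known, Y_true):
    """Eq. (6): root of the summed squared nearest distances, divided by N."""
    d = cdist(Y_known, Y_true).min(axis=1)
    return float(np.sqrt((d ** 2).sum()) / len(Y_known))

def spacing(Y_known):
    """Eq. (7): variance-like spread of nearest-neighbour distances."""
    D = cdist(Y_known, Y_known, metric="cityblock")  # |f1 diff| + |f2 diff|
    np.fill_diagonal(D, np.inf)                      # ignore self-distances
    d = D.min(axis=1)
    return float(np.sqrt(((d.mean() - d) ** 2).sum() / (len(d) - 1)))
```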
4 Experimental Setup
To test the performance of the optimization algorithms, the test functions proposed by Zitzler, Deb, and Thiele in [17] are used. The functions ZDT1, ZDT2, ZDT3, ZDT4, and ZDT6 allow analyzing the behavior of the algorithms when optimizing five different Pareto fronts. Each function is built from an objective $f_1(x)$, a function $g(x)$, and a function $h(f_1, g)$, with the second objective given by $f_2(x) = g(x)\,h(f_1(x), g(x))$. The optimal fronts of the five functions are obtained for g(x) = 1.
4.1 ZDT1

This function has a convex and continuous front, with n = 30 decision variables and $x_i$ in the range [0, 1].

$$f_1(x) = x_1 \quad (8)$$

$$g(x) = 1 + \frac{9}{n-1} \sum_{i=2}^{n} x_i, \qquad h(f_1, g) = 1 - \sqrt{\frac{f_1}{g}} \quad (9)$$
4.2 ZDT2

This function has a non-convex and continuous Pareto front, with n = 30 decision variables and $x_i$ in the range [0, 1].

$$f_1(x) = x_1 \quad (10)$$

$$g(x) = 1 + \frac{9}{n-1} \sum_{i=2}^{n} x_i, \qquad h(f_1, g) = 1 - \left(\frac{f_1}{g}\right)^2 \quad (11)$$
4.3 ZDT3

This function has a discontinuous Pareto front segmented into five parts, with n = 30 decision variables and $x_i$ in the range [0, 1].

$$f_1(x) = x_1 \quad (12)$$

$$g(x) = 1 + \frac{9}{n-1} \sum_{i=2}^{n} x_i, \qquad h(f_1, g) = 1 - \sqrt{\frac{f_1}{g}} - \frac{f_1}{g} \sin(10\pi f_1) \quad (13)$$
4.4 ZDT4

This is a multi-modal function with many local Pareto fronts; its global front is convex and continuous. It has n = 10 decision variables, with $x_1$ in the range [0, 1] and $x_i$ in the range [−5, 5] for i = 2, ..., n.

$$f_1(x) = x_1 \quad (14)$$

$$g(x) = 1 + 10(n-1) + \sum_{i=2}^{n} \left( x_i^2 - 10\cos(4\pi x_i) \right), \qquad h(f_1, g) = 1 - \sqrt{\frac{f_1}{g}} \quad (15)$$
4.5 ZDT6

This function has a non-convex and continuous Pareto front, with n = 10 decision variables and $x_i$ in the range [0, 1].

$$f_1(x) = 1 - \exp(-4x_1)\sin^6(6\pi x_1) \quad (16)$$

$$g(x) = 1 + 9\left(\frac{\sum_{i=2}^{n} x_i}{9}\right)^{0.25}, \qquad h(f_1, g) = 1 - \left(\frac{f_1}{g}\right)^2 \quad (17)$$
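As an illustration of how the test problems are assembled from $f_1$, g, and h, a Python sketch of ZDT1 (Eqs. (8)–(9)) follows; the other four functions differ only in their $f_1$, g, and h expressions.

```python
import numpy as np

def zdt1(x):
    """ZDT1, Eqs. (8)-(9): n = 30 variables in [0, 1], convex continuous front."""
    x = np.asarray(x, dtype=float)
    f1 = x[0]
    g = 1.0 + 9.0 * x[1:].sum() / (len(x) - 1)
    h = 1.0 - np.sqrt(f1 / g)
    return f1, g * h  # second objective: f2 = g * h

# On the optimal front g(x) = 1, so f2 = 1 - sqrt(f1):
x = np.zeros(30)
x[0] = 0.25
print(zdt1(x))  # (0.25, 0.5)
```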
4.6 Parameters
Tables 1 and 2 show the parameters used in the execution and simulation of the NSGA-II and MOPSO algorithms. A graphical interface (Fig. 3) was developed in MATLAB, in which the user can apply the NSGA-II and MOPSO algorithms to the five ZDT test functions mentioned above and obtain numerical results for the proposed performance measures. In addition, the interface shows, iteration by iteration, how each algorithm tracks the best possible solutions in the search space. To execute the interface, certain evaluation parameters that guide the search of each algorithm must be introduced; these parameters are loaded automatically for each test function, and Tables 1 and 2 list the values loaded in the interface for the two algorithms and their respective test functions.
Since the NSGA-II algorithm is based on a population for the solution search, the size N of this population must be defined. A stopping parameter is needed to end the search, in this case a maximum of MaxIt iterations. To create the population, the number of parents used to generate a group of descendants is defined, where Pc is the crossover rate. The number of mutants is defined as $nm = \text{round}(Pm1 \cdot N)$, where Pm1 is the mutation rate. Table 3 shows the parameters used in the NSGA-II to evaluate the five defined test functions.
Likewise, in the MOPSO the size N of the set of individuals that will take flight in search of optimal solutions and a stopping parameter MaxIt must be defined. As for the search procedure, the change of each individual's position is fundamental, so the parameters w, c1, and c2 of Eq. (2) must be set. To generate diversity, the algorithm simulates turbulence in flight using a mutation operator: in each iteration, every individual is assigned a mutation probability Pm2. Table 4 shows the parameters used to evaluate the five defined test functions.
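The parameter sets could be collected as plain dictionaries, as in the sketch below. The numeric values are placeholders of ours; the actual values are those listed in Tables 3 and 4.

```python
# Hypothetical parameter sets in the spirit of Tables 3 and 4; the numeric
# values below are placeholders, not the ones used in the interface.
nsga2_params = {"N": 100, "MaxIt": 200, "Pc": 0.7, "Pm1": 0.3}
mopso_params = {"N": 100, "MaxIt": 200, "w": 0.4, "c1": 1.7, "c2": 1.7, "Pm2": 0.1}

# Number of mutants derived from the mutation rate, as in Sect. 4.6.
nm = round(nsga2_params["Pm1"] * nsga2_params["N"])  # -> 30
```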
To allow the user to draw conclusions from the results easily, a “Create Comparative Table” function is included in the interface, which automatically executes each algorithm ten times in a row and creates an Excel file containing a table with the numerical results of the performance measures for each execution.
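A rough sketch of what such a feature does is shown below, assuming hypothetical `run_nsga2` and `run_mopso` callables that each return a dictionary of per-run metrics. For portability, the sketch writes CSV rather than the Excel format produced by the MATLAB interface.

```python
import csv

def create_comparative_table(run_nsga2, run_mopso,
                             path="zdt1_comparison.csv", runs=10):
    """Execute each algorithm `runs` times and dump per-run metrics to a file.
    The callables are assumed to return dicts such as
    {"E": ..., "DG": ..., "Time": ..., "S": ...}."""
    with open(path, "w", newline="") as fh:
        writer = None
        for i in range(1, runs + 1):
            row = {"Execution": i}
            for label, run in (("NSGA", run_nsga2), ("MOPSO", run_mopso)):
                for metric, value in run().items():
                    row[f"{metric}_{label}"] = value
            if writer is None:  # header taken from the first assembled row
                writer = csv.DictWriter(fh, fieldnames=list(row.keys()))
                writer.writeheader()
            writer.writerow(row)
```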
5 Results and Discussion

Table 5 presents the results obtained from the ZDT1 executions. They show that the performance of MOPSO when optimizing a problem with a continuous, convex front is better than that of NSGA-II: its E and DG values indicate solutions very close to the true Pareto front. The swarm intelligence algorithm is also much faster. According to the S metric, however, NSGA-II achieves a better dispersion than MOPSO.
Figures 4 and 5 show the Pareto front of the ZDT1 function and the distribution of the solutions found.
These figures illustrate the analysis performed for the ZDT1 function with the developed interface. In the same way, the user can carry out the analysis for the rest of the presented problems (ZDT2, ZDT3, ZDT4, and ZDT6), observe the behavior of each algorithm under all the given conditions, and determine which algorithm performs best and obtains the best results.
Table 5 ZDT1 results

Execution  E_NSGA  E_MOPSO  DG_NSGA  DG_MOPSO  Time_NSGA  Time_MOPSO  S_NSGA  S_MOPSO
1          1.000   0.000    0.012    0.000     318.006    44.153      0.007   0.035
2          0.990   0.000    0.013    0.001     384.542    42.192      0.006   0.023
3          0.970   0.000    0.016    0.001     466.853    40.592      0.009   0.023
4          1.000   0.000    0.013    0.001     517.460    40.701      0.009   0.018
5          1.000   0.000    0.021    0.001     574.547    41.070      0.016   0.022
6          1.000   0.000    0.013    0.000     645.206    42.438      0.007   0.020
7          1.000   0.000    0.018    0.001     712.545    41.705      0.013   0.021
8          0.990   0.000    0.013    0.000     777.596    40.710      0.007   0.021
9          0.970   0.000    0.013    0.001     893.973    42.343      0.007   0.018
10         1.000   0.000    0.015    0.001     1196.094   39.570      0.008   0.019
Average    0.992   0.000    0.015    0.001     648.682    41.547      0.009   0.022
Fig. 4 NSGA-II solutions with ZDT1 optimization versus continuous convex Pareto front
Fig. 5 MOPSO solutions with ZDT1 optimization versus continuous convex Pareto front
The results produced by an optimization algorithm can reach different quality levels, depending on the variation of the evaluation parameters. Therefore, the comparative …
Acknowledgements The authors are greatly grateful for the support given by the SDAS Research
Group (https://sdas-group.com/).
References
1. Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multiobjective genetic
algorithm: NSGA-II. IEEE Trans Evol Comput 6(2):182–197
2. Coello Coello C, Lechuga M (2002) MOPSO: a proposal for multiple objective particle swarm
optimization. In: Proceedings of the 2002 Congress on evolutionary computation, CEC’02, pp
1051–1056
3. Rahimunnisa K (2019) Hybridized genetic-simulated annealing algorithm for performance
optimization in wireless Adhoc network. J Soft Comput Paradigm 1(01):1–13
4. Shakya S, Pulchowk LN (2020) Intelligent and adaptive multi-objective optimization in
WANET using bio inspired algorithms. J Soft Comput Paradigm 2(01):13–23
5. Deb K, Agrawal S, Pratap A, Meyarivan T (2000) A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II. In: International conference on parallel problem solving from nature. Springer, pp 849–858
6. Veldhuizen DAV, Lamont GB (2000) Multiobjective evolutionary algorithms: analyzing the
state-of-the-art. Evolut Comput 8(2):125–147
7. Melián B, Pérez JAM, Vega JMM (2003) Metaheurísticas: Una visión global. Inteligencia
Artificial. Revista Iberoamericana de Inteligencia Artificial 7(19)
8. Kannan S, Baskar S, McCalley JD, Murugan P (2009) Application of NSGA-II algorithm to
generation expansion planning. IEEE Trans Power Syst 24(1):454–461
9. Kwong WY, Zhang PY, Romero D, Moran J, Morgenroth M, Amon C (2014) Multi-objective wind farm layout optimization considering energy generation and noise propagation with NSGA-II. J Mech Des 136(9):091010
10. Lorente-Leyva LL et al (2019) Optimization of the master production scheduling in a textile industry using genetic algorithm. In: Pérez García H, Sánchez González L, Castejón Limas M, Quintián Pardo H, Corchado Rodríguez E (eds) HAIS 2019. LNCS 11734. Springer, Cham, pp 674–685
11. Robles-Rodriguez C, Bideaux C, Guillouet S, Gorret N, Roux G, Molina-Jouve C, Aceves-
Lara CA (2016) Multi-objective particle swarm optimization (MOPSO) of lipid accumulation in
fed-batch cultures. In: 2016 24th Mediterranean conference on control and automation (MED).
IEEE, pp 979–984
12. Borhanazad H, Mekhilef S, Ganapathy VG, Modiri-Delshad M, Mirtaheri A (2014) Optimiza-
tion of micro-grid system using MOPSO. Renew Energy 71:295–306
13. Marro J (2011) Los estorninos de san lorenzo, o cómo mejorar la eficacia del grupo. Revista
Española De Física 25(2):62–64
14. Parsopoulos KE, Vrahatis MN (2002) Recent approaches to global optimization problems
through particle swarm optimization. Nat Comput 1:235–306
15. Van Veldhuizen DA, Lamont GB (1999) Multiobjective evolutionary algorithm test suites. In:
Proceedings of the 1999 ACM symposium on applied computing. ACM, pp 351–357
16. Eberhart R, Kennedy J (1995) A new optimizer using particle swarm theory. In: Proceedings of the Sixth International Symposium on Micro Machine and Human Science (MHS'95). IEEE, pp 39–43
17. Zitzler E, Deb K, Thiele L (2000) Comparison of multi-objective evolutionary algorithms: empirical results. Evolut Comput 8(2):173–195