Algorithms 2023, 16, 406
Article
A Hybrid Discrete Memetic Algorithm for Solving Flow-Shop
Scheduling Problems
Levente Fazekas 1,*,†, Boldizsár Tüű-Szabó 2, László T. Kóczy 2, Olivér Hornyák 1 and Károly Nehéz 1,†
Abstract: Flow-shop scheduling problems are classic examples of multi-resource and multi-operation
scheduling problems where the objective is to minimize the makespan. Because of the high complexity
and intractability of the problem, apart from some exceptional cases, there are no explicit algorithms
for finding the optimal permutation in multi-machine environments. Therefore, different heuristic
approaches, including evolutionary and memetic algorithms, are used to obtain the solution—or
at least, a close enough approximation of the optimum. This paper proposes a novel approach: a
novel combination of two rather efficient such heuristics, the discrete bacterial memetic evolutionary
algorithm (DBMEA) proposed earlier by our group, and a conveniently modified heuristics, the
Monte Carlo tree method. By their nested combination a new algorithm was obtained: the hybrid
discrete bacterial memetic evolutionary algorithm (HDBMEA), which was extensively tested on the
Taillard benchmark data set. Our results have been compared against all important other approaches
published in the literature, and we found that this novel compound method produces good results
overall and, in some cases, even better approximations of the optimum than any of the so far
proposed solutions.

Keywords: flow-shop; scheduling problem; discrete bacterial memetic evolutionary algorithm; hybrid DBMEA; Monte Carlo tree search; simulated annealing

Citation: Fazekas, L.; Tüű-Szabó, B.; Kóczy, L.T.; Hornyák, O.; Nehéz, K. A Hybrid Discrete Memetic Algorithm for Solving Flow-Shop Scheduling Problems. Algorithms 2023, 16, 406. https://doi.org/10.3390/a16090406
even heuristics, leading to more efficient problem solvers, an observation that fits well with
the basic concept of applying memetic and hybrid heuristic algorithms.
Above, the GA was mentioned. The bacterial evolutionary algorithm (BEA) may be
considered as its further development, where some of the operators have been changed in
a way inspired by the reproduction of bacteria [10], and so the algorithm became faster and
produced unambiguously better approximations [4].
Let us provide an overview of the basic algorithm. As mentioned above, the BEA
model replicates the process of bacterial gene crossing as seen in nature. This process
has two essential operations: bacterial mutation and gene transfer. At the start of the
algorithm, an initial random population is generated, where each bacterium represents
a candidate for the solution in an encoded form. The new generation is created by using
the two operations mentioned above. The individual bacteria from the second generation
onward are ranked according to an objective function. Some bacteria are terminated, and
some are prioritized based on their traits. This process is repeated until the termination
criteria are met.
Its memetic version was first applied for continuous optimization [11]. This was an ex-
tension of the original memetic algorithm to the class of bacterial evolutionary approaches,
where the BEA replaced the GA. Various benchmark tests proved that the algorithm was
very effective for optimization tasks in many other application domains. Several examples
showed that the BEA could be applied in various fields with success, for example,
timetable creation problems, automatic data clustering [12], and determining optimal 3D
structures [13] as well. The first implementations of the bacterial memetic evolutionary
algorithms (BMEA) were used for finding the optimal parameters of fuzzy rule-based
systems. The BMEA was later successfully extended to discrete problems as well, among
others the flow-shop scheduling problem itself, where it produced considerably better
efficiency for some problems. A novel hybrid algorithm version with a BEA wrapper and nested SA
showed remarkably good results on numerous benchmarks [3].
As discussed above, initially, the combination of bacterial evolutionary algorithms
with local search methods was proposed only for the very special aim of increasing the
ability to find the optimal estimation of fuzzy rules. In the original BEA paper, mechanical,
chemical, and electrical engineering problems were presented as benchmark applications.
In addition, a fascinating mathematical research field, namely, solving transcendental
functions, completed the list of first applications. In the first version of BMEA, where
the local search method applied was the second-order gradient method, the Levenberg-Marquardt [14]
algorithm was tested on all these benchmarks.
The benchmark tests with the BMEA obtained better results than any former ones in
the literature and outscored all other approaches used to estimate the parameters of the
trapezoidal fuzzy membership functions [15]. Later, first-order gradient-based algorithms
were also investigated as local search, which seemed promising [11].
Next, the idea of the BMEA was extended towards discrete problems, where gradient-based
local search was replaced by traditional discrete optimization, such as exhaustive
search in bounded sub-graphs. This new family of algorithms was named discrete bacterial
memetic evolutionary algorithms (DBMEA). DBMEA algorithms were first tested
on various extensions of the travelling salesman problem and produced rather promising
results. DBMEA algorithms have also been investigated on discrete permutation-based
problems, where the local search methods were matching bounded discrete optimization
techniques.
In a GA-based discrete memetic type approach, n-opt local search algorithms were sug-
gested by Yamada et al. [16]. The investigations with simulations reduced the algorithms to
the consecutive running of 2-opt and 3-opt methods. In the case of n ≥ 4, the computational
time was too high, thus limiting the practical usability of the algorithm. The 2-opt
algorithm was first applied to solve various routing problems [17], where in the bounded
sub-graph, two graph edges were always swapped in the local search phase. It is worth
noting that 3-opt [18] is similar to the 2-opt operator; here, three edges are always deleted
and replaced in a single step, which results in seven different ways to reconnect the graph,
in order to find the local optimum.
In this research, another class of discrete optimization problems, the flow-shop, came
into focus. Thus, in Section 2, the paper will give a detailed explanation of the
flow-shop scheduling problem. Section 3 is a detailed survey of commonly used classic
and state-of-the-art algorithms. The algorithms used by the novel HDBMEA algorithm
proposed in this paper will be detailed in Sections 4–6. Section 7 deals with the results
from the Taillard flow-shop benchmark set and with the comparison to relevant algorithms
selected from Section 3. Section 8 extensively analyses each parameter’s impact on the
HDBMEA algorithm’s scheduling performance and run-time. The result of our analysis is
a chosen parameter set with which we prove the abilities of our proposed algorithm.
The first scheduled job does not have to wait for other jobs and is available from
the start. The completion time of the first job on the current machine is the sum of its
previous operations on preceding machines in the chain and its processing time on the
current machine:
C_{j_1, r} = \sum_{k=1}^{r} p_{j_1, k}, \quad r = 1, \dots, N_R. (3)
Figure 1 shows how the first scheduled job has only one contingency: its own opera-
tions on previous machines.
[Figure 1: operations j_{1,1}, j_{1,2}, j_{1,3} of the first scheduled job on machines 1-3.]
Jobs on the first resource are only contingent on jobs on the same resource; therefore,
the completion time of the job is the sum of the processing times of previously scheduled
jobs and its own processing time:
C_{j_i, 1} = \sum_{k=1}^{i} p_{j_k, 1}, \quad i = 1, \dots, N_J. (4)
Figure 2 shows how there is only one contingency on the first machine; therefore,
operations can follow each other without downtime.
When it comes to subsequent jobs on subsequent machines, (i > 1, r > 1) the com-
pletion times depend on the same job on previous machines (Equation (2)) and previously
scheduled jobs on previous machines in the chain (Equation (1)):
C_{j_i, r} = \max\left(C_{j_i, r-1},\; C_{j_{i-1}, r}\right) + p_{j_i, r}, \quad r = 2, \dots, N_R;\ i = 2, \dots, N_J. (5)
Figure 3 shows the contingency of jobs. j2,2 is contingent upon j1,2 and j2,1 , where it
has to wait for j2,1 to finish despite the availability of machine 2. j2,3 is contingent upon j2,2
and j1,3 , where it has to wait for machine 3 to become available despite being completed on
machine 2.
The completion time of the last job on the last resource is called the makespan, and one
of the most widely used objective functions is to minimize it:

C_{\max} = C_{j_{N_J}, N_R}. (6)

Figure 4 illustrates how the completion time of the last scheduled job on the last
machine in the manufacturing chain determines the makespan.
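The completion-time recurrence above translates directly into a short routine; a minimal sketch, with illustrative processing times that are not taken from any benchmark:

```python
# Completion-time recurrence for the permutation flow-shop, following
# Equations (3)-(5); the makespan of Equation (6) is the last entry.
# The processing times below are assumed values, not benchmark data.

def makespan(p):
    """Return C_max for a job sequence with processing times p (jobs x machines)."""
    n_jobs, n_machines = len(p), len(p[0])
    C = [[0] * n_machines for _ in range(n_jobs)]
    for i in range(n_jobs):
        for r in range(n_machines):
            same_job = C[i][r - 1] if r > 0 else 0   # same job on the previous machine
            prev_job = C[i - 1][r] if i > 0 else 0   # previously scheduled job on this machine
            C[i][r] = max(same_job, prev_job) + p[i][r]   # Equation (5)
    return C[-1][-1]   # Equation (6): last job on the last machine

times = [[3, 2, 4], [2, 4, 1], [4, 1, 3]]   # 3 jobs x 3 machines
print(makespan(times))   # -> 13
```

Evaluating a candidate permutation then amounts to reordering the rows of the processing-time matrix and calling this function.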
By the acceptance of worse solutions, simulated annealing is able to escape local optima and has
the potential to find near-optimal solutions for complex optimization problems. There are
simulated-annealing-based scheduling algorithms, e.g., [25-30]. Particle swarm optimization
(PSO) describes a group of optimization algorithms where the candidate solutions are
particles that roam the search space iteratively. There are many PSO variants. The particle
swarm optimization 1 (PSO1) algorithm is a PSO that uses the smallest position value
(SPV) [31] heuristic approach and the VNS [32] algorithm to improve the generated permu-
tations [33]. The particle swarm optimization 2 (PSO2) algorithm is a simple PSO algorithm
proposed by Ching-Jong Liao [34]. This approach introduces a new method to transform
the particle into a new permutation. The combinatorial particle swarm optimization (CPSO)
algorithm improves the simple PSO algorithm to optimize for integer-based combinatorial
problems. Its characteristics differ from the standard PSO algorithm in each particle’s defi-
nition and speed, and in the generation of new solutions [35]. The hybrid adaptive particle
swarm optimization (HAPSO) algorithm uses an approach that optimizes every parameter
of the PSO. The velocity coefficients, iteration count of the local search, upper and lower
bounds of speed, and particle count are optimized during runtime, resulting in a new
efficient adaptive method [36]. The PSO with expanding neighborhood topology (PSOENT)
algorithm uses the neighborhood topology of a local and a global search algorithm. First,
the algorithm generates two neighbours for every particle. The number of neighbours
increases every iteration until it reaches the number of particles. If the termination criteria
are not met, every neighbour is reinitialized. The search ends when the termination criteria
are met. These steps ensure that no two consecutive iterations have identical neighbour
counts. This algorithm relies on two meta-heuristic approaches: variable neighborhood
search (VNS) [32] and path relinking (PR) [37,38]. VNS is used to improve the solutions of
each particle, while the PR strategy improves the permutation of the best solution [39]. The
ant colony optimization (ACO) [40] algorithm is a virtual ant colony-based optimization
approach introduced by Marco Dorigo in 1992 [41,42]. Hayat et al., in 2023 [43], introduced
a new hybridization of particle swarm optimization (HPSO) using variable neighborhood
search and simulated annealing to improve search results further.
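The smallest position value (SPV) rule mentioned above maps a continuous particle position to a job permutation by ranking the dimensions; a minimal sketch with an assumed example position:

```python
# Smallest position value (SPV) rule: the dimension with the smallest
# position value is scheduled first, turning a real-valued PSO particle
# into a job permutation. The particle below is an illustrative assumption.

def spv(position):
    """Map a real-valued particle to a permutation of job indices."""
    return sorted(range(len(position)), key=lambda j: position[j])

particle = [1.8, -0.4, 0.9, 2.5]
print(spv(particle))   # -> [1, 2, 0, 3]
```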
N_{seg} = \left\lfloor \frac{|P_0|}{I_{seg}} \right\rfloor (8)
The number of segments equals the floor of the bacterium's length divided by the
segment length. For each permutation of the population, a random number
r, r ∈ [0; 1], is chosen. If this random number is below the cohesive/loose rate
R, the bacterium undergoes a coherent segment mutation; if it is greater than or equal
to this value, the bacterium is operated on by a loose segment mutation. Each mutation
algorithm is called on a bacterium Nseg times. The segment length shifts the segments for
the coherent segment mutation in each iteration. Every part of the bacterium is mutated. No
shifting is needed for the loose segment mutation since its random permutation generation
provides an even distribution of segments across the bacterium. The original bacterium
gets overridden if the mutation algorithm generates a better alternative.
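The dispatch between the two mutation operators can be sketched as follows; `coherent_mutation` and `loose_mutation` are hypothetical stand-ins for the actual operators, and their signatures are our assumption:

```python
import math
import random

# Mutation dispatch sketch: each bacterium undergoes either coherent or
# loose segment mutation depending on a random draw against the rate R,
# and the chosen operator is applied N_seg times (Equation (8)).
# coherent_mutation / loose_mutation are placeholder callables.

def mutate_population(population, I_seg, R, coherent_mutation, loose_mutation):
    n_seg = math.floor(len(population[0]) / I_seg)   # Equation (8)
    for idx, bacterium in enumerate(population):
        r = random.random()                           # r in [0, 1)
        op = coherent_mutation if r < R else loose_mutation
        for k in range(n_seg):                        # operator runs N_seg times
            bacterium = op(bacterium, k, I_seg)
        population[idx] = bacterium
    return population
```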
• P: the population consisting of permutations;
• P0 : the first bacterium of the population;
• Nmutants : the number of mutants generated for each mutation operation;
• Iseg : the length of the segment to be mutated;
• Nseg : the number of segments for each bacterium;
• R: the cohesive/loose rate;
• x: an element of P population;
• x 0 : a mutant of x;
• f : the objective function.
Figure 5 shows how cohesive segments are chosen for mutation. The segment gets
shifted across the entire bacterium without overlap. The starting index of the segment is
calculated with the following equation:
S_{seg} = I_{seg} \cdot i, \quad i = 0, 1, \dots, N_{seg} - 1. (9)

Figure 5. Cohesive segments chosen in Algorithm 2. (a) Chosen segment, when S_seg = I_seg · i = 0; (b) chosen segment, when S_seg = I_seg · i = 3; (c) chosen segment, when S_seg = I_seg · i = 6; and (d) all the segments chosen throughout the algorithm. In all panels, |x| = 9, I_seg = 3, and the example bacterium is 1 4 2 8 5 9 3 7 6.
• x 0 : a mutated bacterium;
• f : the objective function.
[Figure 6: coherent segment mutation example with |x| = 9, S_seg = 0, I_seg = 3 — the original bacterium 1 4 2 8 5 9 3 7 6 yields a first variation with the segment reversed (2 4 1 8 5 9 3 7 6) and further variations with the segment shuffled (1 2 4 8 5 9 3 7 6 and 4 1 2 8 5 9 3 7 6).]
The loose segment mutation operates similarly to the coherent segment mutation
(Algorithm 3). The only difference is the non-cohesive segment selection. At the start of the
operation, a random segment with length Iseg is chosen from the bacterium x. The segment
is first reversed to generate the first mutant. All other mutants have the selected segment
shuffled randomly. According to the objective function f , the best one is chosen from all
mutants and the original bacterium. Algorithm 4 defines this process.
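The loose segment mutation just described can be sketched as follows; the function name and signature are illustrative rather than the paper's implementation:

```python
import random

# Loose segment mutation sketch (Algorithm 4): segment indexes are sampled
# at random (non-cohesive), the first mutant reverses the selected values,
# the remaining mutants shuffle them, and the best candidate under the
# objective f replaces the original bacterium.

def loose_segment_mutation(x, I_seg, n_mutants, f):
    idx = sorted(random.sample(range(len(x)), I_seg))   # non-cohesive segment
    values = [x[i] for i in idx]
    candidates = [x]                                     # original competes too
    for m in range(n_mutants):
        new_vals = list(reversed(values)) if m == 0 else random.sample(values, len(values))
        mutant = list(x)
        for i, v in zip(idx, new_vals):
            mutant[i] = v
        candidates.append(mutant)
    return min(candidates, key=f)   # keep the best according to f
```

Since the original bacterium is among the candidates, the operator never worsens the objective value.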
Figure 7 illustrates how a loose segment might get chosen. In the example, the length of
the permutation is | x | = 9; the segment length is Iseg = 3; the random segment indexes are
S = 1, 3, 7; and the segment values are V = 1, 8, 7. Every time the loose segment mutation
operation is called, the segment is selected based on values given by a pseudo-random
generator. This random selection process ensures an even distribution of segments across
the bacterium. This way, non-cohesive parts of the bacterium can get optimized locally.
[Figure 7: permutation 4 1 2 8 5 9 3 7 6 with |x| = 9, I_seg = 3, and segment elements V = {1, 8, 7} at indexes S = {1, 3, 7} highlighted.]
Figure 7. Example for a segment chosen in loose segment mutation (Algorithm 4).
[Figure: loose segment mutation variants for |x| = 9, I_seg = 3, S = {1, 3, 7}, V = {1, 8, 7} — the original bacterium 4 1 2 8 5 9 3 7 6 yields a first variation with the segment elements reversed (4 7 2 8 5 9 3 1 6) and further variations with the segment elements shuffled (4 8 2 1 5 9 3 7 6 and 4 7 2 1 5 9 3 8 6).]
[Figure: gene transfer example — a segment of the source bacterium (1 3 2 9 5 6 8 4 2) is transferred into the destination bacterium (1 3 5 8 4 9 6 2 7), producing the infected bacterium (1 3 9 5 6 8 4 2 7).]
In the selection step, the upper confidence bound (UCB) [67] Formula (10) is used to
select the child node:

UCB_j = \frac{X_j}{n_j} + C \cdot \sqrt{\frac{\ln N_j}{n_j}}, (10)

where X_j is the number of games won through the jth node, C is the exploration parameter
that adjusts the selection strategy, N_j is the number of games played through the jth parent
node, and n_j is the number of games played through the jth child node. The execution of
the algorithm keeps asymmetrically expanding the tree; this requires balancing between
exploitation and exploration steps. The constant C can be considered a weight that balances
those strategies. Exploration allows the search space to be expanded randomly, and
exploitation reuses the best option found.
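Formula (10) can be computed directly; a minimal sketch with assumed win/visit counts, using the convention that unvisited children are selected first:

```python
import math

# UCB value from Equation (10): mean reward of child j plus an exploration
# term weighted by C. X_j, N_j, n_j follow the notation above; the
# (wins, visits) pairs below are assumed example values.

def ucb(X_j, N_j, n_j, C=math.sqrt(2)):
    if n_j == 0:
        return math.inf            # unvisited children are selected first
    return X_j / n_j + C * math.sqrt(math.log(N_j) / n_j)

children = [(7, 10), (3, 4)]       # (wins, visits) per child
N = sum(n for _, n in children)    # games played through the parent
best = max(range(len(children)),
           key=lambda j: ucb(children[j][0], N, children[j][1]))
```

Here the less-visited child wins the selection despite a similar win rate, illustrating the exploration term at work.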
The MCTS algorithm, initially developed for two-person board games, is applied here to
solve scheduling problems. Each node represents a permutation, i.e., a possible scheduling
sequence. The child nodes are created from the parent nodes by modifying operators.
Each tree node is a possible solution, and the neighborhood relation exists between them.
Nodes having the same solution may appear multiple times in the tree—similar to board
game problems.
7. Experimental Results
In this article, the performance of the proposed algorithm was evaluated on the Taillard
benchmark problems. The upper bound solution was chosen as the basis of comparison.
The quality of the results was measured as the relative signed distance from the upper
bound found in the original dataset. This distance was converted to a percentage using the
following formula:
\omega = \frac{C_{BS} - C_{UB}}{C_{UB}} \cdot 100, (11)
where CBS represents the best Cmax value found by the algorithm, CUB refers to the upper
bound determined in the benchmark dataset, and ω indicates the goodness of the method.
The lower the value is, the closer the best makespan is to the theoretical upper bound; in
other words, it is the relative distance from the upper bound as a percentage. The results of
the benchmark dataset, which consisted of 120 problems, are presented in Tables A1–A3,
respectively, where the n × m columns denote the number of jobs and resources (machines),
respectively. The parameters used to obtain the results are listed in Table 1, and their
reasoning is detailed in Section 8.
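Formula (11) is a one-liner; the makespan values below are illustrative assumptions, not results from the Taillard set:

```python
# Relative signed deviation from the benchmark upper bound, Equation (11).
# A negative omega means the algorithm beat the published upper bound.

def omega(c_best, c_ub):
    """Signed distance of the best makespan from the upper bound, in percent."""
    return (c_best - c_ub) / c_ub * 100

print(omega(1278, 1270))   # positive: above the upper bound
print(omega(1265, 1270))   # negative: a new best solution
```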
Parameter                            Value
Maximum iteration count (SA)         100
Initial temperature (SA)             300
α (SA)                               0.1
Iteration count (MCTS)               10,000
N_ind                                8
Maximum iteration count (DBMEA)      2
N_clones                             8
N_inf                                40
I_seg                                4
I_trans                              5
N_mort                               0.05
This method allows for an easy comparison between metaheuristic algorithms. The
results show that the hybrid discrete bacterial memetic evolutionary algorithm introduced
by this paper provides outstanding scheduling performance regarding flow-shop problems.
The maximum iteration count of the simulated annealing algorithm significantly impacts the results and runtime and their standard deviation (see Figure 11). The omega
and standard deviation decrease dramatically until our chosen iteration count of 100 (see
Figures 11b and 12d), after which the improvements are negligible while the runtime keeps
increasing linearly (See Figure 11a).
The starting temperature for the simulated annealing algorithm has little impact on
the results and runtime (Figure 12); however, there is a slight dip in ω at the 300 mark (See
Figure 12b), which is our chosen final value.
The α parameter of the simulated annealing algorithm is the temperature decrease in
each iteration. This method creates an exponential multiplicative cooling strategy, which
was proposed by Kirkpatrick et al. [68], and the effects of which are considered in [69].
However, the parameter did not significantly affect the overall performance of the
algorithm (Figure 13).
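The cooling schedule can be sketched as follows; the update T ← T · (1 − α) is our assumption about how the paper's α enters the schedule, since the exact update rule is not spelled out here:

```python
# Exponential multiplicative cooling sketch. The update T <- T * (1 - alpha)
# is an assumption about how the alpha parameter is applied; Kirkpatrick's
# original formulation uses T_k = T_0 * a^k with a constant a in (0, 1).

def temperatures(t0, alpha, iterations):
    """Return the temperature at each iteration of the cooling schedule."""
    temps = []
    t = t0
    for _ in range(iterations):
        temps.append(t)
        t *= (1.0 - alpha)   # multiplicative decrease each iteration
    return temps

schedule = temperatures(300.0, 0.1, 5)   # t0 = 300, alpha = 0.1 as in Table 1
```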
The iteration count of the MCTS algorithm is fixed; it does not take the number of
iterations elapsed since the last improvement into account. This parameter improves the
search results drastically (See Figure 14b), while the runtime increases only linearly (See
Figure 14a). A high value of 10,000 was maintained throughout our testing since this
proved to be a good middle-ground between scheduling performance (ω ) and runtimes.
The benefit of a high iteration count is the decrease in the standard deviation of ω (See
Figure 14d), meaning that the scheduling performance is increasingly less dependent on
the processing times in the benchmark set. One downside of a high iteration count is the
increase in the standard deviation of runtimes (See Figure 14c) since the SA algorithm is
called every iteration, which has no fixed iteration count.
[Figure 11: ω and runtime (max/mean/min/std. dev.) vs. SA iteration count.]
[Figure 12: ω and runtime (max/mean/min/std. dev.) vs. SA initial temperature.]
[Figure 13: ω and runtime (max/mean/min/std. dev.) vs. SA α.]
[Figure 14: ω and runtime (max/mean/min/std. dev.) vs. MCTS iteration count.]
[Figure 15: ω and runtime (max/mean/min/std. dev.) vs. number of individuals.]
The maximum iteration count (generation count) of the DBMEA algorithm is the
maximum number of iterations since the last improvement in makespan. This parameter
impacts the scheduling performance and runtime significantly. ω and its standard deviation
keep decreasing as the number of iterations increases (see Figure 16b,d), while the runtime
is coupled linearly to the number of generations (see Figure 16a). Similarly to the population
size (Nind ), the number of iterations was set to a lower value to keep MCTS and SA iteration
counts high while keeping runtimes manageable.
[Figure 16: ω and runtime (max/mean/min/std. dev.) vs. number of generations.]
The number of clones (Nclones ) parameter is the number of clones generated in each
bacterial mutation operation. An overly large number of clones creates many bacteria
with the same permutation; therefore, going over a certain amount yields negligible or no
improvement in overall scheduling performance (Figure 17b) while increasing runtimes
(Figure 17a).
The number of infections (Nin f ) is the number of new bacteria created during the gene
transfer operation. This may increase diversity in the population with negligible runtime
differences (See Figure 18); therefore, a higher value of forty was chosen to increase the
chances of escaping local optima when scheduling larger problems.
The Iseg parameter is the length of the segment in the bacterial mutation operation.
This value determines the size of the segment to be mutated to generate new mutant
solutions. The larger the segment, the more varied the mutants will be. This parameter has
a lesser impact on overall performance and runtime (See Figure 19). However, a value of
four yielded the lowest standard deviation in runtime (See Figure 19c); therefore, it is the
final parameter value chosen for our testing.
Itrans is the length of the transferred segment in the gene transfer operation. A longer
segment may increase the variance in generated solutions. However, in our testing, it had
little impact on the quality of solutions (See Figure 20b,d) and runtimes (See Figure 20a,c).
A value of four was chosen since it is a good middle-ground and it illustrates the workings
of the operator well.
[Figure 17: ω and runtime (max/mean/min/std. dev.) vs. number of clones.]
The mortality rate N_mort is the fraction of the population to be terminated at the
end of each iteration (generation) and replaced with random solutions to increase diversity
and escape local optima. A mortality rate of one creates a random population for each
generation; a low mortality rate keeps more from the previous generation. Therefore, the
mortality rate is an extension of elitism, where a portion of the population is kept instead
of just one individual. It is evident that a high mortality rate (above 0.1) decreases the
scheduling efficiency of the algorithm since the beneficial traits of previous generations
are not carried forward (See Figure 21b,d). Too low of a value may increase the chance of
the search being stuck in local optima. The mortality rate must be kept as high as possible
without negating the traits of previous iterations. The parameter has almost no impact on
the runtime of the algorithm (see Figure 21a,c). Considering the above, a mortality rate of
0.1 (10%) was chosen.
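The mortality step described above can be sketched as follows; the helper name and signature are illustrative, not taken from the paper's implementation:

```python
import random

# Mortality-rate step sketch: the worst fraction N_mort of the ranked
# population is replaced with fresh random permutations, extending elitism
# to a surviving portion of the population rather than a single individual.

def apply_mortality(population, n_mort, f, n_jobs):
    population.sort(key=f)                       # best first (lower f is better)
    survivors = max(1, int(len(population) * (1 - n_mort)))
    fresh = [random.sample(range(n_jobs), n_jobs)
             for _ in range(len(population) - survivors)]
    return population[:survivors] + fresh
```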
After our parameter analysis, a set of parameters were determined for the final testing
on the entirety of the Taillard benchmark set. Table 1 contains all of the parameters chosen.
These running parameters were used to obtain the results presented in Tables A1–A4.
[Figure 18: ω and runtime (max/mean/min/std. dev.) vs. number of infections.]
[Figure 19: ω and runtime (max/mean/min/std. dev.) vs. bacterial mutation segment length.]
[Figure 20: ω and runtime (max/mean/min/std. dev.) vs. gene transfer segment length.]
[Figure 21: ω and runtime (max/mean/min/std. dev.) vs. mortality rate.]
9. Conclusions
In this paper, we have described our proposed novel hybrid (extended “memetic”
style) algorithm with a top-down approach and also provided details on the implementation.
tation. The results presented in this study demonstrate the efficiency of this new hybrid
bacterial memetic algorithm in solving the flow-shop scheduling problem. As shown
in Tables A1 and A2, the algorithm has achieved a quality of solution comparable to the
best-known results, with a difference of less than 1%. This comparison indicates that the
algorithm can efficiently explore the solution space and produce high-quality solutions for
many problems. Furthermore, our analysis of the algorithm’s performance on the Taillard
benchmark set revealed that the hybrid bacterial memetic algorithm outperformed the
best-known solutions for nine benchmark cases. The algorithm’s ability to solve complex
scheduling problems has been made obvious.
To provide a comprehensive view of the algorithm’s performance, we have included
Table A4, which summarizes the new best solutions obtained by the algorithm, along with
their corresponding makespans. These results demonstrate the algorithm’s ability to find
high-quality solutions that surpass the performance of all published methods.
The advantage of our proposed algorithm is its black-box approach to optimization,
which implies that no assumption was made on the behaviour of the objective function. This
approach allows for a problem-agnostic search as long as the problem requires a permutation
as its solution. Therefore, we claim that our proposed algorithm can be used for the
optimization of problems even more closely related to real-world tasks, like hybrid
flow-shop scheduling with unrelated machines and machine eligibility [70] or extended
job-shop scheduling [71]. One limitation of the algorithm is the number of times the
objective function is called, increasing computation times drastically when more complex
models are to be computed. Due to space and time constraints, these extended problems
and respective solutions under increased complexity, like parallelization, have not been
considered here. Overall, the hybrid bacterial memetic algorithm has shown great promise
in solving the flow-shop scheduling problem, and it can potentially contribute to developing
more efficient scheduling algorithms in the future. In our future work, we would like to
investigate other use cases and refinements in performance, both in terms of scheduling
ability and of compute time.
Author Contributions: Conceptualization, L.T.K. and K.N.; methodology, L.F. and K.N.; software,
L.F. and K.N.; validation, L.F., K.N., B.T.-S. and O.H.; formal analysis, L.F. and K.N.; investigation, L.F.
and K.N.; resources, L.F. and K.N.; data curation, L.F. and K.N.; writing—original draft preparation,
L.F., K.N. and O.H.; writing—review and editing, L.F., K.N. and O.H.; visualization, L.F.; supervision,
K.N.; project administration, O.H.; and funding acquisition, L.T.K. and K.N. All authors have read
and agreed to the published version of the manuscript.
Funding: The research described in this article was carried out as part of the 2020-1.1.2-PIACI-KFI-2020-00147
“OmegaSys—Lifetime planning and failure prediction decision support system for facility management
services” project implemented with the support provided from the National Research,
Development, and Innovation Fund of Hungary. L. T. Kóczy, as a Professor Emeritus at Budapest
University of Technology and Economics, was supported by the Hungarian National Office for
Research, Development, and Innovation, grant nr. K124055.
Data Availability Statement: The data presented in this study are openly available at http://
mistic.heig-vd.ch/taillard/problemes.dir/ordonnancement.dir/ordonnancement.html (accessed on
5 July 2023).
Conflicts of Interest: The authors declare no conflicts of interest.
Appendix A
Table A1. Comparison of performance of 20 algorithms on 20- and 50-job problems (average ω).
Table A2. Comparison of performance of 20 algorithms on 100-, 200-, and 500-job problems
(average ω).
Algorithm Average ω
HAPSO [36] 1.63
PSOENT [39] 1.65
NEHT [22] 3.35
SGA [44] 1.93
MGGA [44] 1.82
ACGA [44] 1.84
SGGA [44] 1.85
DDE [49] 1.37
CPSO [33] 3.40
GMA [39,72] 2.35
PSO2 [34] 2.40
HPSO [43] 0.48
GA-VNS [52] 0.40
IWO [54] 24.66
HGSA [55] 4.33
HGA [56] 42.61
HMM-PFA [57] 44.16
SA [68] 1.22
DJaya [59] 1.13
DJRL3M [58] 1.02
MCTS + SA [65] 0.70
DBMEA + SA [3] 1.50
HDBMEA 0.28
References
1. Onwubolu, G.C.; Babu, B. New Optimization Techniques in Engineering; Springer: Berlin/Heidelberg, Germany, 2013; Volume 141.
2. Tüű-Szabó, B.; Földesi, P.; Kóczy, L.T. An efficient evolutionary metaheuristic for the traveling repairman (minimum latency)
problem. Int. J. Comput. Intell. Syst. 2020, 13, 781–793.
3. Agárdi, A.; Nehéz, K.; Hornyák, O.; Kóczy, L.T. A Hybrid Discrete Bacterial Memetic Algorithm with Simulated Annealing for
Optimization of the flow-shop scheduling problem. Symmetry 2021, 13, 1131.
4. Balázs, K.; Botzheim, J.; Kóczy, L.T. Comparison of various evolutionary and memetic algorithms. In Integrated Uncertainty
Management and Applications; Springer: Berlin/Heidelberg, Germany, 2010; pp. 431–442.
5. Nawa, N.E.; Furuhashi, T. Bacterial evolutionary algorithm for fuzzy system design. In SMC’98 Conference Proceedings, Proceedings
of the 1998 IEEE International Conference on Systems, Man, and Cybernetics (Cat. No. 98CH36218), San Diego, CA, USA, 14 October
1998; IEEE: New York, NY, USA, 1998; Volume 3, pp. 2424–2429.
6. Moscato, P. On evolution, search, optimization, genetic algorithms and martial arts: Towards memetic algorithms. In Technical
Report, Caltech Concurrent Computation Program Report 826; California Institute of Technology: Pasadena, CA, USA, 1989.
7. Wolpert, D.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
8. Kóczy, L.; Zorat, A. Fuzzy systems and approximation. Fuzzy Sets Syst. 1997, 85, 203–222.
9. Kóczy, L.T. Symmetry or Asymmetry? Complex Problems and Solutions by Computational Intelligence and Soft Computing.
Symmetry 2022, 14, 1839.
10. Nawa, N.E.; Furuhashi, T. Fuzzy system parameters discovery by bacterial evolutionary algorithm. IEEE Trans. Fuzzy Syst. 1999,
7, 608–616.
11. Botzheim, J.; Cabrita, C.; Kóczy, L.T.; Ruano, A. Fuzzy rule extraction by bacterial memetic algorithms. Int. J. Intell. Syst. 2009,
24, 312–339.
12. Das, S.; Chowdhury, A.; Abraham, A. A bacterial evolutionary algorithm for automatic data clustering. In Proceedings of the
2009 IEEE Congress on Evolutionary Computation, Trondheim, Norway, 18–21 May 2009; pp. 2403–2410.
13. Hoos, H.H.; Stützle, T. Stochastic Local Search: Foundations and Applications; Elsevier: Amsterdam, The Netherlands, 2004.
14. Moré, J.J. The Levenberg-Marquardt algorithm: Implementation and theory. In Numerical Analysis; Springer: Berlin/Heidelberg,
Germany, 1978; pp. 105–116.
15. Gong, G.; Deng, Q.; Chiong, R.; Gong, X.; Huang, H. An effective memetic algorithm for multi-objective job-shop scheduling.
Knowl.-Based Syst. 2019, 182, 104840.
16. Yamada, T.; Nakano, R. A fusion of crossover and local search. In Proceedings of the IEEE International Conference on Industrial
Technology (ICIT’96), Shanghai, China, 2–6 December 1996; pp. 426–430.
17. Muyldermans, L.; Beullens, P.; Cattrysse, D.; Van Oudheusden, D. Exploring variants of 2-opt and 3-opt for the general routing
problem. Oper. Res. 2005, 53, 982–995.
18. Balazs, K.; Koczy, L.T. Hierarchical-interpolative fuzzy system construction by genetic and bacterial memetic programming
approaches. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2012, 20, 105–131.
19. Pinedo, M. Scheduling: Theory, Algorithms, and Systems, 5th ed.; Springer: Cham, Switzerland, 2016.
20. Johnson, S.M. Optimal two-and three-stage production schedules with setup times included. Nav. Res. Logist. Q. 1954, 1, 61–68.
21. Garey, M.R.; Johnson, D.S.; Sethi, R. The complexity of flowshop and jobshop scheduling. Math. Oper. Res. 1976, 1, 117–129.
22. Nawaz, M.; Enscore, E.E., Jr.; Ham, I. A heuristic algorithm for the m-machine, n-job flow-shop sequencing problem. Omega 1983,
11, 91–95.
23. Taillard, E. Benchmarks for basic scheduling problems. Eur. J. Oper. Res. 1993, 64, 278–285.
24. Van Laarhoven, P.J.; Aarts, E.H. Simulated annealing. In Simulated Annealing: Theory and Applications; Springer: Berlin/Heidelberg,
Germany, 1987; pp. 7–15.
25. Dai, M.; Tang, D.; Giret, A.; Salido, M.A.; Li, W.D. Energy-efficient scheduling for a flexible flow shop using an improved
genetic-simulated annealing algorithm. Robot. Comput.-Integr. Manuf. 2013, 29, 418–429.
26. Jouhari, H.; Lei, D.; Al-qaness, M.A.A.; Abd Elaziz, M.; Ewees, A.A.; Farouk, O. Sine-Cosine Algorithm to Enhance Simulated
Annealing for Unrelated Parallel Machine Scheduling with Setup Times. Mathematics 2019, 7, 1120. https://doi.org/10.3390/math7111120.
27. Alnowibet, K.A.; Mahdi, S.; El-Alem, M.; Abdelawwad, M.; Mohamed, A.W. Guided Hybrid Modified Simulated Annealing
Algorithm for Solving Constrained Global Optimization Problems. Mathematics 2022, 10, 1312. https://doi.org/10.3390/math10081312.
28. Suanpang, P.; Jamjuntr, P.; Jermsittiparsert, K.; Kaewyong, P. Tourism Service Scheduling in Smart City Based on Hybrid Genetic
Algorithm Simulated Annealing Algorithm. Sustainability 2022, 14, 6293. https://doi.org/10.3390/su142316293.
29. Redi, A.A.N.P.; Jewpanya, P.; Kurniawan, A.C.; Persada, S.F.; Nadlifatin, R.; Dewi, O.A.C. A Simulated Annealing Algorithm for
Solving Two-Echelon Vehicle Routing Problem with Locker Facilities. Algorithms 2020, 13, 218. https://doi.org/10.3390/a13090218.
30. Rahimi, A.; Hejazi, S.M.; Zandieh, M.; Mirmozaffari, M. A Novel Hybrid Simulated Annealing for No-Wait Open-Shop Surgical
Case Scheduling Problems. Appl. Syst. Innov. 2023, 6, 15. https://doi.org/10.3390/asi6010015.
31. Bean, J.C. Genetic algorithms and random keys for sequencing and optimization. ORSA J. Comput. 1994, 6, 154–160.
32. Hansen, P.; Mladenović, N. Variable neighborhood search: Principles and applications. Eur. J. Oper. Res. 2001, 130, 449–467.
33. Tasgetiren, M.F.; Liang, Y.C.; Sevkli, M.; Gencyilmaz, G. A particle swarm optimization algorithm for makespan and total
flowtime minimization in the permutation flowshop sequencing problem. Eur. J. Oper. Res. 2007, 177, 1930–1947.
34. Liao, C.J.; Tseng, C.T.; Luarn, P. A discrete version of particle swarm optimization for flowshop scheduling problems. Comput.
Oper. Res. 2007, 34, 3099–3111.
35. Jarboui, B.; Ibrahim, S.; Siarry, P.; Rebai, A. A combinatorial particle swarm optimisation for solving permutation flowshop
problems. Comput. Ind. Eng. 2008, 54, 526–538.
36. Marchetti-Spaccamela, A.; Crama, Y.; Goossens, D.; Leus, R.; Schyns, M.; Spieksma, F. Proceedings of the 12th Workshop on
Models and Algorithms for Planning and Scheduling Problems. 2015. Available online: https://feb.kuleuven.be/mapsp2015/
Proceedings%20MAPSP%202015.pdf (accessed on 25 August 2023).
37. Glover, F.; Laguna, M.; Marti, R. Scatter search and path relinking: Advances and applications. In Handbook of Metaheuristics;
Springer: Berlin/Heidelberg, Germany, 2003; pp. 1–35.
38. Resende, M.G.; Ribeiro, C.C.; Glover, F.; Martí, R. Scatter search and path-relinking: Fundamentals, advances, and applications.
In Handbook of Metaheuristics; Springer: Berlin/Heidelberg, Germany, 2010; pp. 87–107.
39. Marinakis, Y.; Marinaki, M. Particle swarm optimization with expanding neighborhood topology for the permutation flowshop
scheduling problem. Soft Comput. 2013, 17, 1159–1173.
40. Ying, K.C.; Liao, C.J. An ant colony system for permutation flow-shop sequencing. Comput. Oper. Res. 2004, 31, 791–801.
41. Colorni, A.; Dorigo, M.; Maniezzo, V. Distributed optimization by ant colonies. In Proceedings of the First European Conference
on Artificial Life, Paris, France, 11–13 December 1991; Volume 142, pp. 134–142.
42. Colorni, A.; Dorigo, M.; Maniezzo, V. A Genetic Algorithm to Solve the Timetable Problem; Politecnico di Milano: Milan, Italy, 1992.
43. Hayat, I.; Tariq, A.; Shahzad, W.; Masud, M.; Ahmed, S.; Ali, M.U.; Zafar, A. Hybridization of Particle Swarm Optimization
with Variable Neighborhood Search and Simulated Annealing for Improved Handling of the Permutation Flow-Shop Scheduling
Problem. Systems 2023, 11, 221. https://doi.org/10.3390/systems11050221.
44. Chen, S.H.; Chang, P.C.; Cheng, T.; Zhang, Q. A self-guided genetic algorithm for permutation flowshop scheduling problems.
Comput. Oper. Res. 2012, 39, 1450–1457.
45. Baraglia, R.; Hidalgo, J.I.; Perego, R. A hybrid heuristic for the traveling salesman problem. IEEE Trans. Evol. Comput. 2001,
5, 613–622.
46. Harik, G.R.; Lobo, F.G.; Goldberg, D.E. The compact genetic algorithm. IEEE Trans. Evol. Comput. 1999, 3, 287–297.
47. Mühlenbein, H.; Paaß, G. From recombination of genes to the estimation of distributions I. Binary parameters. In International
Conference on Parallel Problem Solving from Nature; Springer: Berlin/Heidelberg, Germany, 1996; pp. 178–187.
48. Pelikan, M.; Goldberg, D.E.; Lobo, F.G. A survey of optimization by building and using probabilistic models. Comput. Optim.
Appl. 2002, 21, 5–20.
49. Storn, R.; Price, K. Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. J. Glob.
Optim. 1997, 11, 341–359.
50. Tasgetiren, M.F.; Pan, Q.K.; Suganthan, P.N.; Liang, Y.C. A discrete differential evolution algorithm for the no-wait flowshop
scheduling problem with total flowtime criterion. In Proceedings of the 2007 IEEE Symposium on Computational Intelligence in
Scheduling, Honolulu, HI, USA, 1–5 April 2007; pp. 251–258.
51. Pan, Q.K.; Tasgetiren, M.F.; Liang, Y.C. A discrete differential evolution algorithm for the permutation flowshop scheduling
problem. Comput. Ind. Eng. 2008, 55, 795–816.
52. Zobolas, G.; Tarantilis, C.D.; Ioannou, G. Minimizing makespan in permutation flow-shop scheduling problems using a hybrid
metaheuristic algorithm. Comput. Oper. Res. 2009, 36, 1249–1267.
53. Mehrabian, A.R.; Lucas, C. A novel numerical optimization algorithm inspired from weed colonization. Ecol. Inform. 2006,
1, 355–366.
54. Zhou, Y.; Chen, H.; Zhou, G. Invasive weed optimization algorithm for optimization no-idle flow shop scheduling problem.
Neurocomputing 2014, 137, 285–292.
55. Wei, H.; Li, S.; Jiang, H.; Hu, J.; Hu, J. Hybrid genetic simulated annealing algorithm for improved flow shop scheduling with
makespan criterion. Appl. Sci. 2018, 8, 2621.
56. Tseng, L.Y.; Lin, Y.T. A hybrid genetic algorithm for no-wait flowshop scheduling problem. Int. J. Prod. Econ. 2010, 128, 144–152.
57. Qu, C.; Fu, Y.; Yi, Z.; Tan, J. Solutions to no-wait flow-shop scheduling problem using the flower pollination algorithm based on
the hormone modulation mechanism. Complexity 2018, 2018, 1973604.
58. Alawad, N.A.; Abed-alguni, B.H. Discrete Jaya with refraction learning and three mutation methods for the permutation
flow-shop scheduling problem. J. Supercomput. 2022, 78, 3517–3538.
59. Gao, K.; Yang, F.; Zhou, M.; Pan, Q.; Suganthan, P.N. Flexible job-shop rescheduling for new job insertion by using discrete Jaya
algorithm. IEEE Trans. Cybern. 2018, 49, 1944–1955.
60. Fathollahi-Fard, A.M.; Woodward, L.; Akhrif, O. Sustainable distributed permutation flow-shop scheduling model based on a
triple bottom line concept. J. Ind. Inf. Integr. 2021, 24, 100233. https://doi.org/10.1016/j.jii.2021.100233.
61. Baroud, M.M.; Eghtesad, A.; Mahdi, M.A.A.; Nouri, M.B.B.; Khordehbinan, M.W.W.; Lee, S. A New Method for Solving the
Flow-Shop Scheduling Problem on Symmetric Networks Using a Hybrid Nature-Inspired Algorithm. Symmetry 2023, 15, 1409.
https://doi.org/10.3390/sym15071409.
62. Liang, Z.; Zhong, P.; Liu, M.; Zhang, C.; Zhang, Z. A computational efficient optimization of flow shop scheduling problems. Sci.
Rep. 2022, 12, 845. https://doi.org/10.1038/s41598-022-04887-8.
63. Alireza, A.; Javid, G.N.; Hamed, N. Flexible flow shop scheduling with forward and reverse flow under uncertainty using the red
deer algorithm. J. Ind. Eng. Manag. Stud. 2023, 10, 16–33.
64. Kóczy, L.T.; Földesi, P.; Tüű-Szabó, B. An effective discrete bacterial memetic evolutionary algorithm for the traveling salesman
problem. Int. J. Intell. Syst. 2017, 32, 862–876.
65. Agárdi, A.; Nehéz, K. Parallel machine scheduling with Monte Carlo Tree Search. Acta Polytech. 2021, 61, 307–312.
66. Wu, T.Y.; Wu, I.C.; Liang, C.C. Multi-objective flexible job shop scheduling problem based on Monte-Carlo tree search. In
Proceedings of the 2013 Conference on Technologies and Applications of Artificial Intelligence, Taipei, Taiwan, 6–8 December
2013; pp. 73–78.
67. Kocsis, L.; Szepesvári, C. Bandit based Monte-Carlo planning. In European Conference on Machine Learning, Proceedings of the ECML
2006, Berlin, Germany, 18–22 September 2006; Proceedings 17; Springer: Berlin/Heidelberg, Germany, 2006; pp. 282–293.
68. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680.
69. Miliczki, J.; Fazekas, L. Comparison of Cooling Strategies in Simulated Annealing Algorithms for Flow-Shop Scheduling. Prod.
Syst. Inf. Eng. 2022, 10, 129–136.
70. Yu, C.; Semeraro, Q.; Matta, A. A genetic algorithm for the hybrid flow shop scheduling with unrelated machines and machine
eligibility. Comput. Oper. Res. 2018, 100, 211–229.
71. Shi, L.; Ólafsson, S. Extended Job Shop Scheduling. In Nested Partitions Method, Theory and Applications; Springer:
Berlin/Heidelberg, Germany, 2009; pp. 207–225.
72. Marinakis, Y.; Marinaki, M. Hybrid Adaptive Particle Swarm Optimization Algorithm for the Permutation Flowshop Scheduling
Problem. In Proceedings of the 13th Workshop on Models and Algorithms for Planning and Scheduling Problems, Abbey,
Germany, 12–16 June 2017; p. 189.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.