Walrus Optimization Algorithm: A New Bio-Inspired Metaheuristic Algorithm
Pavel Trojovský ( pavel.trojovsky@uhk.cz )
University of Hradec Králové
Mohammad Dehghani
University of Hradec Králové
Article
DOI: https://doi.org/10.21203/rs.3.rs-2174098/v1
License: This work is licensed under a Creative Commons Attribution 4.0 International License.
Walrus Optimization Algorithm: A New Bio-Inspired Metaheuristic Algorithm
1 Department of Mathematics, Faculty of Science, University of Hradec Králové, 500 03 Hradec Králové, Czech Republic
*Corresponding Author: Pavel Trojovský1
Rokitanského 62, Hradec Králové, 500 03, Czech Republic
Email address: pavel.trojovsky@uhk.cz
Abstract
In this paper, a new bio-inspired metaheuristic algorithm called the Walrus Optimization Algorithm
(WaOA) is proposed, which mimics the behaviors of walruses in nature. The fundamental inspirations employed in the WaOA
design are the processes of feeding, migrating, and escaping from and fighting predators. The WaOA
implementation steps are mathematically modeled in three phases: exploration,
migration, and exploitation. Sixty-eight standard benchmark functions have been employed to
evaluate WaOA performance in optimization applications. The ability of WaOA to provide
solutions to optimization problems has been compared with the results of ten well-known
metaheuristic algorithms. The simulation results show that WaOA, with its high capability in
balancing exploration and exploitation, offers superior performance and is far more competitive
than the ten compared algorithms. In addition, the use of WaOA in addressing four design
engineering issues demonstrates the apparent effectiveness of WaOA in real-world applications.
Introduction
Any problem that has more than one feasible solution is known as an optimization problem.
According to this definition, the process of determining the best solution among all feasible
solutions is called optimization. From a mathematical point of view, decision variables,
constraints, and objective functions are the three main parts of modeling an optimization
problem. The purpose of optimization is to assign values to the decision variables of the problem so that,
while respecting the constraints, the objective function attains its minimum (minimization problems) or
maximum (maximization problems) value1. Techniques applied to
solving optimization problems fall into deterministic and stochastic approaches.
Deterministic approaches, which are classified into gradient-based and non-gradient-based, have
successful performance in solving linear and convex optimization problems. However, many
real-world and scientific optimization problems are described by nonlinear, complex, non-differentiable
objective functions, discrete search spaces, non-convex landscapes, many decision
variables, multiple equality and inequality constraints, and so on. These features cause
deterministic methods to fail in addressing such optimization challenges2. This failure
of deterministic methods has led to the extensive efforts of researchers and the introduction of
stochastic approaches to address optimization issues. As one of the most widely used stochastic
approaches, metaheuristic algorithms, using stochastic operators, trial and error concepts, and
stochastic search, can provide appropriate solutions to optimization problems without requiring
derivative information from the objective function. The simplicity of ideas, easy implementation,
independence from the problem type, and freedom from derivative computations are among the
advantages that have led to the popularity and pervasiveness of metaheuristic algorithms among
researchers3. The optimization process in metaheuristic algorithms begins with the random
generation of several initial feasible solutions in the problem search space. Then, in an iterative
process, these initial solutions are improved through the algorithm's update steps.
Finally, the best solution found during the execution of the algorithm is
returned as the solution to the problem4. However, none of the metaheuristic algorithms
guarantee that they will be able to provide the optimal global solution. This insufficiency is due
to the nature of random search in these types of optimization approaches. Hence, the solutions
derived from metaheuristic algorithms are known as quasi-optimal solutions5.
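The workflow just described (random initialization, iterative stochastic improvement, and returning the best solution found) can be sketched generically as follows. This is only an illustrative skeleton, not any particular published method; the Gaussian perturbation is a placeholder standing in for an algorithm-specific update rule.

```python
import numpy as np

def metaheuristic_sketch(objective, lb, ub, n_pop=30, n_iter=100, seed=0):
    """Generic population-based metaheuristic skeleton (illustrative only)."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    # Step 1: random generation of initial feasible solutions in the search space
    pop = lb + rng.random((n_pop, dim)) * (ub - lb)
    fit = np.array([objective(x) for x in pop])
    # Step 2: iterative improvement of the initial solutions
    for _ in range(n_iter):
        # Placeholder perturbation; a real algorithm substitutes its update rule here
        cand = np.clip(pop + rng.normal(0.0, 0.1, pop.shape) * (ub - lb), lb, ub)
        cand_fit = np.array([objective(x) for x in cand])
        improved = cand_fit < fit            # greedy acceptance (minimization)
        pop[improved], fit[improved] = cand[improved], cand_fit[improved]
    # Step 3: return the best (quasi-optimal) solution found during execution
    best = np.argmin(fit)
    return pop[best], fit[best]
```

Because only improving moves are accepted, the returned objective value never exceeds the best initial one, illustrating why the final answer is quasi-optimal rather than guaranteed globally optimal.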
Exploration and exploitation capabilities enable metaheuristic algorithms to provide better quasi-
optimal solutions. Exploration refers to the ability to search globally across different areas of the
problem-solving space to discover the most promising region. In contrast, exploitation refers to the
ability to search locally around the available solutions and promising areas in order to converge to the
global optimum. Balancing exploration and exploitation is the key to the success of metaheuristic
algorithms in achieving effective solutions6. Achieving better quasi-optimal solutions has been
the main challenge and reason for researchers' development of various metaheuristic algorithms.
The main research question is whether, despite the numerous metaheuristic algorithms introduced so
far, there is still a need to develop new algorithms. The No Free Lunch (NFL) theorem7 answers
this question: the optimal performance of an algorithm in solving a set of optimization
problems gives no guarantee of similar performance of that algorithm on other
optimization problems. The NFL theorem concept rejects the hypothesis that a particular
metaheuristic algorithm is the best optimizer for all optimization applications over all different
algorithms. Instead, the NFL theorem encourages researchers to continue to design newer
metaheuristic algorithms to solve optimization problems more effectively. This theorem has also
motivated the authors of this paper to develop a new metaheuristic algorithm to address
optimization challenges.
This paper’s novelty and contribution are in designing a new metaheuristic algorithm called the
Walrus Optimization Algorithm (WaOA), which is based on the simulation of walrus behaviors
in nature. The WaOA’s main inspiration is the natural behaviors of walruses in feeding, when
fleeing and fighting predators, and migrating. WaOA is mathematically modeled in three phases:
exploration, exploitation, and migration. The efficiency of WaOA in handling optimization
problems is tested on sixty-eight standard objective functions of various types of unimodal,
multimodal, CEC 2015 test suite, and CEC 2017 test suite. In addition, the success of WaOA in
real-world applications is challenged in addressing four engineering design issues.
The rest of the paper is organized as follows: the literature review is presented in the "Literature Review"
section. The proposed WaOA approach is introduced and modeled in the "Walrus Optimization
Algorithm" section. Simulation studies are presented in the "Simulation Studies and Results"
section. The efficiency of WaOA in solving engineering design problems is evaluated in the
"WaOA for Real-World Applications" section. Conclusions and future research directions are
included in the "Conclusions and Future Works" section.
Literature Review
Metaheuristic algorithms are based on the inspiration and simulation of various natural
phenomena, animal strategies and behaviors, concepts of biological sciences, genetics, physics
sciences, human activities, rules of games, and any evolution-based process. Accordingly, from
the point of view of the main inspiration used in the design, metaheuristic algorithms fall into
five groups: evolutionary-based, swarm-based, physics-based, human-based, and game-based.
Evolutionary-based metaheuristic algorithms have been developed using the concepts of biology,
natural selection theory, and random operators such as selection, crossover, and mutation.
Genetic Algorithm (GA) is one of the most famous metaheuristic algorithms, which is inspired
by the process of reproduction, Darwin's theory of evolution, natural selection, and biological
concepts8. Differential Evolution (DE) is another evolutionary computation technique that, in addition to
using the concepts of biology, random operators, and natural selection, uses a differential
operator to generate new solutions9.
Swarm-based metaheuristic algorithms have been developed based on modeling natural
phenomena, swarming phenomena, and behaviors of animals, birds, insects, and other living
things. Particle Swarm Optimization (PSO) is one of the first introduced metaheuristics methods
and was widely used in optimization fields. The main inspiration in designing PSO is the search
behaviors of birds and fish to discover food sources10. Ant Colony Optimization (ACO) is a
swarm-based method inspired by the ability and strategy of an ant colony to identify the shortest
path between the colony to food sources11. Grey Wolf Optimization (GWO) is a metaheuristic
algorithm inspired by grey wolves' hierarchical structure and social behavior while hunting12.
Marine Predator Algorithm (MPA) has been developed inspired by the ocean and sea predator
strategies and their Levy flight movements to trap prey13. The strategy of the tunicates and their
search mechanism in the process of finding food sources and foraging have been the main
inspirations in the design of the Tunicate Swarm Algorithm (TSA)14. Some other swarm-based
methods are White Shark Optimizer (WSO)15, Reptile Search Algorithm (RSA)16, Raccoon
Optimization Algorithm (ROA)17, African Vultures Optimization Algorithm (AVOA)18, and
Pelican Optimization Algorithm (POA)19.
Physics-based metaheuristic algorithms have been inspired by the theories, concepts, laws,
forces, and phenomena of physics. Simulated Annealing (SA) is one of the most famous physics-based
methods, the main inspiration of which is the process of annealing metals. During this physical
process, a solid is placed in a heat bath, and the temperature is continuously raised until the solid
melts. The solid particles are physically separated or randomly placed. From such a high energy
level, the thermal bath cools slowly as the temperature decreases so that the particles can align
themselves in a regular crystal lattice structure20. Gravitational Search Algorithm (GSA) is a
physics-based computational method inspired by the simulation of Newton’s law of universal
gravitation and Newton's laws of motion among masses housed in a system21. Applying the three
concepts of a black hole, white hole, and wormhole in cosmology science has been the
inspiration for the design of the Multi-Verse Optimizer (MVO)22. Some other physics-based
methods are: Water Cycle Algorithm (WCA)23, Spring Search Algorithm (SSA)24, Atom Search
Optimization (ASO)25, Momentum Search Algorithm (MSA)26, and Nuclear Reaction
Optimization (NRO)27.
Human-based metaheuristic algorithms have been developed inspired by human activities, social
relationships, and interactions. Teaching Learning Based Optimization (TLBO) is the most
widely used human-based metaheuristic algorithm in which the interactions between teacher and
students, as well as students with each other in the educational space, are its main source of
inspiration28. The efforts of two sections of society, including the poor and the rich, to improve
their financial situation have been the main idea in the design of Poor and Rich Optimization
(PRO)29. Some other human-based methods are Archery Algorithm (AA)30, Brain Storm
Optimization (BSO)31, Chef Based Optimization Algorithm (CBOA)32, War Strategy
Optimization (WSO)33, and Teamwork Optimization Algorithm (TOA)34.
Game-based metaheuristic algorithms have been introduced based on simulating the rules
governing various individual and group games and imitating the behaviors of players, referees,
coaches, and other effective interactions. For example, the competition of players in the tug-of-war game
under its rules has been the main idea used in designing the Tug-of-War
Optimization (TWO) algorithm35. The Premier Volleyball League (PVL) algorithm is introduced
based on mathematical modeling of player interactions, competitions, and coaching instructions
during the game36. The Puzzle Optimization Algorithm (POA) is another game-based metaheuristic
algorithm that has been produced based on players trying to solve puzzles and getting help from
each other to arrange puzzle pieces better37. Some other game-based methods are Orientation
Search Algorithm (OSA)38, Ring Toss Game-Based Optimization (RTGBO)39, Football Game
Based Optimization (FGBO)40, and Dice Game Optimization (DGO)41.
To the best of our knowledge from the literature review, no metaheuristic algorithm has yet
been developed based on simulating the behaviors and strategies of walruses. However,
intelligent walrus behaviors such as food search, migration, escape, and fighting predators
lend themselves naturally to the design of an optimizer. To address this research gap, the next
section develops a new metaheuristic algorithm based on the mathematical modeling of these
natural walrus behaviors.
Walrus Optimization Algorithm
In this section, the fundamental inspiration and theory of the proposed Walrus
Optimization Algorithm (WaOA) are stated, and then its various steps are modeled mathematically.
Inspiration of WaOA
The walrus is a large flippered marine mammal with a discontinuous distribution in the Arctic Ocean
and subarctic waters of the Northern Hemisphere around the North Pole42. Adult walruses are
easily identifiable by their prominent whiskers and tusks. Walruses are social animals who spend
most of their time on the sea ice, seeking benthic bivalve mollusks to eat. The most prominent
feature of the walrus is its long tusks. These elongated canines, present in both
males and females, may weigh up to 5.4 kilograms and measure up to 1 meter in
length. Males' tusks are slightly thicker and longer and are used for dominance, fighting, and
display. The most muscular male with the longest tusks dominates the other group members and
leads them43. An image of a walrus is presented in Figure 1. As the weather warms and the ice
melts in late summer, walruses prefer to migrate to outcrops or rocky beaches. These migrations
are very dramatic and involve massive aggregations of walruses44. The walrus has just two
natural predators due to its large size and tusks: the polar bear and the killer whale (orca).
Observations show that the battle between a walrus and a polar bear is very long and exhausting,
and usually, polar bears withdraw from the fight after injuring the walrus. However, walruses
harm polar bears with their tusks during this battle. In the fight against walruses, killer whales
can hunt them successfully, with minimal and even no injuries45.
The social life and natural behaviors of walruses represent an intelligent process. Of these
intelligent behaviors, three are the most obvious:
(i) guiding individuals to feed under the guidance of the member with the longest tusks;
(ii) migration of walruses to rocky beaches;
(iii) fighting or escaping from predators.
Mathematical modeling of these behaviors is the primary inspiration for developing the proposed
WaOA approach.
Phase 2: Migration
One of the natural behaviors of walruses is their migration to outcrops or rocky beaches as the
air warms in late summer. This migration process is employed in WaOA to guide the walruses
toward suitable areas of the search space. The mechanism is mathematically modeled using
Eqs. (5) and (6). The model assumes that each walrus migrates toward the position of another,
randomly selected walrus in a different area of the search space. A proposed new position is
first generated based on Eq. (5); then, according to Eq. (6), if this new position improves the
value of the objective function, it replaces the previous position of the walrus.
$$x_{i,j}^{P_2}=\begin{cases}x_{i,j}+rand_{i,j}\cdot\left(x_{k,j}-I_{i,j}\cdot x_{i,j}\right), & F_k<F_i,\\ x_{i,j}+rand_{i,j}\cdot\left(x_{i,j}-x_{k,j}\right), & \text{else},\end{cases}\tag{5}$$

$$X_i=\begin{cases}X_i^{P_2}, & F_i^{P_2}<F_i,\\ X_i, & \text{else},\end{cases}\tag{6}$$

where $X_i^{P_2}$ is the new position generated for the $i$th walrus in the second phase, $x_{i,j}^{P_2}$ is its $j$th dimension, $F_i^{P_2}$ is its objective function value, $X_k$ (with $k\in\{1,2,\dots,N\}$ and $k\neq i$) is the position of the randomly selected walrus toward which the $i$th walrus migrates, $x_{k,j}$ is its $j$th dimension, and $F_k$ is its
objective function value.
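As a hedged NumPy sketch (minimization assumed; the factor I is taken as a random integer in {1, 2}, following the convention of the exploration phase defined with Eq. (3) outside this excerpt), the migration update of Eqs. (5) and (6) could read:

```python
import numpy as np

def migration_phase(pop, fit, objective, rng):
    """Sketch of the WaOA migration step, Eqs. (5)-(6): move toward a randomly
    chosen walrus if it is fitter, otherwise away from it; keep the new
    position only if it improves the objective (greedy replacement)."""
    n, dim = pop.shape
    for i in range(n):
        k = rng.choice([j for j in range(n) if j != i])  # random migration target
        r = rng.random(dim)                              # rand_{i,j} in [0, 1)
        I = rng.integers(1, 3, dim)                      # assumed I in {1, 2}
        if fit[k] < fit[i]:
            new = pop[i] + r * (pop[k] - I * pop[i])     # Eq. (5), first case
        else:
            new = pop[i] + r * (pop[i] - pop[k])         # Eq. (5), second case
        new_fit = objective(new)
        if new_fit < fit[i]:                             # Eq. (6)
            pop[i], fit[i] = new, new_fit
    return pop, fit
```

Since Eq. (6) is a greedy replacement, the population's best objective value can never worsen during this phase.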
Phase 3: Escaping and fighting against predators (exploitation)
Walruses are always exposed to attacks by the polar bear and the killer whale. The strategy of
escaping from and fighting these predators causes the walruses to shift position within the
vicinity of their current location. Simulating this natural behavior of walruses
improves the WaOA exploitation power for local search in the problem-solving space around
candidate solutions. To model this phenomenon in WaOA, a neighborhood is assumed
around each walrus; a new position is first generated randomly in this neighborhood using
Eqs. (7) and (8). Then, if the value of the objective function is improved, the new position
replaces the previous one according to Eq. (9).
$$x_{i,j}^{P_3}=x_{i,j}+\left(lb_{local,j}^{t}+\left(ub_{local,j}^{t}-rand\cdot lb_{local,j}^{t}\right)\right),\tag{7}$$

$$\text{Local bounds: }\begin{cases}lb_{local,j}^{t}=\dfrac{lb_j}{t},\\ ub_{local,j}^{t}=\dfrac{ub_j}{t},\end{cases}\tag{8}$$

$$X_i=\begin{cases}X_i^{P_3}, & F_i^{P_3}<F_i,\\ X_i, & \text{else},\end{cases}\tag{9}$$

where $X_i^{P_3}$ is the new position generated for the $i$th walrus in the third phase, $x_{i,j}^{P_3}$ is its $j$th dimension, $F_i^{P_3}$ is its objective function value, $t$ is the iteration counter, $lb_j$ and $ub_j$ are the lower and upper bounds of the $j$th variable, respectively, and $lb_{local,j}^{t}$ and $ub_{local,j}^{t}$ are the local lower and local upper bounds allowable for the $j$th variable, respectively, used to simulate local search in the
neighborhood of the candidate solutions.
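A minimal sketch of Eqs. (7)-(9) follows (minimization assumed; the neighborhood offset is computed exactly as Eq. (7) is written, and the clipping step, which is not part of the equations, merely keeps candidates inside the original bounds):

```python
import numpy as np

def predator_phase(pop, fit, objective, lb, ub, t, rng):
    """Sketch of the WaOA escape/fight step, Eqs. (7)-(9): sample a point in a
    neighborhood whose radius shrinks with the iteration counter t and accept
    it greedily."""
    n, dim = pop.shape
    lb_local, ub_local = lb / t, ub / t      # Eq. (8): shrinking local bounds
    for i in range(n):
        # Eq. (7) as printed: offset = lb_local + (ub_local - rand * lb_local)
        new = pop[i] + (lb_local + (ub_local - rng.random(dim) * lb_local))
        new = np.clip(new, lb, ub)           # keep the candidate feasible
        new_fit = objective(new)
        if new_fit < fit[i]:                 # Eq. (9): greedy replacement
            pop[i], fit[i] = new, new_fit
    return pop, fit
```

Dividing the bounds by t makes the sampled neighborhood shrink as iterations proceed, which is what shifts the search from coarse moves early on to fine local refinement near convergence.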
Repetition process, pseudocode, and flowchart of WaOA
After updating the walruses' positions through the first, second, and third
phases, one WaOA iteration is completed, and new values are computed for the positions of the
walruses and the objective functions. Updating and improving the candidate solutions is repeated
according to the WaOA steps in Eqs. (3) to (9) until the final iteration. Upon completion
of the algorithm execution, WaOA introduces the best candidate solution found during execution
as the solution to the given problem. The WaOA implementation flowchart is presented in Figure
2, and its pseudocode is specified in Algorithm 1.
[Flowchart omitted: initialize the walrus population; for t = 1..T, update the strongest walrus, then for each walrus i = 1..N apply Phase 1 (Eq. (3)), Phase 2 (Eqs. (5)-(6) with a randomly selected destination X_k), and Phase 3 (Eqs. (7)-(9)); save the best candidate solution so far; finally output the best quasi-optimal solution.]
Figure 2. Flowchart of WaOA.
Algorithm 1. Pseudocode of WaOA
Start WaOA.
1. Input all optimization problem information.
2. Set the number of walruses (N) and the total number of iterations (T).
3. Initialize the walruses' locations.
4. For t = 1:T
5.   Update the strongest walrus based on the objective function value criterion.
6.   For i = 1:N
7.     Phase 1: Feeding strategy (exploration).
8.     Calculate the new location of the i-th walrus using Eq. (3).
9.     Update the i-th walrus location using Eq. (4).
10.    Phase 2: Migration.
11.    Choose a migration destination for the i-th walrus.
12.    Calculate the new location of the i-th walrus using Eq. (5).
13.    Update the i-th walrus location using Eq. (6).
14.    Phase 3: Escaping and fighting against predators (exploitation).
15.    Calculate a new position in the neighborhood of the i-th walrus using Eqs. (7) and (8).
16.    Update the i-th walrus location using Eq. (9).
17.  end
18.  Save the best candidate solution so far.
19. end
20. Output the best quasi-optimal solution obtained by WaOA for the given problem.
End WaOA.
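Algorithm 1 can be sketched end-to-end as follows. One caveat: the Phase 1 feeding update (Eqs. (3) and (4)) is not reproduced in this excerpt, so it is approximated here as a guided move toward the strongest walrus; Phases 2 and 3 follow Eqs. (5)-(9), with greedy acceptance and minimization throughout.

```python
import numpy as np

def waoa(objective, lb, ub, n_pop=30, n_iter=200, seed=0):
    """Hedged end-to-end sketch of the WaOA loop of Algorithm 1."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    pop = lb + rng.random((n_pop, dim)) * (ub - lb)   # step 3: initialization
    fit = np.array([objective(x) for x in pop])

    def accept(i, new):                               # greedy replacement rule
        new = np.clip(new, lb, ub)
        nf = objective(new)
        if nf < fit[i]:
            pop[i], fit[i] = new, nf

    for t in range(1, n_iter + 1):
        strongest = pop[np.argmin(fit)].copy()        # step 5: strongest walrus
        for i in range(n_pop):
            I = rng.integers(1, 3, dim)               # assumed I in {1, 2}
            # Phase 1 (exploration, assumed form): move toward the strongest walrus
            accept(i, pop[i] + rng.random(dim) * (strongest - I * pop[i]))
            # Phase 2 (migration), Eqs. (5)-(6)
            k = rng.choice([j for j in range(n_pop) if j != i])
            if fit[k] < fit[i]:
                accept(i, pop[i] + rng.random(dim) * (pop[k] - I * pop[i]))
            else:
                accept(i, pop[i] + rng.random(dim) * (pop[i] - pop[k]))
            # Phase 3 (exploitation), Eqs. (7)-(9): shrinking local neighborhood
            lb_l, ub_l = lb / t, ub / t
            accept(i, pop[i] + (lb_l + (ub_l - rng.random(dim) * lb_l)))
    best = np.argmin(fit)
    return pop[best], fit[best]                       # best quasi-optimal solution
```

On a simple sphere function this sketch converges quickly, but it is only a reading aid under the stated assumptions, not the authors' reference implementation.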
GA PSO GSA TLBO GWO MVO TSA MPA RSA WSO WaOA
avg -8732.0566 -6655.931 -2500.7139 -5231.075 -6083.0982 -7816.9559 -9767.2623 -6168.7082 -5406.0627 -7093.1044 -8881.1061
std 599.35054 796.53218 393.55973 587.7787 1045.1408 651.41296 475.49897 492.70948 338.83459 1097.2857 152.94837
F8 bsf -9653.3571 -7989.6611 -3246.4966 -6299.8836 -7654.2152 -9073.2252 -10689.627 -7227.6685 -5655.4772 -9790.4976 -9075.5449
med -8768.4313 -6498.1012 -2466.9839 -5134.4807 -6019.692 -7735.8964 -9758.4183 -6271.1354 -5493.351 -7050.2823 -8917.6187
rank 3 6 11 10 8 4 1 7 9 5 2
avg 57.122334 62.018927 25.868921 0 0.1072815 95.254712 0 166.51009 0 27.043749 0
std 15.547508 14.674411 6.6313076 0 0.4797776 24.107803 0 41.096443 0 6.1049913 0
F9 bsf 34.598517 44.773094 9.9495906 0 0 43.910246 0 91.619574 0 15.188582 0
med 54.083175 56.224581 26.863884 0 0 91.11852 0 171.11072 0 26.372688 0
rank 5 6 3 1 2 7 1 8 1 4 1
avg 3.5854087 3.2364098 8.30E-09 4.09E-15 1.67E-14 0.5370924 3.91E-15 1.7230212 8.882E-16 4.7802266 2.13E-15
std 0.4151563 0.8973082 1.60E-09 1.09E-15 3.15E-15 0.6513908 1.30E-15 1.6183189 0 0.8814042 1.74E-15
F10 bsf 2.856219 1.7750878 5.13E-09 8.88E-16 1.15E-14 0.0918005 8.88E-16 1.51E-14 8.882E-16 2.8691584 8.88E-16
med 3.5140585 3.1264706 8.16E-09 4.44E-15 1.51E-14 0.1269169 4.44E-15 2.7111539 8.882E-16 4.8412315 8.88E-16
rank 10 9 6 4 5 7 3 8 1 11 2
avg 1.5658187 0.0989545 8.7022139 0 0.0032973 0.4116334 0 0.0065336 0 2.0387294 0
std 0.1869219 0.1284744 4.6881208 0 0.0062632 0.1070168 0 0.0063468 0 1.7243749 0
F11 bsf 1.2168372 0.0001701 2.7594413 0 0 0.2741324 0 0 0 1.107167 0
med 1.5528338 0.0628106 8.2290232 0 0 0.3958958 0 0.0088858 0 1.5129228 0
rank 6 4 8 1 2 5 1 3 1 7 1
avg 0.154488 1.5947308 0.3648692 0.0821009 0.0413696 1.460569 1.82E-10 7.0583987 1.3120904 2.8972544 1.57E-32
std 0.1110493 1.3141826 0.5614192 0.0268307 0.0193412 1.458994 9.87E-11 3.6829525 0.3309185 1.2781011 2.81E-48
F12 bsf 0.0487615 0.0004665 3.54E-19 0.0454024 0.0134185 0.0019963 4.63E-11 0.5684754 0.6975687 0.6649101 1.57E-32
med 0.1255449 1.4939335 0.132149 0.0812882 0.0370402 1.0833741 1.53E-10 6.8697136 1.5217562 2.8024353 1.57E-32
rank 5 9 6 4 3 8 2 11 7 10 1
avg 2.2803846 5.1857653 0.2491467 1.0496147 0.5714104 0.0242273 0.0013037 2.8069554 5.442E-22 8081.2485 1.35E-32
std 0.9391074 4.2390635 0.7537801 0.2541193 0.2372261 0.0208494 0.0038377 0.5753073 2.344E-21 23135.549 2.81E-48
F13 bsf 0.9745414 0.2328254 5.86E-18 0.5896332 0.1002619 0.00478 6.66E-10 1.3560246 1.059E-31 13.356494 1.35E-32
med 2.0472332 4.7415948 1.19E-17 1.0475961 0.6576587 0.0175472 3.03E-09 2.8573762 7.794E-31 38.79782 1.35E-32
rank 8 10 5 7 6 4 3 9 2 11 1
Sum rank 37 44 39 27 26 35 11 46 21 48 8
Mean rank 6.16667 7.33333 6.5 4.5 4.33333 5.83333 1.83333 7.66667 3.5 8 1.33333
Total rank 7 9 8 5 4 6 2 10 3 11 1
Table 3. Results of optimization of WaOA and competitor metaheuristics on the high-
dimensional multimodal functions.
GA PSO GSA TLBO GWO MVO TSA MPA RSA WSO WaOA
avg 0.9981643 2.5666387 3.5074504 1.2956191 4.5112709 0.9980038 0.9980038 9.8626047 3.4763804 1.0972089 0.9980038
std 0.0004209 3.1924333 2.2232827 0.7268706 4.9191731 5.84E-12 7.20E-17 4.4088461 2.5684321 0.4436585 1.02E-16
F14 bsf 0.9980038 0.9980038 0.9980038 0.9980038 0.9980038 0.9980038 0.9980038 0.9980038 1.0478432 0.9980038 0.9980038
med 0.998005 1.9920309 2.4246779 0.9980039 0.9980038 0.9980038 0.9980038 10.763181 2.9821052 0.9980038 0.9980038
rank 3 6 8 5 9 2 1 10 7 4 1
avg 0.0088845 0.0025359 0.0038451 0.0014762 0.0053342 0.0046946 0.0003075 0.006305 0.001262 0.0013103 0.0003075
std 0.0086194 0.0061158 0.0030393 0.0044582 0.0089026 0.0080421 3.92E-19 0.0138809 0.0005632 0.0044846 9.87E-20
F15 bsf 0.0008345 0.0003075 0.0013735 0.0003122 0.0003075 0.0003268 0.0003075 0.0003076 0.0006652 0.0003075 0.0003075
med 0.0051832 0.0003075 0.002212 0.0003187 0.0003079 0.0007413 0.0003075 0.0004825 0.0011743 0.0003075 0.0003075
rank 11 6 7 5 9 8 2 10 3 4 1
avg -1.0316252 -1.0316285 -1.0316285 -1.0316268 -1.0316284 -1.0316284 -1.0316285 -1.0284655 -1.0295585 -1.0316284 -1.0316285
std 9.20E-06 1.61E-16 1.02E-16 1.46E-06 4.11E-09 4.04E-08 2.10E-16 0.0097351 0.0069817 3.273E-08 2.28E-16
F16 bsf -1.0316285 -1.0316285 -1.0316285 -1.0316284 -1.0316285 -1.0316285 -1.0316285 -1.0316284 -1.0316235 -1.0316285 -1.0316285
med -1.0316282 -1.0316285 -1.0316285 -1.0316273 -1.0316284 -1.0316284 -1.0316285 -1.0316283 -1.0312763 -1.0316285 -1.0316285
rank 6 1 1 5 2 4 1 8 7 3 1
avg 0.3980165 0.6018112 0.3978874 0.3980571 0.3979973 0.3978874 0.3978874 0.3979132 0.4116021 0.3978874 0.3978874
std 0.0003531 0.5653864 0 0.0002036 0.0004897 1.30E-07 0 3.52E-05 0.0206876 0 0
F17 bsf 0.3978874 0.3978874 0.3978874 0.3978876 0.3978874 0.3978874 0.3978874 0.3978879 0.3979635 0.3978874 0.3978874
med 0.3978938 0.3978874 0.3978874 0.3980127 0.3978875 0.3978874 0.3978874 0.3979045 0.4031937 0.3978874 0.3978874
rank 5 8 1 6 4 2 1 3 7 1 1
avg 3.0098143 3 3 3.0000008 3.0000125 3.0000004 3 8.8017182 7.5115846 3 3
std 0.0243289 2.87E-15 3.44E-15 1.14E-06 1.28E-05 3.31E-07 1.23E-15 20.497606 11.136857 3.529E-16 5.76E-16
F18 bsf 3.0000007 3 3 3 3.0000001 3 3 3.0000003 3.0000033 3 3
med 3.0001376 3 3 3.0000007 3.0000086 3.0000003 3 3.0000084 3.0001994 3 3
rank 8 3 4 6 7 5 2 10 9 1 1
avg -3.8626818 -3.8241312 -3.8627821 -3.8617086 -3.8612086 -3.862782 -3.8627821 -3.8627425 -3.8195154 -3.8627821 -3.8627821
std 0.0002087 0.1728521 1.97E-15 0.0023471 0.0028936 1.67E-07 2.28E-15 2.60E-05 0.0360682 2.278E-15 2.28E-15
F19 bsf -3.862751 -3.8627821 -3.8627821 -3.8627821 -3.8627816 -3.8627821 -3.8627821 -3.862781 -3.8621529 -3.8627821 -3.8627821
med -3.8627639 -3.8627821 -3.8627821 -3.8625048 -3.8627639 -3.8627821 -3.8627821 -3.8627476 -3.825845 -3.8627821 -3.8627821
rank 4 7 1 5 6 2 1 3 8 1 1
avg -3.2074926 -3.2857259 -3.3219952 -3.2448865 -3.26319 -3.2564435 -3.3219952 -3.2610207 -2.5357686 -3.3160412 -3.3219952
std 0.1330767 0.0665186 3.81E-16 0.0681168 0.0698817 0.0608337 4.44E-16 0.08834 0.4743104 0.0265835 4.44E-16
F20 bsf -3.3201329 -3.3219952 -3.3219952 -3.3165345 -3.321995 -3.3219943 -3.3219952 -3.3216262 -3.0036949 -3.3219952 -3.3219952
med -3.2347512 -3.3219952 -3.3219952 -3.2495147 -3.321992 -3.2030757 -3.3219952 -3.3201103 -2.7683014 -3.3219952 -3.3219952
rank 8 3 1 7 4 6 1 5 9 2 1
avg -5.1961582 -4.5268585 -6.3750274 -6.231103 -9.900112 -7.6132836 -10.1532 -7.4645915 -5.055196 -8.4065104 -10.1532
std 2.5341696 3.0135558 3.5951183 1.9403488 1.1297318 2.6059 2.08E-15 3.1774824 2.788E-07 3.1433508 3.21E-15
F21 bsf -9.8099505 -9.2118711 -10.153084 -10.1532 -10.1532 -10.153189 -10.1532 -10.099531 -5.0551966 -10.1532 -10.1532
med -5.2428461 -2.6828604 -5.3837395 -6.902018 -10.152715 -7.6269292 -10.1532 -9.8476972 -5.0551959 -10.1532 -10.1532
rank 8 10 6 7 2 4 1 5 9 3 1
avg -5.8434567 -6.4347072 -10.402941 -7.3995435 -10.402536 -9.605593 -10.402941 -4.6611193 -5.0876679 -9.3525262 -10.402941
std 2.7273583 3.7465155 4.08E-15 2.1676383 0.0001684 1.947217 3.65E-15 3.2949226 9.805E-07 2.5719025 3.05E-15
F22 bsf -10.345307 -10.402941 -10.402941 -10.402846 -10.402934 -10.0663 -10.402941 -10.354892 -5.0876699 -10.402941 -10.402941
med -5.1988266 -5.1082473 -10.402941 -7.8935553 -10.402537 -10.402859 -10.402941 -2.7559742 -5.0876678 -10.402941 -10.402941
rank 7 6 1 5 2 3 1 9 8 4 1
avg -7.3214569 -7.1817094 -10.53641 -8.0026439 -10.535988 -9.1867389 -10.53641 -6.9534073 -5.1284729 -8.7165214 -10.53641
std 2.583081 3.8817584 1.68E-15 2.127303 0.0002421 2.3983439 2.51E-15 3.6225412 1.283E-06 3.2430933 1.82E-15
F23 bsf -10.20802 -10.53641 -10.53641 -10.444176 -10.536352 -10.536387 -10.53641 -10.422351 -5.128476 -10.53641 -10.53641
med -8.5115441 -8.8302784 -10.53641 -10.536025 -10.536346 -10.53641 -10.53641 -7.7068006 -5.1284726 -10.53641 -10.53641
rank 7 8 2 6 3 4 2 9 10 5 1
Sum rank 67 58 32 57 48 40 13 72 77 28 10
Mean rank 6.7 5.8 3.2 5.7 4.8 4 1.3 7.2 7.7 2.8 1
Total rank 9 8 4 7 6 5 2 10 11 3 1
Table 4. Results of optimization of the WaOA and competitor metaheuristics on fixed-
dimensional multimodal functions.
Figure 3. The boxplot diagram of WaOA and competitor algorithms performances on functions
F1 to F23.
Statistical analysis
In this subsection, the superiority of WaOA over competitor algorithms is statistically analyzed
to determine whether it is significant. To perform statistical analysis on the
obtained results, the Wilcoxon rank-sum test48 is utilized. The Wilcoxon rank-sum test is a non-parametric
test used to detect significant differences between two data samples. The results of the
statistical analysis using this test are presented in Table 5. The simulation results show that
WaOA has a statistically significant superiority over a competitor
algorithm in cases where the p-value is less than 0.05.
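For readers reproducing this analysis, the rank-sum p-value can be approximated as below (a minimal normal-approximation sketch without tie correction; in practice a library routine such as scipy.stats.ranksums is preferable):

```python
import numpy as np
from math import erfc, sqrt

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation
    (minimal sketch, no tie correction)."""
    data = np.concatenate([a, b])
    ranks = np.empty(len(data))
    ranks[data.argsort()] = np.arange(1, len(data) + 1)  # ranks 1..n+m
    n, m = len(a), len(b)
    w = ranks[:n].sum()                                  # rank sum of sample a
    mu = n * (n + m + 1) / 2.0                           # mean of W under H0
    sigma = sqrt(n * m * (n + m + 1) / 12.0)             # std of W under H0
    z = (w - mu) / sigma
    return erfc(abs(z) / sqrt(2.0))                      # two-sided p-value
```

For clearly separated per-run result samples this p-value falls well below 0.05, mirroring the significance criterion applied in Table 5.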
Compared Algorithms    Unimodal    High-Multimodal    Fixed-Multimodal
WaOA vs. WSO 1.01E-24 6.25E-18 1.44E-34
WaOA vs. RSA 1.01E-24 2.29E-12 2.09E-26
WaOA vs. MPA 1.01E-24 5.98E-20 1.44E-34
WaOA vs. TSA 1.01E-24 0.044967 1.13E-05
WaOA vs. MVO 1.01E-24 3.17E-18 1.44E-34
WaOA vs. GWO 1.01E-24 1.17E-16 1.44E-34
WaOA vs. TLBO 1.01E-24 2.37E-13 1.44E-34
WaOA vs. GSA 1.01E-24 1.97E-21 3.22E-13
WaOA vs. PSO 1.01E-24 1.97E-21 5.35E-17
WaOA vs. GA 1.01E-24 1.49E-11 1.44E-34
WaOA WSO RSA MPA TSA MVO GWO TLBO GSA PSO GA
avg 1.00E+02 4.78E+04 9.04E+07 7.08E+06 2.39E+07 1.28E+06 4.68E+06 4.26E+06 4.81E+06 6.10E+04 2.15E+07
C15-F1 std 1.52E-07 3.50E+04 2.70E+07 3.39E+06 1.88E+07 4.76E+05 5.18E+06 2.29E+06 1.21E+06 5.59E+04 2.36E+07
rank 1 2 11 8 10 4 6 5 7 3 9
avg 2.00E+02 1.01E+04 8.96E+09 4.16E+06 1.70E+09 1.57E+04 6.57E+06 9.88E+07 5.00E+03 3.06E+03 4.05E+06
C15-F2 std 5.06E-06 2.89E+03 1.43E+09 2.32E+06 2.72E+09 8.00E+03 3.53E+06 2.77E+07 1.32E+03 4.98E+03 2.84E+06
rank 1 4 11 7 10 5 8 9 3 2 6
avg 3.15E+02 3.20E+02 3.21E+02 3.20E+02 3.20E+02 3.20E+02 3.20E+02 3.20E+02 3.20E+02 3.20E+02 3.20E+02
C15-F3 std 1.00E+01 5.70E-02 6.29E-02 8.88E-02 1.71E-01 1.23E-02 6.71E-02 5.39E-02 1.22E-05 3.40E-06 1.20E-01
rank 1 10 11 5 8 4 9 7 2 3 6
avg 4.09E+02 4.16E+02 4.68E+02 4.47E+02 4.48E+02 4.25E+02 4.13E+02 4.36E+02 4.32E+02 4.18E+02 4.28E+02
C15-F4 std 2.37E+00 5.77E-01 4.16E+00 1.83E+01 8.18E+00 8.54E+00 3.08E+00 1.44E+00 6.96E+00 8.18E+00 5.21E+00
rank 1 3 11 9 10 5 2 8 7 4 6
avg 6.10E+02 1.11E+03 1.82E+03 1.71E+03 1.54E+03 1.19E+03 1.05E+03 1.59E+03 1.62E+03 1.38E+03 8.01E+02
C15-F5 std 8.78E+01 5.73E+02 1.98E+02 2.44E+02 2.33E+02 1.17E+02 3.06E+02 1.45E+02 1.97E+02 4.01E+02 1.88E+02
rank 1 4 11 10 7 5 3 8 9 6 2
avg 6.06E+02 8.59E+02 1.33E+06 4.45E+05 1.77E+04 5.66E+03 4.48E+04 2.34E+04 9.37E+04 3.97E+03 1.28E+05
C15-F6 std 3.87E+00 1.65E+02 2.22E+06 3.72E+05 2.71E+04 3.44E+03 4.17E+04 2.80E+04 8.35E+04 4.50E+03 2.12E+05
rank 1 2 11 10 5 4 7 6 8 3 9
avg 7.01E+02 7.02E+02 7.25E+02 7.05E+02 7.14E+02 7.02E+02 7.03E+02 7.04E+02 7.04E+02 7.03E+02 7.05E+02
C15-F7 std 3.21E-01 1.04E+00 1.28E+01 1.63E+00 9.16E+00 7.26E-01 1.26E+00 8.04E-01 4.26E-01 1.45E+00 4.35E-01
rank 1 2 11 9 10 3 5 6 7 4 8
avg 8.01E+02 8.66E+02 1.95E+05 1.17E+04 5.30E+05 6.72E+03 4.50E+03 3.27E+03 4.31E+05 8.05E+04 5.37E+05
C15-F8 std 4.81E-01 4.56E+01 2.73E+05 8.05E+03 1.05E+06 7.65E+03 1.67E+03 7.52E+02 5.82E+05 1.52E+05 7.04E+05
rank 1 2 8 6 10 5 4 3 9 7 11
avg 1.00E+03 1.00E+03 1.03E+03 1.00E+03 1.02E+03 1.00E+03 1.00E+03 1.00E+03 1.00E+03 1.00E+03 1.00E+03
C15-F9 std 5.95E-02 5.37E-01 1.82E+00 2.33E-01 2.10E+01 9.06E-02 1.51E-01 9.66E-02 2.03E-01 5.05E-01 2.27E+00
rank 2 8 11 6 10 1 3 5 4 7 9
avg 1.22E+03 1.29E+03 5.75E+04 1.47E+04 8.46E+03 2.45E+03 1.99E+03 4.48E+03 1.43E+05 2.28E+03 8.56E+03
C15-F10 std 2.20E-01 6.41E+01 5.21E+04 1.91E+04 4.47E+03 1.58E+03 6.42E+02 9.19E+02 1.19E+05 6.70E+02 8.29E+03
rank 1 2 10 9 7 5 3 6 11 4 8
avg 1.33E+03 1.26E+03 1.49E+03 1.53E+03 1.35E+03 1.43E+03 1.43E+03 1.33E+03 1.40E+03 1.45E+03 1.33E+03
C15-F11 std 1.48E+02 1.65E+02 7.13E+01 1.42E+02 1.55E+02 6.58E+01 6.80E+01 1.40E+02 8.10E-01 1.09E+02 1.40E+02
rank 2 1 10 11 5 7 8 3 6 9 4
avg 1.30E+03 1.31E+03 1.34E+03 1.31E+03 1.31E+03 1.30E+03 1.30E+03 1.31E+03 1.30E+03 1.30E+03 1.31E+03
C15-F12 std 4.54E-01 4.00E+00 6.01E+00 5.81E+00 2.14E+01 4.61E-01 6.84E-01 1.28E+00 4.74E-01 1.11E+00 1.86E+00
rank 2 7 11 8 10 4 3 6 1 5 9
avg 1.30E+03 1.30E+03 1.30E+03 1.30E+03 1.31E+03 1.30E+03 1.30E+03 1.30E+03 1.53E+03 1.30E+03 1.30E+03
C15-F13 std 6.61E-05 8.56E-02 7.92E-02 9.65E-04 8.72E+00 1.89E-04 7.89E-05 9.94E-04 2.26E+02 2.44E-03 9.98E-01
rank 1 8 7 4 10 3 2 5 11 6 9
avg 3.63E+03 3.68E+03 1.37E+04 8.39E+03 1.05E+04 5.37E+03 7.76E+03 7.13E+03 7.04E+03 4.72E+03 4.54E+03
C15-F14 std 1.42E+03 1.41E+03 3.62E+03 1.54E+01 3.76E+03 4.16E+03 1.65E+03 3.33E+03 3.80E+03 6.28E+02 9.97E+02
rank 1 2 11 9 10 5 8 7 6 4 3
avg 1.60E+03 1.60E+03 2.25E+03 1.60E+03 2.56E+03 1.60E+03 1.61E+03 1.61E+03 1.60E+03 1.61E+03 1.62E+03
C15-F15 std 1.56E-06 5.86E+00 3.88E+02 1.14E+00 1.85E+03 8.24E-03 1.14E+01 1.98E+00 4.71E-10 1.19E+01 3.03E+00
rank 2 4 10 5 11 3 8 7 1 6 9
Sum rank 19 61 155 116 133 63 79 91 92 73 108
Mean rank 1.2666667 4.0666667 10.333333 7.7333333 8.8666667 4.2 5.2666667 6.0666667 6.1333333 4.8666667 7.2
Total rank 1 2 11 9 10 3 5 6 7 4 8
Table 8. Evaluation results of the CEC 2015 test suite functions.
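The Sum rank, Mean rank, and Total rank rows at the bottom of Table 8 follow a simple Friedman-style aggregation: each algorithm is ranked on every benchmark function, the ranks are summed over the 15 CEC 2015 functions, and the final (Total) ordering is by mean rank. A minimal sketch (the helper name and the placeholder rank list are illustrative; the list below merely reproduces WaOA's reported sum of 19):

```python
def aggregate_ranks(per_function_ranks):
    """Return (sum_rank, mean_rank) for one algorithm's per-function ranks."""
    total = sum(per_function_ranks)
    return total, total / len(per_function_ranks)

# 15 hypothetical per-function ranks summing to 19, matching WaOA's row.
waoa_ranks = [1] * 11 + [2] * 4
sum_rank, mean_rank = aggregate_ranks(waoa_ranks)
print(sum_rank, round(mean_rank, 7))  # 19 1.2666667
```

The algorithm with the smallest mean rank receives Total rank 1, which is how WaOA's mean rank of 1.2666667 yields its first-place Total rank.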
Subject to:
𝑔1(𝑥) = 1 − 𝑥2³𝑥3/(71785𝑥1⁴) ≤ 0, 𝑔2(𝑥) = (4𝑥2² − 𝑥1𝑥2)/(12566(𝑥2𝑥1³ − 𝑥1⁴)) + 1/(5108𝑥1²) − 1 ≤ 0,
𝑔3(𝑥) = 1 − 140.45𝑥1/(𝑥2²𝑥3) ≤ 0, 𝑔4(𝑥) = (𝑥1 + 𝑥2)/1.5 − 1 ≤ 0.
With
0.05 ≤ 𝑥1 ≤ 2, 0.25 ≤ 𝑥2 ≤ 1.3 and 2 ≤ 𝑥3 ≤ 15.
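With the constraints and bounds above, a candidate design can be checked directly. The sketch below evaluates WaOA's reported optimum from Table 10; the objective f(x) = (x3 + 2)x2x1² is the standard spring-mass formulation of this benchmark (assumed here, as the objective precedes this excerpt), and the function names are illustrative:

```python
def spring_cost(x):
    """Spring mass f(x) = (x3 + 2) * x2 * x1^2 (standard formulation)."""
    x1, x2, x3 = x
    return (x3 + 2.0) * x2 * x1 ** 2

def spring_constraints(x):
    """Constraint values g1..g4 from above; feasible when all are <= 0."""
    x1, x2, x3 = x
    return [
        1.0 - x2 ** 3 * x3 / (71785.0 * x1 ** 4),
        (4.0 * x2 ** 2 - x1 * x2) / (12566.0 * (x2 * x1 ** 3 - x1 ** 4))
        + 1.0 / (5108.0 * x1 ** 2) - 1.0,
        1.0 - 140.45 * x1 / (x2 ** 2 * x3),
        (x1 + x2) / 1.5 - 1.0,
    ]

# WaOA's reported optimum, Table 10: (d, D, P)
x_best = (0.0519693, 0.363467, 10.9084)
print(spring_cost(x_best))                                 # ~0.012672
print(all(g <= 1e-3 for g in spring_constraints(x_best)))  # True
```

Note that g1 and g2 are essentially active (very close to zero) at this point, which is typical of the constrained optimum of this problem; the small tolerance absorbs rounding of the reported variables.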
The results of applying WaOA and the competing algorithms to the tension/compression spring design problem are presented in Table 10. The simulation results show that WaOA provides the optimal solution to this problem with design-variable values of (0.0519693, 0.363467, 10.9084) and a corresponding objective function value of 0.012672. The statistical results obtained from the performance of WaOA and the competitor algorithms are reported in Table 11, which shows the superiority of WaOA in providing better values for the statistical indicators. The WaOA convergence curve for the tension/compression spring problem is shown in Figure 7.
Algorithm Optimum variables Optimum cost
d D P
WaOA 0.0519693 0.363467 10.9084 0.012672
WSO 0.057641 0.583026 14.00465 0.012722
RSA 0.051734 0.360336 11.54961 0.01317
MPA 0.050657 0.340484 11.98053 0.012782
TSA 0.049701 0.338294 11.95873 0.012786
MVO 0.049525 0.307463 14.85743 0.013305
GWO 0.049525 0.312953 14.09102 0.012926
TLBO 0.050297 0.331597 12.60176 0.012818
GSA 0.049525 0.314295 14.09343 0.012983
PSO 0.049624 0.307163 13.86693 0.013147
GA 0.049772 0.313344 15.09475 0.012885
Table 10. Comparison results for the tension/compression spring design problem.
Table 11. Statistical results for the tension/compression spring design problem.
Welded beam design.
Welded beam design is a well-known real-world engineering design problem whose main goal is to minimize the fabrication cost of the welded beam. A schematic of this design is shown in Figure 8 [49]. The welded beam design problem is formulated as follows:
Consider 𝑋 = [𝑥1 , 𝑥2 , 𝑥3 , 𝑥4 ] = [ℎ, 𝑙, 𝑡, 𝑏].
Minimize 𝑓 (𝑥) = 1.10471𝑥12 𝑥2 + 0.04811𝑥3 𝑥4 (14.0 + 𝑥2 ).
Subject to:
𝑔1 (𝑥) = 𝜏(𝑥) − 13600 ≤ 0, 𝑔2 (𝑥) = 𝜎(𝑥) − 30000 ≤ 0,
𝑔3 (𝑥) = 𝑥1 − 𝑥4 ≤ 0, 𝑔4 (𝑥) = 0.10471𝑥12 + 0.04811𝑥3 𝑥4 (14 + 𝑥2 ) − 5.0 ≤ 0,
𝑔5 (𝑥) = 0.125 − 𝑥1 ≤ 0, 𝑔6 (𝑥) = 𝛿 (𝑥) − 0.25 ≤ 0, 𝑔7 (𝑥) = 6000 − 𝑝𝑐 (𝑥) ≤ 0.
Where
𝜏(𝑥) = √((𝜏′)² + 2𝜏′𝜏″·𝑥2/(2𝑅) + (𝜏″)²), 𝜏′ = 6000/(√2·𝑥1𝑥2), 𝜏″ = 𝑀𝑅/𝐽,
𝑀 = 6000(14 + 𝑥2/2), 𝑅 = √(𝑥2²/4 + ((𝑥1 + 𝑥3)/2)²),
𝐽 = 2√2·𝑥1𝑥2·(𝑥2²/12 + ((𝑥1 + 𝑥3)/2)²),
𝜎(𝑥) = 504000/(𝑥4𝑥3²), 𝛿(𝑥) = 65856000/((30·10⁶)𝑥4𝑥3³),
With
0.1 ≤ 𝑥1 , 𝑥4 ≤ 2 and 0.1 ≤ 𝑥2 , 𝑥3 ≤ 10.
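As a sanity check on the formulation above, the sketch below evaluates the cost objective and the shear-stress constraint g1 at WaOA's reported optimum from Table 12 (function names are illustrative; the buckling load pc in g7 is not reproduced in the text and is omitted here):

```python
import math

def welded_beam_cost(x):
    """f(x) = 1.10471*h^2*l + 0.04811*t*b*(14 + l)."""
    h, l, t, b = x
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)

def shear_stress(x):
    """tau(x) built from tau', tau'', M, R, J as defined above."""
    h, l, t, b = x
    tau_p = 6000.0 / (math.sqrt(2.0) * h * l)
    M = 6000.0 * (14.0 + l / 2.0)
    R = math.sqrt(l ** 2 / 4.0 + ((h + t) / 2.0) ** 2)
    J = 2.0 * math.sqrt(2.0) * h * l * (l ** 2 / 12.0 + ((h + t) / 2.0) ** 2)
    tau_pp = M * R / J
    return math.sqrt(tau_p ** 2
                     + 2.0 * tau_p * tau_pp * l / (2.0 * R)
                     + tau_pp ** 2)

# WaOA's reported optimum, Table 12: (h, l, t, b)
x_best = (0.20573, 3.470489, 9.036624, 0.20573)
print(welded_beam_cost(x_best))                # ~1.724901
print(shear_stress(x_best) <= 13600.0 + 1.0)   # g1 holds (near-active)
```

The shear-stress constraint is essentially active at the optimum, so a small tolerance is used to absorb rounding of the reported variables.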
WaOA and the competing algorithms were applied to the welded beam design problem, and the results are presented in Table 12. Based on these results, WaOA provides the optimal solution to this problem with design-variable values of (0.20573, 3.470489, 9.036624, 0.20573) and a corresponding objective function value of 1.724901. Statistical results from the performance of WaOA and the competitor algorithms are reported in Table 13, which shows that WaOA performs better in terms of the statistical indicators. The convergence curve of WaOA on the welded beam design problem is shown in Figure 9.
𝑔4(𝑥) = 1.93𝑥5³/(𝑥2𝑥3𝑥7⁴) − 1 ≤ 0,
𝑔5(𝑥) = (1/(110𝑥6³))·√((745𝑥4/(𝑥2𝑥3))² + 16.9·10⁶) − 1 ≤ 0,
𝑔6(𝑥) = (1/(85𝑥7³))·√((745𝑥5/(𝑥2𝑥3))² + 157.5·10⁶) − 1 ≤ 0,
𝑔7(𝑥) = 𝑥2𝑥3/40 − 1 ≤ 0, 𝑔8(𝑥) = 5𝑥2/𝑥1 − 1 ≤ 0, 𝑔9(𝑥) = 𝑥1/(12𝑥2) − 1 ≤ 0,
𝑔10(𝑥) = (1.5𝑥6 + 1.9)/𝑥4 − 1 ≤ 0, 𝑔11(𝑥) = (1.1𝑥7 + 1.9)/𝑥5 − 1 ≤ 0.
With
2.6 ≤ 𝑥1 ≤ 3.6, 0.7 ≤ 𝑥2 ≤ 0.8, 17 ≤ 𝑥3 ≤ 28, 7.3 ≤ 𝑥4 ≤ 8.3, 7.8 ≤ 𝑥5 ≤ 8.3, 2.9 ≤ 𝑥6 ≤ 3.9,
and 5 ≤ 𝑥7 ≤ 5.5 .
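The reported optimum can be checked against the constraints above. The weight objective in the sketch below is the standard speed-reducer formulation and is assumed here, since only the constraints appear in this excerpt; function names are illustrative:

```python
import math

def speed_reducer_cost(x):
    """Standard speed-reducer weight objective (assumed; not reproduced
    in this excerpt)."""
    b, m, p, l1, l2, d1, d2 = x
    return (0.7854 * b * m ** 2 * (3.3333 * p ** 2 + 14.9334 * p - 43.0934)
            - 1.508 * b * (d1 ** 2 + d2 ** 2)
            + 7.4777 * (d1 ** 3 + d2 ** 3)
            + 0.7854 * (l1 * d1 ** 2 + l2 * d2 ** 2))

def g5(x):
    """Shaft-stress constraint g5 from above; feasible when <= 0."""
    b, m, p, l1, l2, d1, d2 = x
    return (math.sqrt((745.0 * l1 / (m * p)) ** 2 + 16.9e6)
            / (110.0 * d1 ** 3) - 1.0)

# WaOA's reported optimum, Table 14: (b, m, p, l1, l2, d1, d2)
x_best = (3.5, 0.7, 17.0, 7.3, 7.8, 3.350209, 5.286683)
print(speed_reducer_cost(x_best))  # ~2996.35 (Table 14 reports 2996.3482)
print(g5(x_best) <= 1e-3)          # True; g5 is essentially active here
```

That g5 (and likewise g6) sits almost exactly on its boundary at this point is expected: the stress constraints are the binding ones at the known optimum of this benchmark.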
The results obtained by employing WaOA and the competitor algorithms in speed reducer design optimization are reported in Table 14. The results show that WaOA provides the optimal solution to this problem with design-variable values of (3.5, 0.7, 17, 7.3, 7.8, 3.35021, 5.28668) and a corresponding objective function value of 2996.3482. The statistical results obtained from WaOA and the compared algorithms are reported in Table 15, which indicates the superiority of the proposed WaOA. The WaOA convergence curve while solving the speed reducer design problem is shown in Figure 11.
Algorithm Optimum variables Optimum cost
b m p l1 l2 d1 d2
WaOA 3.50000 0.700007 17 7.3 7.8 3.350209 5.286683 2996.3482
WSO 3.504191 0.70028 17.0068 7.311043 7.750249 3.35201 5.288865 2997.714
RSA 3.510772 0.70028 17.0068 7.399095 7.803283 3.361271 5.291898 3006.339
MPA 3.498013 0.698879 16.97279 7.288825 7.787514 3.347813 5.283283 2997.045
TSA 3.503107 0.698879 16.97279 7.36976 7.80327 3.354383 5.281313 2999.781
MVO 3.496443 0.698879 16.97279 8.287294 7.787569 3.348954 5.281261 3004.253
GWO 3.504918 0.698879 16.97279 7.398892 7.803577 3.354609 5.281322 3001.42
TLBO 3.50517 0.698879 16.97279 7.288316 7.787514 3.45745 5.283757 3029.041
GSA 3.596322 0.698879 16.97279 8.287294 7.787514 3.366182 5.283768 3049.589
PSO 3.506667 0.698879 16.97279 8.337218 7.787514 3.358732 5.282268 3066.02
GA 3.516528 0.698879 16.97279 8.357187 7.787514 3.363496 5.283262 3027.481
Table 14. Comparison results for the speed reducer design problem.
Algorithm Best Mean Worst Std. Dev. Median
WaOA 2996.3482 2999.4961 3000.972 1.2463198 2998.6108
WSO 2997.714 3003.365 3008.597 5.221708 3001.932
RSA 3006.339 3013.236 3028.83 10.37327 3011.845
MPA 2997.045 2999.033 3003.281 1.931539 2999.979
TSA 2999.781 3005.237 3008.143 5.836758 3003.911
MVO 3004.253 3104.623 3210.524 79.62197 3104.623
GWO 3001.42 3028.228 3060.338 13.01596 3026.419
TLBO 3029.041 3065.296 3104.15 18.07054 3064.988
GSA 3049.589 3169.692 3363.192 92.55386 3156.113
PSO 3066.02 3185.877 3312.529 17.11513 3197.539
GA 3027.481 3294.662 3618.732 57.01195 3287.991
Table 15. Statistical results for the speed reducer design problem.
Pressure vessel design.
Pressure vessel design is a real-world optimization challenge that aims to minimize the design cost. A schematic of this design is shown in Figure 12 [52]. The pressure vessel design problem is formulated as follows:
Consider 𝑋 = [𝑥1 , 𝑥2 , 𝑥3 , 𝑥4 ] = [𝑇𝑠 , 𝑇ℎ , 𝑅, 𝐿].
Minimize 𝑓 (𝑥) = 0.6224𝑥1 𝑥3 𝑥4 + 1.778𝑥2 𝑥32 + 3.1661𝑥12 𝑥4 + 19.84𝑥12 𝑥3 .
Subject to:
𝑔1 (𝑥) = −𝑥1 + 0.0193𝑥3 ≤ 0, 𝑔2 (𝑥) = −𝑥2 + 0.00954𝑥3 ≤ 0,
𝑔3(𝑥) = −𝜋𝑥3²𝑥4 − (4/3)𝜋𝑥3³ + 1296000 ≤ 0, 𝑔4(𝑥) = 𝑥4 − 240 ≤ 0.
With
0 ≤ 𝑥1 , 𝑥2 ≤ 100, and 10 ≤ 𝑥3 , 𝑥4 ≤ 200.
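The formulation above can be evaluated directly at WaOA's reported optimum from Table 16. A minimal sketch (function names are illustrative; g3, the volume constraint, is essentially active at the optimum and is checked only implicitly through the cost):

```python
def vessel_cost(x):
    """f(x) = 0.6224*Ts*R*L + 1.778*Th*R^2 + 3.1661*Ts^2*L + 19.84*Ts^2*R."""
    ts, th, r, l = x
    return (0.6224 * ts * r * l + 1.778 * th * r ** 2
            + 3.1661 * ts ** 2 * l + 19.84 * ts ** 2 * r)

def vessel_g1_g2_g4(x):
    """Values of g1, g2, and g4 from above; feasible when all are <= 0."""
    ts, th, r, l = x
    return [-ts + 0.0193 * r, -th + 0.00954 * r, l - 240.0]

# WaOA's reported optimum, Table 16: (Ts, Th, R, L)
x_best = (0.778264, 0.384775, 40.32163, 199.8713)
print(vessel_cost(x_best))                               # ~5883.96
print(all(g <= 1e-3 for g in vessel_g1_g2_g4(x_best)))   # True
```

The thickness constraints g1 and g2 are nearly active at this point, which matches the intuition that the cheapest feasible vessel uses the thinnest shell and head the code allows for the chosen radius.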
WaOA and the competitor algorithms were applied to the pressure vessel design problem. The results obtained for the design variables are reported in Table 16. Based on this table, WaOA provides optimal design-variable values of (0.7782641, 0.3847753, 40.32163, 199.8713), which yields an objective function value of 5883.9604. The statistical results obtained from the performance of WaOA and the competitor algorithms are presented in Table 17. These results indicate that WaOA has effectively optimized the pressure vessel design problem by providing more favorable values for the statistical indicators. The WaOA convergence curve in achieving the optimal solution is shown in Figure 13.
Figure 13. Convergence analysis of the WaOA for the pressure vessel design optimization
problem.
Algorithm Optimum variables Optimum cost
Ts Th R L
WaOA 0.778264 0.384775 40.32163 199.8713 5883.9604
WSO 0.836709 0.417284 43.20765 160.9094 6010.62
RSA 0.810993 0.443429 42.0335 175.915 6088.866
MPA 0.795294 0.393338 41.20008 200 5909.092
TSA 0.796137 0.393104 41.21311 200 5912.899
MVO 0.826206 0.444881 42.80841 179.6187 5914.925
GWO 0.864286 0.427753 44.77817 159.8146 6035.531
TLBO 0.835526 0.427107 42.66592 187.6027 6161.892
GSA 1.109637 0.970461 50.4285 173.2081 11596.44
PSO 0.768879 0.408312 41.34057 202.3494 5913.862
GA 1.123661 0.926481 45.43235 183.6029 6576.192
Table 16. Comparison results for the pressure vessel design problem.
References
1 Gill, P. E., Murray, W. & Wright, M. H. Practical optimization. (Academic Press, 1981).
2 Fletcher, R. Practical methods of optimization. (John Wiley & Sons, 1987).
3 Cavazzuti, M. Optimization Methods: From Theory to Design Scientific and
Technological Aspects in Mechanics. (Springer, 2013).
4 Dehghani, M., Hubálovský, Š. & Trojovský, P. Tasmanian Devil Optimization: A New
Bio-Inspired Optimization Algorithm for Solving Optimization Algorithm. IEEE Access 10,
19599-19620 (2022).
5 Cervone, G., Franzese, P. & Keesee, A. P. Algorithm quasi‐optimal (AQ) learning. Wiley
Interdisciplinary Reviews: Computational Statistics 2, 218-236 (2010).
6 Osuna-Enciso, V., Cuevas, E. & Castañeda, B. M. A diversity metric for population-
based metaheuristic algorithms. Information Sciences 586, 192-208 (2022).
7 Wolpert, D. H. & Macready, W. G. No free lunch theorems for optimization. IEEE
Transactions on Evolutionary Computation 1, 67-82 (1997).
8 Goldberg, D. E. & Holland, J. H. Genetic Algorithms and Machine Learning. Machine
Learning 3, 95-99 (1988).
9 Storn, R. & Price, K. Differential evolution–a simple and efficient heuristic for global
optimization over continuous spaces. Journal of Global Optimization 11, 341-359 (1997).
10 Kennedy, J. & Eberhart, R. Particle swarm optimization. In Proceedings of ICNN'95 - International Conference on Neural Networks, 1942-1948 (IEEE, 1995).
11 Dorigo, M., Maniezzo, V. & Colorni, A. Ant system: optimization by a colony of
cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)
26, 29-41 (1996).
12 Mirjalili, S., Mirjalili, S. M. & Lewis, A. Grey Wolf Optimizer. Advances in Engineering
Software 69, 46-61 (2014).
13 Faramarzi, A., Heidarinejad, M., Mirjalili, S. & Gandomi, A. H. Marine Predators
Algorithm: A nature-inspired metaheuristic. Expert Systems with Applications 152, 113377
(2020).
14 Kaur, S., Awasthi, L. K., Sangal, A. L. & Dhiman, G. Tunicate Swarm Algorithm: A new
bio-inspired based metaheuristic paradigm for global optimization. Engineering Applications of
Artificial Intelligence 90, 103541 (2020).
15 Braik, M., Hammouri, A., Atwan, J., Al-Betar, M. A. & Awadallah, M. A. White Shark
Optimizer: A novel bio-inspired meta-heuristic algorithm for global optimization problems.
Knowledge-Based Systems 243, 108457 (2022).
16 Abualigah, L., Abd Elaziz, M., Sumari, P., Geem, Z. W. & Gandomi, A. H. Reptile
Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Systems with
Applications 191, 116158 (2022).
17 Koohi, S. Z., Hamid, N. A. W. A., Othman, M. & Ibragimov, G. Raccoon optimization
algorithm. IEEE Access 7, 5383-5399 (2018).
18 Abdollahzadeh, B., Gharehchopogh, F. S. & Mirjalili, S. African vultures optimization
algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems.
Computers & Industrial Engineering 158, 107408 (2021).
19 Trojovský, P. & Dehghani, M. Pelican Optimization Algorithm: A Novel Nature-Inspired
Algorithm for Engineering Applications. Sensors 22, 855 (2022).
20 Kirkpatrick, S., Gelatt, C. D. & Vecchi, M. P. Optimization by simulated annealing. Science 220, 671-680 (1983).
21 Rashedi, E., Nezamabadi-Pour, H. & Saryazdi, S. GSA: a gravitational search algorithm. Information Sciences 179, 2232-2248 (2009).
22 Mirjalili, S., Mirjalili, S. M. & Hatamlou, A. Multi-verse optimizer: a nature-inspired
algorithm for global optimization. Neural Computing and Applications 27, 495-513 (2016).
23 Eskandar, H., Sadollah, A., Bahreininejad, A. & Hamdi, M. Water cycle algorithm–A
novel metaheuristic optimization method for solving constrained engineering optimization
problems. Computers & Structures 110, 151-166 (2012).
24 Dehghani, M. et al. A spring search algorithm applied to engineering optimization
problems. Applied Sciences 10, 6173 (2020).
25 Zhao, W., Wang, L. & Zhang, Z. Atom search optimization and its application to solve a
hydrogeologic parameter estimation problem. Knowledge-Based Systems 163, 283-304 (2019).
26 Dehghani, M. & Samet, H. Momentum search algorithm: a new meta-heuristic
optimization algorithm inspired by momentum conservation law. SN Applied Sciences 2, 1-15
(2020).
27 Wei, Z., Huang, C., Wang, X., Han, T. & Li, Y. Nuclear reaction optimization: A novel
and powerful physics-based algorithm for global optimization. IEEE Access 7, 66084-66109
(2019).
28 Rao, R. V., Savsani, V. J. & Vakharia, D. Teaching–learning-based optimization: a novel
method for constrained mechanical design optimization problems. Computer-Aided Design 43,
303-315 (2011).
29 Moosavi, S. H. S. & Bardsiri, V. K. Poor and rich optimization algorithm: A new human-
based and multi populations algorithm. Engineering Applications of Artificial Intelligence 86,
165-181 (2019).
30 Zeidabadi, F.-A. et al. Archery Algorithm: A Novel Stochastic Optimization Algorithm
for Solving Optimization Problems. Computers, Materials & Continua 72, 399-416 (2022).
31 Shi, Y. Brain storm optimization algorithm. In Advances in Swarm Intelligence - Second
International Conference, Lecture Notes in Computer Science, 303-309 (Springer, 2011).
32 Trojovská, E., Dehghani, M. A new human-based metahurestic optimization method
based on mimicking cooking training. Sci Rep 12, 14861 (2022).
33 Ayyarao, T. L. et al. War Strategy Optimization Algorithm: A New Effective
Metaheuristic Algorithm for Global Optimization. IEEE Access 10, 25073-25105 (2022).
34 Dehghani, M. & Trojovský, P. Teamwork Optimization Algorithm: A New Optimization
Approach for Function Minimization/Maximization. Sensors 21, 4567 (2021).
35 Kaveh, A. & Zolghadr, A. A novel meta-heuristic algorithm: tug of war optimization.
Iran University of Science & Technology 6, 469-492 (2016).
36 Moghdani, R. & Salimifard, K. Volleyball premier league algorithm. Applied Soft
Computing 64, 161-185 (2018).
37 Zeidabadi, F. A. & Dehghani, M. POA: Puzzle Optimization Algorithm. International
Journal of Intelligent Engineering and Systems 15, 273-281 (2022).
38 Dehghani, M., Montazeri, Z., Malik, O. P., Ehsanifar, A. & Dehghani, A. OSA:
Orientation search algorithm. International Journal of Industrial Electronics, Control and
Optimization 2, 99-112 (2019).
39 Doumari, S. A., Givi, H., Dehghani, M. & Malik, O. P. Ring Toss Game-Based
Optimization Algorithm for Solving Various Optimization Problems. International Journal of
Intelligent Engineering and Systems 14, 545-554 (2021).
40 Dehghani, M., Mardaneh, M., Guerrero, J. M., Malik, O. & Kumar, V. Football game
based optimization: An application to solve energy commitment problem. International Journal
of Intelligent Engineering and Systems 13, 514-523 (2020).
41 Dehghani, M., Montazeri, Z. & Malik, O. P. DGO: Dice game optimizer. Gazi University
Journal of Science 32, 871-882 (2019).
42 Wilson, D. E. & Reeder, D. M. Mammal species of the world: a taxonomic and
geographic reference. (Johns Hopkins University Press, 2005).
43 Fay, F. H. Ecology and biology of the Pacific walrus, Odobenus rosmarus divergens
Illiger. (North American Fauna, 1982).
44 Fischbach, A. S., Kochnev, A. A., Garlich-Miller, J. L. & Jay, C. V. Pacific walrus
coastal haulout database, 1852-2016—Background report. Report No. 2331-1258, (U.S.
Geological Survey, 2016).
45 Jefferson, T. A., Stacey, P. J. & Baird, R. W. A review of killer whale interactions with
other marine mammals: predation to co‐existence. Mammal Review 21, 151-180 (1991).
46 Sheffield, G., Fay, F. H., Feder, H. & Kelly, B. P. Laboratory digestion of prey and
interpretation of walrus stomach contents. Marine Mammal Science 17, 310-330 (2001).
47 Levermann, N., Galatius, A., Ehlme, G., Rysgaard, S. & Born, E. W. Feeding behaviour
of free-ranging walruses with notes on apparent dextrality of flipper use. BMC Ecology 3, 1-13
(2003).
48 Wilcoxon, F. Individual Comparisons by Ranking Methods. Biometrics Bulletin 1, 80–83
(1945).
49 Mirjalili, S. & Lewis, A. The whale optimization algorithm. Advances in Engineering
Software 95, 51-67 (2016).
50 Gandomi, A. H. & Yang, X.-S. Benchmark problems in structural optimization. in
Computational optimization, methods and algorithms 259-281 (Springer, 2011).
51 Mezura-Montes, E. & Coello, C. A. C. Useful infeasible solutions in engineering optimization with evolutionary algorithms. In Mexican International Conference on Artificial Intelligence, 652-662 (Springer, 2005).
52 Kannan, B. & Kramer, S. N. An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. Journal of Mechanical Design 116, 405-411 (1994).
Funding information
This work was supported by the Project of Specific Research, Faculty of Science, University of
Hradec Králové, No. 2104/2022.
Competing interest
The authors declare that they have no competing interests.
Informed Consent
Informed consent was not required as no human or animals were involved.
Ethical Approval
This article does not contain any studies with human participants or animals performed by any of
the authors.
Data availability
All data generated or analyzed during this study are included directly in the text of this submitted
manuscript. There are no additional external files with datasets.
Additional information
The authors declare that no experiments on humans have been carried out in connection with this
manuscript and therefore no human data have been generated in our research. Correspondence
and requests for materials should be addressed to P.T.
Supplementary Files
This is a list of supplementary files associated with this preprint.
AppendixWaOA16102022.docx