Human Memory Optimization Algorithm

Keywords: Heuristic algorithms; Human memory optimization algorithm; CEC 2013; Engineering optimization problems

Abstract: With the progress of science and technology, optimization problems have become more complex. Meta-heuristic algorithms have the advantages of high efficiency and strong global search ability in solving optimization problems, so more and more meta-heuristic algorithms are being proposed and investigated in depth, and how to improve the universality of an algorithm is an important issue. This paper proposes an optimization algorithm that simulates human memory behaviour, fully named the human memory optimization algorithm and abbreviated as HMO. HMO simulates the way humans behave in production activities, stores human preferences for success and failure, and simulates the way humans recall their memories, gradually moving towards better directions and outcomes to find a reasonable optimal solution. The results were compared with those of other meta-heuristic algorithms on the CEC 2013 test set and show that HMO has better optimization capabilities; the feasibility of the algorithm was further verified through convergence and parameter analysis experiments. In three engineering optimization problems, HMO was able to find optimal solutions within a reasonable range of parameters, verifying its practicality.
1. Introduction

With the rapid development of technology at home and abroad, human life has become more colorful and therefore involves more optimization problems; how to use limited storage space and computational resources to solve these optimization problems appropriately is a current focus of scientific research. In everyday life, optimization problems can be found everywhere: for example, how to find the shortest route in the path planning of drones while ensuring safety (Phung & Ha, 2021; Puente-Castro et al., 2022); how to plan rationally in production scheduling to improve efficiency and save human and material resources (Wei et al., 2021; Zhao et al., 2021); and how to place nodes to maximize coverage in layout optimization (Yin, Deng & Zhang, 2022; Deepa & Venkataraman, 2021). There are also numerous optimization problems in areas such as transportation scheduling (Abosuliman & Almagrabi, 2021; Wu et al., 2021), image processing (Zhu et al., 2023; Liang, Qin & Zhou, 2022), and neural networks (Chen et al., 2022; Zhang et al., 2022). Their time complexity grows exponentially with the size of the problem, so simple methods such as exhaustive search are no longer effective.

Over time, many scholars have been exploring suitable solutions to such problems. In the early days, they proposed traditional optimization algorithms such as hill climbing, Newton's method, sequence comparison, and gradient descent. These algorithms require that the functions and parameters of the optimization problem be specifically given. Although they are effective for convex optimization and simple linear or continuous problems, most real-world optimization problems are large, non-linear, and multi-modal; traditional optimization methods have inadequate global search capabilities for such complex problems and suffer from
shortcomings such as low solution accuracy and low search efficiency (Wu et al., 2015).

Meta-heuristic algorithms originate from the study of the group behavior of natural animals such as ant colonies, bee colonies, and bird flocks (Seyyedabbasi & Kiani, 2023), and are characterized by collaborative cooperation, individual competition, self-organization, and self-adaptation. These characteristics of swarm intelligence have opened up new ways to solve optimization problems, and therefore many optimization algorithms based on swarm intelligence theory have been born. These algorithms have received a lot of attention and extensive research from researchers at home and abroad, and have gradually become a common method for solving complex optimization problems. The advantages of swarm intelligence algorithms, which effectively integrate swarm intelligence and optimization theory, lie in the following aspects: 1) they are simple and easy to implement; 2) they have good stability and robustness; and 3) they are scalable. In addition, swarm intelligence algorithms do not require in-depth analysis of the problem to be solved, do not depend on a specific solution form, and have a good ability to find the optimum, while offering strong parallelism and a certain degree of stability. The more classic ones are ant colony optimization (ACO) (Dorigo, Birattari & Stutzle, 2006), particle swarm optimization (PSO) (Kennedy & Eberhart, 1995), the artificial bee colony algorithm (ABC) (Karaboga & Basturk, 2008), and so on. However, as the difficulty of solving complex optimization problems increases, these algorithms need to be improved in terms of their optimization ability.

In recent years, to better solve these complex optimization problems, scholars have proposed a series of optimization algorithms, such as the whale optimization algorithm (WOA) (Mirjalili & Lewis, 2016), the Grey Wolf Optimizer (GWO) (Mirjalili, Mirjalili & Lewis, 2014), the sparrow search algorithm (SSA) (Xue & Shen, 2020), Harris hawks optimization (HHO) (Heidari et al., 2019), the dung beetle optimizer (DBO) (Xue & Shen, 2023), manta ray foraging optimization (MRFO) (Zhao, Zhang & Wang, 2020), and others. All of these algorithms beat the classical ones in their respective original papers and are still evolving. Numerous studies have shown that all these algorithms have their shortcomings. For example, the whale optimization algorithm suffers from falling into local optima and slow convergence, so Sanjoy Chakraborty et al. introduced Symbiotic Organisms Search (SOS) (Chakraborty et al., 2021) to enhance the ability of the whale algorithm to jump out of local optima and improve its convergence speed; Chi Ma et al. introduced the idea of the Aquila Optimizer (AO) to improve the GWO algorithm (Ma et al., 2022); and, because the sparrow search algorithm has low convergence accuracy, Jiankai Xue introduced neighborhood search and saltation learning to update the positions of sparrows (Xue, Shen & Pan, 2023). Currently, more and more algorithms are being improved and proposed to deal with different complex problems. According to the no-free-lunch theorem (Wolpert & Macready, 1997), no single algorithm can show superior optimization capability on every optimization problem, so proposing a universal algorithm with better applicability is the goal we pursue.

Taking the minimization problem as an example, this paper develops a heuristic algorithm inspired by human memory behavior. Human memory behavior is multi-modal: people are often impressed by failures as well as successes, which guide future behavior; at the same time, humans recall recent memories from time to time, but they may also forget them. A specific schematic is shown in Fig. 1: the person in (a) searches everywhere for his mobile phone, which turns out to be in his own hand, illustrating the temporary forgetting of human memory; the person in (b) has been bitten by a snake before, so he is very afraid of snakes, and that painful memory runs very deep. Inspired by human memory behavior, this paper proposes a human memory optimization algorithm, referred to as HMO, which stores the memories of successes and failures generated by human behavior and recalls them through a human-specific mindset, thus continuously moving towards better solutions. A comparison with various heuristic algorithms on CEC 2013 shows that HMO is more general and has better optimization capabilities; in addition, HMO achieves better optimization results in a variety of engineering cases, verifying its practicality and feasibility.

The paper is structured as follows: Section 2 focuses on the description and analysis of HMO; Section 3 tests HMO and the other algorithms on the CEC 2013 test set; Section 4 tests each algorithm on three engineering optimization problems; and the final section provides an analytical discussion of the experiments and concludes with directions for future work.
2. Human memory optimization algorithm

In this section, a new swarm intelligence optimization method named the HMO algorithm is discussed, covering two aspects: 1) the inspiration, and 2) the mathematical model.

2.1. Inspiration

As people go about their daily lives, they constantly perform activities and produce records of events that form memories, and it is through the constant recollection of past events that we guide our present behavior. In most cases, we try to avoid repeating the mistakes we made in the past; on the other hand, what we have achieved in the past needs to be built upon. It has been shown that humans remember unpleasant as well as pleasant events (LaBar, 2007) and that such memories are retained in the brain for a long time, while we still miss them in the recent past; Kornell et al. also show that our memory is biased (Kornell & Bjork, 2009), especially regarding recent, ordinary life and behavior, which is more likely to be missed (Mahr & Csibra, 2020). It is because of the influence of these memories that we are continually guided towards a better direction through the experiences that memory generates, promoting human development through alternating cycles of failure and success.

Based on the above ideas and inspiration, this paper proposes a human memory optimization algorithm, which gradually approaches the optimal solution of an optimization problem by constantly updating new search directions through human behavioral activities and recall. A specific schematic diagram is shown in Fig. 2. The HMO model has the following main stages:

• Humans need to produce memories or events through productive activities.
• Failures and successes are especially remembered, and people will continue to recall these events. It is worth noting that failed events are those experiences that are painful, and successful events are those that are joyful.
• Humans may be able to recall, or may forget, current memories.
• The process of remembering is itself a human activity that can also produce memories or events.
• The mood is different when recalling different events.

2.2. The proposed algorithm

2.2.1. Human activities

According to the above discussion, we know that people create memories and store them all the time during production activities. In order to simulate this behavior, the memory-production update of each person can be expressed as:

$$X_i(t+1) = \tilde{X} + \forall \times (ub - lb) \times \mathrm{rand}(1, dim), \qquad \forall = k \times \sin(1/k) \times b, \qquad b = 2 \times (1 - t/T) \tag{1}$$

where $\tilde{X}$ is the average position, T is the maximum number of iterations, and t denotes the current iteration number. $\forall$ is composed of divergence functions that simulate the state of human behavior, k is a random constant number belonging to (0.1, 1.3), dim is the dimension of the optimization problem, and lb and ub denote the lower and upper bounds of the entire problem space, respectively. The global exploration and exploitation capabilities of the algorithm are balanced by $\tilde{X}$ and $\forall$, allowing the algorithm to search flexibly in space.

In this paper, the process of human memory is divided into successful memories and failure memories to simulate different storage activities in the real world (see Algorithm 1), and the success and failure events generated each time are stored. The matrix for memory storage has finite capacity, set to N × dim, where N is the population size; once the capacity is exceeded, each newly stored memory replaces the oldest one. According to the nature of human memory, only events that are better or worse than the previous ones are stored. It is worth noting that the global optimal and worst solutions are retained here, so the same solutions may be stored more than once.

Algorithm 1 Memory Storage Strategies
Input:
  Pworst: worst position (memory) to be stored
  Best: best position (memory) to be stored
  N: number of populations
Output:
  Mworst: storage matrix for worst memories
  Mbest: storage matrix for best memories
Storage:
  W = 1; % W is the number of stored memories
  E = mod(W, N);
  if E == 0
    Mworst(N,:) = Pworst;
    Mbest(N,:) = Best;
  else
    Mworst(E,:) = Pworst;
    Mbest(E,:) = Best;
  end
  W = W + 1;
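To make the human-activity stage concrete, here is a minimal Python/NumPy sketch of the Eq. (1) update together with the ring-buffer storage of Algorithm 1. The function and class names, the boundary clipping, and the 0-based buffer indexing are our own illustrative choices; this is not the authors' reference implementation (the paper's experiments used Matlab).

```python
import numpy as np

def human_activity_update(X, t, T, lb, ub, rng):
    """Memory-production update of Eq. (1) for the whole population.
    X: (N, dim) population; t: current iteration; T: max iterations;
    lb, ub: bounds of the search space; rng: np.random.Generator."""
    N, dim = X.shape
    X_mean = X.mean(axis=0)                 # average position, X~ in Eq. (1)
    b = 2.0 * (1.0 - t / T)                 # linearly decreasing factor
    X_new = np.empty_like(X)
    for i in range(N):
        k = rng.uniform(0.1, 1.3)           # random constant k in (0.1, 1.3)
        div = k * np.sin(1.0 / k) * b       # divergence factor
        X_new[i] = X_mean + div * (ub - lb) * rng.random(dim)
    return np.clip(X_new, lb, ub)           # boundary handling (our assumption)

class MemoryStore:
    """Finite-capacity memory of Algorithm 1: an N x dim ring buffer in which
    the newest memory overwrites the oldest once the capacity is exceeded."""
    def __init__(self, N, dim):
        self.M_best = np.zeros((N, dim))    # zero-initialized (simplification)
        self.M_worst = np.zeros((N, dim))
        self.N = N
        self.W = 0                          # number of memories stored so far
    def store(self, best, worst):
        slot = self.W % self.N              # E = mod(W, N) in Algorithm 1
        self.M_best[slot] = best
        self.M_worst[slot] = worst
        self.W += 1
```

Calling store(best, worst) once per evaluation step mirrors the note above that the global optimal and worst solutions are re-stored each time and may therefore appear repeatedly in the buffers.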
Table 1
Parameter settings.
Algorithms   Parameters
DBO          K = 0.1, b = 0.3, S = 0.5
GWO          a decreases linearly from 2 to 0
HHO          –
PSO          wmax = 0.9, wmin = 0.1, c1 = 2, c2 = 2
SCA          r1 decreases linearly from 2 to 0
SSA          PD = 15, SD = 10
WOA          a decreases linearly from 2 to 0
HMO          r = 0.1

2.2.2. Recollective behavior

As we all know, humans constantly recall successes and failures in the process of making memories, and this behavior plays an important role in human production activities. It is easy to remember the hard work that came before an achievement, as in an award speech; those tough times can be seen as failures. To mimic this behavior, we use the following update to represent the process of each person recalling a failed event:

$$X_i(t+1) = x_{best} + \beta \times (r_1 \times Worst_a - r_2 \times \tilde{X}), \qquad \beta = -\frac{1}{\delta} \times \log(1 - \phi) \tag{2}$$

where $\delta$ is a constant with value 4, and $\phi$ is a random vector of size 1 × dim that follows a normal distribution. $x_{best}$ denotes the historical optimal memory, a denotes the index of a randomly selected position inside the worst matrix, and $\beta$ denotes the emotional state factor. $r_1$ and $r_2$ are both uniform random numbers on [0, 1] and represent the proportion of extracted information. The storage scheme for failed memories is updated more slowly, because the worst solution is locked in early and it becomes more difficult to find a new worst solution in the later stages. In this case, the positions in Worst differ more from the current population positions and provide better global exploration capability.

Similarly, recalling a successful event is represented as follows:

$$X_i(t+1) = x_{best} + \beta \times (r_1 \times Best_{a2} - r_2 \times \tilde{X}) \tag{3}$$

where $Best_{a2}$ represents a randomly selected a2-th memory in the optimal memory matrix. In the recall of successful events, the differences between successes rapidly become smaller, so the recall behaviour for successful events shifts quickly from global exploration to local exploitation.

It should be mentioned that people often fail to remember when recalling the current event, which is referred to in this article as transient memory loss. In this case, lost memories can only be retrieved through impressive events. Therefore, we use the following update to simulate finding lost memories:

$$X_i(t+1) = X_r(t) + \sigma \times r_1 \times (\tilde{X} - P_{worst}) \tag{4}$$

Finally, if the current memory is not lost, this can be represented as follows:

$$X_i(t+1) = x_{best} + r_2 \times (X_l - X_i(t)) \tag{5}$$

where $X_l$ represents the current optimal solution, i.e. the best solution in the current population, and $r_2$ denotes the mood state at this time, a uniform random number on [0, 1]. The significance of this equation is that humans identify with the current memory state and thus develop it in accordance with the current behavior. From Eq. (5), it can be seen that the global optimal position is considered together with the locally optimal position, and both are fixed positions within the same iteration, thus enabling some local exploitation.

The process of recall is also a human activity, so the optimal and worst solutions it produces also need to be stored, as shown in Algorithm 1.

2.3. Algorithm flow

The HMO algorithm is divided into two main parts: the simulation of human activity behavior and the simulation of memory behavior. Memory behavior is in turn divided into four parts. For simplicity, the selection among memory behaviors is controlled by the parameter r in this paper, where the selection probabilities of failure recall and success recall are the same, and the probability of recalling the current event equals the probability of the current event being lost. The flow of the proposed HMO algorithm is depicted in Algorithm 2.

Algorithm 2 The framework of the HMO algorithm
Require: The maximum iterations Tmax, the size of the population N.
Ensure: Optimal position Best and its fitness value Fbest.
Initialize each individual i ← 1, 2, …, N and define relevant parameters
t = 1;
while (t <= Tmax) do
  for i ← 1 to N do
    Update the memory production's position by using (1);
  end for
  Obtain different memory storage values by Algorithm 1.
  for j ← 1 to N do
    R = rand(1);
    if R <= r then
      Update the process of each person recalling a failed event by (2);
    else if R > r and R <= 0.5 then
      Update the process of each person recalling recent events by (4);
    else if R > 0.5 and R <= 1 - r then
      Update the process of finding lost memories by (5);
    else
      Update the process of each person recalling a successful event by (3);
    end if
  end for
  Obtain different memory storage values by Algorithm 1.
  t = t + 1;
end while
Return Best and its Fbest
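Continuing the NumPy sketch above, the four recollective behaviours and their r-based selection could look as follows. The branch-to-equation mapping reproduces Algorithm 2 as printed (the "recent events" branch uses Eq. (4) and the "lost memories" branch uses Eq. (5)); σ, the scale in Eq. (4), is not given a value in the text and is left as a parameter, and the abs() guard inside the logarithm is our assumption, since the text does not specify how negative arguments of log(1 − ϕ) are handled.

```python
import numpy as np

def recollection_update(x_i, X, x_best, x_l, X_mean, M_best, M_worst, P_worst,
                        r, sigma, rng, delta=4.0):
    """One recollective step for a single individual, Eqs. (2)-(5).
    x_best: historical best memory; x_l: best of the current population;
    M_best, M_worst: matrices filled by Algorithm 1; P_worst: current worst."""
    dim = X.shape[1]
    r1, r2 = rng.random(), rng.random()     # uniform factors on [0, 1]
    phi = rng.normal(size=dim)              # 1 x dim normally distributed vector
    # emotion factor of Eq. (2); abs() guards log() against negative arguments
    beta = -np.log(np.abs(1.0 - phi)) / delta
    R = rng.random()
    if R <= r:                              # recall a failed event, Eq. (2)
        worst_a = M_worst[rng.integers(M_worst.shape[0])]
        return x_best + beta * (r1 * worst_a - r2 * X_mean)
    elif R <= 0.5:                          # recall recent events, Eq. (4)
        x_rand = X[rng.integers(X.shape[0])]
        return x_rand + sigma * r1 * (X_mean - P_worst)
    elif R <= 1.0 - r:                      # keep the current memory, Eq. (5)
        return x_best + r2 * (x_l - x_i)
    else:                                   # recall a successful event, Eq. (3)
        best_a2 = M_best[rng.integers(M_best.shape[0])]
        return x_best + beta * (r1 * best_a2 - r2 * X_mean)
```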
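Putting the pieces together, a minimal driver in the spirit of Algorithm 2 could look like this, reusing human_activity_update, MemoryStore, and recollection_update from the sketches above. The greedy acceptance of candidate positions, the clipping, and the default value of sigma are our assumptions; the paper's pseudocode does not state how candidates are accepted.

```python
import numpy as np

def hmo(obj, lb, ub, dim, N=50, T=1000, r=0.1, sigma=0.5, seed=0):
    """Illustrative HMO driver following Algorithm 2 (a sketch, not the
    authors' implementation)."""
    rng = np.random.default_rng(seed)
    X = lb + (ub - lb) * rng.random((N, dim))          # random initialization
    fit = np.apply_along_axis(obj, 1, X)
    x_best, f_best = X[fit.argmin()].copy(), fit.min()
    store = MemoryStore(N, dim)
    for t in range(1, T + 1):
        X = human_activity_update(X, t, T, lb, ub, rng)  # Eq. (1)
        fit = np.apply_along_axis(obj, 1, X)
        if fit.min() < f_best:
            x_best, f_best = X[fit.argmin()].copy(), fit.min()
        P_worst = X[fit.argmax()].copy()
        store.store(x_best, P_worst)                     # Algorithm 1
        x_l = X[fit.argmin()].copy()                     # current-population best
        X_mean = X.mean(axis=0)
        for i in range(N):                               # recollective stage
            cand = np.clip(
                recollection_update(X[i], X, x_best, x_l, X_mean,
                                    store.M_best, store.M_worst, P_worst,
                                    r, sigma, rng),
                lb, ub)
            f_cand = obj(cand)
            if f_cand < fit[i]:                          # greedy acceptance (assumption)
                X[i], fit[i] = cand, f_cand
        if fit.min() < f_best:
            x_best, f_best = X[fit.argmin()].copy(), fit.min()
        store.store(x_best, X[fit.argmax()].copy())      # store again after recall
    return x_best, f_best
```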
Table 2
Table of the optimization results of each algorithm (D = 30).
F Index PSO SCA GWO SSA WOA HHO DBO HMO
F10 Best 5.2708E-01 9.6661E + 02 1.1649E + 00 7.9112E + 01 1.4568E + 01 3.2975E + 00 3.6146E-02 4.0746E-01
Worst 1.9577E + 00 2.2726E + 03 7.1922E + 02 3.5824E + 02 1.2641E + 02 1.0558E + 01 3.6338E + 01 1.4929E + 00
Ave 1.4093E + 00 1.6377E + 03 3.1869E + 02 1.6787E + 02 6.1110E + 01 6.0802E + 00 1.6412E + 01 1.0412E + 00
Std 3.1246E-01 3.7893E + 02 1.7984E + 02 7.3964E + 01 2.9081E + 01 1.5005E + 00 1.1446E + 01 3.0289E-01
Rank 2 8 7 6 5 3 4 1
F11 Best 2.4718E + 02 3.2164E + 02 2.4775E + 02 3.0369E + 02 2.2087E + 02 8.7766E + 01 6.9195E + 01 5.5180E + 01
Worst 4.9151E + 02 4.1991E + 02 6.1487E + 02 5.6757E + 02 6.9322E + 02 2.3688E + 02 3.1175E + 02 4.4972E + 02
Ave 3.2998E + 02 3.6735E + 02 4.2346E + 02 4.2125E + 02 4.6222E + 02 1.6628E + 02 1.6430E + 02 1.1178E + 02
Std 6.6682E + 01 2.7620E + 01 9.0948E + 01 6.6853E + 01 1.0875E + 02 3.6881E + 01 7.0914E + 01 6.9319E + 01
Rank 4 5 7 6 8 3 2 1
F12 Best 2.6699E + 02 3.0304E + 02 2.9453E + 02 3.9323E + 02 2.8539E + 02 2.6188E + 02 1.4905E + 02 3.9049E + 01
Worst 7.5915E + 02 4.3494E + 02 7.5915E + 02 1.0157E + 03 7.6297E + 02 7.6847E + 02 4.9034E + 02 4.8454E + 02
F13 Best 2.6888E + 02 3.2807E + 02 2.5483E + 02 3.8277E + 02 3.0742E + 02 3.6018E + 02 2.1225E + 02 1.0139E + 02
Worst 6.1231E + 02 4.4803E + 02 5.6670E + 02 1.0659E + 03 7.3384E + 02 7.8984E + 02 4.0253E + 02 4.1456E + 02
Ave 4.4641E + 02 3.6834E + 02 4.0086E + 02 6.7630E + 02 4.8471E + 02 5.8731E + 02 2.7586E + 02 1.9591E + 02
Std 8.1539E + 01 2.7138E + 01 7.6651E + 01 2.0089E + 02 9.5919E + 01 9.2877E + 01 4.4179E + 01 5.8430E + 01
Rank 5 3 4 8 6 7 2 1
F14 Best 2.3356E + 03 6.5205E + 03 3.2853E + 03 4.3526E + 03 2.8581E + 03 1.4848E + 03 1.9256E + 03 1.2881E + 03
Worst 5.5602E + 03 7.9047E + 03 5.5602E + 03 7.7490E + 03 7.3858E + 03 5.1779E + 03 5.0429E + 03 4.9129E + 03
Ave 3.4542E + 03 7.1024E + 03 4.4291E + 03 5.6163E + 03 4.9943E + 03 2.7030E + 03 3.7102E + 03 3.1030E + 03
Std 7.5401E + 02 3.2706E + 02 6.1867E + 02 9.1080E + 02 1.0667E + 03 7.8400E + 02 7.9386E + 02 8.2510E + 02
Rank 3 8 5 7 6 1 4 2
F15 Best 3.1509E + 03 6.5247E + 03 3.0706E + 03 3.6427E + 03 3.9344E + 03 3.0142E + 03 3.3001E + 03 2.1748E + 03
Worst 6.0820E + 03 7.9186E + 03 6.0820E + 03 8.0713E + 03 7.2056E + 03 6.2281E + 03 7.1838E + 03 7.1792E + 03
Ave 4.4791E + 03 7.3819E + 03 4.7566E + 03 6.4043E + 03 5.7826E + 03 4.8150E + 03 5.1761E + 03 3.3729E + 03
Std 7.3845E + 02 3.7109E + 02 6.7512E + 02 1.1077E + 03 8.1430E + 02 8.0178E + 02 9.4823E + 02 1.0090E + 03
Rank 2 8 3 7 6 4 5 1
F16 Best 1.0095E + 02 1.0192E + 02 1.0077E + 02 1.0120E + 02 1.0073E + 02 1.0090E + 02 1.0080E + 02 1.0077E + 02
Worst 1.0261E + 02 1.0298E + 02 1.0301E + 02 1.0451E + 02 1.0259E + 02 1.0289E + 02 1.0288E + 02 1.0218E + 02
Ave 1.0165E + 02 1.0253E + 02 1.0242E + 02 1.0248E + 02 1.0183E + 02 1.0179E + 02 1.0218E + 02 1.0131E + 02
Std 4.5873E-01 2.5516E-01 4.8989E-01 7.1426E-01 4.2653E-01 4.4360E-01 4.0651E-01 3.2825E-01
Rank 2 8 6 7 4 3 5 1
F17 Best 2.1063E + 02 5.0615E + 02 3.8618E + 02 6.5697E + 02 4.2161E + 02 5.9303E + 02 1.7223E + 02 1.9211E + 02
Worst 9.1475E + 02 6.4436E + 02 9.1475E + 02 1.0938E + 03 8.5427E + 02 9.4968E + 02 4.3620E + 02 4.4928E + 02
Ave 4.4336E + 02 5.8382E + 02 6.8359E + 02 9.3344E + 02 6.6685E + 02 8.0627E + 02 2.8634E + 02 2.7683E + 02
Std 2.0405E + 02 3.4719E + 01 1.5713E + 02 9.1068E + 01 1.1883E + 02 1.0977E + 02 7.1746E + 01 6.7189E + 01
Rank 3 4 6 8 5 7 2 1
F18 Best 3.8666E + 02 5.1326E + 02 3.4389E + 02 7.9998E + 02 4.7144E + 02 6.4823E + 02 1.8903E + 02 2.1448E + 02
Worst 7.4819E + 02 6.4666E + 02 8.5836E + 02 1.0863E + 03 9.0609E + 02 9.5315E + 02 5.1675E + 02 5.1596E + 02
Ave 4.9177E + 02 5.7416E + 02 5.9237E + 02 9.6334E + 02 7.2287E + 02 8.2456E + 02 3.4950E + 02 3.5595E + 02
Std 8.9828E + 01 3.6037E + 01 1.3776E + 02 8.6673E + 01 1.1008E + 02 8.7303E + 01 8.6196E + 01 4.9072E + 01
Rank 3 4 5 8 6 7 1 2
F19 Best 1.0502E + 02 1.0663E + 03 1.0369E + 02 1.4752E + 02 1.2349E + 02 1.2488E + 02 1.0527E + 02 1.0959E + 02
Worst 1.5487E + 02 8.9172E + 03 3.7204E + 03 6.1729E + 04 1.9780E + 02 1.6577E + 02 1.3355E + 02 1.6826E + 02
Ave 1.1543E + 02 3.1090E + 03 4.3395E + 02 2.3253E + 03 1.5651E + 02 1.4432E + 02 1.1593E + 02 1.3803E + 02
Std 1.3917E + 01 1.9631E + 03 8.5205E + 02 1.1222E + 04 1.9368E + 01 1.2337E + 01 8.2614E + 00 1.4715E + 01
Rank 1 8 6 7 5 4 2 3
F20 Best 1.1402E + 02 1.1334E + 02 1.1239E + 02 1.1450E + 02 1.1402E + 02 1.1450E + 02 1.1151E + 02 1.0921E + 02
Worst 1.1500E + 02 1.1451E + 02 1.1500E + 02 1.1500E + 02 1.1500E + 02 1.1500E + 02 1.1450E + 02 1.1500E + 02
Ave 1.1466E + 02 1.1390E + 02 1.1440E + 02 1.1493E + 02 1.1477E + 02 1.1482E + 02 1.1311E + 02 1.1239E + 02
Std 2.7250E-01 2.9429E-01 7.5200E-01 1.7032E-01 3.0133E-01 2.3936E-01 6.8475E-01 1.6296E + 00
Rank 5 3 4 8 6 7 2 1
F21 Best 2.0139E + 02 1.4931E + 03 5.4354E + 02 5.5157E + 02 2.3341E + 02 2.5687E + 02 2.0000E + 02 4.0015E + 02
Worst 5.4355E + 02 2.1825E + 03 1.7090E + 03 2.8243E + 03 5.4407E + 02 5.4453E + 02 5.4354E + 02 5.4354E + 02
Ave 4.2392E + 02 1.9292E + 03 1.0304E + 03 7.4511E + 02 4.7667E + 02 4.6812E + 02 4.1497E + 02 4.7666E + 02
Std 9.2227E + 01 1.8201E + 02 3.0762E + 02 4.4690E + 02 7.9734E + 01 7.6930E + 01 9.2495E + 01 7.2725E + 01
Rank 2 8 7 6 5 3 1 4
F22 Best 3.1828E + 03 6.6167E + 03 3.0263E + 03 4.4979E + 03 3.9023E + 03 2.0697E + 03 2.7440E + 03 2.1353E + 03
Worst 6.5066E + 03 8.3822E + 03 7.7285E + 03 8.8182E + 03 8.4802E + 03 5.8555E + 03 5.8418E + 03 5.5338E + 03
Ave 4.8008E + 03 7.5814E + 03 6.0280E + 03 6.8747E + 03 6.2163E + 03 3.5515E + 03 4.4745E + 03 3.2697E + 03
Std 9.5748E + 02 4.3140E + 02 1.0963E + 03 1.2069E + 03 1.1084E + 03 8.5577E + 02 8.2032E + 02 7.4635E + 02
Rank 4 8 5 7 6 2 3 1
F23 Best 3.4673E + 03 6.5198E + 03 4.8026E + 03 5.1089E + 03 4.7628E + 03 3.7412E + 03 3.8439E + 03 1.8978E + 03
Worst 6.5887E + 03 8.5522E + 03 7.3853E + 03 8.7862E + 03 8.0134E + 03 8.1771E + 03 8.1018E + 03 7.2015E + 03
Ave 5.3234E + 03 7.8502E + 03 6.2107E + 03 7.1746E + 03 6.6062E + 03 6.3513E + 03 5.8432E + 03 4.0199E + 03
Std 8.9427E + 02 4.3704E + 02 6.1512E + 02 9.8649E + 02 8.8058E + 02 1.0269E + 03 9.2228E + 02 1.3360E + 03
Rank 2 8 4 7 6 5 3 1
F24 Best 3.6815E + 02 4.0222E + 02 3.9379E + 02 3.9876E + 02 3.9714E + 02 3.7918E + 02 3.7712E + 02 3.4160E + 02
Worst 4.3867E + 02 4.2435E + 02 4.4500E + 02 4.3853E + 02 4.3467E + 02 4.6248E + 02 4.1091E + 02 3.9415E + 02
Ave 3.9838E + 02 4.1720E + 02 4.1525E + 02 4.2132E + 02 4.1233E + 02 4.2577E + 02 3.9497E + 02 3.5258E + 02
Std 2.0078E + 01 5.1472E + 00 1.5931E + 01 1.0265E + 01 1.0104E + 01 1.6761E + 01 7.8008E + 00 1.1510E + 01
Rank 3 6 5 7 4 8 2 1
F25 Best 3.9240E + 02 4.1370E + 02 4.1338E + 02 4.0891E + 02 4.0309E + 02 4.0815E + 02 3.8312E + 02 3.4977E + 02
Worst 5.0534E + 02 4.3370E + 02 4.5756E + 02 4.5418E + 02 4.4550E + 02 4.6090E + 02 4.2937E + 02 4.1657E + 02
Ave 4.3771E + 02 4.2631E + 02 4.3088E + 02 4.3167E + 02 4.2290E + 02 4.3547E + 02 4.0335E + 02 3.7387E + 02
Std 2.8454E + 01 4.1501E + 00 1.0952E + 01 1.0363E + 01 1.1228E + 01 1.5239E + 01 1.0039E + 01 1.3000E + 01
Rank 8 4 5 6 3 7 2 1
F26 Best 3.0006E + 02 3.0492E + 02 3.0007E + 02 3.0215E + 02 3.0058E + 02 3.0030E + 02 3.0014E + 02 3.0017E + 02
Worst 5.0830E + 02 3.2215E + 02 5.0916E + 02 5.2086E + 02 5.1204E + 02 5.1199E + 02 4.9059E + 02 4.9094E + 02
Ave 4.1233E + 02 3.1256E + 02 4.5355E + 02 4.7728E + 02 4.5710E + 02 4.5338E + 02 3.1332E + 02 4.0828E + 02
Std 8.7756E + 01 3.6188E + 00 7.8397E + 01 7.0010E + 01 7.9680E + 01 8.6033E + 01 4.4854E + 01 6.6845E + 01
Rank 4 1 6 8 7 5 2 3
F27 Best 1.0324E + 03 1.3611E + 03 1.1725E + 03 1.2733E + 03 1.2813E + 03 1.2297E + 03 1.0666E + 03 7.9999E + 02
Worst 1.4627E + 03 1.5232E + 03 1.6214E + 03 1.7119E + 03 1.6214E + 03 1.6282E + 03 1.4580E + 03 1.2265E + 03
Ave 1.2581E + 03 1.4624E + 03 1.3752E + 03 1.5122E + 03 1.4332E + 03 1.4434E + 03 1.2648E + 03 9.2500E + 02
Std 1.1662E + 02 4.1961E + 01 1.0127E + 02 9.9286E + 01 7.3473E + 01 9.8443E + 01 1.0775E + 02 9.9195E + 01
Rank 2 7 4 8 5 6 3 1
F28 Best 2.7325E + 03 2.4195E + 03 2.7175E + 03 2.2578E + 03 2.8121E + 03 4.0134E + 03 4.0000E + 02 6.4366E + 02
Worst 5.3448E + 03 3.0211E + 03 5.3448E + 03 8.0465E + 03 5.4654E + 03 5.7993E + 03 1.6821E + 03 5.1936E + 03
Ave 3.7228E + 03 2.7542E + 03 3.9927E + 03 4.7180E + 03 4.3077E + 03 4.8996E + 03 5.0656E + 02 1.3616E + 03
Std 6.1337E + 02 1.4631E + 02 6.1128E + 02 1.4344E + 03 6.6779E + 02 4.3946E + 02 3.1251E + 02 7.6935E + 02
Rank 4 3 5 7 6 8 1 2
Total rank 3.0000 6.0000 5.1786 7.1786 5.3929 4.7500 2.8571 1.6429
The time complexity of HMO depends on the number of populations N, the maximum number of iterations T, and the number of problem variables D. HMO is divided into two main behavioral approaches, human activities and recollective behavior, so its time complexity is as follows:

$$O(HMO) = O(T(O(\text{Human activities}) + O(\text{Recollective behavior}))) = O(T(ND + ND)) = O(TND) \tag{6}$$

3. Performance tests

This section focuses on testing each algorithm on CEC 2013 (Liang et al., 2013), comparing the optimization results, and analyzing the parameter r.

3.1. Comparison with the basic algorithm

In order to verify the performance of HMO, this paper tests it on the CEC 2013 test function set and compares it with other heuristics: GWO, HHO, PSO, the sine cosine algorithm (SCA) (Mirjalili, 2016), SSA, WOA, and DBO. DBO and SSA are recent algorithms with high recognition, while GWO, PSO, SCA, and WOA are more classical; the specific parameter settings are shown in Table 1. There are 28 functions in CEC 2013, and the theoretical optima range from −1400 to 1400. In order to see the differences between the algorithms more easily, the theoretical optimum is subtracted from each algorithm's final search result, so that the theoretical optimum becomes 0.

To ensure the fairness of the experiment, the computer used for the simulation has an AMD Ryzen 5 4600H CPU with Radeon Graphics @ 3.00 GHz, 16 GB (3200 MHz) RAM, a 64-bit operating system, and Matlab 2018a. The basic parameters are set as follows: population size N = 50, maximum number of evaluations MaxEval = dim × 10^5, number of independent runs 30, and dimension (dim) 30. The best (Best), worst (Worst), average (Ave), and standard deviation (Std) of the results were calculated, and finally each algorithm was ranked (Rank) by its average, with the standard deviation used as a tiebreaker when averages were equal; the results are shown in Table 2.

From Table 2, it can be seen that HMO ranks first on F1, F5, F7, F9–F13, F15–F17, F20, F22–F25, and F27, and ranks second on F2–F4, F14, F18, and F28. Among the 28 functions, only F8 and F21 rank as low as fourth, while the remaining functions rank in the top three, showing superior performance in finding the best. In the overall ranking across all functions, HMO is first with an average rank of 1.6429, followed by DBO and PSO. It can be seen that HMO performs outstandingly on all kinds of complex optimization functions and has a strong comprehensive optimization ability and better generalization.

In order to verify the performance of the HMO algorithm in higher dimensions, this paper tests it again on the 50-dimensional versions of the functions with the same parameters as above; the results are shown in Table 3. From the 50-dimensional results, HMO still ranks first on most of the functions, specifically F1–F2, F4–F5, F7, F9–F14, F16–F17, F22–F25, and F27–F28, despite the more complex computational dimensions. The overall ranking is still first, with a specific value of 1.5, which again proves the optimization ability of HMO, while the other algorithms are less effective in comprehensive search, with the DBO algorithm ranking second in 30 dimensions and third in 50 dimensions, which shows that DBO is somewhat affected by higher dimensions.
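As an illustration of this evaluation protocol, the ranking rule (shift results by the theoretical optimum, rank by average, break ties by standard deviation) can be reproduced in a few lines of NumPy; a sketch assuming each algorithm's 30 final fitness values for one function are collected in a dictionary named results (our own naming):

```python
import numpy as np

def rank_algorithms(results, f_star):
    """results: dict name -> array of 30 final fitness values on one function;
    f_star: theoretical optimum of that function (shifted to 0 before ranking)."""
    names = list(results)
    errors = {n: np.asarray(results[n]) - f_star for n in names}
    ave = np.array([errors[n].mean() for n in names])
    std = np.array([errors[n].std() for n in names])
    order = np.lexsort((std, ave))        # sort by average, tie-break by std
    ranks = np.empty(len(names), dtype=int)
    ranks[order] = np.arange(1, len(names) + 1)
    return dict(zip(names, ranks))
```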
Table 3
Table of the optimization results of each algorithm (D = 50).
F Index PSO SCA GWO SSA WOA HHO DBO HMO
F10 Best 3.7907E + 00 2.7366E + 03 3.3199E + 02 2.9491E + 02 9.2745E + 01 1.4084E + 01 5.3847E + 00 2.5854E + 00
Worst 1.1261E + 01 5.6648E + 03 1.4870E + 03 4.7364E + 03 2.8363E + 02 3.8092E + 01 3.1075E + 03 8.4269E + 00
Ave 6.3798E + 00 3.5556E + 03 7.6162E + 02 7.3765E + 02 1.8846E + 02 2.5752E + 01 2.5372E + 02 4.4513E + 00
Std 1.6770E + 00 5.9036E + 02 2.9505E + 02 7.7081E + 02 5.2368E + 01 6.2004E + 00 6.3509E + 02 1.5260E + 00
Rank 2 8 7 6 4 3 5 1
F11 Best 3.4249E + 02 5.9140E + 02 4.9269E + 02 5.2934E + 02 6.1356E + 02 3.3239E + 02 1.9093E + 02 1.6533E + 02
Worst 6.7530E + 02 7.9687E + 02 9.0449E + 02 1.0719E + 03 9.5505E + 02 5.8626E + 02 5.8888E + 02 3.5838E + 02
Ave 4.7284E + 02 6.9985E + 02 7.2967E + 02 8.1300E + 02 7.5736E + 02 4.4445E + 02 3.1043E + 02 2.3406E + 02
Std 7.3380E + 01 4.7972E + 01 1.0325E + 02 1.3375E + 02 8.3445E + 01 5.5532E + 01 1.0815E + 02 3.9860E + 01
Rank 4 5 6 8 7 3 2 1
F12 Best 4.7604E + 02 6.8631E + 02 5.9413E + 02 8.4347E + 02 7.0368E + 02 7.2645E + 02 3.0957E + 02 1.5937E + 02
Worst 8.3233E + 02 8.5834E + 02 1.1086E + 03 1.3166E + 03 1.1576E + 03 1.1786E + 03 9.7076E + 02 5.2029E + 02
Ave 6.2741E + 02 7.5209E + 02 8.6363E + 02 1.1895E + 03 9.6006E + 02 1.0082E + 03 5.6105E + 02 2.9544E + 02
Std 8.9477E + 01 4.6444E + 01 1.4594E + 02 9.3562E + 01 9.9109E + 01 1.0109E + 02 1.5435E + 02 9.5851E + 01
Rank 3 4 5 8 6 7 2 1
F13 Best 6.0834E + 02 6.1551E + 02 6.7394E + 02 8.6820E + 02 7.1744E + 02 8.0532E + 02 3.9514E + 02 2.7049E + 02
Worst 9.2844E + 02 8.4271E + 02 1.1823E + 03 1.4169E + 03 1.2222E + 03 1.3077E + 03 7.4416E + 02 5.6731E + 02
Ave 7.5529E + 02 7.4703E + 02 9.5167E + 02 1.2346E + 03 9.4216E + 02 1.0798E + 03 5.9129E + 02 3.8677E + 02
Std 8.7332E + 01 5.0925E + 01 1.3254E + 02 1.6813E + 02 1.3689E + 02 1.2918E + 02 8.6239E + 01 7.2848E + 01
Rank 4 3 6 8 5 7 2 1
F14 Best 4.2698E + 03 1.2739E + 04 5.0216E + 03 9.2229E + 03 5.8946E + 03 2.5155E + 03 4.0302E + 03 4.0241E + 03
Worst 7.1061E + 03 1.4314E + 04 1.0006E + 04 1.3810E + 04 1.1836E + 04 9.4079E + 03 8.2654E + 03 1.1108E + 04
Ave 5.8198E + 03 1.3515E + 04 7.2696E + 03 1.0700E + 04 9.3762E + 03 6.3205E + 03 7.0309E + 03 5.6563E + 03
Std 7.2811E + 02 4.3787E + 02 1.0964E + 03 1.0511E + 03 1.5683E + 03 1.5158E + 03 9.0606E + 02 1.3321E + 03
Rank 2 8 5 7 6 3 4 1
F15 Best 5.4710E + 03 1.3302E + 04 7.1997E + 03 9.9646E + 03 8.3120E + 03 8.2387E + 03 7.0186E + 03 5.4061E + 03
Worst 1.0283E + 04 1.5046E + 04 1.1618E + 04 1.4853E + 04 1.4055E + 04 1.2221E + 04 1.4181E + 04 1.4978E + 04
Ave 8.1941E + 03 1.4481E + 04 9.3128E + 03 1.2502E + 04 1.1077E + 04 1.0330E + 04 1.0541E + 04 9.0365E + 03
Std 1.2223E + 03 4.5068E + 02 1.1890E + 03 1.3008E + 03 1.5109E + 03 9.1141E + 02 1.9817E + 03 3.5023E + 03
Rank 1 8 3 7 6 4 5 2
F16 Best 1.0237E + 02 1.0270E + 02 1.0277E + 02 1.0160E + 02 1.0126E + 02 1.0129E + 02 1.0247E + 02 1.0109E + 02
Worst 1.0389E + 02 1.0390E + 02 1.0413E + 02 1.0466E + 02 1.0357E + 02 1.0354E + 02 1.0375E + 02 1.0300E + 02
Ave 1.0316E + 02 1.0349E + 02 1.0351E + 02 1.0335E + 02 1.0260E + 02 1.0249E + 02 1.0323E + 02 1.0187E + 02
Std 3.9733E-01 3.0978E-01 3.2941E-01 6.8893E-01 5.8274E-01 5.5473E-01 3.5591E-01 4.8094E-01
Rank 4 7 8 6 3 2 5 1
F17 Best 4.6763E + 02 9.2800E + 02 8.2434E + 02 1.2865E + 03 1.0194E + 03 1.1543E + 03 3.6855E + 02 3.2388E + 02
Worst 8.4275E + 02 1.2221E + 03 1.3799E + 03 1.5861E + 03 1.4750E + 03 1.4117E + 03 7.5074E + 02 6.1636E + 02
Ave 6.3365E + 02 1.0617E + 03 1.1879E + 03 1.4120E + 03 1.2397E + 03 1.2796E + 03 5.3029E + 02 4.4378E + 02
Std 9.1396E + 01 8.4427E + 01 1.2689E + 02 8.2573E + 01 1.1327E + 02 6.8617E + 01 9.8636E + 01 7.6732E + 01
Rank 3 4 5 8 6 7 2 1
F18 Best 6.1505E + 02 9.2314E + 02 7.5324E + 02 1.2549E + 03 9.8939E + 02 1.1322E + 03 3.7411E + 02 4.5938E + 02
Worst 1.0766E + 03 1.2190E + 03 1.3436E + 03 1.5756E + 03 1.4380E + 03 1.4043E + 03 8.4237E + 02 7.2526E + 02
Ave 8.5941E + 02 1.0615E + 03 1.1407E + 03 1.4926E + 03 1.2249E + 03 1.3018E + 03 6.2034E + 02 6.2473E + 02
Std 1.2245E + 02 7.3562E + 01 1.5425E + 02 5.7148E + 01 1.1920E + 02 7.9769E + 01 1.2484E + 02 6.0462E + 01
Rank 3 4 5 8 6 7 1 2
F19 Best 1.1672E + 02 1.4118E + 04 1.4794E + 02 5.5121E + 02 1.7741E + 02 1.5247E + 02 1.1295E + 02 1.4641E + 02
Worst 1.3337E + 02 7.4004E + 04 6.3691E + 03 5.7295E + 03 3.1680E + 02 2.4327E + 02 1.6567E + 02 2.6996E + 02
Ave 1.2564E + 02 2.8599E + 04 9.4881E + 02 2.1153E + 03 2.4277E + 02 1.8660E + 02 1.3501E + 02 1.8908E + 02
Std 4.2644E + 00 1.5752E + 04 1.1548E + 03 1.2943E + 03 3.5799E + 01 2.2706E + 01 1.2888E + 01 2.8592E + 01
Rank 1 8 6 7 5 3 2 4
F20 Best 1.2243E + 02 1.2310E + 02 1.2307E + 02 1.2450E + 02 1.2332E + 02 1.2412E + 02 1.2110E + 02 1.1836E + 02
Worst 1.2454E + 02 1.2474E + 02 1.2451E + 02 1.2500E + 02 1.2500E + 02 1.2500E + 02 1.2450E + 02 1.2260E + 02
Ave 1.2409E + 02 1.2404E + 02 1.2441E + 02 1.2459E + 02 1.2465E + 02 1.2451E + 02 1.2319E + 02 1.2111E + 02
Std 5.3669E-01 4.1163E-01 2.9375E-01 1.8564E-01 3.4346E-01 1.1703E-01 9.6389E-01 9.2959E-01
Rank 4 3 5 7 8 6 2 1
F21 Best 9.3662E + 02 3.5947E + 03 1.5012E + 03 1.4994E + 03 3.5210E + 02 9.3930E + 02 3.0543E + 02 9.3644E + 02
Worst 1.2248E + 03 4.4704E + 03 3.5750E + 03 3.8919E + 03 1.2595E + 03 1.2375E + 03 1.2247E + 03 1.2224E + 03
Ave 1.0517E + 03 3.9607E + 03 2.6422E + 03 2.8405E + 03 9.4525E + 02 1.1064E + 03 1.0154E + 03 1.0412E + 03
Std 1.4304E + 02 1.5718E + 02 5.0462E + 02 5.1324E + 02 3.1432E + 02 1.4790E + 02 3.0935E + 02 1.4009E + 02
Rank 4 8 6 7 1 5 2 3
F22 Best 5.8818E + 03 1.3187E + 04 8.3530E + 03 8.6988E + 03 8.4833E + 03 6.3339E + 03 5.9235E + 03 4.4461E + 03
Worst 1.3014E + 04 1.5243E + 04 1.3067E + 04 1.5223E + 04 1.4972E + 04 1.1790E + 04 1.1022E + 04 8.9834E + 03
Ave 9.4434E + 03 1.4310E + 04 1.0524E + 04 1.3078E + 04 1.2069E + 04 8.6784E + 03 8.4541E + 03 6.8833E + 03
Std 1.7547E + 03 4.6426E + 02 1.3330E + 03 1.3665E + 03 1.5962E + 03 1.3008E + 03 1.3326E + 03 1.0203E + 03
Rank 4 8 5 7 6 3 2 1
F23 Best 9.1125E + 03 1.4316E + 04 8.8634E + 03 1.1567E + 04 1.0063E + 04 1.1225E + 04 8.6726E + 03 6.1295E + 03
Worst 1.3606E + 04 1.5807E + 04 1.4525E + 04 1.7279E + 04 1.5925E + 04 1.4846E + 04 1.5200E + 04 1.4754E + 04
Ave 1.0925E + 04 1.5150E + 04 1.1834E + 04 1.4420E + 04 1.3205E + 04 1.2793E + 04 1.0793E + 04 8.1477E + 03
Std 1.0962E + 03 4.0953E + 02 1.3341E + 03 1.4979E + 03 1.3340E + 03 1.0435E + 03 1.4156E + 03 1.6946E + 03
Rank 3 8 4 7 6 5 2 1
F24 Best 4.2757E + 02 5.1331E + 02 4.8638E + 02 4.8836E + 02 4.7513E + 02 4.8136E + 02 4.4421E + 02 3.8447E + 02
Worst 4.9939E + 02 5.3745E + 02 5.5151E + 02 5.3811E + 02 5.4823E + 02 5.6913E + 02 5.1373E + 02 4.2509E + 02
Ave 4.6809E + 02 5.2456E + 02 5.1853E + 02 5.1083E + 02 5.0753E + 02 5.2714E + 02 4.7602E + 02 4.0449E + 02
Std 1.8950E + 01 5.8386E + 00 1.7638E + 01 1.1545E + 01 1.5649E + 01 2.4270E + 01 1.6968E + 01 1.0255E + 01
Rank 2 7 6 5 4 8 3 1
F25 Best 4.8036E + 02 5.3607E + 02 5.0861E + 02 5.0164E + 02 4.8598E + 02 5.1525E + 02 4.6808E + 02 4.1012E + 02
Worst 7.0054E + 02 5.6057E + 02 5.7780E + 02 5.9191E + 02 5.6339E + 02 6.3628E + 02 5.1122E + 02 4.8075E + 02
Ave 5.5509E + 02 5.4708E + 02 5.4819E + 02 5.5025E + 02 5.2218E + 02 5.5276E + 02 4.9486E + 02 4.4672E + 02
Std 6.2330E + 01 6.1742E + 00 1.6947E + 01 1.8460E + 01 1.8141E + 01 3.1941E + 01 1.1308E + 01 1.6720E + 01
Rank 8 4 5 6 3 7 2 1
F26 Best 5.2586E + 02 3.2266E + 02 3.0057E + 02 3.0877E + 02 3.0253E + 02 5.5790E + 02 3.0071E + 02 3.0148E + 02
Worst 5.7364E + 02 6.0063E + 02 5.9098E + 02 6.1197E + 02 6.0042E + 02 6.1912E + 02 5.7255E + 02 5.2410E + 02
Ave 5.4800E + 02 4.5981E + 02 5.5822E + 02 5.8386E + 02 5.5536E + 02 5.8117E + 02 4.3269E + 02 4.8018E + 02
Std 1.1152E + 01 1.2552E + 02 7.0638E + 01 5.2757E + 01 8.5825E + 01 1.2641E + 01 1.3041E + 02 6.1003E + 01
Rank 4 2 6 8 5 7 1 3
F27 Best 1.7630E + 03 2.2238E + 03 2.0571E + 03 2.1299E + 03 2.0701E + 03 2.1195E + 03 1.8497E + 03 1.2528E + 03
Worst 2.3995E + 03 2.5144E + 03 2.6628E + 03 2.6792E + 03 2.5567E + 03 2.6753E + 03 2.2520E + 03 1.7388E + 03
Ave 2.0858E + 03 2.4236E + 03 2.3073E + 03 2.4543E + 03 2.3135E + 03 2.4349E + 03 2.0455E + 03 1.4652E + 03
Std 1.6281E + 02 5.5826E + 01 1.7099E + 02 1.3655E + 02 1.2604E + 02 1.4508E + 02 1.0461E + 02 1.3110E + 02
Rank 3 6 4 8 5 7 2 1
F28 Best 5.0989E + 02 3.8214E + 03 3.6432E + 03 2.6281E + 03 4.5447E + 03 6.9630E + 03 5.0013E + 02 8.7778E + 02
Worst 8.3681E + 03 5.9815E + 03 1.0403E + 04 1.1902E + 04 9.9683E + 03 1.0090E + 04 4.3897E + 03 4.1820E + 03
Ave 6.1863E + 03 4.9243E + 03 7.5503E + 03 8.1261E + 03 7.4986E + 03 8.8023E + 03 2.4281E + 03 2.0483E + 03
Std 1.3527E + 03 5.7600E + 02 1.4086E + 03 2.4515E + 03 1.4386E + 03 7.5172E + 02 1.7957E + 03 1.2091E + 03
Rank 4 3 6 7 5 8 2 1
Total rank 2.8214 6.1071 5.3929 7.1786 5.1429 4.8214 3.0357 1.5000
In general, HMO shows strong advantages in both 30 and 50 dimensions, which further validates its competitiveness and value.

To further examine how the convergence speed of HMO varies, the average convergence plots of each algorithm on the 30-dimensional functions are shown in Fig. 3. As can be seen from the convergence plots, HMO converges relatively quickly in the early stages, reaching the level of the other compared algorithms after a very small number of iterations, and shows excellent local search ability in the middle and late stages, constantly jumping out of local optima and converging further towards the optimal value. Fig. 3 also shows that HMO converges to a better level than the other algorithms on most of the functions; that is, the search results show a strong ability to balance exploration and exploitation, especially in the later stages, when it keeps escaping the attraction of local optimum traps and converges to better solutions. Meanwhile, in order to further verify the effectiveness and generalizability of HMO and to visualize the optimization performance of the algorithms, Fig. 4 shows the comprehensive ranking radar plot of each algorithm on each function in 30 dimensions, where a smaller enclosed area indicates a higher ranking. As can be seen in Fig. 4, the HMO results (coloured red) lie tightly around the centre and are the closest to it, i.e. the most highly ranked, on most of the functions, and their enclosed area is much smaller than that of the other algorithms, demonstrating superior optimization ability.

To test the difference and superiority of HMO relative to the other algorithms, all algorithms were subjected to the Wilcoxon rank-sum test against HMO based on the results of 30 runs; the results are shown in Table 4. According to the Wilcoxon rank-sum test rule, a result below 0.05 indicates a significant difference between the two compared algorithms. To further highlight HMO, the symbols "+", "=", and "−" indicate that HMO is better than, equal to, or inferior to a comparison algorithm, respectively. From the results, HMO is significantly different from the other algorithms, and combined with the rankings presented in the tables above, HMO is clearly superior to the other algorithms.
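A sketch of this test using SciPy's two-sample Wilcoxon rank-sum implementation (scipy.stats.ranksums); the "+"/"="/"−" marking rule is inferred from the description above, and deciding the direction by the mean of the run values is our assumption:

```python
from scipy.stats import ranksums

def compare_with_hmo(hmo_runs, other_runs, alpha=0.05):
    """Wilcoxon rank-sum test between 30 HMO runs and 30 runs of a rival.
    Returns the p-value and a '+' / '=' / '-' mark from HMO's point of view
    (minimization: a lower mean error means HMO is better)."""
    stat, p = ranksums(hmo_runs, other_runs)
    if p >= alpha:
        return p, "="                       # no significant difference
    return p, "+" if sum(hmo_runs) < sum(other_runs) else "-"
```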
Fig. 3. Average convergence curves of each algorithm on the 30-dimensional CEC 2013 functions.
Fig. 4. Comprehensive ranking radar plot of each algorithm on functions F1–F28 (D = 30); compared series: PSO, SCA, GWO, SSA, WOA, HHO, DBO, HMO.
3.2. Comparison with variants of other algorithms

To further validate the competitiveness of HMO, it is compared with variants of other algorithms proposed in recent years, including QMESSA (Wu et al., 2023), FACL (Peng et al., 2021), MPSO (Liu, Zhang & Tu, 2020; Meng et al., 2022), CSSA (Zhang & Ding, 2021), EAPSO (Zhang, 2023), and VPPSO (Shami et al., 2023), each of which has been validated on the CEC test sets and has strong global optimization capability. The internal parameters of each algorithm are shown in Table 5. The optimization results of each algorithm in 30 and 50 dimensions are shown in Tables 6–7 to measure the differences between the algorithms, and the average metric of each algorithm is used to perform the Friedman test (Demšar, 2006) to obtain the overall ranking.
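For reference, the Friedman test over the per-function averages can be run with SciPy; a sketch assuming avg_table is a functions × algorithms array of average errors (the array and function names are ours):

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

def friedman_ranking(avg_table, names):
    """avg_table: (n_functions, n_algorithms) average errors.
    Returns the Friedman p-value and each algorithm's mean rank."""
    stat, p = friedmanchisquare(*avg_table.T)     # one sample per algorithm
    ranks = rankdata(avg_table, axis=1)           # rank algorithms per function
    return p, dict(zip(names, ranks.mean(axis=0)))
```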
Table 4
Table of test results for each algorithm.
F  PSO  SCA  GWO  SSA  WOA  HHO  DBO

Table 5
Internal parameter settings for each variant of the algorithm.
Algorithms   Parameters
QMESSA       ST = 0.8, PD = 0.2, SD = 0.2, ED = 0.2
FACL         α = 0.5, βmin = 0.2, β = 1, γ = 1
MPSO         iw ∈ [0.4, 0.9], cw = 4 · r · (1 − r)
CSSA         ST = 0.8, PD = 0.2, SD = 0.2
VPPSO        α = 0.3, N1 = 15, N2 = 15

From Tables 6 and 7, it is intuitively clear that HMO has a certain optimization advantage over the variant algorithms of recent years, with an average ranking of 3.9286 in 30 dimensions and 3.9464 in 50 dimensions, and an overall ranking of fourth in both dimensions. In 30 dimensions, HMO's optimization ability is best on F4, F9, F11–F13, and F23, and worse on F16 and F24–F26; in 50 dimensions, it is best on F4, F7, and F9, and worse on F16, F20, F24, and F26. On the other functions, HMO beats some algorithms and lags behind others. MPSO and VPPSO have a better ability to find the optimum, and HMO beats them relatively few times, while against FACL, CSSA, and EAPSO, HMO wins more than half of the comparisons; against EAPSO in 30 dimensions in particular, HMO wins 21 times. On the whole, HMO is competitive in the comparison with the six variant algorithms: although it trails individual algorithms, it is also stronger than others, which verifies that HMO has development and research value.

For the analysis of the parameter r, the optimal values were bolded, and the specific results are shown in Table 8. It can be seen from Table 8 that HMO achieves the most optimal values with r = 0.1, especially on functions F1, F4, F5, F7, F8, F9, F11–F14, F17–F18, F22–F23, and F27, where the advantage is more significant. This shows that HMO is better optimized on CEC 2013 when r = 0.1 and further validates its feasibility. Some functions are also better when other values of r are used; for example, r = 0.2 gives better optimization results on F2 and F21. This illustrates that r is a very important parameter whose value needs to be set reasonably according to the optimization problem at hand.

4. Engineering optimization examples

This section focuses on the application of each algorithm to three engineering optimization problems to verify the practicality of HMO. Assume that the number of parameters to be optimized is d; the maximum number of evaluations is then d × 10000. The other parameters are consistent with those described above. Each algorithm was run independently 10 times, and the statistical indicators and optimal parameters over the 10 runs are shown in each table. It is worth noting that the precision in this section is retained to six decimal places.
Table 6
Optimization results of HMO and variant algorithms (D = 30).
F  Index  QMESSA  FACL  MPSO  CSSA  EAPSO  VPPSO  HMO
As can be seen from the table, the optimal value f(x) = 263.852346 obtained by the HMO optimization, corresponding to the optimal solution x = (0.788415, 0.408114), ranks first. It is worth noting that the mean value of HMO's solutions is 263.852347 with a variance of 0, which proves that HMO has good stability in solving the three-bar truss design problem compared to the other seven algorithms. The convergence curves in Fig. 5 also show that HMO has a faster convergence rate than the other seven algorithms.

4.2. Compression spring design

The objective of this problem is to minimize the weight in the tension/compression spring design problem. Three design variables are considered: (1) the wire diameter d (x1); (2) the mean coil diameter D (x2); and (3) the number of active coils P (x3) (Nadimi-Shahraki et al., 2020). The specific optimization results are shown in Table 10, and a graph of the average convergence results for each algorithm is shown in Fig. 6.

4.3. Welded beam design

The objective of this problem is to minimize the manufacturing cost of welded beams. The problem includes the following design variables: (1) the height of the reinforcement t (x1); (2) the thickness of the weld h (x2); (3) the thickness of the reinforcement b (x3); and (4) the length of the reinforcement l (x4) (Coello, 2000). A table with the optimization results is shown in Table 11, and a graph of the average convergence results for each algorithm is shown in Fig. 7.
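The paper does not reproduce the formulas of these benchmark problems, but as an illustration of how such a constrained problem can be fed to a population-based minimizer, the tension/compression spring design is commonly stated in the literature (see, e.g., Coello, 2000) with the objective and four constraints below; the static-penalty constraint handling is our own illustrative choice, not necessarily what the compared papers used:

```python
import numpy as np

def spring_cost(x, penalty=1e6):
    """Standard tension/compression spring design (literature formulation):
    x = (d, D, P) = wire diameter, mean coil diameter, number of active coils.
    Constraints g_i <= 0 are handled with a static quadratic penalty."""
    d, D, P = x
    f = (P + 2.0) * D * d**2                          # spring weight
    g = [
        1.0 - (D**3 * P) / (71785.0 * d**4),          # minimum deflection
        (4*D**2 - d*D) / (12566.0 * (D*d**3 - d**4))
            + 1.0 / (5108.0 * d**2) - 1.0,            # shear stress
        1.0 - 140.45 * d / (D**2 * P),                # surge frequency
        (D + d) / 1.5 - 1.0,                          # outer diameter limit
    ]
    return f + penalty * sum(max(0.0, gi)**2 for gi in g)
```

With the typical literature bounds d ∈ [0.05, 2], D ∈ [0.25, 1.3], and P ∈ [2, 15], this penalized objective can be passed directly to a bound-constrained minimizer such as the hmo driver sketched earlier.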
4.4. Comprehensive analysis

Among the three engineering optimization problems mentioned above, it can be seen that HMO optimizes better, while the optimized parameters remain within reasonable constraints. In particular, the optimal solutions found by HMO for the three-bar truss design and compression spring design problems have high accuracy and are also very stable. In the average convergence plots of the three optimization

Table 7
Optimization results for HMO and variant algorithms (D = 50).
F  Index  QMESSA  FACL  MPSO  CSSA  EAPSO  VPPSO  HMO

Table 8
Optimization results for different parameters.
F  Index  r = 0.1  r = 0.2  r = 0.3  r = 0.4

Table 9
Three-bar truss design problem.
Algorithms  f(x)  x1  x2

Table 10
Optimization results for Compression Spring Design.
Algorithms  f(x)  x1  x2  x3

Table 11
Welded Beam Design.
Algorithms  f(x)  x1  x2  x3  x4

5. Conclusion
CRediT authorship contribution statement

Donglin Zhu: Conceptualization, Methodology, Software, Data curation, Writing – original draft. Siwei Wang: Visualization, Investigation. Changjun Zhou: Conceptualization, Supervision, Funding acquisition. Shaoqiang Yan: Software, Methodology. Jiankai Xue: Software, Methodology, Writing – original draft.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

Data will be made available on request.

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grant numbers 62272418 and 62002046, and the Basic Public Welfare Research Program of Zhejiang Province (No. LGG18E050011).

References

Abosuliman, S. S., & Almagrabi, A. O. (2021). Routing and scheduling of intelligent autonomous vehicles in industrial logistics systems. Soft Computing, 25, 11975–11988. https://doi.org/10.1007/s00500-021-05633-4
Chakraborty, S., Saha, A. K., Sharma, S., Mirjalili, S., & Chakraborty, R. (2021). A novel enhanced whale optimization algorithm for global optimization. Computers & Industrial Engineering, 153, Article 107086. https://doi.org/10.1016/j.cie.2020.107086
Chen, G., Zhu, D., Wang, X., Zhou, C., & Chen, X. (2022). Prediction of concrete compressive strength based on the BP neural network optimized by random forest and ISSA. Journal of Function Spaces, 2022. https://doi.org/10.1155/2022/8799429
Coello, C. A. C. (2000). Use of a self-adaptive penalty approach for engineering optimization problems. Computers in Industry, 41(2), 113–127. https://doi.org/10.1016/S0166-3615(99)00046-9
Deepa, R., & Venkataraman, R. (2021). Enhancing Whale Optimization Algorithm with Levy Flight for coverage optimization in wireless sensor networks. Computers & Electrical Engineering, 94, Article 107359. https://doi.org/10.1016/j.compeleceng.2021.107359
Demšar, J. (2006). Statistical comparisons of classifiers over multiple data sets. The Journal of Machine Learning Research, 7.
Dorigo, M., Birattari, M., & Stutzle, T. (2006). Ant colony optimization. IEEE Computational Intelligence Magazine, 1(4), 28–39. https://doi.org/10.1109/MCI.2006.329691
Heidari, A. A., Mirjalili, S., Faris, H., Aljarah, I., Mafarja, M., & Chen, H. (2019). Harris hawks optimization: Algorithm and applications. Future Generation Computer Systems, 97, 849–872. https://doi.org/10.1016/j.future.2019.02.028
Karaboga, D., & Basturk, B. (2008). On the performance of artificial bee colony (ABC) algorithm. Applied Soft Computing, 8(1), 687–697. https://doi.org/10.1016/j.asoc.2007.05.007
Kennedy, J., & Eberhart, R. (1995, November). Particle swarm optimization. In Proceedings of ICNN'95 – International Conference on Neural Networks (Vol. 4, pp. 1942–1948). IEEE. https://doi.org/10.1109/ICNN.1995.488968
Kornell, N., & Bjork, R. A. (2009). A stability bias in human memory: Overestimating remembering and underestimating learning. Journal of Experimental Psychology: General, 138(4), 449. https://doi.org/10.1037/a0017350
LaBar, K. S. (2007). Beyond fear: Emotional memory mechanisms in the human brain. Current Directions in Psychological Science, 16(4), 173–177. https://doi.org/10.1111/j.1467-8721.2007.00498.x
Liang, J. J., Qu, B. Y., Suganthan, P. N., & Hernández-Díaz, A. G. (2013). Problem definitions and evaluation criteria for the CEC 2013 special session on real-parameter optimization. Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China and Nanyang Technological University, Singapore, Technical Report, 201212(34), 281–295.
Liang, Z., Qin, Q., & Zhou, C. (2022). An image encryption algorithm based on Fibonacci Q-matrix and genetic algorithm. Neural Computing and Applications, 34(21), 19313–19341. https://doi.org/10.1007/s00521-022-07493-x
Liu, H., Zhang, X. W., & Tu, L. P. (2020). A modified particle swarm optimization using adaptive strategy. Expert Systems with Applications, 152, Article 113353. https://doi.org/10.1016/j.eswa.2020.113353
Ma, C., Huang, H., Fan, Q., Wei, J., Du, Y., & Gao, W. (2022). Grey wolf optimizer based on Aquila exploration method. Expert Systems with Applications, 205, Article 117629. https://doi.org/10.1016/j.eswa.2022.117629
Mahr, J. B., & Csibra, G. (2020). Witnessing, remembering, and testifying: Why the past is special for human beings. Perspectives on Psychological Science, 15(2), 428–443. https://doi.org/10.1177/1745691619879167
Meng, Z., Zhong, Y., Mao, G., & Liang, Y. (2022). PSO-sono: A novel PSO variant for single-objective numerical optimization. Information Sciences, 586, 176–191. https://doi.org/10.1016/j.ins.2021.11.076
Mirjalili, S. (2016). SCA: A sine cosine algorithm for solving optimization problems. Knowledge-Based Systems, 96, 120–133. https://doi.org/10.1016/j.knosys.2015.12.022
Mirjalili, S., & Lewis, A. (2016). The whale optimization algorithm. Advances in Engineering Software, 95, 51–67. https://doi.org/10.1016/j.advengsoft.2016.01.008
Mirjalili, S., Mirjalili, S. M., & Lewis, A. (2014). Grey wolf optimizer. Advances in Engineering Software, 69, 46–61. https://doi.org/10.1016/j.advengsoft.2013.12.007
Nadimi-Shahraki, M. H., Taghian, S., Mirjalili, S., & Faris, H. (2020). MTDE: An effective multi-trial vector-based differential evolution algorithm and its applications for engineering design problems. Applied Soft Computing, 97, Article 106761. https://doi.org/10.1016/j.asoc.2020.106761
Pathak, V. K., & Srivastava, A. K. (2022). A novel upgraded bat algorithm based on cuckoo search and Sugeno inertia weight for large scale and constrained engineering design optimization problems. Engineering with Computers, 38(2), 1731–1758. https://doi.org/10.1007/s00366-020-01127-3
Peng, H., Zhu, W., Deng, C., & Wu, Z. (2021). Enhancing firefly algorithm with courtship learning. Information Sciences, 543, 18–42. https://doi.org/10.1016/j.ins.2020.05.111
Phung, M. D., & Ha, Q. P. (2021). Safety-enhanced UAV path planning with spherical vector-based particle swarm optimization. Applied Soft Computing, 107, Article 107376. https://doi.org/10.1016/j.asoc.2021.107376
Puente-Castro, A., Rivero, D., Pazos, A., & Fernandez-Blanco, E. (2022). A review of artificial intelligence applied to path planning in UAV swarms. Neural Computing and Applications, 1–18. https://doi.org/10.1007/s00521-021-06569-4
Seyyedabbasi, A., & Kiani, F. (2023). Sand Cat swarm optimization: A nature-inspired algorithm to solve global optimization problems. Engineering with Computers, 39(4), 2627–2651. https://doi.org/10.1007/s00366-022-01604-x
Shami, T. M., Mirjalili, S., Al-Eryani, Y., Daoudi, K., Izadi, S., & Abualigah, L. (2023). Velocity pausing particle swarm optimization: A novel variant for global optimization. Neural Computing and Applications, 35(12), 9193–9223. https://doi.org/10.1007/s00521-022-08179-0
Wei, Y., Zhou, Y., Luo, Q., & Deng, W. (2021). Optimal reactive power dispatch using an improved slime mould algorithm. Energy Reports, 7, 8742–8759. https://doi.org/10.1016/j.egyr.2021.11.138
Wolpert, D. H., & Macready, W. G. (1997). No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1), 67–82. https://doi.org/10.1109/4235.585893
Wu, C., Sui, Q., Lin, X., Wang, Z., & Li, Z. (2021). Scheduling of energy management based on battery logistics in pelagic islanded microgrid clusters. International Journal of Electrical Power & Energy Systems, 127, Article 106573. https://doi.org/10.1016/j.ijepes.2020.106573
Wu, G., Pedrycz, W., Suganthan, P. N., & Mallipeddi, R. (2015). A variable reduction strategy for evolutionary algorithms handling equality constraints. Applied Soft Computing, 37, 774–786. https://doi.org/10.1016/j.asoc.2015.09.007
Wu, R., Huang, H., Wei, J., Ma, C., Zhu, Y., Chen, Y., & Fan, Q. (2023). An improved sparrow search algorithm based on quantum computations and multi-strategy enhancement. Expert Systems with Applications, 215, Article 119421. https://doi.org/10.1016/j.eswa.2022.119421
Xue, J., & Shen, B. (2020). A novel swarm intelligence optimization approach: Sparrow search algorithm. Systems Science & Control Engineering, 8(1), 22–34. https://doi.org/10.1080/21642583.2019.1708830
Xue, J., & Shen, B. (2023). Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. The Journal of Supercomputing, 79(7), 7305–7336. https://doi.org/10.1007/s11227-022-04959-6
Xue, J., Shen, B., & Pan, A. (2023). An intensified sparrow search algorithm for solving optimization problems. Journal of Ambient Intelligence and Humanized Computing, 14(7), 9173–9189. https://doi.org/10.1007/s12652-022-04420-9
Yin, J., Deng, N., & Zhang, J. (2022). Wireless Sensor Network coverage optimization based on Yin-Yang pigeon-inspired optimization algorithm for Internet of Things. Internet of Things, 19, Article 100546. https://doi.org/10.1016/j.iot.2022.100546
Zhang, C., & Ding, S. (2021). A stochastic configuration network based on chaotic sparrow search algorithm. Knowledge-Based Systems, 220, Article 106924. https://doi.org/10.1016/j.knosys.2021.106924
Zhang, L., Gao, T., Cai, G., & Hai, K. L. (2022). Research on electric vehicle charging safety warning model based on back propagation neural network optimized by improved gray wolf algorithm. Journal of Energy Storage, 49, Article 104092. https://doi.org/10.1016/j.est.2022.104092
Zhang, Y. (2023). Elite archives-driven particle swarm optimization for large scale numerical optimization and its engineering applications. Swarm and Evolutionary Computation, 76, Article 101212. https://doi.org/10.1016/j.swevo.2022.101212
Zhao, F., Zhang, L., Cao, J., & Tang, J. (2021). A cooperative water wave optimization algorithm with reinforcement learning for the distributed assembly no-idle flowshop scheduling problem. Computers & Industrial Engineering, 153, Article 107082. https://doi.org/10.1016/j.cie.2020.107082
Zhao, W., Zhang, Z., & Wang, L. (2020). Manta ray foraging optimization: An effective bio-inspired optimizer for engineering applications. Engineering Applications of Artificial Intelligence, 87, Article 103300. https://doi.org/10.1016/j.engappai.2019.103300
Zhu, D., Zhou, C., Qiu, Y., Tang, F., & Yan, S. (2023). Kapur's entropy underwater image segmentation based on multi-strategy Manta ray foraging optimization. Multimedia Tools and Applications, 82(14), 21825–21863. https://doi.org/10.1007/s11042-022-14024-2