Article

Multi-Strategy Bald Eagle Search Algorithm Embedded Orthogonal Learning for Wireless Sensor Network (WSN) Coverage Optimization

1 Faculty of Information Science and Engineering, Management and Science University, Shah Alam 40100, Malaysia
2 School of Management, Henan University of Technology, Zhengzhou 450001, China
3 College of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, China
* Authors to whom correspondence should be addressed.
Sensors 2024, 24(21), 6794; https://doi.org/10.3390/s24216794
Submission received: 16 September 2024 / Revised: 19 October 2024 / Accepted: 21 October 2024 / Published: 23 October 2024
(This article belongs to the Section Sensor Networks)

Abstract

Coverage control is a fundamental and critical issue in many wireless sensor network (WSN) applications. Aiming at the high-dimensional optimization problem of sensor node deployment and the complexity of the monitoring area, an orthogonal learning multi-strategy bald eagle search (OLMBES) algorithm is proposed to optimize the location deployment of sensor nodes. This paper incorporates three strategies into the bald eagle search (BES) algorithm, namely Lévy flight, quasi-reflection-based learning, and quadratic interpolation, which enhance the global exploration ability of the algorithm and accelerate its convergence. Furthermore, orthogonal learning is integrated into BES to improve the algorithm's robustness and alleviate premature convergence. In this way, population search information is fully utilized to generate a superior position guidance vector, which helps the algorithm jump out of local optimal solutions. Simulation results on CEC2014 benchmark functions reveal that the optimization performance of the proposed approach is better than that of existing methods. On the WSN coverage optimization problem, the proposed method achieves a higher network coverage ratio, better node uniformity, and stronger optimization stability than other state-of-the-art algorithms.

1. Introduction

Wireless sensor networks (WSNs), as the core foundation of Internet of Things (IoT) technology [1,2], have garnered immense attention due to their flexibility, timeliness, scalability, and rapid deployment [3,4]. They are composed of numerous microsensor nodes, which can promptly collect and process real-time data from the monitoring region [5]. In recent years, WSNs have been introduced into intelligent transportation systems [6], military defence [7], environmental monitoring [8,9], medical care [10], and other fields. A highly reliable and robust WSN brings great convenience to production and daily life. In many sensor network applications, coverage control of the surveillance region is a substantial task that is strongly related to the service quality of the WSN. The random deployment of sensor nodes results in inefficient network coverage and an uneven distribution of nodes, affecting the effective collection and transmission of data in the surveillance area. Therefore, it is exceedingly significant to deploy a specific number of sensors so as to maximize the coverage ratio of the surveillance area.
Many researchers have conducted extensive studies in pursuit of excellent coverage performance [11,12,13]. At present, coverage control algorithms are mainly divided into two categories: centralized and distributed. Generally, the distributed approach allows each node to use its neighbours' location information to move repeatedly until it reaches the optimal deployment location. However, in distributed coverage control algorithms, the high energy consumption of this iterative movement is inevitable, which significantly reduces the network's remaining energy. In centralized coverage control algorithms, a sink sensor is required to perform coverage optimization with knowledge of the global topology. After analyzing all of the data, it determines where all of the other sensors in the network should be placed. In comparison with distributed algorithms, centralized coverage optimization algorithms reduce the unnecessary movement of sensors and prolong the network lifetime.
With the development of artificial intelligence theory, a series of swarm intelligence optimization algorithms have emerged, which play an increasingly significant role in optimization problems [14,15,16]. An increasing number of scholars have applied them to the coverage optimization problem of WSNs and achieved notable results. Many swarm intelligence optimization algorithms provide efficient and reliable solutions to the sensor node deployment problem, for instance, the particle swarm optimization (PSO) algorithm, grey wolf optimization (GWO) algorithm, whale optimization algorithm (WOA), invasive weed optimization (IWO) algorithm, salp swarm algorithm (SSA), and so forth. The bald eagle search (BES) algorithm [17], proposed by Alsattar in 2020, is a new swarm intelligence optimization algorithm inspired by the hunting strategy and intelligent social behaviour of bald eagles. In contrast to other swarm intelligence optimization algorithms, BES has the advantages of high optimization accuracy and a fast convergence rate [18], and it is widely used in synchronous optimization feature selection [19], support vector machine regression parameter adjustment [20], photovoltaic (PV) model parameter adjustment [21], and other fields. However, BES still suffers from being easily trapped in local optima and from an imbalance between global search and local exploitation. Some scholars have improved the BES algorithm and applied it to optimization problems. In order to improve the global search ability of the BES algorithm, Zhao et al. [22] incorporated the golden sine algorithm and a crisscross strategy into the standard BES algorithm. The improved BES algorithm was applied to the optimization of a back propagation (BP) neural network model, and the experimental results show that the optimized BP neural network model can effectively improve the accuracy of air quality prediction. Ding et al. [23] introduced an adaptive inertia weight and a Cauchy mutation strategy into the BES algorithm, which enhanced the local search ability of the algorithm and reduced the possibility of falling into local optima. The improved BES shows good optimization ability in engineering applications such as pressure vessel design. Shen et al. [24] integrated tent chaotic mapping, Lévy flight, and adaptive weights into BES and applied it to the offloading task of vehicular networks. Simulation results show that the improved BES can effectively reduce the total cost of offloading tasks. Tong et al. [25] introduced a chaos operator and sine and cosine operators into the BES algorithm and applied it to the position optimization of a logistics distribution centre. Experimental results show that the modified algorithm can effectively save delivery costs and improve efficiency.
For the deployment problem of sensor nodes, most swarm intelligence optimization algorithms still have several shortcomings, such as premature convergence, poor population diversity in late iterations, and an inability to balance exploitation and exploration, which lead to numerous tiny coverage holes and node redundancy. In addition, most studies have only paid attention to raising the coverage rate of the region, and limited work has been devoted to improving node uniformity while ensuring sufficient coverage. As a novel swarm intelligence optimization algorithm, BES has relatively few applications in WSNs at present, especially for the deployment problem of sensor nodes. Hence, the motivation that impelled us to conduct further research is to overcome the above drawbacks for the sake of sufficient coverage of the monitoring area and excellent node uniformity. However, BES has a chronic deficiency of weak robustness when dealing with high-dimensional complicated problems, which sometimes results in inferior solutions. Significantly, it is a novel academic idea to enhance the optimization performance of the BES algorithm and apply it to the problem of sensor node deployment. Therefore, this paper proposes an orthogonal learning multi-strategy bald eagle search (OLMBES) algorithm and applies it successfully to WSN coverage optimization. A series of simulation results show that the proposed method exhibits remarkable performance on the WSN coverage optimization problem, which verifies the effectiveness and superiority of the OLMBES algorithm. The primary contributions of this paper are summarized in the following three facets:
(1)
A mathematical model for a coverage optimization problem is formulated. To facilitate quantitative analysis, the continuous surveillance area is discretized into multiple target monitoring points.
(2)
An OLMBES algorithm is proposed. To begin with, this paper incorporates three strategies into the BES algorithm, namely Lévy flight, quasi-reflection-based learning, and quadratic interpolation, which enhance the global exploration ability of the algorithm and accelerate its convergence. Furthermore, orthogonal learning is integrated into the BES algorithm in order to prevent the algorithm from becoming trapped in local optima and to strengthen the robustness of the proposed method.
(3)
The performance of the OLMBES algorithm is verified on the CEC2014 benchmark functions, and the algorithm is then applied to the coverage optimization problem of the WSN. A series of comparative simulation results confirm that the OLMBES algorithm outperforms state-of-the-art methods in tackling WSN coverage optimization, exhibiting remarkable performance in terms of coverage rate and node uniformity.
The remainder of the paper is arranged as follows. Section 2 introduces related works on sensor node deployment methods. Section 3 introduces the probabilistic perception model of sensors and coverage performance evaluation indicators. The hunting process of the standard BES algorithm is briefly sketched in Section 4. Section 5 describes the proposed method in detail. The application of the OLMBES algorithm to coverage optimization of the WSN is presented in Section 6. An analysis of experimental simulations is discussed in Section 7. In the end, conclusions and future improvements are presented in Section 8.

2. Related Works

In deployment, coverage is the most important performance metric for a WSN according to the function of the network; it expresses the ability of the network to monitor an area of interest, meaning that all points within this area are always monitored. To ensure the service quality of a WSN, the coverage optimization problem, as a basic research task, deserves particular attention.
At present, coverage optimization algorithms are mainly classified as either distributed or centralized. In distributed coverage optimization methods, each sensor node determines its position at each timestep based on the local information it has received from neighbouring nodes, and distributed coverage control algorithms can be divided into two groups: force-based and geometrical algorithms [11]. The coverage control algorithms in the force-based group are inspired by natural phenomena, such as animal aggregation [26] or the equilibrium of molecules [27]. In this group, sensor nodes move according to the forces exerted by their neighbouring nodes so as to distribute uniformly in the area, and every sensor node calculates the exerted force from information obtained from neighbouring nodes. In the second group of geometrical algorithms, the Voronoi diagram is the most commonly used structure in WSNs [28]. A Voronoi diagram splits the region of interest into cells, and every sensor node undertakes to cover a cell [29].
In a centralized coverage optimization algorithm, the placement of sensor nodes is decided by a centralized sensor, usually called a sink. The sink sensor analyses all of the data and determines where all of the other sensors in the network should be placed. A major problem in deploying sensor nodes is that their area coverage should be maximized. In recent years, numerous experts and scholars have noticed the potential of swarm intelligence optimization algorithms, which use nature-inspired computational methodologies, in solving high-dimensional complex problems. Several swarm intelligence optimization algorithms have been applied to cope with the sensor deployment problem. Zhao et al. [30] integrated chaotic optimization methods into PSO for the purpose of better coverage performance, which increased the coverage ratio of the monitoring region to a certain extent and mitigated the uneven distribution of nodes. Miao et al. [31] proposed an improved GWO with an enhanced hierarchical structure, strengthening the global search ability of GWO with a new position update equation for grey wolf individuals. Moreover, the proposed approach was found to be usable and efficient in solving the WSN coverage optimization problem, reducing the blind area in the monitoring zone. To improve coverage optimization performance, Wang et al. [32] added the notion of reverse learning to the standard WOA, which improved the node utilization rate while increasing the coverage ratio. Zhu et al. [33] presented a hybrid algorithm of IWO and a differential evolution (DE) algorithm, integrating Lévy flight and a random walk strategy into the hybrid algorithm to alleviate the coverage redundancy and insufficient coverage caused by an uneven distribution of nodes in the surveillance zone. To a degree, it avoided falling into local optimal solutions, and its convergence speed was accelerated. Considering the energy consumption of sensors, Zhang et al. [34] analyzed the relationship between the redeployment positions optimized by the DE algorithm and the initial positions of nodes, which effectively reduced the average moving distance and energy consumption of nodes while maintaining a high coverage rate. The bat algorithm (BA), inspired by the echolocation-based foraging behaviour of bats, has been employed in WSN coverage optimization due to its rapid convergence and ease of implementation. Mohar et al. [35] introduced an improved BA to optimize node deployment. Nevertheless, this method failed to demonstrate strong robustness because of its many parameter settings, which greatly influence coverage performance. Wang et al. [36] adopted a water wave optimization (WWO) algorithm for the location deployment optimization of sensor nodes, since it has the advantages of easy operation, few control parameters, and powerful search ability. Li et al. [37] proposed a node deployment method based on autonomous multi-decision PSO to improve the coverage ratio of the WSN. Chaotic mapping, multi-decision learning, Cauchy mutation, and reverse learning strategies are integrated into the PSO algorithm to enhance its optimization ability on high-dimensional optimization problems. Zhao et al. [38] proposed an improved ant lion optimization algorithm to optimize the sensor node deployment problem. This algorithm employed the cuckoo search (CS) algorithm and a Cauchy mutation strategy to update the positions of the ants in the population, while the DE algorithm was used to update the positions of the ant lion population. On the one hand, this method improves the network coverage performance and reduces the cost of node deployment; on the other hand, the hybrid optimization algorithm greatly increases the computational complexity. To improve network coverage performance, Dao et al. [39] divided the monitoring area into multiple sub-regions, integrated the two strategies of reverse learning and multi-directional technology into the Archimedes optimization algorithm (AOA), and then combined the optimal node locations found in the sub-regions to obtain an optimal deployment scheme.
The above-mentioned optimization methods have improved the coverage ratio of the monitoring region to some extent, but there are still several common drawbacks in solving the sensor deployment problem. The primary shortcomings include an insufficient coverage rate, coverage redundancy, and poor node uniformity. Most studies focus on coverage rate maximization and ignore the influence of sensor node uniformity on the WSN, which leads to redundant coverage and excessive energy consumption of nodes, and thus affects the service quality of the WSN. Aside from that, BES is a novel swarm intelligence optimization algorithm proposed in recent years, with the advantages of high optimization accuracy and a fast convergence rate. However, the coverage maximization problem is a high-dimensional complex problem, and the high dimensionality directly affects the optimization performance of the BES algorithm. Based on previous research and comprehensively considering the influence of node uniformity, a new sensor deployment method using the improved BES algorithm is proposed to improve the coverage performance of the WSN.

3. WSN Coverage Optimization Problem

In a two-dimensional surveillance area with length L and width W, a set of mobile sensors S = {s1, s2, s3, ⋯, sN} is randomly deployed over the region, where the position of the ith sensor is represented as si = {xi, yi}, i = {1, 2, 3, ⋯, N}. Mobile sensors are moved to optimal positions by executing a specific coverage control algorithm, achieving the maximal coverage ratio of the observation region. The following assumptions are made in this paper:
(a)
All sensors are identical in terms of structure, computational power, communication power, storage energy, and synchronous clock.
(b)
Each sensor can acquire location information about its own and neighbouring nodes.
(c)
The communication radius Rc of each sensor is twice the range of the sensing radius Rs.
(d)
In the observation region, there are no obstacles. Each mobile sensor has enough power to perform position update.
The surveillance region is discretized into a × b grid points to evaluate the coverage performance indicators effectively, where the jth target point in the region is represented as oj = {xj, yj}, j = 1, 2, ⋯, a × b. The smaller the distance between target points, the higher the accuracy of the coverage ratio. Figure 1 shows the discretization process of the monitoring area. This paper adopts the probabilistic perception model since it can simulate the information monitoring process in the actual deployment environment. Figure 2 displays a schematic diagram of the probabilistic perception model.
The distance between sensor si and grid point oj is expressed as
$$d(s_i,o_j)=\sqrt{(x_i-x_j)^2+(y_i-y_j)^2} \tag{1}$$
The perception probability of node si to grid point oj is defined by
$$\rho(s_i,o_j)=\begin{cases}0, & d(s_i,o_j)>R_s+R_e\\ \exp\!\left(-\dfrac{\alpha_1\lambda_1^{\beta_1}}{\lambda_2^{\beta_2}+\alpha_2}\right), & R_s-R_e\le d(s_i,o_j)\le R_s+R_e\\ 1, & d(s_i,o_j)<R_s-R_e\end{cases} \tag{2}$$
where Re is the perceived reliability parameter of the sensors, and α1, α2, β1, and β2 are parameters related to the properties of the sensors. In general, α1 = 1, α2 = 0, β1 = 1, β2 = 1. λ1 and λ2 are defined as
$$\lambda_1=R_e-R_s+d(s_i,o_j), \qquad \lambda_2=R_e+R_s-d(s_i,o_j) \tag{3}$$
Therefore, the joint sensing probability of multiple sensors in the monitoring area to grid point oj is expressed as
$$\rho_{\mathrm{cov}}(S,o_j)=1-\prod_{i=1}^{N}\bigl(1-\rho(s_i,o_j)\bigr) \tag{4}$$
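To make the perception model concrete, the following minimal Python/NumPy sketch evaluates Equations (1)-(4) for one grid point. The default parameter values (Rs = 2.5, Re = 0.8, α1 = 1, α2 = 0, β1 = β2 = 1) simply restate the settings above; the function names are illustrative assumptions rather than part of the paper.

import numpy as np

def perception_probability(sensor, point, Rs=2.5, Re=0.8,
                           a1=1.0, a2=0.0, b1=1.0, b2=1.0):
    # Probabilistic perception of one sensor for one grid point, Eqs. (1)-(3).
    d = np.hypot(sensor[0] - point[0], sensor[1] - point[1])   # Eq. (1)
    if d >= Rs + Re:          # ">=" also guards the division below when a2 = 0
        return 0.0
    if d < Rs - Re:
        return 1.0
    lam1 = Re - Rs + d        # Eq. (3)
    lam2 = Re + Rs - d
    return float(np.exp(-(a1 * lam1 ** b1) / (lam2 ** b2 + a2)))   # Eq. (2)

def joint_coverage(sensors, point, **kwargs):
    # Joint sensing probability of all sensors for one grid point, Eq. (4).
    miss = np.prod([1.0 - perception_probability(s, point, **kwargs) for s in sensors])
    return 1.0 - miss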

4. Bald Eagle Search Algorithm

The bald eagle is a large raptor belonging to the Accipitridae family, which mainly feeds on large fish and small mammals that dwell near water [40]. Bald eagles have acute vision as well as outstanding flight ability, allowing them to quickly locate prey and swoop to catch it. In the process of foraging, bald eagles identify and choose a search space with abundant prey according to self-searching or by tracking the population, flying towards a specific area. Once a target prey is determined, bald eagles promptly swoop to catch it. The BES algorithm mimics the behaviour of bald eagles during predation. Correspondingly, the algorithm is divided into three stages, namely, selecting the search space, searching within the selected area, and swooping [41].
(1)
Select stage
To determine an optimal hunting area, bald eagles select a search space with abundant prey and fly spirally within the selected area. The position update equation of bald eagles in the search-space selection stage is given by Equation (5):
$$Q_i^{new}=Q^{*}+\delta\, r\,(Q_{mean}-Q_i) \tag{5}$$
where Qi indicates the position of the ith bald eagle individual, Q* and Qmean, respectively, represent the optimal position and the mean position obtained in the previous search of the population, parameter δ, which controls the variation in position, takes a value between 1.5 and 2, and r is a random number in the range from 0 to 1.
(2)
Search stage
After identifying the optimal search space, bald eagles fly in a spiral to accelerate the search within the specific area. They search for prey around Qmean (the mean position of the population), moving in a spiral direction. We use polar coordinate equations to describe the position update process, as shown in Equations (6)–(9):
$$Q_i^{new}=Q_i+u_i\,(Q_i-Q_{mean})+v_i\,(Q_i-Q_{i+1}) \tag{6}$$
$$u_i=\frac{ur_i}{\max(|ur|)}, \qquad v_i=\frac{vr_i}{\max(|vr|)} \tag{7}$$
$$ur_i=r_i\sin\theta_i, \qquad vr_i=r_i\cos\theta_i \tag{8}$$
$$\theta_i=\alpha\,\pi\,rand, \qquad r_i=\theta_i+R\cdot rand \tag{9}$$
where α is a parameter that controls the angle between adjacent search points, taking a value between 5 and 10, and R determines the number of search cycles, taking a value from 0.5 to 2.
(3)
Swooping stage
In the swooping stage, bald eagles swoop from the optimal position towards the target prey. Meanwhile, the other individuals in the population also move promptly towards the optimal position and attack the prey. Likewise, polar coordinate equations are adopted to describe the position update of the swooping stage, as shown in Equations (10)–(13):
$$Q_i^{new}=rand\cdot Q^{*}+ul_i\,(Q_i-c_1 Q_{mean})+vl_i\,(Q_i-c_2 Q^{*}) \tag{10}$$
$$ul_i=\frac{ur_i}{\max(|ur|)}, \qquad vl_i=\frac{vr_i}{\max(|vr|)} \tag{11}$$
$$ur_i=r_i\sinh\theta_i, \qquad vr_i=r_i\cosh\theta_i \tag{12}$$
$$\theta_i=\alpha\,\pi\,rand, \qquad r_i=\theta_i \tag{13}$$
where c1, c2 ∈ [1, 2].
A flowchart depicting the BES algorithm is exhibited in Figure 3.
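For readers who prefer code to equations, the following Python/NumPy sketch condenses one iteration of the three BES stages (Equations (5)-(13)). It assumes a minimization objective and omits the per-individual greedy acceptance and boundary handling of the full algorithm, so it illustrates the update rules rather than reimplementing BES faithfully; all names are illustrative.

import numpy as np

def bes_iteration(Q, fitness, delta=2.0, alpha=10.0, R=1.5, c1=2.0, c2=2.0):
    # One simplified pass through the select, search, and swoop stages.
    n, _ = Q.shape
    f = np.apply_along_axis(fitness, 1, Q)
    Q_best = Q[np.argmin(f)]                 # assumes a minimization objective
    Q_mean = Q.mean(axis=0)

    # Select stage, Eq. (5)
    Q = Q_best + delta * np.random.rand(n, 1) * (Q_mean - Q)

    # Search stage, Eqs. (6)-(9): spiral movement around the mean position
    theta = alpha * np.pi * np.random.rand(n)
    r = theta + R * np.random.rand(n)
    xr, yr = r * np.sin(theta), r * np.cos(theta)
    u = (xr / np.abs(xr).max())[:, None]
    v = (yr / np.abs(yr).max())[:, None]
    Q = Q + u * (Q - Q_mean) + v * (Q - np.roll(Q, -1, axis=0))

    # Swooping stage, Eqs. (10)-(13): dive towards the best position
    theta = alpha * np.pi * np.random.rand(n)
    r = theta
    xr, yr = r * np.sinh(theta), r * np.cosh(theta)
    ul = (xr / np.abs(xr).max())[:, None]
    vl = (yr / np.abs(yr).max())[:, None]
    Q = np.random.rand(n, 1) * Q_best + ul * (Q - c1 * Q_mean) + vl * (Q - c2 * Q_best)
    return Q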

5. Proposed Methodology

5.1. Lévy Flight

The standard BES algorithm heavily relies on search information in the stage of selecting the search space. It is inefficient to merely search for new space near the global optimal solution, which leads to a sluggish convergence speed and stagnation in the local optimal solution. Therefore, the BES algorithm cannot exhibit remarkable optimization performance when optimizing complicated practical problems. Lévy flight, as a kind of random walk, has a paramount characteristic of executing occasional leaps interspersed with several tiny steps, which helps the population to seek a more potential search space and jump out of the local optimal solution. The position update equation is indicated in Equation (14)
$$Q_i^{levy}=Q^{*}+\mathrm{sign}\!\left(rand-\tfrac{1}{2}\right)\cdot Levy(\lambda) \tag{14}$$
where rand is a random number in the interval [0, 1] obeying a uniform distribution, sign(·) is the sign function, and Lévy(λ) represents a step drawn from the Lévy distribution. The calculation equations for Lévy flight are as follows:
$$Levy(\lambda)=\frac{\mu\,\sigma}{|\nu|^{1/\lambda}} \tag{15}$$
$$\mu\sim N(0,\sigma_\mu^{2}), \qquad \nu\sim N(0,\sigma_\nu^{2}), \qquad \sigma_\mu=\sigma_\nu=1 \tag{16}$$
$$\sigma=\left[\frac{\Gamma(1+\lambda)\,\sin(\pi\lambda/2)}{\Gamma\!\left(\frac{1+\lambda}{2}\right)\lambda\,2^{(\lambda-1)/2}}\right]^{1/\lambda} \tag{17}$$
where μ and ν obey the standard normal distribution, σμ = σν = 1, their dimensions are consistent with each individual in the population, and λ is generally taken as 1.5. Figure 4 presents a simulated image of the Lévy flight path.
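A possible NumPy implementation of Equations (14)-(17) is sketched below, using the sampling scheme described above with λ = 1.5; the function names are illustrative assumptions.

import numpy as np
from math import gamma, sin, pi

def levy_step(dim, lam=1.5):
    # One Levy-distributed step per dimension, Eqs. (15)-(17).
    sigma = (gamma(1 + lam) * sin(pi * lam / 2)
             / (gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    mu = np.random.normal(0.0, 1.0, dim)
    nu = np.random.normal(0.0, 1.0, dim)
    return mu * sigma / np.abs(nu) ** (1 / lam)

def levy_select_update(Q_best, lam=1.5):
    # Eq. (14): occasional long jumps around the current best position.
    dim = Q_best.size
    return Q_best + np.sign(np.random.rand(dim) - 0.5) * levy_step(dim, lam)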

5.2. Quasi-Reflection-Based Learning

Bald eagles spirally fly in the selected region to search for prey. It is distinctly possible to miss a more remarkable solution due to inefficient exploration within the selected space. The primary concept of quasi-reflection-based learning (QRBL) is to calculate and evaluate the current solution vector and quasi-reflection solution at the same time, and then choose the solution equipped with better fitness to enter the next iteration [42]. This method can effectively raise the population diversity and speed up convergence.
If x is a point in the search interval [lb, ub] and c = (lb + ub)/2 represents the midpoint of the search interval, then the quasi-reflection point corresponding to point x can be calculated using Equation (18). The relative positional relationship between a random point and its quasi-reflection point in the search interval is depicted in Figure 5.
$$x^{qr}=rand(c,x) \tag{18}$$
In a high-dimensional vector space, the quasi-reflection mechanism can be applied to each dimension, as shown in Equation (19):
$$Q_{i,j}^{qr}=rand\!\left(\frac{lb_j+ub_j}{2},\,q_{i,j}\right) \tag{19}$$
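The quasi-reflection operation of Equations (18)-(19) amounts to drawing each coordinate uniformly between the interval midpoint and the current coordinate; a short sketch (variable names illustrative) is:

import numpy as np

def quasi_reflection(Q, lb, ub):
    # Quasi-reflection point of each solution vector, Eqs. (18)-(19).
    c = (lb + ub) / 2.0                               # midpoint of the search interval
    lo, hi = np.minimum(c, Q), np.maximum(c, Q)
    return lo + np.random.rand(*Q.shape) * (hi - lo)  # uniform between c and Q, per dimension

In the search stage, the quasi-reflection point is evaluated alongside the original position and the one with the better fitness is kept, as described above.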

5.3. Quadratic Interpolation

In the swooping stage, bald eagles swoop rapidly to capture a target prey, while the other individuals also move towards the optimal position, which leads to inferior population diversity and stagnation in local optimal solutions. Quadratic interpolation (QI), as a type of nonlinear crossover operator, approximately fits a quadratic curve through three solution vectors in the population and generates a new solution vector by mutation [43]. In this paper, QI is applied to the position update of a random individual in the population, guided by the top three solution vectors. The position update equation is given in Equation (20):
$$Q^{mu}=\frac{1}{2}\cdot\frac{(Q_t^{2}-Q_s^{2})f(Q^{*})+(Q^{*2}-Q_t^{2})f(Q_s)+(Q_s^{2}-Q^{*2})f(Q_t)}{(Q_t-Q_s)f(Q^{*})+(Q^{*}-Q_t)f(Q_s)+(Q_s-Q^{*})f(Q_t)} \tag{20}$$
where Q*, Qs, and Qt, respectively, represent the top three solution vectors in terms of fitness, and f(·) denotes the corresponding fitness value.
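As a sketch, the vertex of the fitted parabola in Equation (20) can be computed dimension-wise as follows; the small eps term is an added safeguard against a zero denominator and is not part of the original formula.

import numpy as np

def quadratic_interpolation(q_best, q_s, q_t, f_best, f_s, f_t, eps=1e-12):
    # New solution from the top three vectors (Q*, Qs, Qt) and their fitness, Eq. (20).
    num = ((q_t ** 2 - q_s ** 2) * f_best
           + (q_best ** 2 - q_t ** 2) * f_s
           + (q_s ** 2 - q_best ** 2) * f_t)
    den = ((q_t - q_s) * f_best
           + (q_best - q_t) * f_s
           + (q_s - q_best) * f_t)
    return 0.5 * num / (den + eps)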

5.4. Orthogonal Learning Strategy

(1)
Orthogonal experimental design
Orthogonal experimental design (OED) is an experimental design method for studying multi-factor, multi-level problems, using the smallest number of experiments to achieve results equivalent to comprehensive experiments [44]. Consider an optimization problem whose fitness is related to Z factors, each of which is assigned one of H levels. If the experimenter adopts the exhaustive method to evaluate all test combinations, H^Z evaluations are required to seek the optimal solution. When the value of H or Z is large, finding the best combination in this way is time-consuming and inefficient.
The orthogonal table (OA) is an extraordinarily crucial tool in OED. According to the orthogonality of the orthogonal table, representative test combinations can be selected from the comprehensive experiments to reduce the amount of computation [45]. L_M(H^Z) signifies an orthogonal array with Z factors and H levels per factor, where L denotes the orthogonal array and M denotes the number of test combinations. An orthogonal table with four factors and three levels per factor is given in Equation (21). Assuming an optimization problem has four factors and three levels, 3^4 = 81 experimental evaluations are required to find the optimal combination with the exhaustive method, whereas only 9 experiments are needed using the OED method.
$$L_9(3^4)=\begin{bmatrix}1&1&1&1\\1&2&2&2\\1&3&3&3\\2&1&2&3\\2&2&3&1\\2&3&1&2\\3&1&3&2\\3&2&1&3\\3&3&2&1\end{bmatrix} \tag{21}$$
Factor analysis (FA) can judge the influence of each level on each factor according to the fitness value of M test combinations [46]. fm represents fitness value of the mth test combination (m = 1, 2, 3, …, M). Szh indicates the impact degree of the hth level (h = 1, 2, …, H) on the zth factor (z = 1, 2, …, Z); the calculation process is expressed in Equation (22).
$$S_{zh}=\frac{\sum_{m=1}^{M}f_m\times e_{mzh}}{\sum_{m=1}^{M}e_{mzh}} \tag{22}$$
where emzh is 1 if the level of the zth (z = 1, 2, ⋯, Z) factor of the mth (m = 1, 2, ⋯, M) test combination is h (h = 1, 2, ⋯, H); otherwise, emzh is 0. For an optimization problem, the larger Szh is, the better the hth level is for factor z, and vice versa.
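The following sketch spells out the L9(3^4) array of Equation (21) and the factor analysis of Equation (22); since Szh is simply the mean fitness of the combinations in which factor z takes level h, a masked mean implements it directly. Names are illustrative.

import numpy as np

# L9(3^4) orthogonal array of Eq. (21): 9 test combinations, 4 factors, 3 levels.
L9 = np.array([[1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
               [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
               [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1]])

def factor_analysis(oa, f):
    # S_zh of Eq. (22): mean fitness of the combinations where factor z is at level h.
    M, Z = oa.shape
    H = int(oa.max())
    S = np.zeros((Z, H))
    for z in range(Z):
        for h in range(1, H + 1):
            S[z, h - 1] = f[oa[:, z] == h].mean()
    return S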
(2)
Orthogonal learning strategy
In the BES algorithm, the optimal position of the population plays an indispensable role in guiding other individuals to hunt for prey. To further enhance the global search ability, the orthogonal learning strategy is integrated into the BES algorithm, fully utilizing the search information of the population to help it find a better position guidance vector and avoid falling into local optimal solutions.
For high-dimensional optimization problems, there are several preparatory works to conduct before embedding the orthogonal learning strategy into the algorithm. First of all, the solution vector needs to be divided into k groups so as to reduce the number of factors, with each group corresponding to a factor. Furthermore, it is necessary to construct several levels for each factor so that comprehensive information can be obtained from each factor. This paper constructs four levels for each factor, and the construction process is given as follows:
(a)
The global optimal solution vector Q* with the best fitness value is chosen.
(b)
The second optimal solution vector Qs with suboptimal fitness value is selected.
(c)
A random solution vector Qi that differs from Q* and Qs is determined.
(d)
The centroid opposition-based solution vector Q i ¯ corresponding to Qi is calculated. The calculation process of Q i ¯ is given as follows:
$$\bar{Q_i}=2G-Q_i \tag{23}$$
$$G=\frac{Q_1+Q_2+\cdots+Q_{nPop}}{nPop} \tag{24}$$
where G represents the gravity centre of the population. The search space of reverse points of the gravity centre is a dynamic boundary, denoted as qij ∈ [paj, pbj]. If the reverse points of the gravity centre surpass the boundary, Equation (26) is used to amend the position of the points.
$$pa_j=\min(q_{ij}), \qquad pb_j=\max(q_{ij}) \tag{25}$$
$$\bar{q}_{i,j}=\begin{cases}pa_j+rand(0,1)\,(G_j-pa_j), & \text{if } \bar{q}_{i,j}<pa_j\\ G_j+rand(0,1)\,(pb_j-G_j), & \text{if } \bar{q}_{i,j}>pb_j\end{cases} \tag{26}$$
To summarize, four different levels of each factor can be obtained, denoted as T = {Q*, Qs, Qi, Q i ¯ }. The set of M different search solution vectors can be obtained by OED, denoted as C = {C1, C2, …, CM}. According to factor analysis, the best combination of different levels of each factor is obtained, generating a new guidance vector Qgv of the population. The orthogonal learning strategy is indicated by
$$Q_{gv}=OED\bigl(Q^{*},\,Q_s,\,Q_i,\,\bar{Q_i}\bigr) \tag{27}$$
The orthogonal learning strategy (OLS) helps the population to jump out of the local optimal solution and speeds up convergence. When the fitness value of the optimal solution falls into stagnation, the OLS can help it find a new guidance vector that potentially facilitates a more remarkable solution. However, overuse of the OLS may also disrupt original search patterns of bald eagles. Therefore, this paper sets up a triggering mechanism of the OLS, defining a stagnation number parameter, stagnated_num. If and only if stagnated_num is greater than or equal to limit (maximal stagnation times), execute the OLS and then reset stagnated_num to 0.
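A compact sketch of the centroid opposition of Equations (23)-(24) and of the guidance-vector construction of Equation (27) is given below. It assumes a maximization objective (the coverage ratio in this paper), a precomputed orthogonal array with entries in 1..4, and a grouping of the dimensions into Z factors; the dynamic boundary repair of Equations (25)-(26) and the stagnation trigger are omitted for brevity, and all names are illustrative.

import numpy as np

def centroid_opposition(Q, i):
    # Centroid-opposition vector of individual i, Eqs. (23)-(24).
    G = Q.mean(axis=0)
    return 2.0 * G - Q[i]

def orthogonal_learning(levels, oa, groups, fitness):
    # Eq. (27): combine the level set T = {Q*, Qs, Qi, Qi_bar} into a guidance vector Q_gv.
    #   levels : list of the four candidate vectors (one per level)
    #   oa     : (M, Z) orthogonal array with entries in 1..len(levels)
    #   groups : list of Z index arrays splitting the dimensions into factors
    M, Z = oa.shape
    combos = np.empty((M, levels[0].size))
    for m in range(M):
        for z, idx in enumerate(groups):
            combos[m, idx] = levels[oa[m, z] - 1][idx]    # level chosen by the array
    f = np.array([fitness(c) for c in combos])

    q_gv = np.empty_like(levels[0])
    for z, idx in enumerate(groups):                      # factor analysis, Eq. (22)
        means = [f[oa[:, z] == h].mean() for h in range(1, len(levels) + 1)]
        q_gv[idx] = levels[int(np.argmax(means))][idx]    # keep the best level per factor
    return q_gv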

5.5. Complexity Analysis of Proposed OLMBES Algorithm

Different algorithms take varying amounts of time to optimize the same problems, and assessing the computational complexity of an algorithm is an essential way to evaluate its execution time. For the proposed OLMBES algorithm, we use Big O notation [47] to analyze the time complexity. Let nPop represent the population size of the proposed algorithm, Maxgen the maximum number of iterations, and M the number of experiments generated by the OLS. Following the rules of operation for the time complexity symbol O, the time complexity of randomly initializing the population is O(nPop). During the solution update process, the computational complexity of the selecting, searching, and swooping stages is the same, O(nPop × Maxgen), which encompasses both finding the best positions and updating the positions of all solutions, and O(M × Maxgen) represents the computational complexity of the OLS. Therefore, the total computational complexity of the proposed OLMBES algorithm is O(Maxgen × (nPop + M) + nPop). Table 1 shows the pseudo-code of the proposed OLMBES algorithm.

6. Proposed OLMBES Algorithm for WSN Coverage Optimization

Application of the OLMBES algorithm in WSN coverage optimization is indicated in Figure 6. The procedures of the OLMBES algorithm are depicted below.
Step 1: Initialize the lower bound (lb) and upper bound (ub) of the monitoring region. At the same time, determine the number of sensor nodes (N), denoted as S = {s1, s2, s3, ⋯, sN}.
Step 2: Initialize the parameters of the OLMBES algorithm.
Step 3: Randomly generate the positions of nPop bald eagles, denoted as Q = {Q1, Q2, …, QnPop}, Qi = {q1, q2, …, q2N−1, q2N} (i = 1, 2, …, nPop). Evaluate the coverage ratio f(Qi) of each position set and record the top three fitness values and their corresponding position vectors.
Step 4: Enter the iterative loop. In the stage of selecting the search space, update positions using Equations (5) and (14). Evaluate the coverage ratio of the updated positions and judge whether the updated position is a better choice.
Step 5: In the stage of searching for preys, update positions using Equation (6) and calculate the quasi-reflection position vectors corresponding to the updated positions utilizing Equation (19). Evaluate the coverage ratio of the updated positions and corresponding quasi-reflection position vectors, and then choose the position with greater coverage ratio.
Step 6: In the stage of swooping, randomly choose an individual of the population, using Equation (20) to update the position. In the meanwhile, update the position according to Equation (10) for other individuals. Evaluate the coverage ratio of the updated positions and judge whether the updated position is a better choice.
Step 7: Update the top three position vectors, namely, Q*, Qs, Qt. Check whether stagnated_num is greater than or equal to limit. If stagnated_num reaches the stagnation threshold, execute the OLS according to Equation (27), and then reset the stagnated_num to 0. Calculate the coverage ratio of the updated position and determine whether the updated position is a better choice.
Step 8: Check whether the number of iterations is greater than Maxgen. If not, then t = t + 1, so go to Step 4. Otherwise, end the iterative loop and output the optimal deployment positions of sensor nodes.
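Two small helpers illustrate how Step 3 and Step 7 can be realized: each bald eagle is a flat vector of the 2N node coordinates, and a stagnation counter decides when the OLS of Equation (27) is triggered. The function names and the default limit value are illustrative assumptions.

import numpy as np

def init_population(n_pop, n_sensors, L=20.0, W=20.0):
    # Step 3: each bald eagle encodes the node coordinates (x1, y1, ..., xN, yN).
    return np.random.rand(n_pop, 2 * n_sensors) * np.tile([L, W], n_sensors)

def update_stagnation(best_now, best_prev, stagnated_num, limit=10):
    # Step 7: count how long the best coverage ratio has stagnated and
    # report whether the orthogonal learning strategy should be executed.
    stagnated_num = 0 if best_now > best_prev else stagnated_num + 1
    trigger_ols = stagnated_num >= limit
    return (0 if trigger_ols else stagnated_num), trigger_ols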

7. Simulation Experiments and Analysis

7.1. CEC2014 Benchmark Functions Test

To verify the effectiveness of the OLMBES algorithm and validate the performance of different strategies embedded in the OLMBES algorithm in solving high-dimensional optimization problems, the CEC2014 benchmark functions set is used to test the performance of the existing algorithms and the proposed OLMBES algorithm. F1~F3 are unimodal rotation functions. F4~F16 are simple multimodal functions with shift and rotation. F17~F22 are hybrid functions. F23~F30 are composite functions. On 30 test functions of CEC2014, the OLMBES algorithm is compared with the BES, GWO, WOA, SSA, BA, and CS algorithms. In order to justify the effect of different strategies proposed in this paper on the performance improvement of the OLMBES algorithm, the following definitions are made. The OLMBES algorithm not fused with Lévy flight strategy is named OLMBES-1. The OLMBES algorithm without quasi-reflection learning and quadratic interpolation strategies is named OLMBES-2. The OLMBES algorithm without orthogonal learning is named OLMBES-3. The algorithm parameters are set as follows: the population size is 100, dimension is 50, and maximum number of iterations is 1000. Table 2 exhibits the comparison results between the proposed algorithm and the existing algorithms in terms of mean and standard deviation obtained after 30 independent runs of selected CEC2014 test functions. Figure 7 displays the convergence curve of the algorithm on several benchmark functions.
Observing the average fitness values of the OLMBES, BES, GWO, WOA, SSA, BA, and CS algorithms in Table 2 on the CEC2014 test functions set, the algorithm proposed in this paper achieves the best performance on the benchmark functions F1~F4, F7~F27 and F30, equipping it with the ability to seek solutions closer to the theoretical global optimal value. The SSA, GWO, CS, and BA algorithms show the best average convergence precision on functions F5, F6, F28, and F29, respectively. On most benchmark functions, the excellent performance of the OLMBES algorithm on the average fitness value verifies that the proposed algorithm has a strong global search ability compared with the six other algorithms, which can effectively balance the relationship between exploitation and exploration and find a solution closer to the theoretical global optimal value. Observing the standard deviation of the seven algorithms in Table 2 on the test functions, it can be found that the proposed algorithm has the smallest standard deviation value on the F1~F3, F8, F11~F13, F15, F17~F25, and F30 functions. The stability of the OLMBES algorithm is the best among the seven algorithms. Among the six other algorithms, the CS algorithm has the best optimization stability on the F4, F6, F9~F10, F14, F16, F26, and F28~F29 functions, and its stability ranks second among the seven algorithms. BA has the most stable optimization ability on the F5 and F26 functions. SSA has the best standard deviation on function F7. On the CEC2014 benchmark functions, the OLMBES algorithm has the strongest optimization ability and optimization stability compared with the other six algorithms, which provides a new research direction for solving high-dimensional complex problems.
By observing the mean values of the OLMBES-1, OLMBES-2, OLMBES-3, and OLMBES algorithms in Table 2, it is found that their average fitness values are smaller than those of BES and the other five algorithms on most benchmark functions, and the OLMBES algorithm has the best performance on the mean indicator except on F22, F28, and F30. This proves that the Lévy flight, quasi-reflection learning, quadratic interpolation, and orthogonal learning strategies can enhance the convergence precision of the algorithm. On most benchmark functions, the OLMBES-1 algorithm performs better than the BES, OLMBES-2, and OLMBES-3 algorithms on the mean values, except on F5, F6, F8, and F30. The OLMBES-2 algorithm performs better than the BES and OLMBES-3 algorithms on the mean values. Except for the F6, F17, F22, F26~F28, and F30 functions, the OLMBES-3 algorithm has better optimization ability than the BES algorithm. From the perspective of improving the convergence precision of the algorithm, the orthogonal learning strategy has the greatest impact, effectively preventing the algorithm from falling into local optimal solutions, while the influence of the quasi-reflection learning and quadratic interpolation strategies is greater than that of Lévy flight. By observing the standard deviation index of these algorithms in Table 2, the OLMBES algorithm and its variants have superior robustness to the other algorithms on most benchmark functions. In general, the four strategies embedded in the OLMBES algorithm make the optimization performance more stable.

7.2. Simulation Experiments on WSN Coverage Optimization

7.2.1. Comparison of Coverage Performance

To verify the superiority and effectiveness of the proposed OLMBES algorithm in a WSN coverage optimization problem, the BES, GWO, WOA, SSA, BA, and CS algorithms are compared with the proposed method in the same surveillance area. The coverage rate and node uniformity are primary indicators to effectively evaluate the quality of solutions. The fitness function of a coverage optimization task is maximization of the coverage rate. In the meanwhile, the uniformity of sensor nodes is considered a crucial index to judge the quality of the optimal solutions. Make the following definitions:
(1)
The coverage rate is one of the indispensable indicators used to evaluate coverage performance [48]. The greater the coverage rate, the more comprehensive the information collected by the sensors. The coverage rate (CR) is defined as the ratio of the sum of the joint sensing probabilities of all grid points to the total number of grid points, as shown in Equation (28):
$$CR=\frac{\sum_{j=1}^{a\times b}\rho_{\mathrm{cov}}(S,o_j)}{a\times b} \tag{28}$$
(2)
Uniformity is an indicator that measures the distribution of sensors in the surveillance area [49]. The smaller the uniformity, the more evenly the sensors are distributed. Uniformity (U) is defined as the mean of the standard deviations of the distances between each sensor and its neighbouring nodes, as shown by
$$U=\frac{1}{N}\sum_{i=1}^{N}\sqrt{\frac{1}{p_i}\sum_{j=1}^{p_i}\left(\frac{1}{p_i}\sum_{z=1}^{p_i}D_{i,z}-D_{i,j}\right)^{2}} \tag{29}$$
where Di,j represents the distance between the ith and jth sensors, N denotes the number of sensor nodes, and pi denotes the number of neighbouring nodes of the ith sensor.
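The two metrics can be computed as sketched below; the coverage ratio reuses joint_coverage() from the Section 3 sketch, and the uniformity takes a node's neighbours to be all nodes within the communication radius Rc = 2Rs = 5 m, which is our reading of assumption (c) rather than a definition stated in the paper. Grid spacing and names are illustrative.

import numpy as np

def coverage_ratio(sensors, L=20.0, W=20.0, step=0.4, **model_kw):
    # Coverage rate CR of Eq. (28) over an a x b grid with spacing `step`.
    xs = np.arange(step / 2, L, step)
    ys = np.arange(step / 2, W, step)
    probs = [joint_coverage(sensors, (x, y), **model_kw) for x in xs for y in ys]
    return float(np.mean(probs))

def uniformity(sensors, Rc=5.0):
    # Uniformity U of Eq. (29): mean standard deviation of each node's neighbour distances.
    sensors = np.asarray(sensors)
    D = np.linalg.norm(sensors[:, None, :] - sensors[None, :, :], axis=-1)
    stds = []
    for i in range(len(sensors)):
        neigh = D[i][(D[i] > 0) & (D[i] <= Rc)]   # distances to neighbouring nodes
        if neigh.size:
            stds.append(neigh.std())              # population std, matching Eq. (29)
    return float(np.mean(stds)) if stds else 0.0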
The network simulation environment is set as follows. The size of the monitoring area is 20 m × 20 m, and a specific number of sensors, N = 30, is deployed in this region. The sensing radius of the sensors in the deployment region is Rs = 2.5 m, and the sensing reliability parameter is Re = 0.8 m. The population size is nPop = 30, and the maximum number of iterations (Maxgen) is 500. In order to calculate the coverage ratio effectively, the distance between grid points is set to 0.4 m. Each experiment was repeated 10 times under the same conditions to reduce the influence of experimental randomness. Table 3 shows the parameter settings of the seven comparison algorithms.
Table 4 shows the average coverage ratio and uniformity after optimization by the seven different algorithms. It is beyond dispute from Table 4 that the OLMBES algorithm has the most extraordinary coverage performance in terms of the average coverage ratio and uniformity. Compared with the other six algorithms, the average coverage ratio of the OLMBES algorithm is improved by 2.25%, 2.69%, 4.16%, 2.54%, 2.31%, and 3.39%, respectively. Meanwhile, with the OLMBES algorithm, the average uniformity is improved by 0.177, 0.204, 0.249, 0.179, 0.178, and 0.234, respectively.
Figure 8a shows a graph of the average coverage ratio changing with the number of iterations. It can be seen from Figure 8a that the OLMBES algorithm has the highest average coverage ratio and the fastest convergence speed. In comparison, both the BES and GWO algorithms converge more slowly throughout the iterative process, but their final average coverage rates after optimization are similar and significantly better than those of the WOA and CS algorithms. Additionally, the BA, WOA, and CS algorithms have similar convergence speeds in the early iterations, but the final coverage rate of the BA is better than those of the WOA and CS algorithms and comparable to those of the BES and GWO algorithms. Although the SSA has a slower convergence speed during the early iterations, it converges faster in the mid-iteration process, and its final average coverage rate is similar to those of the BA, GWO, and BES algorithms but better than those of the WOA and CS algorithms. Figure 8b shows a histogram of the uniformity of the node distribution in the network after optimization by the seven different algorithms. Since a lower uniformity value indicates a more uniform distribution of sensor nodes, the figure shows that the OLMBES algorithm has the most remarkable performance in terms of the uniformity of sensor nodes.
In order to verify the robustness of the OLMBES algorithm, boxplots are employed to describe the data distributions of the coverage ratio and uniformity obtained by running the seven algorithms repeatedly in the same environment; a boxplot is a statistical graph that shows the distribution of a set of data through its maximum, minimum, median, upper and lower quartiles, and outliers. Figure 9a shows a boxplot of the network coverage rate optimized by the seven algorithms. For this indicator, a larger value indicates better coverage performance of the sensor nodes in the network. As shown in Figure 9a, the upper quartile, median, and lower quartile of the OLMBES algorithm are higher than those of the other six algorithms, with the shortest interquartile range and no outliers, indicating that the OLMBES algorithm has excellent global search ability and strong robustness. Compared with the other six algorithms, the box height of the OLMBES algorithm is the shortest, which indicates that the OLMBES algorithm has a small degree of data fluctuation and can provide a feasible solution with a high coverage ratio for the coverage optimization problem. In the boxplots of the BES, WOA, and CS algorithms, there are outliers that deviate from the average coverage level, which indicates that the solutions provided by these algorithms in optimizing the deployment of sensor nodes are unstable. Compared with the GWO, WOA, SSA, BA, and CS algorithms, the median line of the BES algorithm is significantly higher, but the BES algorithm has a longer interquartile range and greater data volatility, further indicating that the BES algorithm has strong global search ability but poor stability. Similarly, Figure 9b shows a boxplot of the uniformity obtained by the seven algorithms. For network uniformity, a lower value indicates a better distribution of the sensor nodes in the network. As shown in Figure 9b, the upper quartile, median, and lower quartile of the OLMBES algorithm are all lower than those of the other six algorithms, indicating that the node deployment positions optimized by the OLMBES algorithm have the highest degree of uniformity and robustness.

7.2.2. Influence Comparison of Sensor Nodes Number

To explore the effect of the number of sensors on the coverage rate and uniformity, the total number of sensors was gradually increased from 24 to 32 (two nodes per group) under the same simulation environment. Table 5 shows the coverage rate when the total number of sensors increases sequentially. It can be seen from Table 5 that the average coverage rates of the seven algorithms rise with the increase in the total number of sensors when the size of the surveillance area remains unchanged. By contrast, the OLMBES algorithm has the highest coverage rate and the most outstanding uniformity. Table 6 shows the optimized network node uniformity of the seven algorithms for different numbers of nodes. Since a lower uniformity value indicates a more uniform distribution of network nodes, it can be intuitively seen that the proposed OLMBES algorithm shows the best node distribution performance for different numbers of nodes, and that as the number of nodes increases, the network uniformity improves gradually. With the increase in the number of nodes, the network uniformity values optimized by the CS and WOA algorithms remain around 0.6. In addition, the network uniformity values of the GWO, BA, BES, and SSA algorithms show an increasing trend as the number of nodes grows, especially for the GWO algorithm. To ensure a high coverage ratio of the WSN, the minimum total number of nodes for a region of this size should be 30, according to the theory on the number of theoretical sensors in the literature [50]. Therefore, 30 sensor nodes were used in this paper to maximize the coverage rate of the 20 m × 20 m surveillance area.
Figure 10a and Figure 10b, respectively, show a histogram of the average coverage rate and a line chart of uniformity in the same monitoring area with different total numbers of sensor nodes. It can be seen from Figure 10a,b that when the number of sensor nodes is identical, the average coverage rate and uniformity of the solutions provided by the OLMBES algorithm are always better than the other six algorithms.
Figure 11a and Figure 11b, respectively, correspond to error bar graphs. By observing Figure 11a, it can be found that for the same number of sensor nodes, the solutions provided by OLMBES algorithm not only have the highest average coverage rate, but also possess the smallest difference between the optimal coverage rate and the worst coverage rate in multiple runs. It is indisputable that the OLMBES algorithm has the strongest stability. The BES algorithm displays distinct instability when optimizing the positions of sensor nodes. The coverage index fluctuates greatly and the stability is the weakest. The node deployment schemes provided by the GWO, WOA, SSA, and CS algorithms cannot meet the requirements for the effective coverage of monitoring areas. Figure 11b shows that the OLMBES algorithm provides the best node deployment scheme in terms of uniformity index compared to the other six algorithms. The proposed method can improve coverage efficiency, ensure the connectivity between nodes, and reduce the occurrence of coverage redundancy.

7.2.3. Effect Comparison of Monitoring Area Size

We set surveillance areas of different sizes and observed the impact of area size on the coverage performance indicators. Table 7 shows the parameter settings for three different surveillance areas, and the rest of the parameter settings remained unchanged. Table 8 and Table 9 show the comparison of the average coverage rates and uniformity in three monitoring areas of different sizes. It can be seen from Table 8 and Table 9 that the OLMBES algorithm can ensure the sufficient coverage of WSN, whatever the size of the surveillance area is.
Figure 12a,b shows histograms of the average coverage rates and uniformity in three monitoring areas of different sizes. Figure 13a and Figure 13b, respectively, correspond to error bar graphs. It can be seen from Figure 12 and Figure 13 that the OLMBES algorithm can ensure the higher regional coverage rate and node uniformity compared with the six other algorithms. The OLMBES algorithm exhibits the most excellent coverage performance and the strongest robustness. By contrast, the other six algorithms have several problems with inefficient coverage rates, poor uniformity of sensor nodes, and unstable optimization performance. To a certain extent, simulation results on different sizes of monitoring areas show that the proposed algorithm has excellent adaptability to the sizes of surveillance regions and can stably provide high-quality node deployment solutions.

7.3. Summary and Discussions

In order to comprehensively evaluate the optimization performance of the proposed OLMBES algorithm and its effectiveness in network coverage, we conducted specific simulation experiments on both standard test functions and the application of network coverage control. The compared algorithms in the experiments include the standard BES algorithm as well as several state-of-the-art metaheuristic algorithms, such as GWO, CS, SSA, BA, and WOA.
For the simulation experiments on the standard test functions, we selected the CEC2014 benchmark, which consists of 30 test functions including unimodal functions, multimodal functions, hybrid functions, and composite functions, to comprehensively assess the optimization performance of the algorithm. First, compared with the other six algorithms, the proposed OLMBES algorithm achieved the lowest average fitness and standard deviation on most benchmark functions, demonstrating its superior optimization accuracy and stability. Then, to further verify the effects of the different strategies on optimization performance, the OLMBES algorithm and its variants were compared, and they also displayed excellent performance on most benchmark functions relative to the other six algorithms. Additionally, the convergence curves of some benchmark functions demonstrated the faster convergence speed of the proposed algorithm.
In the network coverage simulation experiments, two evaluation metrics were used: network coverage rate and the uniformity of network node distribution. On this basis, we first conducted experiments under an unchanged network simulation environment and verified that the proposed OLMBES algorithm performed better than the other six algorithms by achieving the maximum network coverage rate and the best node distribution uniformity. Moreover, the proposed algorithm was also demonstrated to have the fastest convergence speed and the highest stability in coverage optimization. As the number of sensor nodes increased, the OLMBES algorithm not only improved network coverage but also enhanced the uniformity of node distribution. However, for the other six algorithms, although the optimized network coverage rate increased with more nodes, the uniformity of node distribution was not improved. Finally, for different sizes of coverage areas, the simulation results also confirmed that the proposed OLMBES algorithm is highly adaptable and able to provide a high-quality network coverage control solution.

8. Conclusions

Coverage control is a fundamental and critical issue in WSN applications. In order to further improve the coverage performance of nodes in the WSN, this paper proposes a multi-strategy bald eagle search algorithm with embedded orthogonal learning. The algorithm introduces Lévy flight, QRBL, and QI into the BES algorithm, which accelerate the convergence speed and improve the global search ability of the algorithm. When the fitness of the global optimal solution stagnates during the iterative process, the OL update strategy is triggered to help the algorithm find a better position guidance vector, jumping out of the local optimal solution and enhancing the robustness of the algorithm.
The performance of the OLMBES algorithm is verified on the CEC2014 benchmark functions. The proposed method is successfully applied to the wireless sensor network coverage optimization problem, and three sets of simulation experiments are set up to compare it with the BES, GWO, WOA, SSA, BA, and CS algorithms. Although the proposed method shares the drawback of centralized coverage optimization algorithms that computational complexity is positively correlated with grid point density, it further improves the network coverage and node uniformity and has a faster convergence speed and greater robustness.
In future work, the energy consumption of sensor nodes will be comprehensively considered to extend the lifetime of the WSN while ensuring adequate coverage. For the purpose of reducing the computational complexity, future investigation will also focus on combining the centralized method and the distributed method to tackle the coverage optimization problem. Moreover, we wish to apply the proposed method in three-dimensional (3D) space and other actual scenes.

Author Contributions

Conceptualization, T.C. and C.Z.; methodology, M.I.A.; software, Y.L.; validation, H.N., C.Z. and T.C.; formal analysis, H.N. and C.Z.; investigation, T.C.; resources, M.I.A.; writing—original draft preparation, C.Z. and H.N.; writing—review and editing, H.N., C.Z. and T.C.; visualization, Y.L.; supervision, M.I.A.; project administration, L.S.; funding acquisition, L.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by The National Natural Science Foundation of China (grant number: 62173127, 62472143), Key R&D Special Projects in Henan Province (grant number: 241111521000), Top Young Talents in Central Plains (grant number: (2023)11), The Innovative Funds Plans of Henan University of Technology (grant number: 2020ZKCJ06), The Zhengzhou Science and Technology Collaborative Innovation Project (grant number: 21ZZXTCX06), The Open Fund from Research Platform of Grain Information Processing Center in Henan University of Technology (grant number: KFJJ2022003).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Discrete monitoring area.
Figure 2. Probabilistic sensing model.
Figure 3. Flowchart of the BES algorithm.
Figure 4. Trajectory diagram of Lévy flight.
Figure 5. Quasi-reflection points defined in domain [lb, ub].
Figure 6. Flowchart of the OLMBES algorithm.
Figure 7. Convergence curves of different algorithms under several benchmark functions.
Figure 8. Comparison of coverage performance indicators: (a) variation curves of coverage rate changing with iterative times; (b) uniformity comparison of algorithms.
Figure 9. Analysis of algorithm robustness: (a) network coverage ratio boxplot diagram; (b) uniformity boxplot diagram.
Figure 10. Analysis of the coverage rate and uniformity with the numbers of nodes: (a) the average coverage rate histogram for different numbers of nodes; (b) the average uniformity line chart for different numbers of nodes.
Figure 11. Robustness analysis of the coverage rate and uniformity with the numbers of nodes: (a) coverage rate error bar for different numbers of nodes; (b) uniformity error bar for different numbers of nodes.
Figure 12. Analysis of coverage rates and uniformity changing with sizes of surveillance area: (a) the average coverage rate histogram for different surveillance areas; (b) the average uniformity histogram for different sizes of surveillance area.
Figure 13. Robustness analysis of the coverage rate and uniformity with different sizes of surveillance areas: (a) the coverage rate error bar for different sizes of surveillance areas; (b) uniformity error bar for different sizes of surveillance areas.
Table 1. The pseudo-code of the proposed OLMBES algorithm.
Initialize the OLMBES parameters;
Randomly generate the initial population;
For i = 1:nPop
  Calculate the fitness of the initial population;
End For
Q* = the optimal solution;
Qs = the second-best solution;
Qt = the third-best solution;
While (iteration ≤ Maxgen)
  Select space
  For (each individual i in the population)
    Update the position using Equations (5) and (14);
    Evaluate f(Qi^new), f(Qi^levy), f(Qi) and choose the best individual as Qi;
  End For
  Search in space
  For (each individual i in the population)
    Update the position using Equations (6) and (19);
    Evaluate f(Qi^new), f(Qi^qr), f(Qi) and choose the best individual as Qi;
  End For
  Swooping
  For (each individual i in the population)
    r = randperm(nPop);
    If i == r(1)
      Update the position using Equation (10);
    Else
      Update the position using Equation (20);
    End If
    Evaluate f(Qi^new), f(Qi) and choose the best individual as Qi;
  End For
  Update Q*, Qs, Qt;
  OL strategy
  If stagnated_num ≥ limit
    Qgv = OED(Q*, Qs, Qi, Q̄i);
    Evaluate f(Qgv), f(Q*) and choose the best individual as Q*;
    stagnated_num = 0;
  Else
    stagnated_num = stagnated_num + 1;
  End If
  iteration = iteration + 1;
End While
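The OED(·) operation above follows the orthogonal experimental design idea used in orthogonal learning. A minimal sketch is given below, assuming a two-level orthogonal array, treating each dimension as its own factor (the factor grouping controlled by k in Table 3 is not modelled), and omitting the factor-analysis step that full OL implementations usually add; the function names and the example fitness are hypothetical.

```python
import numpy as np

def orthogonal_array(k):
    """Two-level orthogonal array with 2**ceil(log2(k + 1)) rows and k columns,
    built from XOR combinations of the basic columns of a full factorial design."""
    m = int(np.ceil(np.log2(k + 1)))
    rows = 2 ** m
    basic = np.array([[(r >> b) & 1 for b in range(m)] for r in range(rows)])
    cols = []
    for mask in range(1, 2 ** m):            # every non-empty subset of basic columns
        col = np.zeros(rows, dtype=int)
        for b in range(m):
            if (mask >> b) & 1:
                col ^= basic[:, b]
        cols.append(col)
    return np.stack(cols, axis=1)[:, :k]     # entries in {0, 1}

def oed_guidance(a, b, fitness):
    """Combine two solutions dimension-wise according to the orthogonal array and
    return the best tested combination (no factor analysis in this sketch)."""
    oa = orthogonal_array(len(a))
    candidates = np.where(oa == 0, a, b)     # level 0 -> component of a, level 1 -> of b
    scores = np.array([fitness(c) for c in candidates])
    return candidates[int(np.argmin(scores))]

sphere = lambda x: float(np.sum(x ** 2))     # hypothetical fitness for the demo
print(oed_guidance(np.array([1.0, -2.0, 0.5, 3.0]), np.zeros(4), sphere))
```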
Table 2. Comparative results of different algorithms on CEC2014 benchmark functions.
Problems | Statistics | OLMBES | OLMBES-1 | OLMBES-2 | OLMBES-3 | BES | GWO | WOA | SSA | BA | CS
F1 | Mean | 3.1820 × 10^3 | 9.7821 × 10^3 | 8.2489 × 10^4 | 1.7632 × 10^5 | 4.5361 × 10^5 | 7.5337 × 10^7 | 1.3411 × 10^8 | 1.9224 × 10^7 | 1.3290 × 10^7 | 4.3731 × 10^6
F1 | STD | 1.5479 × 10^3 | 2.0798 × 10^3 | 4.6990 × 10^4 | 1.6879 × 10^5 | 2.6220 × 10^5 | 4.1621 × 10^7 | 4.8072 × 10^7 | 6.0163 × 10^6 | 1.3210 × 10^7 | 4.3712 × 10^6
F2 | Mean | 1.8853 × 10^3 | 2.4708 × 10^3 | 3.0579 × 10^3 | 2.4789 × 10^3 | 4.8839 × 10^3 | 6.2216 × 10^9 | 1.5911 × 10^9 | 7.3036 × 10^3 | 1.4007 × 10^6 | 4.9723 × 10^7
F2 | STD | 2.1328 × 10^3 | 2.8302 × 10^3 | 3.0495 × 10^3 | 2.1237 × 10^3 | 4.5945 × 10^3 | 3.0028 × 10^9 | 5.9508 × 10^8 | 6.8858 × 10^3 | 8.4116 × 10^5 | 1.2827 × 10^7
F3 | Mean | 3.0650 × 10^2 | 3.0844 × 10^2 | 3.0967 × 10^2 | 1.3404 × 10^3 | 1.5424 × 10^3 | 6.8244 × 10^4 | 1.0378 × 10^5 | 5.7919 × 10^4 | 3.0500 × 10^5 | 7.9680 × 10^4
F3 | STD | 4.0519 × 10^0 | 3.8738 × 10^0 | 4.5274 × 10^0 | 7.3993 × 10^2 | 1.0808 × 10^3 | 1.3343 × 10^4 | 2.3284 × 10^4 | 1.2651 × 10^4 | 1.9889 × 10^5 | 1.0209 × 10^4
F4 | Mean | 4.1507 × 10^2 | 4.2460 × 10^2 | 4.2762 × 10^2 | 4.3389 × 10^2 | 4.5305 × 10^2 | 1.0290 × 10^3 | 9.8126 × 10^2 | 5.2079 × 10^2 | 6.0023 × 10^2 | 5.5509 × 10^2
F4 | STD | 3.1426 × 10^1 | 3.1414 × 10^1 | 3.4590 × 10^1 | 3.4787 × 10^1 | 3.8879 × 10^1 | 1.9751 × 10^2 | 1.4742 × 10^2 | 4.0292 × 10^2 | 6.4912 × 10^1 | 1.8187 × 10^1
F5 | Mean | 5.2106 × 10^2 | 5.2111 × 10^2 | 5.2110 × 10^2 | 5.2113 × 10^2 | 5.2113 × 10^2 | 5.2118 × 10^2 | 5.2083 × 10^2 | 5.2004 × 10^2 | 5.2063 × 10^2 | 5.2113 × 10^2
F5 | STD | 4.8100 × 10^-2 | 8.7600 × 10^-2 | 1.3510 × 10^-1 | 3.7600 × 10^-2 | 5.0000 × 10^-2 | 2.8400 × 10^-2 | 1.2600 × 10^-2 | 6.2100 × 10^-2 | 6.4500 × 10^-2 | 3.9500 × 10^-2
F6 | Mean | 6.3176 × 10^2 | 6.3537 × 10^2 | 6.3421 × 10^2 | 6.5592 × 10^2 | 6.4438 × 10^2 | 6.3101 × 10^2 | 6.6874 × 10^2 | 6.3961 × 10^2 | 6.7121 × 10^2 | 6.5426 × 10^2
F6 | STD | 4.1231 × 10^0 | 5.7097 × 10^0 | 4.8988 × 10^0 | 6.6622 × 10^0 | 4.3224 × 10^0 | 3.8579 × 10^0 | 3.1133 × 10^0 | 7.0864 × 10^0 | 3.0445 × 10^0 | 1.3467 × 10^0
F7 | Mean | 7.0000 × 10^2 | 7.0001 × 10^2 | 7.0003 × 10^2 | 7.0003 × 10^2 | 7.0005 × 10^2 | 7.5898 × 10^2 | 7.1417 × 10^2 | 7.0000 × 10^2 | 7.0088 × 10^2 | 7.0126 × 10^2
F7 | STD | 1.1900 × 10^-2 | 6.1000 × 10^-3 | 4.6000 × 10^-3 | 2.0750 × 10^-2 | 2.3816 × 10^-2 | 2.7691 × 10^1 | 4.4903 × 10^1 | 7.8000 × 10^-3 | 8.0100 × 10^-2 | 6.4000 × 10^-2
F8 | Mean | 8.8320 × 10^2 | 9.7830 × 10^2 | 8.8875 × 10^2 | 8.8872 × 10^2 | 9.9456 × 10^2 | 9.9537 × 10^2 | 1.1582 × 10^3 | 1.0818 × 10^3 | 1.1124 × 10^3 | 1.0606 × 10^3
F8 | STD | 1.3435 × 10^1 | 4.1773 × 10^1 | 1.3954 × 10^1 | 1.2986 × 10^1 | 2.1363 × 10^1 | 2.8809 × 10^1 | 6.0159 × 10^1 | 4.8441 × 10^1 | 4.2866 × 10^1 | 1.9962 × 10^1
F9 | Mean | 9.2532 × 10^2 | 1.1168 × 10^3 | 1.1723 × 10^3 | 1.2193 × 10^3 | 1.2265 × 10^3 | 1.0946 × 10^3 | 1.4012 × 10^3 | 1.1713 × 10^3 | 1.2925 × 10^3 | 1.2700 × 10^3
F9 | STD | 8.9037 × 10^1 | 4.8512 × 10^1 | 1.0096 × 10^2 | 7.0924 × 10^1 | 3.9698 × 10^1 | 2.8493 × 10^1 | 9.0448 × 10^1 | 6.2147 × 10^1 | 7.4593 × 10^1 | 2.1871 × 10^1
F10 | Mean | 2.5973 × 10^3 | 2.7618 × 10^3 | 2.9140 × 10^3 | 2.9511 × 10^3 | 6.2658 × 10^3 | 6.7547 × 10^3 | 9.5217 × 10^3 | 8.1038 × 10^3 | 8.3857 × 10^3 | 7.0936 × 10^3
F10 | STD | 5.4336 × 10^2 | 7.3431 × 10^2 | 1.8091 × 10^3 | 7.5514 × 10^2 | 6.6441 × 10^2 | 8.1849 × 10^2 | 1.4181 × 10^3 | 9.4484 × 10^2 | 8.4668 × 10^2 | 2.5924 × 10^2
F11 | Mean | 2.0335 × 10^3 | 5.2370 × 10^3 | 6.2135 × 10^3 | 8.0269 × 10^3 | 8.3906 × 10^3 | 7.2466 × 10^3 | 1.0971 × 10^4 | 7.8590 × 10^3 | 8.9508 × 10^3 | 9.0488 × 10^3
F11 | STD | 9.3474 × 10^1 | 5.6362 × 10^2 | 6.9241 × 10^2 | 9.3474 × 10^2 | 2.5459 × 10^3 | 9.2996 × 10^2 | 1.0103 × 10^3 | 1.1171 × 10^3 | 9.6015 × 10^2 | 3.3310 × 10^2
F12 | Mean | 1.2002 × 10^3 | 1.2003 × 10^3 | 1.2004 × 10^3 | 1.2022 × 10^3 | 1.2024 × 10^3 | 7.5337 × 10^7 | 1.3411 × 10^8 | 1.9224 × 10^7 | 2.0790 × 10^7 | 3.1631 × 10^7
F12 | STD | 9.8200 × 10^-2 | 1.4300 × 10^-1 | 1.3970 × 10^-1 | 9.1830 × 10^-1 | 1.0530 × 10^0 | 4.1621 × 10^7 | 4.8072 × 10^7 | 6.0163 × 10^6 | 1.3210 × 10^7 | 4.3712 × 10^6
F13 | Mean | 1.3001 × 10^3 | 1.3004 × 10^3 | 1.3005 × 10^3 | 1.3005 × 10^3 | 1.3007 × 10^3 | 1.3007 × 10^3 | 1.3006 × 10^3 | 1.3006 × 10^3 | 1.3005 × 10^3 | 1.3004 × 10^3
F13 | STD | 8.3900 × 10^-2 | 1.0010 × 10^-1 | 9.9200 × 10^-2 | 8.7300 × 10^-2 | 1.2570 × 10^-1 | 8.4800 × 10^-2 | 9.3300 × 10^-2 | 1.2410 × 10^-1 | 8.9800 × 10^-2 | 4.2300 × 10^-2
F14 | Mean | 1.4000 × 10^3 | 1.4000 × 10^3 | 1.4003 × 10^3 | 1.4003 × 10^3 | 1.4004 × 10^3 | 1.4109 × 10^3 | 1.4004 × 10^3 | 1.4006 × 10^3 | 1.4003 × 10^3 | 1.4003 × 10^3
F14 | STD | 4.8800 × 10^-2 | 1.5769 × 10^-1 | 1.1590 × 10^-1 | 1.2120 × 10^-1 | 1.1279 × 10^-1 | 9.0474 × 10^0 | 1.6710 × 10^-1 | 2.6040 × 10^-1 | 7.6800 × 10^-2 | 2.0600 × 10^-2
F15 | Mean | 1.5109 × 10^3 | 1.5245 × 10^3 | 1.5277 × 10^3 | 1.5274 × 10^3 | 1.5374 × 10^3 | 2.6384 × 10^3 | 2.9205 × 10^3 | 1.5220 × 10^3 | 1.9248 × 10^3 | 1.5464 × 10^3
F15 | STD | 2.4730 × 10^0 | 1.3852 × 10^1 | 7.7099 × 10^0 | 5.9136 × 10^0 | 1.3941 × 10^1 | 2.1549 × 10^3 | 1.4458 × 10^3 | 6.4046 × 10^0 | 8.5514 × 10^1 | 3.7090 × 10^0
F16 | Mean | 1.6115 × 10^3 | 1.6206 × 10^3 | 1.6219 × 10^3 | 1.6219 × 10^3 | 1.6224 × 10^3 | 1.6210 × 10^3 | 1.6225 × 10^3 | 1.6211 × 10^3 | 1.6227 × 10^3 | 1.6223 × 10^3
F16 | STD | 5.7080 × 10^-1 | 5.4910 × 10^-1 | 7.6120 × 10^-1 | 4.8754 × 10^-1 | 6.2289 × 10^-1 | 1.0547 × 10^0 | 4.6549 × 10^-1 | 6.7810 × 10^-1 | 6.1230 × 10^-1 | 1.9115 × 10^-1
F17 | Mean | 2.2612 × 10^4 | 2.3816 × 10^4 | 2.6387 × 10^4 | 3.2701 × 10^4 | 3.1594 × 10^4 | 4.2176 × 10^6 | 6.1585 × 10^7 | 1.8051 × 10^6 | 2.0263 × 10^6 | 2.9514 × 10^6
F17 | STD | 9.1357 × 10^3 | 2.3361 × 10^4 | 1.2612 × 10^4 | 1.3041 × 10^4 | 2.2971 × 10^4 | 3.1345 × 10^6 | 3.4385 × 10^7 | 1.0147 × 10^6 | 1.9203 × 10^6 | 3.4245 × 10^5
F18 | Mean | 2.5416 × 10^3 | 3.6805 × 10^3 | 3.7324 × 10^3 | 3.7618 × 10^3 | 4.1837 × 10^3 | 1.7814 × 10^7 | 7.7050 × 10^5 | 4.1228 × 10^3 | 3.4029 × 10^4 | 2.2305 × 10^4
F18 | STD | 6.2703 × 10^2 | 1.6624 × 10^3 | 1.7083 × 10^3 | 1.8009 × 10^3 | 1.6298 × 10^3 | 4.0682 × 10^7 | 1.4604 × 10^6 | 1.5807 × 10^7 | 8.1861 × 10^3 | 2.6244 × 10^3
F19 | Mean | 1.9125 × 10^3 | 1.9166 × 10^3 | 1.9325 × 10^3 | 1.9211 × 10^3 | 1.9241 × 10^3 | 1.9831 × 10^3 | 2.0470 × 10^3 | 1.9380 × 10^3 | 1.9729 × 10^3 | 1.9335 × 10^3
F19 | STD | 3.3048 × 10^0 | 1.2003 × 10^1 | 3.1423 × 10^0 | 2.3881 × 10^0 | 1.1976 × 10^1 | 2.7042 × 10^1 | 7.3105 × 10^1 | 1.5776 × 10^1 | 3.2126 × 10^1 | 3.9217 × 10^0
F20 | Mean | 2.3175 × 10^3 | 2.3671 × 10^3 | 2.4072 × 10^3 | 2.4251 × 10^3 | 2.6052 × 10^3 | 2.5660 × 10^4 | 2.5168 × 10^5 | 2.5270 × 10^4 | 1.0789 × 10^5 | 2.6022 × 10^4
F20 | STD | 7.4845 × 10^1 | 1.1310 × 10^2 | 1.3901 × 10^2 | 1.1199 × 10^2 | 2.4131 × 10^2 | 8.4631 × 10^3 | 2.1040 × 10^5 | 1.1150 × 10^4 | 6.9083 × 10^4 | 6.9989 × 10^3
F21 | Mean | 1.1428 × 10^4 | 1.2980 × 10^4 | 2.2481 × 10^4 | 1.2980 × 10^4 | 1.7789 × 10^4 | 3.2236 × 10^6 | 1.2556 × 10^7 | 8.1890 × 10^5 | 1.1928 × 10^6 | 5.3020 × 10^5
F21 | STD | 9.7132 × 10^3 | 1.7688 × 10^4 | 1.2557 × 10^4 | 3.1000 × 10^4 | 1.1035 × 10^4 | 2.4071 × 10^6 | 6.0423 × 10^6 | 4.6961 × 10^5 | 2.0040 × 10^6 | 1.5194 × 10^5
F22 | Mean | 2.7534 × 10^3 | 2.5182 × 10^3 | 3.0584 × 10^3 | 3.2347 × 10^3 | 3.0276 × 10^3 | 3.0275 × 10^3 | 4.3698 × 10^3 | 3.4246 × 10^3 | 4.4362 × 10^3 | 3.2100 × 10^3
F22 | STD | 1.1710 × 10^2 | 1.3869 × 10^2 | 2.8654 × 10^2 | 1.8543 × 10^2 | 2.6189 × 10^2 | 2.3956 × 10^2 | 5.5036 × 10^2 | 2.9716 × 10^2 | 4.2964 × 10^2 | 1.4045 × 10^2
F23 | Mean | 2.5072 × 10^3 | 2.5072 × 10^3 | 2.5144 × 10^3 | 2.5144 × 10^3 | 2.5216 × 10^3 | 2.7265 × 10^3 | 2.7726 × 10^3 | 2.6656 × 10^3 | 2.6600 × 10^3 | 2.6444 × 10^3
F23 | STD | 3.2134 × 10^1 | 3.2161 × 10^1 | 4.4262 × 10^1 | 4.4304 × 10^1 | 5.2756 × 10^1 | 2.2742 × 10^1 | 2.5022 × 10^1 | 6.1307 × 10^0 | 9.8568 × 10^0 | 8.6800 × 10^-2
F24 | Mean | 2.6000 × 10^3 | 2.6000 × 10^3 | 2.6013 × 10^3 | 2.6047 × 10^3 | 2.6039 × 10^3 | 2.6000 × 10^3 | 2.6006 × 10^3 | 2.6888 × 10^3 | 2.7448 × 10^3 | 2.6900 × 10^3
F24 | STD | 3.7518 × 10^-7 | 1.8241 × 10^-6 | 20.294 × 10^-6 | 1.4201 × 10^-6 | 1.6743 × 10^-6 | 6.3000 × 10^-3 | 1.0636 × 10^0 | 1.0261 × 10^1 | 2.6492 × 10^1 | 1.6503 × 10^0
F25 | Mean | 2.7000 × 10^3 | 2.7000 × 10^3 | 2.7000 × 10^3 | 2.7000 × 10^3 | 2.7000 × 10^3 | 2.7278 × 10^3 | 2.7061 × 10^3 | 2.7261 × 10^3 | 2.7479 × 10^3 | 2.7272 × 10^3
F25 | STD | 0 | 0 | 0 | 0 | 0 | 7.0474 × 10^0 | 1.8895 × 10^1 | 6.0762 × 10^0 | 1.2458 × 10^1 | 2.3426 × 10^0
F26 | Mean | 2.7001 × 10^3 | 2.7001 × 10^3 | 2.7005 × 10^3 | 2.7800 × 10^3 | 2.7750 × 10^3 | 2.7923 × 10^3 | 2.7004 × 10^3 | 2.7006 × 10^3 | 2.7275 × 10^3 | 2.7004 × 10^3
F26 | STD | 1.5431 × 10^2 | 1.7264 × 10^2 | 1.8705 × 10^2 | 1.8879 × 10^2 | 2.0742 × 10^2 | 4.6954 × 10^1 | 9.4800 × 10^-2 | 1.1860 × 10^-1 | 7.3204 × 10^1 | 3.4300 × 10^-2
F27 | Mean | 2.9000 × 10^3 | 3.1341 × 10^3 | 3.8265 × 10^3 | 3.9540 × 10^3 | 3.7809 × 10^3 | 3.7615 × 10^3 | 4.8413 × 10^3 | 4.0325 × 10^3 | 4.9454 × 10^3 | 3.5695 × 10^3
F27 | STD | 9.8316 × 10^1 | 1.0943 × 10^2 | 3.0914 × 10^2 | 4.0366 × 10^2 | 1.7887 × 10^2 | 9.7524 × 10^1 | 1.2738 × 10^2 | 1.6164 × 10^2 | 7.7465 × 10^1 | 3.2127 × 10^2
F28 | Mean | 4.4248 × 10^3 | 4.2575 × 10^3 | 4.4560 × 10^3 | 4.6795 × 10^3 | 4.6716 × 10^3 | 5.0013 × 10^3 | 8.3661 × 10^3 | 4.9647 × 10^3 | 8.5051 × 10^3 | 4.2812 × 10^3
F28 | STD | 3.1162 × 10^2 | 1.0867 × 10^3 | 7.1191 × 10^2 | 4.0583 × 10^2 | 4.3825 × 10^2 | 6.3735 × 10^2 | 1.6951 × 10^3 | 6.8043 × 10^2 | 2.1457 × 10^3 | 6.6154 × 10^1
F29 | Mean | 1.7611 × 10^5 | 1.8690 × 10^5 | 2.0766 × 10^5 | 2.5024 × 10^5 | 2.8723 × 10^5 | 6.3729 × 10^6 | 4.4366 × 10^7 | 5.0505 × 10^7 | 1.4954 × 10^5 | 2.1996 × 10^5
F29 | STD | 1.4539 × 10^5 | 2.5081 × 10^5 | 1.0759 × 10^5 | 2.3749 × 10^5 | 1.8872 × 10^5 | 9.8300 × 10^6 | 2.7418 × 10^7 | 5.9499 × 10^7 | 1.3191 × 10^5 | 6.4697 × 10^4
F30 | Mean | 1.3793 × 10^4 | 1.3440 × 10^4 | 1.1795 × 10^4 | 1.5423 × 10^4 | 1.4625 × 10^4 | 1.7032 × 10^5 | 3.0252 × 10^5 | 7.2206 × 10^4 | 3.2486 × 10^4 | 3.1718 × 10^4
F30 | STD | 1.4711 × 10^3 | 1.6530 × 10^3 | 9.5112 × 10^2 | 1.6869 × 10^3 | 1.5527 × 10^3 | 6.6880 × 10^4 | 2.3657 × 10^5 | 3.8024 × 10^4 | 2.6221 × 10^4 | 4.5067 × 10^3
Table 3. Parameter settings of algorithms.
Algorithm | Parameter Settings
OLMBES | δ = 1.5, α = 10, R = 1.5, c1 = c2 = 2, k = 10, limit = 5
BES | δ = 1.5, α = 10, R = 1.5, c1 = c2 = 2
GWO | a_ini = 2
WOA | a_ini = 2
SSA | c1, c2 ∈ [0, 1]
BA | A_o = 0.5, pr = 0.5, f_min = 0, f_max = 2
CS | pa = 0.25
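For reproducibility, the OLMBES settings in Table 3 can be gathered into a single configuration object. The following sketch is illustrative only; the field names are not taken from the paper's code, and the comments gloss the likely roles of k and limit based on the OL strategy described above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OLMBESConfig:
    delta: float = 1.5   # δ in Table 3
    alpha: float = 10.0  # α in Table 3
    R: float = 1.5
    c1: float = 2.0
    c2: float = 2.0
    k: int = 10          # assumed: number of factors used in the OED grouping
    limit: int = 5       # stagnation threshold that triggers the OL strategy

config = OLMBESConfig()
print(config)
```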
Table 4. Coverage performance indicators of algorithms.
Algorithm | Coverage Rate | Uniformity
OLMBES | 0.9602 | 0.3485
BES | 0.9377 | 0.5255
GWO | 0.9333 | 0.5523
WOA | 0.9186 | 0.5974
SSA | 0.9348 | 0.5274
BA | 0.9371 | 0.5263
CS | 0.9263 | 0.5826
Table 5. Coverage rates with different numbers of nodes.
Number of Nodes | 24 | 26 | 28 | 30 | 32
OLMBES | 0.8890 | 0.9190 | 0.9421 | 0.9602 | 0.9742
BES | 0.8747 | 0.9027 | 0.9245 | 0.9377 | 0.9562
GWO | 0.8676 | 0.8806 | 0.9174 | 0.9333 | 0.9339
WOA | 0.8477 | 0.8807 | 0.9006 | 0.9186 | 0.9387
SSA | 0.8731 | 0.9018 | 0.9191 | 0.9348 | 0.9547
BA | 0.8726 | 0.9021 | 0.9233 | 0.9371 | 0.9537
CS | 0.8522 | 0.8812 | 0.9065 | 0.9263 | 0.9443
Table 6. Uniformity with different numbers of nodes.
Number of Nodes | 24 | 26 | 28 | 30 | 32
OLMBES | 0.4045 | 0.4066 | 0.3809 | 0.3485 | 0.3201
BES | 0.4522 | 0.4943 | 0.4816 | 0.5255 | 0.4838
GWO | 0.4633 | 0.5447 | 0.5392 | 0.5523 | 0.6153
WOA | 0.6001 | 0.5943 | 0.6031 | 0.5974 | 0.5982
SSA | 0.4680 | 0.4825 | 0.4984 | 0.5274 | 0.4865
BA | 0.4739 | 0.4791 | 0.4939 | 0.5263 | 0.4912
CS | 0.6193 | 0.5895 | 0.6367 | 0.5826 | 0.5972
Table 7. Parameter settings of different surveillance region sizes.
Surveillance Area Size | Sensing Radius | Communication Radius
20 m × 20 m | 2.5 m | 5 m
40 m × 40 m | 5 m | 10 m
60 m × 60 m | 7.5 m | 15 m
Table 8. Coverage rates of different surveillance region sizes.
Surveillance Area Size | 20 m × 20 m | 40 m × 40 m | 60 m × 60 m
OLMBES | 0.9602 | 0.9644 | 0.9624
BES | 0.9377 | 0.9462 | 0.9331
GWO | 0.9333 | 0.9410 | 0.9344
WOA | 0.9186 | 0.9251 | 0.9206
SSA | 0.9348 | 0.9375 | 0.9341
BA | 0.9371 | 0.9408 | 0.9366
CS | 0.9263 | 0.9330 | 0.9309
Table 9. Uniformity with different surveillance region sizes.
Surveillance Area Size | 20 m × 20 m | 40 m × 40 m | 60 m × 60 m
OLMBES | 0.3485 | 0.6567 | 1.0002
BES | 0.5255 | 0.9461 | 1.7508
GWO | 0.5523 | 0.9981 | 1.5662
WOA | 0.5974 | 1.2009 | 1.8364
SSA | 0.5274 | 1.0061 | 1.5945
BA | 0.5263 | 0.9821 | 1.5638
CS | 0.5826 | 1.1601 | 1.8108