1. Introduction
With the rapid development of information technology, reconnaissance and search technologies, exemplified by drones, have been widely applied in both military and civilian scenarios [
1]. Among these, path planning is a critical aspect of drone applications, aiming to determine the optimal path from a starting point to a target point while meeting various constraints such as obstacle avoidance, energy efficiency, and time limits. Effective path planning not only enhances mission efficiency but also significantly reduces operational costs and risks for drones. In reconnaissance and search missions, drones often need to inspect targets over long distances while navigating environments with potential threats. Additionally, drones are constrained by their mobility, endurance, and payload capacity, which pose significant challenges to collaborative path planning [
2]. Traditional path planning methods often fail to meet the high real-time demands of dynamic and unpredictable environments, resulting in slow responses, poor adaptability, and low computational efficiency. To address these challenges, this study investigates an improved dung beetle optimization (DBO) algorithm for drone path planning, which holds significant engineering value.
To address the problem of drone path planning, scholars worldwide have proposed various solutions. Regarding model construction, Zhang et al. [
3] established a drone path planning model by integrating optimization objectives such as energy consumption, trajectory cost, obstacle avoidance, smoothing, and stability, transforming the path planning problem into a multi-constraint objective function optimization task. Liu et al. [
4] proposed a path planning model tailored to the limitations of agricultural drones operating in complex hilly terrain, considering topographic features and agricultural scheduling requirements. Bai et al. [
5] developed a multi-drone collaborative path planning model based on multi-objective optimization, focusing on trajectory distance, time, threats, and coordination costs. Wang et al. [
6] introduced a comprehensive framework incorporating drone capabilities, terrain, complex regions, and dynamic mission requirements, along with a dynamic collaborative path planning algorithm for complete area coverage. Wang et al. [
7] formulated a mathematical model integrating multiple constraints such as flight distance, collision threats, and path stability, effectively converting the complex problem of drone formation path planning into an optimization problem. These advancements have enabled subsequent research to construct more practical drone path planning models.
Path planning algorithms can be categorized into two main types: traditional algorithms and intelligent algorithms. Traditional algorithms include the artificial potential field (APF) method [
8], Simulated Annealing (SA) [
9], A* algorithm [
10], and Rapidly-Exploring Random Tree (RRT) [
11]. However, these methods are prone to premature convergence, leading to local optima, and exhibit slow convergence speeds in large-scale search spaces. For instance, APF, as a local path planning method, conceptualizes drones as particles moving within an artificial potential field, where attractive and repulsive forces guide the drones’ direction [
12]. However, APF often falls into local optima and sometimes fails to provide feasible solutions [
The A* algorithm discretizes the search space into grids and searches for the lowest-cost sequence of grid cells, but its performance depends heavily on how the grid is segmented, and it incurs excessive computation time in complex spaces [
14]. RRT, a sampling-based algorithm, generates redundant nodes during iterations and frequently fails to find optimal paths [
15]. These limitations highlight the shortcomings of classical algorithms.
In contrast, intelligent algorithms offer faster convergence, broader applicability, and lower time complexity; representative examples include Ant Colony Optimization (ACO) [
16], Particle Swarm Optimization (PSO) [
17], and Genetic Algorithm (GA) [
18]. Researchers have improved these intelligent optimization algorithms for path planning. For example, Li et al. [
19] introduced the Metropolis criterion into ACO node selection to propose an improved MACO algorithm, which includes initial trajectory generation, trajectory correction, and smooth trajectory planning. Zhang et al. [
3] addressed the inefficiency and infeasibility of traditional algorithms in complex 3D environments by enhancing Differential Evolution (DE) with mutation crossover strategies, adaptive guidance mechanisms, and elite interference mechanisms, resulting in the multi-strategy improved DE (MSIDE) algorithm. Liu et al. [
20] proposed a multi-mechanism enhanced Grey Wolf Optimization (NAS-GWO) algorithm, incorporating evolutionary boundary constraints to improve search accuracy, Gaussian mutation and spiral functions to avoid local optima, and a sigmoid function to balance exploration and exploitation. Hu et al. [
21] developed a multi-algorithm hybrid ACO (MAHACO) with adaptive foraging strategies, multi-stage stochastic strategies, and aggregation mutation strategies to enhance exploration and diversity. Gu et al. [
22] introduced the IRIME algorithm for complex urban environments, integrating frost diffusion mechanisms, high-altitude condensation strategies, and lattice weaving strategies to enhance the RIME algorithm’s global exploration and avoid premature convergence. Pang et al. [
23] proposed a multi-objective cat swarm optimization with dual-archive mechanisms (MOCSO_TA) to balance convergence and population diversity.
Although these improved algorithms generally exhibit superior convergence performance for low-dimensional unimodal problems, they often fall into local optima when addressing high-dimensional multimodal problems. To overcome these challenges, the main contributions of this study are as follows:
1. An improved population initialization method to comprehensively cover the solution space, enhancing the uniformity of the initial solution distribution.
2. A sinusoidal motion strategy to mitigate local optima in the traditional DBO algorithm, improving global search capability and convergence speed for multi-drone path planning in dynamic environments.
3. Evolutionary strategies to enhance the global search ability of the “rolling dung beetle” and the local exploitation ability of the “small dung beetle”, while enabling “thief dung beetles” to evolve positions based on opposition-based learning (OBL), increasing population diversity.
Building on these strategies, an improved dung beetle optimization (IDBO) algorithm is proposed. Validation is performed using 23 benchmark functions from the CEC 2005 competition and six engineering models from the CEC 2020 competition. Results demonstrate that IDBO outperforms DBO in convergence performance and provides better solutions to various problems. Experiments under three threat scenarios further confirm the scientific validity of IDBO for drone path planning, effectively reducing flight costs and threats while improving planning efficiency and accuracy.
Compared with other related papers in the same field [
24], this paper has the following distinctive features:
1. Algorithm Improvement: This paper improves the dung beetle optimization (DBO) algorithm using a uniform initialization strategy, a sine–cosine function-based movement strategy, and a population evolution strategy. These strategies focus on updating the population positions, and the population evolution strategy is relatively novel.
2. Comprehensive Testing: The paper uses 23 commonly used benchmark functions from the 2005 Congress on Evolutionary Computation (CEC2005) and 6 commonly used engineering problem models from the 2020 Congress on Evolutionary Computation (CEC2020) for testing. This broader testing range is explained in detail in the paper, with the specifics of the test functions provided. Additionally, this paper compares 10 different metaheuristic algorithms, making the testing results more persuasive.
3. Performance Comparison: The improved algorithm proposed in this paper outperforms most other enhanced DBO algorithms. The experimental results show that the proposed algorithm performs better than others on 22 out of the 23 benchmark functions in CEC2005, and it converges the fastest on the engineering problem models in CEC2020.
4. Diverse Testing Scenarios: The paper tests the algorithm in different terrains and environments with varying obstacle densities and compares different cost–weight scenarios, providing a more comprehensive evaluation.
5. Results Visualization: In addition to presenting the total cost and computation time, the paper also provides values for the individual cost components and heading deviation. Furthermore, the optimization paths of each algorithm are visualized in both 3D and overhead views for each testing scenario, allowing readers to better compare the strengths and weaknesses of the algorithms.
This paper is organized as follows:
Section 2 introduces the cost components and constraints of the drone path planning model.
Section 3 details the traditional DBO algorithm and the proposed IDBO improvement strategies.
Section 4 evaluates IDBO’s performance by comparing it with nine algorithms on 23 benchmark functions and six engineering models.
Section 5 presents simulation experiments using different algorithms for drone path planning.
Section 6 concludes the study.
3. Improved Dung Beetle Optimization Algorithm and Algorithm Testing
3.1. Principle of the Dung Beetle Optimization Algorithm
Jiankai Xue and Bo Shen [
25] proposed a population-based optimization algorithm called the dung beetle optimization (DBO) algorithm in 2022, inspired by the rolling, dancing, foraging, stealing, and breeding behaviors of dung beetles. This algorithm effectively balances global exploration and local exploitation, demonstrating fast convergence and high accuracy. It is suitable for addressing the challenges of complex dynamic environments in multi-UAV path planning and task allocation.
The DBO algorithm first performs population initialization and fitness calculation, as in most metaheuristics. The dung beetle population is then divided into four subpopulations representing rollers, breeders, minors, and thieves, with the proportions set by the authors as 6:6:7:11. After partitioning the subpopulations, individuals update their positions according to different strategies, as follows:
- (1)
Roller Beetles
Rollers use sunlight to guide them as they roll along a straight line, and natural factors such as sunlight intensity and wind can affect their trajectory. When there are no obstacles, the position of a roller is updated as follows:

x_i(t + 1) = x_i(t) + α × k × x_i(t − 1) + b × Δx,  Δx = |x_i(t) − X^w|

In the equation, t represents the iteration number; x_i(t) represents the position of the i-th dung beetle at the t-th iteration; k is the perturbation (deflection) coefficient, with k ∈ (0, 0.2]; b is a constant in the range (0, 1); and α represents the deviation in the position of the dung beetle caused by natural factors, taking a constant value of −1 or 1. When it is 1, there is no deviation, and when it is −1, there is deviation in the direction. X^w is the worst position in the global population, and Δx represents the change in light intensity: a larger value indicates weaker light.
When a dung beetle encounters an obstacle and cannot move forward, it changes its direction by dancing, as follows:

x_i(t + 1) = x_i(t) + tan(θ) × |x_i(t) − x_i(t − 1)|

where θ represents the perturbation angle, with a range of [0, π]. If its value is 0, π/2, or π, the position of the dung beetle is not updated.
- (2)
Breeder Beetles
Breeder beetles use a boundary selection strategy to simulate the egg-laying area of female dung beetles. The egg-laying area is defined as follows:

Lb* = max(X* × (1 − R), Lb),  Ub* = min(X* × (1 + R), Ub),  R = 1 − t/T_max

where Ub* is the upper bound of the egg-laying area, Lb* is the lower bound of the egg-laying area, X* represents the best local position, Ub is the upper bound of the optimization problem, Lb is the lower bound of the optimization problem, R is a parameter that varies with the number of iterations, t represents the current iteration number, and T_max is the maximum number of iterations.
The position of the breeder beetle dynamically changes during the iteration process and can be expressed as:

B_i(t + 1) = X* + b1 × (B_i(t) − Lb*) + b2 × (B_i(t) − Ub*)

In the equation, B_i(t) represents the position of the i-th breeding ball at the t-th iteration, and b1 and b2 are two independent random vectors of size 1 × D, where D represents the dimensionality.
- (3)
Minor Beetles
An optimal foraging area needs to be established to guide the minor dung beetles in their search for food and to simulate their foraging behavior. The boundaries of the optimal foraging area are defined as follows:

Lb^b = max(X^b × (1 − R), Lb),  Ub^b = min(X^b × (1 + R), Ub)

In the equation, Lb^b represents the lower boundary of the optimal foraging area, X^b represents the global best foraging position, and Ub^b represents the upper boundary of the optimal foraging area.
Thus, the updated position for the minor beetles is:

x_i(t + 1) = x_i(t) + C1 × (x_i(t) − Lb^b) + C2 × (x_i(t) − Ub^b)

In the equation, x_i(t) represents the position of the i-th minor beetle at the t-th iteration; C1 is a random number following a normal distribution; and C2 is a random vector in the range (0, 1).
- (4)
Thief Beetles
The global best position X^b is the optimal position for food competition, so the position update for thief beetles is as follows:

x_i(t + 1) = X^b + S × g × (|x_i(t) − X*| + |x_i(t) − X^b|)

In the equation, x_i(t) represents the position of the i-th thief beetle at the t-th iteration; g is a random vector of size 1 × D following a normal distribution; and S is a constant value.
In summary, the DBO algorithm consists of six main steps:
a. Initialize the population of dung beetles and the parameters of the DBO algorithm.
b. Calculate the fitness values of all individuals based on the objective function.
c. Update the positions of all dung beetles.
d. Perform boundary checks.
e. Update the current best solution and its fitness value.
f. Repeat the above steps until the maximum number of iterations is reached, then output the global optimal value and its solution.
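The six steps above can be sketched in Python. The update rules follow the equations in this subsection, while the coefficients (k = 0.1, b = 0.3, S = 0.5), the 90% rolling probability, and the scalar bounds are simplifying assumptions rather than the paper's settings:

```python
import numpy as np

def dbo(obj, lb, ub, dim, pop=30, iters=200, seed=0):
    """Minimal DBO sketch with the 6:6:7:11 subpopulation split (illustrative)."""
    rng = np.random.default_rng(seed)
    X = lb + rng.random((pop, dim)) * (ub - lb)   # step a: initialize
    Xprev = X.copy()
    fit = np.array([obj(x) for x in X])           # step b: fitness
    best = X[fit.argmin()].copy()                 # global best X^b
    best_f = float(fit.min())
    for t in range(1, iters + 1):                 # steps c-f: main loop
        worst = X[fit.argmax()].copy()            # global worst X^w
        lbest = X[fit.argmin()].copy()            # current local best X*
        R = 1.0 - t / iters
        Xnew = X.copy()
        for i in range(pop):
            if i < 6:                             # rollers
                if rng.random() < 0.9:            # roll along a straight line
                    alpha = 1.0 if rng.random() > 0.1 else -1.0
                    Xnew[i] = X[i] + alpha * 0.1 * Xprev[i] + 0.3 * np.abs(X[i] - worst)
                else:                             # dance to change direction
                    theta = rng.uniform(0.05, np.pi - 0.05)
                    Xnew[i] = X[i] + np.tan(theta) * np.abs(X[i] - Xprev[i])
            elif i < 12:                          # breeders: lay eggs near X*
                lo = np.clip(lbest * (1 - R), lb, ub)
                hi = np.clip(lbest * (1 + R), lb, ub)
                Xnew[i] = lbest + rng.random(dim) * (X[i] - lo) + rng.random(dim) * (X[i] - hi)
            elif i < 19:                          # minors: forage near X^b
                lo = np.clip(best * (1 - R), lb, ub)
                hi = np.clip(best * (1 + R), lb, ub)
                Xnew[i] = X[i] + rng.normal() * (X[i] - lo) + rng.random(dim) * (X[i] - hi)
            else:                                 # thieves: compete for food at X^b
                g = rng.normal(size=dim)
                Xnew[i] = best + 0.5 * g * (np.abs(X[i] - lbest) + np.abs(X[i] - best))
        Xprev, X = X, np.clip(Xnew, lb, ub)       # step d: boundary check
        fit = np.array([obj(x) for x in X])
        if fit.min() < best_f:                    # step e: update best solution
            best_f = float(fit.min())
            best = X[fit.argmin()].copy()
    return best, best_f

# usage: minimize the 5-D sphere function
x, f = dbo(lambda v: float(np.sum(v * v)), -100.0, 100.0, dim=5)
```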
In practical applications, the algorithm should be demonstrated on numerical examples, solved in detail under the chosen initial parameters and test functions, to verify its effectiveness on real-world problems. This helps assess the practical value of the algorithm for tasks such as multi-drone trajectory planning and task allocation in dynamic environments.
3.2. Improvement Strategies for the Dung Beetle Optimization Algorithm
3.2.1. Improvement of Initial Solution Generation Method
Analysis of metaheuristic algorithms shows that the way initial solutions are generated affects both the accuracy and the speed of the algorithm's convergence. When solving optimization problems, the initial solutions should be distributed as uniformly as possible within the solution space. The traditional initialization method for the dung beetle optimization algorithm is as follows:

x_{i,j} = lb + α × (ub − lb)

In the equation, x_{i,j} represents the value of the j-th dimension of the i-th dung beetle, ub represents the upper bound, lb represents the lower bound, and α is a random number between 0 and 1.
Due to excessive randomness, the initialized population inevitably exhibits local clustering. To avoid this, many scholars generate initial solutions through chaotic maps or opposition-based learning. Chaos-based initial solutions improve somewhat on randomly generated ones, but chaotic maps introduce strong randomness that may produce unstable points. Opposition-based learning, in contrast, lacks randomness; it optimizes the initial solutions to some extent but does not significantly enhance their diversity. Therefore, based on the concept of symmetry, this paper proposes an initialization method, given in Formula (25), that covers the solution space as comprehensively as possible.
Figure 3 compares the initial solutions generated by the traditional method with those generated based on Equation (25). The left panel illustrates the population initialized by the traditional method, while the right panel illustrates the population initialized by the proposed method in this paper. The improved population is uniformly distributed within the solution space according to central symmetry, making it easier for the algorithm to locate regions near the optimal value during the early stages of iteration. It can be observed from the comparison that the proposed method effectively optimizes the uniformity of the initial solution distribution.
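To make the idea concrete, the following sketch pairs each randomly sampled individual with its centrally symmetric partner about the centre of the search space. The pairing rule x' = lb + ub − x is an illustrative assumption, since Formula (25) itself is not reproduced here:

```python
import numpy as np

def symmetric_init(pop, dim, lb, ub, seed=0):
    """Centrally symmetric initialization: half the population is sampled
    uniformly, the other half mirrored via x' = lb + ub - x (assumed rule)."""
    rng = np.random.default_rng(seed)
    half = pop // 2
    A = lb + rng.random((half, dim)) * (ub - lb)   # uniformly random half
    B = lb + ub - A                                # point-symmetric partners
    rest = lb + rng.random((pop - 2 * half, dim)) * (ub - lb)  # odd leftover
    return np.vstack([A, B, rest])

# usage: 30 individuals in a 2-D space [-5, 5]^2
X = symmetric_init(30, 2, -5.0, 5.0)
```

Because every individual has a mirror partner, the paired halves cover opposite regions of the space, which is one way to realize the central symmetry described above.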
3.2.2. Sine–Cosine Function-Based Movement Strategy
To address the issue of local optima in the dung beetle optimization algorithm, based on the concept of symmetry, this paper introduces a sine–cosine function-based search mechanism to enhance the algorithm’s global search capability and convergence speed. When a dung beetle encounters an obstacle and cannot move forward, it updates its position using Equation (26) as follows:
x_i(t + 1) = x_i(t) + r1 × sin(r2) × |r3 × P(t) − x_i(t)|,  p < 0.5
x_i(t + 1) = x_i(t) + r1 × cos(r2) × |r3 × P(t) − x_i(t)|,  p ≥ 0.5

where P(t) represents the position vector of the current best solution, and r2, r3, and p are uniformly distributed random numbers, with r2 belonging to [0, 2π], r3 belonging to [0, 2], and p belonging to [0, 1]. When the values of r1 sin(r2) and r1 cos(r2) lie within the ranges (1, 2] and [−2, −1), the current individual moves away from the optimal individual, allowing the algorithm to conduct a global search in the solution space. In the range [−1, 1], the current individual stays near the optimal individual, facilitating local exploitation by the algorithm.
The formula above updates the subsequent positions of individuals within or outside the space between their current and previous iteration positions. The random number r2 in Equation (26) determines whether the update occurs inside or outside, ensuring a balance between exploration and exploitation in the solution space. Additionally, r1, acting as a control parameter, primarily governs the amplitude of the sine–cosine function and is adaptively adjusted using Equation (27).
r1 = a × (1 − t / T_max)

In the equation, a is a constant, typically set to 2, which makes the range of r1 [0, 2], thereby ensuring that r1 sin(r2) and r1 cos(r2) lie within the range [−2, 2].
In the early stages of iteration, most dung beetles conduct global searches, preventing the algorithm from falling into local optima. As the number of iterations increases, the value of r1 gradually decreases and the fluctuations of r1 sin(r2) and r1 cos(r2) diminish, causing dung beetles to transition from global exploration to local exploitation and thereby enhancing the precision of the search.
By introducing the sine–cosine function-based movement strategy, this paper effectively addresses the issue of local optima in the dung beetle optimization algorithm, enhancing the algorithm’s global search capability and convergence speed. This provides an effective solution for dynamic environments in multi-drone trajectory planning and task allocation.
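As a concrete illustration, a single sine–cosine update step can be sketched as follows. The constant a = 2 and the uniform sampling of r2, r3, and p follow the description above; the function itself is illustrative rather than the paper's implementation:

```python
import numpy as np

def sine_cosine_step(x, best, t, T, a=2.0, rng=np.random.default_rng(0)):
    """One sine-cosine position update in the spirit of Equations (26)-(27).

    r1 = a - a * t / T decays linearly, so early iterations favour global
    search and late iterations favour local exploitation around `best`."""
    r1 = a - a * t / T                    # amplitude control, in [0, a]
    r2 = rng.uniform(0.0, 2.0 * np.pi)    # random angle
    r3 = rng.uniform(0.0, 2.0)            # random weight on the best solution
    p = rng.random()                      # sine/cosine switch
    trig = np.sin(r2) if p < 0.5 else np.cos(r2)
    return x + r1 * trig * np.abs(r3 * best - x)

# usage: repeatedly step a point relative to the best-known solution
x, best = np.array([3.0, -4.0]), np.zeros(2)
for t in range(1, 201):
    x = sine_cosine_step(x, best, t, 200)
```

As r1 shrinks, later steps become small perturbations, which is the exploration-to-exploitation transition described above.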
3.2.3. Population Evolution Strategy
The dung beetle optimization (DBO) algorithm divides the population into four subpopulations with different behaviors: rollers, breeders, minors, and thieves, in the ratio 6:6:7:11 given by its authors. After the subpopulations are divided, individuals update their positions according to different strategies. In traditional DBO, however, the subpopulation proportions remain fixed, so exploration and exploitation capacity remains relatively balanced throughout the iterations, whereas the algorithm actually requires more local exploitation capability in the later stages. To address this, this paper introduces a population evolution strategy: after all individuals complete one position update, the next iteration performs population evolution to enhance population diversity and convergence speed and to escape local optima. The evolution strategies for each subpopulation are as follows:
- 1.
Evolution of the roller subpopulation
Individual rollers expand the search range by increasing the absolute value of their position vector, enhancing global exploration capability. Equation (29) describes the evolutionary process of this exploratory subpopulation:

x_i(t + 1) = m × x_i(t)

where m is a random number between [1, 2]. If the current position is a local optimum, Equation (29) can help individuals escape from it with a certain probability: under its effect, the absolute value of the current position vector increases, allowing individuals to explore regions far from the current position and thus enhancing the global search capability. For a solution space centered around the origin, this movement causes the roller subpopulation to diffuse towards the edges of the solution space, allowing population individuals to explore other areas. This enhances the algorithm's global search capability, as illustrated in
Figure 4.
- 2.
Evolution of the minor beetle subpopulation
Individuals of the minor beetle subpopulation evolve their positions by conducting deep local searches around the current best solution, as shown in Equation (31),
where n is a random number ranging between [0, 2] and X^b represents the position vector of the current best solution. The position vector of the minor beetle randomly increases or decreases, exploring the vicinity of the current best solution. This evolutionary strategy fully utilizes the positional information of the current best solution to accelerate convergence and improve accuracy.
- 3.
Evolution of the thief beetle subpopulation
OBL is often used to enhance the performance of metaheuristic algorithms, as it helps increase population diversity. Therefore, in this paper, individual thief beetles evolve their positions based on OBL, as shown in Equation (33):

x'_i = lb + ub − x_i(t)

where lb is the lower bound of the problem space, ub is the upper bound, fit represents the objective function, and x'_i is the new position based on OBL; the better of x_i(t) and x'_i under fit is retained. OBL increases the diversity of the population, allowing it to escape local optima and explore new regions.
By combining the movement strategies of each subpopulation with their respective evolution strategies, the improvement of the dung beetle optimization algorithm is complete. The main steps of the improved dung beetle optimization (IDBO) algorithm are as follows:
- (1)
Initialize the dung beetle population and the parameters of the IDBO algorithm based on the improved method.
- (2)
Calculate the fitness value of all individuals according to the objective function.
- (3)
If the iteration count is odd, update the positions of the dung beetle subpopulations using Equations (14), (26), (19), (22), and (23); otherwise, perform population evolution for the subpopulations using Equations (29), (31), and (33).
- (4)
Perform boundary checks.
- (5)
Update the current optimal solution and its fitness value.
- (6)
Repeat the above steps until the maximum number of iterations is reached, then output the global optimum and its corresponding solution.
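The odd/even alternation in step (3) amounts to the following control flow, where update, evolve, and fitness stand in for the operations defined above:

```python
def idbo_loop(update, evolve, fitness, X, iters):
    """IDBO control flow sketch: odd iterations move the subpopulations,
    even iterations evolve them; the best solution is tracked across both."""
    best = min(X, key=fitness)
    for t in range(1, iters + 1):
        X = update(X, t) if t % 2 == 1 else evolve(X, t)
        cand = min(X, key=fitness)
        if fitness(cand) < fitness(best):
            best = cand
    return best

# toy usage: both phases contract a 1-D population toward zero
shrink = lambda X, t: [x * 0.5 for x in X]
refine = lambda X, t: [x * 0.9 for x in X]
b = idbo_loop(shrink, refine, abs, [5.0, -3.0, 8.0], 10)
```

Interleaving the two phases is what lets the evolved subpopulations feed improved positions back into the regular DBO updates on the following iteration.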
The flow of the improved dung beetle optimization algorithm is shown in
Figure 5.
4. Algorithm Testing
4.1. Introduction of the Tested Algorithms
In performance testing of intelligent optimization algorithms, test functions are often used to evaluate the algorithms' global and local search capabilities. The 2005 IEEE Congress on Evolutionary Computation (CEC 2005) provides a set of 23 commonly used benchmark functions for assessing optimizer performance, encompassing unimodal, multimodal, and fixed-dimensional multimodal problems. The proposed algorithm's search performance is evaluated on these test functions: the first seven benchmark functions are unimodal, the eighth to thirteenth are multimodal, and the rest are composite benchmark functions. In addition, the CEC 2020 competition provides practical engineering problem models commonly used in real-world scenarios, some of which were selected for function testing in this study. To verify the effectiveness of the improved dung beetle optimization algorithm, comparisons were conducted with nine commonly used metaheuristic algorithms across multiple dimensions. These nine algorithms include the Multi-Verse Optimizer (MVO) [
26,
27], Ant Lion Optimizer (ALO) [
28,
29], Sine–Cosine Algorithm (SCA) [
30,
31,
32], Grey Wolf Optimizer (GWO) [
33], Whale Optimization Algorithm (WOA) [
34,
35,
36], Bayesian Optimization (BO) [
37], Moth–Flame Optimization (MFO) [
Waterwheel Plant Algorithm (WWPA) [
39], and the dung beetle optimization (DBO) [
25].
The Multi-Verse Optimizer (MVO) simulates the behavior of multiverse populations under the combined influence of white holes, black holes, and wormholes. Similar to other swarm intelligence optimization algorithms, MVO includes two main phases: the exploration phase, governed by white and black holes, and the exploitation phase, managed by wormholes. The Ant Lion Optimizer (ALO), proposed by Mirjalili in 2015, is a novel metaheuristic swarm intelligence algorithm that achieves global optimization by mimicking the hunting mechanism of ant lions. Before hunting, an ant lion digs a funnel-shaped trap in sandy soil and waits for prey. Once an ant falls into the trap, the ant lion quickly captures it and repairs the trap for the next hunt. The Sine–Cosine Algorithm (SCA), introduced by Seyedali Mirjalili in 2016, is a new bio-inspired optimization algorithm that solves optimization problems using sine and cosine mathematical models and random candidate solutions. Although SCA is simple in structure, has few parameters, and is easy to implement, it suffers from low optimization precision, susceptibility to local optima, and slow convergence speed. The Grey Wolf Optimizer (GWO), developed by Mirjalili et al. in 2014, is a swarm intelligence optimization algorithm that mimics the leadership hierarchy and hunting mechanism of grey wolves. GWO boasts strong convergence performance, few parameters, and ease of implementation, finding extensive application in job scheduling, parameter optimization, and image classification. The Whale Optimization Algorithm (WOA), introduced by Mirjalili and Lewis in 2016, is a novel swarm intelligence optimization method inspired by the hunting behavior of humpback whales. WOA includes searching for prey, encircling, and spiral position updating stages, with its population update mechanism being independent and not requiring manual setting of various control parameter values. 
Compared to other swarm intelligence optimization algorithms, WOA's novel structure and fewer control parameters make it perform well on numerous numerical optimization and engineering problems. Bayesian Optimization (BO) finds the extremum of an expensive black-box function by fitting a probabilistic surrogate model of its shape and sampling where the model is most promising. Moth–Flame Optimization (MFO), proposed by Mirjalili in 2015, is a swarm intelligence optimization algorithm inspired by the night flight paths of moths; it is characterized by strong parallel optimization capability and overall performance, demonstrating high global search ability on non-convex functions. The Waterwheel Plant Algorithm (WWPA) simulates the natural hunting behavior of waterwheel plants, using the plants as search agents, and its performance has been evaluated on 23 unimodal and multimodal objective functions: on unimodal functions, WWPA shows strong exploitation ability, approaching optimal solutions, while on multimodal functions it demonstrates strong exploration capability, identifying the major optimal regions of the search space.
The corresponding parameters of the tested algorithms are listed in
Table 1.
4.2. CEC 2005 Function Testing
The CEC 2005 test functions, as shown in
Table 2, were used to compare the convergence accuracy and convergence speed of various algorithms. For each test function, the algorithm with the highest convergence accuracy had its final value bolded. If multiple algorithms achieved the optimal value, their convergence accuracy was compared. The test results are presented in
Table 3.
The analysis of the test functions included unimodal test functions, multimodal test functions, and composite benchmark test functions. The iterative processes for these three types of functions are shown in
Figure 6,
Figure 7 and
Figure 8, respectively.
As shown in the figures, for unimodal test functions, the improved dung beetle optimization (IDBO) algorithm demonstrated significantly better performance than the other algorithms over 500 iterations across all seven functions, converging faster and achieving higher accuracy.
The performance of IDBO on the six multimodal test functions was significantly superior to the other algorithms over 500 iterations, as illustrated in the figures above. For functions F9 and F11, both DBO and IDBO converged to the optimal solution; however, IDBO exhibited notably faster convergence than the original DBO. For the remaining multimodal functions, IDBO demonstrated higher convergence accuracy than the other algorithms. Even when the other algorithms were trapped in local optima, IDBO continued to search for the global optimum, showcasing superior convergence performance.
As depicted in the figures above, IDBO consistently converged to the optimal solution in most of the composite benchmark test functions. While some other algorithms also converged to the optimum on certain functions, IDBO consistently exhibited the fastest convergence speed, demonstrating superior convergence performance.
A comparative analysis involving the 10 algorithms revealed that IDBO outperformed the others on 22 of the 23 benchmark test functions. For the first seven unimodal benchmark functions and the subsequent six multimodal benchmark functions, IDBO demonstrated the best convergence performance on each function, confirming its adaptability to these two types of functions. Across the 10 composite benchmark test functions, IDBO generally obtained superior solutions compared to the other algorithms. Although a slight deficiency in convergence speed was observed on function F16, overall IDBO proved to be an effective algorithm for solving composite functions.
4.3. CEC 2020 Engineering Problem Testing
The CEC 2020 test set consists of real-world optimization problems, mathematical models, and artificially constructed benchmark problems, characterized by a certain level of complexity and diversity, covering different types of optimization problems. This study selected six common engineering problem models from the CEC 2020 dataset to test various optimization algorithms. The parameter settings for all algorithms were consistent with those described earlier. The six CEC 2020 models are detailed as follows.
- (1)
Shifted and Rotated Bent Cigar Function
Objective function: the Shifted and Rotated Bent Cigar Function, whose base form is

f(x) = x_1^2 + 10^6 × Σ_{i=2}^{n} x_i^2,

evaluated on shifted and rotated decision variables, where n is the number of decision variables and is set to 25.
The search space is [−100, 100]^n.
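The base Bent Cigar function can be written as a quick reference implementation; the shift vector o and rotation matrix M supplied by the benchmark suite's data files are omitted here:

```python
import numpy as np

def bent_cigar(x):
    """Base Bent Cigar: f(x) = x1^2 + 1e6 * sum(xi^2 for i >= 2).
    The CEC suite evaluates it on z = M @ (x - o); the shift o and
    rotation M come from the suite's data files (not reproduced here)."""
    x = np.asarray(x, dtype=float)
    return float(x[0] ** 2 + 1e6 * np.sum(x[1:] ** 2))

bent_cigar(np.zeros(25))  # the unshifted optimum has value 0
```

The 10^6 weighting on all but the first coordinate is what gives the function its narrow "cigar" valley, making it a standard test of exploitation along an ill-conditioned direction.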
- (2)
Shifted and Rotated Lunacek bi-Rastrigin Function
Objective function: the Shifted and Rotated Lunacek bi-Rastrigin Function, where n is the number of decision variables and is set to 25.
The search space is [−100, 100]^n.
- (3)
Expanded Rosenbrock’s plus Griewangk’s Function
Objective function: the Expanded Rosenbrock's plus Griewangk's Function, where n is the number of decision variables and is set to 25.
The search space is [−100, 100]^n.
- (4)
Composition Function 1
Objective function: Composition Function 1 from the CEC 2020 test suite.
The search space is [−10, 10]^n.
- (5)
Composition Function 2
Objective function: Composition Function 2 from the CEC 2020 test suite.
The search space is as defined in the test suite.
- (6)
Composition Function 3
Objective function: Composition Function 3 from the CEC 2020 test suite.
The search space is as defined in the test suite.
The aforementioned models were used to test the algorithms, with the results presented in
Table 4 and the iteration process illustrated in
Figure 9.
As shown in the figure above, most of the algorithms tested in each model converged to the optimal value, with IDBO consistently achieving the fastest convergence speed.
In summary, by testing and comparing the improved dung beetle optimization (IDBO) algorithm with other algorithms using 23 benchmark functions of different dimensions and six engineering problem models, the high accuracy and fast convergence performance of IDBO were effectively validated. When applied to multi-drone path planning and task allocation in dynamic environments, IDBO effectively avoided the drawbacks of slow convergence speed and low convergence accuracy.