1 Introduction

The cloud computing environment offers users better services for handling and storing large amounts of data in a secure manner. It has evolved as a result of its benefits, which include high computational power, low service cost, greater efficiency, scalability, convenience, and flexibility [1]. The application, storage, and connection portions are separated out; each section performs a variety of functions and satisfies consumer and company requirements across the globe [2]. It enables customers and companies to use programs over the internet without deploying them locally and to access their private files from any device. One of the fundamental components of cloud computing is virtualization, in which software partitions the physical infrastructure in order to produce distinct resources [3]. Virtualization is a method that allows different applications and operating systems to run simultaneously on an identical server [4]. When numerous systems share a small set of assets in a cloud-based setting, resource allocation among VMs is generally used to manage them [5]. Since every VM believes it is solely using a node, the sharing of resources is invisible. In order to meet each VM's business goal and ensure the best possible utilization of the accessible “Physical Machines (PMs)” while minimizing expenses (such as energy), resource allocation consequently involves dividing the host's limited assets among all the virtual machines in an effective manner [6]. In many instances, this becomes an optimization challenge.

Task scheduling is the practice of allocating jobs to servers in a way that maximizes resource utilization and minimizes the time spent on tasks [7]. The primary requirements in task scheduling are capacity, energy, delay, and cost effectiveness, together with the handling of different kinds of computation designs, information, and positions. Scheduling both dependent and independent activities and minimizing task migration are two additional key goals of task scheduling that can help shorten computation times and make better use of cluster assets [8]. Additionally, each network is built from large-scale information platforms, which may be constructed in either dynamic or static mode. In order to handle and plan dynamic as well as static task graphs, the task scheduling system needs to be capable of performing numerous tasks. Task scheduling concerns are mainly classified into deterministic and non-deterministic schedules [9]. Heuristics and “Guided Random Search-Based (GRSB)” scheduling are the subsets of deterministic scheduling; deterministic task scheduling is also known as static scheduling [10]. Because GRSB strategies require additional iterations in order to provide an improved schedule, they are more expensive than heuristics-based scheduling methods [11]. On the other hand, heuristics-based strategies can deliver approximate solutions almost instantly. They are further divided into three categories based on list, clustering, and duplication. Clustering-based algorithms are appropriate for homogeneous models, while duplication-based algorithms have a greater temporal complexity [12].

Cloud computing must be able to satisfy every client's demand with outstanding efficiency and “Quality of Service (QoS)”, as it serves thousands of consumers concurrently [13]. Therefore, in order to fulfill these demands equitably and effectively, a suitable task scheduling system must be built [14]. In the cloud environment, task scheduling becomes a crucial part: by assigning particular tasks to specific resources at specific times, it serves to plan tasks for improved resource utilization [15]. The primary goals of task scheduling algorithms are to raise throughput and QoS while simultaneously maintaining productivity and reducing expenses [16]. In the task scheduling process, the accessible, effective sources are arbitrarily utilized [17]. Several variables are contemplated in task scheduling methods, such as completion time and the cost of completing the task [18]. The basic difference between VM scheduling and VM placement is as follows. VM placement is one of the significant operations conducted as part of VM migration, and it identifies a suitable Physical Machine (PM) to host the VMs so that they perform well. VM scheduling in a cloud platform, in turn, helps to maximize resource utilization; additionally, scheduling the VMs helps to maximize the QoS, which benefits the cloud service provider.

In order to measure different services, diverse metrics and variables are evaluated to validate performance. Hence, the services are monitored against the terms and conditions whose breach constitutes an SLA violation. Generally, the SLA metrics are used to assess service providers, improve the quality of service, and drive further improvements; in this manner, they help to provide better response time and throughput analysis. Reliable SLA monitoring therefore yields better services from cloud providers, and the system detects SLA violations in order to provision resources and services in an adaptive manner.

Problems to be addressed and motivation for the developed algorithm: In recent times, diverse heuristic algorithms have emerged that provide deep insights into the VM scheduling process. With an increasing number of users, performance overhead and resource contention impact the overall performance of the VM scheduling system. An inefficient VM scheduling process leaves resources underutilized, which in turn maximizes energy consumption. As energy consumption increases, the workload is affected by uneven resource utilization, which introduces large complexity into the traditional algorithms. Because the environment changes rapidly, the adaptability of the VM scheduling system is also affected. In this scenario, throughput performance suffers, and additional system resources such as memory and CPU are consumed during VM scheduling over the cloud environment. Optimal VM scheduling is therefore necessary to improve the scalability and security that the traditional mechanisms lack. Owing to these limitations, an innovative optimal VM scheduling scheme for the cloud is modeled in this paper to tackle these issues. Hence, this research contributes an optimal VM resource scheduling process based on the developed algorithm and evaluates it with several metrics to demonstrate its performance.

The major contributions of the recommended VM scheduling in the cloud system are shown below:

  • To design optimal VM scheduling in a cloud system with a hybridized meta-heuristic algorithm that offers better efficiency in the cloud and manages resource utilization. The suggested model uses knowledge of the cloud sector to enhance the VM scheduling process.

  • To construct the HES-SLO algorithm by combining ESO and SLO to determine which tasks are assigned to which VM, with the aim of reducing power consumption, execution time, cost, and resource utilization.

  • To validate the effectiveness, numerous experiments are conducted and the suggested system is compared with several techniques to confirm that the hybrid optimization strategy is more effective at allocating resources in the cloud sector.

The remainder of the paper is structured as follows. Section 1.1 offers a systematic review of existing techniques. Section 2 presents the energy-aware optimal VM scheduling in a cloud platform using the hybrid optimization algorithm, including the hybrid heuristic algorithm for energy-aware optimal VM scheduling (Sect. 2.3) and the multi-objective constraints-based optimal resource allocation using a heuristic mechanism (Sect. 2.4). Experimental findings and the interpretation of results are described in Section 3, and Section 4 concludes the paper.

1.1 Literature Survey

In this systematic review, different works on VM scheduling are analyzed, covering several conventional and heuristic mechanisms. The advantages and disadvantages of the existing works are also reviewed, which reveals the scope for developing a new method for the VM scheduling process.

1.1.1 Related Works

In 2019, Lianyong et al. [19] have introduced the “QoS-aware VM scheduling (QVMS)” method for conserving energy in a cloud-assisted “Cyber-Physical System (CPS)”. Virtualized software was used to handle the assets of a cloud platform to increase resource utilization, and programs were frequently operated by VMs, which made it difficult for the designed model to satisfy the QoS criteria. To find the best options for VM migration, the “Non-dominated Sorting Genetic Algorithm-III (NSGA-III)” was used. Additionally, to choose the best scheduling approach, “Simple Additive Weighting (SAW)” and “Multiple Criteria Decision Making (MCDM)” were used. Finally, experiments and simulations were carried out to confirm the viability of the suggested approach.

In 2021, Wang et al. [20] have suggested a deep reinforcement learning methodology that utilized QoS features to improve centralized resource allocation. They proposed a QoS feature learning approach, inspired by enhanced stacked denoising auto-encoders, for obtaining more reliable QoS characteristic data during the deep learning phase, and a cooperative resource allocation method for multi-power devices learned during the reinforcement learning phase. Extensive testing showed that the approach effectively reduced energy usage while retaining the smallest failure rate when compared with other good resource scheduling solutions. The goal of balancing energy conservation and QoS improvement was thus accomplished.

In 2019, Qiu et al. [21] have suggested an “Energy Efficiency and Proportionality Aware VM Scheduling (EASE)” framework, which included data gathering as well as scheduling methods. The power consumption and characteristics of all servers in the EASE framework were initially determined by running specialized memory, computing, and hybrid tests. Based on these traits, servers were labeled and classified according to their preference for various incoming requests. Then, EASE performed a workload characterization technique on every VM to identify the type of work being done, by monitoring and tracking resource utilization such as memory, processing power, disk, and networking. Finally, EASE assigned VMs to servers by matching the task type of the VM with the server preferences. The idea behind EASE was to assign VMs to hosts so that the hosts would operate in the vicinity of the energy-efficiency peak or the near-best operating range. To keep hosts as close as reasonably possible to their ideal operating region when demands changed, EASE rescheduled and migrated the VMs to other hosts.

In 2023, Ajmera and Tewari [22] have suggested a “Green Particle Swarm Optimization (GPSO)” technique to improve the compromise between energy consumption and service breach. GPSO arranged VMs across power-efficient green hosts. Customers seek the most resources possible for task execution when using an “Infrastructure as a Service (IaaS)” cloud provider; to lower the running costs of the data facility and offer consumers affordable cloud-based services, cloud service suppliers must allocate resources effectively. Through the most efficient placement of VMs throughout the server collection, their study intended to increase resource utilization and decrease the network's electrical usage without straying from the agreed service levels. These "green" servers were environmentally friendly; the method searched the entire search space and discovered a VM schedule with the fewest servers in use in order to save electricity in the data facility. The suggested GPSO technique was tested against the most advanced VM scheduling schemes using the cloud model.

In 2019, Jacob and Pradeep [23] have suggested a hybridization of two optimization methods, “Cuckoo Search (CS)” and “Particle Swarm Optimization (PSO)”, putting forward a novel hybrid technique known as CPSO. The suggested model's major goal was to lower the cost, makespan, and incidence of deadline violations. The suggested CPSO algorithm's efficiency was assessed with the CloudSim toolkit. According to the modeling findings, the proposed model minimized the makespan, expenses, and schedule-violation percentage. Task scheduling was carried out using CPSO, which offered the benefits of quick convergence and ease of implementation; as a result, the scheduling method was capable of obtaining a near-optimal result.

In 2018, Praveen et al. [24] have introduced a system with two steps, namely resource allocation and task scheduling. For efficient allocation of resources that would minimize duration while increasing throughput, a group-based optimization strategy and the shortest-job-first scheduling method were proposed. Experiments were carried out to create precise models using synthetic data within a diverse cloud platform. The suggested strategy significantly increased the system's efficiency in terms of makespan duration and throughput.

In 2019, Tong et al. [25] have proposed an innovative heuristic method based on “Biogeography-based Optimization (BBO)”, the “Hybrid Migrating BBO (HMBBO)” approach, which combined the BBO migration operator with PSO. Both approaches addressed the issue of task scheduling for directed acyclic graphs in the context of cloud computing. The fundamental goal of the strategy was to exploit the benefits of both PSO and BBO while reducing their disadvantages. To speed up the search procedure in HMBBO, the navigation approach within the BBO migration design was hybridized to assess the task hierarchy. A comparison test centered on WorkflowSim was carried out using the scheduling task duration as its primary objective. Simulations and actual studies were carried out to confirm the efficacy of HMBBO. The results of the experiment demonstrated that HMBBO outperformed a number of traditional heuristic methods with regard to broad search capability, quick convergence rate, and solution quality, and presented a novel approach to work scheduling in the cloud.

In 2018, Wei et al. [26] have suggested a smart QoS-aware task-planning system for application developers. The framework's main element was a deep reinforcement learning-assisted task scheduler. Without any prior data, the designed scheme was capable of learning to make acceptable job-to-VM decisions for continuously arriving workloads. According to findings from experiments utilizing simulated workloads and actual performance traces, the suggested task scheduling strategy could effectively decrease the average task reaction time, ensure a high level of QoS, and adjust to various traffic situations.

In 2020, Farzai et al. [27] have implemented a hybrid multi-objective genetic-based optimization algorithm for solving multi-objective problems. An extensive experiment was conducted with different variable correlation coefficients in VMs. The developed algorithm was suggested for enhancing the efficiency of the model with regard to power consumption, resource wastage, and the amount of data transferred in the network, and the analysis contributed strongly to better scalability. In 2023, Hosseini et al. [28] have designed a new energy-aware scheme and energy-efficient topology for the VM scheduling scheme, introducing a Multi-objective Discrete version of the JAYA (MOD-JAYA) algorithm that markedly reduced power consumption. With the help of the developed algorithm, the resultant analysis showed enriched performance over other existing approaches.

1.1.2 Problem Statement

In the cloud computing mechanism, task scheduling is a significant approach. The purpose of task scheduling is to distribute the resources within a particular time bound, so that the tasks can be executed with minimal makespan and cost to the user. To simplify task scheduling in a cloud environment, which handles plenty of cloud data, various methodologies are used. The merits and demerits of various methodologies for optimal VM scheduling are elaborated in Table 1.

  • When considering multiple users' requirements, the traditional methods do not offer good QoS for VM scheduling in cloud environments and thus lead to complex computations in experimental analysis. An efficient model is developed in this research work that offers better QoS while delivering better performance in terms of energy and power consumption.

  • Different fluctuations in workloads lead to VM failures that decrease performance on larger data, so the traditional methods are not efficient at handling different tasks in dynamic environments. In this research work, the different tasks are handled by the developed model to offer superior VM scheduling performance in cloud computing environments.

Table 1 Pros and cons of conventional optimal VM scheduling schemes

Problem formulation of the developed algorithm:

In [29], the VM consolidation problem is considered an Integer Linear Programming (ILP) problem: it is formally stated as minimizing the number of physical machines used to host the VMs under the VM consolidation scheme. Equation (1) represents the number of active servers, which is minimized subject to several constraints.

$$ \min \,f(x) = \sum\limits_{i = 1}^{l} {x_{i} } $$
(1)

subject to:

$$ \min \left( {Pc + Et + Ct + Ur} \right) $$
(2)

In this existing research work, the PM cannot host more VMs once it exceeds its constraints, such as memory and CPU utilization. As memory and CPU utilization increase, the power consumption also increases in the existing approaches.

We develop an effective model that minimizes the power consumption \(Pc\), execution time \(Et\), resource utilization \(Ur\), and cost \(Ct\), thereby offering better performance for VM scheduling in cloud platforms. On this basis, the scheduling can provide better quality of service, minimizing cost and power consumption to save energy. To achieve this, the optimal assignment of tasks to VMs is determined using the developed HES-SLO algorithm; this optimal selection significantly reduces the power, cost, resource utilization, and execution time. The parameters of the developed algorithm are examined over a range of population sizes and iteration counts.
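As an illustration of this formulation, the following minimal sketch (not the authors' implementation; all names and the toy assignment are assumptions) evaluates Eq. (1) as the number of active physical machines for a candidate VM-to-PM assignment and Eq. (2) as the combined objective to be minimized.

```python
# Illustrative sketch of Eqs. (1)-(2); names and the toy assignment are assumptions.

def active_servers(assignment, num_pms):
    """Eq. (1): number of PMs hosting at least one VM (x_i = 1 if PM i is active)."""
    used = set(assignment)                      # assignment[vm] = index of the hosting PM
    return sum(1 for i in range(num_pms) if i in used)

def combined_cost(pc, et, ct, ur):
    """Eq. (2): the aggregate objective Pc + Et + Ct + Ur to be minimized."""
    return pc + et + ct + ur

# toy usage: 4 VMs placed on 3 PMs
assignment = [0, 0, 2, 2]
print(active_servers(assignment, num_pms=3))    # -> 2 active servers
print(combined_cost(pc=12.0, et=3.5, ct=8.0, ur=0.6))
```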

2 Energy Aware Optimal Virtual Machine Scheduling in Cloud Environment Using Hybrid Optimization Algorithm

This section shows how the VM scheduling process is performed in the cloud sector, with a brief discussion given in Sect. 2.1; that sub-section also covers different machine learning approaches and their performance in this scheduling process. Sub-Sect. 2.2 then presents the optimal virtual machine scheduling process with the developed methodology and its effective performance.

2.1 Virtual Machine Scheduling in the Cloud

The efficiency with which requested resources are delivered to clients through the web reflects the adaptability of computing technologies. The centralized architecture of the cloud manages a variety of functions, a few of which can be difficult and resource-intensive. Scheduling methods and processes provide the link between VMs organized in data centers and the jobs uploaded by the end-user, and ought to be used to match the jobs with the resources required to carry them out while keeping accessibility in mind. VM scheduling in the cloud involves three significant entities: the cloud user, the cloud provider, and the cloud scheduler. Cloud users need their tasks carried out in accordance with specific QoS standards; expense and timeline are often the two significant QoS issues consumers are most worried about. Every subtask may need a number of CPU cores to complete, and all of the sub-tasks of the same task must run simultaneously on a single machine node. The cloud provider holds the hardware assets in the data center, which may include a multitude of processing nodes. These computational nodes are heterogeneous, because separate computing nodes can have various CPU cores, processing speeds, memory capacities, and costs. The cloud scheduler oversees the accumulation of VM demands, collects all relevant computation-node resource data, matches every VM demand to an appropriate computation node, and allocates the necessary resources to fulfill the VM demands in accordance with the QoS specifications. Once a VM demand is properly routed to a computation node and has begun running, it consumes the allocated resources non-preemptively. The cloud scheduler is additionally in charge of releasing the allocated resources once a VM request's tasks have been satisfactorily completed, and the consumers compensate the provider for the resources that were supplied.

VM scheduling is carried out by a scheduling method that assigns the VMs in an appropriate order to execute the tasks within the slotted time, thereby further enhancing resource utilization. However, several challenges arise while scheduling the tasks, and numerous methods have been implemented to solve them. The “Ant-Lion Optimization (ALO)” [30] system was designed to resolve these challenges; it offers low migration time and correlation. However, CPU and memory size are not considered during task scheduling, and it is not able to take additional factors within the cloud context into account, including high demand, memory utilization, and overloading. The PSO algorithm was then suggested, followed by the effective Fruit-fly Hybridized Cuckoo Search (FCHS) [31] algorithm. However, the overhead for communication and violation is not taken into account, and it does not effectively enhance resource usage and power savings. Subsequently, multiple schemes were implemented, but they are not sufficient to offer excellent outcomes. Perhaps the most important strategy for reducing energy consumption in cloud computing centers is the organization of tasks. Thus, an effective energy-aware optimal VM scheduling scheme is designed using a hybrid optimization algorithm. Such optimization-based techniques balance QoS and energy optimization, support lowering energy use while preserving the right violation level, and offer controllability and portability. The pictorial depiction of VM scheduling in the cloud system is shown in Fig. 1.

Fig. 1
figure 1

Pictorial depiction of the VM scheduling in cloud system

2.2 Developed Energy Aware Optimal Virtual Machine Scheduling in Cloud

The suggested energy-aware optimal VM scheduling model based on the hybrid optimization algorithm is used to improve the efficiency of the cloud sector. The primary objective of this work is VM scheduling, which involves allocating VM queries to compute nodes while taking into consideration the QoS assurances and benefits offered to cloud consumers and cloud service providers. Different computers can have different processing rates and pricing, and cloud customers can have distinct QoS needs. Different types of jobs of varied sizes are uploaded to the cloud platform. The suggested model employs an algorithmic process to perform the tasks within the data storage facility; the allocation of jobs to the VMs constitutes its primary objective. For this purpose, the suggested technique uses a hierarchy process to regulate task precedence depending on the execution time and duration. When the jobs have been ranked and allocated, a job queue is employed for scheduling them (a small sketch of this ordering step is given after Fig. 2). The configurations are generated to perform the process, and the initialized configurations are then offered as input to the HES-SLO. The suggested HES-SLO supports the optimal selection of which tasks are assigned to which VM so as to reduce the power consumption, execution time, cost, and utilization of the resources. Finally, the hybrid HES-SLO-based model produces the resource-allocated output. A diagrammatic depiction of the designed energy-aware optimal VM scheduling model is shown in Fig. 2.

Fig. 2
figure 2

Diagrammatic depiction of the suggested energy-aware optimal VM scheduling in cloud system
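The ordering step described above can be illustrated with the following minimal sketch, assuming each task carries an estimated execution time and length; the field names and the priority rule (smaller estimated work first) are illustrative assumptions rather than the authors' exact hierarchy process.

```python
# Hypothetical sketch of the task-ranking and job-queue step; field names are assumed.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    priority: float
    name: str = field(compare=False)

def build_task_queue(tasks):
    """Rank tasks by estimated execution time x length and return a priority queue."""
    queue = [Task(priority=t["exec_time"] * t["length"], name=t["name"]) for t in tasks]
    heapq.heapify(queue)
    return queue

tasks = [{"name": "t1", "exec_time": 2.0, "length": 300},
         {"name": "t2", "exec_time": 0.5, "length": 500},
         {"name": "t3", "exec_time": 1.2, "length": 100}]
queue = build_task_queue(tasks)
while queue:
    print(heapq.heappop(queue).name)   # tasks leave the queue in priority order
```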

2.3 Hybrid Egret Swarm-based Sea Lion Optimization for Energy-Aware Optimal Virtual Machine Scheduling in Cloud Environment

This section presents the suggested hybridized approach. Sections 2.3.1 and 2.3.2 describe the basic characteristics of the existing ESO and SLO optimization algorithms for the VM scheduling process, after which the hybridized model is developed; a flowchart of the developed framework is also provided.

2.3.1 ESO

The sit-and-wait approach of the Snowy Egret and the Great Egret's aggressive strategy serve as the inspiration for “Egret Swarm Optimization (ESO)” [32], which incorporates the benefits of both strategies in an appropriate mathematical framework to model the behaviors. The three key elements of ESO are the sit-and-wait approach, the aggressive strategy, and the discriminant condition. One Egret team consists of three Egrets: Egret \(J\) uses a guiding forward technique, whereas Egrets \(K\) and \(L\) use random walking and encircling processes, respectively. Details for each component are given below.

While Egret \(L\) deliberately investigates around the positions of superior egrets, Egret \(J\) calculates its descent direction and searches depending on the gradient of the objective surface, and Egret \(K\) performs a global random walk. By doing this, ESO is better able to balance its exploitation and exploration efforts and to conduct swift investigations, and it is more likely to arrive at the optimal region of the optimization problem. By estimating the tangent plane, ESO distinguishes itself from other meta-heuristic methods and enables a direct descent towards the present optimal position.

Sit-and-Wait Strategy (Observation Equation): If the \(l^{th}\) egret squad's position is assumed to be \(a_{l} \in R^{q}\), where \(q\) is the problem's dimension, then \(D\left( \cdot \right)\) represents the Snowy Egret's method for assessing the potential availability of food at the current observation state \(z_{l}\). The term \(b_{l}\) is the prey estimate for the present location, as in Eq. (3).

$$ b_{l} = D\left( {z_{l} } \right) $$
(3)

Moreover, the estimation process is parameterized using Eq. (4).

$$ \overline{{b_{l} }} = z_{l} \cdot a_{l} $$
(4)

The term \(a_{l} \in R^{q}\) indicates the weights of the estimation method. The term \(h_{l}\) indicates the error value, which can be identified by Eq. (5).

$$ h_{l} = \frac{{\left| {b_{l} - \overline{{b_{l} }} } \right|^{2} }}{2} $$
(5)

The pseudo-gradient \(j_{l}\) is recovered by taking the partial derivative of \(h_{l}\) with respect to \(z_{l}\), as in Eq. (6). The direction, termed \(g_{l}\), can then be determined by Eq. (7).

$$ j_{l} \left\{ \begin{gathered} = \frac{{\partial h_{l} }}{{\partial z_{l} }} \hfill \\ = \frac{{\frac{{\partial \left| {b_{l} - \overline{{b_{l} }} } \right|^{2} }}{2}}}{{\partial z_{l} }} \hfill \\ = \left( {b_{l} - \overline{{b_{l} }} } \right).a_{l} \hfill \\ \end{gathered} \right. $$
(6)
$$ g_{l} = \frac{{j_{l} }}{{\left| {j_{l} } \right|}} $$
(7)

The practical gradient is termed \(j_{l}\). In order to estimate the behavior of their prey, egrets make use of the experience of superior individuals while also applying their personal observations. The direction correction from the best position of the squad is \(\overline{{g_{k,l} }} \in R^{q}\), and the direction correction from the best position overall is \(\overline{{g_{j,l} }} \in R^{q}\). Equation (8) and Eq. (9) describe this process.

$$ \overline{{g_{k,l} }} = \frac{{a_{l,Bt} - a_{l} }}{{\left| {a_{l,Bt} - a_{l} } \right|}} \cdot \frac{{i_{l,Bt} - i_{l} }}{{\left| {i_{l,Bt} - a_{l} } \right|}} + \overline{{g_{l,Bt} }} $$
(8)
$$ \overline{{g_{j,l} }} = \frac{{a_{j,Bt} - a_{l} }}{{\left| {a_{j,Bt} - a_{l} } \right|}} \cdot \frac{{i_{j,Bt} - i_{l} }}{{\left| {i_{j,Bt} - a_{l} } \right|}} + \overline{{g_{j,Bt} }} $$
(9)

Here, the best value is denoted by \(Bt\). The combined gradient \(\overline{{j_{l} }} \in R^{q}\) is mathematically designed in Eq. (10).

$$ \overline{{j_{l} }} = \left( {1 - u_{k} - u_{j} } \right) \cdot g_{l} + u_{k} \cdot \overline{{g_{k,l} }} + u_{j} \cdot \overline{{g_{j,l} }} $$
(10)

Here, the terms \(u_{k}\) and \(u_{j}\) lie in \(\left[ {0,0.5} \right]\). Equation (11) explains the updated weights.

$$ \begin{aligned} & p_{l} = \alpha_{1} \cdot p_{l} + \left( {1 - \alpha_{1} } \right).\overline{{j_{l} }} \\ & y_{l} = \alpha_{2} \cdot y_{l} + \left( {1 - \alpha_{2} } \right).\left( {\overline{{j_{l} }} } \right)^{2} \\ & z_{l} = \frac{{z_{1} - p_{l} }}{{\sqrt {y_{l} } }} \\ \end{aligned} $$
(11)

The values of \(\alpha_{1}\) and \(\alpha_{2}\) are 0.9 and 0.99, respectively. Based on the judgment for the present position of Egret \(J\), the new position \(a_{d,l}\) is calculated by Eq. (12), and its fitness value \(\overline{{b_{d,l} }}\) is identified by Eq. (13).

$$ a_{d,l} = a_{l} + Sp_{d} \cdot \exp \left( {\frac{ - w}{{0.1 \cdot w_{Y} }}} \right) \cdot PK \cdot \overline{{j_{l} }} $$
(12)
$$ \overline{{b_{d,l} }} = i\left( {a_{d,l} } \right) $$
(13)

Here, the maximum number of iterations and the current iteration are termed \(w_{Y}\) and \(w\), respectively. The distance between the lower and upper limits of the solution space is expressed by \(PK\). The term \(Sp_{d} \in \left[ {0,1} \right]\) is the step-size factor of the \(J^{th}\) Egret. Hence, the value of fitness is determined as \(\overline{{b_{d,l} }}\).

Aggressive Strategy: The \(K^{th}\) Egret searches for prey at random, and its activities are depicted in Eq. (14) and Eq. (15).

$$ a_{e,l} = a_{l} - Sp_{d} \cdot \tan \left( {u_{d,l} } \right) \cdot \frac{PK}{{\left( {1 + w} \right)}} $$
(14)
$$ b_{e,l} = i\left( {a_{e,l} } \right) $$
(15)

The variable \(u_{d,l}\) is an arbitrary value lying in \(\left( {\frac{ - \pi }{2},\frac{\pi }{2}} \right)\); the next position of the Egret and its fitness are termed \(a_{e,l}\) and \(b_{e,l}\).

The \(L^{th}\) Egret chooses to aggressively pursue the prey; therefore, the encircling function is utilized as the method for upgrading the location, as shown in Eqs. (16) and (17).

$$ \begin{aligned} & G_{k} = a_{l,Bt} - a_{l} \\ & G_{j} = a_{j,Bt} - a_{l} \\ \end{aligned} $$
(16)
$$ \begin{aligned} & a_{f,l} = \left( {1 - u_{k} - u_{j} } \right) \cdot a_{l} + u_{k} \cdot G_{k} + u_{j} \cdot G_{j} \\ & b_{f,l} = i\left( {a_{f,l} } \right) \\ \end{aligned} $$
(17)

The distance between the optimal and the present location is indicated by \(G_{k}\) for the squad's best position and by \(G_{j}\) for the overall best position. The expected position of the \(L^{th}\) Egret is \(a_{f,l}\). The terms \(u_{k}\) and \(u_{j}\) are random values lying in the range \(\left[ {0,0.5} \right]\).

Termination Condition: Each member of the Egret squad proposes its choice, and the squad then adopts the best joint action. The solution matrix for Egret squad \(l\) can be determined by Eqs. (18) to (21).

$$ a_{v,l} = \left[ {\begin{array}{*{20}c} {a_{d,l} } & {a_{e,l} } & {a_{f,l} } \\ \end{array} } \right] $$
(18)
$$ b_{v,l} = \left[ {\begin{array}{*{20}c} {b_{d,l} } & {b_{e,l} } & {b_{f,l} } \\ \end{array} } \right] $$
(19)
$$ f_{l} = \arg \min \left( {b_{v,l} } \right) $$
(20)
$$ a_{l} = \left\{ {\begin{array}{*{20}l} {a_{v,l} \left[ {f_{l} } \right],} \hfill & {if\,\,b_{v,l} \left[ {f_{l} } \right] < b_{l} } \hfill \\ {a_{l} ,} \hfill & {else} \hfill \\ \end{array} } \right. $$
(21)

If the value of \(b_{v,l}\) at index \(f_{l}\) is less than the present fitness value \(b_{l}\), the Egret squad accepts the option; otherwise, if an arbitrary number \(u \in \left( {0,1} \right)\) is smaller than 0.3, there is still a chance to accept the inferior plan. The pseudo-code for ESO is depicted in Algorithm 1, and a simplified sketch of the selection step follows it.

figure a
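A hedged sketch of the selection step in Eqs. (18)-(21) is given below: each squad compares its three candidate positions (guide, random-walk, and encircling) and accepts the best one, or, with probability 0.3, an inferior one. Candidate generation is omitted, and all names are illustrative assumptions.

```python
# Illustrative sketch of the ESO acceptance rule (Eqs. (18)-(21)); not the authors' code.
import random

def eso_select(candidates, fitnesses, current_pos, current_fit, accept_prob=0.3):
    """candidates/fitnesses hold the three (a_d, a_e, a_f) / (b_d, b_e, b_f) values."""
    k = min(range(len(fitnesses)), key=lambda i: fitnesses[i])   # Eq. (20): best candidate index
    if fitnesses[k] < current_fit:                               # Eq. (21): accept a better plan
        return candidates[k], fitnesses[k]
    if random.random() < accept_prob:                            # chance to accept an inferior plan
        return candidates[k], fitnesses[k]
    return current_pos, current_fit                              # otherwise keep the old position
```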

2.3.2 SLO

The three phases of the Sea Lion Optimization algorithm (SLO) [33], namely tracking, encircling, and attacking the prey, are modeled mathematically in the following section. Every single sea lion is viewed here as a user demand, and the prey being hunted are the simulated machines.

Phase of detecting and pursuing: Sea lions use their whiskers to determine the dimensions, shape, and position of prey. In order to recognize the target and its position, the motion of the whisker must be contrary to the orientation of the water's waves. After locating the target, the leader signals the remaining sea lions in its clique to pursue it, and the other sea lions update their positions towards the prey; the lion performing this behavior is designated as the leader. The intended prey in this step is assumed to be the current best solution. These actions are described mathematically in Eq. (22).

$$ G = \left| {2E \cdot R\left( w \right) - V\left( w \right)} \right| $$
(22)

here the term \(w\) denotes the present iteration, \(G\) shows the distance between the intended prey and the sea lion, and \(V\left( w \right)\) and \(R\left( w \right)\) indicate the position vectors of the sea lion and the targeted prey, respectively. The random vector \(\overline{E}\) is multiplied by two, so that the search agent's range lies in the interval \(\left[ {0,2} \right]\), in order to locate the best or nearly best solution for the desired agent. The sea lion then moves towards the intended prey in the following iteration; the computational explanation of this behavior is provided in Eq. (23).

$$ V\left( {w + 1} \right) = R\left( w \right) - G \cdot F $$
(23)

here the following iteration is indicated by \(\left( {w + 1} \right)\). Additionally, \(F\) is decreased from 2 to 0 over the iterations, and this decrease enables the sea lion leader to move closer to the present prey.

Vocalization stage: Sea lions can live both under the water and on land. The speed at which a sea lion's sound travels under the water is four times greater than on land. They produce a variety of sounds to communicate with one another when hunting. When a sea lion spots its prey, it produces sounds to signal the other members of its pack to surround and attack it. The theoretical representation of this stage is explained in Eqs. (24)–(26).

$$ L = \left| {\frac{{Y_{1} \left( {1 + Y_{2} } \right)}}{{Y_{2} }}} \right| $$
(24)
$$ Y_{1} = {\text{Sin}} \phi $$
(25)
$$ Y_{2} = {\text{Sin}} \theta $$
(26)

here \(Y_{1}\) and \(Y_{2}\) refer to the speed of the sea lion leader's sound in water and in air, respectively, and \(L\) indicates the sea lion leader's sound velocity. The terms \(\sin \phi\) and \(\sin \theta\) represent the sound refracted into the water and the sound reflected back into the atmosphere, with \(\phi\) denoting the refraction angle and \(\theta\) the reflection angle.

Attacking stage: Sea lions have no trouble locating their intended prey. The leader spots the target and alerts the remaining sea lions with loud sounds. The intended prey represents the best candidate currently under consideration, and a newly arrived search agent has the ability to identify its target and surround it. Here, two mechanisms are used to mimic the way sea lions forage.

Dwindling encircling method: This behavior is fundamentally based on the value of \(F\), which decreases from 2 to 0 over a period of iterations.

Circle updating position: Sea lions pursue and hunt their prey in a circular motion while updating their positions. This behavior is expressed in Eq. (27).

$$ V\left( {w + 1} \right) = \left| {S\left( w \right) - V\left( w \right)} \right|\,{\text{Cos}} \left( {2\pi t} \right) + S\left( w \right) $$
(27)

here the term \(t\) is an arbitrary value in \(\left[ { - 1,1} \right]\), and \(\left| {S\left( w \right) - V\left( w \right)} \right|\) denotes the distance between the search agent and the optimal solution. A sea lion commences its hunting procedure by paddling in a circular pattern to capture its prey; the term \({\text{Cos}} \left( {2\pi t} \right)\) represents this process theoretically.

Searching for prey: A sea lion swims in a zigzag pattern in order to find the prey; the term \(F\) governs this random behavior. If the value of \(F\) is greater than one or less than minus one, the sea lions move away from the leader and their intended prey. During the exploitation stage, the sea lions adjust their positions according to the most effective search agent, whereas during the exploration phase they update their positions with respect to a randomly chosen sea lion, as in Eq. (28).

$$ G = \left| {2 \cdot EV_{Rnd} \left( w \right) - V\left( w \right)} \right| $$
(28)

here the term \(V_{Rnd} \left( w \right)\) denotes the sea lion chosen randomly from the present population. The main objective of this approach is to increase the network's connectivity and penetration rate. The pseudo-code for SLO is depicted in Algorithm 2, and a simplified sketch of these position updates follows it.

figure b
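The following minimal sketch illustrates the SLO position updates of Eqs. (22), (23), (27), and (28) for a single one-dimensional sea lion; the vector handling, the sound factor of Eqs. (24)-(26), and the parameter schedules are simplified assumptions rather than the exact SLO implementation.

```python
# Simplified, assumption-laden sketch of one SLO position update; not the authors' code.
import math
import random

def slo_step(pos, best_pos, rand_pos, F):
    E = 2 * random.random()                      # random factor scaled to [0, 2]
    if abs(F) >= 1:                              # exploration: follow a random sea lion, Eq. (28)
        G = abs(2 * E * rand_pos - pos)
        return rand_pos - G * F
    if random.random() < 0.5:                    # dwindling encircling, Eqs. (22)-(23)
        G = abs(2 * E * best_pos - pos)
        return best_pos - G * F
    t = random.uniform(-1, 1)                    # circle updating, Eq. (27)
    return abs(best_pos - pos) * math.cos(2 * math.pi * t) + best_pos
```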

2.3.3 Developed HES-SLO

In the VM scheduling process, diverse research has been analyzed and investigated alongside the rapid development of meta-heuristic algorithms. Analysis of these meta-heuristic algorithms shows advanced performance, yet several areas still need to be resolved for obtaining better outcomes. The existing algorithms suffer from complications due to their deeply structured search and from local optima problems that degrade performance, so they do not make the system effective in the optimal VM scheduling process; consequently, convergence is also affected as the number of users grows. In this context, the research work selects the ESO and SLO algorithms because their behavioral characteristics maximize performance: the adaptation of these algorithms gives more stable and accurate performance than other existing algorithms, and the stability and robustness of the model are enhanced in the VM scheduling process. The SLO algorithm, in particular, is effective in the scheduling process, as it facilitates reducing cost and power consumption in order to maximize resource utilization. The combination of the two algorithms is very effective compared with other algorithms. However, each alone is quite complex for solving discrete optimization problems and can offer unreliable performance due to the lack of consistent tuning. To mitigate the issues present in the ESO and SLO algorithms, this research work combines the two models to derive the newly developed hybridized HES-SLO algorithm. This hybridization approach helps to produce more accurate and reliable outcomes in the VM scheduling process.

Novelty of the developed HES-SLO algorithm: The HES-SLO algorithm is developed to execute the process of the suggested energy-aware optimal VM scheduling model. The developed HES-SLO algorithm provides excellent convergence behavior, stability, and comprehensive performance. Moreover, it can solve practical engineering issues and offers robustness to the model. It can efficiently increase the rate of connectivity for different numbers of users, offers the best coverage, resource utility, and connectivity, and minimizes delay. The position is updated using ESO if the current iteration \(w\) is divisible by both 5 and 7; otherwise the position is updated using SLO. Therefore, at every iteration this condition is checked to determine whether ESO or SLO is used, and in this way the two algorithms are hybridized (a sketch of this rule is given after Algorithm 3). The developed HES-SLO algorithm helps to optimally select the task assigned to each VM, further reducing power consumption, cost, execution time, and utilization of the resources. The developed HES-SLO algorithm helps control capacity and portability, decreases average reaction times, responds rapidly to unpredictability, and guarantees excellent outcomes. The pseudo-code for the suggested HES-SLO is depicted in Algorithm 3. Figure 3 illustrates the flowchart for the suggested HES-SLO algorithm.

Fig. 3
figure 3

Flowchart for the suggested HES-SLO algorithm

figure c
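The hybridization rule stated above can be sketched as follows, assuming the ESO and SLO update steps are supplied as functions (stand-ins for the sketches given earlier, not the authors' implementation): ESO is applied when the iteration counter is divisible by both 5 and 7 (read literally, every 35th iteration), and SLO otherwise.

```python
# Minimal sketch of the HES-SLO switching rule; update functions and signatures are assumed.

def hes_slo(population, fitness_fn, max_iter, eso_update, slo_update):
    best = min(population, key=fitness_fn)
    for w in range(1, max_iter + 1):
        use_eso = (w % 5 == 0) and (w % 7 == 0)            # ESO when w is divisible by 5 and 7
        update = eso_update if use_eso else slo_update     # otherwise SLO
        population = [update(sol, best, w, max_iter) for sol in population]
        candidate = min(population, key=fitness_fn)
        if fitness_fn(candidate) < fitness_fn(best):       # keep the global best solution
            best = candidate
    return best
```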

2.4 Multi-Objective Constraints-Based Optimal Resource Allocation in Cloud Using Heuristic Mechanism

This section describes the different multi-objective constraints in the optimal resource allocation in the cloud sector. It mainly concentrates on the objective function through which the developed model helps to maximize throughput and minimize energy consumption, thereby enhancing optimal VM resource allocation in the cloud.

2.4.1 Optimal Resource Allocation with HES-SLO

The cloud system can handle many task types, each of which has a different size. These jobs are organized in a task queue, and the recommended HES-SLO is used for allocating the tasks in accordance with the available resources. The proposed HES-SLO employs an ordering process to carry out the job responsibilities at the cloud data center. The main objective is to allocate the jobs to the VMs and to complete activities as quickly and efficiently as feasible; the idea of resource scheduling therefore remains crucial. As a result, the suggested model manages task precedence according to duration and size using the HES-SLO processing algorithm, and the jobs are handled in the job list after they have been given precedence. In order to fulfill additional requirements, such as resource availability and physical machine capability, the suggested HES-SLO algorithm seeks to schedule the jobs onto VMs with the lowest possible energy consumption, cost, time, and utilization of the resources. In order to provide the necessary QoS, the HES-SLO-based scheduling method can be employed to obtain an efficient distribution of resources across specified tasks within a certain amount of time. The aim of the suggested HES-SLO algorithm is to build a schedule that specifies the resources needed and when every task can be carried out. It must be stated that planning is currently an active research area in a variety of fields, such as computer system planning and workshop planning. There should be exactly one VM allocated for every task, every job's completion time must meet its due date, and the total quantity of resources needed to complete the tasks on an individual VM cannot exceed that machine's capability, as sketched below. VMs and jobs are independent, and several features are offered by the suggested HES-SLO algorithm.
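The constraints listed above can be illustrated with the following feasibility check (an assumption made for illustration, not the authors' code): each task is placed on exactly one VM, finishes before its deadline, and the total demand on any VM does not exceed its capacity.

```python
# Hypothetical feasibility check for the scheduling constraints stated above.

def is_feasible(assignment, tasks, vm_capacity):
    """assignment[task_id] = vm_id; tasks[task_id] holds 'finish', 'deadline', 'demand'."""
    load = {}
    for task_id, vm_id in assignment.items():            # exactly one VM per task
        t = tasks[task_id]
        if t["finish"] > t["deadline"]:                   # deadline constraint
            return False
        load[vm_id] = load.get(vm_id, 0) + t["demand"]    # accumulate per-VM demand
    return all(load[vm] <= vm_capacity[vm] for vm in load)   # capacity constraint
```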

2.4.2 Objective of Optimal Resource Allocation Model

The developed energy-aware optimal VM scheduling module employs the HES-SLO-based optimization technique to optimize the assessment criteria for determining an appropriate resource allocation for the provided VMs. The primary objective of the optimization method is to find the optimal resource among the options. The suggested HES-SLO algorithm aims to optimally allocate the tasks to VMs so as to reduce the power consumption, execution time, cost, and utilization of the resources. Equation (29) gives the objective function \(BN\) for the developed HES-SLO algorithm.

$$ BN = \mathop {\arg \,\min }\limits_{{\left\{ {Taskr_{g}^{t} } \right\}}} \left( {Pc + Et + Ct + Ur} \right) $$
(29)

here the task assigned to the VM is defined by \(Taskr_{g}^{t}\), which takes values in the range \(\left[ {1,5} \right]\). The solution diagram for the optimal resource allocation model is depicted in Fig. 4.

Fig. 4
figure 4

Solution diagram for optimal resource allocation model

2.4.3 Description of Objective Constraints

The terms \(Pc\), \(Et\), \(Ct\) and \(Ur\) represent the power consumption, execution time, cost, and utilization of the resources, and these functions are elaborated in Eqs. (30)–(33).

$$ Pc = R \times T $$
(30)

Here, the power and the time are denoted by \(R\) and \(T\), respectively.

$$ Et = C \times A \times T $$
(31)

Here, the instruction count and the average period to carry out the process are represented by the terms \(C\) and \(A\), respectively.

$$ Ct = Fc + \left( {Ac \times Tu} \right) $$
(32)

Here, the fixed expenses, the averaging variable, and the total units are denoted by \(Fc\), \(Ac\), and \(Tu\), respectively.

$$ Ur = \left( {\frac{Bt}{{At}} \times 100} \right) $$
(33)

Here, the busy time and the available time are indicated by the terms \(Bt\) and \(At\), respectively.
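A minimal sketch of Eqs. (29)-(33) is given below; the argument names follow the symbols in the text, and the simple unweighted sum in the objective is an assumption about how the terms are combined.

```python
# Illustrative evaluation of the objective terms; the unweighted sum is an assumption.

def power_consumption(R, T):           # Eq. (30): Pc = R * T
    return R * T

def execution_time(C, A, T):           # Eq. (31): Et = C * A * T
    return C * A * T

def cost(Fc, Ac, Tu):                  # Eq. (32): Ct = Fc + Ac * Tu
    return Fc + Ac * Tu

def resource_utilization(Bt, At):      # Eq. (33): Ur = (Bt / At) * 100
    return (Bt / At) * 100

def objective(Pc, Et, Ct, Ur):         # Eq. (29): value minimized over task-to-VM assignments
    return Pc + Et + Ct + Ur
```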

3 Result and Discussion

The experimental section presents different analyses in which different configurations are used to evaluate performance. This analysis is helpful for comparing the developed model with traditional and conventional techniques and for demonstrating its performance.

3.1 Experimental Setup

In this experimental setup, the results are implemented on the MATLAB platform. MATLAB is a computing platform that can be applied in several scientific applications; this implementation helps the model to reduce complexity and enhances the graph analysis, since the functionality in MATLAB is flexible and easily extensible when generating graphs. The recommended energy-aware optimal VM scheduling model was run with a population of 10 and a maximum of 250 iterations. The suggested scheme was analyzed against several baseline algorithms to demonstrate its superiority, namely “CuttleFish Optimization (CO)” [34], ALO [35], ESO [32], and SLO [35]. The simulation setup of this implementation platform is described as follows: an Intel Core i3 processor was used, which offers cost-efficient processing for the analysis, together with 8 GB of RAM on a 64-bit system. The parameter settings of the meta-heuristic algorithms are listed in Table 2.

Table 2 Parameter settings of the meta-heuristic algorithm

3.2 Configuration of Data Experimented

Table 3 illustrates the configuration of data experimented.

Table 3 Configuration of data experimented

3.3 Evaluation Metrics

The measures used to evaluate the suggested energy-aware optimal VM scheduling scheme are as follows.

(a) Throughput is calculated by Eq. (34).

$$ Pt = \frac{KP}{F} $$
(34)

Here, the terms \(KP\) and \(F\) define the number of generated units and the duration, respectively. The throughput is indicated by \(Pt\).

3.4 Analysis of Cost Function in Developed Scheme

The cost function of the suggested energy-aware optimal VM scheduling system is depicted in Fig. 5. At the 100th iteration, the cost function of the suggested model is 7.14% better than CO, 12.45% better than ALO, 8.45% better than ESO, and 7.47% better than SLO. The results show that the suggested scheme attains the best analysis when compared with other conventional techniques.

Fig. 5
figure 5

Cost function report for the suggested energy aware optimal VM scheduling scheme regarding (a) configuration-1, (b) configuration-2, (c) configuration-3, (d) configuration-4 and (e) configuration-5

3.5 Performance Analysis on the Recommended Energy-aware Optimal VM Scheduling Scheme

The performance analysis of the recommended energy-aware optimal VM scheduling scheme is shown in Fig. 6. In Fig. 6a, the energy consumption of the suggested HES-SLO is 47.36% better than CO, 31.03% better than ALO, 25.92% better than ESO, and 28.57% better than SLO at the 50th VM. In Fig. 6c, the execution time of the suggested model is 36.67%, 57.77%, 5%, and 72.85% better than CO, ALO, ESO, and SLO, respectively. The results demonstrate that the suggested HES-SLO-based VM scheduling model outperforms the other systems.

Fig. 6
figure 6

Performance analysis on the suggested energy aware optimal VM scheduling based on (a) energy consumption, (b) resource utilization, (c) execution time, and (d) throughput

3.6 Statistical Report of the Recommended Scheme Based on Objectives

Table 4 presents the performance-measure-based statistical report of the recommended scheme regarding resource utilization, energy consumption, execution time, and throughput. Considering the execution time, the mean of the suggested HES-SLO scheme is 42.05%, 60.87%, 17.99%, and 70.58% better than CO, ALO, ESO, and SLO, respectively. The results confirm the superior evaluation of the suggested scheme.

Table 4 Performance measures-based statistical analysis of the recommended optimal VM scheduling scheme

3.7 Statistical Performance of Recommended Scheme based on Task Assigned

The statistical performance analysis of the recommended optimal VM scheduling scheme is provided in Table 5. In Table 5, the best value of the suggested HES-SLO-based optimal VM scheduling scheme is 5.92% better than CO, 3.8% better than ALO, 27.83% better than ESO, and 4.06% better than SLO when considering task 4. The experimental findings prove that the suggested scheme has a high ability to offer the best result in the shortest period.

Table 5 Statistical performance analysis of the recommended optimal VM scheduling scheme

3.8 Computational Complexity of the Developed Model

The computational complexity of the proposed model is compared and validated against existing approaches, as shown in Table 6. The different configurations are analyzed and compared with the existing algorithms to assess the performance of the system.

Table 6 Computational complexity of the developed model

3.9 Power Consumption Analysis of the Developed Model

The power consumption analysis of the developed model is presented, and the resulting graphs are provided in Fig. 7. In Fig. 7a, power consumption is analyzed for different configurations. Power consumption is a significant issue in data centers and cloud server systems, and the graph shows that the developed model performs better, which helps to improve the lifetime of the network in the optimal VM scheduling process. Figure 7b analyzes resource dissipation, compared across the proposed and existing approaches, to assess the optimal resource scheduling process. In general, resource dissipation is caused by external environments that affect the computation of the model; nevertheless, the developed model performs reliably in terms of resource dissipation when compared with other existing algorithms. Throughout the entire validation setup, the developed model shows accurate performance in terms of both power consumption and resource dissipation.

Fig. 7
figure 7

Analysis of the developed model in terms of (a) Power consumption and (b) resource dissipation

Fig. 8
figure 8

Scalable analysis of the developed algorithm

3.10 Comparative Performance of the Developed Model with Diverse Research Works

The performance of the developed model is compared and evaluated against diverse methods, as tabulated in Table 7. The statistical analysis tests the performance of the developed model in terms of power consumption. Overall, considering the standard deviation, the performance of the developed model is 24.30%, 19.8%, and 12.2% better than CO, ALO, and ESO, respectively. This analysis shows that the developed model offers better performance than the existing approaches.

Table 7 Statistical analysis of the developed model in terms of power consumption model

3.11 Scalability Analysis of the Proposed Algorithm

The scalability analysis of the proposed algorithm, comparing its performance with the existing algorithms, is shown in Fig. 8. This scalability analysis is based on throughput for different numbers of tasks. Scalability analysis helps to identify issues early so that problems can be fixed, which saves time and effort. The analysis shows that the proposed algorithm offers better scalable performance.

4 Conclusion and Future direction

The suggested energy-aware optimal VM scheduling was carried out by a scheduling method that assigned the VMs in an appropriate order to execute the tasks under the given constraints and within the slotted time, so as to enhance resource utilization. Initially, the configurations were generated to execute the process. Then, the initialized configurations were given as input to the suggested HES-SLO algorithm. The suggested HES-SLO algorithm supported the optimal selection of which tasks are assigned to which VM so as to reduce the different constraints. Finally, the hybrid HES-SLO-based model provided the resource-allocated output. In this developed model, the experimental analysis focuses on the different objective terms, namely the power consumption, execution time, cost, and resource utilization to be minimized. On this basis, energy-aware metrics such as power consumption and resource dissipation, together with the cost function, are experimentally evaluated to demonstrate accurate VM scheduling performance in the cloud platform. The median offered by the suggested HES-SLO algorithm was 4.11%, 1.57%, 0.63%, and 0.4% better than CO, ALO, ESO, and SLO, respectively. The results proved that the suggested HES-SLO-based energy-aware optimal VM scheduling model outperformed the existing schemes. The developed model helped to improve efficiency in the cloud sector, manage resource utilization, and use the cloud sector's knowledge to enhance the VM scheduling process. Although the result analysis of the developed model shows better performance in the VM scheduling process, several challenges remain that need to be addressed in future research. A deep investigation of the heterogeneous nature of the cloud environment is required when meeting the computational demands of large and diverse groups of tasks on VMs. Data security also needs attention to prevent the system from being vulnerable to malware and other malicious attacks. Future work will explore further advancements in this research area and will extend the work to diverse multimedia applications with deep learning models; such multimedia applications will be stored effectively in cloud computing environments to manage the data and will be extended to diverse users to offer better services. Real-world deployment strategies will also be investigated using ensemble models to provide an optimal workflow scheduling process.