Article

Algorithmic Approach to Virtual Machine Migration in Cloud Computing with Updated SESA Algorithm

1 Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab 140401, India
2 Information Security and Engineering Technology, Abu Dhabi Polytechnic, Abu Dhabi 111499, United Arab Emirates
3 Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
4 Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
5 Department of CSE, Chandigarh University, Mohali 140413, India
* Authors to whom correspondence should be addressed.
Sensors 2023, 23(13), 6117; https://doi.org/10.3390/s23136117
Submission received: 6 March 2023 / Revised: 25 June 2023 / Accepted: 25 June 2023 / Published: 3 July 2023
(This article belongs to the Section Intelligent Sensors)

Abstract

Cloud computing plays an important role in every IT sector. Many tech giants such as Google, Microsoft, and Facebook are deploying their data centres around the world to provide computation and storage services. Customers either submit their jobs directly or take the help of brokers to submit jobs to the cloud centres. The preliminary aim is to reduce the overall power consumption, which was ignored in the early days of cloud development. This was due to the performance expectations from cloud servers, as they were supposed to provide all the services through their service layers: IaaS, PaaS, and SaaS. As time passed and researchers came up with new terminologies and algorithmic architectures for the reduction of power consumption and sustainability, other algorithmic approaches were also introduced, such as statistically oriented learning and bioinspired algorithms. In this paper, an in-depth study of multiple approaches for migration among virtual machines is carried out, and various issues in the existing approaches are identified. The proposed work utilizes elastic scheduling inspired by the smart elastic scheduling algorithm (SESA) to develop a more energy-efficient VM allocation and migration algorithm. The proposed work uses cosine similarity and bandwidth utilization as additional utilities to improve the current performance in terms of QoS. The proposed work is evaluated for overall power consumption and service level agreement violation (SLA-V) and is compared with related state-of-the-art techniques. A proposed algorithm is also presented in order to solve the problems found during the survey.

1. Introduction

Cloud computing is one of the fastest-emerging fields in modern-day development. A cloud network comprises three layers of services, as follows:
(a)
Infrastructure as a service (IaaS);
(b)
Platform as a service (PaaS);
(c)
Software as a service (SaaS).
To improve computational efficiency, the physical machines (PMs) (machines with physical attributes) are supported with virtual machines (VMs). There are two issues in the association of a VM to a PM.
(a)
Allocation of a new VM to the PM;
(b)
Management of existing allocated VMs.
Both processes consume power and, hence, if they are not managed well, high power consumption will be observed. The world is already suffering from global warming and, hence, in such a scenario, high power consumption is not affordable. It has been reported that one data centre consumes as much energy as 25,000 households [1]. Energy consumption is the integral of power over a given interval of time and can be defined by Equation (1):
E = \int_{t_1}^{t_2} P_c \, dt \quad (1)
where P_c is the consumed power and [t_1, t_2] is the time interval over which the power is consumed, subject to the quality of service (QoS) requirements of users under the service level agreement (SLA); a short numerical sketch of this integral is given after the list below. An energy-efficient cloud model is also termed a green cloud. Green cloud architecture has four elements, as shown in Figure 1.
(a)
Consumer or broker: A consumer or customer of the cloud submits its requirement directly to the cloud or gets it submitted by a broker;
(b)
Cloud-service allocator (CSA): The cloud infrastructure is not directly associated with the user and, hence, the CSA negotiates the SLA, prices, and other terms between the service provider and the customer. The service allocator has a service scheduler associated with the CSA, which deals with the completion time and scalability of customer demand;
(c)
Physical machine (PM);
(d)
Virtual machine (VM).
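As a brief numerical illustration of Equation (1), the following Python sketch integrates sampled power readings over time using the trapezoidal rule; the sampling interval and power values are hypothetical illustration values, not measurements from the paper.

# Minimal sketch of Equation (1): energy as the integral of consumed power over time.
# The power samples and the sampling interval below are hypothetical.
def energy_consumed(power_samples, dt):
    # Approximate E = integral of Pc dt with the trapezoidal rule.
    energy = 0.0
    for p_prev, p_next in zip(power_samples, power_samples[1:]):
        energy += 0.5 * (p_prev + p_next) * dt
    return energy

# Example: power (in watts) sampled every 60 s over five minutes.
samples = [250.0, 270.0, 265.0, 280.0, 260.0, 255.0]
print(energy_consumed(samples, dt=60.0), "joules")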
VM allocation has two scenarios, namely the admission of a new VM and the migration of a VM to another PM. The first scenario is observed at the preliminary stage of cloud server initialization. The most basic architecture was proposed by Rajkumar Buyya in 2010 and is named modified best fit decreasing utilization (MBFD), which is an extension of the best fit decreasing (BFD) algorithm. BFD was developed by Baker et al. in the early 1980s and was used further in the early stages of cloud and grid computing [2]. When a VM is to be migrated from one PM and allocated to another PM, two issues must be addressed.
(a)
Hotspot detection: Which PM is to be detected for the migration?
(b)
Destination PM detection: Where to migrate?
The MBFD algorithm uses CPU utilization in order to allocate the VMs at the preliminary stage, i.e., when a new VM is to be allocated to a PM. The MBFD algorithm sorts the VMs by CPU utilization in descending order and starts with the VM having the highest CPU utilization. The algorithm also checks for the most feasible host with the least power consumption. The pseudocode for MBFD is given in Algorithm 1.
Algorithm 1: MBFD
Inputs: VM Requirements and Specifications, Host Specifications
(a)  Sort all VM as per the CPU Utilization in descending order
(b)  For every VM in the VM List(Sorted)
(c)  Check if Host can satisfy the VM needs or not
(d)  Calculate Pc
(e)  Check if Pc is least
(f)  If Yes, Allocate VM to Host
(g)  Reduce Host resources by the amount which is consumed by the VM
(h)  Pick Next VM
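A minimal Python sketch of the MBFD allocation loop in Algorithm 1 is given below; the VM and host attribute names and the linear power model are assumptions for illustration, not the authors' CloudSim implementation.

# Hedged sketch of the MBFD allocation loop (Algorithm 1).
# VM/host fields and the linear power model are illustrative assumptions.
def estimate_power(host, extra_cpu):
    # Assumed linear power model: idle power plus a share of (peak - idle)
    # proportional to the host's CPU utilization after placing the VM.
    util = min(1.0, (host["used_cpu"] + extra_cpu) / host["cpu_capacity"])
    return host["idle_power"] + (host["peak_power"] - host["idle_power"]) * util

def mbfd_allocate(vms, hosts):
    allocation = {}
    # (a) Sort VMs by CPU demand in descending order.
    for vm in sorted(vms, key=lambda v: v["cpu"], reverse=True):
        best_host, best_power = None, float("inf")
        for host in hosts:
            # (c) Check whether the host can satisfy the VM's needs.
            if host["cpu_capacity"] - host["used_cpu"] < vm["cpu"]:
                continue
            # (d)-(e) Estimate power and keep the host with the least value.
            power = estimate_power(host, vm["cpu"])
            if power < best_power:
                best_host, best_power = host, power
        if best_host is not None:
            # (f)-(g) Allocate the VM and reduce the host's free resources.
            allocation[vm["name"]] = best_host["name"]
            best_host["used_cpu"] += vm["cpu"]
    return allocation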
As time passed, amendments to MBFD have been proposed. In 2015, Lu and Zhang [3] presented an enhanced MBFD algorithm for allocation, which considered the load over the PMs. Lu and Zhang normalized the bandwidth, random access memory (RAM), and CPU utilization as follows:
Normalized Demand (ND) = w_1 \times RAM + w_2 \times CPU + w_3 \times Bandwidth \quad (2)
where w1, w2, and w3 are weights for RAM, CPU, and bandwidth of the VM.
The modified MBFD algorithm follows MBFD but replaces the CPU utilization with ND. It also adds a hot migration policy in which a threshold is set considering all the loads over the PMs; if a PM exceeds the threshold, then its VMs are selected for migration. The load calculation is done by applying three conditions, as given in Equation (3). The equation considers host CPU utilization (HCPU), average CPU utilization (ACPU), host RAM utilization (HRAMU), average RAM utilization (ARAMU), and host bandwidth utilization (HBU), along with average bandwidth utilization (ABU), for overload-triggered migrations.
\text{Overloaded} = \begin{cases} 1 & \text{if } HCPU > ACPU \;||\; HRAMU > ARAMU \;||\; HBU > ABU \\ 0 & \text{otherwise} \end{cases} \quad (3)
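A short Python sketch of the normalized demand of Equation (2) and the overload check of Equation (3) follows; the weights and the utilization field names are illustrative assumptions.

# Sketch of Equations (2) and (3): normalized demand and the overload condition.
# The weights w1, w2, w3 and the attribute names are illustrative assumptions.
def normalized_demand(vm, w1=0.4, w2=0.4, w3=0.2):
    # ND = w1*RAM + w2*CPU + w3*Bandwidth (Equation (2))
    return w1 * vm["ram"] + w2 * vm["cpu"] + w3 * vm["bandwidth"]

def is_overloaded(host, avg):
    # Equation (3): a host is overloaded (1) if any of its CPU, RAM, or bandwidth
    # utilization exceeds the corresponding data-centre average, otherwise 0.
    return int(host["cpu_util"] > avg["cpu_util"]
               or host["ram_util"] > avg["ram_util"]
               or host["bw_util"] > avg["bw_util"])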
Considering load as an effective parameter, Mann et al. presented a computation-core-aware, load-based MBFD algorithm, which also considered the CPU cores involved in the allocation process. In order to enhance power management, the distribution of load is minutely analysed over the individual cores, and core-to-core migration is also incorporated [4]. The usage of natural computing is also observed in MBFD advancements. As per de Castro et al., natural computing is a terminology which encompasses algorithmic architectures that take inspiration from nature or employ natural phenomena to complete a given task [5]. Cloud computing uses this phenomenon in order to identify the PM whose VMs are to be migrated. Kansal et al. [6] used swarm intelligence (SI) for VM selection; SI builds on collective behaviours observed in groups, for example, the behaviour of ants, cuckoos, and fireflies. The PM from which the VMs are to be migrated is termed an active source node. The algorithmic architecture is further extended to use an artificial bee colony (ABC) for the selection of the destination node to which the VMs are to be migrated. The source node is selected by the computation of the attraction index (AI), which is computed by the evaluation of consumed energy (CE), as shown in Equation (4).
CE = \frac{\sum_{i=1}^{n}\sum_{j=1}^{k} CPU_{ij} \cdot \sum_{i=1}^{n}\sum_{j=1}^{k} MU_{ij}}{M} \quad (4)
where n is the total number of VMs executing on the PM and k denotes the total number of jobs allocated to the ith VM. M corresponds to the total number of memory units in use on the ith VM. In addition to CE, the node computation time (NCT) of the alive node, i.e., the total time invested in completing the supplied jobs, is also calculated. The NCT formulation is given by Equation (5).
NCT_{active} = \sum_{i=1}^{n}\sum_{j=1}^{k} NCT_{ij}^{active} \quad (5)
where NCT_{ij} is the execution time for the k jobs supplied to the n VMs. Using CE and NCT, AI is calculated by Equation (6).
AI_i = AI_{active}(CE_{active}, NCT_{active}) \quad (6)
By sorting the AI in decreasing order, the lowest AI index will be selected as the PM from which the VMs are to be migrated. In order to choose the VMs to be migrated, the load on each VM is calculated by using Equation (7).
Load_{ij} = \sum_{i=1}^{n} job_i \int_{0}^{t} P_c \, dt \quad (7)
VMs with a high load compared to the average load are considered to be migrated from the active PM. Durgadevi et al. [7] also used swarm-based hybridization of leapfrog search and cuckoo search for the VM and PM selection for migration. In addition to CPU utilization, load, and computation time, the authors introduced entropy for the computation of VMs to be migrated.
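To illustrate the load-based selection built on Equation (7), the sketch below marks VMs whose load exceeds the average load on the active PM as migration candidates; the per-VM load values are assumed to be computed elsewhere and the example numbers are hypothetical.

# Sketch of load-based VM selection: VMs whose load exceeds the average load
# on the active PM are chosen for migration (cf. Equation (7)).
def select_vms_to_migrate(vm_loads):
    # vm_loads maps VM name -> accumulated load on the active PM.
    if not vm_loads:
        return []
    average_load = sum(vm_loads.values()) / len(vm_loads)
    return [vm for vm, load in vm_loads.items() if load > average_load]

print(select_vms_to_migrate({"vm1": 12.5, "vm2": 3.1, "vm3": 8.9}))  # -> ['vm1', 'vm3']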
When it comes to selecting the VMs for migration, several research articles have focused on threshold-based VM selection [1,3,6,7,8]. In order to detect the hotspot, Beloglazov et al. proposed the minimization of migrations (MM) policy. MM has two thresholds, namely upper and lower. If the host CPU usage is below the lower threshold, all of the host's VMs are migrated; if the host CPU utilization is above the upper threshold, some VMs are migrated from the PM. The working is illustrated in Algorithm 2.
Algorithm 2: MM Algorithm
Input: Hotlist Output: Migration List
  1. Repeat while hUtil is greater than the upper threshold
  2. For every vm in the host's vmList
  3. If vm.utilization() > hUtil − upper threshold
  4. Evaluate t as t ← vm.utilization() − hUtil + upper threshold
  5. Keep vm as the best-fit vm until a vm with a smaller t is found or the
host's utilization goes below the upper threshold
  6. Reduce hUtil by best-fit vm.utilization()
  7. Add vm to the migration list
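A minimal Python sketch of the MM selection loop of Algorithm 2 follows; host and VM utilizations are modelled as plain fractions of host capacity, which is an illustrative simplification rather than the CloudSim implementation.

# Hedged sketch of the MM (minimization of migrations) selection loop (Algorithm 2).
# Host and VM utilizations are plain fractions of host capacity (an assumption).
def mm_select(host_util, vm_utils, upper_threshold):
    # Pick VMs until the host utilization drops below the upper threshold,
    # preferring the smallest VM that alone covers the excess (the best fit).
    migration_list = []
    remaining = dict(vm_utils)  # VM name -> utilization share
    while host_util > upper_threshold and remaining:
        excess = host_util - upper_threshold
        covering = {v: u for v, u in remaining.items() if u > excess}
        if covering:
            chosen = min(covering, key=covering.get)    # best fit over the excess
        else:
            chosen = max(remaining, key=remaining.get)  # otherwise the largest VM
        host_util -= remaining.pop(chosen)
        migration_list.append(chosen)
    return migration_list

# Example: a host at 95% utilization with an upper threshold of 80%.
print(mm_select(0.95, {"vm1": 0.30, "vm2": 0.10, "vm3": 0.20}, 0.80))  # -> ['vm3']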
In this work, the main issues of power consumption, SLA-V, and optimal resource utilization have been addressed by allocating VMs and migrating them to the best available host in an energy-efficient manner. This has been done by developing an energy-efficient model using a modified SESA algorithm in order to improve the efficacy of the proposed model. Two additional parameters, cosine similarity and bandwidth utilization, have been employed.

2. Literature Survey

Masdari and Khezri (2020) presented a detailed analysis of the process of VM migration [9]. Dubey and Sharma (2020) worked to improve resource utilization by introducing intelligent water drop (IWD) for the selection of the VMs to be migrated [10]. To check the effectiveness of the proposed algorithm, the authors increased the load through load files which are globally available on cloudbus.org (accessed on 24 June 2023).
Joshi and Munisamy (2020) introduced dynamic degree balance (DDB) in combination with MBFD. The authors also considered the load mechanism alongside the VM allocation mechanism and, hence, the two policies run simultaneously to map a VM to an existing PM. The authors compared the algorithm with two static algorithms, shortest job first (SJF) and first come first serve (FCFS). The rate of imbalance and waiting time were considered as evaluation parameters, and the simulation was done in CloudSim [11].
Ruan et al. (2019) described a model for allocating VMs, including migrations, that leverages the performance-to-power ratio (PPR) of different kinds of hosts. In comparison to three simple energy-efficient algorithms for allocating and selecting VMs, ThrRs, MadMmt, and IqrMc, the experimental results showed that the system could reduce energy consumption to 69.31% for many types of host computers, accounting for shutdown speeds and migration times, along with a minor degradation in performance for cloud data centres [12,13].
Jin et al. (2019) implemented VM allocation in a cloud data centre with a speed switch and VM consolidation, and evaluated the energy efficiency and response performance of the data centre. To study the system, the authors set up a multiple-server queuing model [13].
Jia et al. (2019) proposed the latest VM allocation approach for power consumption and load balancing to resolve security issues [14].
Nashaat et al. (2019) developed an updated smart elastic scheduling algorithm (SESA) to cluster the VMs to be migrated. SESA aimed to obtain a load-balanced allocation and migration policy. SESA used CPU utilization and RAM as the parameter for the VMs to be located on the same physical machine [8].
Gamal et al. (2019) considered load as a primary parameter for VM allocation and migration. In order to migrate the VMs, a bioinspired VM migration policy was presented, which uses RAM and CPU utilization as the main parameters [15].
Basu et al. (2019) resolved the problem of workload using a nature-inspired GA along with improved scheduling of VMs. Every chromosome of the GA has been defined as a node, and a VM is allocated to a node according to the chromosome's genes. After that, based on the crossover and mutation operators, the allocation of VMs is performed. The results indicated that the approach performed well in terms of resource utilization and load balancing. Although the area of cloud computing and VM allocation is not new, the proposed algorithm presents a new behaviour of the cuckoo search algorithm for the selection of VMs to be migrated. The selected VMs are cross-validated using a feed-forward back-propagation model which double-checks the allocation and further ensures optimal power consumption [16].
Zhang et al. (2019) presented a new and effective evolutionary strategy to allocate VMs and thereby minimize the energy consumption of the cloud data centre (CDC). Comparative analysis through simulation on CloudSim demonstrated that the proposed method rapidly computes an allocation for a set of VMs in an optimized way and consolidates more VMs onto fewer PMs, achieving higher energy efficiency than baseline methods. Specifically, compared with the most advanced methods, total profit is increased by up to 24% and consumed energy is saved by up to 41%, respectively. In addition, the approach enables the CDC to satisfy more end-user requests [17].
Jana et al. (2019) presented a modified PSO method in which the researchers considered two parameters, namely the average scheduling length and the effective execution rate. A comparison with baseline techniques such as Max–Min, Min–Min, and the simple PSO approach was performed to analyse the effectiveness of the proposed work. The main motive of this paper is to allocate resources by properly managing the workload in a cloud server, which is performed by using a task-scheduling algorithm [18].
Gawali and Shinde (2018) used a metaheuristic approach by integrating two optimization approaches, namely bandwidth-aware divisible scheduling with BAR optimization, for task scheduling and resource allocation. Each task is processed using the modified analytic hierarchy process (MAHP). The experiments were performed on the CyberShake workflow, with Epigenomics passed as an input task. The performance in terms of turnaround time and response time was compared to the BATS and IDEA approaches. The proposed approach obtained better results with high resource utility [19].
Verma and Kaushal (2017) proposed an integrated approach named hybrid particle swarm optimization (HPSO). The algorithm addresses the workflow scheduling problem in an IaaS cloud using a fitness function designed for the HPSO approach. The optimization technique has been integrated with the deadline-constrained heterogeneous earliest finish time (HEFT) algorithm [20].
Wang et al. (2017) presented an approach based on classifying memory pages into five categories with the aim of minimizing the total data transferred, by which the total migration time is also reduced. The categories are inode, cache, anonymous, free memory pages, and kernel, of which only the first three categories of memory pages are transmitted, since they are required for the execution of the kernel. Normal execution is not affected by free memory pages, whereas avoiding the transfer of cache memory pages leads to performance degradation because of inconsistency between the actually transmitted memory pages and the kernel state. The attained findings were contrasted with the default pre-copy of KVM and indicated that this scheme can reduce the average migration time by 72% [21].
Kansal et al. (2016) proposed a firefly-algorithm-based approach in which the most loaded VMs are migrated to the least loaded PM without violating the energy-efficiency performance of the CDC. From experiments, an improvement in energy consumption of 44.39%, a reduction in migrations of 72.34%, and a saving of up to 34.36% of hosts have been reported [6].
Akbar et al. (2016) introduced a task-scheduling approach named median-deviation-based task scheduling (MDTS) that utilizes the median absolute deviation (MAD) of the expected time to compute (ETC) of a task as the key attribute to determine the rank of that task. The authors utilized a coefficient-of-variation (COV)-based approach that considers task and device heterogeneity in order to obtain the ETC of a specific DAG. The execution of a cloud program can be visualized as a collection of tasks depicted through a DAG which execute in their logical sequence. To accomplish improved performance and enhanced efficiency in a cloud environment, the prioritization of such tasks plays a key role. The evaluation of the presented scheme has been done in different circumstances through DAGs of real-world applications. The outcome indicated that the MDTS approach produced high-quality schedules while significantly minimising the makespan by 25.01% [22].
Esa and Yousif (2016) introduced a novel job-scheduling mechanism that uses the firefly algorithm with the aim of minimizing the job execution time. The presented framework describes jobs and resources using information such as resource speed and job length identifiers. In the scheduling process, the proposed scheduling function first generates a collection of jobs and resources to produce the population by assigning the jobs to the resources randomly, and measures the population using fitness values that reflect the execution time of the jobs. Next, to provide the shortest execution time for the jobs, the function iterates to reproduce the population on the basis of the actions of the fireflies and deliver the best job plan. The Java language and the CloudSim simulator were used to implement various scenarios [23].
R. Durga Lakshmi (2016) proposed a minor modification in the working architecture of the genetic algorithm (GA) to optimize the overall system efficiency. Quality of service (QoS) management addresses the problems generated by cloud applications, and tools were addressed to guarantee service levels in terms of performance, reliability, availability, and so on. The system's waiting time should be reduced to boost the QoS of the system. GA is a heuristic search technique that provides an optimal solution for the task. The discussed approach produces a GA-based scheduling algorithm to minimize the system's overall latency. The cloud environment is split into two parts: the cloud user (CU) and the cloud-service provider (CSP). A service request is sent to the CSP by the CU, and all the requests are stored in the request queue (RQ), which interacts with the GA-based queue sequencer (GAQS) module [24].
Deshpande et al. (2016) suggested a combined copying technique for live VM migration. The amount of transferred memory pages is decreased by this technique; the dirtied pages can be demanded by the destination or actively pushed by the source [25].
Forsman et al. (2015) presented two cooperating schemes for automated live migration among multiple VMs. The researchers used a push scheme in order to migrate VMs from overloaded PMs to under-loaded PMs; with the complementary strategy, the under-utilized PMs request the heavily loaded servers to pass load to them. When a VM should be migrated is determined using three conditions, namely the PM state after migration, the cost of migration, and the workload distribution. The redistribution of workload and the quick attainment of a balanced cloud system were the key objectives of this work, whereas how SLA violation is minimized was not explained in this paper [26].
M. S. Pilavare et al. (2015) presented a number of schemes to enhance the existing load-balancing algorithms in CDC. Among all schemes, the researchers stated that GA is superior to various other techniques. The GA uses a randomly selected processor as the input and then processes it; it was assumed that the same priority is given to processors and jobs, but this is not the case. A logarithmic least-squares matrix is employed to increase the efficiency of the GA. When a PM is not performing anything, technically it is termed idle, which wastes the available resources; to address the problem of idleness, the proposed algorithm performed better. By observing how the GA randomly selects processors, it was concluded that processors with a higher fitness value are utilized and that virtual machines with lower fitness values are left. The simulation was done using a cloud simulator [27].
Garg et al. (2014) addressed the problem of resource allocation in a CDC in which tasks with different workload conditions are executed by distinct applications. An admission control and scheduling mechanism was proposed which not only maximizes the use of resources but also guarantees that end-user QoS requirements are satisfied along with the SLA parameters. From the research, it was concluded that understanding the different kinds of SLAs and the applicable penalties, along with the workload mix, is very important for better resource allocation and utilization of the CDC [28].
Song et al. (2014) presented resource allocation as an online bin-packing problem. Virtualization technology has been used to dynamically assign available resources in the cloud data centre according to application requirements and to minimize the number of active servers in support of green computing. To solve this problem, a variable item size bin-packing (VISBP) algorithm was presented, implemented carefully using both VM and PM classification functions. Small variations are tolerated as long as the classification rules are satisfied by the system. The findings indicated that the performance of VISBP is better for hotspot migration as well as load balancing, in contrast to the existing algorithms. The key difficulty in its implementation may be the assumption that all PMs in this work have homogeneous unit power [29].
Beloglazov et al. (2012) investigated research on energy-efficient computing and put forward: (i) principles of an energy-aware cloud management architecture; (ii) energy-efficient resource allocation strategies and scheduling algorithms that consider the QoS expectations and power characteristics of devices; and (iii) a discussion of some open research challenges, which may offer immense advantages for resource suppliers and customers. The methodology was validated by a performance assessment using the CloudSim toolkit. The findings showed that the cloud computing model has great potential because it can save significant costs and can increase energy efficiency under complex workload scenarios [1].
Quang-Hung et al. (2013) proposed a genetic algorithm (GAPA) for VM allocation. For experiments, one day's workload from a university computer lab was examined. The VMs are sorted by start time using the BFD algorithm. The results obtained using the proposed GAPA scheme were compared with the existing approach, and it was found that the proposed algorithm achieves lower energy consumption. The results also show that the computation time was reduced to a great extent [30].
Madhusudhan et al. (2013) introduced an energy-aware VM placement strategy in combination with the swarm-inspired particle swarm optimization (PSO) algorithm to minimize the total energy consumption rate and hence enhance server usage and provide satisfaction to mobile cloud users. The parameters, as well as the operators, were redefined to resolve discrete optimization issues, since the traditional PSO technique fails on the VM placement problem, being mostly utilized for solving only continuous optimization problems. The PSO algorithm helps to minimize the search time and, hence, saves energy with better server utilization [31].
Talwani and Singla (2021) suggested using an extended artificial bee colony (E-ABC) method. Results from simulations show that the E-ABC approach has high scalability [32,33,34]. The E-ABC strategy saved 15–17% more energy and resulted in 10% fewer migrations than the current concept [35]. The proposed approach has a higher computation time.
Dai et al. (2022) studied a cloud-assisted fog computing framework with task offloading and service caching to ensure effective task processing. The framework allowed tasks to make offloading decisions for local processing, fog processing, and cloud processing, with the objective of minimizing task delay and energy consumption, taking into account dynamic service caching. To achieve this, a distributed task offloading algorithm based on noncooperative game theory was proposed. Furthermore, the 0–1 knapsack method was employed to realize dynamic service caching. Adjustments were made to the offloading decisions for tasks that were offloaded to the fog server but lacked caching service support [36].
Tran et al. (2022) addressed the challenges related to the migration of machines in cloud-computing architecture. The authors proposed a VM migration algorithm based on the concept of Q-learning and Markov decision-making models. The work comprised a training phase followed by the extraction phase. The superiority of the proposed algorithm in terms of feasibility and strengths of the extraction phase is demonstrated using comparative analysis against a max–min ant system, round robin and ant system algorithms [37].
Abedi et al. (2022) considered a heuristic method using the firefly algorithm and a fuzzy approach to prioritize tasks. The authors improved the population of the firefly algorithm with the intention of balancing the load, migration rate, average run time, and completion time. The authors used MATLAB software for implementation and maintained the makespan at 50.16. The drawback of the study was that some objectives still need to be achieved using better optimization [38].
Khan et al. (2022) presented a hybrid cuckoo search and particle swarm optimization (CU-PSO) approach for effective VM migration. The primary goals of this study are to shorten the duration of computations, decrease energy use, and lower the cost of relocation; the optimal use of available resources is another focus area. The study's aim is supported by a simulation analysis that compares the efficiency of the hybrid optimization model to that of more traditional methods. The proposed approach outperforms these methods in terms of energy consumption, migration cost, resource availability, and computation time [39]. However, the proposed work is insufficient to address the issue of SLA violation.
Zhao, H. et al. (2023) suggested a VM performance-aware migration approach (PAVMM) using ant-colony optimization (ACO). The process sets a goal of user-friendly VM performance optimization. The suggested work has goals for cloud-service vendors, including reducing the overall migration cost and the number of active PMs. The proposed framework minimizes the energy consumption and the number of active hosts, which improves the speed of operation, but the proposed work is lacking in terms of SLA violations [40].
Bali, et al. (2023) developed a priority-aware task scheduling (PaTS) algorithm specifically designed for sensor networks. The algorithm aims to schedule priority-aware tasks for data offloading on edge and cloud servers. The proposed sensor-network design includes dividing tasks into four distinct groups: very urgent (OVU), urgent (OU), moderate (OM), and nonurgent (ONU). To address this problem, a multiobjective function formulation was used and the efficiency of the algorithm was evaluated using the bio-inspired NSGA-2 technique. The results obtained demonstrated significant improvement compared to benchmark algorithms, highlighting the effectiveness of the proposed solution. Additionally, when the number of tasks was increased from 200 to 1000, subsequent improvements were observed. Specifically, the average queue delay, computation time, and energy showed overall improvements of 17.2%, 7.08%, and 11.4% respectively, for the 200-task scenario [41].
Singh et al. (2023) focused on the utilization of containerized environments, specifically docker, for big data applications with load balancing. A novel scheduling mechanism for containers in big-data applications was proposed, based on the docker swarm and the microservice architecture. Docker swarm was employed to effectively manage the workload and service discovery of big-data applications. The implementation of the proposed mechanism involved a case study that was initially deployed on a single server and then scaled to four instances. The master node, implemented using the NGINX service, ran the swarm commands, while the other three services were microservices dedicated to the implemented scenario for a big-data application. These three worker nodes consisted of the PHP front-end microservice, Python API microservice, and Redis cache microservice. The results of the study demonstrated that increasing workloads in big-data applications could be effectively managed by utilizing microservices in containerized environments, and docker swarm enabled efficient load balancing. Furthermore, applications developed using containerized microservices exhibited reduced average deployment time and improved continuous integration [42].
Kavitha et al. (2023) developed a novel approach called filter-based ensemble feature selection (FEFS) combined with a deep-learning model (DLM) for intrusion detection in cloud computing. The DLM utilized a recurrent neural network (RNN) and Tasmanian devil optimization (TDO) to determine the optimal weighting parameter. The initial phase involved collecting intrusion data from global datasets, namely KDDCup-99 and NSL-KDD, which were used for validating the proposed methodology. The collected database was then utilized for feature selection to enhance intrusion prediction. FEFS, a combination of filter, wrapper, and embedded algorithms, was employed for feature extraction, selecting essential features for the training process in the DLM. The proposed technique was implemented using MATLAB and its effectiveness was evaluated using performance metrics such as sensitivity, F-measure, precision, recall, and accuracy. The suggested strategy demonstrated significant improvements based on these performance metrics. A comparison was made between the proposed method and conventional techniques such as RNN, deep neural network (DNN), and RNN-genetic algorithm (RNN-GA) [43].

3. Motivation of the Research

This research work is inspired by the research done by Beloglazov et al. [1], Kansal et al. [6], and Nashaat et al. [8]. There is a possibility of enhancing the SESA algorithm by adding a third variable, bandwidth utilization (BU), as done in [3]. In addition, SESA can be improved by adding similarity measures other than Euclidean distance (ED). Further, the allocation is done based on the clusters' density only, which could be improved.

4. Methodology

The methodology of the proposed algorithm is centred on the stated objectives. The first objective is to analyse existing VM allocation and migration techniques, with the focus on studying SESA, MBFD, and their enhancements. The design of the proposed algorithm is divided into two segments, namely the organization of the clusters and the placement of VMs into the concerned clusters. Considering the VMs to be migrated, as per the MM algorithm, Nashaat et al. [8] in 2019 proposed the smart elastic scheduling algorithm (SESA), which clusters the VMs to be migrated. Nashaat modified the k-means algorithm by using the RAM and CPU utilization of the VMs and the host, employing Euclidean distance (ED).

4.1. Detection of the Hotspot

Detection of the hotspot based on a single threshold, which may be SLA-V or load, is a common practice. Even when a PM is idle, it consumes about 70% of its peak power; hence, letting the PM sit idle would use power without producing an output. A dual-threshold policy, which keeps one upper and one lower utilization threshold, is commonly practised [11,12,13]. The upper and lower thresholds determine whether all of the VMs or only some of the VMs should be migrated from the PM.
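The dual-threshold policy described above can be sketched as follows; the threshold values are illustrative assumptions rather than values used in the paper.

# Sketch of the dual-threshold hotspot policy described above.
# The threshold values are illustrative assumptions.
LOWER_THRESHOLD = 0.30  # below this, migrate all VMs and switch the PM off
UPPER_THRESHOLD = 0.80  # above this, migrate only some VMs

def hotspot_action(cpu_utilization):
    # Decide what to do with a PM based on its CPU utilization.
    if cpu_utilization < LOWER_THRESHOLD:
        return "migrate_all_vms"   # under-utilized host: vacate and power down
    if cpu_utilization > UPPER_THRESHOLD:
        return "migrate_some_vms"  # overloaded host: offload part of the load
    return "no_action"

print(hotspot_action(0.15), hotspot_action(0.92), hotspot_action(0.55))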

4.2. Selection of VMs

The selection of VMs has been done by considering many parameters such as utilization, time of migration, and correlation between the VMs [44,45]. Considering the essentials of reducing power consumption, minimization of migrations (MM) and minimum utilization (MU) were proposed and have gained incremental enhancements over time [6,14].

4.3. Selection of PM

Once the VMs are selected, the PMs other than hotspot PMs are listed according to the required criteria, and the VMs are then placed as shown in Section 4.4. The listed PMs are now called the available PMs.

4.4. VM Migration

Putting the right VM on the right PM is the task of this section. The proposed CESCA algorithm enhances the k-means algorithm by using CPU utilization (CPU), bandwidth utilization (BU), and currently associated random access memory (AsRAM). The algorithm takes inspiration from Nashaat et al.'s clustering algorithm, the smart elastic scheduling algorithm (SESA) [8]. CESCA is subdivided into three parts: the calculation of the number of centroids and the identification of centroids, the placement of VMs into the centroids, and the prioritization of the created clusters.
The resource assignment or allocation [46,47,48] has four steps for VM allocation, as demonstrated in Figure 2. The first step is the detection of the hotspot PM. The SESA algorithm given in Algorithm 3 finds the optimal number of clusters required for the VMs to be migrated. To improve performance, the association of a VM to a PM is done using the Euclidean distance.
Algorithm 3: SESA
Input: hostList, VMList, Standard Deviation threshold  Output: high-density arranged cluster list of co-located VMs, allocation of VMs
 1. Find K points for selecting the optimal number of clusters
 2. Calculating K1 (for parameter: CPU)
 3. K1_maxpoint = hostList.get_max(CPU) / VMList.get_min(CPU)
 4. K1_minpoint = hostList.get_min(CPU) / VMList.get_max(CPU)
 5. K1 = average(K1_maxpoint, K1_minpoint)
 6. Calculating K2 (for parameter: RAM)
 7. K2_maxpoint = hostList.get_max(RAM) / VMList.get_min(RAM)
 8. K2_minpoint = hostList.get_min(RAM) / VMList.get_max(RAM)
 9. K2 = average(K2_maxpoint, K2_minpoint)
 10. K = average(K1, K2)
 11. Select the initial centroid as a pair of two values (CPU, RAM)
 12. centroids[1, 1] = Get_average_CPU(VMList)
 13. centroids[1, 2] = Get_average_RAM(VMList)
 14. Find the remaining K − 1 centroids
 15. for each mth centroid number do, where m takes values from 1 to K − 1
 16. Calculate the Euclidean distance ED between the previous centroid and the (CPU, RAM)
 17. parameters of each VM in VMList
 18. for each jth VM in VMList do,
 19. where j takes values from 1 to the number of VMs in VMList
 20. Ecu_dis[j] = find_Euclidean(VMList.get(j), centroids[m, 1], centroids[m, 2])
 21. end for
 22. Choose the next centroid to be the (CPU, RAM) values of the VM with maximum ED
 23. centroids[m + 1, 1] = VMList.get(get_index_forMaxValue(Ecu_dis)).get(CPU)
 24. centroids[m + 1, 2] = VMList.get(get_index_forMaxValue(Ecu_dis)).get(RAM)
 25. end for
 26. Calculate the ED between each VM and all clusters' centroids
 27. for each jth VM in VMList do, where j takes values from 1 to the number of VMs in VMList
 28. for each mth centroid number do, where m takes values from 1 to K − 1
 29. ED[m] = find_Euclidean(VMList.get(j), centroids[m, 1], centroids[m, 2])
 30. end for
 31. Append the VM to the cluster with minimum ED
 32. Cluster = Append_in_Cluster(get_index_forMinValue(ED), VMList.get(j))
 33. end for
 34. Arrange the co-located VMs
 35. for each ith VM cluster list in Cluster do
 36. arrangeByCo-locatedVMs(Cluster.get(i))
 37. end for
 38. VMList = arrangeByHighDensityCluster(Cluster)
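A hedged Python reading of the SESA steps in Algorithm 3 is given below: the number of clusters K is estimated from the CPU and RAM ranges of hosts and VMs, and centroids are chosen by maximum Euclidean distance from the previous centroid. The dictionary-based VM and host records are assumptions for illustration, not the authors' implementation.

# Hedged sketch of Algorithm 3 (SESA): estimating K and selecting centroids.
# VM/host records are plain dicts with "cpu" and "ram" keys (an assumption).
import math

def estimate_k(hosts, vms, key):
    k_max = max(h[key] for h in hosts) / min(v[key] for v in vms)
    k_min = min(h[key] for h in hosts) / max(v[key] for v in vms)
    return (k_max + k_min) / 2

def euclidean(vm, centroid):
    return math.sqrt((vm["cpu"] - centroid[0]) ** 2 + (vm["ram"] - centroid[1]) ** 2)

def sesa_centroids(hosts, vms):
    # K is the average of the CPU-based and RAM-based estimates (steps 1-10).
    k = round((estimate_k(hosts, vms, "cpu") + estimate_k(hosts, vms, "ram")) / 2)
    # First centroid: the average (CPU, RAM) over all VMs (steps 11-13).
    centroids = [(sum(v["cpu"] for v in vms) / len(vms),
                  sum(v["ram"] for v in vms) / len(vms))]
    # Remaining K-1 centroids: the VM farthest from the previous centroid (steps 14-25).
    for _ in range(max(0, k - 1)):
        farthest = max(vms, key=lambda v: euclidean(v, centroids[-1]))
        centroids.append((farthest["cpu"], farthest["ram"]))
    return centroids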

4.5. Proposed CESCA Algorithm

PSEUDO-CODE CESCA
Inputs: HList, VMList  Output: prioritized cluster list
// Calculate the total number of centroids P
// Calculate Px, Py, and Pz, where Px is based on CPU, Py is calculated based on AsRAM,
// and Pz is calculated based on BU
Px_max = HList.get_max(CPU) / VMList.get_min(CPU); Px_min = HList.get_min(CPU) / VMList.get_max(CPU); Px = (Px_min + Px_max) / 2
Py_max = HList.get_max(AsRAM) / VMList.get_min(AsRAM); Py_min = HList.get_min(AsRAM) / VMList.get_max(AsRAM); Py = (Py_min + Py_max) / 2
Pz_max = HList.get_max(BU) / VMList.get_min(BU); Pz_min = HList.get_min(BU) / VMList.get_max(BU); Pz = (Pz_min + Pz_max) / 2
P = (Px + Py + Pz) / 3 // Calculating the average of all P values
Initialize Centroids to empty
  • 1st centroid[1, 1] = VMList.get_avg(CPU) // average CPU from VM list
  • 1st centroid[1, 2] = VMList.get_avg(RAM) // average RAM from VM list
  • 1st centroid[1, 3] = VMList.get_avg(BU) // average BU from VM list
  • Append 1st centroid to Centroids
  • h1 = List.Centroid(CPU, RAM, BU) // 1st centroid attributes
  • h2 = List.VM(CPU, RAM, BU) // VM attributes
  • Cos_sim = \frac{\sum_{i=1}^{n} h_{1i} \times h_{2i}}{\sqrt{\sum_{k=1}^{n} h_{1k}^2} \times \sqrt{\sum_{k=1}^{n} h_{2k}^2}}
  Calculate the remaining (P − 1) centroids
  • VM_remaining = VMList
  • CS = [] // Initialize the cosine similarity list to empty
  • for each remaining centroid, while P > 0
  •   CS_i = GetCoSim(VM_remaining_i.get_attributes(), 1st centroid)
  •   CS_index = Find_min_index(CS)
  •   NextCentroid[1, 1] = VMList.CS_index.get(CPU)
  •   NextCentroid[1, 2] = VMList.CS_index.get(RAM)
  •   NextCentroid[1, 3] = VMList.CS_index.get(BU)
  •   P = P − 1
  •   VM_remaining.Remove[CS_index]
  • end for
  • For each vm in VM_remaining
  •   Sim_v = [] // Initialize Sim_v to empty
  •   For each cent in Centroids
  •     Sim_v.append(GetCoSim(vm, cent))
  •   End For
  •   Find min_index_v = min(Sim_v)
  •   Allocate vm to Centroid.min_index_v
// Prioritization of the created clusters
  • avg_similarity = []
  • For each cluster, calculate avgSimilarity and append avgSimilarity to avg_similarity
  • End for
  • Prioritized = Sort.ascending(avg_similarity)
  • Return Prioritized
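For concreteness, a hedged Python sketch of the cosine-similarity step of the proposed CESCA is shown below: each VM is described by a (CPU, AsRAM, BU) vector, assigned to the centroid with the greatest cosine similarity, and the clusters are then prioritized by their average similarity in ascending order. The vector layout and helper names are assumptions for illustration, not the exact implementation.

# Hedged sketch of the CESCA similarity step: cosine similarity between
# (CPU, AsRAM, BU) vectors, cluster assignment, and cluster prioritization.
# The vector layout and helper names are illustrative assumptions.
import math

def cosine_similarity(h1, h2):
    # Cos_sim = sum(h1_i * h2_i) / (||h1|| * ||h2||)
    dot = sum(a * b for a, b in zip(h1, h2))
    norm = math.sqrt(sum(a * a for a in h1)) * math.sqrt(sum(b * b for b in h2))
    return dot / norm if norm else 0.0

def assign_and_prioritize(vm_vectors, centroids):
    # Assign each VM vector to its most similar centroid and return the
    # cluster indices ordered by ascending average similarity.
    clusters = {i: [] for i in range(len(centroids))}
    for vm in vm_vectors:
        sims = [cosine_similarity(vm, c) for c in centroids]
        best = sims.index(max(sims))
        clusters[best].append(sims[best])
    avg_similarity = {i: (sum(s) / len(s) if s else 0.0) for i, s in clusters.items()}
    return sorted(avg_similarity, key=avg_similarity.get)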
The proposed work modifies the current state-of-the-art with alterations in the calculation of the similarity indexes. With the altered similarity evaluation and the added parameter of BU, the proposed work achieves better power efficiency and violation ratios. The results are illustrated in the next section.

5. Results and Discussion

In order to compare the proposed work with existing algorithm architectures, two parameters have been used, namely the overall consumed power and SLA-V; to compute the SLA-V, the power consumption has again been utilized along with the total number of migrations per PM. The power consumption is evaluated in watts whereas the SLA-V is unitless; the underlying measurements are given in Table 1.
The analysis has been done by increasing the total number of VMs in increments of 100 VMs per 10 PMs, viz. the VMs range from 100 to 1000 whereas the PMs range from 10 to 100. The power consumption is in watts and has been evaluated using Equation (3). As the total number of VMs increases, the average consumption also increases gradually. The power consumption does not necessarily increase with the load unless the variation of the load is significantly high; in most of the observed cases, the average power consumption increases. The average power consumption is illustrated in Figure 3.
In order to evaluate one scenario, the proposed work and the other compared algorithms/techniques have been evaluated for 10,000 simulations. The average consumed power in the case of the proposed work is improved by 3.8% as compared to [1], 3.14% as compared to [8], and 3.92% as compared to [28]. The effect of migration is evident when the total number of VMs increases beyond 200 VMs. When the load over the system increases, a system with a sustainable allocation architecture will be more efficient in terms of holding the VMs at the right PM and discarding the VMs that are not required at the moment. As shown in Figure 3, the proposed work demonstrates the minimum power consumption with the least number of migrations, and the performance is evaluated to be improved overall. This is due to the additional evaluation measure added to the distance calculation and the additional input parameter, viz. BU, in the assembly of the VMs. The proposed work evaluates the overall SLA-V based on both parameters, the total number of migrations and the total power consumption, as follows.
As the proposed work balances both the total number of migrations and the power consumption in parallel, the overall SLA-V for the proposed work is quite economical in relation to the other state-of-the-art techniques, as shown in Figure 4. The average overall SLA-V for the proposed work is 0.201 across all the scenarios, whereas [1,8,28] demonstrate 0.241, 0.2314, and 0.2712 for the same load conditions.

6. Conclusions

This paper presented an advanced algorithm architecture inspired by the architecture of Nashaat et al. [8]. The proposed work modified the current state-of-the-art technique by adding two utility parameters to the system, namely bandwidth utilization and cosine similarity, to reduce the overall consumed power when allocating and migrating VMs; the SESA algorithm presented by Nashaat et al. is thus enhanced by utilizing additional similarity indexes [8]. This research work is inspired by the research done by Beloglazov et al. [1], Kansal et al. [6], Nashaat et al. [8], and Garg et al. [28]. Due to the improved utility function, the proposed work outperforms [1,8,28]: by 3.8% as compared to [1], 3.14% as compared to [8], and 3.92% as compared to [28]. The evaluation has also been made on the basis of the total number of migrations; as the VM selection is precise in the case of the proposed work, fewer VMs are migrated while maintaining the overall power consumption. The overall SLA-V in all the presented scenarios, viz. 10–100 PMs supplied with 100–1000 VMs, is the least, as compared to the other state-of-the-art techniques.

Author Contributions

The following authors contributed to this research work: A.K. (Amandeep Kaur), S.K., D.G., Y.H., M.H., A.K. (Amel Ksibi), H.E. and S.S.; Conceptualization, A.K. (Amandeep Kaur) and D.G.; methodology, D.G.; software, S.K. and Y.H.; validation, D.G. and S.S.; formal analysis, A.K. (Amandeep Kaur) and S.K.; investigation, S.S.; resources, M.H. and A.K. (Amel Ksibi); data curation, H.E. and A.K. (Amel Ksibi); writing—original draft preparation, A.K. (Amandeep Kaur) and S.K.; writing—review and editing, D.G. and S.S.; visualization, Y.H. and S.K.; supervision, D.G.; project administration, H.E., A.K. (Amel Ksibi) and M.H.; funding acquisition, H.E., A.K. (Amel Ksibi) and M.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R125), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data would be made available on request.

Acknowledgments

The authors would like to thank the Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R125), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Beloglazov, A.; Abawajy, J.; Buyya, R. Energy-Aware Resource Allocation Heuristics for Efficient Management of Data Centers for Cloud Computing. Future Gener. Comput. Syst. 2012, 28, 755–768. [Google Scholar] [CrossRef] [Green Version]
  2. Baker, B.S.; Coffman, E.G., Jr. A Tight Asymptotic Bound for Next-Fit-Decreasing Bin-Packing. SIAM J. Algebr. Discret. Methods 1981, 2, 147–152. [Google Scholar] [CrossRef]
  3. Lu, X.; Zhang, Z. A Virtual Machine Dynamic Migration Scheduling Model Based on MBFD Algorithm. Int. J. Comput. Theory Eng. 2015, 7, 278. [Google Scholar]
  4. Mann, Z.Á. Multicore-Aware Virtual Machine Placement in Cloud Data Centers. IEEE Trans. Comput. 2016, 65, 3357–3369. [Google Scholar] [CrossRef]
  5. de Castro, L.N. Fundamentals of Natural Computing: An Overview. Phys. Life Rev. 2007, 4, 1–36. [Google Scholar] [CrossRef]
  6. Kansal, N.J.; Chana, I. Energy-Aware Virtual Machine Migration for Cloud Computing-a Firefly Optimization Approach. J. Grid Comput. 2016, 14, 327–345. [Google Scholar] [CrossRef]
  7. Durgadevi, P.; Srinivasan, S. Resource Allocation in Cloud Computing Using SFLA and Cuckoo Search Hybridization. Int. J. Parallel Program. 2018, 48, 549–565. [Google Scholar] [CrossRef]
  8. Nashaat, H.; Ashry, N.; Rizk, R. Smart Elastic Scheduling Algorithm for Virtual Machine Migration in Cloud Computing. J. Supercomput. 2019, 75, 3842–3865. [Google Scholar] [CrossRef]
  9. Masdari, M.; Khezri, H. Efficient VM Migrations Using Forecasting Techniques in Cloud Computing: A Comprehensive Review. Clust. Comput. 2020, 23, 2629–2658. [Google Scholar] [CrossRef]
  10. Dubey, K.; Sharma, S.C. An Extended Intelligent Water Drop Approach for Efficient VM Allocation in Secure Cloud Computing Framework. J. King Saud Univ.-Comput. Inf. Sci. 2020, 34, 3948–3958. [Google Scholar] [CrossRef]
  11. Joshi, A.S.; Munisamy, S.D. Dynamic Degree Balanced with CPU Based VM Allocation Policy for Load Balancing. J. Inf. Optim. Sci. 2020, 41, 543–553. [Google Scholar] [CrossRef]
  12. Ruan, X.; Chen, H.; Tian, Y.; Yin, S. Virtual Machine Allocation and Migration Based on Performance-to-Power Ratio in Energy-Efficient Clouds. Future Gener. Comput. Syst. 2019, 100, 380–394. [Google Scholar] [CrossRef]
  13. Jin, S.; Qie, X.; Hao, S. Virtual Machine Allocation Strategy in Energy-Efficient Cloud Data Centres. Int. J. Commun. Netw. Distrib. Syst. 2019, 22, 181–195. [Google Scholar] [CrossRef]
  14. Jia, H.; Liu, X.; Di, X.; Qi, H.; Cong, L.; Li, J.; Yang, H. Security Strategy for Virtual Machine Allocation in Cloud Computing. Procedia Comput. Sci. 2019, 147, 140–144. [Google Scholar] [CrossRef]
  15. Gamal, M.; Rizk, R.; Mahdi, H.; Elnaghi, B.E. Osmotic Bio-Inspired Load Balancing Algorithm in Cloud Computing. IEEE Access 2019, 7, 42735–42744. [Google Scholar] [CrossRef]
  16. Islam, M.; Razzaque, A.; Islam, J. A Genetic Algorithm for Virtual Machine Migration in Heterogeneous Mobile Cloud Computing. In Proceedings of the 2016 International Conference on Networking Systems and Security (NSysS), Dhaka, Bangladesh, 7–9 January 2016; pp. 1–6. [Google Scholar]
  17. Zhang, X.; Wu, T.; Chen, M.; Wei, T.; Zhou, J.; Hu, S.; Buyya, R. Energy-Aware Virtual Machine Allocation for Cloud with Resource Reservation. J. Syst. Softw. 2019, 147, 147–161. [Google Scholar] [CrossRef]
  18. Jana, B.; Chakraborty, M.; Mandal, T. A Task Scheduling Technique Based on Particle Swarm Optimization Algorithm in Cloud Environment. In Soft Computing: Theories and Applications; Springer: Berlin/Heidelberg, Germany, 2019; pp. 525–536. [Google Scholar]
  19. Gawali, M.B.; Shinde, S.K. Task Scheduling and Resource Allocation in Cloud Computing Using a Heuristic Approach. J. Cloud Comput. 2018, 7, 4. [Google Scholar] [CrossRef]
  20. Verma, A.; Kaushal, S. A Hybrid Multi-Objective Particle Swarm Optimization for Scientific Workflow Scheduling. Parallel Comput. 2017, 62, 1–19. [Google Scholar] [CrossRef]
  21. Wang, C.; Hao, Z.; Cui, L.; Zhang, X.; Yun, X. Introspection-Based Memory Pruning for Live VM Migration. Int. J. Parallel Program. 2017, 45, 1298–1309. [Google Scholar] [CrossRef]
  22. Akbar, M.F.; Munir, E.U.; Rafique, M.M.; Malik, Z.; Khan, S.U.; Yang, L.T. List-Based Task Scheduling for Cloud Computing. In Proceedings of the 2016 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData), Chengdu, China, 15–18 December 2016; pp. 652–659. [Google Scholar]
  23. Esa, D.I.; Yousif, A. Scheduling Jobs on Cloud Computing Using Firefly Algorithm. Int. J. Grid Distrib. Comput. 2016, 9, 149–158. [Google Scholar] [CrossRef]
  24. Lakshmi, R.D.; Srinivasu, N. A Dynamic Approach to Task Scheduling in Cloud Computing Using Genetic Algorithm. J. Theor. Appl. Inf. Technol. 2016, 85, 124–135. [Google Scholar]
  25. Deshpande, U.; Chan, D.; Guh, T.-Y.; Edouard, J.; Gopalan, K.; Bila, N. Agile Live Migration of Virtual Machines. In Proceedings of the 2016 IEEE International Parallel and Distributed Processing Symposium (IPDPS), Chicago, IL, USA, 23–27 May 2016; pp. 1061–1070. [Google Scholar]
  26. Forsman, M.; Glad, A.; Lundberg, L.; Ilie, D. Algorithms for Automated Live Migration of Virtual Machines. J. Syst. Softw. 2015, 101, 110–126. [Google Scholar] [CrossRef] [Green Version]
  27. Pilavare, M.S.; Desai, A. A Novel Approach towards Improving Performance of Load Balancing Using Genetic Algorithm in Cloud Computing. In Proceedings of the 2015 International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS), Coimbatore, India, 19–20 March 2015; pp. 1–4. [Google Scholar]
  28. Garg, S.K.; Toosi, A.N.; Gopalaiyengar, S.K.; Buyya, R. SLA-Based Virtual Machine Management for Heterogeneous Workloads in a Cloud Datacenter. J. Netw. Comput. Appl. 2014, 45, 108–120. [Google Scholar] [CrossRef]
  29. Song, W.; Xiao, Z.; Chen, Q.; Luo, H. Adaptive Resource Provisioning for the Cloud Using Online Bin Packing. IEEE Trans. Comput. 2014, 63, 2647–2660. [Google Scholar] [CrossRef]
  30. Quang-Hung, N.; Nien, P.D.; Nam, N.H.; Huynh Tuong, N.; Thoai, N. A Genetic Algorithm for Power-Aware Virtual Machine Allocation in Private Cloud. In Proceedings of the Information and Communication Technology: International Conference, ICT-EurAsia 2013, Yogyakarta, Indonesia, 25–29 March 2013; pp. 183–191. [Google Scholar]
  31. Madhusudhan, B.; Sekaran, K.C. A Genetic Algorithm Approach for Virtual Machine Placement in Cloud. In Proceedings of the International Conference on Emerging Research in Computing, Information, Communication and Applications (ERCICA2013), Bangalore, India, 2–3 August 2013. [Google Scholar]
  32. Priya, B.; Pilli, E.S.; Joshi, R.C. A Survey on Energy and Power Consumption Models for Greener Cloud. In Proceedings of the 2013 3rd IEEE International Advance Computing Conference (IACC), Ghaziabad, India, 22–23 February 2013; pp. 76–82. [Google Scholar]
  33. Syed-Abdul, S.; Malwade, S.; Nursetyo, A.A.; Sood, M.; Bhatia, M.; Barsasella, D.; Liu, M.F.; Chang, C.-C.; Srinivasan, K.; Li, Y.-C.J.; et al. Virtual Reality among the Elderly: A Usefulness and Acceptance Study from Taiwan. BMC Geriatr. 2019, 19, 223. [Google Scholar] [CrossRef] [Green Version]
  34. Sehra, S.S.; Singh, J.; Rai, H.S. Assessing OpenStreetMap Data Using Intrinsic Quality Indicators: An Extension to the QGIS Processing Toolbox. Future Internet 2017, 9, 15. [Google Scholar] [CrossRef]
  35. Talwani, S.; Singla, J. Enhanced Bee Colony Approach for reducing the energy consumption during VM migration in cloud computing environment. IOP Conf. Ser. Mater. Sci. Eng. 2021, 1022, 012069. [Google Scholar] [CrossRef]
  36. Dai, X.; Xiao, Z.; Jiang, H.; Alazab, M.; Lui, J.C.; Min, G.; Liu, J. Task offloading for cloud-assisted fog computing with dynamic service caching in enterprise management systems. IEEE Trans. Ind. Inform. 2022, 19, 662–672. [Google Scholar] [CrossRef]
  37. Tran, C.H.; Bui, T.K.; Pham, T.V. Virtual machine migration policy for multi-tier application in cloud computing based on Q-learning algorithm. Computing 2022, 104, 1285–1306. [Google Scholar] [CrossRef]
  38. Abedi, S.; Ghobaei-Arani, M.; Khorami, E.; Mojarad, M. Dynamic Resource Allocation Using Improved Firefly Optimization Algorithm in Cloud Environment. Appl. Artif. Intell. 2022, 36, 2055394. [Google Scholar] [CrossRef]
  39. Khan, M.S.A.; Santhosh, R. Hybrid Optimization Algorithm for VM Migration in Cloud Computing. Comput. Electr. Eng. 2022, 102, 108152. [Google Scholar] [CrossRef]
  40. Zhao, H.; Wang, Y.; Han, X.; Jin, S. MAP based modeling method and performance study of a task offloading scheme with time-correlated traffic and VM repair in MEC systems. Wirel. Netw. 2023, 29, 47–68. [Google Scholar] [CrossRef]
  41. Bali, M.S.; Gupta, K.; Gupta, D.; Srivastava, G.; Juneja, S.; Nauman, A. An effective Technique to Schedule priority aware tasks to offload data at edge and cloud servers. Meas. Sens. 2023, 26, 100670. [Google Scholar] [CrossRef]
  42. Singh, N.; Hamid, Y.; Juneja, S.; Srivastava, G.; Dhiman, G.; Gadekallu, T.R.; Shah, M.A. Load balancing and service discovery using Docker Swarm for microservice based big data applications. J. Cloud Comput. 2023, 12, 4. [Google Scholar] [CrossRef]
  43. Kavitha, C.; Gadekallu, T.R.; Kavin, B.P.; Lai, W.C. Filter-Based Ensemble Feature Selection and Deep Learning Model for Intrusion Detection in Cloud Computing. Electronics 2023, 12, 556. [Google Scholar] [CrossRef]
  44. Zhao, H.; Feng, N.; Li, J.; Zhang, G.; Wang, J.; Wang, Q.; Wan, B. VM performance-aware virtual machine migration method based on ant colony optimization in cloud environment. J. Parallel Distrib. Comput. 2023, 176, 17–27. [Google Scholar] [CrossRef]
  45. Wu, Q.; Zhao, Y.; Fan, Q.; Fan, P.; Wang, J.; Zhang, C. Mobility-Aware Cooperative Caching in Vehicular Edge Computing Based on Asynchronous Federated and Deep Reinforcement Learning. IEEE J. Sel. Top. Signal Process. 2023, 17, 66–81. [Google Scholar] [CrossRef]
  46. Song, Y.; Xin, R.; Chen, P.; Zhang, R.; Chen, J.; Zhao, Z. Identifying performance anomalies in fluctuating cloud environments: A robust correlative-GNN-based explainable approach. Future Gener. Comput. Syst. 2023, 145, 77–86. [Google Scholar] [CrossRef]
  47. Jiang, H.; Dai, X.; Xiao, Z.; Iyengar, A.K. Joint Task Offloading and Resource Allocation for Energy-Constrained Mobile Edge Computing. IEEE Trans. Mob. Comput. 2022, 22, 4000–4015. [Google Scholar] [CrossRef]
  48. Uppal, M.; Gupta, D.; Juneja, S.; Dhiman, G.; Kautish, S. Cloud-based fault prediction using IoT in office automation for improvisation of health of employees. J. Healthc. Eng. 2021, 2021, 8106467. [Google Scholar] [CrossRef]
Figure 1. Green Cloud Components.
Figure 2. VM Allocation [1,8].
Figure 3. Average Power Consumption against [1,8,28].
Figure 4. SLA-V against [1,8,28].
Table 1. (a) Power Consumption. (b) VM Migration Count.
(a)
Total Number of VMs | Total Number of PMs | PC Proposed (W) | PC Beloglazov et al. [1] (W) | PC Nashaat et al. [8] (W) | PC Garg et al. [28] (W)
100 | 10 | 36.19172 | 39.48025 | 39.46517 | 37.844
200 | 20 | 82.25001 | 91.4687 | 83.53219 | 87.8643
300 | 30 | 123.8285 | 128.453 | 131.6822 | 126.29
400 | 40 | 163.2594 | 172.086 | 164.9016 | 167.7453
500 | 50 | 216.4734 | 218.5042 | 227.9728 | 219.2439
600 | 60 | 252.1217 | 254.6763 | 264.2169 | 259.984
700 | 70 | 251.7995 | 256.1971 | 255.911 | 278.066
800 | 80 | 316.4311 | 329.7573 | 335.92 | 331.6884
900 | 90 | 321.5957 | 337.6987 | 323.512 | 356.4329
1000 | 100 | 423.3387 | 442.0967 | 428.9879 | 462.976
(b)
Total Number of VMs | Total Number of PMs | Migrations Proposed | Migrations Beloglazov et al. [1] | Migrations Nashaat et al. [8] | Migrations Garg et al. [28]
100 | 10 | 49 | 49 | 49 | 52
200 | 20 | 95 | 95 | 99 | 97
300 | 30 | 148 | 153 | 160 | 160
400 | 40 | 212 | 216 | 233 | 226
500 | 50 | 268 | 268 | 293 | 294
600 | 60 | 310 | 326 | 344 | 317
700 | 70 | 343 | 366 | 349 | 378
800 | 80 | 343 | 353 | 348 | 351
900 | 90 | 480 | 508 | 508 | 499
1000 | 100 | 467 | 481 | 490 | 472
