
Received 15 February 2023, accepted 1 March 2023, date of publication 10 March 2023, date of current version 22 March 2023.

Digital Object Identifier 10.1109/ACCESS.2023.3255781

Task Scheduling in Cloud Computing: A Priority-Based Heuristic Approach

SWATI LIPSA¹, RANJAN KUMAR DASH¹ (Senior Member, IEEE), NIKOLA IVKOVIĆ², AND KORHAN CENGIZ³ (Senior Member, IEEE)

¹Department of Information Technology, Odisha University of Technology and Research, Bhubaneswar 751029, India
²Faculty of Organization and Informatics, University of Zagreb, 10000 Zagreb, Croatia
³Department of Computer Engineering, Istinye University, 34010 Istanbul, Turkey

Corresponding author: Ranjan Kumar Dash (rkdash@outr.ac.in)

This work was supported by the Croatian Science Foundation under Project IP-2019-04-4864.

ABSTRACT In this paper, a task scheduling problem for a cloud computing environment is formulated using the M/M/n queuing model. A priority assignment algorithm is designed that employs a new data structure, named the waiting time matrix, to assign a priority to each task upon arrival. In addition, the waiting queue implements a unique concept based on the principle of the Fibonacci heap for extracting the task with the highest priority. This work introduces a parallel algorithm for task scheduling in which the priority assignment to tasks and the building of the heap are executed in parallel with the scheduling of both non-preemptive and preemptive tasks. The proposed work is illustrated step by step with an appropriate number of tasks. The performance of the proposed model is compared in terms of overall waiting time and CPU time against existing techniques such as BATS, IDEA, and BATS+BAR to determine the efficacy of the proposed algorithms. Additionally, three distinct scenarios are considered to demonstrate the competency of the task scheduling method in handling tasks with different priorities. Furthermore, the task scheduling algorithm is also applied in a dynamic cloud computing environment.

INDEX TERMS Fibonacci heap, cloud computing, preemptive scheduling, priority queue, task scheduling,
virtual machine.

I. INTRODUCTION

Cloud computing refers to the provision of on-demand computing resources, including anything from software to storage and processing power [1]. Due to technological improvements, a wide range of sectors is now adopting cloud computing applications to improve and streamline their operations. These applications are accessible from different geographical locations at any given time. It provides diverse services across multiple sectors, including data storage, social networking, education, medical management, and entertainment, among others. Cloud computing services fall into three broad categories: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). A public cloud is made available to the general public on a pay-as-you-go basis, while a private cloud refers to a company's or organization's internal data centers that are not accessible to the general public.

A cloud permits workloads to be easily installed and scaled owing to the fast provisioning of a virtual or physical machine [2]. In a cloud computing environment, multiple virtual machines (VMs) can share physical resources (CPU, memory, and bandwidth) on a single physical host, and multiple VMs can share a data center's bandwidth using network virtualization. As there are usually many user requests, a significant challenge is to efficiently schedule user requests with a minimal turnaround time for tasks related to user demands.

The associate editor coordinating the review of this manuscript and approving it for publication was Kuo-Ching Ying.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/ (VOLUME 11, 2023)

Task scheduling is used to schedule tasks for optimum resource utilization by allocating specific tasks to certain resources at specific times. Tasks are computational activities that may necessitate diverse processing skills and resource requirements such as CPU, memory, number of nodes, network bandwidth, etc. Each task may have different
criteria, such as task priority, a deadline for completion, an estimated execution time, and so on. The task scheduling problem covers two categories of users: cloud providers and consumers. Cloud consumers seek to run their tasks to solve problems of various scales and levels of complexity, whereas resources from cloud service providers will be used to execute custom tasks. Cloud consumers will benefit from prudent resource selection and aggregation, while cloud providers will gain from optimal resource utilization. Since many users and applications share device resources, appropriate task scheduling is essential and crucial [7].

Task scheduling in cloud computing includes two basic types of scheduling approaches: preemptive and non-preemptive scheduling methods. In preemptive scheduling, the VM is assigned to a task for a specified amount of time, whereas in non-preemptive scheduling, the VM is assigned to the task until it finishes.

Task scheduling and resource management allow cloud providers to optimize revenue and resource usage to the maximum extent possible. The scheduling and distribution of resources appear to be significant bottlenecks in the effective utilization of cloud computing resources. This bottleneck in efficient scheduling in turn inspires researchers to explore task scheduling in cloud computing. The fundamental principle behind task scheduling is to arrange tasks so as to attenuate time loss and boost performance. Systems without a proper task scheduling feature may exhibit longer waiting periods and even push less important tasks towards starvation. Hence, scheduling strategies must include important parameters like the nature, size, and execution time of tasks, as well as the availability of computing resources, when calculating task priorities and finalizing scheduling decisions.

In this paper, we address a time-efficient heuristic model for task scheduling in a cloud computing environment. The remainder of the paper is organized as follows: Section II gives an overview of existing approaches. Section III describes the proposed task scheduling model. Section IV presents an illustration of the proposed algorithms. Section V demonstrates the experimental results and their comparison with existing algorithms. Finally, the paper is concluded in Section VI.

II. RELATED WORKS

In the work [3], the task scheduling problem is designed using an integer linear program (ILP) formulation. The work carried out in [4] finds the most suitable task for execution by integrating the pairwise comparison matrix technique and the Analytic Hierarchy Process (AHP). They use these techniques to rank the tasks for effective resource allocation. They also use an induced matrix to enhance the consistency among the tasks.

In the paper [5], a multi-objective optimization problem for scheduling tasks among VMs is formulated by considering parameters such as execution cost, transfer time, power consumption, and queue length. They use Multi-Objective Particle Swarm Optimization (MOPSO) and a Multi-Objective Genetic Algorithm (MOGA) to implement their model in the CloudSim toolkit environment. They conclude that MOPSO is a better method than MOGA for solving such problems.

In order to minimize the overall makespan of a set of tasks, the model in [6] uses a Dynamic Adaptive Particle Swarm Optimization (DAPSO) algorithm. They also propose a task scheduling algorithm, called MDAPSO, that incorporates both the Dynamic Adaptive PSO (DAPSO) algorithm and the Cuckoo Search (CS) algorithm. According to the simulation results provided in this paper, MDAPSO and DAPSO perform better than PSO.

The work carried out in [7] defines a divisible task scheduling problem by using a nonlinear programming approach. A divisible load scheduling algorithm has been proposed in this work that takes network bandwidth availability into account. The authors of [8] propose an optimal task scheduling algorithm using the Discrete Symbiotic Organism Search (DSOS) algorithm. This work reveals that DSOS is well suited to handling large-scale scheduling problems, as it converges more quickly than PSO for a wide search area.

A priority-based task scheduling algorithm is presented in [9]. The priority is assigned to different tasks as per their classification based on the deadline and minimum cost. The work [10] presented a fault-tolerance-aware scheduling strategy based on the dynamic clustering league championship algorithm (DCLCA), which reflects the currently available resources and reduces the premature failure of autonomous activities. The paper [11] presents a dynamic priority-based job scheduling algorithm where the priority is dynamically set considering CPU usage, IO usage, and job criticality. This algorithm decreases issues related to starvation.

The work carried out in [12] uses Orthogonal Taguchi Based-Cat Swarm Optimization (OTB-CSO) to optimize the overall task processing time. The Scheduling Cost Approach (SCA) proposed in [13] can determine the cost of CPU, RAM, bandwidth, and storage, with tasks being prioritized based on the user's budget. They compared their work with FCFS and Round Robin scheduling and found it to be better than both. The performance of different scheduling approaches is discussed in [14] considering different parameters like resource utilization, energy consumption, etc.

The authors of [15] propose a grouped task scheduling (GTS) algorithm for task scheduling in a cloud computing environment. They use quality of service as the scheduling criterion. In this work, the tasks are first grouped into five different categories and then scheduled in increasing order of their execution time. In [16], various scheduling strategies have been discussed for the effective usage of resources so as to minimize the power consumption and cost of processing.

A computational framework is proposed in [17] for cloud service selection and evaluation. This framework is an
integration of the Analytic Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The task scheduling algorithm presented in [18] uses the Genetic Gray Wolf Optimization Algorithm (GGWO), where the GGWO algorithm is a combination of the Gray Wolf Optimizer (GWO) and a Genetic Algorithm (GA).

Four task scheduling algorithms, namely CZSN, CDSN, CDN, and CNRSN, have been proposed by [19] for multi-cloud environments that are heterogeneous by nature. They use normalization techniques like z-score and decimal scaling to propose the first two algorithms, respectively, while the other two algorithms are based on distribution scaling and nearest radix scaling, respectively.

A multi-objective hybrid strategy, which amalgamates the desirable features of two algorithms, the bacteria foraging (BF) algorithm and the genetic algorithm (GA), is proposed in [20] for task scheduling in cloud computing. In the work [21], the proposed scheduling algorithm is an integration of four techniques, namely, the modified analytic hierarchy process (MAHP), longest expected processing time preemption (LEPT), bandwidth-aware divisible scheduling (BATS)+BAR optimization, and divide-and-conquer methods. The method [22] uses a queue to store and manage all the incoming tasks to the system. The task allocation is performed by assigning a priority to each task. They use a Hybrid Genetic-Particle Swarm Optimization (HGPSO) algorithm for the assignment of different tasks.

The work carried out in [23] applies the mean grey wolf optimization algorithm to minimize the overall makespan and energy consumption in cloud computing networks. Gravitational Search and the Non-dominated Sorting Genetic Algorithm (GSA and NSGA) have been used in [24] to select the candidate VMs. The cloudlet scheduling problem has been solved by applying the monarch butterfly optimization algorithm [25].

The paper [26] presents an improved version of the Moth Search Algorithm (MSA) for solving the cloud task scheduling problem. Similarly, the scheduling algorithm [27] uses a modified GA integrated with a greedy strategy (MGGS). An intelligent meta-heuristic algorithm has been presented in [29] that combines the imperialist competitive algorithm (ICA) and the firefly algorithm (FA). A genetic algorithm has been used in [28] for task allocation among the VMs in cloud computing; they use the CloudSim toolkit for simulation purposes. A new method based on the nature of the grey wolf has been proposed in [30] to select the best scheduling algorithm. In the proposed method [31], tasks are prioritized considering two factors, consumer preferences and pre-defined criteria, using the Best-Worst Method (BWM) as a light Multi-Criteria Decision-Making (MCDM) technique for task scheduling in cloud computing.

The paper [32] proposes a new heuristic method termed Efficient Resource Allocation with Score (ERAS) for task scheduling in a cloud computing network. A supervised machine learning technique has been used in [33] to select the best scheduling algorithm for effectively allocating tasks to VMs.

The method proposed in [34] uses the Bumble Bee Mating Optimization (BBMO) algorithm to optimize the makespan of the tasks. Similarly, the Total Resource Execution Time Aware Algorithm (TRETA) can be found in [35] for solving the task scheduling problem. In the work [36], a modified Harris Hawks Optimization (HHO) along with the simulated annealing (SA) algorithm is used to propose the HHSOA approach for job scheduling in cloud computing. Game theory has been applied in [37] to propose a task scheduling algorithm while considering the reliability of the balanced tasks. In the work [38], the task scheduling problem is defined as a multi-objective optimization model, and its solution strategy includes the whale optimization algorithm (WOA). Further, an Improved WOA for cloud task scheduling (IWC) is also proposed in this paper. A parental prioritization earliest finish time (PPEFT) based scheduling algorithm has been proposed in [39] for a heterogeneous environment. The tasks are scheduled in the parental priority queue (PPQ) on the basis of downward rank and parental priority. The simulated results show that this algorithm outperforms HEFT and CPOP in terms of cost and makespan of schedules. An intelligent scheduling mechanism has been proposed in [40] that uses genetic-algorithm-based multiphase fault tolerance (MFTGA) to schedule tasks over VMs. This strategy works through four phases, namely, the individual phase, local phase, global phase, and fault tolerance phase. It was compared against GA and the Adoptive Incremental Genetic Algorithm (AIGA) in terms of execution time, memory usage, overall energy consumption, SLA violation, and cost. This comparison reveals the proposed strategy to have better performance than its counterpart methods discussed above. Ajmal et al. [41] proposed a hybrid task scheduling approach that combines a genetic algorithm and ant colony optimization for the allocation of tasks to various VMs. This work substantially decreased both execution time and data center cost, by 64% and 11%, respectively.

The works discussed so far are summarized in Table 1. The abbreviations used in Table 1 are P - Preemptive, NP - Non-preemptive, TAA - Task arrival assumption, E - Evolutionary, and NE - Non-evolutionary. This table suggests the following issues and challenges in task scheduling for a cloud infrastructure that must be taken into consideration while designing an efficient task scheduling model:
1) Efficiency of priority algorithms: Priority-based scheduling algorithms must be time-efficient in computing the priority of tasks while taking into account important decision criteria such as waiting time, execution time, etc.
2) Adoption of a queuing model: The queuing model has been adopted to accommodate the tasks in the system, but its implementation details are missing in the literature. Further, the general queue cannot be used
here due to the randomness and dynamic nature of the tasks.
3) Nature of task scheduling: Most of the methods concentrate only on the non-preemptive allocation of tasks among the VMs. However, in reality, we require a method that can handle both the non-preemptive and preemptive natures of task scheduling. Preemption of tasks can occur for many reasons, as explained in [42], even though it may cause some overheads. In terms of waiting time, preemption of tasks is performed to minimize the overall waiting time of the tasks.

The main objectives of the work carried out in this paper, which is an attempt to resolve the above-enumerated issues, are presented below:
1) The task scheduling problem is formulated by using the queuing model to minimize the overall waiting time for each task.
2) A new data structure, namely the waiting time matrix, is defined in the proposed scheduling model.
3) Another algorithm is introduced in this work to assign a priority to each incoming task based on its size.
4) A detailed implementation of the waiting queue is presented, which uses a Fibonacci heap data structure.
5) A new parallel algorithm is proposed for the non-preemptive and preemptive scheduling of tasks.

III. PROPOSED TASK SCHEDULING MODEL

The proposed task scheduling model for a cloud computing environment consists of the following mechanisms, which are depicted in Figure 1:
1) An algorithm to assign a priority to each task upon its arrival in the system, i.e., Priority Assignment to Tasks (PAT)
2) A priority queue implemented using a Fibonacci heap
3) Task scheduling algorithms (both non-preemptive and preemptive)
4) A virtual machine associated with a FIFO queue.
The task scheduling problem is NP-hard; it is formulated in the following sub-section so that the overall waiting time of each task is minimized.

A. PROBLEM FORMULATION

Let there be n identical virtual machines denoted by S_1, S_2, ..., S_n at one data center. Users make requests by sending tasks to the cloud system to get the desired output. Let the total number of tasks T_1, T_2, ..., T_m at a particular time be m. The time required to send a task to the cloud and receive back the results is defined as the transmission time (T_T). Similarly, the time taken by the VM to complete the execution of a task is called the processing time (T_P). Thus, the turnaround time T_TA of a task can be defined as:

    T_TA(T_i) = T_T(T_i) + T_P(T_i)    (1)

If a task needs to wait before its execution, equation (1) can be rewritten as:

    T_TA(T_i) = T_T(T_i) + T_P(T_i) + T_W(T_i)    (2)

where T_W(T_i) is the waiting time of task T_i.

In a cloud computing environment, the inter-arrival time and service time of the tasks can be assumed to be exponentially distributed [48] and thus follow the M/M/n queue model [49]. Let α be the mean arrival rate and β be the mean service rate; then the probability that the VMs are busy [44] is expressed as:

    P_Busy = α / (n × β)    (3)

The mean number of tasks in the queue can be computed by using the following equation:

    Q_M = P_0 (α/β)^n P_Busy / (n! (1 − P_Busy)^2)    (4)

where P_0 is the probability that there are 0 tasks in the system [44] and is expressed as:

    P_0 = 1 / ( Σ_{i=0}^{n−1} (n P_Busy)^i / i! + (n P_Busy)^n / (n! (1 − P_Busy)) )    (5)

The mean waiting time [44] in the queue can be calculated from equation (4), i.e.,

    T_W = Q_M / α    (6)

Thus, the turnaround time of a task can be expressed by using equations (2) and (6) as:

    T_TA(T_i) = T_T(T_i) + 1/β(T_i) + Q_M(T_i)/α    (7)

The turnaround time of a task T_i can be minimized by minimizing the mean waiting time of the task in the queue, since the transmission time and processing time are normally fixed for a task. Thus, the minimization problem can be formulated as follows:

    Minimize: Σ_{i=1}^{m} Q_M(T_i)/α    (8)

such that the following constraints are satisfied:

    1 ≤ Q_M ≤ m    (9)

    α_l ≤ α ≤ α_u    (10)

and

    P_0, P_Busy < 1    (11)

Constraint (9) ensures that there is an upper bound (m) on the mean number of tasks present in the system. The second constraint, (10), defines a lower and upper bound on the task arrival rate, without which the system will become highly volatile and unstable.
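The queue metrics above follow the standard M/M/n results. As a self-contained illustration (a sketch only; the function name and the example parameter values are chosen here, not taken from the paper), equations (3)-(6) can be evaluated as:

```python
import math

def mmn_metrics(alpha, beta, n):
    """Evaluate equations (3)-(6) of the M/M/n model.

    alpha: mean task arrival rate; beta: mean service rate per VM;
    n: number of identical VMs. Returns (P_Busy, P_0, Q_M, T_W).
    """
    p_busy = alpha / (n * beta)          # eq. (3); must stay below 1 (constraint (11))
    if p_busy >= 1:
        raise ValueError("unstable system: require alpha < n * beta")
    # eq. (5): probability that the system holds zero tasks
    head = sum((n * p_busy) ** i / math.factorial(i) for i in range(n))
    tail = (n * p_busy) ** n / (math.factorial(n) * (1 - p_busy))
    p0 = 1.0 / (head + tail)
    # eq. (4): mean number of tasks waiting in the queue
    q_m = p0 * (alpha / beta) ** n * p_busy / (math.factorial(n) * (1 - p_busy) ** 2)
    t_w = q_m / alpha                    # eq. (6): mean waiting time in the queue
    return p_busy, p0, q_m, t_w
```

For example, with α = 2, β = 3, and n = 2 VMs, this gives P_Busy = 1/3, P_0 = 1/2, Q_M = 1/12, and T_W = 1/24; minimizing the objective (8) then amounts to keeping Q_M small subject to constraints (9)-(11).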

TABLE 1. Summary of related work.

The values of the lower and upper bounds on the task arrival rate are system dependent [44] (however, this work suggests that any value between 1 and 65 can be a profitable task arrival rate, measured per second). Both probability terms in constraint (11) are tightly bounded by 1; otherwise, the turnaround time becomes indefinite.

B. PRIORITY ASSIGNMENT TO TASKS (PAT)

Every task remains in the queue after its arrival in the system. The priority of each task must be computed for its subsequent execution while minimizing its waiting time. Thus, if two tasks (T_i and T_j) are waiting for a VM with priority values x and y respectively, the task T_i is executed


FIGURE 1. Proposed scheduling model.

before T_j iff x > y, to minimize the overall waiting time of the tasks, i.e., T_i ≻ T_j iff x > y, where ≻ represents the ordering of tasks.

The processing time is the aspect to consider when prioritizing tasks, since it depends largely on the size of the tasks in almost all cases. The following heuristic function [47] is used to approximate the execution time of each task:

    P_T(T_i) = size(T_i) × n × 8 / MIPS    (12)

where n specifies the n-bit architecture and MIPS represents million instructions per second.

A new data structure called the waiting time matrix is proposed, whose elements are defined below:
1) The diagonal elements are set to 1.
2) Assuming a sequential execution of the tasks as per their arrival, the waiting time for each task is calculated and placed in the upper part of the diagonal of the matrix, e.g., if the ordering of the tasks is T_1 ≻ T_2 ≻ T_3 ≻ T_4, the waiting times are WT(T_1) = 0, WT(T_2) = P_T(T_1), WT(T_3) = P_T(T_1) + P_T(T_2), and WT(T_4) = P_T(T_1) + P_T(T_2) + P_T(T_3):

    | 1   WT(T_2)   WT(T_3)   WT(T_4) |
    | −   1         WT(T_3)   WT(T_4) |
    | −   −         1         WT(T_4) |
    | −   −         −         1       |

3) The lower part of the diagonal of this matrix represents the reverse ordering of the tasks and is filled with the reciprocal values of the waiting times.

Thus, the waiting matrix is represented as:

    | 1           WT(T_2)     WT(T_3)     WT(T_4) |
    | 1/WT(T_2)   1           WT(T_3)     WT(T_4) |
    | 1/WT(T_3)   1/WT(T_2)   1           WT(T_4) |
    | 1/WT(T_4)   1/WT(T_3)   1/WT(T_2)   1       |

Algorithm 1 Priority_Assignment_to_Task(Task)
1: while True do
2:   for each task T_k do
3:     Calculate the waiting time WT(T_k)
4:   end for
5:   Generate the waiting matrix W_M
6:   Compute the eigenvectors (egvt) and eigenvalues (egv) of W_M
7:   let λ_max = max(egv)
8:   compute the consistency index (CI) = (λ_max − m)/(m − 1)
9:   compute the consistency ratio (CR) = CI/RI
10:  if CR < 0.1 then
11:    break
12:  else
13:    decrease WT(T_k) (i.e., for the task with maximum waiting time)
14:  end if
15: end while
16: for k = 1 to m do
17:   Priority(T_k) = 1 − maximum eigenvalue of egvt(k)
18: end for

Time complexity of Priority_Assignment_to_Task: The while loop (step 1) can be executed up to the number of tasks, i.e., m times in the worst case, each iteration decreasing the waiting time of one task.
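As a concrete illustration of the PAT procedure, the waiting time matrix can be built from the heuristic processing times of equation (12) and its principal eigenpair obtained by power iteration. This is a simplified sketch, not the paper's implementation: the consistency-repair loop of Algorithm 1 is omitted, the lower-triangle filling follows one plausible reading of the matrix printed above, and all names and default values are chosen here:

```python
def processing_time(size_units, n_bits=64, mips=4000.0):
    """Heuristic execution-time estimate of eq. (12): size * n * 8 / MIPS.
    n_bits and mips are illustrative defaults, not values from the paper."""
    return size_units * n_bits * 8 / mips

def waiting_matrix(proc_times):
    """Waiting time matrix W_M: diagonal 1; upper part holds cumulative
    waiting times WT(T_j); lower part holds reciprocal waiting times in
    reverse task order (assumed reading of the paper's matrix)."""
    m = len(proc_times)
    wt = [0.0]
    for p in proc_times[:-1]:
        wt.append(wt[-1] + p)            # WT(T_k) = sum of earlier processing times
    W = [[1.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(m):
            if j > i:
                W[i][j] = wt[j]
            elif j < i:
                W[i][j] = 1.0 / wt[i - j]
    return W

def principal_eigen(W, iters=1000):
    """Dominant eigenvalue/eigenvector of a positive matrix by power iteration
    (stands in for the eigen computation in steps 6-7 of Algorithm 1)."""
    m = len(W)
    v = [1.0] * m
    lam = 1.0
    for _ in range(iters):
        u = [sum(W[i][j] * v[j] for j in range(m)) for i in range(m)]
        lam = max(abs(x) for x in u)
        v = [x / lam for x in u]
    return lam, v

def consistency_ratio(lam, m, ri):
    """CI = (lam_max - m)/(m - 1) and CR = CI/RI, as in steps 8-9."""
    return ((lam - m) / (m - 1)) / ri
```

For tasks of sizes 1000, 2000, and 1500 (arbitrary units), the matrix is 3×3, and the eigenvector components can then be mapped to priorities as in step 17 of Algorithm 1.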

The calculation of the waiting times of the m tasks involves a worst-case time complexity of O(m²), which supersedes all other operations of the algorithm. Thus, the worst-case time complexity of the proposed Priority_Assignment_to_Task algorithm is found to be O(m³).

From the above discussion, the following observation can be made for the priority value computed for a task:
Lemma 1: The priority value of each task lies between 0 and 1.
Proof: As the priority is assigned based on the eigenvalue and eigenvector, it must lie between 0 and 1.

C. REPRESENTATION OF THE WAITING QUEUE

The task, upon its arrival in the system, is assigned a priority using the algorithm above and is appended to the queue. Since each task carries a certain priority, a priority queue is preferred here over any other type of queue. This priority queue is implemented using a Fibonacci heap [24] due to the following facts:
1) It has a height of h when it contains 2^h nodes and thus accommodates a very large number of tasks (Table 2).
2) Its amortized time complexity is proven to be substantially more efficient than that of many other priority queue data structures. An explanation of amortized time complexity can be found in [46].
3) The worst-case time complexity of basic operations such as creation, insertion, and union is O(1), while extraction of the maximum requires O(m).

TABLE 2. Number of tasks vs the height of the Fibonacci heap.

In Fibonacci heaps [45], each node x contains a pointer p[x] to its parent, as well as a pointer child[x] to any one of its children. The children of x are connected together in the child list of x through a circular doubly linked list. Each child y in a child list has left[y] and right[y] pointers that point to y's left and right siblings, respectively. If node y is an only child, then left[y] = right[y] = y.

Initially, when the Fibonacci heap (H) is empty, new tasks are inserted into the Fibonacci heap according to their priority values by Fib_Heap_Insertion(H, element). Fib_Heap_Union(H_1, H_2) performs the union operation between two heaps (H_1, H_2). The extraction of the maximum element of the Fibonacci heap is performed by Fib_Heap_Extraction_Max(H). To reduce the number of trees in the Fibonacci heap, the heap is rebuilt by using Rebuild(H). An example of the same is presented in Figure 2. The details of this operation can be found in [46].

FIGURE 2. Graphical representation of the heap data structure used in the priority queue.

Fib_Heap_Extraction_Max(H) is the most significant operation in the Fibonacci heap. This process involves removing the node with the highest value from the heap and readjusting the heap. The operation proceeds through the steps illustrated in the following example:
1) Initially, the root nodes are 0.84, 0.99, and 0.96.
2) The max-pointer marks 0.99 as the maximum-valued node.
3) Fib_Heap_Extraction_Max(H) extracts the maximum-valued node, i.e., 0.99.
4) This extraction operation makes the child node of 0.99, i.e., 0.44, a root node.
5) The max-pointer now points to node 0.96.
6) As 0.84 and 0.44 have the same degree, they are united, making 0.84 the root and 0.44 its child.
Thus, the final heap is built after the extract-max operation. These steps are repeated for each subsequent extract-max operation.

D. PROPOSED TASK SCHEDULING ALGORITHM

When a task enters the system, its priority is computed and the task is added to the Heap queue. The scheduling algorithm must allocate a VM to the task with the highest priority. This allocation holds implicitly unless a new task arrives with

a higher priority than the currently running tasks. When it Algorithm 2 Parallel_Task_Scheduling_algorithm(Task)
comes to accomplishing a certain task, the former adheres do in parallel
to its non-preemptive nature, whereas the latter adheres to for arrival of each α no. of tasks do
its preemptive nature. Moreover, both time and memory try{
are saved by adopting non-preemption of tasks. However, Priority_Assignment_to_Task(Task) where Task=
preemption of tasks though being less efficient in terms of T1 , T2 , . . . , Tα
CPU time and memory can not be avoided. Hence, it is }
necessary to switch between these methods for efficient task if Heap=NIL then
completion. However, this switching cannot be accomplished try{ Fib_Heap_Insertion(Heap,T) for each new
by sequential algorithms in which priority is calculated first task(T) }
and then the task is scheduled. Thus, the priority assigned to WT (Tk ) = 0 for each Tk ϵHeap
the task along with the building of the Heap must be executed else
concurrently with the scheduling algorithm. As initially there for new task(Tj ) do
is no task in the system, the heap must be built first and then try{ Fib_Heap_Insertion(H , Tj ) }
only the tasks can be scheduled, which in turn requires that for WT (Tj ) = 0
the arrival of each α number of tasks, the priority assignment WT (Si , Tk ) = 0 for each Tk ϵH and Si ϵS
and Heap construction must be accomplished while during try{ Fib_Heap_Union(Heap,H) }
the next the arrival of α number of tasks, the task must be catch()
allocated. In order to the above discussed steps, a parallel {
scheduling algorithm is proposed for optimizing the overall report the error message
waiting of tasks. The algorithm uses the following operations Rebuild(Heap) and Restart execution
and data structures: }
1) The status of each Si is defined as: Si .Busy() = 1 if the while Heap ̸ = φ do
VM Si is currently executing a task(Tj ) else Si .Busy() = delay(α ms)
0 indicating it is free. try{
2) Si(Tj) - the assignment of the task Tj to the VM Si
3) ts - the timestamp recorded for each task as it arrives in the system
4) WT(Tk) - the waiting time of the kth task
5) WT(Si, Tk) - the time the kth task waits for the ith VM Si
6) FIFO - a queue to store the tasks preempted by a VM
7) Heap and FIFO are the global data structures

Algorithm 2 Parallel_Task_Scheduling
        Tj = Fib_Heap_Extraction_Max(Heap)
    }
    if ts(Tj) ≥ ts(Tk) ∀ Tk ∈ Heap and ∀ Si = 1 and key[Tj] > key[Si(Tk)] ∀ i, k then
        try{ execute Preemptive_task_scheduling }
    else
        try{ execute Non-preemptive_task_scheduling }
    for each VM Si do
        for each task Tk do
            WT(Tk) = WT(Tk) + WT(Si, Tk) if Si(Tk) = True
    catch()
    {
        report the error message
        Rebuild(Heap) and Restart execution
    }

During the initial execution of the proposed algorithm, priority is assigned to the tasks. Then, the Heap is built in descending order of the priority values of the tasks. The building of the heap is executed concurrently with the scheduling of the tasks (while Heap ≠ φ). The first step of this part of the algorithm delays its execution by α ms in order to allow the heap to be built for α tasks. In the Parallel_Task_Scheduling algorithm, the assignment of priority to tasks and the building of the Heap proceed in parallel with the scheduling of tasks. The timestamp (ts) of each task is recorded. If Tj is the task extracted from the Fibonacci heap, its ts is compared against that of each Tk ∈ Heap only when there are no free VMs. If this condition holds, a task is preempted and Preemptive_task_scheduling is executed accordingly; otherwise, Non-preemptive_task_scheduling is executed. The last for loop of this algorithm calculates the waiting time of each task across the VMs (i.e., WT(Si, Tk) if Si(Tk) = True). The flow control of the proposed algorithm for task scheduling is depicted in Figure 3.

Error handling mechanism in the Parallel_Task_Scheduling algorithm:
In order to handle different types of errors, such as task-specific errors, heap-specific errors, and running-time errors, each error-sensitive step of this algorithm is embedded in a try block. The catch block used in this algorithm reports the errors, rebuilds the heap, and then restarts the execution (details can be found in [42]).

Time complexity of the Parallel_Task_Scheduling algorithm:
The worst-case time complexity of the for loop is O(α). The worst-case time complexity of the while loop is O(|Heap| × n).
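The extract-max behaviour of the waiting queue can be sketched in a few lines of Python, using the standard heapq module (a binary min-heap with negated keys) as a stand-in for the Fibonacci heap; the task names and priority values below are illustrative, not taken from the paper's implementation:

```python
import heapq

class WaitingQueue:
    """Max-priority waiting queue; heapq is a min-heap, so keys are negated.
    A Fibonacci heap would give O(1) amortized insertion, which matters while
    the heap is being built concurrently with scheduling."""

    def __init__(self):
        self._heap = []

    def insert(self, key, task):       # stands in for Fib_Heap_Insertion
        heapq.heappush(self._heap, (-key, task))

    def extract_max(self):             # stands in for Fib_Heap_Extraction_Max
        key, task = heapq.heappop(self._heap)
        return -key, task

    def __len__(self):
        return len(self._heap)

q = WaitingQueue()
for key, task in [(0.2201, "T1"), (0.7885, "T2"), (0.9790, "T3"), (0.9985, "T4")]:
    q.insert(key, task)
print(q.extract_max())  # (0.9985, 'T4'): the highest-priority task comes out first
```

With a real Fibonacci heap only the asymptotic costs change; the scheduler's interface (insert, extract-max) stays the same.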

27118 VOLUME 11, 2023


S. Lipsa et al.: Task Scheduling in Cloud Computing: A Priority-Based Heuristic Approach

FIGURE 3. Flowchart of the proposed parallel task scheduling algorithm.

The heap can accommodate m tasks in the worst-case scenario. Thereby, the worst-case time complexity of the proposed Algorithm 2, i.e., the Parallel_Task_Scheduling algorithm, is found to be O(m × n).

1) NON-PREEMPTIVE TASK SCHEDULING ALGORITHM
Non-preemptive task scheduling involves the steps mentioned in Non-preemptive_task_scheduling.

Algorithm 3 Non-preemptive_task_scheduling(Task)
1: bool=False
2: for each VM Si do
3:     if Si.Busy() = 0 then
4:         assign Si with Tj = Fib_Heap_Extraction_Max(Heap)
5:         bool = True
6:     else if bool=True then
7:         for each Tk ∈ Heap do
8:             WT(Si, Tk) = WT(Si, Tk) + PT(Si(Tj))
9:         end for
10:    end if
11: end for

In Algorithm 3, the Boolean variable bool is used to determine the waiting time of the remaining tasks (step-1). A new task must wait if any other task is assigned to any of the VMs, and its waiting time is updated accordingly. This happens for bool=True (steps 6-9). However, for bool=False, the tasks must also wait, but their waiting time must not be updated (since it has already been updated during the assignment of the previous task to the VM).

Time complexity of Non-preemptive_task_scheduling():
The for loop (step-2) of Non-preemptive_task_scheduling() can execute up to a maximum of n times, where n is the number of VMs. The nested for loop in step-7 of this algorithm is executed up to a maximum number of times equal to the heap size. In the worst-case scenario, the heap can accommodate m tasks; thus, the overall worst-case time complexity of the proposed Non-preemptive_task_scheduling() algorithm is found to be O(m × n).

2) PREEMPTIVE TASK SCHEDULING ALGORITHM
The steps involved in preemptive task scheduling are:
1) The preemption of tasks occurs when any task executed by the processor has lower priority than the tasks which are waiting.
2) In order to ensure that no task waits for an indefinite amount of time (i.e., the starvation problem), the priority of each task waiting for a VM is updated by a small amount (i.e., the reciprocal of its size) after each preemption operation:
   Key[Tj] = Key[Tj] + 1/size[Tj]
   size[Tj] is taken into account to preserve the initial priority ordering. The heap is rebuilt to update only the key values without any structural alteration.
Steps 4-11 of the proposed Algorithm 4 are used to check whether any task from FIFO can be allocated to the available VM. The waiting time for each task is updated by using steps 14-16. Steps 17-29 are used for preemption of the running task and to assign the VM to a newly arrived task with higher priority than that of the running task.
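The starvation guard above, Key[Tj] = Key[Tj] + 1/size[Tj], can be sketched directly; the helper name and the task sizes below are illustrative assumptions, not values from the paper's experiments:

```python
def age_priorities(keys, sizes):
    """After each preemption, raise every waiting task's key by the
    reciprocal of its size: Key[Tj] = Key[Tj] + 1/size[Tj]."""
    for t in keys:
        keys[t] += 1.0 / sizes[t]
    return keys

# Illustrative values: a large task ages slowly, a small one quickly,
# so small long-waiting tasks cannot starve behind a stream of big ones.
keys = {"T1": 0.2201, "T2": 0.7885}
sizes = {"T1": 10.0, "T2": 2.0}   # task sizes (MB is an assumed unit)
age_priorities(keys, sizes)       # T1 gains 0.1, T2 gains 0.5
```

Because the increment only ever grows a waiting task's key, every waiting task eventually overtakes any fixed running key, which mirrors the convergence argument made for the parallel scheduler.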


Algorithm 4 Preemptive_task_scheduling(Task)
1: bool=False
2: FIFO = empty
3: for each VM Si do
4:     if Si.Busy() = 0 then
5:         Tj = Fib_Heap_Extraction_Max(Heap)
6:         assign Si = max(Tj, max(FIFO))
7:         if key[Tj] is maximum then
8:             for each Tk ∈ FIFO do
9:                 WT(Si, Tk) = WT(Si, Tk) + PT(Tj)
10:            end for
11:        end if
12:        bool=True
13:    else if bool=True then
14:        for each Tk ∈ Heap do
15:            WT(Si, Tk) = WT(Si, Tk) + PT(Si(Tj))
16:        end for
17:    else if Si.Busy() = 1 then
18:        if key[Si(Tj)] ≤ key[Tk = Fib_Heap_Extraction_Max(Heap)] then
19:            FIFO.add(Tj)
20:            assign Si with Tk
21:            for each Tk ∈ Heap do
22:                WT(Si, Tk) = WT(Si, Tk) + PT(Si(Tj))
23:            end for
24:            for each Tk ∈ FIFO do
25:                WT(Si, Tk) = WT(Si, Tk) + PT(Si(Tj))
26:            end for
27:        end if
28:    end if
29: end for
30: for each Tk ∈ Heap do
31:     Key[Tk] = Key[Tk] + 1/size[Tk]
32:     Rebuild(Heap)
33: end for

Time complexity of Preemptive_task_scheduling():
The for loop (step-3) of Preemptive_task_scheduling() can execute up to a maximum of n times, where n is the number of VMs. The nested for loop in step-14 of this algorithm is executed up to a maximum number of times equal to the heap size. In the worst-case scenario, the heap can accommodate m tasks; thus, the overall worst-case time complexity of the proposed Preemptive_task_scheduling() algorithm is found to be O(m × n).

3) COMPLETENESS AND CONVERGENCE OF THE PROPOSED PARALLEL SCHEDULING ALGORITHM
The proposed algorithm increases the priority of the waiting tasks after every task preemption operation by a small amount, i.e., Key[Tj] = Key[Tj] + 1/size[Tj]. Hence, no task will wait for a long or indefinite time, which in turn assures the convergence of the algorithm. Furthermore, the waiting time of each task is considered to compute the overall waiting time, which shows that the algorithm is complete in terms of the tasks.

IV. ILLUSTRATION
The proposed algorithms are illustrated by taking the following four tasks (Table 3) and two VMs (S1, S2) with the configuration as per [21].

TABLE 3. Four tasks with their sizes as per [21].

The estimated processing time for each of the tasks is calculated by using equation (12) and is shown in the third column of Table 3. Comparing the actual processing time [21] and the estimated processing time, the heuristic adopted in this work seems to be correct (except for task 1).

Priority Assignment to Task algorithm:
Step-5: The waiting time matrix for the four tasks is

    1        33.43    70.44    100.75
    0.0299   1        70.44    100.75
    0.0141   0.0299   1        100.75
    0.0092   0.0141   0.0299   1

Step-6: The eigenvalues and eigenvectors are calculated for the waiting time matrix WM. The absolute values of the eigenvectors are shown below:

    0.77997646   0.77888532   0.77888532   0.77717669
    0.19799301   0.20340335   0.20340335   0.21154905
    0.02103499   0.02021466   0.02021466   0.01926724
    0.0015625    0.00140796   0.00140796   0.00121732
    1.0005669    1.00391128   1.00391128   1.00921030

Step-7: λmax = 9.3
Step-8: CI = 1.76
Step-10: Since CR < 0.1, there is no decrease in waiting time.
Step-16: The absolute values of the eigenvectors are normalized, i.e., the sum of each column is ≈ 1. The maximum value of each eigenvector is extracted and is:

    0.7799
    0.2115
    0.0210
    0.0015

In order to minimize the waiting time, the priority is calculated by subtracting each maximum value from 1 (Table 4).

Task scheduling:
Step-2: Since the Heap is initially Nil,
Step-3: Fib_Heap_Insertion(Heap, T1) will insert task T1 into the Heap. Subsequently, the following tasks will be inserted:
Fib_Heap_Insertion(Heap, T2)
Fib_Heap_Insertion(Heap, T3)


Fib_Heap_Insertion(Heap, T4)

TABLE 4. Tasks with priority and estimated processing time.

Step-4 or 8: The waiting time of each task is set to 0.
Step-13: The root of the Heap contains task T4 as it has the highest priority.
Step-14: Initially, S1.busy() = 0
Step-17: Execute Non-preemptive_task_scheduling(Task)

Non-preemptive Task Scheduling:
Step-4: T4 is assigned to S1 by performing the operation Fib_Heap_Extraction_Max(Heap). Extraction of task T4 from the Heap makes T3 its root. Task T3 is assigned to S2 as its status is 0.
Step-5: The above assignments set the variable bool=True, which causes the tasks T2 and T1 to wait. Thus, their waiting time is updated as follows:
Step-8: WT(S1, T1) = 0 + 30.31, WT(S2, T2) = 0 + 31.23
WT(S1, T2) = 0 + 30.31, WT(S2, T1) = 0 + 31.23
When S1 becomes free (i.e., status=0), the task T2 is assigned to it; similarly, T1 is assigned to S2 when it becomes free. The final waiting time for each task is:
WT(T4) = 0, WT(T3) = 0, WT(T2) = 30.31 and WT(T1) = 31.23
The total waiting time amounts to WT(T) = 61.54

Preemptive Task Scheduling:
Suppose that during the execution of T1 and T2 (which have already been scheduled by the non-preemptive algorithm), a new task (T5) with a priority value (0.85) higher than that of T1 and T2 enters the system, having an execution time of 30. Since the priority of the tasks is now in the order key[T5] > key[T2] > key[T1], the preemptive algorithm preempts the task T1 to the waiting state and assigns its VM to task T5. Let 15 and 16 be the remaining times for completion of the tasks T2 and T1, respectively.
Step-5: T5 = Fib_Heap_Extraction_Max(Heap)
Step-18: Since key[T1] < key[T5]
Step-19: FIFO.add(T1)
Step-20: Assign T5 to S2
Step-25: WT(S2, T1) = 31.23 + 30 = 61.23
Once the VM S1 completes the execution of T2, the scheduler checks the priority values of the remaining tasks present in the priority queue as well as in all FIFO queues. As there is only one task T1 in the waiting state, it will be assigned to S1. So, the overall waiting time of task T1 will be 31.23 + 30 = 61.23. Thus, the total waiting time for the five tasks is WT(T) = WT(T1) + WT(T2) = 61.23 + 30.31 = 91.54.

V. RESULTS AND DISCUSSION
All the algorithms are simulated with Python 3.8 on an HP workstation with an Intel Xeon W processor (3 GHz, 10 cores, and 20 MB Intel Smart Cache), 192 GB DDR4 SDRAM (6 × 32 GB), and the Ubuntu operating system. Virtual machines are created by using Oracle VM VirtualBox with the configuration: 1 GB RAM, operating system Ubuntu.
In subsection V-A, the performance of the proposed scheduling model in terms of the overall waiting time is compared with some existing techniques like BATS, IDEA, and BATS+BAR. The proposed scheduling model is used to find the overall waiting time by considering three distinct cases, and the simulated results are discussed in subsection V-B. The behavior of the proposed algorithm in a dynamic cloud computing environment is elucidated in subsection V-C.

A. COMPARISON
The Epigenomics tasks used in [21] have been executed by following the Epigenomics scientific workflow. Many tasks are executed in parallel to minimize the execution time. The exact execution time is mentioned in Table 5, as specified in [21]. However, the following are some of the limitations of [21]:
1) In reality, there is no way to know the exact execution time of a task before its actual execution. Hence, one must obtain an expected execution time in order to determine the priority to be assigned to different tasks.
2) The work done in [21] uses 20 VMs for 13 tasks, which leads to the direct assignment of VMs to the tasks. Hence, the tasks will not have any waiting time.
In order to compare the proposed PAT with BATS+BAR, the VM configuration specified in [21] has been adopted here, i.e., 9600 MIPS and an average RAM of 512 MB. The exact execution time (ET) and expected execution time E(ET) of the different tasks are presented in Table 5. Although the E(ET) values are not quite satisfactory when compared to the exact execution times, they still provide a good enough approximation for ranking the tasks for priority assignment. In order to compare the total waiting time for these tasks, 8 VMs have been considered instead of 20.
The following observations can be inferred from Table 5:
1) The overall waiting time obtained by employing BATS+BAR over 13 tasks is 135.97, which is more than that obtained by using the proposed method, i.e., 134.73.
2) The expected execution time, though an approximation, can be effective enough to assign priority to different tasks so as to minimize the overall waiting time.
CPU-bound tasks with sizes from 1 MB to 5 MB are generated from [50] by using Apache Airflow 2.3.4 in a Python environment to determine the efficiency of the different algorithms in minimizing the waiting time for large-sized tasks.
The overall waiting time calculated by the proposed method is compared against that obtained from BATS, IDEA, and BATS+BAR for 25 tasks, 40 tasks, and 50 tasks, respectively (Figure 4).


TABLE 5. The priority assignment and waiting time of different tasks by BATS+BAR and the proposed PAT.

FIGURE 4. Waiting time obtained from BATS, IDEA, BATS+BAR, and the proposed method.

FIGURE 5. Waiting time obtained from BATS, IDEA, BATS+BAR, and the proposed method upon arrival of five more tasks.

Figure 5 demonstrates the waiting time obtained upon arrival of five more tasks (preemptive scheduling). All these tasks are generated by using the above-mentioned method. 16 VMs are used, each with 9600 MIPS and 512 MB RAM. This figure shows BATS+BAR to be more efficient than BATS and IDEA. However, the proposed method outperforms BATS+BAR in terms of the overall waiting time.
The total CPU time in milliseconds to complete the execution of all the tasks is shown for BATS, IDEA, BATS+BAR, and the proposed method, considering 30 tasks, 45 tasks, and 55 tasks, respectively (Figure 6). The CPU time of the proposed algorithm is much lower than that of the other mentioned methods due to its parallel nature.

FIGURE 6. CPU time of BATS, IDEA, BATS+BAR, and the proposed method.

The impact of high-priority tasks on the waiting time of low-priority tasks is shown in Figure 7. The low-priority tasks such as task 14, task 11, task 24, task 16, and task 5 have at least a waiting time equal to the processing time of the high-priority tasks, respectively.

FIGURE 7. Impact of the priority of tasks on waiting time.


Similarly, the impact of the size of tasks on the waiting time of low- and high-priority tasks is depicted in Figure 8. The priority of each task is shown in addition to its waiting time and size.

FIGURE 8. Impact of the size of tasks on the waiting time of low- and high-priority tasks.

B. SCHEDULING OF TASKS IN CLOUD COMPUTING ENVIRONMENT
The following three distinct cases have been taken to show the efficiency of the proposed scheduling algorithm in assigning priority and scheduling tasks while minimizing the overall waiting time.
1) Case-I: Many tasks with low priority and few or no ones with high priority
In order to have low-priority tasks, very large-sized tasks (≥ 5 MB) are generated by following the steps mentioned earlier. The waiting time of 100 such tasks is shown in Figure 9. This figure also makes it clear that all the tasks are getting a fair chance for a successful execution, which in turn confirms the convergence of the proposed algorithm to optimal solutions.

FIGURE 9. Waiting time for Case-I.

2) Case-II: Many tasks with high priority and few or no ones with low priority
As explained above, small-sized tasks (≥ 10 KB to 1 MB) are used for this case. The waiting time for 100 tasks is depicted in Figure 10. This case also yields results that are similar to those obtained in Case I.

FIGURE 10. Waiting time for Case-II.

3) Case-III: Many tasks with an equal number of high- and low-priority tasks
Here, two categories of tasks are generated: very large-sized tasks (≥ 5 MB) and an equal number of small-sized tasks (≥ 10 KB to 1 MB). The behavior of the scheduling algorithm for such tasks is shown in Figure 11. This figure shows that the 50 high-priority tasks have been executed with minimal waiting time, while the remaining 50 tasks contribute heavily towards the overall waiting time of 386.25.

FIGURE 11. Waiting time for Case-III.

C. SCHEDULING OF TASKS IN DYNAMIC CLOUD COMPUTING ENVIRONMENT
Dynamic cloud computing allocates the resources to the tasks by automatically adapting to the changes in workload. The proposed Task_scheduling algorithm is applied for task scheduling in a dynamic cloud environment. The waiting time for 500 tasks is shown in Figure 12. This figure depicts the fact that initially, when there are fewer tasks, only 16 VMs are considered for their execution. However, as the number of tasks increases, more VMs are allotted to the tasks to minimize the overall waiting time.


Further, due to the gradual increase in the number of allotted VMs, the waiting time for the tasks is within the range of 0 to 60, which can be observed in Figure 12.

FIGURE 12. Waiting time of 500 tasks under a dynamic cloud computing environment.

D. THE VALIDATION OF THE PROPOSED SCHEDULING ALGORITHM ON SYNTHETIC TASKS
Test case - 1
Here, 25 synthetically generated tasks are considered. The size of the tasks and their estimated processing time are presented in Table 6. Each task is assigned a priority value by using the algorithm. The non-preemptive scheduling algorithm is used to assign each task to the VMs. The calculated waiting time for each task is shown in Table 6.

TABLE 6. 25 tasks scheduled by the proposed model.

Test case - 2
In order to validate the performance of the proposed scheduling model in handling a large number of computationally less intensive tasks, a varying number of tasks from 1000 to 10000 is generated (Table 7). The priority of each task is calculated, and then the tasks are allocated to VMs by allowing preemption among the tasks. The number of virtual machines is fixed to 64 in this case (however, it can be changed as per the system configuration). The overall waiting time ensures that the proposed scheduling model can be applied even when the number of tasks is very large.

TABLE 7. Arrival of five new tasks.

VI. CONCLUSION
This paper emphasizes task scheduling for the cloud computing domain. We have introduced a priority assignment algorithm built on a new data structure known as a waiting time matrix for assigning priority to each task. The task with the highest priority is extracted from the waiting queue by adhering to the principle of the Fibonacci heap. We have proposed a parallel algorithm for task scheduling where task priority assignment and heap construction are carried out in a parallel manner concerning the preemptive and non-preemptive scheduling approaches. The efficiency of the proposed algorithms has been tested using a variety of benchmarks and synthetic data sets. The simulated results are compared with the existing techniques like BATS, IDEA, and BATS+BAR, and the comparison proves that our proposed algorithms perform better in terms of optimizing the overall waiting time as well as the CPU time consumed. Our work also exemplifies three distinct scenarios to evaluate the effectiveness of the proposed task scheduling approach while dealing with tasks of different priorities. Furthermore, a demonstration for applying the task scheduling algorithm in a dynamic cloud computing environment is also provided, where the decision for virtual machine allocation is based on the number of tasks in the system.

REFERENCES
[1] D. C. Marinescu, Cloud Computing: Theory and Practice. San Mateo, CA, USA: Morgan Kaufmann, 2017.


[2] Distributed and Cloud Computing: From Parallel Processing to the Internet of Things. San Mateo, CA, USA: Morgan Kaufmann, 2012.
[3] T. A. L. Genez, L. F. Bittencourt, and E. R. M. Madeira, "Workflow scheduling for SaaS/PaaS cloud providers considering two SLA levels," in Proc. IEEE Netw. Oper. Manage. Symp., Apr. 2012, pp. 906-912.
[4] D. Ergu, G. Kou, Y. Peng, Y. Shi, and Y. Shi, "The analytic hierarchy process: Task scheduling and resource allocation in cloud computing environment," J. Supercomput., vol. 64, no. 3, pp. 1-14, 2013.
[5] F. Ramezani, J. Lu, J. Taheri, and F. K. Hussain, "Evolutionary algorithm-based multi-objective task scheduling optimization model in cloud environments," World Wide Web, vol. 18, no. 6, pp. 1737-1757, 2015.
[6] A. Al-maamari and F. A. Omara, "Task scheduling using PSO algorithm in cloud computing environments," Int. J. Grid Distrib. Comput., vol. 8, no. 5, pp. 245-256, Oct. 2015.
[7] A. Razaque, "Task scheduling in cloud computing," in Proc. IEEE Long Island Syst., Appl. Technol. Conf. (LISAT), Apr. 2016, pp. 1-5.
[8] M. Abdullahi, M. A. Ngadi, and S. M. Abdulhamid, "Symbiotic organism search optimization based task scheduling in cloud computing environment," Future Gener. Comput. Syst., vol. 56, pp. 640-650, Mar. 2016.
[9] D. Saxena, "Dynamic fair priority optimization task scheduling algorithm in cloud computing: Concepts and implementations," Int. J. Comput. Netw. Inf. Secur., vol. 8, no. 2, pp. 41-48, Feb. 2016.
[10] S. M. Abdulhamid, M. S. A. Latiff, S. H. H. Madni, and M. Abdullahi, "Fault tolerance aware scheduling technique for cloud computing environment using dynamic clustering algorithm," Neural Comput. Appl., vol. 29, no. 1, pp. 279-293, Jan. 2018.
[11] E. S. Ahmad, E. I. Ahmad, and E. S. Mirdha, "A novel dynamic priority based job scheduling approach for cloud environment," Int. Res. J. Eng. Technol. (IRJET), vol. 4, no. 6, pp. 518-522, 2017.
[12] D. Gabi, A. S. Ismail, A. Zainal, and Z. Zakaria, "Solving task scheduling problem in cloud computing environment using orthogonal taguchi-cat algorithm," Int. J. Electr. Comput. Eng. (IJECE), vol. 7, no. 3, p. 1489, Jun. 2017.
[13] M. A. Alworafi, A. Dhari, A. A. Al-Hashmi, and A. B. Darem, "Cost-aware task scheduling in cloud computing environment," Int. J. Comput. Netw. Inf. Secur., vol. 9, no. 5, p. 52, 2017.
[14] R. R. Patel, T. T. Desai, and S. J. Patel, "Scheduling of jobs based on Hungarian method in cloud computing," in Proc. Int. Conf. Inventive Commun. Comput. Technol. (ICICCT), Mar. 2017, pp. 6-9.
[15] H. G. El Din Hassan Ali, I. A. Saroit, and A. M. Kotb, "Grouped tasks scheduling algorithm based on QoS in cloud computing network," Egyptian Informat. J., vol. 18, no. 1, pp. 11-19, Mar. 2017.
[16] N. Solanki and N. C. Barwar, "Efficiency enhancing resource scheduling strategies in cloud computing," Int. J. Eng. Techn., vol. 3, no. 3, pp. 149-151, 2017.
[17] R. R. Kumar, S. Mishra, and C. Kumar, "A novel framework for cloud service evaluation and selection using hybrid MCDM methods," Arabian J. Sci. Eng., vol. 43, no. 12, pp. 7015-7030, Dec. 2018.
[18] N. Gobalakrishnan and C. Arun, "A new multi-objective optimal programming model for task scheduling using genetic gray wolf optimization in cloud computing," Comput. J., vol. 61, no. 10, pp. 1523-1536, Oct. 2018.
[19] S. K. Panda and P. K. Jana, "Normalization-based task scheduling algorithms for heterogeneous multi-cloud environment," Inf. Syst. Frontiers, vol. 20, no. 2, pp. 373-399, Apr. 2018.
[20] S. Srichandan, T. A. Kumar, and S. Bibhudatta, "Task scheduling for cloud computing using multi-objective hybrid bacteria foraging algorithm," Future Comput. Inf. J., vol. 3, no. 2, pp. 210-230, Dec. 2018.
[21] M. B. Gawali and S. K. Shinde, "Task scheduling and resource allocation in cloud computing using a heuristic approach," J. Cloud Comput., vol. 7, no. 1, pp. 1-16, Dec. 2018.
[22] A. M. S. Kumar and M. Venkatesan, "Task scheduling in a cloud computing environment using HGPSO algorithm," Cluster Comput., vol. 22, no. S1, pp. 2179-2185, Jan. 2019.
[23] G. Natesan and A. Chokkalingam, "Task scheduling in heterogeneous cloud environment using mean grey wolf optimization algorithm," ICT Exp., vol. 5, no. 2, pp. 110-114, Jun. 2019.
[24] V. Karunakaran, "A stochastic development of cloud computing based task scheduling algorithm," J. Soft Comput. Paradigm, vol. 2019, no. 1, pp. 41-48, Sep. 2019.
[25] I. Strumberger, M. Tuba, N. Bacanin, and E. Tuba, "Cloudlet scheduling by hybridized monarch butterfly optimization algorithm," J. Sensor Actuator Netw., vol. 8, no. 3, p. 44, Aug. 2019.
[26] M. A. Elaziz, S. Xiong, K. P. N. Jayasena, and L. Li, "Task scheduling in cloud computing based on hybrid moth search algorithm and differential evolution," Knowl.-Based Syst., vol. 169, pp. 39-52, Apr. 2019.
[27] Z. Zhou, F. Li, H. Zhu, H. Xie, J. H. Abawajy, and M. U. Chowdhury, "An improved genetic algorithm using greedy strategy toward task scheduling optimization in cloud environments," Neural Comput. Appl., vol. 32, pp. 1531-1541, Mar. 2019.
[28] P. M. Rekha and M. Dakshayini, "Efficient task allocation approach using genetic algorithm for cloud environment," Cluster Comput., vol. 22, no. 4, pp. 1241-1251, Dec. 2019.
[29] S. M. G. Kashikolaei, A. A. R. Hosseinabadi, B. Saemi, M. B. Shareh, A. K. Sangaiah, and G.-B. Bian, "An enhancement of task scheduling in cloud computing based on imperialist competitive algorithm and firefly algorithm," J. Supercomputing, vol. 76, pp. 6302-6329, Aug. 2019.
[30] N. Bansal and A. K. Singh, "Grey wolf optimized task scheduling algorithm in cloud computing," in Proc. Frontiers Intell. Comput., Theory Appl., 2020, pp. 137-145.
[31] A. Alhubaishy and A. Aljuhani, "The best-worst method for resource allocation and task scheduling in cloud computing," in Proc. 3rd Int. Conf. Comput. Appl. Inf. Secur. (ICCAIS), Mar. 2020, pp. 1-6, doi: 10.1109/ICCAIS48893.2020.9096877.
[32] V. A. Lepakshi and C. S. R. Prashanth, "Efficient resource allocation with score for reliable task scheduling in cloud computing systems," in Proc. 2nd Int. Conf. Innov. Mech. Ind. Appl. (ICIMIA), Mar. 2020, pp. 6-12, doi: 10.1109/ICIMIA48430.2020.9074914.
[33] C. Shetty and H. Sarojadevi, "Framework for task scheduling in cloud using machine learning techniques," in Proc. 4th Int. Conf. Inventive Syst. Control (ICISC), Jan. 2020, pp. 727-731, doi: 10.1109/ICISC47916.2020.9171141.
[34] M. T. Alotaibi, M. S. Almalag, and K. Werntz, "Task scheduling in cloud computing environment using bumble bee mating algorithm," in Proc. IEEE Global Conf. Artif. Intell. Internet Things (GCAIoT), Dec. 2020, pp. 1-6, doi: 10.1109/GCAIoT51063.2020.9345824.
[35] K. M. S. U. Bandaranayake, K. P. N. Jayasena, and B. T. G. S. Kumara, "An efficient task scheduling algorithm using total resource execution time aware algorithm in cloud computing," in Proc. IEEE Int. Conf. Smart Cloud (SmartCloud), Nov. 2020, pp. 29-34, doi: 10.1109/SmartCloud49737.2020.00015.
[36] I. Attiya, M. A. Elaziz, and S. Xiong, "Job scheduling in cloud computing using a modified Harris hawks optimization and simulated annealing algorithm," Comput. Intell. Neurosci., vol. 2020, pp. 1-17, Mar. 2020, doi: 10.1155/2020/3504642.
[37] J. Yang, B. Jiang, Z. Lv, and K.-K.-R. Choo, "A task scheduling algorithm considering game theory designed for energy management in cloud computing," Future Gener. Comput. Syst., vol. 105, pp. 985-992, Apr. 2020.
[38] X. Chen, L. Cheng, C. Liu, Q. Liu, J. Liu, Y. Mao, and J. Murphy, "A WOA-based optimization approach for task scheduling in cloud computing systems," IEEE Syst. J., vol. 14, no. 3, pp. 3117-3128, Sep. 2020, doi: 10.1109/JSYST.2019.2960088.
[39] M. S. Arif, Z. Iqbal, R. Tariq, F. Aadil, and M. Awais, "Parental prioritization-based task scheduling in heterogeneous systems," Arabian J. Sci. Eng., vol. 44, no. 4, pp. 3943-3952, Apr. 2019, doi: 10.1007/s13369-018-03698-2.
[40] S. Kanwal, Z. Iqbal, F. Al-Turjman, A. Irtaza, and M. A. Khan, "Multiphase fault tolerance genetic algorithm for VM and task scheduling in datacenter," Inf. Process. Manage., vol. 58, no. 5, Sep. 2021, Art. no. 102676, doi: 10.1016/j.ipm.2021.102676.
[41] M. S. Ajmal, Z. Iqbal, F. Z. Khan, M. Ahmad, I. Ahmad, and B. B. Gupta, "Hybrid ant genetic algorithm for efficient task scheduling in cloud data centers," Comput. Electr. Eng., vol. 95, Oct. 2021, Art. no. 107419, doi: 10.1016/j.compeleceng.2021.107419.
[42] M. Naghibzadeh, Contemporary Operating Systems, 2023.
[43] H. Goudarzi and M. Pedram, "Achieving energy efficiency in datacenters by virtual machine sizing, replication, and placement," in Advances in Computers, vol. 100, 2016, pp. 161-200.
[44] I. Adan and J. Resing, Queueing Theory. Eindhoven, The Netherlands: Eindhoven Univ. Technology, 2002.
[45] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms, 2nd ed. Cambridge, MA, USA: MIT Press, 2022.
[46] M. Naghibzadeh, "New generation computer algorithms," Tech. Rep., 2023.


[47] J. P. Hayes, Computer Architecture and Organization. New York, NY, USA: McGraw-Hill, 2007.
[48] S. Singhal and A. Sharma, "Resource scheduling algorithms in cloud computing: A big picture," in Proc. 5th Int. Conf. Inf. Syst. Comput. Netw. (ISCON), Oct. 2021, pp. 1-6, doi: 10.1109/ISCON52037.2021.9702313.
[49] Y.-J. Chiang and Y.-C. Ouyang, "Profit optimization in SLA-aware cloud services with a finite capacity queuing model," Math. Problems Eng., vol. 2014, pp. 1-11, Jan. 2014.
[50] Center SC. (2014). Cybershake and Epigenomics Scientific Workflow. Accessed: Jan. 1, 2016. [Online]. Available: https://confluence.pegasus.isi.edu/display/pegasus/WorkflowGenerator

SWATI LIPSA received the B.Tech. and M.Tech. degrees from the Biju Patnaik University of Technology, Rourkela, Odisha, India, in 2008 and 2014, respectively. She is currently an Assistant Professor with the Department of Information Technology, Odisha University of Technology and Research, Bhubaneswar, Odisha. She has a good number of publications in different international journals/conferences. Her research interests include wireless sensor networks, cloud computing, network security, machine learning, and software-defined networking.

RANJAN KUMAR DASH (Senior Member, IEEE) received the Ph.D. degree from Sambalpur University, Sambalpur, Odisha, India, in 2008. He is currently a Professor and the Head of the Department of Information Technology, Odisha University of Technology and Research, Bhubaneswar, Odisha. He has more than 42 publications in different international journals/conferences. His primary interests include the reliability of distributed systems, wireless sensor networks, soft computing, machine learning, and cloud computing. His current research focus is on enhancing processor performance and energy consumption through the use of machine-learning techniques. He has served on the technical program and organization committees for several conferences.

NIKOLA IVKOVIĆ was born in Zagreb, Croatia, in 1979. He received the M.S. degree in computing and the Ph.D. degree in computer science from the Faculty of Electrical Engineering and Computing, University of Zagreb. His Ph.D. thesis was in the area of swarm and evolutionary computation. He was the Head of the Department of Computing and Technology and is currently with the Faculty of Organization and Informatics, University of Zagreb. He teaches computer networks, operating systems, and computer architecture-related courses. He gave several invited talks at international scientific conferences and guest lectures at different universities in Europe and Asia. His research interests include computational intelligence and optimization, especially swarm intelligence, and also computer networks, security, and formal methods. He was a member of committees for creating new university study programs. He serves as a regular reviewer for high-quality scientific journals and takes part in a number of international conference committees. He won the Best Presentation Award from the International Conference on Computer Technology and Development (IACT 2015), Singapore, and the International Conference on Frontiers of Intelligent Technology (ICFIT 2018), Paris, and the Excellent Presentation Award from the International Conference on Computer Science and Information Technology (ICCSIT 2016), Ireland. He joined the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE), in 2008.

KORHAN CENGIZ (Senior Member, IEEE) was born in Edirne, Turkey, in 1986. He received the B.Sc. degree in electronics and communication engineering from Kocaeli University, in 2008, the B.Sc. degree in business administration from Anadolu University, Turkey, in 2009, the M.Sc. degree in electronics and communication engineering from Namik Kemal University, Turkey, in 2011, and the Ph.D. degree in electronics engineering from Kadir Has University, Turkey, in 2016. Since August 2021, he has been an Assistant Professor with the College of Information Technology, University of Fujairah, United Arab Emirates. Since April 2022, he has been the Chair of the Research Committee of the University of Fujairah. Since September 2022, he has been an Associate Professor with the Department of Computer Engineering, Istinye University, Istanbul, Turkey. He is the author of more than 40 SCI/SCI-E articles, including IEEE INTERNET OF THINGS JOURNAL, IEEE ACCESS, Expert Systems with Applications, Knowledge-Based Systems, and ACM Transactions on Sensor Networks, five international patents, more than ten book chapters, and one book in Turkish. He is an editor of more than 20 books. His research interests include wireless sensor networks, wireless communications, statistical signal processing, indoor positioning systems, the Internet of Things, power electronics, and 5G. He is a Professional Member of ACM. He received several awards and honors, such as the Tubitak Priority Areas Ph.D. Scholarship, the Kadir Has University Ph.D. Student Scholarship, the Best Presentation Award at the ICAT 2016 Conference, and the Best Paper Award at the ICAT 2018 Conference. He is an Associate Editor of IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, IEEE Potentials, IET Electronics Letters, and IET Networks. He is the Handling Editor of Microprocessors and Microsystems (Elsevier). He serves as a Reviewer for IEEE INTERNET OF THINGS JOURNAL, IEEE SENSORS JOURNAL, and
currently an Assistant Professor. He is a member IEEE ACCESS. He serves several book editor positions in IEEE, Springer,
of two research laboratories, such as the Artificial Intelligence Laboratory, Elsevier, Wiley, and CRC. He presented more than 40 keynote talks at reputed
Faculty of Organization and Informatics, University of Zagreb, and the IEEE and Springer conferences about WSNs, the IoT, and 5G.
Laboratory for Generative Programming and Machine Learning, Faculty of
27126 VOLUME 11, 2023