Review
An Analysis of Methods and Metrics for Task Scheduling in
Fog Computing
Javid Misirli * and Emiliano Casalicchio *
Abstract: The Internet of Things (IoT) uptake brought a paradigm shift in application deployment.
Indeed, IoT applications are not centralized in cloud data centers, but the computation and storage
are moved close to the consumers, creating a computing continuum between the edge of the network
and the cloud. This paradigm shift is called fog computing, a concept introduced by Cisco in 2012.
Scheduling applications in this decentralized, heterogeneous, and resource-constrained environment
is challenging. The task scheduling problem in fog computing has been widely explored and
addressed using many approaches, from traditional operational research to heuristics and machine
learning. This paper aims to analyze the literature on task scheduling in fog computing published in
the last five years to classify the criteria used for decision-making and the technique used to solve the
task scheduling problem. We propose a taxonomy of task scheduling algorithms, and we identify the
research gaps and challenges.
Keywords: fog computing; task scheduling; internet of things; machine learning; deep learning;
cloud computing
1. Introduction
The current era has witnessed a paradigm shift in the landscape of the information
and communication technology industry with the emergence of the Internet of Things
(IoT). IoT represents a groundbreaking technological revolution that has extended Internet
connectivity beyond the boundaries of traditional smart devices such as smartphones and
tablets, enabling a wide range of appliances, including sensors, machines, and vehicles, to connect and communicate seamlessly. This revolutionary phenomenon has paved the way for various applications and services, including healthcare, medical treatment, traffic control, energy management, vehicle networks, and many others [1].

With its centralized processing capabilities, cloud computing faces significant challenges in meeting the performance requirements of Internet of Things (IoT) applications [2]. This is primarily due to the limited bandwidth and high latency between the cloud and IoT devices. However, a promising solution, known as fog computing, was introduced by Cisco in 2012, establishing a computing continuum between remote cloud data centers and IoT devices. Fog computing extends the cloud to the network's edge, providing a platform for executing latency-sensitive tasks in fog servers close to the edge devices. Delay-tolerant or compute-intensive IoT applications can be placed on the cloud.

In this context, selecting the appropriate fog node to offload the computation is paramount. Scheduling decisions (i.e., selecting a fog node for computation offloading) consider many factors, such as energy consumption, limited resource availability, and low response time. Task scheduling in fog computing has been widely investigated in recent years [3–5]; the problem has been solved using different techniques, and various optimization criteria have been adopted. This study aims to sort out the state-of-the-art literature to answer the following research questions:

RQ1: What performance-related metrics drive the task scheduling decisions in fog computing?
RQ2: What techniques are used to solve the task scheduling problem in fog computing?
RQ3: What are the open challenges of task scheduling in fog computing?
Unlike previous surveys, our review takes a contemporary, comprehensive approach,
addressing the latest developments and research directions in fog task scheduling. We
reviewed a broad set of recent publications, covering emerging technologies and real-world applications. A key outcome of our literature review is a novel taxonomy and classification framework that integrates state-of-the-art technologies and classifies the current task-scheduling techniques. The framework offers researchers a comprehensive reference guide and a roadmap for navigating the evolving fog computing landscape.
In this research paper, we conduct a comprehensive literature review of the existing
research work done in the fog computing task scheduling field. Key contributions are:
• The definition of a taxonomy for task scheduling algorithms in fog computing, which
classifies them along two dimensions: the technique for solving the scheduling prob-
lem and the metric used to make scheduling decisions.
• A comprehensive literature review of existing scheduling algorithms, particularly
focusing on intelligent dynamic scheduling techniques based on machine learning,
fuzzy logic, reinforcement learning, and deep reinforcement learning, describing their
strengths and weaknesses.
• Identification of the research gaps and challenges for task scheduling in fog computing
for future research work in this field.
The remainder of this paper is organized as follows. Section 2 explains the concept
of fog computing, its architecture, and its characteristics. The task scheduling problem in fog computing is formulated in Section 3. Section 4 introduces the research methodology
adopted. The categorization of the literature is proposed in Section 5. The analysis of
the optimization criteria is presented in Section 6, and the analysis of problem solution
techniques is discussed in Section 7. Research gaps and challenges are presented in Section 8.
Finally, Section 9 concludes the paper.
In this paper, we assume the fog nodes execute IoT applications providing services to
the edge nodes. We consider an IoT application composed of multiple software components
(tasks) that can be executed at the edge, in the fog, or on the cloud layers. When deploying
an IoT application to cater to a single edge node or a group of edge nodes, the control node of the closest fog colony makes the scheduling decisions for the application's tasks. Scheduling decisions are based on the computational demand, the available resources, and the functional and non-functional requirements.
The architecture described above generalizes what has been proposed in the literature.
According to [3], the fog computing architecture is a heterogeneous environment containing
multiple fog layers and the cloud; fog and cloud servers are collaborative, and tasks
are dependent.
The authors of [9] describe an architecture integrated with mobile edge computing (MEC). The model consists of a user level, an edge level, and a remote level. At the edge level, there are multiple devices, and the requested tasks are heterogeneous. MEC relies on virtualization technology to provide computing, storage, and network resources. The edge layer is a homogeneous, single-layer environment. According to the model, an application has dependent tasks. One main drawback is that the architecture is not robust against broken network connections or insufficient power.
A three-layer network architecture consisting of an IoT device layer, a fog layer, and a
cloud layer has also been proposed in [10]. The IoT layer routes requests from IoT devices
to the fog layer using intelligent gateways. The fog layer consists of a collection of fog
nodes with a fog controller or broker. The fog controller distributes tasks between the
fog and cloud layers. The fog controller also manages the scheduling of tasks across
the fog nodes and coordinates their available resources. The cloud layer consists of data
centers with significant computing power. Tasks are indivisible, non-preemptible, and independent, so they cannot be split into sub-tasks. Fog nodes are heterogeneous, i.e., each node differs from the others in its capabilities.
Characteristics
The main fog computing characteristics are geographical distribution [11], decentral-
ization [1], heterogeneity [12], real-time interaction, low latency [13], and mobility [14].
As a result, a lower application response time is achieved. Zhang and Zheng [16] present a comprehensive investigation of the problem of task migration in mobile edge computing systems. The fundamental issue is the increased mobility and the dynamically changing mobility patterns associated with autonomous vehicles. A DQN-based task scheduling algorithm is proposed, in which an agent can learn an optimal task migration policy from past experience, even without knowing the users' mobility patterns.
This work considers scheduling IoT applications in a fog computing environment.
An IoT application, represented in Figure 2, provides data, information, or actuating
functions to the requesting clients. The application is an ensemble of software components
that perform dependent or independent tasks [22]. The objective of the task scheduling
problem is to assign tasks to computational nodes to meet the IoT application’s QoS
requirements and consider the fog nodes’ future processing commitments [23]. In the fog
environment model, we consider computational nodes to be constrained regarding CPU,
memory, disk, bandwidth, or battery lifetime. Figure 3 shows that the scheduler can assign tasks to edge, fog, or cloud nodes. Although tasks can be classified as dependent or independent, we consider independent tasks with heterogeneous resource demands.
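To make the setting concrete, the sketch below illustrates a greedy assignment of independent tasks to edge, fog, and cloud nodes under CPU and memory constraints; the node capacities, task demands, and latency-based ordering are illustrative assumptions of ours, not a formulation taken from the surveyed works:

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    layer: str         # "edge", "fog", or "cloud"
    cpu: float         # available CPU capacity
    mem: float         # available memory (MB)
    latency_ms: float  # network latency from the requesting client

@dataclass
class Task:
    name: str
    cpu_demand: float
    mem_demand: float

def schedule(tasks, nodes):
    """Greedy placement: pick the lowest-latency node that still has enough capacity."""
    placement = {}
    for task in tasks:
        for node in sorted(nodes, key=lambda n: n.latency_ms):
            if node.cpu >= task.cpu_demand and node.mem >= task.mem_demand:
                node.cpu -= task.cpu_demand   # reserve resources on the chosen node
                node.mem -= task.mem_demand
                placement[task.name] = node.name
                break
        else:
            placement[task.name] = None       # no node can host the task
    return placement

if __name__ == "__main__":
    nodes = [Node("edge-1", "edge", cpu=2, mem=512, latency_ms=2),
             Node("fog-1", "fog", cpu=8, mem=4096, latency_ms=10),
             Node("cloud-1", "cloud", cpu=64, mem=65536, latency_ms=80)]
    tasks = [Task("t1", 1, 256), Task("t2", 4, 1024), Task("t3", 16, 8192)]
    print(schedule(tasks, nodes))  # e.g., {'t1': 'edge-1', 't2': 'fog-1', 't3': 'cloud-1'}

The surveyed schedulers replace this simple latency-based ordering with the optimization criteria and solution techniques analyzed in the following sections.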
The applications' requirements and the environment's resource limitations translate into the problem's constraints. Performance metrics measure the schedulers' objectives and the applications' requirements. In Section 5, we analyze the literature with respect to the performance metrics optimized by the scheduler. Table 1 shows which application and environment constraints are used in the literature.
4. Research Methodology
As mentioned in the introduction, this paper aims to answer the following research
questions: What performance-related metrics drive the task scheduling decisions in fog
computing? What techniques are used to solve the task scheduling problem in fog comput-
ing? and What are the open challenges of task scheduling in fog computing?
In undertaking our research, we employed a Systematic Mapping Study (SMS) method-
ology with the primary objective of systematically organizing and categorizing literature
spanning the last five years in fog computing [59,60]. Our literature search commenced
by identifying seminal works in the field, serving as the foundational starting point for
our study. From this initial stage, we systematically traced forward citations using the
snowballing technique, a key component of the SMS methodology [61]. At each stage
of the mapping process, we meticulously considered various dimensions, encompassing
decision-making criteria and techniques utilized in addressing the task scheduling problem
within fog computing. The SMS methodology facilitated the systematic categorization
and visualization of the literature, offering a comprehensive overview of the research land-
scape [59]. The resulting taxonomy of task scheduling algorithms and the identification
of research gaps and challenges stand as direct outcomes of our SMS methodology. This
approach ensured a rigorous and structured analysis of the literature, providing valuable
insights into the evolving domain of fog computing task scheduling.
The search string used to gather papers is (“task scheduling” OR “job scheduling”)
AND (“fog computing” OR “edge computing”) AND (“IoT” OR “Internet of Things”) AND
(“challenges” OR “issues” OR “solutions”).
We used the Boolean operators “OR” and “AND” to refine the results we obtained.
The use of “AND” limited the search results to articles containing both keywords, while
“OR” expanded the search scope to include articles containing either.
Our literature search, conducted from 2018 to October 2023, employed a targeted
search string, applied to major Digital Libraries (DLs). The initial search returned a substan-
tial pool of papers totaling 200. To ensure a robust dataset, we implemented a systematic
deduplication process. Following this meticulous process, we arrived at a final set of
71 papers for detailed analysis. The temporal distribution of these articles is presented in
Figure 4. Notably, about 50% of the papers were published in 2021–2022, reflecting the
contemporary nature of our selected studies.
Figure 4. Temporal distribution of the selected papers by publication year (2018–2023).
Our screening process is structured into two distinctive stages to ensure a meticulous
and unbiased selection of studies [62]:
1. Initial Screening: Deduplication: rigorous deduplication is conducted to eliminate
duplicate records from the database, ensuring a clean dataset. Title and abstract
examination: The initial screening phase involves a detailed examination of titles
and abstracts. This preliminary assessment is essential to minimize biases, including
potential publication bias.
2. Secondary Screening: Full study screening and critical appraisal: The second phase
comprises a comprehensive evaluation of the full texts of selected studies. We meticu-
lously followed predefined inclusion and exclusion criteria throughout the screening
process, as detailed in Table 2. These criteria served as a robust framework for assess-
ing the relevance and quality of each identified study.
Figure 5 highlights that a significant portion of our final set of papers has been pub-
lished in IEEE journals, indicating both the credibility and relevance of our selected studies.
We employed a two-fold approach to comprehensively survey the literature on task
scheduling in fog computing, integrating the snowballing technique and the Quasi-Gold
Standard [63]. This method allowed us to explore both direct and indirect citation relation-
ships, contributing to a nuanced perspective on the task scheduling landscape. The snow-
balling technique was applied from August 2021 to January 2022. Starting with seminal
works, we systematically traced forward citations over three iterations. Each round in-
volved meticulous examination of references in identified studies. By January 2022, we
concluded the process, ensuring a comprehensive and up-to-date exploration of task
scheduling in fog computing.
Figure 5. Distribution of the selected papers by publisher (IEEE, Elsevier, Springer, and other digital libraries).
5.1.1. Cost
Costs can be related to computation, communication, resource usage (other than CPU
and network), and energy consumption.
Computation cost. The cost of performing a task on a fog node or device is called the computation cost. It covers the time, energy, and resources used to complete the computation [15]. Depending on the difficulty of the task and the fog node's capabilities, the computation cost can vary. In a Cloud–Fog system, carrying out a task results in costs tied to processing, memory consumption, and bandwidth utilization; the cost of processing task Tk on node Ni is modeled in [33] as a combination of these terms.
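For illustration, such a cost is commonly expressed as a weighted sum of the three resource terms. A generic form (our notation, not necessarily the exact expression used in [33]) is:

C_{i,k} = c^{cpu}_i · t^{exec}_{i,k} + c^{mem}_i · m_k + c^{bw}_i · d_k,

where t^{exec}_{i,k} is the processing time of task Tk on node Ni, m_k its memory footprint, d_k the data it transfers, and c^{cpu}_i, c^{mem}_i, c^{bw}_i the per-unit prices of CPU time, memory, and bandwidth on node Ni.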
Communication cost. Tasks may need to be transferred between fog nodes for pro-
cessing in a distributed system like IoT and fog computing [64]. Communication cost
includes the overhead of transferring data, such as bandwidth usage, latency, and energy
consumption between nodes [40].
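As an illustration (our notation, under the simplifying assumption of a fixed link rate), the time and energy overhead of moving the data of task k between nodes i and j can be written as:

T^{comm}_{k,i→j} = d_k / B_{i,j} + L_{i,j},    E^{comm}_{k,i→j} = P^{tx}_i · d_k / B_{i,j},

where d_k is the amount of data transferred, B_{i,j} and L_{i,j} are the bandwidth and propagation latency of the link, and P^{tx}_i is the transmission power of node i.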
Resource cost. Fog nodes have constrained CPU, memory, and storage capabilities.
In the context of fog computing, resource cost entails efficiently assigning and utilizing
computing resources essential for the effective execution of tasks. This encompasses the
utilization and consumption of a range of resources, such as computational power, storage,
and communication bandwidth. The term reflects the comprehensive expenditure needed
to ensure efficiency within a fog computing environment.
Energy cost. Energy efficiency is a crucial factor for many IoT and fog computing
devices as they are battery-powered. The cost of energy consumption is dependent on the
amount used for communication and processing [56].
In the weighted formulations found in the literature, a term α·C_i^f represents the weighted transmission energy, where α is the relative weight and C_i^f is the relative cost of processing task i at the fog node.
5.1.3. Latency
Latency in fog computing task scheduling signifies the time delays during different
stages of task execution. It is a vital performance metric influencing overall system respon-
siveness and efficiency. Reducing latency is essential for meeting real-time requirements,
enhancing user satisfaction, and optimizing resource use in fog computing. Scheduling al-
gorithms strive to minimize latency by efficiently assigning tasks, handling communication,
and adapting to dynamic system changes [46].
In the following, T_{n,m} denotes the execution time of task n on node m, and RT_{n,m} denotes its ready time, i.e., the earliest completion time of all the tasks it depends on.
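A generic way to combine these terms (consistent with the definitions above, though not necessarily the exact equation adopted in [46]) is to express the completion time of task n on node m as:

CT_{n,m} = RT_{n,m} + T_{n,m},

so that a latency-oriented scheduler minimizes the largest CT_{n,m} (the makespan) or the average completion time over all tasks.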
5.1.5. Availability
This is the ability of a system to ensure that data are available with the desired level of performance in normal as well as in fatal situations, excluding scheduled downtime. It is the ratio of the Mean Time Between Failures (MTBF) to the sum of the MTBF and the Mean Time To Repair (MTTR). The following formula can be applied to calculate availability [66]: Availability = MTBF / (MTBF + MTTR).
Heuristic algorithms assign tasks to fog devices based on various criteria, such as device availability, processing power, and network latency. They are well-suited for handling large-scale systems where
the computational complexity of mathematical models, such as ILP and MILP, can become
impractical. Heuristic solutions are also combined with mathematical programming mod-
eling to determine the optimal solution and satisfy a predefined set of constraints and
objectives [18].
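For reference, the exact formulations that such heuristics approximate are typically binary assignment programs of the following generic form (our notation, shown only to illustrate why they become impractical at scale):

minimize   Σ_k Σ_i c_{k,i} x_{k,i}
subject to Σ_i x_{k,i} = 1           for every task k,
           Σ_k r_k x_{k,i} ≤ R_i     for every node i,
           x_{k,i} ∈ {0, 1},

where x_{k,i} = 1 if task k is placed on node i, c_{k,i} is the cost of that placement, r_k is the task's resource demand, and R_i is the node's capacity. The number of binary variables grows with the product of tasks and nodes, which is what makes exact solvers impractical for large-scale fog systems and motivates heuristic and hybrid approaches [18].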
6. Literature Mapping
Optimization Criteria
Table 3 maps the studied literature to the optimization criteria described in Section 5.1 that are used to make scheduling decisions. The metrics authors consider most often are energy consumption, latency, and cost. Less frequently, research works use response time, execution time, completion time, and availability.
The significance of incorporating energy consumption and latency metrics into the
decision-making process is demonstrated in [10,39,42,44,46]. Yang et al. [39] utilized energy
consumption and delay metrics within homogeneous fog networks. They considered the
energy expended during task execution and the power consumed during task scheduling.
Additionally, they established that the average traffic delay for each mobile fog node is
directly linked to the average number of unexecuted computation tasks. Other authors offer
innovative strategies for fog computing task scheduling, utilizing fuzzy reinforcement learn-
ing, centralized DDQN algorithms, and efficient schedulers to reduce energy consumption
and latency. The emphasis is on enhancing efficiency in dynamic and resource-constrained
environments, with a focus on optimizing performance metrics for IoT devices.
Execution time and cost metrics have been investigated together by [32,33,35,36,38,72].
Tsai et al. [32] examine CPU processing power, CPU usage cost, memory usage cost,
and bandwidth usage cost as operating cost factors alongside timespan, representing
execution time, in their assessment framework. Their approach focuses on minimizing
execution time with higher priority unless there is a tradeoff where minimizing total
operating costs becomes more crucial than execution time. The specific value of the tradeoff
coefficient (λ) is determined based on factors such as budget constraints or the desired
response time level. Considering these collective contributions, the papers provide unique
perspectives on optimizing task scheduling in Cloud–Fog environments, with a specific
focus on processing large-scale Internet of Things (IoT) applications. The introduction
of inventive algorithms such as Time-Cost aware Scheduling (TCaS) and the application
of a Deep Reinforcement Learning (DRL)-based solution underscores the emphasis on
delicately balancing execution time minimization and effective monetary cost management.
This collaborative initiative aims to propel efficiency and resource utilization within highly
distributed computing platforms, effectively addressing pivotal challenges in the realm of
fog computing and IoT applications.
Scheduling algorithms that aim to reduce, or in general to optimize, the energy
consumed by a fog-based system are studied in [3,4,10,15,18,20,24,26,28,29,36,37,39,41,42,
44,46,49–51,55–58,73]. Scheduling based on latency is proposed in [5,9–11,15–17,20,30,37,
39,41,42,44–50,52–54,64].
Computation cost is considered in [15,40,72,73]. In fog computing task scheduling, this cost refers to the resources used for task execution, which is a critical factor impacting system efficiency and performance. The cost of communication as a decision criterion is studied in [33,40,47,64,73], while the authors of [20,32,33,40,47] studied the influence of resource costs, and those of [29,36,41,47,49,56] consider energy costs. The authors focus
on optimizing different costs to enhance the overall efficiency and performance of fog
computing task scheduling.
The importance of execution time metrics while building an optimal task scheduling
solution is considered in [21,31–36,38,47,51,68,72,73]. Response time is used for scheduling
decisions in [3,6,11,19,20,25–27,29]. A few papers [3,4,6,18,28,30,31] take into account the completion time metric.
An often-neglected metric is availability, which remains critical in fog computing environments and has been discussed in [43,74,75].
to reassign tasks along the non-critical path. This process serves to elevate the overall
reliability of the application.
Hosseinioun et al. [58] introduced a hybrid Invasive Weed Optimization and Culture (IWO-CA) evolutionary algorithm to address energy-aware task scheduling. The authors
consider leveraging the Dynamic Voltage and Frequency Scaling (DVFS) technique to
regulate the energy consumption of fog nodes by adjusting their voltage and frequency
levels based on workload demand.
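The intuition behind DVFS can be illustrated with the conventional CMOS dynamic power model (a textbook approximation, not a model taken from [58]):

P_dyn ≈ C_eff · V² · f,    E_task ≈ P_dyn · t_exec,

where C_eff is the effective switched capacitance, V the supply voltage, and f the clock frequency. Because lowering the frequency also permits a lower voltage, the energy per task can drop sharply at the price of a longer execution time t_exec; this is the trade-off that DVFS-based scheduling exploits when the workload tolerates the additional delay.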
The study conducted by Movahedi et al. [28] aimed to decrease task completion time and energy consumption. Their proposed approach utilizes a population-based meta-heuristic technique crafted to choose the most suitable fog nodes. This approach, called the Opposition-based Chaotic Whale Optimization Algorithm (OppoCWOA), combines the Whale Optimization Algorithm (WOA), opposition-based learning, and chaos theory.
The proposed model achieves lower energy consumption than existing reinforcement learning models, allowing the scheduling of resource-intensive tasks on more powerful machines. A weakness of the proposed approach is the need for continuous profiling of the CPU, RAM, disk, and bandwidth demand of new tasks in real edge–cloud environments. This could result in additional computational overhead.
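To give a sense of what such continuous profiling involves, the following sketch (illustrative only; the psutil-based sampling is our assumption, not the instrumentation used in the cited work) periodically records the CPU, RAM, disk, and network load of a node:

import time
import psutil  # cross-platform system-monitoring library

def sample_node_load():
    """Collect one snapshot of the resource usage a scheduler might profile."""
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),          # CPU utilization over 1 s
        "mem_percent": psutil.virtual_memory().percent,         # RAM utilization
        "disk_percent": psutil.disk_usage("/").percent,         # disk utilization of the root volume
        "net_bytes_sent": psutil.net_io_counters().bytes_sent,  # cumulative network counters
        "net_bytes_recv": psutil.net_io_counters().bytes_recv,
    }

if __name__ == "__main__":
    # Sampling in a loop is precisely the overhead noted above: every snapshot
    # consumes CPU time and memory on the node being profiled.
    for _ in range(3):
        print(sample_node_load())
        time.sleep(5)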
Karimi et al. [19] focus on Mobile Edge Computing, and propose a task scheduling
algorithm that relies on deep reinforcement learning and incorporates a resource allocation
strategy that involves task migration among edge and central cloud servers. With an
appropriate scheduling strategy, they can achieve a shorter response time. However, one drawback of their solution is the need for a reallocation policy, since time-sensitive application tasks may arrive at an edge server whose resources are insufficient, yet require immediate execution.
Sami et al. [35] model the scheduling problem as an MDP and use deep reinforcement
learning techniques to optimize the service placement decision to satisfy QoS requirements.
The simulations demonstrate that the proposed DRL-based approach is more effective
than heuristic-based solutions for time-sensitive service placement problems. Their main
goals are to reduce the number of unfulfilled requests, the number of fog nodes chosen for
placement, the number of high-priority services that were not allocated, and the distance of
the chosen fog nodes from the users requesting services.
Goudarzi et al. [36] propose a novel approach for addressing the application place-
ment problem in heterogeneous fog computing environments using a distributed deep
reinforcement learning technique.
Shi et al. [43] describe a study that uses Deep Reinforcement Learning (DRL) tech-
niques applied to Vehicle-to-Vehicle communication to optimize the utility of scheduling
tasks and minimize delay in a fog computing environment that supports vehicle heterogeneity.
Sarkar and Kumar [41] aim to decrease energy consumption and latency in heterogeneous fog environments by proposing a binary offloading strategy based on deep learning that leverages several parallel deep neural networks (DNNs) to facilitate the scheduling decision-making process. After the decision outputs are stored in a replay memory, they are used to train and test all the neural networks. Notably, in real-time scheduling, their proposal may require the continuous allocation of resources. Additionally, in a dynamic fog environment where user mobility is critical, their strategy may not be a viable option when data offloading patterns change.
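A replay memory of this kind is conceptually simple; the sketch below (a generic illustration, not the authors' implementation) stores (state, action, reward, next_state) decision outputs and serves random mini-batches for training:

import random
from collections import deque

class ReplayMemory:
    """Fixed-size buffer of past scheduling decisions used to train the DNNs."""
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # the oldest experiences are dropped automatically

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Random sampling breaks the temporal correlation between consecutive decisions.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

    def __len__(self):
        return len(self.buffer)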
Xiong et al. [5] formulated a two-step decision problem in cloud–edge-based group-distributed manufacturing systems. The scope of the traditional task scheduling problem
has been expanded in this paper to include deadline constraints. The authors approach the
task placement problem in two steps: first, they prioritize tasks, and second, they schedule
the tasks based on their priorities. The proposed approach based on deep reinforcement
learning aims to minimize the total delay time of task scheduling by employing two agents:
one for prioritizing tasks and another for selecting the suitable node. One challenge that
remains to be addressed is the reduction of model training time.
Gazori et al. [15] focused on addressing task scheduling to reduce latency and com-
puting costs while also meeting resource and deadline constraints. To tackle this problem,
the authors proposed using either machine learning-based reinforcement learning algo-
rithms or rule-based heuristic algorithms. They suggested that deep reinforcement learning
could improve decision-making in resource-constrained environments and formulated a
solution using the Markov Decision Process framework. Their proposed solution was a
Double Deep Q-Learning (DDQL)-based scheduling algorithm that determines whether to
assign a task to a fog node or to a cloud data center. While the DDQL-based scheduling
algorithm yielded significant results in simulations, there is still a need to enhance the
efficiency and effectiveness of the agents’ learning process for this novel idea.
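To convey the underlying decision rule without the deep networks, the sketch below uses tabular double Q-learning over a toy state (the fog queue length) and a binary action (assign the next task to a fog node or to the cloud). The state, reward, and environment dynamics are illustrative assumptions of ours, not those of [15]:

import random

ACTIONS = ("fog", "cloud")  # where to place the next task

def simulate(state, action):
    """Toy environment: the fog is preferable unless its queue is long (illustrative only)."""
    queue = state
    if action == "fog":
        reward = 1.0 - 0.2 * queue       # latency penalty grows with the fog queue
        queue = min(queue + 1, 5)
    else:
        reward = 0.4                     # cloud: higher but constant delay cost
        queue = max(queue - 1, 0)
    return reward, queue

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
    q_a = {(s, a): 0.0 for s in range(6) for a in ACTIONS}
    q_b = dict(q_a)                      # double Q-learning keeps two value tables
    state = 0
    for _ in range(episodes):
        if random.random() < eps:        # epsilon-greedy exploration
            action = random.choice(ACTIONS)
        else:                            # act greedily w.r.t. the sum of both tables
            action = max(ACTIONS, key=lambda a: q_a[(state, a)] + q_b[(state, a)])
        reward, next_state = simulate(state, action)
        if random.random() < 0.5:        # update one table using the other's value estimate
            best = max(ACTIONS, key=lambda a: q_a[(next_state, a)])
            q_a[(state, action)] += alpha * (reward + gamma * q_b[(next_state, best)] - q_a[(state, action)])
        else:
            best = max(ACTIONS, key=lambda a: q_b[(next_state, a)])
            q_b[(state, action)] += alpha * (reward + gamma * q_a[(next_state, best)] - q_b[(state, action)])
        state = next_state
    return q_a, q_b

if __name__ == "__main__":
    q_a, q_b = train()
    for s in range(6):
        best = max(ACTIONS, key=lambda a: q_a[(s, a)] + q_b[(s, a)])
        print(f"fog queue length {s}: schedule the next task on the {best}")

The DDQL algorithm of [15] replaces the two tables with deep neural networks and uses a much richer system state, but the decoupling of action selection and action evaluation shown above is the same mechanism that mitigates Q-value overestimation.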
9. Conclusions
In summary, our exploration of the fog computing domain’s task scheduling land-
scape reveals a notable evolution marked by substantial expansion and transformative
changes. Through an extensive literature review, we have uncovered crucial insights into
the challenges, solutions, and future prospects of task scheduling in fog computing.
At the heart of fog task scheduling lies the persistent pursuit of efficiency, encompass-
ing resource allocation, low-latency delivery, energy consumption, and cost-effectiveness.
Our analysis underscores the dynamic nature of this field and the ongoing efforts to
optimize scheduling processes.
Significantly, our review identifies several critical research gaps that signal a pressing
need for further investigation. These gaps include the exploration of AI-driven scheduling
using explainable ML techniques, the development of innovative autonomous task schedul-
ing algorithms, and the consideration of mobility-aware, availability-aware, and multi-
criteria task scheduling.
These key takeaways emphasize the imperative for continuous advancement in fog
computing task scheduling to fully unlock its potential across diverse domains. By ad-
dressing these challenges and gaps, the fog computing community can pave the way for
enhanced efficiency, reliability, and adaptability in task scheduling, contributing to the
continued growth and impact of fog computing.
Author Contributions: Conceptualization, J.M. and E.C.; Methodology, J.M. and E.C.; Validation, J.M.
and E.C.; Formal Analysis, J.M. and E.C.; Investigation, J.M. and E.C.; Resources, J.M. and E.C.; Data
Curation, J.M. and E.C.; Writing—original Draft Preparation, J.M. and E.C.; Writing—review and
Editing, J.M. and E.C.; Visualization, J.M. and E.C.; Supervision, E.C.; Funding Acquisition, E.C. All
authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by DRONES AS A SERVICE for FIRST EMERGENCY RESPONSE
(Ateneo 2021). Grant number is RG12117A8AFA07F7.
Data Availability Statement: The data presented in this study are available on request from the
corresponding author. The data are not publicly available due to ethical considerations and to protect
participant privacy.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Rahman, G.; Chuah, C.W. Fog Computing, Applications, Security and Challenges, Review. Int. J. Eng. Technol. 2018, 7, 1615.
[CrossRef]
2. Aslanpour, M.S.; Gill, S.S.; Toosi, A. Performance Evaluation Metrics for Cloud, Fog and Edge Computing: A Review, Taxonomy,
Benchmarks and Standards for Future Research. Internet Things 2020, 12, 100273. [CrossRef]
3. Tuli, S.; Ilager, S.; Ramamohanarao, K.; Buyya, R. Dynamic Scheduling for Stochastic Edge-Cloud Computing Environments
Using A3C Learning and Residual Recurrent Neural Networks. IEEE Trans. Mob. Comput. 2022, 21, 940–954. [CrossRef]
4. Abdel-Kader, R.; El Sayad, N.; Rizk, R. Efficient energy and completion time for dependent task computation offloading algorithm
in industry 4.0. PLoS ONE 2021, 16, e0252756. [CrossRef] [PubMed]
5. Xiong, J.; Guo, P.; Wang, Y.; Meng, X.; Zhang, J.; Qian, L.; Yu, Z. Multi-agent deep reinforcement learning for task offloading in
group distributed manufacturing systems. Eng. Appl. Artif. Intell. 2023, 118, 105710. [CrossRef]
6. Nikolopoulos, V.; Nikolaidou, M.; Voreakou, M.; Anagnostopoulos, D. Fog Node Self-Control Middleware: Enhancing context
awareness towards autonomous decision making in Fog Colonies. Internet Things 2022, 19, 100549. [CrossRef]
7. Apat, H.K.; Sahoo, B.; Maiti, P. Service Placement in Fog Computing Environment. In Proceedings of the 2018 International
Conference on Information Technology (ICIT), Bhubaneswar, India, 19–21 December 2018; pp. 272–277. [CrossRef]
8. Dehury, C.; Srirama, S. Personalized Service Delivery using Reinforcement Learning in Fog and Cloud Environment. In
Proceedings of the iiWAS2019: The 21st International Conference on Information Integration and Web-based Applications &
Services, Munich, Germany, 2–4 December 2019; pp. 522–529. [CrossRef]
9. Wang, J.; Hu, J.; Min, G.; Zomaya, A.Y.; Georgalas, N. Fast Adaptive Task Offloading in Edge Computing Based on Meta
Reinforcement Learning. IEEE Trans. Parallel Distrib. Syst. 2021, 32, 242–253. [CrossRef]
10. Raju, M.R.; Mothku, S.K. Delay and energy aware task scheduling mechanism for fog-enabled IoT applications: A reinforcement
learning approach. Comput. Netw. 2023, 224, 109603. [CrossRef]
11. Lima, D.; Miranda, H. A geographical-aware state deployment service for Fog Computing. Comput. Netw. 2022, 216, 109208.
[CrossRef]
12. Wu, H.Y.; Lee, C.R. Energy Efficient Scheduling for Heterogeneous Fog Computing Architectures. In Proceedings of the 2018
IEEE 42nd Annual Computer Software and Applications Conference (COMPSAC), Tokyo, Japan, 23–27 July 2018; Volume 1,
pp. 555–560. [CrossRef]
13. Das, R.; Inuwa, M.M. A review on fog computing: Issues, characteristics, challenges, and potential applications. Telemat. Inform.
Rep. 2023, 10, 100049. [CrossRef]
14. Waqas, M.; Niu, Y.; He, J.; Ahmed, M.; Chen, X.; Li, Y.; Jin, D.; Han, Z. Mobility-Aware Fog Computing in Dynamic Environments:
Understandings and Implementation. IEEE Access 2018, 7, 38867–38879. [CrossRef]
15. Gazori, P.; Rahbari, D.; Nickray, M. Saving time and cost on the scheduling of fog-based IoT applications using deep reinforcement
learning approach. Future Gener. Comput. Syst. 2019, 110, 1098–1115. [CrossRef]
16. Zhang, C.; Zheng, Z. Task migration for mobile edge computing using deep reinforcement learning. Future Gener. Comput. Syst.
2019, 96, 111–118. [CrossRef]
17. Mai, L.; Dao, N.N.; Park, M. Real-Time Task Assignment Approach Leveraging Reinforcement Learning with Evolution Strategies
for Long-Term Latency Minimization in Fog Computing. Sensors 2018, 18, 2830. [CrossRef] [PubMed]
18. Wang, J.; Li, D. Task Scheduling Based on a Hybrid Heuristic Algorithm for Smart Production Line with Fog Computing. Sensors
2019, 19, 1023. [CrossRef] [PubMed]
19. Karimi, E.; Chen, Y.; Akbari, B. Task offloading in vehicular edge computing networks via deep reinforcement learning. Comput.
Commun. 2022, 189, 193–204. [CrossRef]
20. Bukhari, M.; Ghazal, T.; Abbas, S.; Khan, M.; Farooq, U.; Wahbah, H.; Ahmad, M.; Khan, M. An Intelligent Proposed Model for
Task Offloading in Fog-Cloud Collaboration Using Logistics Regression. Comput. Intell. Neurosci. 2022, 2022, 1–25. [CrossRef]
21. Yin, L.; Luo, J.; Luo, H. Tasks Scheduling and Resource Allocation in Fog Computing Based on Containers for Smart Manufacturing.
IEEE Trans. Ind. Inform. 2018, 14, 4712–4721. [CrossRef]
22. Goudarzi, M.; Palaniswami, M.; Buyya, R. Scheduling IoT Applications in Edge and Fog Computing Environments: A Taxonomy
and Future Directions. ACM Comput. Surv. 2022, 55, 1–41. [CrossRef]
23. Mahmud, M.; Ramamohanarao, K.; Buyya, R. Application Management in Fog Computing Environments: A Taxonomy, Review
and Future Directions. ACM Comput. Surv. 2020, 53, 1–43. [CrossRef]
24. Singh, J.; Singh, P.; Amhoud, E.M.; Hedabou, M. Energy-Efficient and Secure Load Balancing Technique for SDN-Enabled Fog
Computing. Sustainability 2022, 14, 12951. [CrossRef]
25. Mohamed, I.; Al-Mahdi, H.; Tahoun, M.; Nassar, H. Characterization of task response time in fog enabled networks using
queueing theory under different virtualization modes. J. Cloud Comput. 2022, 11, 21. [CrossRef]
26. Vakilian, S.; Moravvej, S.V.; Fanian, A. Using the Cuckoo Algorithm to Optimizing the Response Time and Energy Consumption
Cost of Fog Nodes by Considering Collaboration in the Fog Layer. In Proceedings of the 2021 5th International Conference on
Internet of Things and Applications (IoT), Isfahan, Iran, 19–20 May 2021; pp. 1–5. [CrossRef]
27. Razaq, M.; Rahim, S.; Tak, B.; Peng, L. Fragmented Task Scheduling for Load-Balanced Fog Computing Based on Q-Learning.
Wirel. Commun. Mob. Comput. 2022, 2022, 4218696. [CrossRef]
28. Movahedi, Z.; Defude, B.; Hosseininia, A.M. An Efficient Population-Based Multi-Objective Task Scheduling Approach in Fog
Computing Systems. J. Cloud Comput. 2021, 10, 53. [CrossRef]
29. Lin, K.; Pankaj, S.; Wang, D. Task offloading and resource allocation for edge-of-things computing on smart healthcare systems.
Comput. Electr. Eng. 2018, 72, 348–360. [CrossRef]
30. Ali, H.; Rout, R.; Parimi, P.; Das, S. Real-Time Task Scheduling in Fog-Cloud Computing Framework for IoT Applications: A
Fuzzy Logic based Approach. In Proceedings of the 2021 International Conference on COMmunication Systems & NETworkS
(COMSNETS), Bangalore, India, 5–9 January 2021; pp. 556–564. [CrossRef]
31. Guevara, J.; Fonseca, N. Task scheduling in cloud-fog computing systems. Peer-Netw. Appl. 2021, 14, 962–977. [CrossRef]
32. Tsai, J.F.; Huang, C.H.; Lin, M.H. An Optimal Task Assignment Strategy in Cloud-Fog Computing Environment. Appl. Sci. 2021,
11, 1909. [CrossRef]
33. Nguyen, B.M.; Thi Thanh Binh, H.; The Anh, T.; Bao Son, D. Evolutionary Algorithms to Optimize Task Scheduling Problem for
the IoT Based Bag-of-Tasks Application in Cloud–Fog Computing Environment. Appl. Sci. 2019, 9, 1730. [CrossRef]
34. Ghobaei-Arani, M.; Souri, A.; Safara, F.; Norouzi, M. An Efficient Task Scheduling Approach Using Moth-Flame Optimization
Algorithm for Cyber-Physical System Applications in Fog Computing. Trans. Emerg. Telecommun. Technol. 2020, 31, e3770.
[CrossRef]
35. Sami, H.; Mourad, A.; Otrok, H.; Bentahar, J. Demand-Driven Deep Reinforcement Learning for Scalable Fog and Service
Placement. IEEE Trans. Serv. Comput. 2022, 15, 2671–2684. [CrossRef]
36. Goudarzi, M.; Palaniswami, M.; Buyya, R. A Distributed Deep Reinforcement Learning Technique for Application Placement in
Edge and Fog Computing Environments. IEEE Trans. Mob. Comput. 2021, 22, 2491–2505. [CrossRef]
37. Jazayeri, F.; Shahidinejad, A.; Ghobaei-Arani, M. Autonomous computation offloading and auto-scaling the in the mobile fog
computing: A deep reinforcement learning-based approach. J. Ambient Intell. Humaniz. Comput. 2021, 12, 8265–8284. [CrossRef]
38. Abdulredha, M.N.; Attea, B.A.; Jabir, A.J. An Evolutionary Algorithm for Task scheduling Problem in the Cloud-Fog environment.
J. Phys. Conf. Ser. 2021, 1963, 012044. [CrossRef]
39. Yang, Y.; Zhao, S.; Zhang, W.; Chen, Y.; Luo, X.; Wang, J. DEBTS: Delay Energy Balanced Task Scheduling in Homogeneous Fog
Networks. IEEE Internet Things J. 2018, 5, 2094–2106. [CrossRef]
40. Nikoui, T.S.; Balador, A.; Rahmani, A.M.; Bakhshi, Z. Cost-Aware Task Scheduling in Fog-Cloud Environment. In Proceedings of
the 2020 CSI/CPSSI International Symposium on Real-Time and Embedded Systems and Technologies (RTEST), Tehran, Iran,
10–11 June 2020; pp. 1–8. [CrossRef]
41. Sarkar, I.; Kumar, S. Deep Learning-Based Energy-Efficient Computational Offloading Strategy in Heterogeneous Fog Computing
Networks. J. Supercomput. 2022, 78, 15089–15106. [CrossRef]
42. Fan, J.; Ma, R.; Gao, Y.; Gu, Z. A Reinforcement Learning based Computing Offloading and Resource Allocation Scheme in
F-RAN. EURASIP J. Adv. Signal Process. 2021, 2021, 91. [CrossRef]
43. Shi, J.; Du, J.; Wang, J.; Yuan, J. Deep Reinforcement Learning-Based V2V Partial Computation Offloading in Vehicular Fog
Computing. In Proceedings of the 2021 IEEE Wireless Communications and Networking Conference (WCNC), Nanjing, China,
29 March–1 April 2021; pp. 1–6. [CrossRef]
44. Jamil, B.; Shojafar, M.; Ahmed, I.; Ullah, A.; Munir, K.; Ijaz, H. A Job Scheduling Algorithm for Delay and Performance
Optimization in Fog Computing. Concurr. Comput. Pract. Exp. 2019, 32, e5581. [CrossRef]
45. Yang, M.; Zhu, H.; Wang, H.; Koucheryavy, Y.; Samouylov, K.; Qian, H. An Online Learning Approach to Computation Offloading
in Dynamic Fog Networks. IEEE Internet Things J. 2020, 8, 1572–1584. [CrossRef]
46. Alatoun, K.; Matrouk, K.; Mohammed, M.A.; Nedoma, J.; Martinek, R.; Zmij, P. A Novel Low-Latency and Energy-Efficient Task
Scheduling Framework for Internet of Medical Things in an Edge Fog Cloud System. Sensors 2022, 22, 5327. [CrossRef]
47. Wu, B.; Lv, X.; Deyah Shamsi, W.; Gholami Dizicheh, E. Optimal deploying IoT services on the fog computing: A metaheuristic-
based multi-objective approach. J. King Saud Univ. Comput. Inf. Sci. 2022, 34, 10010–10027. [CrossRef]
48. Baek, J.Y.; Kaddoum, G. Heterogeneous Task Offloading and Resource Allocations via Deep Recurrent Reinforcement Learning
in Partial Observable Multi-Fog Networks. IEEE Internet Things J. 2020, 8, 1041–1056. [CrossRef]
49. Wang, Y.; Huang, H.; Miyazaki, T.; Guo, S. Traffic and Computation Co-Offloading With Reinforcement Learning in Fog
Computing for Industrial Applications. IEEE Trans. Ind. Inform. 2018, 15, 976–986. [CrossRef]
50. Wang, S.; Hu, Z.; Deng, Y.; Hu, L. Game-Theory-Based Task Offloading and Resource Scheduling in Cloud-Edge Collaborative
Systems. Appl. Sci. 2022, 12, 6154. [CrossRef]
51. Benblidia, M.; Brik, B.; Merghem-Boulahia, L.; Esseghir, M. Ranking Fog nodes for Tasks Scheduling in Fog-Cloud Environments:
A Fuzzy Logic Approach. In Proceedings of the 2019 15th International Wireless Communications & Mobile Computing
Conference (IWCMC), Tangier, Morocco, 24–28 June 2019; pp. 1451–1457. [CrossRef]
52. Aburukba, R.O.; AliKarrar, M.; Landolsi, T.; El-Fakih, K. Scheduling Internet of Things requests to minimize latency in hybrid
Fog–Cloud computing. Future Gener. Comput. Syst. 2020, 111, 539–551. [CrossRef]
53. Canali, C.; Lancellotti, R. GASP: Genetic Algorithms for Service Placement in Fog Computing Systems. Algorithms 2019, 12, 201.
[CrossRef]
54. Zhang, J.; Guo, H.; Liu, J. A Reinforcement Learning Based Task Offloading Scheme for Vehicular Edge Computing Network; Artificial
Intelligence for Communications and Networks; Springer: Harbin, China, 2019; pp. 438–449. [CrossRef]
55. Ren, Y.; Sun, Y.; Peng, M. Deep Reinforcement Learning Based Computation Offloading in Fog Enabled Industrial Internet of
Things. IEEE Trans. Ind. Inform. 2021, 17, 4978–4987. [CrossRef]
56. Ahvar, E.; Ahvar, S.; Mann, Z.; Crespi, N.; Glitho, R.; Garcia-Alfaro, J. DECA: A Dynamic Energy Cost and Carbon Emission-
Efficient Application Placement Method for Edge Clouds. IEEE Access 2021, 9, 70192–70213. [CrossRef]
57. Bozorgchenani, A.; Disabato, S.; Tarchi, D.; Roveri, M. An energy harvesting solution for computation offloading in Fog
Computing networks. Comput. Commun. 2020, 160, 577–587. [CrossRef]
58. Hosseinioun, P.; Kheirabadi, M.; Kamel Tabbakh, S.R.; Ghaemi, R. A new energy-aware tasks scheduling approach in fog
computing using hybrid meta-heuristic algorithm. J. Parallel Distrib. Comput. 2020, 143, 88–96. [CrossRef]
59. Vanhala, E.; Kasurinen, J.; Knutas, A.; Herala, A. The Application Domains of Systematic Mapping Studies: A Mapping Study of
the First Decade of Practice With the Method. IEEE Access 2022, 10, 37924–37937. [CrossRef]
60. Giuffrida, R.; Dittrich, Y. Empirical studies on the use of social software in global software development – A systematic mapping
study. Inf. Softw. Technol. 2013, 55, 1143–1164. [CrossRef]
61. Wohlin, C. Guidelines for Snowballing in Systematic Literature Studies and a Replication in Software Engineering. In Proceedings
of the 18th International Conference on Evaluation and Assessment in Software Engineering (EASE ’14), New York, NY, USA,
13–14 May 2014. [CrossRef]
62. Rožanc, I.; Mernik, M. The screening phase in systematic reviews: Can we speed up the process? Adv. Comput. 2021, 123, 115–191.
[CrossRef]
63. Zhang, H.; Babar, M.A.; Tell, P. Identifying Relevant Studies in Software Engineering. Inf. Softw. Technol. 2011, 53, 625–637.
[CrossRef]
64. Pandit, M.K.; Mir, R.; Chishti, M.A. Adaptive task scheduling in IoT using reinforcement learning. Int. J. Intell. Comput. Cybern.
2020. [CrossRef]
65. Sopin, E.; Nikita, Z.; Ageev, K.; Shorgin, S. Analysis of the Response Time Characteristics of the Fog Computing Enabled Real-Time
Mobile Applications; Springer: St. Petersburg, Russia, 2020; pp. 99–109. [CrossRef]
66. Gill, S.S.; Chana, I.; Singh, M.; Buyya, R. CHOPPER: An intelligent QoS-aware autonomic resource management approach for
cloud computing. Clust. Comput. 2018, 21, 1203–1241. [CrossRef]
67. Hosseinioun, P.; Kheirabadi, M.; Kamel Tabbakh, S.R.; Ghaemi, R. Task Scheduling Approaches in Fog Computing: A Survey.
Trans. Emerg. Telecommun. Technol. 2022, 33. [CrossRef]
68. Aburukba, R.; Landolsi, T.; Omer, D. A heuristic scheduling approach for fog-cloud computing environment with stationary IoT
devices. J. Netw. Comput. Appl. 2021, 180, 102994. [CrossRef]
69. Khaleel, M. Hybrid cloud-fog computing workflow application placement: Joint consideration of reliability and time credibility.
Multimed. Tools Appl. 2022, 82, 1–32. [CrossRef]
70. Poltronieri, F.; Tortonesi, M.; Stefanelli, C.; Suri, N. Reinforcement Learning for value-based Placement of Fog Services. In
Proceedings of the 2021 IFIP/IEEE International Symposium on Integrated Network Management (IM), Bordeaux, France, 17–21
May 2021; pp. 466–472.
71. Rai, K.; Vemireddy, S.; Rout, R. Fuzzy Logic based Task Scheduling Algorithm in Vehicular Fog Computing Framework. In
Proceedings of the 2021 IEEE 18th India Council International Conference (INDICON), Guwahati, India, 19–21 December 2021;
pp. 1–6. [CrossRef]
72. Wu, Q.; Wu, Z.; Zhuang, Y.; Cheng, Y. Adaptive DAG Tasks Scheduling with Deep Reinforcement Learning. In Proceedings of
the 18th International Conference, ICA3PP 2018, Guangzhou, China, 15–17 November 2018; Proceedings, Part II; pp. 477–490.
[CrossRef]
73. Kumar, M.S.; Karri, G.R. EEOA: Cost and Energy Efficient Task Scheduling in a Cloud-Fog Framework. Sensors 2023, 23, 2445.
[CrossRef]
74. Farhat, P.; Sami, H.; Mourad, A. Reinforcement R-learning model for time scheduling of on-demand fog placement. J. Supercomput.
2020, 76, 1–23. [CrossRef]
75. Mahmud, M.; Srirama, S.; Ramamohanarao, K.; Buyya, R. Quality of Experience (QoE)-aware placement of applications in Fog
computing environments. J. Parallel Distrib. Comput. 2018, 132, 190–203. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.