Mobile edge computing (MEC) is emerging as a cornerstone technology to resolve the conflict between resource-constrained smart devices (SDs) and the ever-increasing computational demands of mobile applications. MEC enables SDs to offload computation-intensive tasks to nearby edge nodes for better quality of service (QoS). Recently proposed offloading strategies mainly consider a centralized approach for a limited number of SDs. However, with the growing popularity of SDs, these offloading models may suffer from scalability issues and can be susceptible to single points of failure. Although a few distributed offloading models exist in the literature, they ignore the vast computational resources of the cloud, load sharing between MEC servers, and other optimization parameters. Toward this end, we propose an efficient computation offloading scheme for a distributed load-sharing MEC network that cooperates with cloud computing to enhance the capabilities of the SDs. We formulate a nonlinear multiobjective optimization problem, applying queuing theory to model the execution delay, energy consumption, and payment cost of using edge and cloud services. To solve the formulated problem, we propose a stochastic gradient descent (SGD)-based solution approach that jointly optimizes the offloading probability and transmission power of the SDs, finding an optimal trade-off among energy consumption, execution delay, and cost. Finally, we perform extensive simulations to demonstrate the effectiveness of the proposed offloading scheme, which is scalable and outperforms existing schemes.
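As a rough illustration of the kind of joint optimization the abstract describes, the sketch below runs gradient descent on a hypothetical weighted cost combining M/M/1 queueing delay, transmission energy, and a payment term. All constants, the cost model, and the finite-difference gradients are illustrative assumptions, not the paper's formulation.

```python
# Hedged sketch: SGD over offloading probability p and (normalized)
# transmit power, minimizing a hypothetical delay/energy/payment cost.
# Every constant and the queueing model below are illustrative only.

LAMBDA = 5.0      # task arrival rate (tasks/s), assumed
MU_LOCAL = 8.0    # local service rate, assumed
MU_EDGE = 20.0    # edge service rate, assumed
W_DELAY, W_ENERGY, W_COST = 1.0, 0.5, 0.2   # assumed trade-off weights

def total_cost(p, power):
    """Weighted delay + energy + payment for offloading probability p
    and normalized transmit power `power`, both kept in (0, 1)."""
    local_delay = 1.0 / (MU_LOCAL - (1 - p) * LAMBDA)          # M/M/1 sojourn time
    edge_delay = 1.0 / (MU_EDGE - p * LAMBDA) + p / (power + 1e-9)
    energy = (1 - p) * 0.3 + p * power                          # tx energy grows with power
    payment = p * 0.4                                           # pay per offloaded task
    return W_DELAY * (local_delay + edge_delay) + W_ENERGY * energy + W_COST * payment

def sgd(p=0.5, power=0.5, lr=0.01, steps=2000, eps=1e-4):
    """Jointly descend on (p, power) using finite-difference gradients."""
    for _ in range(steps):
        gp = (total_cost(p + eps, power) - total_cost(p - eps, power)) / (2 * eps)
        gw = (total_cost(p, power + eps) - total_cost(p, power - eps)) / (2 * eps)
        p = min(0.99, max(0.01, p - lr * gp))          # keep a valid probability
        power = min(0.99, max(0.01, power - lr * gw))  # keep power in range
    return p, power

p_opt, w_opt = sgd()
```

Any smooth surrogate with queue-stability constraints would fit the same descent loop; the clamping stands in for the feasibility constraints the paper would enforce.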
The advent of new cloud-based applications such as mixed reality, online gaming, autonomous driving, and healthcare has introduced infrastructure management challenges to the underlying service network. Multi-access edge computing (MEC) extends the cloud computing paradigm and leverages servers near end-users at the network edge to provide a cloud-like environment. The optimal placement of services on edge servers plays a crucial role in the performance of such service-based applications. The dynamic service placement problem addresses the adaptive configuration of application services at edge servers to serve end-users and devices that need to offload computation tasks. While the approaches reported in the literature shed light on this problem from particular perspectives, a panoramic study reveals the research gaps in the big picture. This paper introduces the dynamic service placement problem and outlines its relations with other problems such as task scheduling, resource management, and caching at the edge. We also present a systematic literature review of existing dynamic service placement methods for MEC environments from networking, middleware, application, and evaluation perspectives. In the first step, we review different MEC architectures and their enabling technologies from a networking point of view. We also introduce different cache deployment solutions in network architectures and discuss their design considerations. The second step investigates dynamic service placement methods from a middleware viewpoint. We review different service packaging technologies and discuss their trade-offs. We also survey the methods and identify eight research directions that researchers follow. Our study categorises the research objectives into six main classes, proposing a taxonomy of design objectives for the dynamic service placement problem. We also investigate the reported methods and devise a solution taxonomy comprising six criteria.
In the third step, we concentrate on the application layer and introduce the applications that can take advantage of dynamic service placement. The fourth step investigates evaluation environments used to validate the solutions, including simulators and testbeds. We introduce real-world datasets such as edge server locations, mobility traces, and service requests used to evaluate the methods. We compile a list of open issues and challenges categorised by various viewpoints in the last step.
Today's advances in mobile technologies, in both hardware and software, have driven the widespread use of mobile devices for diverse purposes. Along with this progress, today's mobile devices are expected to run various types of applications. However, the energy constraints of mobile devices, together with their limited computation power, act as a barrier. To address this deficiency, mobile cloud computing has been proposed, in which cloud resources are used to extend mobile devices' capabilities. However, due to the varying connectivity and bandwidth of the wireless channel, an online offloading mechanism is required, which may lead to high decision time and energy. To address this challenge, we propose a priority-based fast computation offloading mechanism that finds the optimal offloading solution using a modified branch-and-bound algorithm. Results of extensive simulations and testbed experiments demonstrate that our proposal outperforms existing optimal counterparts in terms of energy consumption and execution time.
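To make the branch-and-bound idea concrete, here is a minimal sketch for binary offloading decisions: minimize total energy subject to a completion deadline, pruning branches whose optimistic energy bound cannot beat the incumbent. The per-task energy/time model and the deadline constraint are illustrative assumptions, not the paper's modified algorithm or priority scheme.

```python
# Hedged sketch of branch-and-bound offloading. Each task i costs
# (e_local[i], t_local[i]) on the device and (e_off[i], t_off[i]) when
# offloaded; the goal is minimum energy within a completion deadline.

def branch_and_bound(e_local, t_local, e_off, t_off, deadline):
    """Return (min_energy, decisions); decisions[i] = 1 means offload task i."""
    n = len(e_local)
    best = [float("inf"), None]          # incumbent energy and decision vector

    def bound(i, energy):
        # Optimistic bound: remaining tasks take their cheaper-energy option.
        return energy + sum(min(e_local[j], e_off[j]) for j in range(i, n))

    def dfs(i, energy, time, picks):
        if time > deadline or bound(i, energy) >= best[0]:
            return                        # prune infeasible or dominated branch
        if i == n:
            best[0], best[1] = energy, picks[:]
            return
        for off, (de, dt) in ((0, (e_local[i], t_local[i])),
                              (1, (e_off[i], t_off[i]))):
            picks.append(off)             # branch on local (0) vs offload (1)
            dfs(i + 1, energy + de, time + dt, picks)
            picks.pop()

    dfs(0, 0.0, 0.0, [])
    return best[0], best[1]
```

A usage example under this toy model: `branch_and_bound([3, 2, 4], [2, 2, 3], [1, 1, 1], [3, 3, 3], 8)` offloads the first and third tasks. A real implementation would sort tasks by a priority heuristic first, which is what makes the paper's variant fast.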
This paper puts forth an aerial edge Internet-of-Things (EdgeIoT) system, where an unmanned aerial vehicle (UAV) is employed as a mobile edge server to process mission-critical computation tasks of ground Internet-of-Things (IoT) devices. When the UAV schedules an IoT device to offload its computation task, the tasks buffered at the other, unselected devices could become outdated and have to be cancelled. We investigate a new joint optimization of UAV cruise control and task offloading allocation, which maximizes the tasks offloaded to the UAV subject to the IoT devices' computation capacity and battery budgets, and the UAV's speed limit. Since the optimization has a large solution space and the instantaneous network states are unknown to the UAV, we propose a new deep graph-based reinforcement learning framework. An advantage actor-critic (A2C) structure is developed to train the UAV's real-time continuous actions, namely the flight speed, heading, and the offloading schedule of the IoT devices. By exploring hidden representations arising from the network feature correlations, our framework takes advantage of graph neural networks (GNNs) to supervise the training of the UAV's actions in A2C. The proposed GNN-A2C framework is implemented with Google TensorFlow. The performance analysis shows that GNN-A2C achieves fast convergence and considerably reduces the task miss rate in aerial EdgeIoT.
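For readers unfamiliar with A2C, the following toy sketch shows the core update the abstract builds on: a critic learns a state value, and the actor is pushed along the policy gradient weighted by the advantage. The linear actor/critic, the discrete device schedule, and the state features are simplifying stand-ins; the paper's GNN-A2C uses graph neural networks and continuous speed/heading actions instead.

```python
import numpy as np

# Hedged sketch: one advantage actor-critic (A2C) update for a toy UAV
# scheduler. Everything here (linear models, reward, feature size) is an
# illustrative assumption, not the paper's GNN-A2C architecture.

rng = np.random.default_rng(0)
N_DEVICES, D = 4, 6                               # schedulable IoT devices, state dims
actor_w = rng.normal(0, 0.1, (D, N_DEVICES))      # logits over which device to serve
critic_w = rng.normal(0, 0.1, D)                  # linear state-value estimate

def softmax(z):
    z = z - z.max()                               # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def a2c_step(state, reward, next_state, lr=0.05, gamma=0.9):
    """Sample a schedule action, then update the critic toward the one-step
    TD target and the actor along the advantage-weighted policy gradient."""
    global actor_w, critic_w
    probs = softmax(state @ actor_w)
    action = rng.choice(N_DEVICES, p=probs)
    td_target = reward + gamma * (next_state @ critic_w)
    advantage = td_target - state @ critic_w      # A(s, a) ~ TD error
    critic_w = critic_w + lr * advantage * state  # critic: reduce TD error
    grad_log = -probs                             # d log pi(a|s) / d logits
    grad_log[action] += 1.0
    actor_w = actor_w + lr * advantage * np.outer(state, grad_log)
    return int(action), float(advantage)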
In mobile edge computing (MEC), partial computational offloading can be intelligently exploited to reduce the energy consumption and service delay of user equipment (UE) by dividing a single task into different components. Some of the components execute locally on the UE while the remaining ones are offloaded to a mobile edge server (MES). In this paper, we investigate the partial offloading technique in MEC using a supervised deep learning approach. The proposed technique, a comprehensive and energy-efficient deep learning-based offloading technique (CEDOT), intelligently selects the partial offloading policy as well as the size of each component of a task to reduce the service delay and energy consumption of UEs. We use deep learning to find, simultaneously, the best partitioning of a single task together with the best offloading policy. The deep neural network (DNN) is trained on a comprehensive dataset, generated from our mathematical model, which reduces the time delay and energy consumption of the UEs.
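The partial-offloading trade-off above can be made concrete with a small model: a task of C total CPU cycles is split so a fraction x runs locally while the rest is offloaded, and the two parts run in parallel. All constants and the delay/energy model are illustrative assumptions; an exhaustive search over x stands in for the DNN's learned output, which is not reproduced here.

```python
# Hedged sketch of the partial-offloading cost a dataset generator might
# use. Constants below are assumptions, not the paper's parameters.

F_LOCAL = 1e9        # device CPU speed, cycles/s
F_EDGE = 5e9         # MES CPU speed, cycles/s
RATE = 2e6           # uplink rate, bits/s
K = 1e-27            # effective switched capacitance (local CPU energy model)
P_TX = 0.5           # transmit power, watts

def cost(x, cycles=8e8, bits=4e6, w_delay=1.0, w_energy=1.0):
    """Weighted delay + energy when fraction x of the task stays local."""
    t_local = x * cycles / F_LOCAL
    t_off = (1 - x) * bits / RATE + (1 - x) * cycles / F_EDGE
    delay = max(t_local, t_off)            # local and offloaded parts in parallel
    energy = K * F_LOCAL**2 * x * cycles + P_TX * (1 - x) * bits / RATE
    return w_delay * delay + w_energy * energy

# Grid search over split fractions stands in for the trained DNN's decision.
best_x = min((i / 100 for i in range(101)), key=cost)
```

Under these numbers the best split is interior (neither fully local nor fully offloaded), which is exactly the regime where a learned partitioning policy pays off.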
In recent years, the importance of the mobile edge computing (MEC) paradigm, along with 5G, the Internet of Things (IoT), and the virtualization of network functions, has been well noted. Meanwhile, the implementation of computation-intensive applications at the mobile device level is limited by battery capacity, processing capabilities, and execution time. To increase battery life and improve the quality of experience for computationally intensive and latency-sensitive applications, offloading some parts of these applications to the MEC is proposed. This paper presents a solution for a hard decision problem that jointly optimizes the processing time and computing resources in a mobile edge computing node. Hence, we consider a mobile device with a list of offloadable heavy tasks and jointly optimize the offloading decisions and the allocation of IT resources to reduce the tasks' processing latency. We developed a heuristic solution based on the simulated annealing algorithm, which can improve the offloading rate and reduce the total task latency while meeting short decision times. We performed a series of experiments to show its efficiency. The obtained results in terms of total processing time are very encouraging, and our solution makes offloading decisions within acceptable and achievable deadlines.
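A minimal simulated-annealing sketch for binary offload decisions follows: flip one decision per step, always accept improvements, and accept worse states with a probability that shrinks as the temperature cools. The latency model (sequential local execution vs. queued edge execution, makespan as the objective) and the cooling schedule are illustrative assumptions, not the paper's settings.

```python
import math
import random

# Hedged sketch: simulated annealing over binary offload decisions.
# decisions[i] = 0 runs task i locally, 1 offloads it to the edge node.

def total_latency(decisions, t_local, t_edge):
    """Each side processes its tasks sequentially, so the completion time
    is the larger of the two summed service times (the makespan)."""
    local = sum(t for t, d in zip(t_local, decisions) if d == 0)
    edge = sum(t for t, d in zip(t_edge, decisions) if d == 1)
    return max(local, edge)

def anneal(t_local, t_edge, temp=1.0, cooling=0.995, steps=3000, seed=1):
    rng = random.Random(seed)
    n = len(t_local)
    state = [rng.randint(0, 1) for _ in range(n)]
    cost = total_latency(state, t_local, t_edge)
    best, best_cost = state[:], cost
    for _ in range(steps):
        i = rng.randrange(n)
        state[i] ^= 1                                  # flip one offload decision
        new_cost = total_latency(state, t_local, t_edge)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost                            # accept (maybe uphill) move
            if cost < best_cost:
                best, best_cost = state[:], cost
        else:
            state[i] ^= 1                              # reject: undo the flip
        temp *= cooling                                # geometric cooling
    return best, best_cost
```

For four identical tasks that take 2 time units on either side, the search settles on a two/two split with makespan 4, the balanced optimum under this toy model.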
Blockchain enables smart contracts for secure data transfer, through which fog offloading servers gain trustworthy access control over data execution. When the cloud is used to handle requests from mobile users, an attacker may launch a denial-of-service attack; the same is possible at fog nodes, and both threats can be mitigated with blockchain technology. In this paper, a smart city application is discussed as a use case for a blockchain-based fog computing architecture. We propose a novel offload-chain architecture for blockchain-based offloading in Internet of Things (IoT) networks, in which mobile devices can offload their data to fog servers for computation through an access control mechanism. A deep reinforcement learning (DRL)-based offload-chain model is proposed to improve the efficiency of blockchain-based fog offloading over existing models.
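As a minimal stand-in for the DRL-based offload decision, the sketch below uses tabular Q-learning: states are coarse fog-load levels, actions are run-locally vs. offload, and the rewards follow an assumed model in which offloading pays off only when the fog node is idle. The state space, rewards, and random load dynamics are all illustrative; the paper's model operates on a much richer blockchain/fog environment.

```python
import random

# Hedged sketch: tabular Q-learning for the offload-or-not decision.
# States: fog load level in {0: idle, 1: busy, 2: overloaded}.
# Actions: 0 = run locally, 1 = offload to the fog server.

N_LOAD_LEVELS, ACTIONS = 3, 2
Q = [[0.0] * ACTIONS for _ in range(N_LOAD_LEVELS)]

def reward(load, action):
    if action == 0:
        return -2.0                        # fixed local execution cost (assumed)
    return -1.0 if load == 0 else -3.0     # fog is fast when idle, slow otherwise

def train(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=7):
    rng = random.Random(seed)
    for _ in range(episodes):
        load = rng.randrange(N_LOAD_LEVELS)
        # Epsilon-greedy action selection.
        if rng.random() < eps:
            a = rng.randrange(ACTIONS)
        else:
            a = max(range(ACTIONS), key=lambda x: Q[load][x])
        r = reward(load, a)
        nxt = rng.randrange(N_LOAD_LEVELS)          # load evolves randomly here
        Q[load][a] += alpha * (r + gamma * max(Q[nxt]) - Q[load][a])

train()
policy = [max(range(ACTIONS), key=lambda a: Q[s][a]) for s in range(N_LOAD_LEVELS)]
```

Under this reward model the learned policy offloads only at load level 0, which is the intended load-aware behaviour a DRL offloading agent generalizes to larger state spaces.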
Computation offloading is a promising solution for mobile devices that lack the computational power to execute applications with high computational cost. There are several criteria on which computational offloading can be performed, the common ones being load balancing across the servers on which tasks are computed, energy management, security and privacy of the tasks to be offloaded, and, most importantly, the computational requirements of the task. Moreover, more and more offloading solutions use machine learning (ML) and deep learning (DL) algorithms to predict the best nodes to which tasks should be offloaded, improving offloading performance by reducing task computation delay. We present various computational offloading techniques that use ML and DL. We also describe numerous middleware technologies and the criteria that are crucial for offloading in specific deployments.