Search Results (725)

Search Parameters:
Keywords = mobile-edge computing

11 pages, 1292 KiB  
Article
Improved JPEG Lossless Compression for Compression of Intermediate Layers in Neural Networks Based on Compute-In-Memory
by Junyong Hua, Hang Xu, Yuan Du and Li Du
Electronics 2024, 13(19), 3872; https://doi.org/10.3390/electronics13193872 - 30 Sep 2024
Viewed by 251
Abstract
With the development of Convolutional Neural Networks (CNNs), there is a growing requirement for their deployment on edge devices. At the same time, Compute-In-Memory (CIM) technology has gained significant attention in edge CNN applications due to its ability to minimize data movement between memory and computing units. However, the deployment of complex deep neural network models on edge devices with restricted hardware resources continues to be challenged by a lack of adequate storage for intermediate layer data. In this article, we propose an optimized JPEG Lossless Compression (JPEG-LS) algorithm that implements serial context parameter updating alongside parallel encoding. This method is designed for the global prediction and efficient compression of intermediate data layers in neural networks employing CIM techniques. The results indicate average compression ratios of 6.44× for VGG16, 3.62× for ResNet34, 1.67× for MobileNetV2, and 2.31× for InceptionV3. Moreover, the implementation achieves a data throughput of 32 bits per cycle at 600 MHz on a TSMC 28 nm process, with a hardware cost of 122 K gates. Full article
(This article belongs to the Special Issue New Insights into Memory/Storage Circuit, Architecture, and System)
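The baseline JPEG-LS (LOCO-I) coder referenced here predicts each sample from its causal neighbors with the median edge detector (MED) and entropy-codes the residual. The minimal Python sketch below applies that standard predictor to a quantized activation map; the article's global-prediction, serial context updating, and parallel-encoding extensions are not reproduced, and the function name is illustrative.

```python
import numpy as np

def med_predict(x: np.ndarray) -> np.ndarray:
    """JPEG-LS median edge detector (MED) prediction.

    Neighbors per sample: a = left, b = above, c = above-left
    (taken as 0 outside the array in this sketch).
    """
    xi = x.astype(np.int32)
    pred = np.zeros_like(xi)
    h, w = xi.shape
    for i in range(h):
        for j in range(w):
            a = xi[i, j - 1] if j > 0 else 0
            b = xi[i - 1, j] if i > 0 else 0
            c = xi[i - 1, j - 1] if i > 0 and j > 0 else 0
            if c >= max(a, b):
                pred[i, j] = min(a, b)
            elif c <= min(a, b):
                pred[i, j] = max(a, b)
            else:
                pred[i, j] = a + b - c
    return pred

# Residuals of an 8-bit quantized activation map; small residuals are what
# the entropy coder (and hence the reported compression ratios) rely on.
activation = np.random.randint(0, 256, size=(8, 8))
residual = activation.astype(np.int32) - med_predict(activation)
```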

26 pages, 2842 KiB  
Article
Industrial IoT-Based Energy Monitoring System: Using Data Processing at Edge
by Akseer Ali Mirani, Anshul Awasthi, Niall O’Mahony and Joseph Walsh
IoT 2024, 5(4), 608-633; https://doi.org/10.3390/iot5040027 - 28 Sep 2024
Viewed by 526
Abstract
Edge-assisted IoT technologies combined with conventional industrial processes help evolve diverse applications in the Industrial IoT (IIoT) and Industry 4.0 era by bringing cloud computing technologies near the hardware. The resulting innovations offer intelligent management of industrial ecosystems, focusing on increasing productivity and reducing running costs by processing massive data locally. In this research, we design, develop, and implement an IIoT and edge-based system to monitor the energy consumption of a factory floor’s stationary and mobile assets using wireless and wired energy meters. Once the edge receives a meter’s data, it stores the information in the database server and then processes it to derive nine additional analytical parameters. The edge also provides a master user interface (UI) for comparative analysis and individual UIs for in-depth energy usage insights, along with activity and inactivity alarms and daily reporting features via email. Moreover, the edge uses a data-filtering technique to send a single wireless meter’s data to the cloud for remote energy and alarm monitoring, as per the project scope. Based on the evaluation, the edge server efficiently processes the data with an average CPU utilization of up to 5.58% while avoiding measurement errors due to random power failures throughout the day. Full article
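As a concrete illustration of the in-edge processing described above, the sketch below derives a few common analytical quantities from one raw meter reading. The article's nine analytical parameters are not enumerated in the abstract, so the quantities, field names, and units here are stand-ins under that assumption.

```python
from dataclasses import dataclass

@dataclass
class MeterReading:
    voltage_v: float        # RMS voltage (V)
    current_a: float        # RMS current (A)
    active_power_w: float   # active power (W)
    interval_s: float       # time covered by this reading (s)

def derived_parameters(r: MeterReading) -> dict:
    """Examples of analytical parameters an edge server could derive
    from a raw reading before storing or forwarding it."""
    apparent_va = r.voltage_v * r.current_a
    power_factor = r.active_power_w / apparent_va if apparent_va else 0.0
    energy_kwh = r.active_power_w * r.interval_s / 3.6e6  # W*s -> kWh
    return {
        "apparent_power_va": apparent_va,
        "power_factor": power_factor,
        "energy_kwh": energy_kwh,
    }

print(derived_parameters(MeterReading(230.0, 4.2, 900.0, 60.0)))
```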

19 pages, 535 KiB  
Article
Optimizing Convolutional Neural Network Architectures
by Luis Balderas, Miguel Lastra and José M. Benítez
Mathematics 2024, 12(19), 3032; https://doi.org/10.3390/math12193032 - 28 Sep 2024
Viewed by 468
Abstract
Convolutional neural networks (CNNs) are commonly employed for demanding applications, such as speech recognition, natural language processing, and computer vision. As CNN architectures become more complex, their computational demands grow, leading to substantial energy consumption and complicating their use on devices with limited resources (e.g., edge devices). Furthermore, a new line of research seeking more sustainable approaches to Artificial Intelligence development and research is increasingly drawing attention: Green AI. Motivated by an interest in optimizing Machine Learning models, in this paper, we propose Optimizing Convolutional Neural Network Architectures (OCNNA). It is a novel CNN optimization and construction method based on pruning designed to establish the importance of convolutional layers. The proposal was evaluated through a thorough empirical study including the best known datasets (CIFAR-10, CIFAR-100, and Imagenet) and CNN architectures (VGG-16, ResNet-50, DenseNet-40, and MobileNet), setting accuracy drop and the remaining parameters ratio as objective metrics to compare the performance of OCNNA with the other state-of-the-art approaches. Our method was compared with more than 20 convolutional neural network simplification algorithms, obtaining outstanding results. As a result, OCNNA is a competitive CNN construction method which could ease the deployment of neural networks on the IoT or resource-limited devices. Full article
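OCNNA constructs smaller networks by pruning guided by the significance of convolutional units. As a hedged illustration of that general idea, the sketch below ranks the filters of a single convolutional layer by their L1 norm, a common importance proxy, and keeps only the top fraction; the abstract does not detail OCNNA's actual importance measure, so this is not the paper's method.

```python
import numpy as np

def prune_filters_by_l1(conv_weights: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Keep the most 'important' filters of one conv layer.

    conv_weights: array of shape (out_channels, in_channels, kh, kw).
    Importance is the L1 norm of each filter (an assumption, used here
    only as a stand-in for OCNNA's significance measure).
    """
    importance = np.abs(conv_weights).sum(axis=(1, 2, 3))
    n_keep = max(1, int(round(len(importance) * keep_ratio)))
    keep_idx = np.sort(np.argsort(importance)[::-1][:n_keep])
    return conv_weights[keep_idx]

layer = np.random.randn(64, 3, 3, 3)
pruned = prune_filters_by_l1(layer, keep_ratio=0.5)  # 32 filters remain
```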

22 pages, 1098 KiB  
Article
Enhanced Link Prediction and Traffic Load Balancing in Unmanned Aerial Vehicle-Based Cloud-Edge-Local Networks
by Hao Long, Feng Hu and Lingjun Kong
Drones 2024, 8(10), 528; https://doi.org/10.3390/drones8100528 - 27 Sep 2024
Viewed by 506
Abstract
With the advancement of cloud-edge-local computing, Unmanned Aerial Vehicles (UAVs), as flexible mobile nodes, offer novel solutions for dynamic network deployment. However, existing research on UAV networks faces substantial challenges in accurately predicting link dynamics and efficiently managing traffic loads, particularly in highly distributed and rapidly changing environments. These limitations result in inefficient resource allocation and suboptimal network performance. To address these challenges, this paper proposes a UAV-based cloud-edge-local network resource elastic scheduling architecture, which integrates the Graph-Autoencoder–GAN-LSTM (GA–GLU) algorithm for accurate link prediction and the FlowBender-Enhanced Reinforcement Learning for Load Balancing (FERL-LB) algorithm for dynamic traffic load balancing. GA–GLU accurately predicts dynamic changes in UAV network topologies, enabling adaptive and efficient scheduling of network resources. FERL-LB leverages these predictions to optimize traffic load balancing within the architecture, enhancing both performance and resource utilization. To validate the effectiveness of GA–GLU, comparisons are made with classical methods such as CN and Katz, as well as modern approaches like Node2vec and GAE–LSTM, which are commonly used for link prediction. Experimental results demonstrate that GA–GLU consistently outperforms these competitors in metrics such as AUC, MAP, and error rate. The integration of GA–GLU and FERL-LB within the proposed architecture significantly improves network performance in highly dynamic environments. Full article
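For context on the classical baselines named above, the sketch below implements Common Neighbors (CN) and the Katz index over a static adjacency matrix; the GA–GLU model itself (graph autoencoder, GAN, and LSTM components) is not reproduced here.

```python
import numpy as np

def common_neighbors(adj: np.ndarray) -> np.ndarray:
    """CN score for every node pair: the number of shared neighbors.
    For a binary adjacency matrix with zero diagonal this is simply A @ A."""
    return adj @ adj

def katz_index(adj: np.ndarray, beta: float = 0.05) -> np.ndarray:
    """Katz similarity S = sum_{l>=1} beta^l A^l = (I - beta*A)^-1 - I.
    Requires beta < 1 / lambda_max(A) for the series to converge."""
    n = adj.shape[0]
    return np.linalg.inv(np.eye(n) - beta * adj) - np.eye(n)

# Toy 4-node UAV topology; higher scores indicate more likely (future) links.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
cn_scores, katz_scores = common_neighbors(A), katz_index(A)
```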

26 pages, 3533 KiB  
Systematic Review
Energy-Efficient Industrial Internet of Things in Green 6G Networks
by Xavier Fernando and George Lăzăroiu
Appl. Sci. 2024, 14(18), 8558; https://doi.org/10.3390/app14188558 - 23 Sep 2024
Viewed by 1482
Abstract
The research problem of this systematic review was whether green 6G networks can integrate energy-efficient Industrial Internet of Things (IIoT) in terms of distributed artificial intelligence, green 6G pervasive edge computing communication networks, and big-data-based intelligent decision algorithms. We show that sensor data fusion can be carried out in energy-efficient IoT smart industrial urban environments by cooperative perception and inference tasks. Our analyses address 6G wireless communication, vehicular IoT intelligent and autonomous networks, and energy-efficient algorithms and green computing technologies in smart industrial equipment and manufacturing environments. Mobile edge and cloud computing task processing capabilities of decentralized network control and power grid system monitoring were thereby analyzed. Our results and contributions clarify that sustainable energy efficiency and green power generation, together with IoT decision support and smart environmental systems, operate efficiently in distributed artificial intelligence 6G pervasive edge computing communication networks. PRISMA was used, and the search outcomes and screening procedures were integrated with its web-based Shiny app flow design. A quantitative literature review was performed in July 2024 on original and review research published between 2019 and 2024. Study screening, evidence map visualization, and data extraction and reporting tools, machine learning classifiers, and reference management software were harnessed for qualitative and quantitative data collection, management, and analysis in research synthesis. Dimensions and VOSviewer were deployed for data visualization and analysis. Full article

19 pages, 575 KiB  
Article
Jointly Optimization of Delay and Energy Consumption for Multi-Device FDMA in WPT-MEC System
by Danxia Qiao, Lu Sun, Dianju Li, Huajie Xiong, Rina Liang, Zhenyuan Han and Liangtian Wan
Sensors 2024, 24(18), 6123; https://doi.org/10.3390/s24186123 - 22 Sep 2024
Viewed by 549
Abstract
With the rapid development of mobile edge computing (MEC) and wireless power transfer (WPT) technologies, the MEC-WPT system makes it possible to provide high-quality data processing services for end users. However, in a real-world WPT-MEC system, the channel gain decreases with the transmission distance, leading to the “double near–far effect” in the joint transmission of wireless energy and data, which affects the quality of the data processing service for end users. Consequently, it is essential to design a reasonable system model to overcome the “double near–far effect” and to schedule multi-dimensional resources such as energy, communication, and computing reasonably to guarantee high-quality data processing services. First, this paper designs a relay-collaboration WPT-MEC resource scheduling model to improve wireless energy utilization efficiency. The optimization goal is to minimize the normalization of the total communication delay and total energy consumption while meeting multiple resource constraints. Second, this paper employs a BK-means algorithm to cluster the end terminals to guarantee effective energy reception and adapts the whale optimization algorithm with an adaptive mechanism (AWOA) for mobile vehicle path planning to reduce energy waste. Third, this paper proposes an immune differential enhanced deep deterministic policy gradient (IDDPG) algorithm to realize efficient scheduling of multiple resources and minimize the optimization goal. Finally, simulation experiments are carried out on different data, and the simulation results demonstrate the validity of the designed scheduling model and the proposed IDDPG. Full article
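The optimization goal stated above is the normalized combination of total communication delay and total energy consumption under multiple resource constraints. The abstract does not give the exact weights or normalizers, so the scalarized form below is only one common way to write such an objective, with assumed symbols:

$$\min_{\mathbf{x}}\;\; \omega\,\frac{T_{\text{total}}(\mathbf{x})}{T_{\max}} \;+\; (1-\omega)\,\frac{E_{\text{total}}(\mathbf{x})}{E_{\max}},\qquad 0\le\omega\le 1,$$

subject to the energy, communication, and computing constraints, where $\mathbf{x}$ collects the scheduling decisions (clustering, vehicle path, offloading, and resource assignments) and $T_{\max}$, $E_{\max}$ are normalization references.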

16 pages, 4295 KiB  
Article
Cloud-Edge Collaborative Optimization Based on Distributed UAV Network
by Jian Yang, Jinyu Tao, Cheng Wang and Qinghai Yang
Electronics 2024, 13(18), 3763; https://doi.org/10.3390/electronics13183763 - 22 Sep 2024
Viewed by 392
Abstract
With the continuous development of mobile communication technology, edge intelligence has received widespread attention from academia. However, when enabling edge intelligence in Unmanned Aerial Vehicle (UAV) networks where drones serve as edge devices, the problem of insufficient computing power often arises due to limited storage and computing resources. In order to solve the problem of insufficient UAV computing power, this paper proposes a distributed cloud-edge collaborative optimization algorithm (DCECOA). The core idea of the DCECOA is to make full use of the local data of edge devices (i.e., UAVs) to optimize the neural network model more efficiently and achieve model volume compression. Compared with the traditional Taylor evaluation criterion, this algorithm consumes fewer resources on the communication uplink. The neural network model compressed by the proposed optimization algorithm can achieve higher performance under the same compression rate. Full article
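For reference, the traditional Taylor evaluation criterion mentioned above scores a channel by the first-order change in loss its removal would cause, commonly approximated as the mean of |activation × gradient|. The sketch below computes that classical baseline score; it illustrates the criterion DCECOA is compared against, not DCECOA itself.

```python
import numpy as np

def taylor_channel_importance(activation: np.ndarray, grad: np.ndarray) -> np.ndarray:
    """First-order Taylor pruning criterion (the classical baseline).

    activation, grad: arrays of shape (batch, channels, h, w) captured from a
    forward and backward pass of the layer being evaluated.
    Returns one importance score per channel: mean |activation * gradient|.
    """
    return np.abs(activation * grad).mean(axis=(0, 2, 3))

act = np.random.randn(4, 16, 8, 8)
g = np.random.randn(4, 16, 8, 8)
scores = taylor_channel_importance(act, g)  # shape (16,), one score per channel
```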

26 pages, 7193 KiB  
Article
Multi-UAV Assisted Air–Ground Collaborative MEC System: DRL-Based Joint Task Offloading and Resource Allocation and 3D UAV Trajectory Optimization
by Mingjun Wang, Ruishan Li, Feng Jing and Mei Gao
Drones 2024, 8(9), 510; https://doi.org/10.3390/drones8090510 - 21 Sep 2024
Viewed by 502
Abstract
In disaster-stricken areas that were severely damaged by earthquakes, typhoons, floods, mudslides, and the like, employing unmanned aerial vehicles (UAVs) as airborne base stations for mobile edge computing (MEC) constitutes an effective solution. Concerning this, we investigate a 3D air–ground collaborative MEC scenario facilitated by multi-UAV for multiple ground devices (GDs). Specifically, we first design a 3D multi-UAV-assisted air–ground cooperative MEC system, and construct system communication, computation, and UAV flight energy consumption models. Subsequently, a cooperative resource optimization (CRO) problem is proposed by jointly optimizing task offloading, UAV flight trajectories, and edge computing resource allocation to minimize the total energy consumption of the system. Further, the CRO problem is decoupled into two sub-problems. Among them, the MATD3 deep reinforcement learning algorithm is utilized to jointly optimize the offloading decisions of GDs and the flight trajectories of UAVs; subsequently, the optimal resource allocation scheme at the edge is demonstrated through the derivation of KKT conditions. Finally, the simulation results show that the algorithm has good convergence compared with other algorithms and can effectively reduce the system energy consumption. Full article
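The closed-form edge resource allocation obtained from the KKT conditions is not spelled out in the abstract. Under an assumed standard model, minimizing the sum of computation delays $\sum_i c_i/f_i$ (with $c_i$ the CPU cycles of ground device $i$'s offloaded task and $f_i$ the edge CPU frequency it is allocated) subject to $\sum_i f_i \le F$, the KKT stationarity and budget conditions give the familiar square-root allocation:

$$-\frac{c_i}{f_i^{2}}+\lambda=0 \;\Rightarrow\; f_i=\sqrt{\frac{c_i}{\lambda}}, \qquad \sum_i f_i=F \;\Rightarrow\; f_i^{*}=F\,\frac{\sqrt{c_i}}{\sum_j \sqrt{c_j}}.$$

The paper's actual objective (total system energy) may lead to a different closed form; this is only a standard illustration of the technique.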

15 pages, 474 KiB  
Article
Federated Learning in Dynamic and Heterogeneous Environments: Advantages, Performances, and Privacy Problems
by Fabio Liberti, Davide Berardi and Barbara Martini
Appl. Sci. 2024, 14(18), 8490; https://doi.org/10.3390/app14188490 - 20 Sep 2024
Viewed by 1280
Abstract
Federated Learning (FL) represents a promising distributed learning methodology particularly suitable for dynamic and heterogeneous environments characterized by the presence of Internet of Things (IoT) devices and Edge Computing infrastructures. In this context, FL allows machine learning models to be trained directly on edge devices, mitigating data privacy concerns and reducing the latency caused by transmitting data to central servers. However, the heterogeneity of computational resources, the variability of network connections, and the mobility of IoT devices pose significant challenges to the efficient implementation of FL. This work explores advanced techniques for dynamic model adaptation and heterogeneous data management in edge computing scenarios, proposing innovative solutions to improve the robustness and efficiency of federated learning. We present an innovative solution based on Kubernetes which enables the fast deployment of FL models to heterogeneous architectures. Experimental results demonstrate that our proposals can improve the performance of FL in IoT and edge environments, offering new perspectives for the practical implementation of decentralized intelligent systems. Full article
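As a reference point for the FL training loop described above, the sketch below shows the canonical FedAvg aggregation step, in which each client's parameters are weighted by its local sample count. The abstract does not specify the paper's aggregation rule or its Kubernetes orchestration, so this is only the standard baseline.

```python
import numpy as np

def fedavg(client_weights: list[list[np.ndarray]], client_samples: list[int]) -> list[np.ndarray]:
    """Canonical FedAvg aggregation: per-tensor average of client models,
    weighted by the number of samples each client trained on."""
    total = float(sum(client_samples))
    coeffs = [n / total for n in client_samples]
    return [
        sum(c * layers[k] for c, layers in zip(coeffs, client_weights))
        for k in range(len(client_weights[0]))
    ]

# Two edge clients, each holding a tiny two-tensor model.
w1 = [np.ones((2, 2)), np.zeros(3)]
w2 = [np.zeros((2, 2)), np.ones(3)]
global_model = fedavg([w1, w2], client_samples=[100, 300])
```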

13 pages, 9262 KiB  
Article
Decentralized Mechanism for Edge Node Allocation in Access Network: An Experimental Evaluation
by Jesus Calle-Cancho, Carlos Cañada, Rafael Pastor-Vargas, Mercedes E. Paoletti and Juan M. Haut
Future Internet 2024, 16(9), 342; https://doi.org/10.3390/fi16090342 - 20 Sep 2024
Viewed by 298
Abstract
With the rapid advancement of the Internet of Things and the emergence of 6G networks in smart city environments, a growth in the generation of data, commonly known as big data, is expected to consequently lead to higher latency. To mitigate this latency, mobile edge computing has been proposed to alleviate a portion of the workload from mobile devices by offloading it to nearby edge servers equipped with appropriate computational resources. However, existing solutions often exhibit poor performance when confronted with complex network topologies. Thus, this paper introduces a decentralized mechanism aimed at determining the locations of network edge nodes in such complex network topologies, characterized by lengthy execution times. Our proposal provides performance improvements and offers scalability and flexibility as networks become more complex. Experimental evaluations are conducted using the Shanghai Telecom dataset to validate our proposed approach. Full article
(This article belongs to the Special Issue Distributed Storage of Large Knowledge Graphs with Mobility Data)

18 pages, 3659 KiB  
Article
Enabling Pandemic-Resilient Healthcare: Edge-Computing-Assisted Real-Time Elderly Caring Monitoring System
by Muhammad Zubair Islam, A. S. M. Sharifuzzaman Sagar and Hyung Seok Kim
Appl. Sci. 2024, 14(18), 8486; https://doi.org/10.3390/app14188486 - 20 Sep 2024
Viewed by 550
Abstract
Over the past few years, life expectancy has increased significantly. However, elderly individuals living independently often require assistance due to mobility issues, symptoms of dementia, or other health-related challenges. In these situations, high-quality elderly care systems for the aging population require innovative approaches to guarantee Quality of Service (QoS) and Quality of Experience (QoE). Traditional remote elderly care methods face several challenges, including high latency and poor service quality, which affect their transparency and stability. This paper proposes an Edge Computational Intelligence (ECI)-based haptic-driven ECI-TeleCaring system for the remote caring and monitoring of elderly people. It utilizes a Software-Defined Network (SDN) and Mobile Edge Computing (MEC) to reduce latency and enhance responsiveness. Dual Long Short-Term Memory (LSTM) models are deployed at the edge to enable real-time location-aware activity prediction to ensure QoS and QoE. The simulation results demonstrate that the proposed system manages real-time data transmission, without and with the activity recognition and location-aware model, with communication latency under 2.5 ms (more than 60%) and of 11∼12 ms (60∼95%), respectively, for 10 to 1000 data packets. The results also show that the proposed system ensures a trade-off between the transparency and stability of the system from the QoS and QoE perspectives. Moreover, the proposed system serves as a testbed for implementing, investigating, and managing elderly telecaring services for QoS/QoE provisioning. It facilitates real-time monitoring of the deployed technological parameters along with network delay and packet loss, and it oversees data exchange between the master domain (human operator) and the slave domain (telerobot). Full article
(This article belongs to the Special Issue Advances in Intelligent Communication System)

20 pages, 1850 KiB  
Article
Generative AI-Enabled Energy-Efficient Mobile Augmented Reality in Multi-Access Edge Computing
by Minsu Na and Joohyung Lee
Appl. Sci. 2024, 14(18), 8419; https://doi.org/10.3390/app14188419 - 19 Sep 2024
Viewed by 549
Abstract
This paper proposes a novel offloading and super-resolution (SR) control scheme for energy-efficient mobile augmented reality (MAR) in multi-access edge computing (MEC), using SR as a promising generative artificial intelligence (GAI) technology. Specifically, SR can enhance low-resolution images into high-resolution versions using GAI technologies. This capability is particularly advantageous in MAR because it lowers the bitrate required for network transmission. However, the SR process requires considerable computational resources and can introduce latency, potentially overloading the MEC server if there are numerous offload requests for MAR services. In this context, we conduct an empirical study to verify that the computational latency of SR increases with the upscaling level. Therefore, we demonstrate a trade-off between computational latency and improved service satisfaction when upscaling images for object detection, as upscaling enhances the detection accuracy. From this perspective, determining whether to apply SR for MAR, while jointly controlling offloading decisions, is challenging. Consequently, to design energy-efficient MAR, we rigorously formulate analytical models for the energy consumption of a MAR device, the overall latency, and the MAR service-quality satisfaction resulting from the enforced service accuracy, taking into account the SR process at the MEC server. Building on these models, we develop a theoretical framework that optimizes the computation offloading and SR control problem for MAR clients by jointly optimizing the offloading and SR decisions, considering their trade-off in MAR with MEC. Finally, the performance evaluation indicates that our proposed framework effectively supports MAR services by efficiently managing offloading and SR decisions, balancing the trade-offs between energy consumption, latency, and service satisfaction compared to benchmarks. Full article
(This article belongs to the Special Issue Object Detection Technology)
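The joint offloading and SR control problem described above is formulated analytically in the paper; the abstract does not reproduce the models, so the compact form below is only an assumed illustration of the stated trade-off, with hypothetical symbols:

$$\min_{o\in\{0,1\},\;s\in\mathcal{S}}\; E_{\text{dev}}(o,s)\quad \text{s.t.}\quad T_{\text{tx}}(o,s)+T_{\text{SR}}(s)+T_{\text{det}}\le T_{\max},\qquad A(s)\ge A_{\min},$$

where $o$ is the offloading decision, $s$ the SR upscaling level, $T_{\text{SR}}(s)$ the SR latency that grows with $s$, and $A(s)$ the detection accuracy that improves with upscaling, so lowering transmission energy and bitrate trades against added computation latency at the MEC server.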

29 pages, 6358 KiB  
Article
A Distributed Deadlock-Free Task Offloading Algorithm for Integrated Communication–Sensing–Computing Satellites with Data-Dependent Constraints
by Ruipeng Zhang, Yikang Yang and Hengnian Li
Remote Sens. 2024, 16(18), 3459; https://doi.org/10.3390/rs16183459 - 18 Sep 2024
Viewed by 457
Abstract
Integrated communication–sensing–computing (ICSC) satellites, which integrate edge computing servers on Earth observation satellites to process collected data directly in orbit, are attracting growing attention. Nevertheless, some monitoring tasks involve sequential sub-tasks like target observation and movement prediction, leading to data dependencies. Moreover, the limited energy supply on satellites requires the sequential execution of sub-tasks. Therefore, inappropriate assignments can cause circular waiting among satellites, resulting in deadlocks. This paper formulates task offloading in ICSC satellites with data-dependent constraints as a mixed-integer linear programming (MILP) problem, aiming to minimize service latency and energy consumption simultaneously. Given the non-centrality of ICSC satellites, we propose a distributed deadlock-free task offloading (DDFTO) algorithm. DDFTO operates in parallel on each satellite, alternating between sub-task inclusion and consensus and sub-task removal until a common offloading assignment is reached. To avoid deadlocks arising from sub-task inclusion, we introduce the deadlock-free insertion mechanism (DFIM), which strategically restricts the insertion positions of sub-tasks based on interval relationships, ensuring deadlock-free assignments. Extensive experiments demonstrate the effectiveness of DFIM in avoiding deadlocks and show that the DDFTO algorithm outperforms benchmark algorithms in achieving deadlock-free offloading assignments. Full article

20 pages, 6757 KiB  
Article
A Task Offloading and Resource Allocation Strategy Based on Multi-Agent Reinforcement Learning in Mobile Edge Computing
by Guiwen Jiang, Rongxi Huang, Zhiming Bao and Gaocai Wang
Future Internet 2024, 16(9), 333; https://doi.org/10.3390/fi16090333 - 11 Sep 2024
Viewed by 573
Abstract
Task offloading and resource allocation is a research hotspot in cloud-edge collaborative computing. Much existing research adopts single-agent reinforcement learning to solve this problem, which has some defects such as low robustness, a large decision space, and ignoring delayed rewards. In view of the above deficiencies, this paper constructs a cloud-edge collaborative computing model with related task queue, delay, and energy consumption models, and formulates a joint optimization problem for task offloading and resource allocation with multiple constraints. Then, in order to solve the joint optimization problem, this paper designs a decentralized offloading and scheduling scheme based on “task-oriented” multi-agent reinforcement learning. In this scheme, we present information synchronization protocols and offloading scheduling rules and use edge servers as agents to construct a multi-agent system based on the Actor–Critic framework. To handle delayed rewards, this paper models the offloading and scheduling problem as a “task-oriented” Markov decision process. This process abandons the commonly used equidistant time slot model and instead uses dynamic, parallel slots stepped by task processing time. Finally, an offloading decision algorithm, TOMAC-PPO, is proposed. The algorithm applies proximal policy optimization to the multi-agent system and combines the Transformer neural network model to realize the memory and prediction of network state information. Experimental results show that this algorithm has better convergence speed and can effectively reduce the service cost, energy consumption, and task drop rate under high load and high failure rates. For example, the proposed TOMAC-PPO can reduce the average cost by 19.4% to 66.6% compared to other offloading schemes under the same network load. In addition, with 50 users, the drop rate of some baseline algorithms reaches 62.5% for critical tasks, while that of the proposed TOMAC-PPO is only 5.5%. Full article
(This article belongs to the Special Issue Convergence of Edge Computing and Next Generation Networking)
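TOMAC-PPO applies proximal policy optimization within the multi-agent Actor–Critic system described above. For reference, the standard PPO clipped surrogate objective that each agent maximizes is shown below; the task-oriented MDP construction and the Transformer-based state prediction are specific to the paper and are not captured by this formula.

$$L^{\text{CLIP}}(\theta)=\mathbb{E}_t\!\left[\min\!\Big(r_t(\theta)\,\hat{A}_t,\;\operatorname{clip}\big(r_t(\theta),\,1-\epsilon,\,1+\epsilon\big)\,\hat{A}_t\Big)\right],\qquad r_t(\theta)=\frac{\pi_\theta(a_t\mid s_t)}{\pi_{\theta_{\text{old}}}(a_t\mid s_t)},$$

where $\hat{A}_t$ is the estimated advantage and $\epsilon$ the clipping range.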

23 pages, 2029 KiB  
Article
Task Offloading and Resource Allocation for Augmented Reality Applications in UAV-Based Networks Using a Dual Network Architecture
by Dat Van Anh Duong, Shathee Akter and Seokhoon Yoon
Electronics 2024, 13(18), 3590; https://doi.org/10.3390/electronics13183590 - 10 Sep 2024
Viewed by 366
Abstract
This paper proposes a novel UAV-based edge computing system for augmented reality (AR) applications, addressing the challenges posed by the limited resources in mobile devices. The system uses UAVs equipped with edge computing servers (UECs) specifically to enable efficient task offloading and resource allocation for AR tasks with dependent relationships. This work specifically focuses on the problem of dependent tasks in AR applications within UAV-based networks. This problem has not been thoroughly addressed in previous research. A dual network architecture-based task offloading (DNA-TO) algorithm is proposed, leveraging the DNA framework to enhance decision-making in reinforcement learning while mitigating noise. In addition, a Karush–Kuhn–Tucker-based resource allocation (KKT-RA) algorithm is proposed to optimize resource allocation. Various simulations using real-world movement data are conducted. The results indicate that our proposed algorithm outperforms existing approaches in terms of latency and energy efficiency. Full article
