Last updated: 2024-10-10 03:01 UTC
Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
---|---|---|---|---|---|---|
Ali Chouman, Dimitrios Michael Manias, Abdallah Shami | A Modular, End-to-End Next-Generation Network Testbed: Towards a Fully Automated Network Management Platform | 2024 | Early Access | 5G mobile communication Network systems Testing Cloud computing Next generation networking Emulation Software Next-Generation Networks End-to-End Networks 5G and Beyond Network Automation Network Management Platform | Experimentation in practical, end-to-end (E2E) next-generation network deployments is becoming increasingly prevalent and significant in modern networking and wireless communications research. Prevalent fifth-generation (5G) testbeds and emerging network systems developed for research and testing focus on the capabilities and features of analytics, intelligence, and automated management, using novel testbed designs and architectures that range from simple simulations and setups to complex networking systems. Given the ever-demanding application requirements of modern and future networks, however, 5G-and-beyond (denoted as 5G+) testbed experimentation is useful for assessing the creation of large-scale network infrastructures capable of supporting E2E virtualized mobile network services. To this end, this paper presents a functional, modular E2E 5G+ system, complete with the integration of a Radio Access Network (RAN) and the handling of User Equipment (UE) connections in real-world scenarios. This paper also assesses and evaluates the effectiveness of emulating full network functionalities and capabilities, including a complete description of user-plane data from UE registration to communication sequences, and concludes with a future outlook on powering new experimentation for 6G and next-generation networks. | 10.1109/TNSM.2024.3416031 |
Ibrahim Mohammed Sayem, Moinul Islam Sayed, Sajal Saha, Anwar Haque | ENIDS: A Deep Learning-Based Ensemble Framework for Network Intrusion Detection Systems | 2024 | Early Access | Computer crime Support vector machines Telecommunication traffic Network intrusion detection Training Security Data models Cyber Security Data Resampling Deep Learning Ensemble Learning Intrusion Detection System | Rapid and widespread adoption of emerging Information Technology (IT) infrastructures and services in commercial and private endeavors opens new horizons for novel cyberattacks. Network Intrusion Detection Systems (NIDS) have gained attention as an effective means of combating various cyber threats. Recent research demonstrates the potency of machine learning (ML) and deep learning (DL) approaches in the development of NIDS. In this paper, we propose a DL-based framework, the Ensemble Framework for Network Intrusion Detection System (ENIDS), that detects various types of cyberattacks and comprises dynamic data pre-processing, optimal feature selection, the handling of imbalanced data samples, and a DL-based ensemble model. The ensemble model has two layers: the base learner and the meta-learner. The base learner is composed of three robust DL models: convolutional neural networks (CNN), long short-term memory (LSTM), and gated recurrent units (GRU); the meta-learner is a deep neural network (DNN) model. The proposed framework was evaluated on two popular, publicly available network traffic datasets, UNSW-15 and CICIDS-2017, where it detects cyberattacks with accuracies of 90.6% and 99.6% and F1-scores of 90.5% and 99.6%, respectively. According to the experimental findings, the proposed ensemble framework outperforms existing state-of-the-art approaches and benchmark DL methods in terms of accuracy, F1-score, and training and testing execution time. | 10.1109/TNSM.2024.3414305 |
Yuyang Zhou, Guang Cheng, Zhi Ouyang, Zongyao Chen | Resource-Efficient Low-Rate DDoS Mitigation With Moving Target Defense in Edge Clouds | 2024 | Early Access | Security Telecommunication traffic Computer crime Quality of service Internet of Things Cloud computing Denial-of-service attack Low-rate Distributed Denial-of-Service Attacks Moving Target Defense Resource Efficient Deep Reinforcement Learning Edge Computing | Edge computing (EC) and container technology have been widely used to increase the flexibility of computing resources and meet the real-time requirements of delay-sensitive applications. However, edge clouds have been shown to suffer from distributed denial-of-service (DDoS) attacks, especially low-rate DDoS (LDDoS) attacks, which can be stealthily crafted to evade detection. Unfortunately, existing techniques cannot provide effective protection, and the added resource consumption and service delay incurred by defense greatly diminish the efficiency of the security system. To tackle these problems, this paper exploits Moving Target Defense (MTD) techniques and deep reinforcement learning (DRL) to mitigate the impact of LDDoS attacks in a resource-efficient way, invalidating, avoiding, and tolerating malicious traffic to improve the security and quality of web services at lower overhead. We first design several lightweight MTD mechanisms by utilizing the built-in functionalities of container-based applications. To further optimize resource utilization, we formulate the interaction between attacks and MTD deployment as a Markov decision process (MDP) and adopt a deep Q-network (DQN) algorithm to achieve the best trade-off between effectiveness and overhead. The simulations prove the effectiveness of the proposed approach in LDDoS mitigation, with a significant improvement of up to 31.7% in security and 26.95% in service quality compared with other practical strategies; the experimental results also show that, in the high-workload scenario, our method exhibits the lowest response time per request (276.66 ms) and the lowest webpage load time (1.413 s) with only 2.44% additional memory usage compared with previous works. | 10.1109/TNSM.2024.3413685 |
Mahdi Rabbani, Leila Rashidi, Ali A. Ghorbani | A Graph Learning-Based Approach for Lateral Movement Detection | 2024 | Early Access | Feature extraction Vectors Authentication Machine learning Predictive models Image edge detection Accuracy Graph Learning Machine Learning Lateral Movement Detection Advanced Persistent Threat | Lateral movement, a crucial phase in the Advanced Persistent Threat (APT) life cycle, refers to a strategy employed by adversaries to traverse horizontally within a network. The aim is to gain access to various systems or resources, thereby expanding their control and potential access to valuable targets. Detecting these attacks becomes challenging for conventional detection systems due to various factors, including the complexity of pathways, the mimicking of legitimate user behavior by attackers, and limited network visibility. To address these challenges, advanced detection techniques are required to effectively and dynamically analyze multiple features within the interconnected structure of the network. This paper introduces an innovative approach to detect malicious lateral movement paths by leveraging authentication events and graph learning techniques. The proposed method involves constructing a heterogeneous graph and employing DeepWalk for node embedding. By combining node embedding features with the temporal information of authentication events, feature vectors are generated for each authentication request. These features are then used to train multiple machine learning-based classifiers to detect malicious lateral movement paths. Furthermore, to assess the model’s performance in a more realistic scenario, a series of additional experiments was conducted. These experiments provided further validation of the model’s robustness and its capability for forward prediction. | 10.1109/TNSM.2024.3414267 |
Nicola Di Cicco, Memedhe Ibrahimi, Francesco Musumeci, Federica Bruschetta, Michele Milano, Claudio Passera, Massimo Tornatore | Machine Learning for Failure Management in Microwave Networks: A Data-Centric Approach | 2024 | Early Access | Training Hardware Failure analysis Data models Synthetic data Labeling Costs Microwave Networks Machine Learning Failure Management | We consider the problem of classifying hardware failures in microwave networks given a collection of alarms using Machine Learning (ML). While ML models have been shown to work extremely well on similar tasks, an ML model is, at most, as good as its training data. In microwave networks, building a good-quality dataset is significantly harder than training a good classifier: annotating data is a costly and time-consuming procedure. We, therefore, shift the perspective from a Model-Centric approach, i.e., how to train the best ML model from a given dataset, to a Data-Centric approach, i.e., how to make the best use of the data at our disposal. To this end, we explore two orthogonal Data-Centric approaches for hardware failure identification in microwave networks. At training time, we leverage synthetic data generation with Conditional Variational Autoencoders to cope with extreme data imbalance and ensure fair performance in all failure classes. At inference time, we leverage Batch Uncertainty-based Active Learning to guide the data annotation procedure of multiple concurrent domain-expert labelers and achieve the best possible classification performance with the smallest possible training dataset. Illustrative experimental results on a real-world dataset show that our Data-Centric approaches allow for training top-performing models with ~4.5x less annotated data, while improving the classifier’s F1-Score by ~2.5% in a condition of extreme data scarcity. Finally, for the first time to the best of our knowledge, we make our dataset (curated by microwave industry experts) publicly available, aiming to foster research in data-driven failure management. | 10.1109/TNSM.2024.3406934 |
Yusuf İslam Demir, Ahmet Yazar, Hüseyin Arslan | Waveform Management Approach With Machine Learning for 6G Systems | 2024 | Early Access | 5G mobile communication Wireless communication 6G mobile communication Measurement Peak to average power ratio Interference Distortion 6G Internet of Things machine learning OFDM waveform smart city | 5th Generation (5G) systems are designed with a more flexible structure than previous generations to support an increasing variety of applications and services. Thus, new flexibility dimensions are observed in 5G technologies, and their emergence has triggered a need for advanced management paradigms for 5G and beyond. Application richness, flexibility dimensions, and the related management paradigms are expected to increase further with 6th Generation (6G) systems. Different flexibilities related to waveform design may be introduced in 6G, whereas a uniform method is used in 5G and previous generations. One such flexibility is the ability to select from a waveform set, a new capability for meeting different application and user requirements. In this paper, machine learning (ML)-based waveform selection approaches with single-stage and multi-stage networks are proposed for waveform management within the same coverage area, under the assumption that multiple waveforms can be used in 6G. Hence, the problem of deciding on the best waveform for a coverage area, considering different requirements and environmental conditions, is studied. To provide environmental awareness, a new synthetic dataset is formed with an example simulation setup. Moreover, a feature control algorithm is proposed to limit side effects of the waveform selection approaches. | 10.1109/TNSM.2024.3407017 |
Zhiyuan Rao, Lin You, Gengran Hu, Fei Zhu | EBDTS: An Efficient BCoT-Based Data Trading System Using PUF for Authentication | 2024 | Early Access | Internet of Things Blockchains Authentication Security Smart contracts Data models Big Data Blockchain trading system smart contract physical unclonable functions (PUFs) | The volume of data generated by the Internet of Things (IoT) is expanding rapidly, driven primarily by personal devices. However, owing to severely limited memory resources and cumbersome security authentication processes, large amounts of this data must be discarded. Blockchain of Things (BCoT) provides a scalable solution to these challenges by integrating blockchain and the IoT. In this paper, we propose an efficient BCoT-based data trading system that uses physical unclonable functions (PUFs) for authentication. PUFs are utilized for iterative key and pseudo-identity generation to ensure the privacy and security of IoT devices. To protect the copyright of sellers’ data, the system sets up a data arbitration center to address issues related to the unauthorized resale of datasets. In addition, a fuzzy comprehensive evaluation model is proposed to regulate the behavior of data traders. Our security analysis demonstrates that the proposed system is not only secure under the ROR model but also resistant to internal attacks. The experimental results show the reliability and effectiveness of the proposed system. | 10.1109/TNSM.2024.3406524 |
Yize Yang, Peng Shi, Chee Peng Lim, Jonathon Chambers | Optimal Bipartite Tracking Control for Heterogeneous Systems under DoS Attacks | 2024 | Early Access | Target tracking Control systems Denial-of-service attack Optimal control Internet of Things Packet loss Observers Bipartite tracking denial-of-service attacks resilient control system optimization | The problem of resilient optimal bipartite tracking control for heterogeneous multi-agent systems with multiple targets under denial-of-service (DoS) attacks is investigated in this paper. A bipartite tracking mechanism is devised in which the agents track the targets under bipartite consensus control, which becomes non-autonomous owing to DoS attacks, since the agents cannot obtain real-time information on the tracked targets and neighboring agents. Consequently, a target observer with a storage module is developed for the efficient estimation of agent states and the storage of observed information as historical data. By recalling the historical data of the observed state, a new type of distributed resilient optimal controller is formulated, which achieves the control objective in the case of communication blockage while minimizing the performance index function of the system. Numerical simulations are performed to verify the proposed secure control design. | 10.1109/TNSM.2024.3405974 |
Tai Manh Ho, Kim-Khoa Nguyen, Mohamed Cheriet | Energy Efficiency Deep Reinforcement Learning for URLLC in 5G Mission-Critical Swarm Robotics | 2024 | Early Access | Robots Robot kinematics Collision avoidance Task analysis Service robots Swarm robotics Robot sensing systems 5G network swarm robotics cloud robotics industrial automation deep reinforcement learning | 5G networks provide high-rate, ultra-low-latency, and high-reliability connections in support of wireless mobile robots with increased agility for factory automation. In this paper, we address the problem of swarm robotics control for mission-critical robotic applications in an automated grid-based warehouse scenario. Our goal is to maximize long-term energy efficiency while meeting the energy consumption constraint of the robots and the ultra-reliable and low latency communication (URLLC) requirements between the central controller and the robot swarm. The problem of swarm robotics control in the URLLC regime is formulated as a nonconvex optimization problem, since the achievable rate and decoding error probability with short block-length are neither convex nor concave in bandwidth and transmit power. We propose a deep reinforcement learning (DRL) based approach that employs the deep deterministic policy gradient (DDPG) method and a convolutional neural network (CNN) to achieve a stationary optimal control policy consisting of a number of continuous and discrete actions. Numerical results show that our proposed multi-agent DDPG algorithm outperforms the baselines in terms of decoding error probability and energy efficiency. | 10.1109/TNSM.2024.3406350 |
Yan Jiao, Pin-Han Ho, Xiangzhu Lu, János Tapolcai, Limei Peng | A Novel Framework for Optical Layer Device Board Failure Localization in Optical Transport Network | 2024 | Early Access | Optical fiber sensors Correlation Network topology Optical network units Optical amplifiers Location awareness Stimulated emission optical transport network (OTN) failure localization alarm correlation integer linear programming (ILP) | This paper presents a novel framework called Failure-Alarm Correlation Tree based Failure Localization (FACT-FL), designed to localize failed optical layer device boards in an Optical Transport Network (OTN). Specifically, FACT-FL aims to construct a set of FACTs by correlating the failed boards and alarms, where each FACT takes one failed board and its correlated alarms as the root and leaves, respectively. Furthermore, a FACT consists of a suite of kth order Failure-Alarm Correlation Chains (k-FACCs) with different order values of k. Each k-FACC indicates the chain-like correlation established by k alarms due to one common failed board. To identify all previously undetected k-FACCs, a set of binary classifiers is trained that characterizes each k-FACC from various dimensions, including time, network topology, traffic distribution, and board/alarm attributes. Eventually, an integer linear programming (ILP) problem is formulated to extract the most likely FACT(s) from those k-FACCs. Extensive case studies demonstrate the superior results of FACT-FL in terms of metrics evaluating the identified failed boards and root alarms. We also analyze its performance under different maximum order values of k and environmental changes, including failure scenarios, network topologies, traffic distributions, and noise alarms. | 10.1109/TNSM.2024.3405901 |
Jianhua Wang, Xiaolin Chang, Jelena Mišić, Vojislav B. Mišić, Lin Li, Yingying Yao | CRS-FL: Conditional Random Sampling for Communication-Efficient and Privacy-Preserving Federated Learning | 2024 | Early Access | Privacy Computational modeling Standards Servers Training Data models Internet of Things Federated learning differential privacy Poisson sampling communication efficiency | Federated Learning (FL), a privacy-oriented distributed ML paradigm, is gaining great interest in the Internet of Things because of its capability to protect participants’ data privacy. Studies have been conducted to address the challenges of communication efficiency and privacy preservation that exist in standard FL. However, they cannot trade off communication efficiency against model accuracy while guaranteeing privacy. This paper proposes a Conditional Random Sampling (CRS) method and implements it in standard FL (CRS-FL) to tackle the above challenges. CRS explores a Poisson-sampling-based stochastic coefficient to obtain zero gradients with higher probability in an unbiased manner, thereby decreasing the communication overhead effectively without degrading model accuracy. Moreover, we theoretically derive the relaxed Local Differential Privacy (LDP) guarantee conditions of CRS. Extensive experimental results indicate that (1) for communication efficiency, CRS-FL outperforms existing methods in accuracy per transmitted byte, without loss of model accuracy, at sampling ratios (sampling size / model size) above 7%; (2) for privacy preservation, CRS-FL achieves no accuracy reduction compared with LDP baselines while retaining its efficiency, and even exceeds them in model accuracy under a wider range of sampling ratios. | 10.1109/TNSM.2024.3405004 |
Luigi De Simone, Mario Di Mauro, Roberto Natella, Fabio Postiglione | Performance and Availability Challenges in Designing Resilient 5G Architectures | 2024 | Early Access | 5G mobile communication Stochastic processes Resilience Maintenance engineering Termination of employment Software Hardware 5G networks Performance and Availability analyses of 5G networks Resilience of 5G networks | This work proposes a stochastic characterization of resilient 5G architectures, where attributes such as performance and availability play a crucial role. As regards performance, we focus on the delay associated with the Packet Data Unit session establishment, a 5G procedure recognized as critical for its impact on the Quality of Service and Experience of end-users. To formally characterize this aspect, we employ the non-product-form queueing networks framework where: i) main nodes of a 5G architecture have been realistically modeled as G/G/m queues which do not admit analytical solutions; ii) the decomposition method useful to catch subtle quantities involved in the chain of 5G interconnected nodes has been conveniently customized. The results of performance characterization constitute the input of the availability modeling, where we design a hierarchical scheme to characterize the probabilistic failure/repair behavior of 5G nodes combining two formalisms: i) the Reliability Block Diagrams, useful to capture the high-level interconnections between nodes; ii) the Stochastic Reward Networks to model the internal structure of each node. The final result is an optimal resilient 5G setting that fulfills both a performance constraint (e.g., a temporal threshold) and an availability constraint (e.g., the so-called five nines) at the minimum cost, namely, with the smallest number of redundant elements. The theoretical part is complemented by an empirical assessment carried out through Open5GS, a 5G testbed that we have deployed to realistically estimate main performance and availability metrics. | 10.1109/TNSM.2024.3404560 |
Xuefeng Yan, Nelson L. S. da Fonseca, Zuqing Zhu | Self-Adaptive SRv6-INT-Driven System Adjustment in Runtime for Reliable Service Function Chaining | 2024 | Early Access | Switches Monitoring Routing Servers Resource management Runtime Quality of service IPv6 segment routing (SRv6) In-band network telemetry (INT) Programmable data plane (PDP) Service function chain (SFC) Resource management Closed-loop control | Self-adaptation of service function chains (SFCs) has been considered an important attribute for ensuring the resource efficiency and reliability of network function virtualization (NFV) systems. In this work, we leverage the idea of integrating segment routing over IPv6 (SRv6) and in-band network telemetry (INT) seamlessly to realize SRv6-INT, and explore the mutual benefits of SRv6 and INT for achieving self-adaptive SFC deployment. Specifically, we design and experimentally demonstrate a self-adaptive SRv6-INT-driven SFC deployment system that orchestrates network and IT resources in a timely manner to adapt to bursty traffic and network changes. We first enhance our previous design of SRv6-INT to better suit self-adaptive SFC deployment, and then propose an IT resource management technique for Kubernetes (K8s) to accomplish resource allocation and contention resolution without offline virtual network function (vNF) profiling. Next, a closed-loop system is designed to manage SFCs both locally and globally. Locally, servers make decisions based on the INT data encoded in packets to vertically scale the vNFs running on them. Globally, the control plane oversees SFC deployment across the whole network to adjust the number and placement of vNFs and the traffic routing through them. Finally, we prototype our proposal with commodity servers and hardware PDP switches based on Tofino ASICs, and experimentally demonstrate its effectiveness. | 10.1109/TNSM.2024.3404461 |
Miao Ye, Chenwei Zhao, Peng Wen, Yong Wang, Xiaoli Wang, Hongbing Qiu | DHRL-FNMR: An Intelligent Multicast Routing Approach Based on Deep Hierarchical Reinforcement Learning in SDN | 2024 | Early Access | Multicast algorithms Routing Aerospace electronics Heuristic algorithms Signal processing algorithms Bandwidth Reinforcement learning Deep Hierarchical Reinforcement Learning Multicast Tree Deep Reinforcement Learning Software-Defined Networking | The multicast routing problem in software-defined networking (SDN) is NP-hard. Existing solution methods based on deep reinforcement learning suffer from branch redundancy, an excessively large action space, and slow convergence of the intelligent models. In this paper, an intelligent multicast routing algorithm based on deep hierarchical reinforcement learning is proposed to circumvent these problems. First, the optimal multicast tree problem is decomposed into two subproblems: fork node selection and the construction of an optimal path from a fork node to a destination node. Second, a multichannel matrix is designed as the state space for the internal and external controllers of hierarchical reinforcement learning, based on the global network-aware information characteristics of SDN. Then, different action spaces are designed for the upper and lower subproblems, four action selection policies are designed for constructing multicast paths, and different reward policies are designed at different levels. Finally, a series of experiments shows that the designed algorithm not only searches the multicast tree efficiently but also converges faster and without redundant branches, with better performance in terms of bandwidth, delay, and packet loss rate than current mainstream algorithms. The code for DHRL-FNMR is open and available at https://github.com/GuetYe/DHRL-FNMR. | 10.1109/TNSM.2024.3402275 |
Francesco Betti Sorbelli, Punyasha Chatterjee, Federico Corò, Sajjad Ghobadi, Lorenzo Palazzetti, Cristina M. Pinotti | A Novel Graph-Based Multi-Layer Framework for Managing Drone BVLoS Operations | 2024 | Early Access | Drones Path planning Planning Risk management Autonomous aerial vehicles Visualization Vegetation Drones BVLoS Connectivity Ground risk | Drones have become increasingly popular in a variety of fields, including agriculture, emergency response, and package delivery. However, most drone operations are currently limited to within Visual Line of Sight (VLoS) due to safety concerns. Flying drones Beyond Visual Line of Sight (BVLoS) opens up new challenges and opportunities, but also requires new technologies and regulatory frameworks to ensure that the drone is constantly under the control of a remote operator. In this work, we propose a novel graph-based multi-layer framework that closely resembles real-world scenarios and challenges in order to plan drone operations. Our framework includes layers of constraints, such as ground risk, cellular network infrastructure, and obstacles, at different heights. From the multi-layer structure, a graph is constructed whose edges are weighted with a dependability score that takes the layer information into account, allowing efficient path planning of missions using algorithms such as Dijkstra’s. Since the built graph can be very large, we also propose lighter graph-based corridors that consider only a limited portion of the original graph. Through extensive experimental evaluation on a real dataset, we demonstrate the effectiveness of our framework in solving the resulting mission path-planning problem, which can be handled efficiently by applying Dijkstra’s algorithm. | 10.1109/TNSM.2024.3401175 |
Peng Liu, Youquan Xian, Chuanjian Yao, Peng Wang, Li-e Wang, Xianxian Li | A Trustworthy and Consistent Blockchain Oracle Scheme for Industrial Internet of Things | 2024 | Early Access | Blockchains Security Industrial Internet of Things Contracts Quality of service Task analysis Soft sensors IIoT Blockchain Oracle | A blockchain provides decentralization and trustlessness for the Industrial Internet of Things (IIoT), which expands the application scenarios of IIoT. To address the problem that blockchains cannot actively obtain off-chain data, the blockchain oracle has been proposed as a bridge between the blockchain and external data. However, existing oracle schemes struggle with the low quality of service caused by frequent data changes and heterogeneous devices in the IIoT, and current oracle node selection schemes find it difficult to balance security and quality of service. To tackle these problems, this paper proposes a secure and reliable oracle scheme that can obtain high-quality off-chain data. Specifically, we first design an oracle node selection algorithm based on a Verifiable Random Function (VRF) and a reputation mechanism to securely select high-quality nodes. Second, we propose a data filtering algorithm based on a sliding window to further improve the consistency of the collected data. We verify the security of the proposed scheme through a security analysis. The experimental results show that the proposed scheme can effectively select high-quality nodes, reduce data differences, and improve the quality of service of the oracle. In an oracle network where 10% of nodes are malicious, the data accuracy rate is increased by about 4%, and the data variance is reduced by about 45% on average. | 10.1109/TNSM.2024.3399837 |
B. Naresh Kumar Reddy, Md. Zia Ur Rahman, Aime Lay-Ekuakille | Enhancing Reliability and Energy Efficiency in Many-Core Processors Through Fault-Tolerant Network-On-Chip | 2024 | Early Access | Task analysis Fault tolerant systems Fault tolerance Topology Runtime Network topology Benchmark testing Network-on-Chip Core Task mapping Communication Energy Performance FPGA board | This article proposes fault-tolerant task mapping (FTTM) on many-core processors to enhance system performance and reduce communication energy. The proposed algorithm maps tasks onto a 2-D mesh network-on-chip (NoC) and a modified NoC (MNoC) platform, focusing primarily on permanent faults. In the case of a permanent fault within a mapped core, the algorithm also provides a spare core placement strategy that allocates the spare core based on communication energy considerations. The proposed task mapping algorithm was evaluated on various benchmarks, including multimedia and synthetic benchmarks, and the results were compared to those of a 2-D mesh NoC and three related algorithms, all under the same task graph and NoC size. The simulation results showed that the proposed mapping algorithm on the modified NoC platform improves performance and reduces communication energy compared to the 2-D mesh NoC and the other three algorithms. To validate the proposed algorithm on the modified NoC platform, a Field Programmable Gate Array (FPGA) was used to measure performance metrics such as application runtime, area, and on-chip power consumption in both faulty and non-faulty conditions. The hardware results indicated significant improvements when comparing the proposed FTTM on MNoC and 2-D NoC with existing approaches. | 10.1109/TNSM.2024.3394886 |
David Tipper, Amy Babay, Balaji Palanisamy, Prashant Krishnamurthy | Network Connectivity Resilience in Next Generation Backhaul Networks: Challenges and Future Opportunities | 2024 | Early Access | Wireless communication Resilience Hardware Ultra reliable low latency communication FCC Cellular networks Power system reliability Cellular Networks Outages Availability | Next generation cellular networks are expected to enable a wide range of new applications, increasing societal dependence on the network infrastructure and requiring a higher level of resilience than current networks. In this paper, we consider the challenges network operators face in providing end-to-end connections across the backhaul part of the cellular network in the face of equipment failures and power outages. In particular, we discuss the impact on resilience of the move to commodity hardware, the disaggregation of the radio access network, edge computing, network densification, and increased electric power requirements. Techniques and research directions for overcoming the challenges are presented, including thinking beyond single-operator methods, such as cooperative operator techniques, and extending resilient overlays to the wireless edge. | 10.1109/TNSM.2024.3392857 |
Fahime Khoramnejad, Aisha Syed, W. Sean Kennedy, Melike Erol-Kantarci | Energy and Delay Aware General Task Dependent Offloading in UAV-Aided Smart Farms | 2024 | Early Access | Task analysis Internet of Things Autonomous aerial vehicles Optimization Smart agriculture Delays Servers Edge computing compound-action reinforcement learning graph neural networks offloading task dependency | Edge computing offers a promising solution to enhance network reliability. In this study, we investigate the integration of mobile edge computing (MEC) technology and unmanned aerial vehicles (UAVs) within the context of smart agriculture. Smart agriculture relies on resource-constrained Internet of Things (IoT) devices for local environmental monitoring and data collection. These IoT devices send the collected data to UAVs for analysis. A central theme of this work is the focus on the applications generated by each UAV and the consideration of their topology to derive our optimization algorithm. To tackle these challenges, we propose harnessing the computational and power resources of UAVs and MEC at the network’s edge to offload and execute resource-intensive tasks in UAV-MEC-assisted networks. Our research focuses on the joint optimization of power allocation and task offloading in these wireless networks. Central to our investigation is the problem of minimizing the energy-time cost (ETC) for the UAVs, considering the interdependencies among tasks. To address this complex problem efficiently, we introduce graph convolutional neural networks (GCNs) and reinforcement learning (RL)-based techniques. We employ a directed acyclic graph (DAG) to model task interdependencies, with GCNs characterizing the DAG. Our approach incorporates an actor-critic method with embedding layers, trained using the compound-action actor-critic (CA2C) algorithm. Our findings reveal a significant improvement in minimizing both delay and energy consumption, with a 27% reduction in delay and a 45% reduction in the energy consumed for executing complex, interdependent tasks. | 10.1109/TNSM.2024.3391664 |
Ru Huo, Xiangfeng Cheng, Chuang Sun, Tao Huang | A Cluster-Based Data Transmission Strategy for Blockchain Network in the Industrial Internet of Things | 2024 | Early Access | Blockchains Industrial Internet of Things Edge computing Data communication Computer architecture Topology Cloud computing Industrial Internet of Things (IIoT) blockchain edge computing clustering data transmission strategy | The proliferation of devices and data in the Industrial Internet of Things (IIoT) has rendered the traditional centralized cloud model unable to meet the stringent wide-scale and low-latency requirements of these IIoT scenarios. As an emerging technology, edge computing enables real-time processing and analysis on devices situated closer to the data source while reducing bandwidth requirements. Blockchain, being decentralized, can enhance data security. Therefore, edge computing and blockchain are integrated in the IIoT to reduce latency and improve security. However, the inefficient data transmission of blockchain increases transmission latency in the IIoT. To address this issue, we propose a cluster-based data transmission strategy (CDTS) for the blockchain network. First, an improved weighted label propagation algorithm (WLPA) is proposed for clustering blockchain nodes. Subsequently, a spanning tree topology construction (STTC) is designed to simplify the blockchain network topology based on the node clustering results. Additionally, leveraging the clustered nodes and tree topology, we propose a data transmission strategy to speed up data transmission. Simulation experiments show that CDTS effectively reduces data transmission time and better supports large-scale IIoT scenarios. | 10.1109/TNSM.2024.3387120 |
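
The ENIDS record (10.1109/TNSM.2024.3414305) describes a two-layer stacking ensemble: CNN, LSTM, and GRU base learners whose class probabilities feed a DNN meta-learner. Below is a minimal sketch of that pattern with synthetic data and invented shapes and hyperparameters; the authors' exact architecture, pre-processing, and training protocol are not given in the abstract, and a careful version would train the meta-learner on out-of-fold predictions rather than in-sample ones.

```python
# Toy two-layer stacking ensemble in the spirit of ENIDS (not the authors' code).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20, 1)).astype("float32")  # 20 flow features as a sequence
y = rng.integers(0, 2, size=1000)                     # placeholder attack/benign labels

def base(kind):
    # One small base learner per architecture named in the abstract.
    core = {"cnn": layers.Conv1D(16, 3, activation="relu"),
            "lstm": layers.LSTM(16),
            "gru": layers.GRU(16)}[kind]
    m = models.Sequential([layers.Input((20, 1)), core])
    if kind == "cnn":
        m.add(layers.GlobalMaxPooling1D())            # flatten the Conv1D output
    m.add(layers.Dense(2, activation="softmax"))
    m.compile("adam", "sparse_categorical_crossentropy")
    return m

bases = [base(k) for k in ("cnn", "lstm", "gru")]
for m in bases:
    m.fit(X, y, epochs=2, verbose=0)

# Meta-learner (DNN) consumes the stacked class probabilities of the base learners.
Z = np.concatenate([m.predict(X, verbose=0) for m in bases], axis=1)
meta = models.Sequential([layers.Input((Z.shape[1],)),
                          layers.Dense(16, activation="relu"),
                          layers.Dense(2, activation="softmax")])
meta.compile("adam", "sparse_categorical_crossentropy")
meta.fit(Z, y, epochs=2, verbose=0)
print("ensemble prediction:", meta.predict(Z[:1], verbose=0))
```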
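
The LDDoS-mitigation record (10.1109/TNSM.2024.3413685) casts MTD deployment as an MDP solved with a DQN. The toy below substitutes tabular Q-learning for the DQN to keep the sketch short; the states, actions, rewards, and transition probabilities are all invented to illustrate only the security-versus-overhead reward shaping.

```python
# Toy Q-learning for selecting an MTD action under varying attack intensity.
# All numbers are hypothetical; the paper uses a DQN over a richer MDP.
import numpy as np

rng = np.random.default_rng(1)
N_STATES, ACTIONS = 4, ["no-op", "ip-shuffle", "restart", "migrate"]
SEC_GAIN = np.array([0.0, 0.6, 0.8, 0.9])   # assumed mitigation strength per action
OVERHEAD = np.array([0.0, 0.1, 0.3, 0.5])   # assumed resource cost per action
Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.9, 0.2

state = 0  # attack-intensity level 0..3
for _ in range(5000):
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(Q[state].argmax())
    # Reward: security gain scaled by attack intensity, minus resource overhead.
    reward = SEC_GAIN[a] * state - OVERHEAD[a]
    # Stronger MTD actions are more likely to lower the attack intensity.
    if rng.random() < SEC_GAIN[a]:
        next_state = max(0, state - 1)
    else:
        next_state = min(3, state + int(rng.random() < 0.3))
    Q[state, a] += alpha * (reward + gamma * Q[next_state].max() - Q[state, a])
    state = next_state

for s in range(N_STATES):
    print(f"attack level {s}: best action = {ACTIONS[int(Q[s].argmax())]}")
```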
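
The lateral-movement record (10.1109/TNSM.2024.3414267) builds an authentication graph, embeds nodes with DeepWalk, and appends temporal features before classification. A compact sketch of that pipeline on a synthetic homogeneous graph follows; the paper's graph is heterogeneous and its labels and temporal features are richer.

```python
# DeepWalk-style pipeline: random walks -> Word2Vec embeddings ->
# per-event features (src/dst embeddings + hour-of-day) -> classifier.
import random
import networkx as nx
import numpy as np
from gensim.models import Word2Vec
from sklearn.ensemble import RandomForestClassifier

random.seed(0)
rng = np.random.default_rng(0)
G = nx.gnm_random_graph(50, 150, seed=0)  # stand-in for a host/user auth graph

def walks(graph, num=10, length=8):
    out = []
    for _ in range(num):
        for n in graph.nodes:
            w, cur = [str(n)], n
            for _ in range(length - 1):
                nbrs = list(graph.neighbors(cur))
                if not nbrs:
                    break
                cur = random.choice(nbrs)
                w.append(str(cur))
            out.append(w)
    return out

emb = Word2Vec(walks(G), vector_size=32, window=4, min_count=1, sg=1, seed=0)

# One feature vector per authentication event: (source, destination, hour).
events = [(rng.integers(50), rng.integers(50), rng.integers(24)) for _ in range(500)]
X = np.array([np.concatenate([emb.wv[str(s)], emb.wv[str(d)], [h]])
              for s, d, h in events])
y = rng.integers(0, 2, size=len(events))  # placeholder malicious/benign labels

clf = RandomForestClassifier(random_state=0).fit(X, y)
print("train accuracy:", clf.score(X, y))
```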
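
The microwave-failure record (10.1109/TNSM.2024.3406934) uses Batch Uncertainty-based Active Learning to steer annotation. Below is a minimal entropy-based variant, assuming synthetic alarm features; the paper's batch criterion may include additional terms such as diversity across concurrent labelers.

```python
# Batch active learning sketch: rank unlabeled samples by predictive entropy
# and hand the top-k batch to domain-expert annotators.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(100, 8)), rng.integers(0, 3, 100)  # labeled seed set
X_pool = rng.normal(size=(1000, 8))                                # unlabeled alarms

clf = RandomForestClassifier(random_state=0).fit(X_lab, y_lab)
proba = clf.predict_proba(X_pool)
entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)

k = 20
query_idx = np.argsort(entropy)[-k:]  # the k most uncertain samples
print("indices to send to labelers:", query_idx)
```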
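
The CRS-FL record (10.1109/TNSM.2024.3405004) relies on Poisson sampling to zero out gradient coordinates without introducing bias. The sketch below shows the basic mechanics, keeping each coordinate with probability p and rescaling the survivors by 1/p so the expectation equals the original gradient; the paper's stochastic coefficient construction and LDP analysis go beyond this.

```python
# Unbiased Poisson-sampling gradient sparsifier: E[sparse_g] = g, while most
# entries become zero and need not be transmitted.
import numpy as np

def crs_sparsify(grad: np.ndarray, p: float, rng: np.random.Generator) -> np.ndarray:
    """Keep each entry independently w.p. p, rescale by 1/p for unbiasedness."""
    mask = rng.random(grad.shape) < p
    return np.where(mask, grad / p, 0.0)

rng = np.random.default_rng(0)
g = rng.normal(size=10_000)
p = 0.07  # ~7% sampling ratio, the regime mentioned in the abstract
est = np.mean([crs_sparsify(g, p, rng) for _ in range(200)], axis=0)
print("nonzero fraction:", (crs_sparsify(g, p, rng) != 0).mean())
print("max abs deviation of averaged estimate:", np.abs(est - g).max())
```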
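
The resilient-5G record (10.1109/TNSM.2024.3404560) models 5G nodes as G/G/m queues that admit no closed-form solution and applies a customized decomposition method. As a point of reference only, and not necessarily the authors' customization, the standard Allen-Cunneen approximation estimates the G/G/m mean wait from the M/M/m (Erlang C) result.

```python
# Allen-Cunneen approximation: Wq(G/G/m) ~= Wq(M/M/m) * (Ca^2 + Cs^2) / 2.
import math

def erlang_c(m: int, a: float) -> float:
    """Probability of waiting in an M/M/m queue with offered load a = lam/mu."""
    rho = a / m
    s = sum(a**k / math.factorial(k) for k in range(m))
    top = a**m / math.factorial(m) / (1 - rho)
    return top / (s + top)

def ggm_wq(lam: float, mu: float, m: int, ca2: float, cs2: float) -> float:
    """Approximate mean waiting time of a G/G/m queue (Allen-Cunneen)."""
    a = lam / mu
    wq_mmm = erlang_c(m, a) / (m * mu - lam)
    return wq_mmm * (ca2 + cs2) / 2.0

# Example: a node serving 90 req/s with 2 servers at 50 req/s each,
# bursty arrivals (Ca^2 = 1.5) and fairly regular service (Cs^2 = 0.5).
print(f"mean wait ~ {ggm_wq(90, 50, 2, 1.5, 0.5) * 1e3:.2f} ms")
```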
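
The BVLoS record (10.1109/TNSM.2024.3401175) weights flight-graph edges with a dependability score derived from per-layer information and plans with Dijkstra's algorithm. A toy single-altitude version follows; the layer values and the combining rule are invented for illustration.

```python
# Multi-layer path planning sketch: combine layer scores into an edge
# dependability, convert to a cost, and run Dijkstra.
import networkx as nx

G = nx.grid_2d_graph(5, 5)  # waypoints on a 5x5 lattice at one altitude
risk = {n: 0.1 + 0.05 * (n[0] == 2) for n in G}      # hypothetical ground risk
coverage = {n: 0.9 - 0.4 * (n[1] == 4) for n in G}   # hypothetical cellular coverage

for u, v in G.edges:
    # Higher dependability = better coverage and lower risk along the edge.
    dep = (coverage[u] + coverage[v]) / 2 * (1 - (risk[u] + risk[v]) / 2)
    G[u][v]["cost"] = 1.0 / max(dep, 1e-6)           # Dijkstra minimizes cost

path = nx.dijkstra_path(G, (0, 0), (4, 4), weight="cost")
print("planned BVLoS path:", path)
```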
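
The blockchain-oracle record (10.1109/TNSM.2024.3399837) filters collected off-chain values with a sliding window to improve consistency. The abstract does not specify the algorithm, so the sketch below is a hedged guess using a median/MAD rule; the window size and threshold are arbitrary.

```python
# Sliding-window consistency filter: drop values far from the window median.
import numpy as np

def sliding_window_filter(values, window=5, k=2.0):
    values = np.asarray(values, dtype=float)
    kept = []
    for i in range(len(values)):
        w = values[max(0, i - window + 1): i + 1]
        med = np.median(w)
        mad = np.median(np.abs(w - med)) + 1e-9  # robust spread estimate
        if len(w) < 3 or abs(values[i] - med) <= k * mad:
            kept.append(values[i])
    return np.array(kept)

feed = [100.1, 100.3, 99.9, 250.0, 100.2, 100.0, 100.4]  # one outlier
print("filtered feed:", sliding_window_filter(feed))
```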
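
The CDTS record (10.1109/TNSM.2024.3387120) clusters blockchain nodes with an improved weighted label propagation algorithm (WLPA). The textbook weighted label propagation core that such a variant builds on looks like the following; the edge weights and iteration count here are invented.

```python
# Weighted label propagation: each node repeatedly adopts the label with the
# largest total incident edge weight among its neighbors.
import random
from collections import defaultdict
import networkx as nx

random.seed(0)
G = nx.gnm_random_graph(30, 80, seed=0)
for u, v in G.edges:
    G[u][v]["w"] = random.uniform(0.1, 1.0)  # e.g., link bandwidth or closeness

labels = {n: n for n in G}                   # start: every node is its own cluster
for _ in range(10):
    nodes = list(G)
    random.shuffle(nodes)
    for n in nodes:
        score = defaultdict(float)
        for nb in G.neighbors(n):
            score[labels[nb]] += G[n][nb]["w"]
        if score:
            labels[n] = max(score, key=score.get)

clusters = defaultdict(list)
for n, lab in labels.items():
    clusters[lab].append(n)
print("cluster sizes:", sorted(map(len, clusters.values()), reverse=True))
```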